Childhood Obesity in America reviewed in Psychology Today

Popular magazine Psychology Today has reviewed my book Childhood Obesity in America: Biography of an Epidemic in its “This is America” blog. The reviewer, Glenn C. Altschuler, Professor of American Studies at Cornell University, describes the book as “a fascinating survey of popular perceptions and changing attitudes toward diagnosis and treatment” and “filled with thought-provoking insights about changing attitudes toward causes and cures”. You can read the full review here. Or you can buy a copy of the book and see for yourself what all the fuss is about…

BUY THE BOOK

Childhood obesity epidemic – an end in sight?

In 1963, doctors and nurses from the National Center for Health Statistics set out around the country in specially fitted-out Winnebagos. They were going to take a tape measure to American children’s health. That study—the National Health and Nutrition Examination Survey—is still going today. The survey shows that childhood obesity has steadily risen, more than tripling in the fifty years since those doctors of the 1960s first hit the road with rulers and scales. The increases were even bigger in African American and Hispanic children. But last week researchers from the Center and from the Public Health Service reported a new development: for the first time since the study began, rates of obesity had dropped in 2- to 5-year-olds. In young children, obesity is still more common than it was back in the 1960s, but it is about 40% lower than its peak in the early 2000s. In older children, the inexorable rise of the past fifty years seems to have plateaued, hovering at around 17%. This is cause for cautious—very cautious—optimism.

It has been true for a long time that American kids are big, and have been getting bigger. In 1877, a Boston physician named Henry Bowditch carried out the first significant study of American children’s heights and weights, having teachers in Boston schools measure their pupils. With no calculators or computers in the nineteenth century, Bowditch got Beantown’s accountants to crunch the numbers. Even then, American children were larger—taller and especially heavier—than children in Europe. At a time when undernourishment was the major childhood nutritional problem, the fact that American children were big was something to be proud of. Bowditch didn’t consider that children could be “too big”. Tall and heavy, American children were strapping specimens compared with their spindly European peers. The American way of life, with its opportunities, its egalitarianism, its freedoms, was being written onto children’s bodies.

Over the next century, American kids kept getting bigger. By the time the National Health and Nutrition Examination Survey was launched, an eleven-year-old boy was about 4 inches taller and 16.5 pounds heavier than a boy of the same age in Bowditch’s time. This process is called “secular change” and is generally attributed to better nutrition, health, and housing allowing children to achieve their genetic potential for growth. The increase in children’s size up to about the 1960s was thought to be a sign of good things. But after about 1970, with children still getting bigger, and especially getting heavier faster than they were getting taller, there was a growing sense that a tipping point had been passed. Children were becoming “too big”, and bulk brought potential health problems.

The big increase in childhood obesity since the 1960s is also a sign of how the American way of life is still shaping children’s bodies, but in ways that we no longer think are desirable. Aspects of modern life—car driving, computers, television, working families, unsafe neighborhoods, and cheap, calorie-dense food pushed by massive marketing campaigns—have all been implicated in the childhood obesity epidemic. Ironically, some of the major culprits fuelling obesity in adults and children are things that we enjoy and have worked hard to achieve. It’s not that children have changed in any essential way to cause the increases in obesity that the National Health and Nutrition Examination Survey has found; rather, the American way of life has become steadily more “obesogenic”.

Exactly why the rate of obesity should have dropped in young children in recent years is not entirely clear. But, because the drop has happened in young children, it seems likely that the change is due to families changing their habits. Good job, Mom and Dad. This, of course, is much easier to do when children are young. It gets harder once children start school and go out into the world. Spending more time out of the house, kids have to deal with the environment they find there, and that environment seductively encourages obesity. The federal government’s program to address childhood obesity, Let’s Move, has put a lot of emphasis on arming children against this environment. First Lady Michelle Obama is routinely photographed doing yoga with grade-schoolers, or picking beans in the White House vegetable patch. She heads up the charge to get children to resist the lures of their environment and adopt healthy habits from the get-go.

Let’s Move is heavy on “encouraging”, “educating” and “empowering” children to take responsibility for their eating and exercise habits. It’s a tough ask. Adults struggle to make good choices in their eating and exercise. All too often convenience, price, and advertising favor bad choices. Asking a kid to deal with all that is even harder. So it’s no surprise that the NCHS data show that obesity has not dropped in older children in the thick of this obesogenic mayhem. Programs like Let’s Move may have helped halt the rise of obesity in older children, but have yet to make inroads on current levels. Actions to tackle the obesogenic environment have been politically harder to implement, but there have been some notable efforts such as selling healthier beverages in school canteens.

With luck, the decline in obesity in 2- to 5-year-olds will stick. If this cohort maintains these lower levels of obesity as they get older, rates of childhood obesity will drop across the board. Let’s Move will have achieved its aim of “solving the challenge of childhood obesity within a generation”. But a generation is a while to wait, and is of little use to children now in the 6-19 age bracket, of whom about 1 in 5 are obese. There is more that can be done to make healthy choices the convenient, cheap choices. Children shouldn’t have to be always on the defensive, and they shouldn’t have to become experts on diet and exercise at a tender age. These latest results may be the beginnings of change, but the childhood obesity epidemic isn’t history yet.

Interested? Want more? Check out my new book Childhood Obesity In America: Biography of an Epidemic, available from Harvard University Press and Amazon 

Fighting fit

Well, how are we all this morning? Cold and flu season is right round the corner, but fear not! The Doctor is now In.

Got something a little different for you with this post. Usually, Dr Then writes about the medical history behind current events, but today we’re palpitating the past and peering down the throat of yesteryear. This is a piece I wrote for the Guardian Newspaper’s Science Writing Prize, run in conjunction with the Wellcome Trust. Came runner-up. (Yay for me!)  Anyhoo, you can read my little offering either on the Guardian’s own website here, or just scroll on down for your very own private viewing….

Till next time, stay well,

Dr Then

Fighting fit: how dietitians tested if Britain would be starved into defeat

December, 1939. Britain had been at war with Germany for three months. U-boat attacks threatened incoming food shipments. And, armed with bicycles and walking boots, a group of medical researchers headed to the Lake District to conduct a secret study: if Britain was totally cut off from food imports, would starvation hand victory to Germany?

This was an important medical question. Could the public stay fighting fit if food was rationed to what Britain alone could produce? If the ration was too low in protein, people would get “famine oedema” (swelling from fluid build-up). Before the war, Britain imported half its meat, more than half its cheese and a third of its eggs. Much of the protein in the British diet would therefore be lost if a shipping blockade succeeded. Anaemia (insufficient iron) and scurvy (lack of vitamin C) could also become a problem.

The rationed diet also had to provide enough fuel for the long hours in factories and farms needed for the war effort. If people were too weakened by lack of food, infectious diseases would pick them off, just as surely as bullets. Disease played a key part in deciding who won wars. Famously, Napoleon lost his Russian Campaign in 1812 after his army was decimated by typhus and dysentery. In total war, it wasn’t just the army who had to stay well to win. The home front also had to stay healthy. Having a sufficient diet was a medical issue that went to the heart of the war effort.

The researchers investigating whether Britain could win the food fight were Cambridge University physiologists Elsie Widdowson and Robert McCance. When war broke out, Elsie and Mac felt they could use their expertise in food and nutrition to answer the question: if German U-boats crippled food imports, would Britain be dieted into defeat?

Widdowson and McCance decided to experiment on themselves. Four students and Mac’s mother-in-law also volunteered. They would pretend that a German shipping blockade had curtailed imports and that they had to eat only British food. Everyone would get equal shares of the available produce. To work out what this might be, Elsie and Mac sought advice from Frank Engledow, a professor of agriculture who later helped set wartime food policy. British food production in 1938 became the basis for the experimental diet: 1 egg a week (1/3 of the pre-war consumption); 1/4 pint of milk a day (half the pre-war consumption); a pound of meat and 4oz of fish per week, assuming trawlers would be commandeered for patrols. No butter and just 4oz of margarine. But they could eat as much potato, vegetables, and wholemeal bread as they wanted. The eight guinea pigs would follow this diet for three months.

Happily, the gloomy spectres of famine oedema, scurvy, and anaemia did not arise. The guinea pigs felt fit and well on the ration and could do their usual work. But there were two main difficulties. One was that meals took a long time to eat. Wholemeal bread without butter took ages to chew. The sheer quantity of potato needed to make up the calories also took time to get through. All the fibre in the diet caused 250% bigger poos. They measured it.

The other problem with eating all that starch was the amount of flatus—gas—it produced. The consequences could be, in Widdowson and McCance’s description, “remarkable.”

To simulate the hardest physical work that might be expected of people during the war, some of the team headed to the Lake District for an intensive fortnight of walking, cycling and mountaineering. It was tough going with snow and ice on the paths. But other than a sore knee for Elsie, the team did well enough that a professional mountaineer rated their performance “distinctly good”. And this was on the diet that might be the lot for all Britain if shipping imports failed.

“Mac” McCance, Elsie Widdowson and a student volunteer testing the ration in the Lake District, December 1939. The student, Andrew Huxley, went on to win a Nobel Prize in medicine in 1963.

In 1940, the British government rationed bacon, butter and sugar, just as the team finished their Lake District trial. Their report and its conclusion—that Britain could stay fighting fit even if all food imports were lost—was circulated to government departments. But the study was kept secret until after the war. As more foods were rationed, the experiment provided assurance that home front health was secure. Had the conclusion been different, Britain might have had to decide whether to distribute the limited food equitably—and suffer the consequences of widely degraded health—or give more food to the workers most important to the war effort. Elsie and Mac’s experiment showed this horrible reckoning was not necessary: Britain could afford to be fair and still be fighting fit. As it turned out, the experiment had been too severe. Rationing was always more generous with butter, sugar, meat, and fish than Elsie and Mac’s diet. Convoys from America were able to run the U-boat blockade and flesh out British food supplies.

Rationing during WWII caused problems—it was hard to cook inventively with limited ingredients, and queuing for supplies burdened housewives. But Elsie and Mac’s study showed that scurvy and starvation would not add to that burden.

Born to Rule: Royal births in history

Born to Rule

This summer, Catherine, Duchess of Cambridge (Kate, to us rabble-scrabble bloggers) is due to give birth to a child that will be heir to the British throne. Poor mite. The due date is, of course, a carefully guarded secret, but I am reliably informed by the Journal of Popular Culture – otherwise known as People Magazine – that it is the second week of July. The press is, of course, going wild about the whole thing. Kate’s maternity fashions! Kate’s morning sickness! Boy or girl?! I therefore thought I’d add to the hype with a special blog on interesting royal pregnancies of the past. Replete with all the fun historical factoids you might want to zing out, should you be invited to the royal baby shower.

That codpiece is not fooling anyone

Henry VIII – fertility struggles

Henry VIII might be best known for his truly heinous marital relations with his six wives (“Divorced, beheaded, died; divorced, beheaded, survived”, as the rhyme goes) but his efforts to beget an heir were almost as dramatic. For Mr Potent, as he liked to think of himself (witness the VAST codpiece in his famous portraits), actually getting a healthy child was a massive struggle. His first three wives—the three with whom he was mostly sexually active—had between them at least six miscarriages, five children who were stillborn or died within weeks of birth, and just three children who lived beyond early childhood, all of whom took a turn as reigning monarch. (Mary I, Elizabeth I, and Edward VI, a sickly child who died at age 15 and whose mother, Jane Seymour, died bearing him.)

Historians have argued about why Henry VIII had such struggles with getting children. (Even adding in mistresses, Henry still wasn’t the super-stud his portraits presented him as. He had one acknowledged bastard son who lived to adulthood although there were also other suspected illegitimate children.) The older view was that Henry had contracted syphilis, which would have affected his fertility, as well as contributing to all the miscarriages his wives had. But this idea has largely been debunked, mainly because Henry didn’t show any syphilitic symptoms other than the fertility thing.

A more recent theory is that Henry may have had a particular blood type, called Kell positive, and his unlucky wives a different blood group, Kell negative. A Kell negative woman can produce a healthy baby with a Kell positive man the first time she becomes pregnant by him, but that first pregnancy causes her to develop antibodies that will attack any subsequent Kell positive fetus. A fetus of a Kell positive man and a Kell negative woman has a 50-50 chance of being Kell positive, so one would expect about half of later pregnancies to end in miscarriage. This blood group explanation accounts for the number of late-term miscarriages that Katherine of Aragon especially, and Anne Boleyn, suffered, but it does rely on the assumption that Mary I (who was not Katherine of Aragon’s first pregnancy) “must” have been Kell negative. Still, it’s a theory. The monarch’s fertility was such an issue—politically, religiously, socially—that even 500 years later we are still speculating as to reasons for this monarch’s droopy performance.

Warming pan

Fit a pretender to the throne in here?

James II and Mary of Modena – legitimacy

There were 42 eminent public figures in the birthing room when James II’s wife, Mary, gave birth to their first son (also James). Yep, for the royal court, it was the place to be on the morning of 10 June 1688. No “I’m washing my hair” excuses allowed. (Especially since the Stuarts, like everyone else in the 17th century, weren’t much into hair washing anyhow.) But, yes, along with about 15 of Mary’s ladies in waiting and senior women of the court, the King and important men also came along. Even the Archbishop of Canterbury showed up to hang out and watch the action.

Why the massive audience? Well, partly, it was in fact usual practice to invite along helpers to a birth—especially female helpers—and when a royal baby was born, this would also include statesmen who would look to certify the birth and be witness to the legitimacy of the offspring. That act of bearing witness to the royal birth carried with it the sense of certifying the continuity of rule and the stability of the state. But in this case, there was even more reason for the worthies to amass and lend their imprimatur to the occasion.

And that reason was that the Baby Daddy was not popular. Despite having only been on the throne for three years, James II was a deeply unpopular monarch—overbearing with Parliament, Catholic in his religion—and there was a growing movement to oust him. Part of this attempt to give him the boot was raising doubts about the line of succession, and behind closed doors there were whispers and allegations. So all those people in the birthing room were there as insurance against gossip and tattle that the baby wasn’t legit.

Didn’t work. The baby wasn’t really the king’s—went the marketplace gossip. The queen hadn’t even really been pregnant. The baby was smuggled into the birthing room in a warming pan! (A metal container with a long handle that you could put hot coals in to warm a bed. An early hot water bottle. See the nice picture.) Or maybe there was a secret trapdoor in the bedhead and the baby was popped through this. Or maybe Mary was actually pregnant and did give birth, but the baby wasn’t the king’s. It was a miller’s baby—an apparently particularly lusty profession for the 1600s. Those randy millers.

Regardless, the “warming pan baby scandal” put a permanent question mark over the legitimacy of Mary and James’ son. He never did become king. Mary and James fled from England when their baby son (the miller’s baby son?) was just five months old, to a life in exile in Europe. All that insurance in the birthing chamber came to nothing.

The results of bliss

Queen Victoria – fun in the bedroom, anesthesia in the delivery room

Queen Victoria delighted in sex. Yes, ironically for the Queen whose name is synonymous with an era of buttoned-up, laced-down, lie-back-and-think-of-England frigidity, she loved the ol’ bedroom capers. She even had a switch installed in her and Albert’s bedside tables which would throw the door locks, should they be embarking on a bout of “bliss beyond belief” (as she described their wedding night).

And very productive bliss it was, too. Their first child, also Victoria, was born precisely nine months after their first encounter. But all this majestic romping did have a consequence: Victoria had nine children, averaging one every two years, and might conceivably have had more had Albert not died in 1861. (Although, by this time, Vicky was 42, so it may have been pushing it. Their last child was born in 1857, when the queen was 38.) She hated being repeatedly pregnant, feeling “more like a rabbit or a guinea pig than anything else and not very nice”, as she wrote in her diary. Breast-feeding was “disgusting” and babies, even pretty ones, were “frightful when undressed”. No, Mater Britannia didn’t much like the mater bit. Sex, yes. The results, not so much.

Victoria, however, did spark a trend amongst upper-class women in England after she received chloroform during labour for her eighth and ninth deliveries (Prince Leopold and Princess Beatrice). Her doctor was John Snow, best known for his classic epidemiological study in which he traced an outbreak of cholera to a particular water pump—and ended the epidemic by having the pump’s handle taken off. The use of “the blessed chloroform”, as Vicky called it, declined after 1870, however, as evidence grew about how difficult it was to get a safe but effective dosage. (The line between anesthetizing and killing was a very narrow one where chloroform was concerned.) Even today, mothers face complex decisions about birth and pain, and the search continues for a perfectly safe means of delivering birthing women from pain.

Prince Philip and Prince Charles – Dads in the delivery room

Here for this bit

Although, as we saw, James II was in the bedroom when his wife Mary was giving birth, royal husbands of modern times have not been around. For the past two centuries, most royal papas have been nowhere near the delivery room. Prince Philip played squash with his private secretary during the birth of his son Charles in 1948.

But that, it should be said, was entirely usual – fathers of that era did not front up for the births of their children. The idea that fathers can—or even should—be in the delivery room is a very recent development. Until the 1970s, most fathers were banished from the room to roam corridors or hang out in the pub until it was all over. Nowadays, there is a considerable expectation that soon-to-be Dads should be toughing it out by the bedside, and if all the bones in their hand get broken…well, it took two to tango.

The BBC recently featured comments from fathers about their thoughts and experience on “being there” and it shows there is a big divide in opinion about this new role, from “I think the man should stay away until its all over. Give the nurses time to clean up the child, and give time for the wife to tidy up and get some make-up on” to “it’s amazing…I wouldn’t have missed it for the world.” Have a read – boys are so funny.

Anyhoo, back to the royal thing. Prince Charles is the first royal daddy of the past two centuries to be in the delivery room for the births of his sons, William and Henry. And those births were also the first royal births to take place in a hospital, rather than at home – another major transformation of childbirth over the past century.

So we come to the end of our own romp through past royal births with the observation that, for the historian, royal births can be like signposts, marking developments in medical care and changes over time. Since they are often well documented, they can be some of the best sources of information about what has been considered good care and appropriate behavior surrounding pregnancy and birth at different times in history. While historically royal births have been events that have engaged the nation (and sometimes shaped it as well), they are also very human endeavors. Take away the fact that Her Royal Majesty is doing the pushing, and what’s left are people trying to build a family.

Interested? Want more?

* Two great sources for information about Henry VIII’s fertility struggles and the warming pan scandal are the public lectures organized by moi: Henry VIII: The Quest for an Heir, by Peter Jones and Mary of Modena: A Royal Scandal, by Mary Fissell. The lectures are available in podcast here.

* The paper that suggests the Kell positive blood group explanation for Henry VIII’s fertility woes is Whitley, Catrina Banks and Kyra Kramer (2010). “A new explanation for the reproductive woes and midlife decline of Henry VIII”, The Historical Journal, Vol. 53, Iss. 4, pp. 827-848

* Queen Victoria’s Journals are online here.

* History of anesthesia:

Snow, Stephanie, (2008) Blessed Days of Anesthesia: How Anesthetics Changed the World, Oxford University Press, Oxford.

Specifically on the history of pain control during child birth:

Wolf, Jacqueline H. (2009) Deliver Me from Pain, Johns Hopkins University Press, Baltimore.

* A good presentation on fathers in the delivery room is here by historian Laura King (2010) “Hiding in the pub to cutting the cord?”

Update – Angelina Jolie and the history of breast cancer

An update on my earlier post about Angelina Jolie and her decision to undergo a preventive double mastectomy after genetic tests showed she carried a mutation in a gene (BRCA1) that greatly increased her risk of breast and ovarian cancer.

As some of the commentators on that post pointed out, the genetic test that Jolie had is expensive and in some countries – the US being one of them – not covered under public health (national insurance) schemes. For many women, this would have put the test out of financial reach. The reason the genetic test for the BRCA1 (and also BRCA2) genes is so expensive is that the genes had been patented by a biotechnology company, Myriad Genetics, which had identified them. Myriad was therefore the only company able to offer tests for the BRCA1 and 2 genes. That exclusivity came with a price tag: the tests cost over $4000.

But today the US Supreme Court ruled that companies cannot patent human genes. So bye-bye to Myriad’s protective patent on BRCA1 and 2, and hello competition. More companies will now be able to develop their own tests for BRCA1 and 2 and offer them to the public. This is good news for two reasons. First, economics kicks in: the increased competition will mean cheaper genetic tests, and Jolie’s revelation may also mean more women will ask for them, with the increased demand helping to reduce the cost further. Second, by freeing the genes from patent protection, more researchers will be able to work on them. The Supreme Court’s ruling applies to all human genes, as well. So patents on other human genes are also void and work on those genes is now also up for grabs. (You can, however, still patent engineered genes. It’s just the naturally occurring ones you can’t patent. Identifying a gene is not enough to secure patent protection.)

The history of patenting drugs and other medical discoveries is a fraught one. I’ll say a bit about drug patenting, because that’s the paradigm example here and illustrates the issues at stake. Before 1938, pharmaceutical companies were able to protect their products from being copied simply by keeping the ingredients secret. But in 1938, the Food, Drug and Cosmetic Act required companies to reveal all of a drug’s ingredients. This was to try and protect the public – drugs might contain dangerous compounds. Doctors needed to know exactly what they were prescribing. Patients needed to know exactly what they were taking. So…make ’em say what goes in the pill. But there was a trade-off: forcing drug companies to reveal their recipes meant that companies looked instead to patents to protect their inventions, rather than relying on secrecy.

Developing a new drug costs pharmaceutical companies or research institutes huge amounts of money. Doing the research costs money; testing and trialling the drugs costs money; and many lines of research just don’t produce a viable drug at the end of it. But drug regulations require companies to be completely clear about what has gone into a drug and how it is made – no secrets allowed. The idea is that patents should create an economic environment that makes it worth while for pharma companies to bring drugs to market, while still protecting the public from unknown ingredients in drugs. And patents work very well at doing that in some instances.

But there are also problems, as the BRCA1 genetic test case makes clear. By denying competition for a fixed period, patenting buoys the price of drugs while they are still under patent. That is the whole point of patenting, and generally we’re fine with that as a mechanism for encouraging and rewarding innovation. But when the product in question is not so much a new type of TV but a new type of life-saving drug…or a genetic test for cancer…is it still OK to be actively keeping it expensive and restricting competition? There’s an argument that patenting is fine as a way of stimulating the market for, say, luxury goods, but inappropriate in the health industry. The history of drug legislation pushed pharma companies towards patenting, but we still haven’t worked out a good way of balancing public interest with encouraging innovation when it comes to the health market. One big area of interest in drug development is looking to the natural world for drugs – antibiotics secreted by bacteria or exotic compounds found in the skin of Amazonian frogs, say. It is not clear yet whether the Supreme Court’s ruling will affect those lines of inquiry.

But right now, following the Supreme Court’s ruling, five clinical laboratory companies have announced they will start offering tests for BRCA1 and 2 – and at less than Myriad’s price.

Death by Committee

Greetings, people, and a special welcome to those of you recently signed up to this blog on the medical history behind current events. Lovely to have you, and such a delight to have people other than those I am related to and who are, therefore, genetically obliged to read this. (Hi Mom!) It was also especially great to have your comments on the last post – I appreciate them, so thank you.

Today, I have something from the bizarre files of strange-but-true medical history for you. Here’s what made me think of it: this week, doctors at a conference in Barcelona for anesthesiologists called for more guidance on how to tell when a person is dead. Here’s the link.

I know, I know! What type of first dates are these people going on??? Kidding. Kidding. But, funnies aside, there is a very serious issue here, and one with a history.

When is a person dead? Historically, the answer to that question was always: “When their heart stops beating and they stop breathing.” Partly, this heart/lung definition of death related to an old idea, dating back to ancient Greek medicine, that the heart was the seat of life. It was, according to Aristotle, the first part of the body formed in utero, the source of the body’s heat and seat of the soul, directing and controlling the bodily economy. (None of which is the case. Sounds nice, though.) And partly it reflected the physiological fact that, without blood being pumped around the body carrying oxygen and glucose to the cells, all the organs would indeed fail and the person would definitely be dead.

Patent sketch (1882) for a “device for indicating life in buried persons”.

Under the heart/lung definition of death, the test for whether a person was dead or not was to feel for a pulse, and check with, say, a mirror for condensation from breathing. These tests were (and are), however, not fail-safe. People who are very cold or heavily drugged might seem not to have a pulse or be breathing. There are ghoulish accounts of corpses reanimating in mortuaries and of exhumed skeletons found to have clawed at the lids of their coffins. For this reason, laws were passed in the nineteenth century requiring a delay before burying a body after death had been certified by a doctor. Just in case. Enterprising inventors of the Victorian era offered coffins equipped with speaking tubes or bells, should you come to six feet down and need to contact the surface to be dug out.

But let’s fast forward to the 1960s when the heart/lung view of when to call it quits came into question. Two developments made this an issue.

Development One was technological. Life support systems—artificial respirators which could take over breathing for a patient whose brain was not prompting this function—had made it possible to keep severely brain-damaged patients alive. (And antibiotics helped stave off infection, which is a big risk for patients in a coma.) Electro-encephalograph machines (EEGs) allowed physicians to detect and measure the electrical activity of patients’ brains. EEGs showed that there were some patients in comas whose hearts were still beating (machines couldn’t take over that function), but their brains were so damaged that the neurons were no longer firing and sending out electrical signals. The brain was dead; the rest of the body was not. Barring rare accounts of miracles, for most such patients there was no hope of recovery. Instead they faced a continued existence—life would be too active a word—with machines invasively taking over their bodily functions until heart attack or infection killed them.

Christiaan Barnard (1922-2001)

Development Two, which really brought things to a head, was this: in December 1967 South African surgeon Christiaan Barnard announced that he had successfully performed what Time magazine called “the ultimate surgery”. He had transplanted a donor heart into his patient, Louis Washkansky, a 54-year-old grocer dying of heart disease. Washkansky lived for 18 days after the surgery—not a long time, but long enough to show that the new heart had worked. The heart had come from Denise Darvall, a young woman who had been hit by a car the day before and was on life support. Her brain was no longer functioning; she was in an irreversible coma.

Successful kidney transplants from so-called “beating-heart cadavers” had been performed since 1953, but Barnard’s feat suggested the world was on the brink of a new era in medicine—an era of transplants. But only if donor organs could be found.

Under the older heart/lung criteria for whether a person was dead, patients like Denise Darvall, in irreversible comas but with hearts still beating, were “alive”. Barnard for years kept secret the fact that Denise Darvall’s heart had not stopped of its own accord before he removed it from her body. He had induced a heart attack by injecting her heart with potassium. According to the law at that time (but not later), Christiaan Barnard had killed Denise Darvall when he stopped her heart to use it for transplantation. Even though patients in such irreversible comas weren’t breathing for themselves, their hearts were still beating and therefore, by the heart criteria, they were still alive. They would have to stay in intensive care units until their hearts stopped—and this would probably mean that their organs were no longer viable for transplant.

So in 1968, Harvard Medical School convened a committee to redefine death. The Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death (as it was called) was composed of neurologists, physiologists, public health clinicians, biochemists, transplant surgeons, a medical historian (shout out for the medical historian!) and an ethicist. Together, the committee met to develop criteria by which “irreversible coma” would become “the new criteria for death”. An irreversible coma was characterized by “a permanently nonfunctioning brain”.

In its report, published in the Journal of the American Medical Association, the committee set out diagnostic tests that would identify when a person in a coma had a “permanently nonfunctioning brain”: they would not respond to stimuli such as noise or pain; they couldn’t move or breathe for themselves; their EEG was flat (no electrical activity in the brain); and none of their reflexes worked (such as the pupil no longer contracting in response to bright light; tapping muscles with a reflex hammer producing no movement.) Brain death, argued the committee, not a non-beating heart, should be what signified that a person was not coming back.

And that is indeed what happened. The Harvard Committee’s recommendations today form the foundation of how death is defined for medical and legal purposes around the world. Death by committee. For those of us who have worked in bureaucracy, that is an all-too-familiar occurrence (he he) but I find it a startling episode in the history of medicine.

Medical historians and ethicists debate the various influences on the Harvard Committee’s decisions. The Committee itself said that it wanted to redefine death for two main reasons. One reason was to reduce the burden of brain-dead patients—the cruelty of continued massive medical intervention and the impact on relatives—as well as the burden on other patients needing hospital beds already occupied by comatose patients. The other reason the Committee gave was that, if transplant surgery was to become a reality, doctors needed legal certainty about when they could or could not take organs from someone. Putting it bluntly, the Committee wanted clear guidelines so that no one could accuse transplant surgeons of taking organs out of people while they were still “alive”.

Other commentators have said that the Committee wanted to ensure a supply of organs for transplantation. The reasons for this could either be altruistic—because surgeons wanted to be able to help people needing transplants—or for professional interest—because surgeons were enthused about this exciting new surgical opportunity and wanted to have a go at it.

One could see this episode in medical history as evidence of how the medical profession strives to weave an ethical pathway through the moral dilemmas thrown up by disease, technology, and the limits of medical attainment and try to work out the best way to help patients. Or, should you be so inclined, you could also interpret it as doctors trying to clear the way for them to indulge in exciting new opportunities for their skill. I think a fair reading of the evidence of the Harvard Committee’s deliberations and what they said for themselves suggests that there are elements of both of these factors. But I also think that delight in exercising skill as a doctor is perfectly compatible with caring for patients—people can both want to help people and take pleasure in their skill in doing so.

So why are the anesthesiologists at the conference this week wanting more guidance on when a person is dead? Why is determining death still an issue? Well, for one thing, death turns out to be extremely complicated. Like the joke in The Princess Bride, there’s “all dead” and then there’s a considerable range of “mostly dead.”

The criteria the Harvard Committee set out for determining death, and which are used today with some variations and refinements, are meant to determine when the “whole brain” has stopped functioning. But some people argue the criteria are too broad, and so identify people as dead who are in fact not dead yet. And others say the criteria are too narrow, and that more people are really dead than the criteria would allow. On the “too broad” side, there is evidence that in some people who pass the “whole brain” criteria some parts of their brains are still working—their bodily temperature is still being regulated, for example, or their neuro-hormones are still signaling. Something is still working in there, so clearly the “whole” brain is not dead.

And on the “too narrow” side, some people say “whole brain” death is too much, and what we really mean when we say a person is dead is that they are permanently unconscious and will never wake up again. That is, it is not the “whole brain” that matters but the “higher brain”, and its death should be what “death” means. Under this definition, people in what is called a “persistent (or permanent) vegetative state” (PVS)—heart beating, breathing, but permanently unconscious—would be considered “dead” when under the “whole brain” criteria they are not.

And even setting aside these debates about how much dead is all dead, the criteria used to determine death vary between different countries. This is a particular reason why the anesthesiologists meeting in Barcelona would like clarification. You might be dead in Italy, but in England, say, you’re still alive. (And I say that without casting any cultural aspersions.) Trying to find an ethical answer to the meaning of death when there are currently no perfect solutions is a difficult, fraught and important challenge. But surely, being dead should be a condition the same the world over, and so the World Health Organisation is working on developing an international standard. Presumably also by committee.

Interested? Want more?

* The Harvard Ad Hoc Committee’s report is:

Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death. “A Definition of Irreversible Coma.” Journal of the American Medical Association 205, no. 6 (1968): 337-40.

* Later development of the Harvard criteria and review of the issues:

President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. “Defining Death: Medical, Legal and Ethical Issues in the Determination of Death.” Washington, D.C.: U.S. Government Printing Office, 1981.

* Reviews of the Harvard Committee’s work, including debates about what factors influenced its decisions:

Giacomini, Mita. “A Change of Heart and a Change of Mind? Technology and the Redefinition of Death in 1968.” Social Science & Medicine 44, no. 10 (1997): 1465-82.

Pernick, M.S. “Brain Death in a Cultural Context: The Reconstruction of Death 1967-1981.” In The Definition of Death: Contemporary Controversies, edited by S.J. Youngner, R.M. Arnold and R. Schapiro, 3-33. Baltimore: Johns Hopkins University Press, 1999.

Wijdicks, Eelco F.M. “The Neurologist and Harvard Criteria for Death.” Neurology 61 (2003): 970-76.

* On use of brain death criteria in different countries around the world:

Baron, Leonard, Sam D. Shemie, Jeannie Teitelbaum, and Christopher James Doig. “History, Concept and Controversies in the Neurological Determination of Death.” Canadian Journal of Anesthesia 53, no. 6 (2006): 602-08.

* On the debate about the “whole brain” standard:

Truog, Robert. “Is It Time to Abandon Brain Death?” The Hastings Center Report 27, no. 1 (1997): 29-37.

Angelina Jolie and the history of breast cancer

Greetings, all. The Doctor is In.

Sad news today, I’m afraid. Last Tuesday, the New York Times published a letter from actress, director and UN special envoy Angelina Jolie. Jolie wrote that she had undergone a double mastectomy (that is, removal of both breasts). Following her mother’s death at a young age from cancer, Jolie said, she had been genetically tested. She had discovered that she, too, carried a particular mutation in her BRCA1 gene that raised her risk for breast and ovarian cancer enormously.

For most women, the risk of breast cancer is around 12%, but some mutations of the BRCA1 gene increase that risk. The degree of increased risk varies, depending on the type of mutation and how many mutated copies of the gene the woman carries. On average, a harmful BRCA1 mutation raises a woman’s risk of breast cancer to 60%; Jolie had been counselled that her risk was even higher, at 87%. Harmful BRCA1 mutations are also associated with raising the risk of ovarian cancer from 1.4% to between 15% and 40%.

Angelina Jolie

There is, however, no therapy currently available to fix this genetic mutation. It is not possible to shut down the faulty gene nor replace it with a non-harmful version. So, a woman carrying a BRCA1 mutation has a choice between two unsatisfactory options: live with the risk, have yearly MRI scans and mammograms, and hope that if a cancer begins, it is found soon enough for treatment to be successful.[1] Or, have one’s still healthy breasts and ovaries removed as a preventive measure. Angelina Jolie chose the latter. She had her breasts removed. She also indicated in her letter to the New York Times that she may later have her ovaries removed.

This choice is a horrific one to have to make—paying a pound of flesh to buy a reduced risk of the “dread disease”. In Jolie’s case, the poignancy of her choice is made even sharper by the nature of her most iconic film role: Lara Croft, Tomb Raider. This character (originally from a computer game) is distinguished by her long dark plait and her substantial, improbable bosom. (For the role, even Jolie’s glorious natural assets were slightly padded.) Jolie’s choice of a double mastectomy with subsequent cosmetic reconstruction is a brave one. In writing about it for other women, she has added another point to the list of reasons to admire her.

From a historical point of view, Jolie’s announcement is the latest example of a public figure speaking about her breast cancer. In 1973, Shirley Temple Black published an article in McCall’s magazine, saying how she had elected to undergo a two-step, simple mastectomy for her breast cancer. (Two-step meaning two operations. In the first operation, some cells from the lump are removed and tested to see if they are cancerous. If the cells are cancerous, then—step two—the woman can opt for a second surgery to remove the tumour. In a simple mastectomy the breast is removed, but not the lymph nodes nor the underlying chest muscle.)

Shirley Temple Black

At the time, Temple Black’s choice for a two-step simple mastectomy was contentious. Most cancer surgeons had long favoured a more extensive operation: the radical mastectomy. In a radical mastectomy the lymph nodes and chest muscle are removed along with the breast itself. This more extensive operation was more likely to involve later complications of swelling, pain, and restricted motion. And the tradition was also to do it in one step—a single operation in which surgeons tested the tumour and removed it if it were cancerous. The patient would go under the anaesthetic not knowing whether or not she would wake up with a breast removed.

Shirley Temple Black’s surgeon had recommended the one-step, radical mastectomy option to her. But she didn’t want a one-step procedure. “The surgeon can make the incision,” she said, “I’ll make the decision.” And Temple Black had also heard of new research that had shown that the less invasive simple mastectomy had as good an outcome as the radical mastectomy. She insisted on a two-step procedure, and, when the biopsy results came back positive, a simple mastectomy. She wrote her McCall’s article to encourage other women to opt for two-step procedures and to push for more conservative surgery.

Happy Rockefeller

Betty Ford

In 1974, Betty Ford, President Gerald Ford’s wife, announced through a press conference that she had undergone a one-step modified radical mastectomy. (Modified meaning surgeons left the chest muscle and removed only the breast and lymph nodes.) A few weeks later, Nelson Rockefeller also gave a press conference to say that his wife, Margaretta or “Happy”, had undergone a two-step modified radical mastectomy.

Betty Rollin

And in 1975, Betty Rollin, an NBC news correspondent—who, ironically, reported on the Ford and Rockefeller stories before discovering that she, too, had breast cancer—published First You Cry. The book, in which Rollin talked about her experience of being diagnosed with cancer and her modified radical mastectomy, became a best-seller.

Together, those four famous women raised awareness of breast cancer and the range of treatment options available. They also contributed to a trend away from the one-step radical mastectomy to two-step, more conservative procedures. This change also promoted women’s control over decisions about their cancer treatment and their bodies. Angelina Jolie has now joined their ranks. Her treatment, a preventive surgery based on genetic testing, is a more recent addition to breast cancer responses.

From ancient times, it had been clear that cancer sometimes ran in families. In the 1940s, systematic studies had shown that some rare cases of breast cancer—less than 10% of all breast cancer cases—followed family lines. This suggested there was some gene or genes involved in causing cancer in these cases. Developments in genetic screening and gene identification allowed a team of geneticists led by Mary-Claire King at the University of California, Berkeley, to identify an area on chromosome 17 which seemed a likely location for this breast cancer gene. The particular gene involved was pinpointed in 1994 by geneticists at the University of Utah, and given the name BRCA1 (breast cancer susceptibility gene 1).

In most populations, the mutations in the BRCA1 gene that cause cancer are rare—only about one in 500 people carries a risk-raising mutation of their BRCA1 gene. (In Ashkenazi Jews the rate is higher—about one in 40—owing to the fact that Ashkenazi Jews share a small number of common ancestors, of whom one or more must have carried a BRCA1 mutation.) A second breast cancer gene, BRCA2, was identified in 1995. Other genes associated with hereditary breast cancer have also been found, but mutations in these are rarer than BRCA1 and 2 mutations.

From a historical point of view, Jolie’s announcement highlights these recent additions to the history of breast cancer: new technologies of genetic testing, new understanding of the inheritance of cancer, and surgery used preventively, rather than as treatment. But the announcement also highlights an essential sameness in the history of breast cancer: wrenching decisions and essentially unsatisfactory treatment options. Let us hope that when the next beat in this historical rhythm is played—when a public female figure speaks about her breast cancer in, say, 2030—she will be announcing how the threat of cancer was successfully removed without surgery, without harsh side-effects and without ongoing anxiety.

Till next time, stay well,

Dr Then

Interested? Want more?

Angelina Jolie’s letter to the NY Times:
Jolie, A, “My Medical Choice”, New York Times, http://www.nytimes.com/2013/05/14/opinion/my-medical-choice.html?ref=health, 14 May 2013.

Seminal papers on the discovery of the BRCA1 gene:
Hall JM, Lee MK, Newman B, Morrow JE, Anderson LA, Huey B, and King MC, “Linkage of early-onset familial breast cancer to chromosome 17q21”, Science 250 (1990): 1684-9.

Miki Y, Swensen J, Shattuck-Eidens D, Futreal PA, Harshman K, Tavtigian S, Liu Q, Cochran C, Bennett LM, Ding W, et al, “A strong candidate for the breast and ovarian cancer susceptibility gene BRCA1”, Science 266 (1994): 66-71.

Summaries of current understanding of the BRCA genes:
National Cancer Institute. “BRCA1 and BRCA2: Cancer Risk and Genetic Testing.” National Institutes of Health, http://www.cancer.gov/cancertopics/factsheet/Risk/BRCA. 2009. Access date 2013.

Stanford Cancer Institute. “Hereditary Breast Ovarian Cancer Syndrome (BRCA1/BRCA2).” Stanford Medicine, http://cancer.stanford.edu/information/geneticsAndCancer/types/herbocs.html. 2013. Access date 2013.

On the history of breast cancer, and of famous women who spoke about their treatment:
Lerner, Barron H. The Breast Cancer Wars: hope, fear, and the pursuit of a cure in twentieth-century America. Oxford: Oxford University Press, 2001.

Olson, James S. Bathsheba’s Breast: Women, Cancer and History. Baltimore: Johns Hopkins University Press, 2005.

My own paper on breast cancer patient activists:
Dawes, Laura. “When Subjects Bite Back: The Bristol Cancer Help Centre Study and Increasing Consumer Involvement in UK Medical Research in the 1990s.” Social History of Medicine 25, no. 2 (2012): 500-19.

On the history of cancer in general:
Patterson, James T. The Dread Disease: Cancer and Modern American Culture. Cambridge: Harvard University Press, 1987.


[1] It may also be possible for the woman to take a drug such as tamoxifen or raloxifene to lower her risk of cancer. These drugs can have significant side effects, however, and the woman would still need to have regular scans.

Junking the BMI

The Doctor is In.

We—he-heee-ll. What do we have here, mes amis? The BBC’s magazine website has an article asking “BMI: Does the Body Mass Index need fixing?” And answering: yes, it does, and one Nick Trefethen, Professor of Numerical Analysis at Oxford University, is suggesting how that might be done. Here it is.

“Pourquoi?” you ask. (You are clearly feeling somewhat French today. Or maybe that’s just me, so I’ll translate: “Huh?”) What’s this about BMI and it needing fixing?

BMI—that’s Body Mass Index to you good folks—is the measure that has been used since the 1960s as the basis for diagnosing obesity. It is a formula where you take your weight and divide it by your height squared. That is:

BMI = weight / height²

(It’s best if your weight is measured in kilograms and your height in meters, because then the values come out at somewhere between about 15ish and 35ish—nice, easy-to-work-with numbers. But if your height and weight are in different units—pounds and feet, say—there are scaling factors that can be used to make the numbers come out nicely. Mathematically speaking, it all works out the same.) You can find any one of a gazillion BMI calculators on the web that will take your height and weight in any units and spit out your BMI and tell you what it means. Here’s one.

BMI 18 to 25? Fine. No problemo. Looking good. BMI 25 to 30? On the plumpish side—technically “overweight”. Above that, you’re in serious “obese” territory. Get thee to a doctor. No, not this one—an MD.
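
Or, if you’d rather not trust a random web calculator, here is a minimal sketch of one in Python (my own illustration, nothing official). The metric formula is exactly the one above, the 703 scaling factor for pounds and inches is the conversion factor in common use, and the verdicts follow the rough bands just described.

```python
def bmi_metric(weight_kg: float, height_m: float) -> float:
    """BMI = weight / height^2, with weight in kilograms and height in metres."""
    return weight_kg / height_m ** 2


def bmi_imperial(weight_lb: float, height_in: float) -> float:
    """The same index from pounds and inches; 703 is the usual scaling factor."""
    return 703 * weight_lb / height_in ** 2


def verdict(bmi: float) -> str:
    """Rough bands as described above, with 25 and 30 as the cut-offs."""
    if bmi < 25:
        return "fine"
    if bmi < 30:
        return "overweight"
    return "obese"


if __name__ == "__main__":
    # 70 kg at 1.75 m gives a BMI of about 22.9 -- "fine".
    print(round(bmi_metric(70, 1.75), 1), verdict(bmi_metric(70, 1.75)))
    # Roughly the same person in imperial units (154 lb, 69 in) lands in the same band.
    print(round(bmi_imperial(154, 69), 1), verdict(bmi_imperial(154, 69)))
```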

Now, BMI is used as the diagnostic measure for obesity for one very important reason: it correlates with body fat content, and the amount of body fat one has is what medical thinking currently believes is the dangerous thing about being fat. Note that it doesn’t measure body fat content – it just correlates, meaning if you have a high body fat content, you’re likely to have a high BMI, but the relationship is not exact.

It would be much better to actually measure body fat content and base the diagnosis of obesity on that, but it so happens that measuring body fat content is rather difficult. Rather difficult meaning, in this case, you have to be dead to get an accurate read. That sort of difficult. If you want to know your body fat content and still be alive, you’ll have to settle for an approximation. Which is what BMI is. (There are other ways of getting an approximate measure of body fat content—hydrodensitometry, DEXA, liquid scintillation detection—but if I just threw out there a whole lot of techno-jargon it’d just be showing off. 😉)

BMI has a rather august history. Its first mentioned use was by a Belgian statistician, Adolphe Quetelet, in 1836. Adolphe—we’re on first name terms in this blog—was interested in applying mathematics to the study of people—births and deaths, and growth and size and so on. He experimented with using ratios of weight and height as indicators of a person’s heft or bulk and studied how these changed over time. While mostly Adolphe just divided weight by height—a much easier calculation than using powers—he did note that:

[d]’après des recherches nombreuses que j’ai faites sur la corrélation entre les tailles et les poids des hommes adultes, j’ai cru pouvoir conclure que les poids sont simplement comme les carrés des hauteurs[,]

Following the numerous investigations I have made into the correlation between the heights and weights of adult men, I believe I may conclude that weight varies simply as the square of height.

(Quetelet, Adolphe. Sur L’homme Et Le Développement De Ses Facultés, Ou Essai De Physique Sociale. Paris: Bachelier, 1835, p.97.)

And for that reason, the ratio of weight divided by the square of the height was given the name Quetelet’s Index. In 1972, the American physiologist Ancel Keys renamed (and depersonalized and denationalized) the index, calling it Body Mass Index (BMI), but you do still come across the occasional reference to Quetelet’s Index, especially if the author is European and knows their history. And perhaps doesn’t like Americans.

BMI is, of course, not the only way you can combine height and weight in an index. You can, as Quetelet usually did, just divide weight by height. You can use weight divided by the cube of the height—called Rohrer’s Index. You can use height divided by the cube root of the weight—called Livi’s Index (which is just the cube root of the inverse of Rohrer’s Index. Naturellement.)

So why, out of these various indices, is BMI the one used most often today? It's not the easiest one to calculate (weight/height is easier). It's not the one where the dimensions of the top number in the fraction (the numerator) match the dimensions of the bottom number (the denominator). (Mathematicians love that in an index; it makes it mathematically prettier. By that count, Livi's or Rohrer's indices are the prettiest.) So why BMI?
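
(Here's a little Python sketch putting those rival indices side by side for one invented person. The numbers are made up; only the formulas matter.)

```python
# The rival weight-for-height indices mentioned above, for one hypothetical person.

weight_kg = 70.0
height_m = 1.75

quetelet_bmi = weight_kg / height_m ** 2   # Quetelet's Index, a.k.a. BMI
simple_ratio = weight_kg / height_m        # plain weight over height
rohrer = weight_kg / height_m ** 3         # Rohrer's Index
livi = height_m / weight_kg ** (1 / 3)     # Livi's Index

# The "Naturellement" from above, spelled out: Livi is Rohrer to the power of minus one-third.
assert abs(livi - rohrer ** (-1 / 3)) < 1e-9

print(quetelet_bmi, simple_ratio, rohrer, livi)
```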

BMI correlates with body fat content, and it does so the best of all the indices. And that's why it has become the measure for obesity.

Now here's where the history of BMI becomes very important. Knowing it stops one from spending god knows how long and how many research dollars doing what Professor Nick Trefethen, Professor of Numerical Analysis at Oxford University, has done: announcing that he has found a better way of measuring obesity than BMI when—sorry, Nick—it's been done before.

Question: Why do you want a better measure than BMI?

Answer: BMI only correlates with body fat content; it doesn't measure it. So sometimes some funny things happen. People who are very highly muscled also have a very high BMI. That's because muscle is very dense—it's heavy, but it doesn't take up a whole lot of space for that amount of weight. This fact—that BMI would be high for very muscly people as well as very fat people—was known in…wait for it…1942. (That study was done by a doctor working at the US Naval Yard, where he tested All-American football players. They were chunky. Cubic. And had high BMIs.) Clearly, the fact that BMI can't pick out the difference between an All-American football player and a genuine butterball is something of a problem if you just want to mechanically measure heaps of people, as in an epidemiological study. Perhaps that so-called obesity epidemic is in fact an epidemic of weight-lifting??!! We're not getting fatter, we're getting more cut? Oh, I wish.

Question: Is Professor Nick right in saying you can find a better index than BMI for approximating body fat content?

Answer: Yes, you can, but only if you go with non-whole numbers in the exponents—the powers. Which is what Professor Nick has done. He says that 2.5 works well. (That is, weight divided by height to the power 2.5.) He may want to have a chat with R.T. Benn of the Medical Research Council, who thinks 1.6 or 1.8 might do better. Which he said in 1971.
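
(Again as a purely illustrative sketch: the whole family of candidates is just weight over height raised to some power p. Change p and you change the index; you would also have to recalibrate the cut-offs, because the raw numbers land on quite different scales.)

```python
# Weight-for-height indices with a tunable exponent p, as discussed above.

def weight_for_height_index(weight_kg: float, height_m: float, p: float) -> float:
    return weight_kg / height_m ** p

# p = 2.0 is BMI; 2.5 is Professor Nick's exponent; 1.6 and 1.8 are Benn's candidates.
for p in (2.0, 2.5, 1.6, 1.8):
    print(p, round(weight_for_height_index(70, 1.75, p), 1))
```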

Sigh. I really am too young and lovely to be thinking that this is all old hat. But…meh.

So, OK, we can find an index that does better than BMI in approximating body fat content. And with computers nowadays, what’s a decimal or two in the exponent? But here’s the big question: should we junk the BMI for another formula that might approximate body fat content better?

Answer: Not much point, really. All the ways we have of measuring body fat content on a living person are approximations. And it is difficult to get consensus on how to measure and diagnose a condition—BMI has, more or less, achieved this. Most obesity researchers use BMI to measure obesity. So current research into obesity can at least build on itself, rather than perpetually fumbling about with the question of “how do we measure this?”

And here's the really critical thing: there is new research being done right now into what makes obesity harmful. The indications are that total body fat content is too gross a measure (ha ha – sorry for the pun) to capture the dangers of being fat. Rather, there are early suggestions that the location of the fat and how it affects the body's metabolism (the two are connected) determine how dangerous the fat is. It is more likely that, in the future, obesity will be diagnosed with a blood test looking for markers connected to the metabolic action of fat. Whether you are diagnosed as obese or not will not depend on your weight; it will depend on a chemical in your blood. It would, therefore, be silly to junk BMI and the consensus for using it right now in favour of a measure that is only marginally better, when there is every promise that a radically better measure will come along in the next ten or so years.

Interesting things on the horizon, that’s for sure. Now, till then, put down that cookie, turn off the computer (yes, I know you’re on the computer. Right now in fact. Don’t be fibbing to me. Dr Then knows all) and

Stay well,

Doctor Then.

Interested? Want more?

For seminal investigations of the different height-weight indices, see

  • Billewicz, W.Z., W.F.F. Kemsley, and A.M. Thomson. “Indices of Adiposity.” British Journal of Preventive and Social Medicine 16 (1962): 183-88.
  • Khosla, T., and C.R. Lowe. “Indices of Obesity Derived from Body Weight and Height.” British Journal of Preventive and Social Medicine 21 (1967): 122-28.
  • Benn, R.T. “Some Mathematical Properties of Weight-for-Height Indices Used as Measures of Adiposity.” British Journal of Preventive and Social Medicine 25 (1971): 42-50.

Ancel Keys’ paper renaming Quetelet’s Index as the Body Mass Index is Keys, Ancel, Flaminio Fidanza, Martti J. Karvonen, Noboru Kimura, and Henry L. Taylor. “Indices of Relative Weight and Obesity.” Journal of Chronic Diseases 25 (1972): 329-43.

An excellent summary of the use of BMI in government health reports is Kuczmarski, R.J., and Katherine M. Flegal. “Criteria for Definition of Overweight in Transition: Background and Recommendations for the United States.” American Journal of Clinical Nutrition 72 (2000): 1074-81.

Cough sniff splutter

Tags

, , ,


The Doctor is In.

Greetings, followers. In spite of these words of wisdom and insight that occasionally issue forth from her lips, Dr Then has to confess to being, sometimes, all too human: she has a sniffle. You know how it goes…there was one person in the office dripping nasally and spluttering over the photocopier and all of a sudden the rest of the team is coughing up a lung and looking reproachfully at “patient zero” who is now—of course—perfectly chirpy.

So today the subject of this missive is something dear to the hearts of social historians of medicine. Now, before the Big Reveal, I'm going to pause for a moment here for dramatic effect and to issue a warning: Egregious Technical Jargon Ahead. Oh yes, the history of medicine is no more immune to scholarly impenetrability than [insert your favourite social science here] and comes with its own collection of terminology. Thankfully, I have tended to spare you much of it in these archives, but today you're in for it. I have a sniffle, and my public sympathy has gone fffsstt. So here's a term I learned during those long, long, long years of postgraduate education (the years that mean I am not just Ms Then but Dr Then): sick role. Sick role is the topic of our discussion today.

What is the sick role? Sick role refers to the collection of behaviours that is considered appropriate for a person who is sick. The whole concept of the “sick role” in history of medicine links with an approach made popular in the 1960s: that it would be interesting to know what life in times past was like for average people—your Joe the Builder of 1650, for example. And since Joe the Builder sometimes got sick, it would be interesting to know what he was expected to do about it. History from the patient’s perspective—history “from below”—kitchen sink history—all that nobly normal, democratic stuff.

Typically, one feature of the sick role is that the sick person is given a get-out-of-regular-life-free card and is exempt from normal chores and responsibilities. No milking the cows! No ploughing the fields! No updating your Facebook status to “infectious”! But this is offset by the expectation that the sick person should be doing all they can to get better soon and quickly stop being such a pain in the a**e for the people having to listen to how raspy their sore throat is and could they possibly bring them a nice cup of tea…? You might be sick, but you’ve got obligations to try and not be: the sick role in a nutshell.

Now of course, different time periods and different people have quite different ideas as to those two critical features and what the details of them might be. For example, our current time is seeing a battle of opinions about that first point. (In fact, some historians proclaim “The Sick Role is Dead!” because of this disagreement. Dramatic bunch.) The fraught question is whether one should withdraw from the healthy world until one is better or…should one come to work and cough all over the post-it notes because one is so vital to the continued operation of the workplace? The good of public health collides with the good of productivity and self-esteem. Snigger, you may well do, but there is a whole line of pharmaceuticals dedicated to this rejection of the sick role: Soldier On With Codral Cold and Flu Tablets. Dose Up, Reject Sick Role, and Infect.

And perspectives on that second point—what to do to get better—are also historically changeable. I, for example, favour the kettle-on-constant-reboil, good-book-in-bed approach and delicately scent my person with Vicks Vapour-rub. Très chic. Don’t you just wish these blogs came with smell’o’vision?

A physician, however, would expect their patient to be following their prescriptions—taking their pills, and sometimes having very unpleasant things done to them, such as cupping or bloodletting, in order to get better. And that has been true ever since there were physicians. Doctors have been annoyed for thousands of years with patients who don't do what they're meant to: Hippocrates himself, the ancient Greek founder of medicine, spluttered (in ancient Greek) about his naughty patients. In the 1950s, physicians developed a term of their own for this pesky bunch: “non-compliant” patients. Non-compliant meaning, of course, “naughty”. Historians aren't the only ones with a penchant for jargon.

So, there we are, with our brief foray into the concept of sick role. It’s a hugely productive idea—props to the sociologist who came up with it, Talcott Parsons. Need a research topic? Let’s investigate the sick role! When? Whenever!

I shall now shuffle off and give my demonstration of the sick role in an early twenty-first century, western urban environment. History as performance art.

Til next time, stay well, and if anyone’s passing by a kettle, I’m up for a cup.

Dr Then.

Interested? Want more?

Burnham, John. “The Death of the Sick Role.” Social History of Medicine (2012), published online.

Greene, Jeremy. “Therapeutic Infidelities: ‘Noncompliance’ Enters the Medical Literature, 1955–1975.” Social History of Medicine 17, no.3 (2004): 327-343.

Parsons, Talcott. “Illness and the Role of the Physician: A Sociological Perspective.” American Journal of Orthopsychiatry 21, no. 3 (1951): 452-460.

Porter, Roy. “The Patient’s View: Doing Medical History from Below.” Theory and Society 14, no. 2 (1985): 175-98.

Shorter, Edward. Bedside Manners: The Troubled History of Doctors and Patients. New York: Simon and Schuster, 1985.

My glands made me do it, your Honor

Tags

, , , ,

The Doctor is in.

Doctor Then has just received notice that someone is lodging a fraudulent insurance claim against her. Grrrrr. [Hackles rise] Dr Then wishes their pants would spontaneously ignite, but in the absence of an ability to control fire (or pants), she has instead been drawn to pondering criminality. And medical history has a fair bit to say on that subject.

The most famous contributor to the notion that criminality could come under the purview of medicine was Cesare Lombroso (1835-1909), an Italian. A one-time army surgeon and asylum keeper, Lombroso was of the opinion that criminal tendencies were written on the body, and that a trained observer would be able to tell a career criminal from such details as the slope of the person's forehead, or the positioning of his or her (Lombroso recognized female criminality) ears. In his published works, he assembled a gallery of uglies who demonstrated the criminal features he had in mind. Criminals, said Lombroso, were throw-backs to a more primitive stage of human evolution. Just look at those long arms! Those hairy knuckles! (Arthur Conan Doyle was well versed in Lombroso's theories–his hero, Sherlock Holmes, often does battle with deformed villains, and reads people's habits–including moral persuasion–from their bodies.)


Lombroso’s photos of criminals, allegedly showing physical signs of their criminality (as opposed to simple ugliness)

Lombroso's interest in the shape and distinctions of criminals' bodies laid foundations for attempts to accurately identify arrested criminals. In the late 19th century, it was a major difficulty for the police to positively identify people who had previously been arrested and charged with crimes. Without a solid means of knowing 'This person is Mr Black, who has been arrested for burglary four times before', people charged with a repeat offense could simply give a false name–'I am Mr White'–and skip on off, leaving their long record of housebreaking behind them.

To try to deal with the identification problem, it was standard procedure in London, for example, for policemen to drop by the exercise yard outside the criminal courts. From a special viewing platform, officers would look and see if there was anyone up on a charge who was–as the phrase goes–already 'known to the police'.

In the 1880s, using Lombroso's work as the starting point, the French police officer Alphonse Bertillon developed a system known as Bertillonage as a way of assigning a person a unique identifier. Bertillonage used bodily measurements–the length of the arm, the circumference of the skull, and so forth. Taken together, the measurements formed a profile that was, Bertillon believed, characteristic of a single individual. Knowing a criminal's Bertillonage classification could therefore be used to identify him again. (The system didn't work particularly well – it took ages to measure up a person and was not nearly accurate enough anyhow.) Fingerprinting eventually replaced Bertillonage from the start of the 20th century.

The twentieth century offered new medical insights on the issue of where criminality came from and what it looked like when it arose. In the 1920s, New York physicians Max Schlapp and Edward Smith suggested that criminal behavior was often caused by malfunctioning endocrine glands – pancreas, pituitary, suprarenals, and most especially the thyroid gland. Endocrinology–the medical specialty that deals with glands–was an emerging field at that time, and the excitement about ‘all things glandular’ was comparable to what genetics holds for us today.

With great enthusiasm, glands were thought to hold the keys to aging, sexual dysfunction, delinquency, obesity, mental impairment and…crime. Dr Schlapp in fact appeared in a number of court cases as an expert witness for the defense. In one case, young Archie Daniels (22), of New York, had taken his fiancée out to an ice-cream parlor and a romantic walk and then shot her through the head with his .22 pistol. The girl had, it turned out, been about to leave him for an older and wealthier man. Dr Schlapp testified that Archie had been the victim of his glands. The worry over his fiancée's infidelity had made his glands secrete excessively. His excessive glandular activity had lowered his 'explosion point', lessening his ability to control himself and making him act on violent impulses. Shooting the girl wasn't his fault. His glands made him do it, your Honor. Archie avoided the electric chair as a result of Schlapp's testimony.

Now, of course, medicine still tries to help the courts today in the contentious and difficult job of deciding between the mad and the bad, the culpable and the not responsible, the I-wasn't-there and the blood-all-over-his-hands. The criminal mind fascinates, appalls and puzzles us in equal measure, and spawns a thousand cop and forensic pathologist/psychologist/anthropologist/forensic-anything TV shows. But where does that leave us with the non-romantic, liar-liar-pants-on-fire insurance fraudsters of the type who are trying to get a new bumper bar out of me? What should one say to them? All I can say is, while my insurance company may indeed fold and give you that new bumper bar, you'll still have a Neanderthal forehead, gorilla arms, lopsided ears and a pulsating thyroid. So hah de hah!

Till next time, stay well,

Dr Then

Interested? Want more?

On fingerprinting and its history

Sengoopta C. 2004, Imprint of the Raj, New York: Pan Macmillan.

Cole S. A. 2002, Suspect Identities, Cambridge, MA: Harvard University Press.

On glands (although not about their connection with criminality – more in regard to.. shhhhh.. SEX and glands)

Sengoopta C. 2006, The Most Secret Quintessence of Life: Sex, Glands, and Hormones, 1850-1950, Chicago: University of Chicago Press.

Hamilton D. 1986, The Monkey Gland Affair, London: Chatto and Windus.

On medicine and the law

Mohr J. C. 1993, Doctors and the Law, Baltimore: Johns Hopkins University Press.

Watson K. D. 2011, Forensic Medicine in Western Society, London: Routledge.

Theories ‘in action’

Doyle A. C. 1887-1904, The Celebrated Cases of Sherlock Holmes, 1981 Edition, London: Octopus Books.