
Peer Review

April 10, 2014

Fat Health, Fat Science, Exercise, Fat News

Trigger warning: Discussion of research regarding weight and health.

tl;dr warning: This is one bad mama jama of a post going into great depth on a single study. I have attempted to break it down for accessibility so you can follow along how to deconstruct meta-analysis. The only way I was able to do this myself was by relying on the education, experience and wisdom of several good friends and confidantes who patiently answered my questions, read the analysis along with me and provided exhaustive feedback on how meta-analysis should work. Many thanks to Angie Meadows (aka Never Diet Again UK) and Kala, along with my epidemiologist friends who provided feedback behind the scenes. Making research accessible is vital if we are going to have a public conversation on weight and health and I hope that this post helps contribute to that conversation.

It was bound to happen sooner or later: I made a mistake.

Back in December, I wrote this post discussing two recent studies that supposedly “disproved” fat and fit. I tried to explain the difference between shitty science (i.e., bad methodology resulting in bad data) and shitty analysis (i.e., reliable data used to reinforce preexisting assumptions). In doing so, I was trying to make the case against the kind of science denialism I lamented in this post about Sandy Szwarc and Junkfood Science.

In my attempt to explain a complicated subject, I oversimplified some and drew conclusions that were not valid. Specifically:

We cannot allow ourselves to confuse shitty science with shitty analysis. The research teams that claim they have disproven fat and fit are making that analysis based on solid, reliable, trustworthy data. I’ve read the Danish study and did not see anything egregious about the data they used to draw their conclusions. The Danish study is science, not “science.” What the Danish study and the Canadian study both suffer from, however, is shitty analysis.

Although I did read the Danish study, I only ran across the Canadian study days before I already planned the post and I assumed that, like the Danish study, its data was reliable. But then I went back and actually read the Canadian study (sadly, it’s behind a paywall), as well as the peer critiques that were published alongside it, and suddenly the science got a lot murkier.

[Image: Open Access. In case you were wondering.]

Even if you can’t read the study, you can at least read the conclusion:

Compared with metabolically healthy normal-weight individuals, obese persons are at increased risk for adverse long-term outcomes even in the absence of metabolic abnormalities, suggesting that there is no healthy pattern of increased weight.

You can also see that there are six letters responding to Kramer et al.

[Image: Kramer letters]

This is the beauty of a peer-review process that keeps junk science in check. In theory.

In reality, when Kramer’s study got published, news outlets jumped on the abstract and flat-out declared “fat and fit” dead, as Alexandra Sifferlin did in Time.

[Image: Time headline]

Other outlets basically said there’s no such thing as benign or healthy obesity.

[Image: headlines]

This is the reality of popular science now. The people covering the research show very little willingness to comprehend the full scope of what they’re reporting. Any coverage that implied Kramer’s study proved fat and fit wrong is itself flat-out wrong.

That Time headline? Demonstrably false.

Now, I’ve provided lots of pictures up front to keep the reddit trolls focused up to this point because when they read “Demonstrably false” they most likely went, “yeah, right” or “psshhh” or “whatever, hamplanet.” I mean, I’m just spouting fat logic, right? I’m just some fat schmuck with a blog discussing the work of researchers at illustrious institutions. What the fuck do I know?

Fair enough.

But would you take the word of Dr. Caroline Kramer of the Leadership Sinai Centre for Diabetes at Mount Sinai Hospital in Toronto, lead author of that Canadian study?

Because when I spoke to Dr. Kramer in March, I asked her directly if her paper disproved the concept of fat and fit.

Actually, there were some misunderstandings. For example, when we talk about “fit” we are talking about exercise, we are talking about fitness. We haven’t looked at that. We are talking about metabolic problems and obesity. These are different things. For example, if you are obese, it’s best if you are fit. Let’s say you exercise three times a week or four times a week, it has a protection compared to people that are obese and do not exercise. It’s a different thing when we talk about fitness. Fitness for us, we understand you are talking about exercising, you are talking about cardiovascular fitness. [emphasis mine]

So when I say that Time magazine’s headline was demonstrably false, I mean it.

What Kramer’s study set out to show was how having metabolic disease or not affects your health in the future, whether you’re normal weight, overweight or obese. To assess that claim, Kramer et al. performed a meta-analysis (comparing multiple, similar, peer-reviewed studies) to compare six groups: normal weight healthy, normal weight unhealthy, overweight healthy, overweight unhealthy, obese healthy and obese unhealthy. Kramer compared their health at baseline to follow-up, the length of which varied among the eight studies ultimately used. The outcomes measured were cardiovascular disease (CVD) and all-cause mortality (ACM), grouped together (which is significant, but I’ll explain later).

The question Kramer actually asked was “Do fat people who don’t have metabolic disease today have an increased risk of CVD or ACM in the future compared to normal weight people?” Now, this question is close to the question of fat and fit, since metabolic disorder is what you’re attempting to prevent by exercising and eating a healthy, non-restrictive diet. But if you don’t take into account the fitness level of the individuals being studied, then you don’t know whether Subject A is sedentary but free of metabolic disorder, or active and enjoying that metabolic protection Dr. Kramer mentioned.

In that previous post in which I briefly mentioned Kramer’s paper, I gave one example of the research examining the relationship between cardiorespiratory fitness and weight. Since then, I’ve found another meta-analysis that was published around the same time as Kramer’s, but it got no attention whatsoever. Plus, it’s open access, so rock on.

This study looked at 20 different studies with different outcomes and measures, and even compared a subset of eight studies that measured the physical activity of subjects. The authors said that “several studies found that increased physical activity offset the increased risk of CVD in the [metabolically healthy obese, or MHO].” The authors continued:

Recent research on sedentary behavior, independent of physical activity, has shown that increased sedentary time is associated with increased risk of diabetes, cardiovascular events, and cardiovascular as well as all-cause mortality. This is in agreement with the belief that the natural tendency of the MHO is to progress to [metabolically unhealthy obese] when physical activity is stopped or reduced. To date, no study has examined sedentary time in the MHO phenotype.

Isn’t that the essence of what Health at Every Size® (HAES) teaches? If you want to be healthy, then sustainable, lifelong exercise is the way to go. That’s the “fit” in fat and fit.

But setting aside the fact that Kramer didn’t even come close to disproving fat and fit, there’s the fact that this paper was in direct response to a paper I mentioned in that previous post. After I cited coverage of Kramer’s paper in an article on NBC News titled “New research disputes fat but fit claim,” I wrote the following:

The interesting thing about this NBC article is that the article specifically references a study that came out in January that found people with a BMI under 35 (“obese” being a BMI of 30 or more) did not have an increased mortality risk. The study by Dr. Katherine Flegal’s team received some good coverage, but more hostile coverage as it set off a firestorm of criticism from Dr. Walter Willett and company. You may recall that Willett and his amazing mustache has a long history of attacking Flegal’s research, which consistently pours cold water on the fat panic. [emphasis mine]

This is important: Flegal et al. looked only at all-cause mortality. Her team did not compare metabolic status as Kramer did, and Flegal et al. subdivided the obese category into Class 1 obesity (BMI 30-35) and Class 2 and 3 obesity (BMI 35 or more). Flegal’s data found that those in the overweight category had a reduced mortality risk of 6%, while Class 1 obese had a reduced risk of 5%. Class 2 and 3 combined had an increased risk of 29%, while merging all of the obesity categories together gave an overall increased risk of 18%.

So if Kramer’s study was supposed to refute Flegal, it didn’t do a very good job. The results of Kramer and Flegal are not comparable in any way. What did Kramer find? Healthy obese people had a statistically insignificant increased risk of 19% when comparing all eight studies, but a statistically significant increased risk of 24% when they looked only at the four studies lasting 10 years or longer. Again, like the merging of CVD and ACM, restricting the studies by follow-up time is relevant and I will address it shortly.

But what you should keep in mind as I explain Kramer’s methodology is that Flegal only found a reduced risk in that first obese category. It makes me wonder what Kramer would have found if she were able to subdivide the obese category as Flegal had.

Since there’s no way to figure that out, we must dig into the results Kramer got. And if you look at the comments of other professionals in the field, the results aren’t that impressive. For starters, there’s this review from the University of York Centre for Reviews and Dissemination, an organization that vets research for the National Health Service in the UK:

This generally well-conducted review found that metabolically healthy obese people were at an increased risk of cardiovascular events and death from any cause, in the long term (10 years), compared with metabolically healthy people of normal weight. This conclusion was based on one subgroup analysis, and may be too strong given the less conclusive results from the other analyses. [emphasis mine]

Ouch.

In essence, the authors said Kramer’s conclusions were too strong for the evidence given.

But there’s more. See that “generally well-conducted review” part? Well, there’s more to it that is relevant to the quality of Kramer’s data. I’m including footnotes to comment on each paragraph:

A suitable meta-analysis was performed to synthesise the study results.1 The estimates were not adjusted for confounding factors, which increased the risk of bias due to confounding.2 There was considerable variation in some analyses and this could not be explained.3 Many of the analyses of metabolically healthy participants produced results that were not statistically significant, but the authors based their primary conclusion on one long-term subgroup analysis (four studies) that had significant results.4

Footnotes:

  1. They gathered their studies fine.
  2. Although all of the studies used controlled for age and most controlled for gender (among other relevant characteristics, like smoking), Kramer et al. used raw, unadjusted numbers. With an average subject age of 44 to 70 years, you would think age would affect CVD and ACM quite a bit. After all, CVD and ACM definitely have a bias against men. So not adjusting for any of these factors would heavily influence the results. As it was explained to me, it would be like comparing the risk of mortality for women and men and concluding that “even in the absence of metabolic abnormalities there is no healthy pattern of being male.”
  3. In a nutshell, meta-analyses are tricky because the studies you use can vary in a million little ways. Cochrane Reviews (an international organization that vets research much as York does) began using a measurement called I², which measures the heterogeneity, or differences, between studies. As Wikipedia explains, “Ideally, the studies whose results are being combined in the meta-analysis should all be undertaken in the same way and to the same experimental protocols: study heterogeneity is a term used to indicate that this ideal is not fully met.” Kramer uses I² in her study and defines anything above 50% as “moderate to high heterogeneity.” Well, all of the results from the metabolically unhealthy cohort had an I² of 95% or more. Kramer attempted to reduce the heterogeneity by removing studies one at a time, but couldn’t. This is a fairly big red flag. (I sketch how I² is computed in the code example a little further down.)
  4. This is the 10-year bit, which is odd and requires a bit more than a footnote, so I’m jumping down …

… here.

Okay, so Kramer et al. began their search for studies without restrictions on length of follow-up. In that analysis, the healthy obese group had an increased risk of 19%, but it was statistically insignificant, so you can’t say whether that result was due to chance or not. In the study itself, those results of the healthy obese group are reported as a relative risk (RR) of 1.19 and a confidence interval (CI) of 0.98 to 1.38. The RR is where you get the 19%, while the CI is the range that tells you whether the RR is statistically significant. An RR over 1 is an increased risk, and you need both ends of the CI to be over 1 as well for that increase to be statistically significant and reliable.
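If the RR-and-CI mechanics feel abstract, here is a minimal Python sketch of how a crude relative risk and its 95% confidence interval get computed from raw counts, the same kind of unadjusted calculation Kramer performed from sample sizes and event counts. The numbers below are made up for illustration; nothing here comes from the actual studies.

```python
import math

def relative_risk(events_exp, n_exp, events_ref, n_ref, z=1.96):
    """Crude (unadjusted) relative risk with a 95% CI on the log scale."""
    rr = (events_exp / n_exp) / (events_ref / n_ref)
    # Standard error of log(RR), from event counts and group sizes
    se = math.sqrt(1/events_exp - 1/n_exp + 1/events_ref - 1/n_ref)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts, not Kramer's actual data:
rr, lo, hi = relative_risk(120, 1000, 100, 1000)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
print("significant" if lo > 1 else "not significant")
```

With these invented counts you get an RR of 1.20 with a CI of about 0.93 to 1.54: an “increased risk” that flunks the significance test, exactly the situation Kramer’s eight-study result was in.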

After that first analysis, Kramer et al. removed any study that had a follow-up of less than 10 years, explaining, “This approach allows a longer time for the occurrence of events, which is the most appropriate strategy in evaluating a low-risk population.” Fair enough, but if that’s the case, then why not begin by searching for longer studies in the first place?

The result of this sub-group analysis was to push the confidence interval into significance, giving an RR of 1.24 and a CI of 1.02 to 1.55. It’s this 24% increased risk that Kramer touted as proof that “there is no healthy pattern of increased weight” and the media interpreted as “you can’t be fat and fit.”
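For the curious, here is roughly how that pooling works. This is a generic inverse-variance, random-effects (DerSimonian-Laird) sketch that combines per-study RRs and CIs; it is not Kramer’s exact pipeline (remember, she recalculated unadjusted RRs from raw counts), and the inputs below are invented. As a bonus, it computes the I² heterogeneity statistic from footnote 3.

```python
import math

def pool_random_effects(rrs, ci_los, ci_his, z=1.96):
    """DerSimonian-Laird random-effects pooling of relative risks,
    plus Cochran's Q and the I-squared heterogeneity statistic.

    Works on the log scale; each study's standard error is recovered
    from its 95% CI as (log(hi) - log(lo)) / (2 * 1.96).
    """
    logs = [math.log(r) for r in rrs]
    ses = [(math.log(h) - math.log(l)) / (2 * z) for l, h in zip(ci_los, ci_his)]
    w = [1.0 / se**2 for se in ses]                        # fixed-effect weights
    mean_fe = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi * (li - mean_fe)**2 for wi, li in zip(w, logs))  # Cochran's Q
    df = len(rrs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # heterogeneity, in %
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]            # random-effects weights
    mean_re = sum(wi * li for wi, li in zip(w_re, logs)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return (math.exp(mean_re),
            math.exp(mean_re - z * se_re),
            math.exp(mean_re + z * se_re),
            i2)

# Invented per-study numbers, purely to show the mechanics:
pooled, lo, hi, i2 = pool_random_effects(
    rrs=[1.10, 1.68, 0.95, 1.30],
    ci_los=[0.80, 1.17, 0.70, 0.90],
    ci_his=[1.50, 2.19, 1.30, 1.90],
)
print(f"pooled RR {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f}), I² = {i2:.0f}%")
```

The point to notice: the pooled CI depends entirely on which studies go into the pool, which is why dropping four of the eight studies could drag the lower bound from 0.98 to just above 1.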

But let’s take a closer look at the effect the 10-year analysis had on the available data.

[Image: Pushing the curve]

First and foremost, it cut the relevant studies in half, but it also gave one particular study the ability to nudge the numbers in the right direction. Above are the forest plots that compare the RRs and CIs of the two subsets of metabolically healthy obese subjects. The top one (C) is all eight studies, while the bottom one (D) is just the 10-year studies. The red arrow points to the baseline, which represents metabolically healthy normal weight subjects, the referent group. The sideways “I” lines are the confidence intervals, and if a line crosses the referent line, then that result is not significant. The dot in the middle of the CI line is the relative risk. The green box indicates Meigs et al., the study that had the highest relative risk of all eight and the one that achieved the greatest statistical significance. The blue box indicates the overall RR and CI, which was 19% for all eight studies and 24% for the 10-year subset.
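If you want to see the anatomy of a forest plot for yourself, here is a minimal matplotlib sketch using invented RRs and CIs (not the values from panels C and D):

```python
import matplotlib.pyplot as plt

# Invented RRs and 95% CIs, one row per study plus the pooled result
studies = ["Study A", "Study B", "Study C", "Overall"]
rr = [1.10, 1.68, 0.95, 1.24]
lo = [0.80, 1.17, 0.70, 1.02]
hi = [1.50, 2.19, 1.30, 1.55]

fig, ax = plt.subplots()
for y, (l, r, h) in enumerate(zip(lo, rr, hi)):
    ax.plot([l, h], [y, y], color="black")    # the sideways "I": the CI
    ax.plot(r, y, "o", color="black")         # the dot: the relative risk
ax.axvline(1.0, color="red", linestyle="--")  # the referent line (RR = 1)
ax.set_yticks(range(len(studies)))
ax.set_yticklabels(studies)
ax.invert_yaxis()                             # first study at the top
ax.set_xscale("log")                          # RRs are usually plotted on a log axis
ax.set_xlabel("Relative risk")
plt.show()
```

Any study whose “I” crosses the dashed line is, on its own, consistent with no increased risk.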

With half the studies excluded, Meigs was able to push the left end of the overall CI juuuuuuuuuuust over the referent line, giving Kramer what she needed to “disprove” healthy obesity. Based on these results, the Annals of Internal Medicine published an introductory paper describing the results as providing “strong evidence that healthy obesity is a myth.”

This is not strong evidence. Not even close. Again, if any reddit trolls made it this far (*grunt*words hard*grunt*) they may be saying, “What the hell do you know? You’re just some guy we intensely dislike.” Fair enough.

How about we see what Meigs et al. have to say about their results? First of all, Meigs found a 46% increased risk of cardiovascular disease in metabolically healthy obese subjects, but it did not reach statistical significance. And when you look at the charts Meigs provides, the results are stark.

[Image: Meigs Broke Down]

The three bars outlined in green are the metabolically healthy subjects, while the non-outlined ones are the metabolically unhealthy subjects; the red outline indicates cumulative incidence of type 2 diabetes, while the blue outline is for CVD. The darkest bars are obese subjects, and when you compare all these bars together, you begin to put that 46% increased risk into perspective. And if that perspective isn’t clear enough, you can refer to Meigs et al., who summarize, “The results of this study establish that there are BMI-metabolic risk subphenotypes in the community and that the metabolic consequences of elevated BMI are the critical factors that confer risk for type 2 diabetes or CVD associated with fatness.”

In other words, Meigs concluded that when you look at the whole picture, there really is a subset of fat, metabolically healthy people out there.

So, I asked Dr. Kramer why her conclusions were so much more negative than Meigs’ results.

It’s not so much negative as it is a way of interpreting that. Because we were able to put together a higher sample size and because of the numbers, if you look at the prevalence of healthy obesity in the population it’s a huge prevalence. If you look at 24% increase and you take the whole population with healthy obesity and you take the numbers and you see what this number tells late into the population, for my understanding this is a significant increase in events.

“But Shannon,” I hear the critics say, “I noticed that in the Meigs study, the results for metabolically healthy obese included an RR of 1.46 and a CI of 0.85–2.49, while Kramer’s paper said Meigs had an RR of 1.68 and a CI of 1.17-2.19. What is this madness?”

To that astute person I say bravo! You have caught another huge problem with the Kramer study. And you wouldn’t be alone. Recall that the York criticism said that Kramer did not adjust for confounding factors and you’ll begin to understand why the conclusions of Meigs and Kramer don’t match up. Among the six peer review letters responding to Kramer et al., Dr. Flegal herself pointed out this glaring discrepancy:

The studies that they summarized had all published relative risks that were adjusted for confounding factors such as age and sex. Kramer et al. did not use those published relative risks but instead calculated new unadjusted relative risks for each study from counts of sample sizes and events. As a result, the unadjusted relative risks for individual studies in the article by Kramer et al do not match the published relative risks in the original studies.

Seriously, not one RR that Kramer used matches the studies she drew from.

[Image: comparison chart]

What you have before you is a chart I created comparing the results from the eight studies side-by-side with the results Kramer published. Kramer’s numbers are bold, and when you go down the line it is striking how off these numbers are. So, again, I asked Dr. Kramer to explain why she would use unadjusted numbers when the results are so different from the original results.

This is very complex because when you do a study, of course, these things, age and sex, are confounders, but they are confounders actually if you have, for example, if you look at normal weight, overweight and obese, for example, and then in my obese population, the proportion of males, of men, is higher than in my overweight population. So then, maybe you can say, “Oh, okay, you are finding a higher instance of cardiovascular events in the obese population because you have a higher prevalence of males and we know that male gender is associated with increase in risk.” So that would be okay. Maybe it’s a confounder. But the fact is, in our study, all three groups, the prevalence of males between the groups are absolutely similar, there is no difference. There was no difference in the presence of males. There was no difference in age. The mean age in all this six categories that we look at were exactly the same.

Okay, but even if Kramer didn’t need to control for age or gender, she could have established that by doing a subgroup analysis. Either she didn’t do such an analysis at all, or she didn’t mention it in the paper. Ultimately it comes down to this: I, as the reader, should not have to sit there and guess why she neither adjusted for confounding or effect modification nor reported stratified results. She doesn’t even mention this issue in the limitations section, let alone address it in the paper with the appropriate supporting analysis.
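To make the confounding point concrete, here is a toy Python example using the standard Mantel-Haenszel estimator. The counts are entirely invented, nothing from Kramer’s data; they are rigged so that within each age band the two groups have identical risk, yet the crude (unadjusted) RR still shows a 75% “increase” purely because the exposed group skews older.

```python
def mh_relative_risk(strata):
    """Mantel-Haenszel relative risk pooled across strata.

    Each stratum is a tuple: (events_exp, n_exp, events_ref, n_ref).
    """
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return num / den

# Invented counts: identical risk within each age band, but the
# "exposed" group is mostly old and the reference group mostly young.
young = (10, 400, 20, 800)   # 2.5% risk in both groups
old   = (60, 600, 20, 200)   # 10% risk in both groups

crude = ((10 + 60) / 1000) / ((20 + 20) / 1000)
print(f"crude RR: {crude:.2f}")                                       # 1.75
print(f"age-adjusted (MH) RR: {mh_relative_risk([young, old]):.2f}")  # 1.00
```

That spurious 1.75 is the kind of artifact the York reviewers were worried about when they flagged the lack of adjustment. Kramer’s defense is that her groups had similar age and sex distributions; the problem is that readers have to take her word for it.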

According to another peer review letter from a team at the Johns Hopkins School of Public Health, the fact that the unhealthy cohort had such striking heterogeneity could be influenced by using raw numbers, explaining that “meta-regression analyses cannot overcome the limitations introduced by lack of within-study adjustment.”

Finally, as if all these issues weren’t enough, Kramer et al. did something that seems to be universally acknowledged as unacceptable within epidemiology: they combined results for CVD and ACM.

Why is this such a big deal? I put this question to the /r/epidemiology subreddit and got a few responses, but this answer in particular gave a thorough explanation of the problem. In short, there are enough issues with analyzing two studies that may use slightly different definitions of CVD; trying to merge CVD and ACM is like comparing apples to Voldemort. They aren’t even in the same universe of comparability. And because most epidemiologists would never attempt to merge CVD and ACM, an even bigger problem is that you can’t compare Kramer’s results to any other meta-analysis out there.

This single paper, this “strong evidence that healthy obesity is a myth” is an island of epidemiological research. Unless Kramer starts a fad of throwing diverse outcomes into the same pool, this is going to be the only meta-analysis of its kind in all of existence. It’s like the dodo bird of the research world.

This dodo will never find a mate.

Again, I asked Kramer why they chose to merge CVD and ACM, when this practice is unheard of.

It’s a pretty common approach, actually, if you look at studies looking at events like for cardiovascular drugs, for drugs for treatment of diabetes, this isn’t an unusual. Maybe it’s not the ideal. Maybe we should look only at mortality and only cardiovascular events, but the thing is that most of the studies combine it, so we couldn’t not distinguish these two outcomes. So that was one of the reasons we combined because several studies combined. So we didn’t have the number to do this distinctively. But yeah, if you are absolutely strict you can say that maybe it should be best to look at separate, but it’s not at all unusual approach if you look at studies on clinical trials, the combining events, mainly cardiovascular and mortality, it is a usual approach. But again, if we could have done it, we would. But because the studies combine and we didn’t have the numbers, we couldn’t show that separately. [emphasis mine]

That bold part? Demonstrably false. She seems to imply that some of the studies combined CVD and ACM. But refer back to the comparison chart I created of the source studies’ adjusted numbers and Kramer’s unadjusted numbers, and you will see along the left-hand side the measured outcomes: none of them combined CVD and ACM (the one titled MACE stands for major adverse cardiac events, another way of quantifying CVD).

When asked why she would combine CVD and ACM in a meta-analysis of prospective or cross-sectional data, her response was basically “Well, they do it in clinical trials.” But meta-analysis is not the same as a clinical trial, so her point is moot.

These issues I raise aren’t the only problems people had with this paper. The other five peer review letters found other problems with the analysis. For instance, Finelli et al. took exception to the use of BMI to gauge weight-related health rather than body composition; Sharma et al. showed how Kramer’s definition of “unhealthy” (i.e., two or more components of metabolic syndrome) may have been too lenient; Esser et al. explained that one-third of metabolically healthy obese people may be in a “transient state” of health, which is best measured by inflammatory markers; Samocha-Bonet et al. pointed out that the metabolically healthy obese group had half the risk of the metabolically unhealthy obese group, which suggests “metabolically healthy obese individuals are relatively protected from some adverse outcomes of obesity”; and Dr. Gerson Lesser showed that when the analysis was restricted to a 10-year follow-up, the death rates were “very low” for subjects who began the study in their mid-50s.

In short, all of these issues add up to Kramer’s paper being a great, big, hot analytical mess that is either the result of gross incompetence or willful malpractice. And when I see that one of the members of Kramer’s team, Ravi Retnakaran, has disclosed conflicts of interest with Merck and Novo Nordisk (PDF), it makes me inclined to lean toward the latter. Coincidentally, 17 days after Kramer’s study was published, Merck (which partners with Weight Watchers and has tried, and failed, to develop the next blockbuster weight loss drug) issued a press release announcing a new business that would provide “comprehensive, evidence-based weight management interventions for employers, hospitals, medical groups, health plans and patients.” Meanwhile, Novo Nordisk, traditionally known for its diabetes medicine, announced on the same day Kramer’s study was published that it would be filing for regulatory approval of a new weight loss drug with the U.S. Food & Drug Administration.

Knowing that Retnakaran gets grants and personal fees from both companies makes the whole thing even more unseemly. But I must emphasize, as I have in the past, that finding one shitty study does not justify outright cynicism toward all research. After all, the peer review process worked in this case. Between York and a half-dozen critics, studious readers who actually read Kramer were aware of these shortcomings. In an ideal world, this peer review process adds deeper context to the data and ensures that other researchers don’t take a glowing abstract for granted.

What Kramer’s analysis does justify is a robust skepticism over the way the media reports research, particularly over a charged issue like obesity. With this paper, and so many others in the past, we know that “science” journalists rarely dig deeper than the abstract. Again, it’s either gross incompetence or willful malpractice that any self-respecting journalist would publish a headline that says “You Can’t Be Fat and Fit” based on Kramer’s results. And yet, here we are.

In the end, I found only one media outlet that had adequate coverage of this study and it was, surprisingly, The Huffington Post. Anna Almendrala wrote this incisive piece giving Kramer’s study the skeptical eye it deserved, writing:

Major news outlets on TV, radio and online lit up Monday with news about a study claiming there was no such thing as “healthy obesity” … For example, NBC News’ headline said the study “Disputes Fat But Fit Claim,” while NPR went with the headline “Overweight And Healthy: A Combo That Looks Too Good To Be True.”

But, in fact, the study showed neither of those things.

Context matters, but the media insists on stripping obesity research of all context to continue pushing the obesity panic. If you want to be fat and fit, simply being metabolically healthy already confers a reduced risk; what the media should be explaining is that fat people who exercise can mitigate the effects of metabolic disorder.

At least on this point, Dr. Kramer and I agree.

UPDATE

I’ve gotten a lot of positive feedback on the analytical part of this post, but some pushback on my conclusion that it’s the result of “gross incompetence or willful malpractice” and my implication that Dr. Retnakaran’s relationship with Merck and Novo Nordisk had something to do with it. To be clear, I have no evidence of that influence and neither company contributed to this study, so I should not have implied as much. Also, the accusation of “willful malpractice” was needlessly hyperbolic. I was told that such an accusation in the research community would be akin to accusing an ordinary person of being a child molester.

All I can say is that these comments were borne of a frustration with the Annals of Internal Medicine for publishing this study along with an introduction that paints its results in glowing terms, when it seems pretty obvious to people who understand this research that there are glaring problems contained within. I’m also frustrated with the media for regurgitating the results without question and, in some cases, inflating those results to mean something they do not. Compare the media’s response to Kramer with the media’s predictable response to Flegal, which is always critical, always seeking opposing viewpoints and always minimizing her team’s findings. All of this frustration contributes to a cynicism within myself that I proactively try to stifle as I look at the research, but there are times like this that it surfaces and sensationalizes something that, on its own, is already sensational.

We don’t need conspiracy theories or unfounded accusations to refute shitty science and I am sorry that I tarnished this post at the end with both.

9 Comments
  1. purple peonies permalink
    April 10, 2014 5:18 pm

    this review was SO NEEDED. thank you for taking so much time to write this out.

    • April 10, 2014 10:14 pm

      Awesome! I’m so glad you enjoyed it. As I scream past 2,000, 3,000, 4,000 words, my biggest fear is tl;dr. Thank you for reading it all!

      Peace,
      Shannon

  2. April 10, 2014 6:47 pm

    While math normally makes my eyes glaze over, I’ve been digging into the research more and more lately. I found one paper recently that described the “super obese” as “massive” amongst other gems. I had to stop and pinch myself to make sure I wasn’t seeing things. I guess we really shouldn’t be surprised that a good portion of the scientific community is in someone’s pocket. Sad.

    • April 10, 2014 10:20 pm

      Hey reeneejune,
      I’m glad you enjoyed this. I’m not a science person either, so it has taken a lot of time and help from very smart people to get to the point where I can dissect a study competently. Keep at it because it’s the only way to really know what researchers are saying on the subject.

      As far as conflicts of interest go, I wouldn’t say that a “good portion” of the scientific community is in someone’s pocket. Largely because our government has divested from research and science, corporate and non-profit money has become, to a greater or lesser extent, a necessary evil. I think there are times when the COI is obvious, like an oil company producing a study on how great oil is. But most times it’s a scientist who acts as an adviser to companies. If I’m not mistaken, Dr. Arya Sharma, someone I greatly respect, is an adviser for Jenny Craig, while the great Dr. Steven Blair is working with (I believe) Weight Watchers and Coca-Cola. Context is important, and I plan to write a post about this eventually because it’s an important subject, and one that makes a lot of people understandably cynical about science.

      Thank you for reading and sharing your thoughts. I’m glad you enjoyed it. Keep digging! 🙂

      Peace,
      Shannon

  3. April 11, 2014 3:14 am

    It sounds as if the results were used to match the hypothesis come what may. Clearly the opinion was fat people are unfit and unhealthy. I don’t know why a study was even conducted if that was what they were thinking. Great research done by you though, perhaps they should take a look at what you have written to see how it’s done.

  4. Paul Ernsberger permalink
    April 11, 2014 12:40 pm

    “Willful malpractice” is not far from the mark. The use of unadjusted data that do not control for gender, race and age is especially egregious. This is because people gain weight as they age, so the obese group will always be a few years older than the “normal” group. Death and heart disease increase with age exponentially. Death rates double with every decade of aging; this is universal around the world, and age has the same dismal effect on every living creature down to the lowly worm. Failing to fully adjust for age demonstrates a willful bias. Removing all adjustment for age is very bold. A subtler tactic to watch for is inadequate adjustment for age. One favorite is to use age categories instead of precise age. Within a 10-year age range, for example, more fat people will be near 49 and more thin people will be near 40. Given that a 49-year-old has double or more the risk of becoming ill or dying compared to a 40-year-old, this is a sly way to stack the deck in favor of your foregone conclusion.

  5. Paul Ernsberger permalink
    April 11, 2014 12:51 pm

    Regarding the conflict of interest issue, drug company influence does not work the way you think it does. Drug company money does not work like a campaign contribution that influences a decision-maker’s direction in exchange for money. The way it works is that drug company analysts pore over the research and medical literature and find the most extreme fat bashing obesity experts. They are always on the lookout for anyone willing to distort reality to zealously promote fat panic. The drug companies then take these anti-fat fanatics and promote their careers by giving them grants, sponsoring speaking tours and publishing their opinions in sponsored journals. They make sure the most extreme fat bashers get prominent spots at medical conferences by sponsoring plenary lectures in the big ballroom by Dr. Fat Hater and then issuing press releases to promote Dr. F.H.’s views.

  6. April 12, 2014 3:17 am

    I hope that one day you will combine all the research you have done into a book. It would be a book that is long overdue.
    Before I gave up listening to KRFX in Denver for good, I emailed their fatphobic morning DJ’s, Lewis and Floorwax, asking them to consider inviting you as a guest on their morning show, as they have guests on a daily basis. I doubt they ever contacted you, but would instead rather keep talking about how fat is bad, mmmkay.
    I switched to KBCO instead. Their DJ’s mostly shut up and play the music.

  7. April 12, 2014 7:05 am

    I figured something was fishy with that study the moment I saw the headline on NPR (and we all know how much NPR loves fat people…..*rolleyes*). Thanks for all the work you do to point out entrenched bullshit like this.
