P2P Agenda: What the Huh?

Less than six weeks from the NIH P2P Workshop on ME/CFS, and we now have an agenda with speakers and talk titles.  So is it good or bad?

I reached out to the six ME/CFS members of the Working Groups for their thoughts on the agenda. Dr. Suzanne Vernon told me, “[T]he agenda uses a comparative effectiveness approach and format to address the key questions initially posed by the working group. I think they’ve invited a relatively good mix of speakers with a range of expertise and experience to help inform the panel.” Three members did not respond and one responded but did not comment. The sixth member of the Working Group would only comment off the record, and said that some of the lineup was solid but some of it was disappointing.

I agree that there are some bona fide experts on the agenda, speakers like Dr. Jason, Dr. Nacul, Dr. Klimas, and Dr. Snell. They are obvious choices for their expertise. But then there are some speakers who made me stop and think “What the what? Are you kidding me?”

The Problematic Cluster

The second main topic of the Workshop is titled: “Given the unique challenges of ME/CFS, how can we foster innovative research to enhance the development of treatments for patients?” This is a critical question, and brings to mind things like centralized data repositories, the case definition issue, drug repurposing, or systems networking and bioinformatics work like that of Dr. Gordon Broderick or Dr. Patrick McGowan.

Instead, this section focuses on studying diseases other than ME/CFS as a way to back into ME/CFS results. I’m fully in favor of out of the box thinking and learning from other scientific areas. Autoimmune and autoinflammatory diseases come to mind as one promising area. But that’s not what we’ll hear at the P2P Workshop.

The Speakers

The three speakers for this section are Dr. Dedra Buchwald, Dr. Dan Clauw, and Dr. Niloofar Afari. If anyone thought that psychosocial theories and functional somatic syndromes would not make an appearance at the Workshop, I’m afraid I must correct your false workshop belief.

Dr. Dedra Buchwald has been a CFS researcher for years, heading one of the ill-fated NIH CFS Research Centers more than a decade ago. She has built a large twin registry at the University of Washington, and has developed broad expertise in Native American health issues and studies. But her work on chronic fatigue syndrome is controversial.

Many of Dr. Buchwald’s publications focus on the predisposing role of trauma in CFS, the high rates of depression and anxiety in CFS, and an integrative view that blends CFS with FM and takes a broad view of the illness.

As just one example, Dr. Buchwald co-authored a topic summary on CFS with Dr. Craig Sawchuk (who did his postdoc work with Dr. Buchwald and was co-director of the Harborview Chronic Fatigue Clinic). This summary includes many elements from the integrative or psychosocial model of CFS. Under risk factors, the guide states, “Premorbid psychiatric illness can increase risk for CFS, including personality traits such as ‘action-proneness’ and emotional instability,” and “Historical patterns of persistent over- and underactivity levels may confer risk for CFS in adulthood.” For treatment, pharmacotherapies are not recommended, except for comorbid disorders. “CBT and graduated exercise therapy have the greatest cost effectiveness and probability of yielding symptom and functional improvements.” Exercise begins with 5-minute periods of aerobic exercise five times a week. CBT is “designed to modify thoughts, behaviors, and environmental contingencies that are maintaining or exacerbating symptoms and impairments.” Finally, “Treatment-resistant patients should be referred to a mental health professional.”

These kinds of recommendations are familiar to every patient with ME/CFS. It is a view of the disease based on disordered patient perception and secondary deconditioning. This view persists in the face of thousands of papers to the contrary. All the contrary data – and there is a lot of it – is ignored.

Dr. Dan Clauw is Director of the Chronic Pain and Fatigue Research Center at the University of Michigan. His focus has been fibromyalgia and the “chronic multisymptom illnesses,” such as CFS and GWI, that accompany it. Dr. Clauw treats FM with a balanced approach using both medication and exercise/CBT. He has extensive connections to pharmaceutical companies, including funding from Eli Lilly to create FibroGuide, a CBT program for people with FM.

Dr. Clauw spoke at the P2P Workshop on Opioids and Chronic Pain at the end of September, and some of his comments were surprising, to say the least. Using the term “fibromyalgianess,” Dr. Clauw favors using a self-report questionnaire rather than “a stupid tender point count.” Acknowledging that he was being provocative, Dr. Clauw said, “I view opioids as the Kardashians in the chronic pain field. And the reason I say that is that I think they receive an undue amount of attention. We should really have just stopped the meeting after we went through all those slides showing that they’re not efficacious in chronic pain.” Then there was this gem:

I also think we can probably prevent the transition from chronic nociceptive pain to chronic centralized pain. I think that that’s something that we likely…. I can with a reasonable degree of accuracy, I can look at someone’s medical record at age 20 and tell you if they’re going to develop fibromyalgia at age 40. They would already have 2 or 3 regional pain syndromes by their early 20s and they’re marching down this course and we don’t do anything right now to try to do primary or secondary prevention and I think  most of us feel that fibromyalgia’s just sort of the end of a continuum.

I appreciate that Dr. Clauw is saying that early and adequate treatment of pain can prevent that pain from becoming chronic and refractory to treatment. But claiming he can look at the chart of a 20-year-old and predict whether they will have FM? Really? Do you know what my chart looked like up until age 26? Perfectly clean, healthy, with a few short acute infections that fully resolved, and absolutely no other medical or psychological problems of any kind. Then I got sick with an apparent viral infection and here I am 20 years later, in my 40s, with ME/CFS and FM and POTS. The claim that this could be predicted from my chart at age 20 is ridiculous.

Dr. Niloofar Afari is a psychologist who was Associate Director of Dr. Buchwald’s center from 1998 to 2006, and is now at UC San Diego. Much of her work has focused on twin registry projects and treating a number of pain conditions. One of her more recent papers is an examination of the association between psychological trauma and functional somatic syndromes like CFS and FM. The study found that people with trauma were 2.7 times more likely to have a functional somatic syndrome, and CFS was included in that category.

Drs. Buchwald, Clauw, and Afari overlap to a great degree: they have co-authored papers and worked together. Their views on ME/CFS are at the opposite end of the spectrum from researchers like Dr. Klimas, Dr. Jason or Dr. Montoya. Having an opposite view does not automatically disqualify them from speaking at the Workshop, in my opinion. But their presence on the agenda, especially in the context of fostering innovative research, is a huge red flag.

We already have a systematic evidence review that combined eight ME and CFS case definitions, despite stated misgivings that it could include people without the disease. Layered on that, we have this attempt to create innovative research by looking at overlapping and co-morbid conditions – but only functional somatic syndromes. The Workshop is not going to look at the co-morbidities of orthostatic intolerance, Lyme disease, reactivated viral infections, cancers, or Ehlers-Danlos syndrome. Nor is it going to draw clean lines around ME/CFS for research purposes. If anything, this paves the way for broader cohorts. ME/CFS research and the care of ME/CFS patients will not be advanced through this dangerous sinkhole of examining trauma or lumping our already overly broad category into the shapeless mass of “fibromyalgianess.” Studying so-called functional somatic syndromes is not innovative research; this approach is outdated, and brings confusion instead of clarity to ME/CFS research.

The Overlap

There’s another layer of weird overlap, in addition to the research/publishing overlap of Buchwald, Clauw and Afari.

In 2012, NIH hosted a Workshop on Overlapping Chronic Pain Conditions. The meeting was co-chaired by Dr. Clauw and Dr. Beth Unger. Among the talks were:

  • Overview of the meeting and chronic overlapping pain conditions by Dr. Clauw
  • Understanding chronic overlapping pain conditions: Lessons learned from twin studies by Dr. Afari
  • What has the CDC’s ME/CFS program taught us about overlapping conditions? by Dr. Unger
  • Overlapping pain conditions: Disparities and special populations by Dr. Carmen Green (University of Michigan)

That last talk is important, because Dr. Carmen Green is the chair of our P2P Panel. Dr. Green is on the faculty of the University of Michigan, as is Dr. Clauw. She is also a member of the Interagency Pain Research Coordinating Committee (IPRCC). Dr. Green and Dr. Unger serve together on the subcommittee on Public Health Disparities for the IPRCC.

Does this seem weird? One section of the P2P Workshop is being given to Buchwald, Clauw and Afari, who have published together and share a particular disease view of ME/CFS. Then we throw in this 2012 meeting, which Dr. Unger co-chaired with Dr. Clauw, and where she spoke about what CDC’s program on ME/CFS can teach us about overlapping conditions. Also presenting at that meeting was Dr. Green, who not only serves on the IPRCC, but also serves on a subcommittee of IPRCC with Dr. Unger.

ME/CFS is a small field, and there is going to be overlap among researchers, meetings, and programs. That overlap does not in and of itself disqualify anyone from participating in other activities like this Workshop. But recall that the P2P Panel is selected by NIH to be ME/CFS bias-free. The idea is to form an impartial “jury” that has not researched or treated people with ME/CFS. We already have a high obstacle given that 85% of doctors believe CFS is wholly or partially psychological, so finding a blank slate group of experts was always going to be a challenge.

We can’t know for sure what Dr. Green’s views are, especially as the Panel selection process has been completely hidden from the public. She may or may not share views with Unger, Clauw, Afari or Buchwald. Maybe she thinks they’re wrong, or maybe she is keeping an open mind and will listen to all the speakers. But the overlaps here are a cause for concern.

Now someone could very well say, Hey, you’re not complaining about the overlap between Klimas and Natelson on the IOM committee and this meeting. Or the connections between Dr. Jason, Dr. Taylor and Ms. Brown. But Dr. Green is the P2P Panel Chair, and so her connections with some speakers who view CFS as part of a broader morass of fatigue and pain conditions are a legitimate concern.

Old Guard

This looks like old guard NIH. That “innovative research” section represents the same disease view that filled the CFS Special Emphasis Panel with experts on TMJ and FM. It is the disease view that refuses to acknowledge the muddle of case definitions, including the disproportionate selection of people with primary depression. This is the disease view that, as stated on the draft agenda I received through FOIA, has Dr. Susan Maier speaking about “overwhelming fatigue or malaise as a public health problem.”

The first part of the equation is the fundamentally flawed evidence review that ignored all of the most promising biomarker and treatment research. Add to that a Workshop with an entire “innovative research” section built on the old view of CFS as a condition perpetuated by deconditioning and emotional problems. What can we expect the sum of those two factors to be? Speakers like Klimas, Snell, Jason and others have an uphill climb to present all the evidence that contradicts this old guard view and to convince the unknown Panelists that it is time for NIH to move forward.


Illness Beliefs (or Why I Am Not an ME/CFS Activist)

Today, Joe Landson shares his thoughts on how false illness beliefs (or even cognitive bias) among scientists are holding our field back. Joe says it is time to tear down the walls and think horizontally – to the horizon, even. Let’s look at what is actually in front of us, and not what we expect to see.

I am not an ME/CFS activist because of incorrect illness beliefs. Yes, incorrect illness beliefs – as I see it, they’re the main challenge in ME/CFS. No, no, not our beliefs as patients – the Wessely School’s notions that our beliefs make us sick are absurd, and always were. No, I mean incorrect medical ideas generally, specifically ideas of what makes an illness ‘real’ or not, or what makes an illness at all.

Let me explain. No, there is too much, let me sum up. For a very long time, doctors and researchers defined an illness as ‘organic’ by the tissue damage they could see. For example, the tumors they could see, feel and biopsy made cancer ‘real’. It was damage to an organ; hence, organic.

However, this approach has had some treatment limitations. Cut the tumor out; more grow back. Eventually, medicos devised treatments to shrink the tumors and make them less likely to return: namely, chemotherapy and radiation.

For decades, treatment of cancer and other organic diseases generally improved. However, treatment of those other diseases, the ones without detectable organic damage that were labeled functional disorders, was decidedly mixed. It’s fair to say that treatment philosophies for those invisible functional disorders often (though not always) featured extraordinary contempt for the illness and the patient. From Dr. Lewis Yealland’s electrocution of World War One shell-shock patients to the Wessely School’s use of forced exercise for ME/CFS, it seems that contempt generally wins. Contempt is quick and seems to produce clear results… much the same way that cutting out the tumor seemed to ‘cure’ the cancer. Except it didn’t, and still doesn’t.

Meanwhile, cancer treatment has evolved. As I write this, the Food and Drug Administration (FDA) is fast-tracking radical new immunotherapy for cancer. Immunotherapy doesn’t affect the organic damage directly; rather, it blocks, damps or corrects the immune signals that encourage the tumors or other organic damage to occur. It’s about the signals – the signals that tell the organic damage to start or stop.

This raises the question: What about immune signals that don’t produce organic damage? What about illnesses with a chronic pattern of bad immune signals, but no apparent organic damage at all? What if the signal pattern is the damage?

This mental leap surmounts the wall medicos have built between organic diseases and functional disorders. Both types of illness can potentially be treated the same way – perhaps even with the same drugs, if the Rituximab studies are any indication. Moreover, doctors now can sometimes detect and treat disease before the organic damage ever happens.

This shift in medical beliefs is going on all around us, but not for us, because most of the official gatekeepers of ME/CFS are working so very hard to keep this illness category locked in place, endlessly describing empirical symptoms instead of genuinely investigating their underlying mechanisms. In the constant balancing act in life between control and progress, they side with control. Rather than waste time arguing with these gatekeepers, I’d like to do an end run around them, and point out that all these invisible functional disorders are ‘organic’, if we only change our minds and amend what we mean by ‘organic’. Organic can be a pattern of immune signals rather than the organ damage of yesteryear. Similarly I think our well-meaning friends who insist our bad signals must be located in our brains – organ damage all over again – are thinking too narrowly. Certainly our brains are deeply affected, but that doesn’t mean the bad signals start or end there.

When I grow up, I want to be a bomb-throwing medical anarchist. For NSA-types scanning this blog, no, I don’t mean actual bombs. I want to blow up medical ideas of what ‘organic disease’ really means. I want to explode the borders of medicine – the borders between organic disease and functional disorders; the borders between medicine and psychiatry generally. But just to keep you nervous, internet police, I can and will say that in Arabic if I have to!

This is a long way of explaining why I’m not an ME/CFS activist, per se. I think trying to maintain ME/CFS as a category is a narrow goal and a rigged game – rigged because our government seems dedicated to ‘evidence-based’ approaches to ME/CFS, rather than re-imagining the evidence we have. I think arguing over this or that definition of ME, or CFS, is a poor use of our time and energy, because none of the definitions extant define the immune signals that I suspect (but can’t prove yet) make us sick. To me, all the tiny, empirical functional categories, from bipolar disorder to Morgellons, are empty shells of outdated thinking. In pure immune research, someone is finding those signals as we speak – it’s just not labeled ME/CFS research, or Morgellons or bipolar research. At least, not yet.

We should seek this research out, celebrate and promote it. We should do as some are already doing, and point out both the sorry current state of – and the immense future possibilities for – almost all the invisible illnesses. Most of all we should see and portray the invisible illnesses as part of a continuum of immune signaling disorders, beyond their separate, and inherently unequal, empirical definitions.

 


Comments on P2P Systematic Evidence Review

After four weeks of intense work, a group of advocates has submitted forty pages of comments on the P2P systematic evidence review. We published a summary of our comments last week. If you want to read the full document, you can view it in two pieces:

Part One addresses the issues with the Evidence Review’s base assumption that all CFS and ME definitions represent the same disease or set of closely related diseases, and the analysis and conclusions drawn regarding diagnostic methods, accuracy and concordance of definitions, subgroups and diagnostic harms.

Part Two addresses the analysis and conclusions drawn regarding treatment effects and harms; and issues related to applicability, reliability and future research directions.

I was proud to work with the following advocates who join me in making these comments:

  • Mary Dimmock
  • Claudia Goodell, M.S.
  • Denise Lopez-Majano, Speak Up About ME
  • Lori Chapo Kroger, R.N., PANDORA Org CEO and President
  • Pat Fero, MEPD, President, Wisconsin ME & CFS Association, INC.
  • Darlene Fentner
  • Leonard Goodell, Jr.
  • Alan Gurwitt, M.D.
  • Wilhelmina D. Jenkins
  • Joseph Landson, M.S.
  • Margaret Lauritson-Lada
  • Jadwiga Lopez-Majano
  • Mike Munoz, PANDORA Org Board of Directors
  • Matina Nicholson
  • Charmian Proskauer
  • Mary M. Schweitzer, Ph.D.
  • Amy L. Squires, MPA
  • Susan Thomas
  • Erica Verrillo, Author

There is still time for you to submit your own comments. Feel free to draw inspiration from the work we publish here today, or the other posts I’ve published over the last several weeks.

 


Evidence Review Comments Preview

This post comes via Mary Dimmock, Claudia Goodell, Denise Lopez-Majano, and myself. You are welcome to publish it on your site with attribution and a link back to this post. You are also welcome to use this (and other material we’ve gathered) as a framework for your own comments on the draft evidence review - due October 20th.

It’s been a challenging few weeks, digesting and analyzing the AHRQ Draft Systematic Evidence Review on Diagnosis and Treatment of ME/CFS.  We continue to be deeply concerned about the many flaws in the review, in terms of both the approach it took and how it applied the study protocol.

Our comments on the Review will reflect our significant concerns about how the Evidence Review was conducted; the conclusions it draws about diagnosis, subgroups, treatments, and harms; and the risk of undue harm that this report creates for patients with ME. We believe a final version should not be published until these scientific issues are resolved.

Most fundamentally, the Evidence Review is grounded in the flawed assumption that eight CFS and ME definitions all represent the same group of patients that are appropriately studied and treated as a single entity or group of closely related entities. Guided by that assumption, this Evidence Review draws conclusions on subgroups, diagnostics, treatments and harms for all CFS and ME patients based on studies done in any of these eight definitions. In doing so, the Evidence Review disregards its own concerns, as well as the substantial body of evidence that these definitions do not all represent the same disease and that the ME definitions are associated with distinguishing biological pathologies. It is unscientific, illogical and risky to lump disparate patients together without regard to substantive differences in their underlying conditions.

Compounding this flawed assumption are the a priori choices in the Review Protocol that focused on a narrower set of questions than originally planned and that applied restrictive inclusion and exclusion criteria. As a result, evidence that would have refuted the flawed starting assumption or that was required to accurately answer the questions was never considered. Some examples of how these assumptions and protocol choices negatively impacted this Evidence Review include:

  • Evidence about the significant differences in patient populations and in the unreliability and inaccuracy of some of these definitions was ignored and/or dismissed. This includes: Dr. Leonard Jason’s work undermining the Reeves Empirical definition; a study that shows the instability of the Fukuda definition over time in the same patients; studies demonstrating that Fukuda and Reeves encompass different populations; and differences in inclusion and exclusion criteria, especially regarding PEM and psychological disorders.
  • Diagnostic methods were assessed without first establishing a valid reference standard. Since there is no gold reference standard, each definition was allowed to stand as its own reference standard without demonstrating it was a valid reference.
  • Critical biomarker and cardiopulmonary studies, some of which are in clinical use today, were ignored because they were judged to be intended to address etiology, regardless of the importance of the data. This included most of Dr. Snell’s and Dr. Keller’s work on two day CPET, Dr. Cook’s functional imaging studies, Dr. Gordon Broderick’s systems networking studies, Dr. Klimas’s and Dr. Fletcher’s work on NK cells and immune function, and all of the autonomic tests. None of it was considered.
  • Treatment outcomes associated with all symptoms except fatigue were disregarded, potentially resulting in a slanted view of treatment effectiveness and harm. This decision excluded Dr. Lerner’s antiviral work, as well as entire classes of pain medications, antidepressants, anti-inflammatories, immune modulators, sleep treatments and more. If the treatment study looked at changes in objective measures like cardiac function or viral titers, it was excluded. If the treatment study looked at outcomes for a symptom other than fatigue, it was excluded.
  • Treatment trials that were shorter than 12 weeks were excluded, even if the treatment duration was therapeutically appropriate. The big exclusion here was the rituximab trial; despite following patients for 12 months, it was excluded because administration of rituximab was not continuous for 12 weeks (even though rituximab is not approved for 12 weeks continuous administration in ANY disease). Many other medication trials were also excluded for not meeting the 12 week mark.
  • Counseling and CBT treatment trials were inappropriately pooled without regard for the vast differences in therapeutic intent across these trials. This meant that CBT treatments aimed at correcting false illness beliefs were lumped together with pacing and supportive counseling studies, and treated as equivalent.
  • Conclusions about treatment effects and harms failed to consider what is known about ME and its likely response to the therapies being recommended. This means that the PACE (an Oxford study) results for CBT and GET were not only accepted (despite the many flaws in those data), but were determined to be broadly applicable to people meeting any of the case definitions. Data on the abnormal physiological response to exercise in ME patients were excluded, and so the Review did not conclude that CBT and GET could be harmful to these patients (although it did allow it might be possible).
  • The Evidence Review states that its findings are applicable to all patients meeting any CFS or ME definition, regardless of the case definition used in a particular study.

The issues with this Evidence Review are substantial in number, magnitude and extent. At its root is the assumption that any case definition is as good as the rest, and that studies done on one patient population are applicable to every other patient population, despite the significant and objective differences among these patients. The failure to differentiate between patients with the symptom of subjective unexplained fatigue on the one hand, and objective immunological, neurological and metabolic dysfunction on the other, calls into question the entire Evidence Review and all conclusions made about diagnostic methods, the nature of this disease and its subgroups, the benefits and harms of treatment, and the future directions for research.

As the Evidence Review states, the final version of this report may be used in the development of clinical practice guidelines or as a basis for reimbursement and coverage policies. It will also be used in the P2P Workshop and in driving NIH’s research strategy. Given the likelihood of those uses and the Evidence Review’s claim of broad applicability to all CFS and ME patients, the flaws within this report create an undue risk of significant harm to patients with ME and will likely confound research for years to come. These issues must be addressed before this Evidence Review is issued in its final form.

 


They Know What They’re Doing (Not)

This post comes via Mary Dimmock, with assistance from Claudia Goodell, Denise Lopez-Majano, and myself. You are welcome to publish it on your site with attribution to Mary Dimmock.

 

Last week, Jennie Spotila and Erica Verrillo posted summaries of just some of the issues with AHRQ’s Draft Systematic Evidence Review, conducted for P2P.

Jennie and Erica highlighted serious and sometimes insurmountable flaws with this Review, including:

  • The failure to be clear and specific about what disease was being studied.
  • The acceptance of 8 disparate ME or CFS definitions as equivalent in spite of dramatic differences in inclusion and exclusion criteria.
  • The bad science reflected in citing Oxford’s flaws and then using Oxford studies anyway.
  • The well-known problems with the PACE trial.
  • The flawed process that used non-experts on such a controversial and conflicted area.
  • Flawed search methods that focused on fatigue.
  • Outright errors in some of the basic information in the report and apparent inconsistencies in how inclusion criteria were applied.
  • Poorly designed and imprecise review questions.
  • Misinterpretation of cited literature.

In this post, I will describe several additional key problems with the AHRQ Evidence Review.

Keep in mind that comments must be submitted by October 20, 2014. Directions for doing so are at the end of this post.

We Don’t Need No Stinking Diagnostic Gold Standard

Best practices for diagnostic method reviews state that a diagnostic gold standard is required as the benchmark. But there is no agreed upon diagnostic gold standard for this disease, and the Review acknowledges this. So what did the Evidence Review do? The Review allowed any of 8 disparate CFS or ME definitions to be used as the gold standard and then evaluated diagnostic methods against and across the 8 definitions. But when a definition does not accurately reflect the disease being studied, that definition cannot be used as the standard. And when the 8 disparate definitions do not describe the same disease, you cannot draw conclusions about diagnostic methods across them.
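To see why that matters, here is a minimal sketch in Python (all patient counts are invented for illustration, not drawn from any study): if the “reference standard” is itself a broad definition that labels people without the disease as cases, even a test that perfectly identifies the true disease will look inaccurate when measured against it.

```python
# Purely illustrative sketch; the cohort sizes below are invented.
# Point: diagnostic accuracy is only meaningful against a valid reference
# standard. If the "reference" is a broad case definition that also labels
# non-ME/CFS patients as cases, a test that perfectly identifies the true
# disease still appears insensitive.

def sensitivity_specificity(test, reference):
    tp = sum(t and r for t, r in zip(test, reference))
    fn = sum((not t) and r for t, r in zip(test, reference))
    tn = sum((not t) and (not r) for t, r in zip(test, reference))
    fp = sum(t and (not r) for t, r in zip(test, reference))
    return tp / (tp + fn), tn / (tn + fp)

# 100 hypothetical patients: 30 with the true disease, 70 without.
has_true_disease = [True] * 30 + [False] * 70

# A broad "reference standard" that also sweeps in 40 patients who only
# have unexplained chronic fatigue.
broad_reference = [True] * 70 + [False] * 30

# A test that exactly matches the true disease...
perfect_test = list(has_true_disease)

sens, spec = sensitivity_specificity(perfect_test, broad_reference)
print(f"apparent sensitivity: {sens:.2f}")  # 0.43 -- looks like a poor test
print(f"apparent specificity: {spec:.2f}")  # 1.00
```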

What makes this worse is that the reviewers recognized the importance of PEM but failed to consider the implications of Fukuda’s and Oxford’s failure to require it. The reviewers also excluded, ignored or downplayed substantial evidence demonstrating that some of these definitions could not be applied consistently, as CDC’s Dr. Reeves demonstrated about Fukuda.

Beyond this, some diagnostic studies were excluded because they did not use the “right” statistics or because the reviewer judged the studies to be “etiological” studies, not diagnostic methods studies. Was NK-Cell function eliminated because it was an etiological study? Was Dr. Snell’s study on the discriminative value of CPET excluded because it used the wrong statistics? And all studies before 1988 were excluded. These inclusion/exclusion choices shaped what evidence was considered and what conclusions were drawn.

Erica pointed out that the Review misinterpreted some of the papers expressing harms associated with a diagnosis. The Review failed to acknowledge the relief and value of finally getting a diagnosis, particularly from a supportive doctor. The harm is not from receiving the diagnostic label, but rather from the subsequent reactions of most healthcare providers. At the same time, the Review did not consider other harms like Dr. Newton’s study of patients with other diseases being diagnosed with “CFS” or another study finding some MS patients were first misdiagnosed with CFS. The Review also failed to acknowledge the harm that patients face if they are given harmful treatments out of a belief that CFS is really a psychological or behavioral problem.

The Review is rife with problems: Failing to ask whether all definitions represent the same disease. Using any definition as the diagnostic gold standard against which to assess any diagnostic method. Excluding some of the most important ME studies. It is no surprise, then, that the Review concluded that no definition had proven superior and that there are no accepted diagnostic methods.

But remarkably, reviewers felt that there was sufficient evidence to state that those patients who meet CCC and ME-ICC criteria were not a separate group but rather a subgroup with more severe symptoms and functional limitations. By starting with the assumption that all 8 definitions encompass the same disease, this characterization of CCC and ICC patients was a foregone conclusion.

But Don’t Worry, These Treatment Trials Look Fine

You would think that at this point in the process, someone would stand up and ask about the scientific validity of comparing treatments across these definitions. After all, the Review acknowledged that Oxford can include patients with other causes of the symptom of chronic fatigue. But no, the Evidence Review continued on to compare treatments across definitions regardless of the patient population selected. Would we ever evaluate treatments for cancer patients by first throwing in studies with fatigued patients? The assessment of treatments was flawed from the start.

But the problems were then compounded by how the Review was conducted. The Review focused on subjective measures like general function, quality of life and fatigue, not objective measures like physical performance or activity levels. In addition, the Review explicitly decided to focus on changes in the symptom of fatigue, not PEM, pain or any other symptom. Quality issues with individual studies were either not considered or ignored. Counseling and CBT studies were all lumped into one treatment group, without consideration of the dramatic difference in therapeutic intent of the two. Some important studies like Rituxan were not considered because the treatment duration was considered too short, regardless of whether it was therapeutically appropriate.

And finally, the Review never questioned whether the disease theories underlying these treatments were applicable across all definitions. Is it really reasonable to expect that a disease that responds to Rituxan or Ampligen is going to also respond to therapies that reverse the patient’s “false illness beliefs” and deconditioning? Of course not.

If their own conclusions on the diagnostic methods and the problems with the Oxford definition were not enough to make them stop, the vast differences in disease theories and therapeutic mechanism of action should have made the reviewers step back and raise red flags.

At the Root of It All

This Review brings into sharp relief the widespread confusion on the nature of ME and the inappropriateness of having non-experts attempt to unravel a controversial and conflicting evidence base about which they know nothing.

But just as importantly, this Review speaks volumes about the paltry funding and institutional neglect of ME reflected in the fact that the study could find only 28 diagnostic studies and 9 medication studies to consider from the last 26 years. This Review speaks volumes about the institutional mishandling that fostered the proliferation of disparate and sometimes overly broad definitions, all branded with the same “CFS” label. The Review speaks volumes about the institutional bias that resulted in the biggest, most expensive and greatest number of treatment trials being those that studied behavioral and psychological pathology for a disease long proven to be the result of organic pathology.

This institutional neglect, mishandling and bias have brought us to where we are today. That the Evidence Review failed to recognize and acknowledge those issues is stunning.

Shout Out Your Protest!

This Evidence Review is due to be published in final format before the P2P workshop and it will affect our lives for years to come. Make your concerns known now.

  1. Submit public comments on the Evidence Review to the AHRQ website by October 20.
  2. Contact HHS and Congressional leaders with your concerns about the Evidence Review, the P2P Workshop and HHS’ overall handling of this disease. Erica Verillo’s recent post provides ideas and links for how to do this.

The following information provides additional background to prepare your comments:

However you choose to protest, make your concerns known!

 


A Review of the P2P Systematic Review

The draft systematic evidence review on the Diagnosis and Treatment of ME/CFS was published online last week. It’s a monster – 416 pages in total. I know many ME/CFS patients may not be able to read this report, so in this post I’m going to focus on three things: the purpose of the report, the lumping of multiple case definitions, and the high quality rating given to the PACE trial. If you read nothing else about this systematic review, then these are the biggest takeaway messages.

The Purpose of the Systematic Review

NIH requested the review for the purposes of the P2P Workshop, and the Agency for Healthcare Research and Quality contracted with Oregon Health & Science University to perform the review for about $350,000.

The primary purpose of the review is to serve as the cornerstone of knowledge for the P2P Panel. The Panel will be made up entirely of non-ME/CFS experts. In order to give them some knowledge base for the Workshop presentations, the Panel will receive this review and a presentation by the review authors (behind closed doors). Until the Workshop itself, this review will be the Panel’s largest source of information about ME/CFS.

But that is not the only use for this report. AHRQ systematic reviews are frequently published in summary form in peer reviewed journals, as was the 2001 CFS review. The report will be available online, and will be given great credence simply because it is an AHRQ systematic review. The conclusions of this review – including the quality rating of the PACE trial – will be entrenched for years to come.

You can expect to see this review again and again and again. In the short term, this review will be the education given to the P2P Panel of non-ME/CFS experts in advance of the Workshop. But the review will also be published, cited, and relied upon by others as a definitive summary of the state of the science on diagnosing and treating ME/CFS.

Case Definition: I Told You So

When the protocol for this systematic review was published in May 2014, I warned that the review was going to lump all case definitions together, including the Oxford definition. After analyzing the review protocol and the Workshop agenda, Mary Dimmock and I wrote that the entire P2P enterprise was based on the assumption that all the case definitions described the same single disease, albeit in different ways, and that this assumption put the entire effort at risk. Some people may have hoped that a systematic review would uncover how different Oxford and Canadian Consensus Criteria patients were, and would lead to a statement to that effect.

Unfortunately, Mary and I were correct.

The systematic review considered eight case definitions, including Oxford, Fukuda, Canadian, Reeves Empirical, and the International Consensus Criteria, and treated them as describing a single patient population. The reviewers lumped all these patient cohorts together, and then tried to determine what was effective in diagnosing and treating this diverse group. The review offers no evidence to support this assumption, beyond a focus on the unifying feature of fatigue.

What I find particularly disturbing is that the review did acknowledge that maybe Oxford didn’t belong in the group:

We elected to include trials using any pre-defined case definition but recognize that some of the earlier criteria, in particular the Oxford (Sharpe, 1991) criteria, could include patients with 6 months of unexplained fatigue and no other features of ME/CFS. This has the potential of inappropriately including patients that would not otherwise be diagnosed with ME/CFS and may provide misleading results. (p. ES-29, emphasis added)

But then they did it anyway.


This is inexplicably bad science. How can they acknowledge that Oxford patients may not have ME/CFS and acknowledge that including them may provide misleading results, and then include them anyway? Is it just because Oxford papers claim to be about CFS and include people with medically unexplained fatigue? The systematic review authors clearly believed that this was a sufficient minimum standard for inclusion in analysis, despite the acknowledged risk that it could produce misleading results.

I will have a lot more to say on this topic and the problems in the review’s analysis. For now, the bottom line takeaway message is that the systematic review combined all the case definitions, including Oxford, and declared them to represent a single disease entity based on medically unexplained fatigue.

PACE is Ace

One of the dangers of the review’s inclusion of the Oxford definition and related studies was the risk that PACE would be highly regarded. And that is exactly what happened.

The PACE trial is one of seven treatment studies (out of a total of thirty-six) to receive the “Good” rating, which has a specific technical meaning in this context (Appendix E). In the systematic review, a randomized controlled trial is “Good” if it includes comparable groups, uses reliable and valid measurement instruments, considers important outcomes, and uses an intention-to-treat analysis. I’m certainly no expert in these issues, but I can spot a couple of problems.

First of all, the PACE trial may have used comparable groups within the study, but that internal consistency is different from whether the PACE cohort was comparable to other ME/CFS patients. The systematic review already acknowledged that the Oxford cohort may include people who do not actually have ME/CFS, and in my opinion that is the comparable group that matters.

In terms of important outcomes, the systematic review focused on patient-centered outcomes related to overall function, quality of life, ability to work and measures of fatigue. Yet there is no discussion or acknowledgement that patient performance on a 6-minute walking test at the end of PACE showed that they remained severely impaired. There is also no acknowledgement that a patient could enter PACE with an SF-36 score of 65, leave the trial with a score of 60, and be counted as recovered. That is because so many changes were made to the study in post-hoc analysis, including a change to the measures of recovery. Incredibly, the paper in which the PACE authors admit to those post-hoc changes is not cited in the systematic review. It is also important to point out that much of the discussion of the PACE flaws has occurred in Letters to the Editor and other types of publications, many of which were wholly excluded from the systematic review.
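To make that last point concrete, here is a minimal sketch using only the two SF-36 thresholds implied by the example above (an entry criterion of 65 or below and a post-hoc “normal range” threshold of 60 or above); the actual PACE recovery definition had additional criteria that this deliberately ignores.

```python
# Minimal sketch of the SF-36 overlap described above. The thresholds come
# from the example in the text; the full PACE recovery definition included
# other criteria that are ignored here.

ENTRY_MAX_SF36 = 65      # a participant scoring 65 or below could enroll
RECOVERY_MIN_SF36 = 60   # the post-hoc "normal range" threshold

def meets_normal_range(baseline: int, followup: int) -> bool:
    """True if a participant who qualified at baseline clears the follow-up
    threshold, even when the follow-up score is worse than baseline."""
    return baseline <= ENTRY_MAX_SF36 and followup >= RECOVERY_MIN_SF36

# The example from the text: enter at 65, leave at 60, a 5-point decline,
# yet the "normal range" threshold is still met.
print(meets_normal_range(baseline=65, followup=60))  # True
```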

Again, I will have a lot more to say about how the systematic review assessed treatment trials, particularly trials like PACE. For now, the takeaway message is that the systematic review gave PACE its highest quality rating, willfully ignoring all the evidence to the contrary.

Final Equation

Where does this leave us, at the most basic and simple level?

  • The review lumped eight case definitions together.
  • The review acknowledged that the Oxford definition could include patients without ME/CFS, but forged ahead and included those patients anyway.
  • The review included nine treatment studies based on the Oxford definition.
  • The review rated the PACE trial and two other Oxford CBT/GET/counseling studies as good.
  • The review concluded that it had moderate confidence in the finding that CBT/GET are effective for ME/CFS patients, regardless of definition.

If that does not make sense to you, join the club. I do not understand how it can be scientifically acceptable to generalize treatment trial results from patients who have fatigue but not ME/CFS to patients who do have ME/CFS. Can anyone imagine generalizing treatment results from a group of patients with one disorder to patients with another disease? For example, would the results of a high cholesterol medicine trial be generalized to patients with high blood pressure? No, even though some patients with high blood pressure may have elevated cholesterol, we would not assume the risk of generalizing results from one patient population to another.
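As a purely hypothetical illustration of why that kind of generalization is risky (every number below is invented), consider a trial cohort that mixes patients with idiopathic fatigue, who respond to a therapy, with ME/CFS patients, who do not. The pooled average still looks like a benefit:

```python
# Hypothetical simulation only: cohort mix and effect sizes are invented to
# show how a pooled result can mask a non-responding subgroup.
import random

random.seed(0)

def simulated_improvement(group: str) -> float:
    # Invented assumption: "idiopathic fatigue" patients improve by about
    # 8 points on some fatigue scale; the ME/CFS subgroup does not improve.
    mean = 8.0 if group == "idiopathic fatigue" else 0.0
    return random.gauss(mean, 5.0)

cohort = ["idiopathic fatigue"] * 70 + ["ME/CFS"] * 30
changes = [(group, simulated_improvement(group)) for group in cohort]

pooled_avg = sum(change for _, change in changes) / len(changes)
mecfs = [change for group, change in changes if group == "ME/CFS"]
mecfs_avg = sum(mecfs) / len(mecfs)

print(f"pooled average improvement:  {pooled_avg:.1f}")   # looks like a benefit
print(f"ME/CFS subgroup improvement: {mecfs_avg:.1f}")    # close to zero
```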

But the systematic review’s conclusion is the predictable output of an equation that begins with treating all the case definitions as a single disease entity.

I will be submitting a detailed comment on the systematic evidence review. I encourage everyone to do the same because the report authors must publicly respond to all comments.  More detailed info will be forthcoming this week on possible points to consider in commenting.

This review is going to be with us for a long time. I think it is fair and reasonable to ask the authors to address the multitude of mistakes they have made in their analysis.

Edited to add: Erica Verrillo posted a great summary of problems with the review, as well.

 


Mary Dimmock: Fight the Power

The draft P2P evidence review report has been issued and we have all had a chance to see just how appallingly bad it is. Now the question is what to do next.

Some have called for us to oppose P2P by boycotting it. I absolutely agree that we must oppose P2P. But where I differ is in the nature and breadth of tactics that we need to use.

What has been done to ME patients for thirty years and is being perpetuated in this P2P evidence review report is scientifically indefensible and irresponsible. It starts with the fact that the entire “CFS” enterprise as a clinical entity has been constructed on the sole basis of medically unexplained chronic fatigue. Seriously? Where is the scientific justification and evidence that all of the conditions encompassed by the common, ill-defined symptom of fatigue, given the current state of our medical knowledge, are the same medical condition that should be studied and treated as one? There is none and there never can be.

And yet, for thirty years, that pseudoscience has held ME patients hostage in a living hell.

Such pseudoscience is the bread and butter of those with agendas to keep science from moving forward or to protect their own vested interests. We have seen it with cigarette smoking and acid rain and we are seeing it again with climate change. But you don’t fight climate change deniers by refusing to engage with their “scientific” claims. You fight them by exposing where their “facts” are wrong, their “science” is unsound and their agendas are driven by self-interest. This is what ME advocates have been doing with the PACE trial and, in my opinion, it is what we need to do with P2P.

Providing formal input to P2P allows us to expose the “science” of “CFS” for the scientific sham that it is. AHRQ (Agency for Healthcare Research and Quality, part of HHS) must respond to our comments, which become part of the public record that we can use later. Providing such input is as valid and necessary a form of protest as boycotting or writing letters directly to HHS leaders. All forms of opposition are needed.

But given the history of this disease, we should be under no delusions that left to its own devices, HHS will listen to our P2P opposition, whether it takes the form of letters to HHS leaders, boycotting the meeting, or the submission of comments on the evidence review. Each can be dismissed by those who have chosen not to listen.

And much more fundamentally, we need to remember that P2P is just one event in a string of utter failures in HHS’ public policy toward ME that stretches back to Incline Village. You all know the issues – lack of research funding, harmful medical guidelines, abysmal medical care, lack of a strategy, the nightmare of insurance, disability and school accommodations and an agency hell-bent on acting unilaterally and with complete disregard of both disease experts and patients.

Ultimately, the real question is not what specific form our opposition to P2P should take. There is a place for all actions that shine a light on this travesty. Whatever you choose, make sure your voice is heard. Do not let your silence be construed as consent.

The real question is what else are we going to do to protest, not only about P2P but also about every other aspect of HHS’ handling of this disease for the last thirty years.

If ever there was a time for us to revolt as a community, by whatever means available, it is now.

Contact your congressional leaders and ask every one of your family and friends to do the same. Call your local and/or national media. Twitter. Sue the government. Contact the ACLU. Conduct a lie down demonstration. Protest at P2P. Whatever means of opposition that you can think of and are able to do, just do it!

 


NIH Says No, and Also No

With no announcement or fanfare, the CFS Advisory Committee has posted a response from HHS to the June 2014 recommendations. My information is that – inexplicably – even CFSAC members were not notified when the response was posted. I urge you to read the entire response, but I am going to focus on just a few sentences. There are very serious implications for the future of ME/CFS research, but despite NIH’s entrenched position, there are still things we can do about it.

No Data Sharing Platform For You

The first recommendation was that NIH create and maintain a data sharing platform for ME/CFS research. NIH’s response? No. But their reasoning is remarkable:

[D]eveloping and maintaining a unique ME/CFS database is cost prohibitive in light of the small number of researchers . . . the cost of developing and maintaining an ME/CFS database would significantly reduce funds available for funding research on ME/CFS . . .

Translation: There are not enough of you to make this platform idea worth the money.

But the implication of that last sentence is astounding: maintaining such a database would reduce the funds available for research. Translation: NIH will only spend a fixed amount of money on ME/CFS. Even if NIH decided to create a database, there would be no increase in funds to cover the cost – that money would simply be reallocated from grants.

The background document to the recommendation specifically states that a central data sharing platform would “greatly accelerate research discovery” and foster “opportunities for new scientists to enter the field.” The platform would lower barriers to conducting ME/CFS research. But NIH responds: No, because there aren’t enough researchers and we won’t increase our ME/CFS spending.

Put another way, ME/CFS has a problem because there are not enough researchers. CFSAC proposes a solution of a data platform that could attract the interest of new researchers. NIH says no, because you don’t have enough researchers.

Wait, what?

There Will Be No RFA

The second recommendation was that NIH fund an RFA to address the gaps in ME/CFS research. NIH’s response? No. And the reasoning on this one will make your head hurt, it is so circuitous.

Unfortunately there remains a lack of definitive evidence regarding the etiology, diagnosis, and treatment for ME/CFS. As such, issuing a Request for Applications (RFA) would not be an effective strategy as RFAs generally encourage a narrowly defined research area that addresses more specific gaps in scientific knowledge.

First of all, NIH issued an RFA for ME/CFS in 2006 and it was targeted at Neuroimmune Mechanisms and Chronic Fatigue Syndrome. So the gaps were obvious enough to issue an RFA eight years ago, and more gaps were identified at the 2011 State of the Knowledge meeting, but now we don’t know enough to target those gaps????

Second of all, why is there a “lack of definitive evidence”? Obviously, because NIH funding at $5 million a year is not likely to produce much in the way of definitive evidence on etiology, diagnosis and treatment.

It seems to me that what NIH is actually saying is: we haven’t provided enough funding to identify definitive evidence, and because you haven’t identified definitive evidence we can’t provide you with more funding. If that doesn’t qualify as circular reasoning, I don’t know what does.

What this response tells us is that if NIH persists in this approach, we will be waiting a long time for an RFA or increase in funding. We will have to wait until a) there is a miracle discovery on etiology, diagnosis and treatment or b) 10 to 15 years for the career development idea to produce more researchers who are doing ME/CFS research.

Despite the thorough background and support for the recommendation provided by CFSAC, despite letters from members of Congress in support of an RFA, despite the pleas of advocates and organizations like IACFS/ME, NIH is steadfastly refusing to provide the one thing that we know would accelerate research progress: the money. UNACCEPTABLE.

What You Can Do

The NIH response leaves the door open just a crack – and that crack could make all the difference. The response says that RFAs are “designed to build upon recommendations . . . that incorporate findings from workshops and conferences.” Remind you of anything? Think P2P.

This makes the P2P Workshop more mission critical than ever, especially now that the draft systematic review has been published. The P2P report is supposed to identify gaps in ME/CFS research. NIH has left the door open to an RFA that incorporates findings from workshops. So we need to do everything possible to make sure the P2P report identifies accurate and appropriate gaps.

The systematic review says that CBT is moderately effective. It treats all the case definitions as equivalent. Remember that this review is the single piece of evidence given to the P2P Panel in advance of the Workshop. Do you want the P2P Panel report to incorporate those findings? Do you want an RFA based on findings like that?

I don’t. So here is what you can do:

Now is not the time to lie down. NIH says No? I say push back. This is a critical moment. If we slip and fall now, the consequences will affect us for many years to come.

 


Draft Systematic Review is UP

The draft systematic evidence review on the Diagnosis and Treatment of ME/CFS has been published.

This review is extraordinarily important because it is being presented to the P2P Panel in a closed door session any day now. This review will be the only evidence presented to the P2P Panel in advance of the Workshop on December 9-10, 2014. Expert presentations at the Workshop may support, refute or expand upon the review, but it is likely that the Panel will ascribe very heavy weight to this report.

I have not read the report yet, and will hold off on commenting until I do. A group of advocates is working together to review the material and prepare highlighted issues that others can use in their comments.

Public comment on the review will be accepted through October 20th. Regardless of whether you plan to submit comment, please read at least the executive summary of this report if you are able to do so. It will be one of the most important documents on ME/CFS published by the government this year.

 


P2P Participation, Part 2

I have new information on participation in the Pathways to Prevention ME/CFS Workshop:

The Office of Disease Prevention confirmed via telephone that the public will be able to participate in discussion at the P2P Workshop, in person and online. ODP explicitly said that people attending in person can ask questions or make comments via microphones or computers in the room. Webcast viewers can type in comments and questions in a comment box on the webpage. There is a total of 3.5 hours of “Discussion” time noted on the draft agenda, and this is when public input will be addressed. The ME/CFS meeting will follow a procedure very similar to the upcoming P2P meeting on opioid use, so we will be able to see how it works. While there is no guarantee of how much we will be included in the discussion, I am very glad that we finally got some clarity on this issue.

Dr. Susan Maier (NIH) confirmed via email that the comment period on the P2P final report will be extended. Originally, we were going to have from December 12 to December 26th to submit comment on this vital report on the direction of ME/CFS research. This is the worst possible timing for a population as disabled as ME/CFS patients, falling right at the holidays. Multiple groups and individuals requested an extension of this time as an accommodation of our disability. Dr. Maier has confirmed that the comment deadline will be extended to 30 days, meaning the new deadline should be around January 12, 2015. This is a fair and reasonable period of time, and I thank NIH for making this accommodation.

So here is where I repeat my plea for as many people as possible to attend the meeting on December 9-10th, watch it via webcast, and comment on the draft report. Register for the meeting here.

I know that some advocates believe that watching the meeting or submitting comments is some kind of endorsement of the process, and that this participation will be used against us. I strongly disagree. Silence will be interpreted as consent. This is especially true given that we now have better opportunities to participate (although it remains to be seen how many of our questions are actually addressed, of course). We have been complaining for years that NIH needs to do more about ME/CFS, and now they believe they are taking a big step to do more.

I am on record as saying that I believe the P2P Workshop is fundamentally flawed in its present form. But I will attend this meeting, I will ask questions, and I will submit comment. I am not doing so because I think I can fix the fundamental flaw by myself. I am doing so – I am doing all the P2P work I have done – because at the very least, I will make sure that this process is conducted in the light. I will make sure that people know what is being done, how and by whom.

P2P is offering us a tiny itty bitty piece of a microphone. I say hold on, and speak up.

 
