Archive

Archive for the ‘Commentary’ Category

Illness Beliefs (or Why I Am Not an ME/CFS Activist)

October 21st, 2014 4 comments

Today, Joe Landson shares his thoughts on how false illness beliefs (or even cognitive bias) among scientists are holding our field back. Joe says it is time to tear down the walls and think horizontally – to the horizon, even. Let’s look at what is actually in front of us, and not what we expect to see.

I am not an ME/CFS activist because of incorrect illness beliefs. Yes, incorrect illness beliefs – as I see it, they’re the main challenge in ME/CFS. No, no, not our beliefs as patients – the Wessely School’s notions that our beliefs make us sick are absurd, and always were. No, I mean incorrect medical ideas generally, specifically ideas of what makes an illness ‘real’ or not, or what makes an illness at all.

Let me explain. No, there is too much, let me sum up. For a very long time, doctors and researchers defined an illness as ‘organic’ by the tissue damage they could see. For example, the tumors they could see, feel and biopsy made cancer ‘real’. It was damage to an organ; hence, organic.

However, this approach has had some treatment limitations. Cut the tumor out; more grow back. Eventually, medicos devised treatments to shrink the tumors and make them less likely to return: namely, chemotherapy and radiation.

For decades, treatment of cancer and other organic diseases generally improved. However, treatment of those other diseases, the ones without detectable organic damage known as functional disorders, was decidedly mixed. It’s fair to say that treatment philosophies for those invisible functional disorders often (though not always) featured extraordinary contempt for the illness and the patient. From Dr. Lewis Yealland’s electrocution of World War One shell-shock patients to the Wessely School’s use of forced exercise for ME/CFS, it seems that contempt generally wins. Contempt is quick and seems to produce clear results… much the same way that cutting out the tumor seemed to ‘cure’ the cancer. Except it didn’t, and still doesn’t.

Meanwhile cancer treatment has evolved. As I write this, the Food and Drug Administration (FDA) is fast tracking radical new immunotherapy for cancer. Immunotherapy doesn’t affect the organic damage directly; rather, it blocks, damps or corrects the immune signals that encourage the tumors or other organic damage to occur. It’s about the signals – the signals that tell the organic damage to start or stop.

This raises the question: What about immune signals that don’t produce organic damage? What about illnesses with a chronic pattern of bad immune signals, but no apparent organic damage at all? What if the signal pattern is the damage?

This mental leap surmounts the wall medicos have built between organic diseases and functional disorders. Both types of illness can potentially be treated the same way – perhaps even with the same drugs, if the Rituximab studies are any indication. Moreover, doctors now can sometimes detect and treat disease before the organic damage ever happens.

This shift in medical beliefs is going on all around us, but not for us, because most of the official gatekeepers of ME/CFS are working so very hard to keep this illness category locked in place, endlessly describing empirical symptoms instead of genuinely investigating their underlying mechanisms. In the constant balancing act in life between control and progress, they side with control. Rather than waste time arguing with these gatekeepers, I’d like to do an end run around them, and point out that all these invisible functional disorders are ‘organic’, if we only change our minds and amend what we mean by ‘organic’. Organic can be a pattern of immune signals rather than the organ damage of yesteryear. Similarly I think our well-meaning friends who insist our bad signals must be located in our brains – organ damage all over again – are thinking too narrowly. Certainly our brains are deeply affected, but that doesn’t mean the bad signals start or end there.

When I grow up, I want to be a bomb-throwing medical anarchist. For NSA-types scanning this blog, no, I don’t mean actual bombs. I want to blow up medical ideas of what ‘organic disease’ really means. I want to explode the borders of medicine – the borders between organic disease and functional disorders; the borders between medicine and psychiatry generally. But just to keep you nervous, internet police, I can and will say that in Arabic if I have to!

This is a long way of explaining why I’m not an ME/CFS activist, per se. I think trying to maintain ME/CFS as a category is a narrow goal and a rigged game – rigged because our government seems dedicated to ‘evidence-based’ approaches to ME/CFS, rather than re-imagining the evidence we have. I think arguing over this or that definition of ME, or CFS, is a poor use of our time and energy, because none of the definitions extant define the immune signals that I suspect (but can’t prove yet) make us sick. To me, all the tiny, empirical functional categories, from bipolar disorder to Morgellons, are empty shells of outdated thinking. In pure immune research, someone is finding those signals as we speak – it’s just not labeled ME/CFS research, or Morgellons or bipolar research. At least, not yet.

We should seek this research out, celebrate and promote it. We should do as some are already doing, and point out both the sorry current state of – and the immense future possibilities for – almost all the invisible illnesses. Most of all we should see and portray the invisible illnesses as part of a continuum of immune signaling disorders, beyond their separate, and inherently unequal, empirical definitions.

 

Evidence Review Comments Preview

October 15th, 2014 21 comments

This post comes via Mary Dimmock, Claudia Goodell, Denise Lopez-Majano, and myself. You are welcome to publish it on your site with attribution and a link back to this post. You are also welcome to use this (and other material we’ve gathered) as a framework for your own comments on the draft evidence review - due October 20th.

It’s been a challenging few weeks, digesting and analyzing the AHRQ Draft Systematic Evidence Review on Diagnosis and Treatment of ME/CFS.  We continue to be deeply concerned about the many flaws in the review, in terms of both the approach it took and how it applied the study protocol.

Our comments on the Review will reflect our significant concerns about how the Evidence Review was conducted; about the conclusions it draws regarding diagnosis, subgroups, treatments, and harms; and about the risk of undue harm that this report creates for patients with ME. We believe a final version should not be published until these scientific issues are resolved.

Most fundamentally, the Evidence Review is grounded in the flawed assumption that eight CFS and ME definitions all represent the same group of patients that are appropriately studied and treated as a single entity or group of closely related entities. Guided by that assumption, this Evidence Review draws conclusions on subgroups, diagnostics, treatments and harms for all CFS and ME patients based on studies done in any of these eight definitions. In doing so, the Evidence Review disregards its own concerns, as well as the substantial body of evidence that these definitions do not all represent the same disease and that the ME definitions are associated with distinguishing biological pathologies. It is unscientific, illogical and risky to lump disparate patients together without regard to substantive differences in their underlying conditions.

Compounding this flawed assumption are the a priori choices in the Review Protocol that focused on a narrower set of questions than originally planned and that applied restrictive inclusion and exclusion criteria. As a result, evidence that would have refuted the flawed starting assumption or that was required to accurately answer the questions was never considered. Some examples of how these assumptions and protocol choices negatively impacted this Evidence Review include:

  • Evidence about the significant differences in patient populations and in the unreliability and inaccuracy of some of these definitions was ignored and/or dismissed. This includes: Dr. Leonard Jason’s work undermining the Reeves Empirical definition; a study that shows the instability of the Fukuda definition over time in the same patients; studies demonstrating that Fukuda and Reeves encompass different populations; and differences in inclusion and exclusion criteria, especially regarding PEM and psychological disorders.
  • Diagnostic methods were assessed without first establishing a valid reference standard. Since there is no gold reference standard, each definition was allowed to stand as its own reference standard without demonstrating it was a valid reference.
  • Critical biomarker and cardiopulmonary studies, some of which are in clinical use today, were ignored because they were judged to be intended to address etiology, regardless of the importance of the data. This included most of Dr. Snell’s and Dr. Keller’s work on two day CPET, Dr. Cook’s functional imaging studies, Dr. Gordon Broderick’s systems networking studies, Dr. Klimas’s and Dr. Fletcher’s work on NK cells and immune function, and all of the autonomic tests. None of it was considered.
  • Treatment outcomes associated with all symptoms except fatigue were disregarded, potentially resulting in a slanted view of treatment effectiveness and harm. This decision excluded Dr. Lerner’s antiviral work, as well as entire classes of pain medications, antidepressants, anti-inflammatories, immune modulators, sleep treatments and more. If the treatment study looked at changes in objective measures like cardiac function or viral titers, it was excluded. If the treatment study looked at outcomes for a symptom other than fatigue, it was excluded.
  • Treatment trials that were shorter than 12 weeks were excluded, even if the treatment duration was therapeutically appropriate. The big exclusion here was the rituximab trial; despite following patients for 12 months, it was excluded because administration of rituximab was not continuous for 12 weeks (even though rituximab is not approved for 12 weeks continuous administration in ANY disease). Many other medication trials were also excluded for not meeting the 12 week mark.
  • Counseling and CBT treatment trials were inappropriately pooled without regard for the vast differences in therapeutic intent across these trials. This meant that CBT treatments aimed at correcting false illness beliefs were lumped together with pacing and supportive counseling studies, and treated as equivalent.
  • Conclusions about treatment effects and harms failed to consider what is known about ME and its likely response to the therapies being recommended. This means that the PACE (an Oxford study) results for CBT and GET were not only accepted (despite the many flaws in those data), but were determined to be broadly applicable to people meeting any of the case definitions. Data on the abnormal physiological response to exercise in ME patients were excluded, and so the Review did not conclude that CBT and GET could be harmful to these patients (although it did allow it might be possible).
  • The Evidence Review states that its findings are applicable to all patients meeting any CFS or ME definition, regardless of the case definition used in a particular study.

The issues with this Evidence Review are substantial in number, magnitude and extent. At its root is the assumption that any case definition is as good as the rest, and that studies done on one patient population are applicable to every other patient population, despite the significant and objective differences among these patients. The failure to differentiate between patients with the symptom of subjective unexplained fatigue on the one hand, and objective immunological, neurological and metabolic dysfunction on the other, calls into question the entire Evidence Review and all conclusions made about diagnostic methods, the nature of this disease and its subgroups, the benefits and harms of treatment, and the future directions for research.

As the Evidence Review states, the final version of this report may be used in the development of clinical practice guidelines or as a basis for reimbursement and coverage policies. It will also be used in the P2P Workshop and in driving NIH’s research strategy. Given the likelihood of those uses and the Evidence Review’s claim of broad applicability to all CFS and ME patients, the flaws within this report create an undue risk of significant harm to patients with ME and will likely confound research for years to come. These issues must be addressed before this Evidence Review is issued in its final form.

 

They Know What They’re Doing (Not)

October 6th, 2014 18 comments

This post comes via Mary Dimmock, with assistance from Claudia Goodell, Denise Lopez-Majano, and myself. You are welcome to publish it on your site with attribution to Mary Dimmock.

 

Last week, Jennie Spotila and Erica Verillo posted summaries of just some of the issues with AHRQ’s Draft Systematic Evidence Review, conducted for P2P.

Jennie and Erica highlighted serious and sometimes insurmountable flaws with this Review, including:

  • The failure to be clear and specific about what disease was being studied.
  • The acceptance of 8 disparate ME or CFS definitions as equivalent in spite of dramatic differences in inclusion and exclusion criteria.
  • The bad science reflected in citing Oxford’s flaws and then using Oxford studies anyway.
  • The well-known problems with the PACE trial.
  • The flawed process that used non-experts on such a controversial and conflicted area.
  • Flawed search methods that focused on fatigue.
  • Outright errors in some of the basic information in the report and apparent inconsistencies in how inclusion criteria were applied.
  • Poorly designed and imprecise review questions.
  • Misinterpretation of cited literature.

In this post, I will describe several additional key problems with the AHRQ Evidence Review.

Keep in mind that comments must be submitted by October 20, 2014. Directions for doing so are at the end of this post.

We Don’t Need No Stinking Diagnostic Gold Standard

Best practices for diagnostic method reviews state that a diagnostic gold standard is required as the benchmark. But there is no agreed upon diagnostic gold standard for this disease, and the Review acknowledges this. So what did the Evidence Review do? The Review allowed any of 8 disparate CFS or ME definitions to be used as the gold standard and then evaluated diagnostic methods against and across the 8 definitions. But when a definition does not accurately reflect the disease being studied, that definition cannot be used as the standard. And when the 8 disparate definitions do not describe the same disease, you cannot draw conclusions about diagnostic methods across them.
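The circularity described above can be illustrated with a small sketch. These are hypothetical patients and numbers, not data from any actual study: the point is only that the measured accuracy of the very same diagnostic marker swings depending on which case definition is treated as the "reference standard," so no conclusion drawn this way is meaningful without a validated standard.

```python
# Illustrative sketch (hypothetical data): the same diagnostic marker,
# scored against two different case definitions used as the "reference
# standard," produces very different accuracy figures.

def sensitivity(test_positive, reference_positive):
    """Fraction of reference-positive patients the test flags as positive."""
    true_pos = sum(1 for t, r in zip(test_positive, reference_positive) if t and r)
    return true_pos / sum(reference_positive)

# Ten hypothetical patients; each definition labels a different subset "ill".
marker = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]   # the diagnostic method under review
broad  = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]   # broad definition (e.g. fatigue alone)
narrow = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]   # narrow definition (e.g. requires PEM)

print(sensitivity(marker, broad))   # 0.625 — the marker looks mediocre
print(sensitivity(marker, narrow))  # 1.0   — the same marker looks perfect
```

Neither figure tells you anything about the marker itself until one of the "standards" is shown to actually identify the disease in question.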

What makes this worse is that the reviewers recognized the importance of PEM but failed to consider the implications of Fukuda’s and Oxford’s failure to require it. The reviewers also excluded, ignored or downplayed substantial evidence demonstrating that some of these definitions could not be applied consistently, as CDC’s Dr. Reeves demonstrated about Fukuda.

Beyond this, some diagnostic studies were excluded because they did not use the “right” statistics or because the reviewer judged the studies to be “etiological” studies, not diagnostic methods studies. Was NK-Cell function eliminated because it was an etiological study? Was Dr. Snell’s study on the discriminative value of CPET excluded because it used the wrong statistics? And all studies before 1988 were excluded. These inclusion/exclusion choices shaped what evidence was considered and what conclusions were drawn.

Erica pointed out that the Review misinterpreted some of the papers expressing harms associated with a diagnosis. The Review failed to acknowledge the relief and value of finally getting a diagnosis, particularly from a supportive doctor. The harm is not from receiving the diagnostic label, but rather from the subsequent reactions of most healthcare providers. At the same time, the Review did not consider other harms like Dr. Newton’s study of patients with other diseases being diagnosed with “CFS” or another study finding some MS patients were first misdiagnosed with CFS. The Review also failed to acknowledge the harm that patients face if they are given harmful treatments out of a belief that CFS is really a psychological or behavioral problem.

The Review is rife with problems: Failing to ask whether all definitions represent the same disease. Using any definition as the diagnostic gold standard against which to assess any diagnostic method. Excluding some of the most important ME studies. It is no surprise, then, that the Review concluded that no definition had proven superior and that there are no accepted diagnostic methods.

But remarkably, reviewers felt that there was sufficient evidence to state that those patients who meet CCC and ME-ICC criteria were not a separate group but rather a subgroup with more severe symptoms and functional limitations. By starting with the assumption that all 8 definitions encompass the same disease, this characterization of CCC and ICC patients was a foregone conclusion.

But Don’t Worry, These Treatment Trials Look Fine

You would think that at this point in the process, someone would stand up and ask about the scientific validity of comparing treatments across these definitions. After all, the Review acknowledged that Oxford can include patients with other causes of the symptom of chronic fatigue. But no, the Evidence Review continued on to compare treatments across definitions regardless of the patient population selected. Would we ever evaluate treatments for cancer patients by first throwing in studies with fatigued patients? The assessment of treatments was flawed from the start.

But the problems were then compounded by how the Review was conducted. The Review focused on subjective measures like general function, quality of life and fatigue, not objective measures like physical performance or activity levels. In addition, the Review explicitly decided to focus on changes in the symptom of fatigue, not PEM, pain or any other symptom. Quality issues with individual studies were either not considered or ignored. Counseling and CBT studies were all lumped into one treatment group, without consideration of the dramatic difference in therapeutic intent of the two. Some important studies like Rituxan were not considered because the treatment duration was considered too short, regardless of whether it was therapeutically appropriate.

And finally, the Review never questioned whether the disease theories underlying these treatments were applicable across all definitions. Is it really reasonable to expect that a disease that responds to Rituxan or Ampligen is going to also respond to therapies that reverse the patient’s “false illness beliefs” and deconditioning? Of course not.

If their own conclusions on the diagnostic methods and the problems with the Oxford definition were not enough to make them stop, the vast differences in disease theories and therapeutic mechanism of action should have made the reviewers step back and raise red flags.

At the Root of It All

This Review brings into sharp relief the widespread confusion on the nature of ME and the inappropriateness of having non-experts attempt to unravel a controversial and conflicting evidence base about which they know nothing.

But just as importantly, this Review speaks volumes about the paltry funding and institutional neglect of ME reflected in the fact that the study could find only 28 diagnostic studies and 9 medication studies to consider from the last 26 years. This Review speaks volumes about the institutional mishandling that fostered the proliferation of disparate and sometimes overly broad definitions, all branded with the same “CFS” label. The Review speaks volumes about the institutional bias that resulted in the biggest, most expensive and greatest number of treatment trials being those that studied behavioral and psychological pathology for a disease long proven to be the result of organic pathology.

This institutional neglect, mishandling and bias have brought us to where we are today. That the Evidence Review failed to recognize and acknowledge those issues is stunning.

Shout Out Your Protest!

This Evidence Review is due to be published in final format before the P2P workshop and it will affect our lives for years to come. Make your concerns known now.

  1. Submit public comments on the Evidence Review to the AHRQ website by October 20.
  2. Contact HHS and Congressional leaders with your concerns about the Evidence Review, the P2P Workshop and HHS’ overall handling of this disease. Erica Verillo’s recent post provides ideas and links for how to do this.

However you choose to protest, make your concerns known!

 

A Review of the P2P Systematic Review

September 29th, 2014 44 comments

The draft systematic evidence review on the Diagnosis and Treatment of ME/CFS was published online last week. It’s a monster – 416 pages in total. I know many ME/CFS patients may not be able to read this report, so in this post I’m going to focus on three things: the purpose of the report, the lumping of multiple case definitions, and the high quality rating given to the PACE trial. If you read nothing else about this systematic review, then these are the biggest takeaway messages.

The Purpose of the Systematic Review

NIH requested the review for the purposes of the P2P Workshop, and the Agency for Healthcare Research and Quality contracted with Oregon Health & Science University to perform the review for about $350,000.

The primary purpose of the review is to serve as the cornerstone of knowledge for the P2P Panel. The Panel will be made up entirely of non-ME/CFS experts. In order to give them some knowledge base for the Workshop presentations, the Panel will receive this review and a presentation by the review authors (behind closed doors). Until the Workshop itself, this review will be the Panel’s largest source of information about ME/CFS.

But that is not the only use for this report. AHRQ systematic reviews are frequently published in summary form in peer reviewed journals, as was the 2001 CFS review. The report will be available online, and will be given great credence simply because it is an AHRQ systematic review. The conclusions of this review – including the quality rating of the PACE trial – will be entrenched for years to come.

You can expect to see this review again and again and again. In the short term, this review will be the education given to the P2P Panel of non-ME/CFS experts in advance of the Workshop. But the review will also be published, cited, and relied upon by others as a definitive summary of the state of the science on diagnosing and treating ME/CFS.

Case Definition: I Told You So

When the protocol for this systematic review was published in May 2014, I warned that the review was going to lump all case definitions together, including the Oxford definition. After analyzing the review protocol and the Workshop agenda, Mary Dimmock and I wrote that the entire P2P enterprise was based on the assumption that all the case definitions described the same single disease, albeit in different ways, and that this assumption put the entire effort at risk. Some people may have hoped that a systematic review would uncover how different Oxford and Canadian Consensus Criteria patients were, and would lead to a statement to that effect.

Unfortunately, Mary and I were correct.

The systematic review considered eight case definitions, including Oxford, Fukuda, Canadian, Reeves Empirical, and the International Consensus Criteria, and treated them as describing a single patient population. They lumped all these patient cohorts together, and then tried to determine what was effective in diagnosing and treating this diverse group. The review offers no evidence to support their assumption, beyond a focus on the unifying feature of fatigue.

What I find particularly disturbing is that the review did acknowledge that maybe Oxford didn’t belong in the group:

We elected to include trials using any pre-defined case definition but recognize that some of the earlier criteria, in particular the Oxford (Sharpe, 1991) criteria, could include patients with 6 months of unexplained fatigue and no other features of ME/CFS. This has the potential of inappropriately including patients that would not otherwise be diagnosed with ME/CFS and may provide misleading results. (p. ES-29, emphasis added)

But then they did it anyway.

This is inexplicably bad science. How can they acknowledge that Oxford patients may not have ME/CFS and acknowledge that including them may provide misleading results, and then include them anyway? Is it just because Oxford papers claim to be about CFS and include people with medically unexplained fatigue? The systematic review authors clearly believed that this was a sufficient minimum standard for inclusion in analysis, despite the acknowledged risk that it could produce misleading results.

I will have a lot more to say on this topic and the problems in the review’s analysis. For now, the bottom line takeaway message is that the systematic review combined all the case definitions, including Oxford, and declared them to represent a single disease entity based on medically unexplained fatigue.

PACE is Ace

One of the dangers of the review’s inclusion of the Oxford definition and related studies was the risk that PACE would be highly regarded. And that is exactly what happened.

The PACE trial is one of seven treatment studies (out of a total of thirty-six) to receive the “Good” rating, which has a specific technical meaning in this context (Appendix E). In the systematic review, a randomized controlled trial is “Good” if it includes comparable groups, uses reliable and valid measurement instruments, considers important outcomes, and uses an intention-to-treat analysis. I’m certainly no expert in these issues, but I can spot a couple of problems.

First of all, the PACE trial may have used comparable groups within the study, but that internal consistency is different from whether the PACE cohort was comparable to other ME/CFS patients. The systematic review already acknowledged that the Oxford cohort may include people who do not actually have ME/CFS, and in my opinion that is the comparable group that matters.

In terms of important outcomes, the systematic review focused on patient-centered outcomes related to overall function, quality of life, ability to work and measures of fatigue. Yet there is no discussion or acknowledgement that patient performance on a 6 minute walking test at the end of PACE showed that they remained severely impaired. There is also no acknowledgement that a patient could enter PACE with an SF-36 score of 65, leave the trial with a score of 60, and be counted as recovered. That is because so many changes were made to the study in post-hoc analysis, including a change to the measures of recovery. Incredibly, the paper in which the PACE authors admit to those post-hoc changes is not cited in the systematic review. It is also important to point out that much of the discussion of the PACE flaws has occurred in Letters to the Editor and other types of publications, many of which were wholly excluded from the systematic review.
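To see how the overlapping thresholds described above work, here is a minimal sketch. The entry and recovery cutoffs are the SF-36 figures reported in this post; the one-line function is my illustrative simplification of what was in reality a multi-part recovery definition.

```python
# Illustrative sketch of the threshold overlap: a patient whose SF-36
# physical function score actually DROPS during the trial can still be
# counted toward "recovery". Cutoffs are as described in the text;
# the single-criterion function is a deliberate simplification.

ENTRY_MAX = 65      # score low enough (impaired enough) to enter the trial
RECOVERY_MIN = 60   # post-hoc threshold counted toward "recovery"

def counts_as_recovered(entry_score, exit_score):
    """True if the patient was trial-eligible and exits above the recovery cutoff."""
    return entry_score <= ENTRY_MAX and exit_score >= RECOVERY_MIN

# Enter at 65, decline to 60, and still count as recovered:
print(counts_as_recovered(entry_score=65, exit_score=60))  # True
```

Because the entry ceiling (65) sits above the recovery floor (60), "recovered" and "sick enough to enroll" are overlapping categories, which is exactly the problem with the post-hoc change.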

Again, I will have a lot more to say about how the systematic review assessed treatment trials, particularly trials like PACE. For now, the takeaway message is that the systematic review gave PACE its highest quality rating, willfully ignoring all the evidence to the contrary.

Final Equation

Where does this leave us, at the most basic and simple level?

  • The review lumped eight case definitions together.
  • The review acknowledged that the Oxford definition could include patients without ME/CFS, but forged ahead and included those patients anyway.
  • The review included nine treatment studies based on the Oxford definition.
  • The review rated the PACE trial and two other Oxford CBT/GET/counseling studies as good.
  • The review concluded that it had moderate confidence in the finding that CBT/GET are effective for ME/CFS patients, regardless of definition.

If that does not make sense to you, join the club. I do not understand how it can be scientifically acceptable to generalize treatment trial results from patients who have fatigue but not ME/CFS to patients who do have ME/CFS. Can anyone imagine generalizing treatment results from a group of patients with one disorder to patients with another disease? For example, would the results of a high cholesterol medicine trial be generalized to patients with high blood pressure? No, even though some patients with high blood pressure may have elevated cholesterol, we would not assume the risk of generalizing results from one patient population to another.

But the systematic review’s conclusion is the predictable output of an equation that begins with treating all the case definitions as a single disease entity.

I will be submitting a detailed comment on the systematic evidence review. I encourage everyone to do the same because the report authors must publicly respond to all comments.  More detailed info will be forthcoming this week on possible points to consider in commenting.

This review is going to be with us for a long time. I think it is fair and reasonable to ask the authors to address the multitude of mistakes they have made in their analysis.

Edited to add: Erica Verillo posted a great summary of problems with the review, as well.

 

Burning Underground

September 3rd, 2014 11 comments

Just over a year ago, advocate Leela Play noticed something odd on a federal contracting website. What she found was a notice of intent to award a sole source contract to the Institute of Medicine to create clinical diagnostic criteria for ME/CFS. And just like that, the ME/CFS landscape changed.

What followed was a month-long attempt to stop the government from issuing this contract, and when that failed more attempts were made to get the contract rescinded. The advocacy and scientific communities faced divisions over positions and tactics. Meanwhile, the IOM contract has moved towards its conclusion in March 2015.

Current activity – both IOM and advocacy – is smoldering underground. But no one should mistake this period of quiet to mean that nothing is happening.

Where Is IOM?

The process and schedule for this IOM study is set forth in the contract, and is moving pretty much on track. The committee was selected in December 2013, and held two public listening sessions (January and May 2014). The committee has met behind closed doors four times, with a fifth meeting scheduled for this week. Bare bones meeting summaries are posted on the project website after the meetings.

Committee members have reviewed a great volume of material. An extensive literature search has been conducted. In addition, the public has submitted comments and materials over the course of the contract, numbering more than 4,000 pages the last time I checked. There are also indications that the committee may have examined raw data, although details about that are not yet available.

The study seems to be running slightly ahead of the schedule laid out in the contract, at least judging from the meeting dates. If so, then it means the committee is putting the finishing touches on its recommendations and the case definition. The next step is sending the draft report out for peer review, with delivery on track for early 2015.

Where Are We?

As reflected on this and other blogs, discussion forums, and social media, ME/CFS advocacy exploded when we learned about the contract. I’ve compared it to dropping a match on a lake of gasoline. For the most part, we focused our attention outward towards the government, IOM and the media. But at various times, we’ve also focused attention inward. We’ve criticized each other for our positions on the contract, the degree to which we have participated in the process, and for the tactics we’ve used. Sometimes, the criticism has not been constructive. This is not unexpected when people feel cornered and the stakes are high.

DHHS stated at the June 2014 CFS Advisory Committee meeting that it wants to work with the advocacy community on a path forward after the IOM report. As I wrote in my meeting summary, if this "means the kind of involvement we have had to date, then there is nothing to really talk about." HHS holds all the cards here, and it will take more than token efforts on both sides to actually move forward together. Obviously, this raises the question of whether ME/CFS advocates will even want to move forward with the IOM report. It all depends on what that report says.

What Next?

I think one possible analogy for where we are now is the Centralia mine fire. This fire has been burning in a coal seam beneath the town of Centralia, Pennsylvania for 52 years. Underground coal fires can burn for years undetected. Eventually, the ground collapses in sinkholes, allowing oxygen to rush in and fuel the fire even more.

On the surface, it may not seem like advocates are paying much attention to the IOM study right now. A number of prominent voices in our community have retired (temporarily, I hope) or taken breaks to recover from the crashes brought on by advocacy. The scientific community has not been publicly discussing IOM. And the IOM committee members themselves are bound by very strict confidentiality rules, so they’re not talking either.

Don’t let the quiet fool you. Work has continued on multiple fronts this year, and I expect we will hear developments in the near future. It won’t take much disturbance on the surface to refuel this fire. A sinkhole, some oxygen, and we’ll be at it again. What I’m wondering these days is who is going to get burned.

 

*Photo credit: Cole Young, Flickr, Creative Commons license

P2P: The Question They Will Not Ask

July 21st, 2014 37 comments

by Mary Dimmock and Jennie Spotila

The most important question about ME/CFS – the question that is the cornerstone for every aspect of ME/CFS science – is the question that the P2P Workshop will not ask:

How do ME and CFS differ? Do these illnesses lie along the same continuum of severity or are they entirely separate with common symptoms? What makes them different, what makes them the same? What is lacking in each case definition – do the non-overlapping elements of each case definition identify a subset of the illness or do they encompass the entirety of the population?

Boiled down to its essence, this set of questions is asking whether all the “ME/CFS” definitions represent the same disease or set of related diseases. The failure to ask this question puts the entire effort at risk.

This fundamental question was posed in the 2012 application for the Office of Disease Prevention to hold the P2P meeting (which I obtained through FOIA). It was posed in the 2013 contract between AHRQ and the Oregon Health & Science University for the systematic evidence review (which I obtained through FOIA). It was posed to the P2P Working Group at its January 2014 meeting to refine the questions for the evidence review and Workshop (according to Dr. Susan Maier at the January 2014 Institute of Medicine meeting).

And then the question disappeared.

The systematic evidence review protocol does not include it. Dr. Beth Collins-Sharp said at the June 2014 CFSAC meeting that the Evidence Practice Center is not considering the question because there is “not enough evidence” in the literature to answer the question. However, she said that the P2P Workshop could still consider the question.

But the draft agenda for the Workshop does not include it. Furthermore, every aspect of the P2P Workshop treats “ME/CFS” as a single disease:

  • The P2P description of ME/CFS refers to it as a single disorder or illness throughout the meeting webpage.
  • The P2P website characterizes the names myalgic encephalomyelitis and chronic fatigue syndrome as synonymous.
  • Every section of the Workshop agenda lumps all the populations described by the multiple case definitions together, discussing prevalence, tools, subsets, outcomes, presentation, and diagnosis of this single entity.

A 20-minute presentation on “Case Definition Perspective” is the only lip service paid to this critical issue. This is completely inadequate, if for no other reason than that the presentation is isolated from discussions on the Workshop Key Questions and dependent topics like prevalence and natural history. As a result, it is unlikely to be thoroughly discussed unless one of the Panelists has a particular interest in it.

Why is this problematic? Because both the P2P Workshop and the evidence review are based on the assumption that the full set of “ME/CFS” case definitions describe the same disease. This assumption has been made without proof that it is correct and in the face of data that indicate otherwise, and therein lies the danger of failing to ask the question.

What if the case definitions do not actually describe a single disease? If there are disparate conditions like depression, deconditioning, non-specific chronic fatigue and a neuroimmune disease characterized by PEM encompassed by the full set of “ME/CFS” definitions, then lumping those together as one entity would be unscientific.

The most important part of designing scientific studies is to properly define the study subjects. One would not combine liver cancer and breast cancer patients into a single cohort to investigate cancer pathogenesis. The combination of those two groups would confound the results; such a study would be meaningful only if the two groups were separately defined and then compared to one another to identify similarities or differences. The same is true of the P2P evidence review of diagnostics and treatments: assuming that all “ME/CFS” definitions capture the same disease (or even a set of biologically related diseases) and attempting to compare studies on the combined patients will yield meaningless and confounded results if those definitions actually encompass disparate diseases.

There is a growing body of evidence that underscores the need to ask the fundamental question of whether “ME/CFS” definitions represent the same disease:

  • The P2P Workshop is focused on “extreme fatigue” as the defining characteristic of “ME/CFS,” but fatigue is a common but ill-defined symptom across many diseases. Further, not all “ME/CFS” definitions require fatigue or define it in the same way. For instance, Oxford requires subjective fatigue, and specifically excludes patients with a physiological explanation for their fatigue. But the ME-ICC does not require fatigue; instead it requires PENE, which is defined to have a physiological basis.
  • When FDA asked CFS and ME patients to describe their disease, we did not say “fatigue.” Patients told FDA that post-exertional malaise was the most significant symptom: “complete exhaustion, inability to get out of bed to eat, intense physical pain (including muscle soreness), incoherency, blacking out and memory loss, and flu-like symptoms.”
  • Multiple studies by Jason, Brenu, Johnston and others have demonstrated significant differences in disease severity, functional impairment, levels of immunological markers and patient-reported symptoms among the different case definitions.
  • Multiple studies have demonstrated that patients with PEM have impairment in energy metabolism and lowered anaerobic threshold, and have shown that patients with depression, deconditioning and a number of other chronic illnesses do not have this kind of impairment.
  • Multiple studies have demonstrated differences in exercise-induced gene expression between Fukuda/CCC patients and both healthy and disease control groups.
  • The wide variance in prevalence estimates shines a light on the case definition problem. Prevalence estimates for Oxford and Empirical populations are roughly six times higher than the most commonly accepted estimate for Fukuda. Even Fukuda prevalence estimates vary widely, from 0.07% to 2.6%, underscoring the non-specificity of the criteria. Nacul, et al., found that the prevalence using CCC was only 58% of the Fukuda prevalence. Vincent, et al., reported that 36% of Fukuda patients had PEM, representing a smaller population that would be eligible for diagnosis under CCC.
  • The work of Dr. Jason highlights the danger of definitions that include patients with primary psychiatric illnesses, especially because such patients may respond very differently to treatments like CBT and GET.

By contrast, there have not been any published studies that demonstrate that the set of “ME/CFS” definitions being examined in P2P encompass a single entity or biologically related set of entities. From Oxford to Fukuda to ME-ICC, there are significant differences in the inclusion and exclusion criteria, including differences in the exclusion of primary psychiatric illness. The magnitude of these differences makes the lack of such proof problematic.

Given that treating all “ME/CFS” definitions as a single entity is based on an unproven assumption of the clinical equivalence of these definitions, and given that there is ample proof that these definitions do not represent the same disease or patient population, it is essential that the P2P “ME/CFS” study start by asking this question:

Does the set of “ME/CFS” definitions encompass the same disease, a spectrum of diseases, or separate, discrete conditions and diseases?

The failure to tackle this cornerstone question up-front in both the agenda and the evidence review puts the scientific validity of the entire P2P Workshop at risk. If this question is not explicitly posed, then the non-ME/CFS expert P2P Panel will swallow the assumption of a single disorder without question, if for no other reason than that they do not know the literature well enough to recognize that it is an assumption and not established fact.

 

This post was translated into Dutch with my permission.

 

Guest Post: Longtime Patient, New Advocate

June 30th, 2014 12 comments

I am very pleased to share this guest post from Darlene Prestwich in which she shares her experiences as a new(ish) advocate. I’ve been doing this so long, sometimes I forget what it was like to jump in the deep end of the advocacy pool. Darlene describes her own experiences with grace, and I am so grateful she is sharing them here today.


This week I’m home alone. My family is on an annual week-long camping trip to a neighboring state. It’s incredibly painful sending them off to do things that I absolutely love to do year after year, but I don’t want ME/CFS to take those experiences away from them, too. So they stock the fridge before they leave and go adventuring without me. Last year I found it incredibly difficult to send them off. I was homebound and dealing with a particularly nasty and long-lived crash that looked as if it might be my new baseline. I had to spend much of the day in bed, being capable of self care but not much more. I was lonely, sad, and so very sick.

I could have reached out to friends, extended family, or supportive church groups, but I simply didn’t have enough energy for social interaction. That’s just one of the cruel tricks this disease plays. I decided to venture online and began to get a greater sense of the depth of the ME/CFS community there. Perhaps it was because I needed it so much right then (I’d dabbled around a bit before), but I was hooked. These people were speaking my language! Plus, I could rest mid-sentence if I needed to. Here were formerly active, capable, and successful people whose bodies and brains were so whacked out that simple physical or cognitive tasks could be overwhelming, and even lead to relapse. Many had been able to find a sense of acceptance despite the desolation of this disease and the toll it takes. Some were desperate and didn’t know if they could go on another day; they felt misunderstood, mistreated, and so very broken. It was both heartrending and encouraging and most of all, familiar.

At times going online was simply overwhelming. The combination of new terminology and technology I wasn’t very familiar with was daunting to say the least. It’s incredibly taxing to learn new things when your brain is a foggy mess. But the online advocacy community was so intriguing. Here was a group of people who were trying to rise up, be heard, and effect change. Most were doing it primarily from their beds. A few months into my forays online, HHS contracted with IOM to create a new case definition for ME/CFS. Suddenly I was signing petitions, writing letters, and urging family and friends to do the same. And all at once I went from being pleased that there was a group of people online who were speaking my language, to wondering just what language these people were speaking.

Things seemed to be in code. I’ve never been much for acronyms, and now I was swimming in them. Even Google was stumped at times. Adding to the confusion was how often simply rearranging the same letters meant something completely different: e.g. IOM, OMI, & IMO (or its perhaps more gracious variation, IMHO). Many a browsing session turned into an IAMGOTOBED experience. (Internet Acronym Mess Got Overwhelming, Tired Out Brain Ends Day)

Without advocates who were willing to educate me I would have been completely lost. There are many patient, inclusive, and kind people in this community. It takes work to bring someone up to speed, and it’s a steep learning curve for an absolute beginner. I am very appreciative of those who were—and continue to be—willing to use precious energy to answer my sometimes incredibly basic questions. The more I learn about the history of ME/CFS, the more my admiration grows for those who have been advocating tirelessly for years. (Well, maybe not tirelessly, but in spite of being profoundly tired.) There are also many who have worn themselves out trying to be heard.

These were people with strong opinions who felt passionate about their cause, but who didn’t always agree. The IOM contract was hugely divisive, and it was disconcerting to see how viciously some advocates attacked other advocates. It seemed so counterproductive, especially within a movement which faces the unique challenges this one does. It has been said that advocacy is a messy business and those who want to contribute should put on their “big girl pants” and grow a thicker skin. I’m sure that can be helpful advice, but it seems doubly challenging for people who are often so ill they rarely even put on pants. On the other hand, I’ve watched advocates who were sharply divided quickly leap to each other’s defense when attacks came from outside the community. I got the sense that this community feels sort of like a family.

I was enjoying this business of being an advocate. I was getting a better grasp of the technology, and with repeated use the terminology wasn’t so intimidating either. Then I ran across an opinion that gave me pause. Someone had posted that there were too many people claiming the title of advocate. They suggested that signing petitions and writing letters Does Not an Advocate Make. Well, I’m not a lobbyist or a lawyer, and I haven’t started a patient organization. I don’t run a support group or make films. I don’t even have a blog. So… maybe I’m just some sort of a wannabe advocate. I suppose the answer lies in how one defines ‘advocate’. I do know that I am advocating. And at times it comes at a substantial personal cost; it doesn’t take much to do that, unfortunately. But it feels good to be doing something; and for now I suppose that will have to be enough.

Through all this I’ve become more open about my illness with my friends and extended family. I’ve appealed to government representatives and become more willing to attempt to educate my various healthcare providers. After all, it takes courage simply to admit I have an illness as lame as Chronic Fatigue Syndrome sounds. And although Myalgic Encephalomyelitis now trips easily off my tongue, even my closest family has yet to master that mouthful consistently. I also feel a much greater responsibility to fight for others who are suffering, as well as those who will be struck down by this devastating disease.

So this week will be quiet, and a bit lonely. But I’m pleased that I have new friends and acquaintances that I didn’t have last year. Many are, without a doubt, Completely Legitimate Advocates. I still have so much to learn, and not nearly enough capacity to do everything I would like. But I’ve come to believe that my voice is important. After all, imo we need every voice we can get.

Parsing CFSAC

June 24th, 2014 17 comments

I feel like a broken record, saying that the June 16-17th CFS Advisory Committee meeting was frustrating. This meeting struck me as a tangle of threads that can only be understood by teasing them apart. There were signals buried in the discussion that should raise concern in the advocacy community. Rather than summarize the content of the entire meeting, I would like to parse some of the issues with you.
 

Toothless Recommendations

 
Watching group wordsmithing is always incredibly painful. I know many patients got frustrated during the Committee’s discussions of their recommendations. Despite the fact that Dr. Dane Cook’s group presented a comprehensive summary of the Researcher Recruitment Working Group rationale and well-drafted recommendations, the conversation still went off the rails a few times. Rather than recap the whole thing, I’ll just focus on the recommendations themselves.

The first recommendation was for NIH to fund and support a data platform for biobank and clinical data. The idea is based on the NDAR platform, and Dan Hall gave a great presentation on NDAR but not until after the CFSAC had already passed the recommendation. As a result of this backwards agenda, the CFSAC failed to discuss or include a very important element: funding.

Dan Hall estimated that cloning NDAR for ME/CFS would cost about $1 million, and then somewhat less to maintain annually thereafter. The CFSAC recommendation does not include the price tag for the data platform, and no one discussed the feasibility of requesting this kind of funding. Remember that $1 million is 20% of NIH’s annual spending on ME/CFS research. How likely is it that NIH will spend this kind of money on a data platform for us? I strongly support the recommendation, as a data platform like this is desperately needed and none of the non-profits have the resources to make it a reality. But even with the background support document drafted by Dr. Cook’s Working Group, it seems optimistic to believe that NIH will approve this in the short term.

The second recommendation for an RFA was very controversial, and discussed on both days. The original proposal was that the Trans-NIH ME/CFS Working Group, led by Dr. Mariela Shirley, would recommend the content of an RFA based on the P2P Workshop and the 2011 State of the Knowledge meeting. CFSAC members were appropriately concerned about voting for an RFA based on a document that won’t be written for many months. There was extensive argument, but a motion to remove the reference to P2P failed. Chris Williams (Solve ME/CFS Initiative) pointed out that the recommendation would be “toothless” without a dollar figure, but that was ignored.

There was also great controversy over whether to include a deadline for the RFA. A minority of the CFSAC members felt that including a date would kill the entire recommendation. One suggested deadline was December 31, 2015, but Dr. Alisa Koch (new CFSAC member) pointed out that this would mean grants would not even be reviewed until 2016, let alone funded. Eventually, the CFSAC amended the recommendation to state a deadline of “November 1, 2014, or as soon as feasible.” I agree wholeheartedly with the CFSAC members who pointed out that the “as soon as feasible” would be used by NIH to delay the RFA until whenever it sees fit.

Finally, the CFSAC voted to establish two new working groups. The first, suggested by Dr. Jose Montoya (new CFSAC member), will develop a case for Centers of Excellence. This is a long-standing and much repeated recommendation of CFSAC, and developing the case for it will be fantastic.

The second working group, suggested by Dr. Gary Kaplan, will examine ways to interface with Patients Like Me and push that out to the community. I was really surprised by this. While the presentation by Patients Like Me was impressive, Ben Heywood admitted that PLM has not invested any effort in building out the ME/CFS community there. There are multiple problems with the way ME/CFS is defined and measured on PLM. And not a single person raised the issue that PLM is a for-profit company. They aggregate and sell their data. I don’t see how the federal government (directly or through CFSAC) can undertake a project that will specifically benefit a single for-profit company.

The worrying signal here is the Committee’s failure to make its recommendations based on a full assessment of all the facts and a view of the overall landscape. Dr. Cook’s Working Group presented the best prepared recommendations we’ve seen in quite some time, but the failure to include target numbers and meaningful deadlines continues to be a problem.
 

Compromising to Get Along

 
The most disturbing thing about the meeting was the conflicting approaches of the CFSAC members. This was most on display during discussion of P2P and the RFA recommendation.

Dr. Cook explained that the reason the RFA recommendation included a reference to P2P was because the group believed NIH would wait for the P2P regardless of what CFSAC said. Therefore, the recommendation should just accept P2P as a done deal in order to avoid antagonizing NIH. Dr. Cook and Dr. Casillas, backed up by Dr. Nancy Lee, said the recommendation would fail otherwise. NIH has apparently sent a letter to IACFS/ME responding to their RFA request, and Drs. Friedberg, Cook and Lee all said that the letter states NIH will wait for the P2P before issuing an RFA (I haven’t seen this letter).

This conciliatory view was expressed most frequently by Dr. Gary Kaplan and Dr. Fred Friedberg (IACFS/ME). I copied down multiple statements from both. Dr. Kaplan said that CFSAC should be “more aligned” with NIH, making a “polite suggestion.” He said CFSAC should “be collegial so they’ll want to work with us.” He also said we have “nothing to fear” from P2P.

Dr. Friedberg was more emphatic. He said that the recommendation should not exclude something just because we might not like it, and that he doesn’t like us vs. them thinking. He said that the recommendation should “eliminate implicit antagonism,” and, “I don’t like the demand quality.” Regarding the prospect that CFSAC (or advocates) may not like some or all of the IOM and/or P2P recommendations, he said we should “make lemonade” rather than engage in “wholesale condemnation.”

The opposing view was expressed by Steve Krafchick, who said Dr. Kaplan’s collegial approach was “naive.” Dr. Mary Ann Fletcher specifically responded to Dr. Kaplan’s comment as well, saying that the CFSAC charter doesn’t say anything about getting along with NIH. She said that the Committee’s job was to advise the Secretary as experts in the field, and that they were not being fair to patients by putting things off to be collegial.

There was an inherent contradiction in the research recommendations, too. The recommendation on the data platform was passed with no discussion of cost or likelihood of success. There is a need for a data platform, so the Committee recommended it – and that is as it should be. But for the RFA, the majority felt that P2P should be accepted as part of the process simply because that is how NIH appears to be doing business, regardless of the fact that everyone agreed that RFA funding was needed now.

The worrying signal here was identified by Mary Dimmock (from the audience). She pointed out that it was a dangerous precedent to put forward recommendations that seemed likely to succeed, as opposed to the best recommendations that are most needed. I could not agree more. CFSAC’s job is to give the Secretary the best advice, not the advice that the Secretary or the agencies want to hear.
 

Moving forward . . . . together?

 
The last session of the meeting was facilitated by Deputy Assistant Secretary Anand Parekh. I was fascinated by the move to bring him in to lead this discussion. Was this a tacit recognition that Dr. Nancy Lee has had difficulty facilitating discussion about IOM, like the awkward session at the December 2013 CFSAC meeting? The other new development was an actual open forum. In the past, “open” discussion with the audience has been limited to the Chairman selecting questions that have been written on index cards. In this case, members of the audience were handed a microphone and they could address the Committee directly. I wish this had been more widely publicized (a simple email on the CFSAC listserv would have sufficed). I am probably not the only person who would have risked the health consequences to attend for that opportunity. Several prominent advocates had left the meeting by then, as well.

Margaret Jacobs from the American Epilepsy Association presented on the epilepsy community’s experiences with their own IOM report and subsequent cooperation with HHS. Because a number of epilepsy organizations helped fund that IOM study, they had input into the statement of work, received monthly status calls, and received the recommendations a week before public release so they could prepare their messaging. The cooperation before and during the IOM process laid a strong foundation for continued cooperation afterwards, with the epilepsy community and HHS working together.

The same is true for our situation: what happened before the IOM study is setting the stage for what will come after. HHS pursued the IOM study in secret without involving the stakeholders outside the federal government. The ME/CFS advocacy community found out about the contract by accident, and when we protested, HHS simply changed the contract mechanism to one that did not require public notice. There was no collaboration, no engagement, and communications were terrible.

Now HHS seems to think we can all come to the table and work together. I am deeply troubled by the fact that the government holds all the cards here. They will have about a week to prepare messaging on the IOM report, while we will have no opportunity to do so. The P2P report is issued pretty quickly after the meeting, but NIH will be in control of the press conference push behind the report. This simply isn’t creating a dynamic where the stakeholders can actually collaborate. I’m not sure if it will be possible, and the content of the IOM/P2P reports is only one factor in the way.

The worrying signal here is the open question of whether HHS actually wants to change the paradigm and is willing to do the work necessary. Dr. Lee said they “don’t want to do this without [community] involvement,” but if she means the kind of involvement we have had to date, then there is nothing to really talk about. It is going to take a great deal of work on both sides to change the trajectory here.

Dr. Parekh said twice that “there is a lot of angst among patient groups about IOM.” It’s not angst. We have legitimate scientific and policy concerns. Angst is easily dismissed as unreasonable anxiety. I do not know if HHS understands and appreciates the difference.

 

P2P Agenda Fatigue

May 22nd, 2014 20 comments

HHS officials have made confusing statements about the goals of the P2P Workshop, but I have obtained documents through FOIA that give us insight into the structure of the meeting. Two versions of the Workshop draft agenda strongly suggest that the meeting will be focused on the broad category of unexplained fatigue, and the most effective treatments for that symptom.
 

The Agenda Documents

 
I obtained these two agenda documents from NIH through a FOIA request. The first is titled “DRAFT Agenda” (previously obtained by another advocate, as well) and the second is titled “Agenda Example.” Neither document is dated, but circumstantial evidence suggests that both were drafted after the January 2014 Working Group meeting.

The Draft includes a list of possible speakers, including several advocates to address the “Patient Perspective.” My name is on that list, but I have not been contacted by anyone at NIH at any time about serving in that capacity. I don’t even know who put my name forward. Whether an invitation will be extended to me remains to be seen.

The Key Questions as presented in the Draft and Example documents are likely out of date, now that the systematic review protocol has been published. The Questions from the evidence review will structure the meeting, but the agendas are important indicators of NIH’s perspective and overall approach to the meeting.
 

Framing With Fatigue

 
Both of the documents include the same description of the overview that will begin the meeting:

Dr. Maier will detail the topic and why it is of public health importance:

Overwhelming fatigue or malaise as a public health problem
Controversies that exist
Treating ME/CFS with drug and non-drug therapies

Just to be sure you didn’t miss it, here is the framing for this meeting on ME/CFS: Overwhelming fatigue or malaise as a public health problem. Not ME/CFS as a public health problem. Not post-exertional malaise as a public health problem. Not cognitive dysfunction and disability as a public health problem. To NIH, “overwhelming fatigue” is the public health problem.

This is so wrong. It ignores what we’ve been saying about our experiences for years. It ignores the science on PEM and cognitive dysfunction. It ignores the fundamental question of what disease or diseases are being included in the CFS bucket. In fact, it steps back in time to the Oxford definition: overwhelming fatigue alone.

The real public health problem is that since 1988, CFS has been a wastebasket and dumping ground for people with unexplained chronic fatigue. Some of those people have depression, anxiety, MS, and other illnesses. Some of those people have medically unexplained fatigue. Some of those people have a disease characterized by PEM and cognitive dysfunction.

To lump all of that together as a public health problem of “overwhelming fatigue” completely and utterly misses the point. It perpetuates the hand waving and blurred lines in the government’s approach to my disease, and there’s just no excuse for it at this point.
 

Treatment Barges In

 
At the May 2013 CFSAC meeting, Dr. Maier said that treatment research was part of the evidence review, but she portrayed it as relating back to the case definition question:

The goal of the evidence-based methodology workshop is to understand and identify how the evidence shows up for case definitions, for outcomes, for interventions, and for treatments. If it turns out that some interventions have more impact or a more positive outcome for post-exertional malaise, then we’re going to know that post-exertional malaise in a case definition is probably going to be a good thing to do. Dr. Maier, May 23, 2013 CFSAC Minutes, p. 11. (emphasis added)

But now we know that the evidence review will ask about treatment harms and benefits, and the characteristics of subgroups, responders and non-responders. The agenda documents reveal what this treatment focus will look like.

The Draft Agenda focuses on tools to measure outcomes, rather than comparative effectiveness of treatments. The Agenda Example document is very different. Here are the treatment presentations from that document (each topic allocated 20 minutes):

Cognitive Behavioral Therapy
Graded Exercise Programs
Symptom-based Medication Management
Harms
Patient-Centered Outcomes
Quality of Life

So we have an evidence review that lumps all the case definitions together, including Oxford. And we have an agenda that gives more time to CBT and GET than it does to symptom-based medication. And there is nothing here on disease-modifying treatment, like rituximab, Ampligen, or antivirals.

The topic selection and allocation of time among these treatment topics sends a subtle but powerful message to the Panel of non-ME/CFS experts, especially in light of the failure to distinguish among the case definitions at the outset. Previous evidence reviews, including AHRQ’s 2001 review and the Brurberg et al. review published in February, found no significant differences among case definitions or treatment outcomes, but those reviews were not set up to critically examine those differences. And as I’ve already pointed out, this current review assumes that differences among definitions represent subtypes and not separate diseases.
 

Design Flaws

 
The agenda documents show that the P2P Workshop is fundamentally flawed. The meeting is framed with the public health problem of “overwhelming fatigue.” The evidence review will include studies on adults with fatigue, and exclude those with unspecified underlying diagnoses. All the case definitions are lumped together for the purposes of assessing the reliability of those definitions and the effectiveness of treatments.

The evidence review and meeting agenda should begin with the proper scientific question: are ME and CFS the same disease, separate diseases, or points along a spectrum of fatiguing illnesses? That was the original starting question in the AHRQ evidence review contract, by the way. But it’s gone. The decision was made (not sure by whom) to assume the answer: that it is all one disease, separated only by subgroups. That assumption is the fatal flaw in this entire enterprise.

Remember that the P2P outcome will be decided by a panel of non-ME/CFS experts. We don’t know how they will be screened for bias. We won’t know who they are until shortly before the meeting. We will have no input into their selection.

This is not good science. This is sloppy, not precise. To revisit Dr. Maier’s jury analogy: this process will ask the allegedly impartial jury (selected by only one side) to reach conclusions based on evidence that has been marred by bias and assumptions. Maybe they will reach the right conclusions, or maybe the deck is stacked against us.

We have to find ways to speak out about this. I’m working on something right now, and there will be ways for you to express your own concerns.  I hope you will join me.

 

Will the Real P2P Please Stand Up?

May 19th, 2014 22 comments

What is the purpose of the ME/CFS P2P meeting at NIH? You would think that we would know by now, since Assistant Secretary Dr. Howard Koh first announced the effort in October 2012. But to say the rhetoric has evolved over time would be a kind description.

HHS keeps changing the answers to questions about the purpose of the workshop, what kind of research is on the table, and whether the ME/CFS experts have meaningful input.

To me, it looks more like a bait and switch: the further back in time you look, the better the meeting sounded, and key information (like the panel being composed entirely of non-experts) was withheld until the very last minute. The reality of this meeting is very different from the picture they portrayed early on.
 

Are we making a research case definition or not?

 
First they told us the workshop would create a new case definition:
 

The NIH has made a commitment to conduct an evidence‐based review of the status of ME/CFS research and also convene a dedicated workshop to address the research case definition for ME/CFS. Dr. Howard Koh, October 3, 2012 CFSAC Minutes, p. 5.

To address the highest priority identified, which was “case definition issues,” the Working Group submitted a competitive application for an Evidence-based Methodology Workshop (EbMW) on ME/CFS coordinated by the NIH Office of Disease Prevention. May 1, 2013, Response from Dr. Howard Koh to CFS Advisory Committee, p. 3

 
Then they told us it wouldn’t:
 

The purpose of the Pathways to Prevention Program and the ME/CFS workshop is not —and I repeat, not—to create a new case definition for research for ME/CFS. Dr. Susan Maier, December 11, 2013 CFSAC Minutes, p. 16.

 
But in the middle, they said the meeting might help with a research case definition:
 

This will not create a research case definition in the end, but will inform anyone who wants to do research in this area about what aspects of the case definition are really strong, which are really lacking, and how those holes might be filled. Dr. Beth Collins-Sharp, May 23, 2013 CFSAC Minutes, p. 16.

 

But the meeting is about research, right?

 
The answer might depend on the day, and the person you ask. Here are the research-oriented answers:
 

The purpose of an evidence-based methodology workshop is to identify methodological and scientific weaknesses in a scientific area and move the field forward through the unbiased and evidence-based assessment of a very complex clinical issue. Dr. Susan Maier, May 23, 2013 CFSAC Minutes, p. 6.

The takeaways from a systematic review are answers to the key questions that identify where there’s strong evidence, where there are gaps, and some ideas about how those gaps may be filled. Those are called research recommendations. Dr. Beth Collins-Sharp, May 23, 2013 CFSAC Minutes, p. 13.

It has the potential to be both [research and clinical], but understanding that we are a research organization and our focus is to improve the, um, the integrity of the science that is used for translation into clinical care means that we have to focus on besting the science that is used for the evidence. Dr. Susan Maier, Institute of Medicine Public Meeting, January 27, 2014, Minute 0:19.

 
Bob Miller, who served on the P2P Working Group, certainly thinks the meeting is about research:
 

NIH is hosting a Pathway to Prevention workshop to identify gaps in scientific research, to guide a path forward for NIH research. Bob Miller, March 11, 2014 CFSAC Transcript, p. 114.

 
But at other points, it appears the focus is back on the case definition:
 

The purpose of the Pathways to Prevention Program for ME/CFS is to evaluate the research evidence surrounding the outcome from the use of multiple case definitions for ME/CFS and address the validity, reliability, and ability of the current case definitions to identify those individuals with or without the illness or to identify subgroups of individuals with the illness who might be reliably differentiated with the different specific case definitions. Dr. Susan Maier, December 11, 2013 CFSAC Minutes, p. 16.

 
Doesn’t this assessment of multiple case definitions and what research tells us about subgroups sound like what the IOM panel is doing right now? And if IOM is already doing this, why do we need a separate process at NIH where the decision makers are ALL non-ME/CFS experts?
 

The expert gets to decide, right?

 
I went back through CFSAC minutes and other documents, looking for the first time Dr. Maier or another federal employee told an ME/CFS audience that the P2P Panel would be composed entirely of non-ME/CFS experts. It was January 27, 2014 in her presentation to the Institute of Medicine, when Dr. Maier offered her ill-fated “jury model” analogy. Dr. Susan Maier, Institute of Medicine Public Meeting, January 27, 2014, Minute 6:25.

Just to be clear, the earliest public discussion of P2P was October 2012, but it wasn’t until almost 16 months later that Dr. Maier finally told us that the P2P Panel would have no ME/CFS experts on it. Why did it take so long? Maybe the better question is: why January 2014? Would Dr. Maier have talked about her jury model of “They don’t know, they don’t know anything” if I had not already exposed this here on January 6, 2014? Maybe, but it strikes me as more than odd that despite at least two opportunities to tell CFSAC about the “jury model,” she waited until the IOM meeting to actually disclose it.

But the government says Don’t Worry! Your experts are participating!
 

The working group will meet to develop the questions that will form the basis of the evidence-based review, develop workshop themes and structures, suggest speakers, and develop an agenda for the meeting. The deliverable from this meeting will be a list of questions for the evidence review, themes for the workshop, perhaps a draft agenda, and any speaker names for those who will speak at the meeting. Dr. Susan Maier, December 11, 2013 CFSAC Minutes, p. 16-17.

Our experts and I had real input into the agenda and questions. The Working Group drove the agenda, and we will participate in the Workshop. I believe the prep work for the Workshop is being done with strong representation from our illness, laying the foundation for a good outcome. Bob Miller, January 12, 2014.

 
It sure sounds like that Working Group finalized the questions for the evidence review:
 

The Key Questions were defined by the Working Group of content experts at a planning meeting organized by the NIH Office of Disease Prevention. May 2, 2014 Email from CFSAC listserv.

 
There’s a problem, everybody. Multiple sources who are in a position to know what happened at the January 2014 Working Group meeting told me that the questions in the study protocol were not the questions defined at that meeting. Did something happen between the January meeting and the release of the study protocol? I don’t know whether someone continued to tinker with the questions, or why the Working Group was not consulted. But either the questions were significantly changed after the fact, or the information from my sources is deeply flawed.

Why is this a problem? Well, in addition to all the problems I documented with the study protocol, those questions form the structure of the P2P Workshop. Those questions give us a pretty good idea of what will be on the Workshop agenda, and I will supplement that with additional exclusive information in my next blog post.