
Illness Beliefs (or Why I Am Not an ME/CFS Activist)

October 21st, 2014 2 comments

Today, Joe Landson shares his thoughts on how false illness beliefs (or even cognitive biases) among scientists are holding our field back. Joe says it is time to tear down the walls and think horizontally – to the horizon, even. Let’s look at what is actually in front of us, and not what we expect to see.

I am not an ME/CFS activist because of incorrect illness beliefs. Yes, incorrect illness beliefs – as I see it, they’re the main challenge in ME/CFS. No, no, not our beliefs as patients – the Wessely School’s notions that our beliefs make us sick are absurd, and always were. No, I mean incorrect medical ideas generally, specifically ideas of what makes an illness ‘real’ or not, or what makes an illness at all.

Let me explain. No, there is too much, let me sum up. For a very long time, doctors and researchers defined an illness as ‘organic’ by the tissue damage they could see. For example, the tumors they could see, feel and biopsy made cancer ‘real’. It was damage to an organ; hence, organic.

However, this approach has had some treatment limitations. Cut the tumor out; more grow back. Eventually, medicos devised treatments to shrink the tumors and make them less likely to return: namely, chemotherapy and radiation.

For decades, treatment of cancer and other organic diseases generally improved. However, treatment of those other diseases, the ones without detectable organic damage, the so-called functional disorders, was decidedly mixed. It’s fair to say that treatment philosophies for those invisible functional disorders often (though not always) featured extraordinary contempt for the illness and the patient. From Dr. Lewis Yealland’s electric-shock treatment of World War One shell-shock patients to the Wessely School’s use of forced exercise for ME/CFS, it seems that contempt generally wins. Contempt is quick and seems to produce clear results… much the same way that cutting out the tumor seemed to ‘cure’ the cancer. Except it didn’t, and still doesn’t.

Meanwhile, cancer treatment has evolved. As I write this, the Food and Drug Administration (FDA) is fast-tracking radical new immunotherapies for cancer. Immunotherapy doesn’t affect the organic damage directly; rather, it blocks, dampens or corrects the immune signals that encourage the tumors or other organic damage to occur. It’s about the signals – the signals that tell the organic damage to start or stop.

This raises the question: What about immune signals that don’t produce organic damage? What about illnesses with a chronic pattern of bad immune signals, but no apparent organic damage at all? What if the signal pattern is the damage?

This mental leap surmounts the wall medicos have built between organic diseases and functional disorders. Both types of illness can potentially be treated the same way – perhaps even with the same drugs, if the Rituximab studies are any indication. Moreover, doctors now can sometimes detect and treat disease before the organic damage ever happens.

This shift in medical beliefs is going on all around us, but not for us, because most of the official gatekeepers of ME/CFS are working so very hard to keep this illness category locked in place, endlessly describing empirical symptoms instead of genuinely investigating their underlying mechanisms. In the constant balancing act in life between control and progress, they side with control. Rather than waste time arguing with these gatekeepers, I’d like to do an end run around them, and point out that all these invisible functional disorders are ‘organic’, if we only change our minds and amend what we mean by ‘organic’. Organic can be a pattern of immune signals rather than the organ damage of yesteryear. Similarly, I think our well-meaning friends who insist our bad signals must be located in our brains – organ damage all over again – are thinking too narrowly. Certainly our brains are deeply affected, but that doesn’t mean the bad signals start or end there.

When I grow up, I want to be a bomb-throwing medical anarchist. For NSA-types scanning this blog, no, I don’t mean actual bombs. I want to blow up medical ideas of what ‘organic disease’ really means. I want to explode the borders of medicine – the borders between organic disease and functional disorders; the borders between medicine and psychiatry generally. But just to keep you nervous, internet police, I can and will say that in Arabic if I have to!

This is a long way of explaining why I’m not an ME/CFS activist, per se. I think trying to maintain ME/CFS as a category is a narrow goal and a rigged game – rigged because our government seems dedicated to ‘evidence-based’ approaches to ME/CFS, rather than re-imagining the evidence we have. I think arguing over this or that definition of ME, or CFS, is a poor use of our time and energy, because none of the extant definitions capture the immune signals that I suspect (but can’t prove yet) make us sick. To me, all the tiny, empirical functional categories, from bipolar disorder to Morgellon’s, are empty shells of outdated thinking. In pure immune research, someone is finding those signals as we speak – it’s just not labeled ME/CFS research, or Morgellon’s or bipolar research. At least, not yet.

We should seek this research out, celebrate and promote it. We should do as some are already doing, and point out both the sorry current state of – and the immense future possibilities for – almost all the invisible illnesses. Most of all we should see and portray the invisible illnesses as part of a continuum of immune signaling disorders, beyond their separate, and inherently unequal, empirical definitions.

 

Comments on P2P Systematic Evidence Review

October 18th, 2014 24 comments

After four weeks of intense work, a group of advocates has submitted forty pages of comments on the P2P systematic evidence review. We published a summary of our comments last week. If you want to read the full document, you can view it in two pieces:

Part One addresses the issues with the Evidence Review’s base assumption that all CFS and ME definitions represent the same disease or set of closely related diseases, and the analysis and conclusions drawn regarding diagnostic methods, accuracy and concordance of definitions, subgroups and diagnostic harms.

Part Two addresses the analysis and conclusions drawn regarding treatment effects and harms; and issues related to applicability, reliability and future research directions.

I was proud to work with the following advocates who join me in making these comments:

  • Mary Dimmock
  • Claudia Goodell, M.S.
  • Denise Lopez-Majano, Speak Up About ME
  • Lori Chapo Kroger, R.N., PANDORA Org CEO and President
  • Pat Fero, MEPD, President, Wisconsin ME & CFS Association, Inc.
  • Darlene Fentner
  • Leonard Goodell, Jr.
  • Alan Gurwitt, M.D.
  • Wilhelmina D. Jenkins
  • Joseph Landson, M.S.
  • Margaret Lauritson-Lada
  • Jadwiga Lopez-Majano
  • Mike Munoz, PANDORA Org Board of Directors
  • Matina Nicholson
  • Charmian Proskauer
  • Mary M. Schweitzer, Ph.D.
  • Amy L. Squires, MPA
  • Susan Thomas
  • Erica Verrillo, Author

There is still time for you to submit your own comments. Feel free to draw inspiration from the work we publish here today, or the other posts I’ve published over the last several weeks.

 

Evidence Review Comments Preview

October 15th, 2014 21 comments

This post comes via Mary Dimmock, Claudia Goodell, Denise Lopez-Majano, and myself. You are welcome to publish it on your site with attribution and a link back to this post. You are also welcome to use this (and other material we’ve gathered) as a framework for your own comments on the draft evidence review - due October 20th.

It’s been a challenging few weeks, digesting and analyzing the AHRQ Draft Systematic Evidence Review on Diagnosis and Treatment of ME/CFS.  We continue to be deeply concerned about the many flaws in the review, in terms of both the approach it took and how it applied the study protocol.

Our comments on the Review will reflect our significant concerns about how the Evidence Review was conducted, the diagnostic, subgroup, and harms treatment conclusions drawn by this report, and the risk of undue harm that this report creates for patients with ME. We believe a final version should not be published until these scientific issues are resolved.

Most fundamentally, the Evidence Review is grounded in the flawed assumption that eight CFS and ME definitions all represent the same group of patients that are appropriately studied and treated as a single entity or group of closely related entities. Guided by that assumption, this Evidence Review draws conclusions on subgroups, diagnostics, treatments and harms for all CFS and ME patients based on studies done in any of these eight definitions. In doing so, the Evidence Review disregards its own concerns, as well as the substantial body of evidence that these definitions do not all represent the same disease and that the ME definitions are associated with distinguishing biological pathologies. It is unscientific, illogical and risky to lump disparate patients together without regard to substantive differences in their underlying conditions.

Compounding this flawed assumption are the a priori choices in the Review Protocol that focused on a narrower set of questions than originally planned and that applied restrictive inclusion and exclusion criteria. As a result, evidence that would have refuted the flawed starting assumption or that was required to accurately answer the questions was never considered. Some examples of how these assumptions and protocol choices negatively impacted this Evidence Review include:

  • Evidence about the significant differences in patient populations and in the unreliability and inaccuracy of some of these definitions was ignored and/or dismissed. This includes: Dr. Leonard Jason’s work undermining the Reeves Empirical definition; a study that shows the instability of the Fukuda definition over time in the same patients; studies demonstrating that Fukuda and Reeves encompass different populations; and differences in inclusion and exclusion criteria, especially regarding PEM and psychological disorders.
  • Diagnostic methods were assessed without first establishing a valid reference standard. Since there is no gold reference standard, each definition was allowed to stand as its own reference standard without demonstrating it was a valid reference.
  • Critical biomarker and cardiopulmonary studies, some of which are in clinical use today, were ignored because they were judged to be intended to address etiology, regardless of the importance of the data. This included most of Dr. Snell’s and Dr. Keller’s work on two day CPET, Dr. Cook’s functional imaging studies, Dr. Gordon Broderick’s systems networking studies, Dr. Klimas’s and Dr. Fletcher’s work on NK cells and immune function, and all of the autonomic tests. None of it was considered.
  • Treatment outcomes associated with all symptoms except fatigue were disregarded, potentially resulting in a slanted view of treatment effectiveness and harm. This decision excluded Dr. Lerner’s antiviral work, as well as entire classes of pain medications, antidepressants, anti-inflammatories, immune modulators, sleep treatments and more. If the treatment study looked at changes in objective measures like cardiac function or viral titers, it was excluded. If the treatment study looked at outcomes for a symptom other than fatigue, it was excluded.
  • Treatment trials that were shorter than 12 weeks were excluded, even if the treatment duration was therapeutically appropriate. The big exclusion here was the rituximab trial; despite following patients for 12 months, it was excluded because administration of rituximab was not continuous for 12 weeks (even though rituximab is not approved for 12 weeks continuous administration in ANY disease). Many other medication trials were also excluded for not meeting the 12 week mark.
  • Counseling and CBT treatment trials were inappropriately pooled without regard for the vast differences in therapeutic intent across these trials. This meant that CBT treatments aimed at correcting false illness beliefs were lumped together with pacing and supportive counseling studies, and treated as equivalent.
  • Conclusions about treatment effects and harms failed to consider what is known about ME and its likely response to the therapies being recommended. This means that the results of PACE (an Oxford study) for CBT and GET were not only accepted (despite the many flaws in those data), but were determined to be broadly applicable to people meeting any of the case definitions. Data on the abnormal physiological response to exercise in ME patients were excluded, and so the Review did not conclude that CBT and GET could be harmful to these patients (although it did allow that harm might be possible).
  • The Evidence Review states that its findings are applicable to all patients meeting any CFS or ME definition, regardless of the case definition used in a particular study.

The issues with this Evidence Review are substantial in number, magnitude and extent. At its root is the assumption that any case definition is as good as the rest, and that studies done on one patient population are applicable to every other patient population, despite the significant and objective differences among these patients. The failure to differentiate between patients with the symptom of subjective unexplained fatigue on the one hand, and objective immunological, neurological and metabolic dysfunction on the other, calls into question the entire Evidence Review and all conclusions made about diagnostic methods, the nature of this disease and its subgroups, the benefits and harms of treatment, and the future directions for research.

As the Evidence Review states, the final version of this report may be used in the development of clinical practice guidelines or as a basis for reimbursement and coverage policies. It will also be used in the P2P Workshop and in driving NIH’s research strategy. Given the likelihood of those uses and the Evidence Review’s claim of broad applicability to all CFS and ME patients, the flaws within this report create an undue risk of significant harm to patients with ME and will likely confound research for years to come. These issues must be addressed before this Evidence Review is issued in its final form.

 

They Know What They’re Doing (Not)

October 6th, 2014 18 comments

This post comes via Mary Dimmock, with assistance from Claudia Goodell, Denise Lopez-Majano, and myself. You are welcome to publish it on your site with attribution to Mary Dimmock.

 

Last week, Jennie Spotila and Erica Verrillo posted summaries of just some of the issues with AHRQ’s Draft Systematic Evidence Review, conducted for P2P.

Jennie and Erica highlighted serious and sometimes insurmountable flaws with this Review, including:

  • The failure to be clear and specific about what disease was being studied.
  • The acceptance of 8 disparate ME or CFS definitions as equivalent in spite of dramatic differences in inclusion and exclusion criteria.
  • The bad science reflected in citing Oxford’s flaws and then using Oxford studies anyway.
  • The well-known problems with the PACE trial.
  • The flawed process that used non-experts on such a controversial and conflicted area.
  • Flawed search methods that focused on fatigue.
  • Outright errors in some of the basic information in the report and apparent inconsistencies in how inclusion criteria were applied.
  • Poorly designed and imprecise review questions.
  • Misinterpretation of cited literature.

In this post, I will describe several additional key problems with the AHRQ Evidence Review.

Keep in mind that comments must be submitted by October 20, 2014. Directions for doing so are at the end of this post.

We Don’t Need No Stinking Diagnostic Gold Standard

Best practices for diagnostic method reviews state that a diagnostic gold standard is required as the benchmark. But there is no agreed upon diagnostic gold standard for this disease, and the Review acknowledges this. So what did the Evidence Review do? The Review allowed any of 8 disparate CFS or ME definitions to be used as the gold standard and then evaluated diagnostic methods against and across the 8 definitions. But when a definition does not accurately reflect the disease being studied, that definition cannot be used as the standard. And when the 8 disparate definitions do not describe the same disease, you cannot draw conclusions about diagnostic methods across them.

What makes this worse is that the reviewers recognized the importance of PEM but failed to consider the implications of Fukuda’s and Oxford’s failure to require it. The reviewers also excluded, ignored or downplayed substantial evidence demonstrating that some of these definitions could not be applied consistently, as CDC’s Dr. Reeves demonstrated about Fukuda.

Beyond this, some diagnostic studies were excluded because they did not use the “right” statistics or because the reviewer judged the studies to be “etiological” studies, not diagnostic methods studies. Was NK-Cell function eliminated because it was an etiological study? Was Dr. Snell’s study on the discriminative value of CPET excluded because it used the wrong statistics? And all studies before 1988 were excluded. These inclusion/exclusion choices shaped what evidence was considered and what conclusions were drawn.

Erica pointed out that the Review misinterpreted some of the papers expressing harms associated with a diagnosis. The Review failed to acknowledge the relief and value of finally getting a diagnosis, particularly from a supportive doctor. The harm is not from receiving the diagnostic label, but rather from the subsequent reactions of most healthcare providers. At the same time, the Review did not consider other harms like Dr. Newton’s study of patients with other diseases being diagnosed with “CFS” or another study finding some MS patients were first misdiagnosed with CFS. The Review also failed to acknowledge the harm that patients face if they are given harmful treatments out of a belief that CFS is really a psychological or behavioral problem.

The Review is rife with problems: Failing to ask whether all definitions represent the same disease. Using any definition as the diagnostic gold standard against which to assess any diagnostic method. Excluding some of the most important ME studies. It is no surprise, then, that the Review concluded that no definition had proven superior and that there are no accepted diagnostic methods.

But remarkably, reviewers felt that there was sufficient evidence to state that those patients who meet CCC and ME-ICC criteria were not a separate group but rather a subgroup with more severe symptoms and functional limitations. By starting with the assumption that all 8 definitions encompass the same disease, this characterization of CCC and ICC patients was a foregone conclusion.

But Don’t Worry, These Treatment Trials Look Fine

You would think that at this point in the process, someone would stand up and ask about the scientific validity of comparing treatments across these definitions. After all, the Review acknowledged that Oxford can include patients with other causes of the symptom of chronic fatigue. But no, the Evidence Review continued on to compare treatments across definitions regardless of the patient population selected. Would we ever evaluate treatments for cancer patients by first throwing in studies with fatigued patients? The assessment of treatments was flawed from the start.

But the problems were then compounded by how the Review was conducted. The Review focused on subjective measures like general function, quality of life and fatigue, not objective measures like physical performance or activity levels. In addition, the Review explicitly decided to focus on changes in the symptom of fatigue, not PEM, pain or any other symptom. Quality issues with individual studies were either not considered or ignored. Counseling and CBT studies were all lumped into one treatment group, without consideration of the dramatic difference in therapeutic intent between the two. Some important studies like Rituxan were not considered because the treatment duration was deemed too short, regardless of whether it was therapeutically appropriate.

And finally, the Review never questioned whether the disease theories underlying these treatments were applicable across all definitions. Is it really reasonable to expect that a disease that responds to Rituxan or Ampligen is going to also respond to therapies that reverse the patient’s “false illness beliefs” and deconditioning? Of course not.

If their own conclusions on the diagnostic methods and the problems with the Oxford definition were not enough to make them stop, the vast differences in disease theories and therapeutic mechanism of action should have made the reviewers step back and raise red flags.

At the Root of It All

This Review brings into sharp relief the widespread confusion on the nature of ME and the inappropriateness of having non-experts attempt to unravel a controversial and conflicting evidence base about which they know nothing.

But just as importantly, this Review speaks volumes about the paltry funding and institutional neglect of ME reflected in the fact that the study could find only 28 diagnostic studies and 9 medication studies to consider from the last 26 years. This Review speaks volumes about the institutional mishandling that fostered the proliferation of disparate and sometimes overly broad definitions, all branded with the same “CFS” label. The Review speaks volumes about the institutional bias that made the biggest, most expensive, and most numerous treatment trials the ones that studied behavioral and psychological pathology for a disease long proven to be the result of organic pathology.

This institutional neglect, mishandling and bias have brought us to where we are today. That the Evidence Review failed to recognize and acknowledge those issues is stunning.

Shout Out Your Protest!

This Evidence Review is due to be published in final format before the P2P workshop and it will affect our lives for years to come. Make your concerns known now.

  1. Submit public comments on the Evidence Review to the AHRQ website by October 20.
  2. Contact HHS and Congressional leaders with your concerns about the Evidence Review, the P2P Workshop and HHS’ overall handling of this disease. Erica Verrillo’s recent post provides ideas and links for how to do this.

However you choose to protest, make your concerns known!

 

A Review of the P2P Systematic Review

September 29th, 2014 44 comments

The draft systematic evidence review on the Diagnosis and Treatment of ME/CFS was published online last week. It’s a monster – 416 pages in total. I know many ME/CFS patients may not be able to read this report, so in this post I’m going to focus on three things: the purpose of the report, the lumping of multiple case definitions, and the high quality rating given to the PACE trial. If you read nothing else about this systematic review, then these are the biggest takeaway messages.

The Purpose of the Systematic Review

NIH requested the review for the purposes of the P2P Workshop, and the Agency for Healthcare Research and Quality contracted with Oregon Health & Science University to perform the review for about $350,000.

The primary purpose of the review is to serve as the cornerstone of knowledge for the P2P Panel. The Panel will be made up entirely of non-ME/CFS experts. In order to give them some knowledge base for the Workshop presentations, the Panel will receive this review and a presentation by the review authors (behind closed doors). Until the Workshop itself, this review will be the Panel’s largest source of information about ME/CFS.

But that is not the only use for this report. AHRQ systematic reviews are frequently published in summary form in peer reviewed journals, as was the 2001 CFS review. The report will be available online, and will be given great credence simply because it is an AHRQ systematic review. The conclusions of this review – including the quality rating of the PACE trial – will be entrenched for years to come.

You can expect to see this review again and again and again. In the short term, this review will be the education given to the P2P Panel of non-ME/CFS experts in advance of the Workshop. But the review will also be published, cited, and relied upon by others as a definitive summary of the state of the science on diagnosing and treating ME/CFS.

Case Definition: I Told You So

When the protocol for this systematic review was published in May 2014, I warned that the review was going to lump all case definitions together, including the Oxford definition. After analyzing the review protocol and the Workshop agenda, Mary Dimmock and I wrote that the entire P2P enterprise was based on the assumption that all the case definitions described the same single disease, albeit in different ways, and that this assumption put the entire effort at risk. Some people may have hoped that a systematic review would uncover how different Oxford and Canadian Consensus Criteria patients were, and would lead to a statement to that effect.

Unfortunately, Mary and I were correct.

The systematic review considered eight case definitions, including Oxford, Fukuda, Canadian, Reeves Empirical, and the International Consensus Criteria, and treated them as describing a single patient population. They lumped all these patient cohorts together, and then tried to determine what was effective in diagnosing and treating this diverse group. The review offers no evidence to support their assumption, beyond a focus on the unifying feature of fatigue.

What I find particularly disturbing is that the review did acknowledge that maybe Oxford didn’t belong in the group:

We elected to include trials using any pre-defined case definition but recognize that some of the earlier criteria, in particular the Oxford (Sharpe, 1991) criteria, could include patients with 6 months of unexplained fatigue and no other features of ME/CFS. This has the potential of inappropriately including patients that would not otherwise be diagnosed with ME/CFS and may provide misleading results. (p. ES-29, emphasis added)

But then they did it anyway.

This is inexplicably bad science. How can they acknowledge that Oxford patients may not have ME/CFS and acknowledge that including them may provide misleading results, and then include them anyway? Is it just because Oxford papers claim to be about CFS and include people with medically unexplained fatigue? The systematic review authors clearly believed that this was a sufficient minimum standard for inclusion in analysis, despite the acknowledged risk that it could produce misleading results.

I will have a lot more to say on this topic and the problems in the review’s analysis. For now, the bottom line takeaway message is that the systematic review combined all the case definitions, including Oxford, and declared them to represent a single disease entity based on medically unexplained fatigue.

PACE is Ace

One of the dangers of the review’s inclusion of the Oxford definition and related studies was the risk that PACE would be highly regarded. And that is exactly what happened.

The PACE trial is one of seven treatment studies (out of a total of thirty-six) to receive the “Good” rating, which has a specific technical meaning in this context (Appendix E). In the systematic review, a randomized controlled trial is “Good” if it includes comparable groups, uses reliable and valid measurement instruments, considers important outcomes, and uses an intention-to-treat analysis. I’m certainly no expert in these issues, but I can spot a couple of problems.

First of all, the PACE trial may have used comparable groups within the study, but that internal consistency is different from whether the PACE cohort was comparable to other ME/CFS patients. The systematic review already acknowledged that the Oxford cohort may include people who do not actually have ME/CFS, and in my opinion that is the comparable group that matters.

In terms of important outcomes, the systematic review focused on patient-centered outcomes related to overall function, quality of life, ability to work and measures of fatigue. Yet there is no discussion or acknowledgement that patient performance on a six-minute walking test at the end of PACE showed that patients remained severely impaired. There is also no acknowledgement that a patient could enter PACE with an SF-36 score of 65, leave the trial with a score of 60, and be counted as recovered. That is because so many changes were made to the study in post-hoc analysis, including a change to the measures of recovery. Incredibly, the paper in which the PACE authors admit to those post-hoc changes is not cited in the systematic review. It is also important to point out that much of the discussion of the PACE flaws has occurred in Letters to the Editor and other types of publications, many of which were wholly excluded from the systematic review.

Again, I will have a lot more to say about how the systematic review assessed treatment trials, particularly trials like PACE. For now, the takeaway message is that the systematic review gave PACE its highest quality rating, willfully ignoring all the evidence to the contrary.

Final Equation

Where does this leave us, at the most basic and simple level?

  • The review lumped eight case definitions together.
  • The review acknowledged that the Oxford definition could include patients without ME/CFS, but forged ahead and included those patients anyway.
  • The review included nine treatment studies based on the Oxford definition.
  • The review rated the PACE trial and two other Oxford CBT/GET/counseling studies as “Good.”
  • The review concluded that it had moderate confidence in the finding that CBT/GET are effective for ME/CFS patients, regardless of definition.

If that does not make sense to you, join the club. I do not understand how it can be scientifically acceptable to generalize treatment trial results from patients who have fatigue but not ME/CFS to patients who do have ME/CFS. Can anyone imagine generalizing treatment results from a group of patients with one disorder to patients with another disease? For example, would the results of a high cholesterol medicine trial be generalized to patients with high blood pressure? No, even though some patients with high blood pressure may have elevated cholesterol, we would not assume the risk of generalizing results from one patient population to another.

But the systematic review’s conclusion is the predictable output of an equation that begins with treating all the case definitions as a single disease entity.

I will be submitting a detailed comment on the systematic evidence review. I encourage everyone to do the same because the report authors must publicly respond to all comments.  More detailed info will be forthcoming this week on possible points to consider in commenting.

This review is going to be with us for a long time. I think it is fair and reasonable to ask the authors to address the multitude of mistakes they have made in their analysis.

Edited to add: Erica Verrillo posted a great summary of problems with the review, as well.

 

Mary Dimmock: Fight the Power

September 25th, 2014 42 comments

The draft P2P evidence review report has been issued and we have all had a chance to see just how appallingly bad it is. Now the question is what to do next.

Some have called for us to oppose P2P by boycotting it. I absolutely agree that we must oppose P2P. But where I differ is in the nature and breadth of tactics that we need to use.

What has been done to ME patients for thirty years, and is being perpetuated in this P2P evidence review report, is scientifically indefensible and irresponsible. It starts with the fact that the entire “CFS” enterprise as a clinical entity has been constructed on the sole basis of medically unexplained chronic fatigue. Seriously? Where is the scientific justification and evidence that, given the current state of our medical knowledge, all of the conditions encompassed by the common, ill-defined symptom of fatigue are the same medical condition that should be studied and treated as one? There is none and there never can be.

And yet, for thirty years, that pseudoscience has held ME patients hostage in a living hell.

Such pseudoscience is the bread and butter of those with agendas to keep science from moving forward or to protect their own vested interests. We saw it with cigarette smoking and acid rain, and we are seeing it again with climate change. But you don’t fight climate change deniers by refusing to engage with their claims. You fight them by exposing where their “facts” are wrong, their “science” is unsound, and their agendas are driven by self-interest. This is what ME advocates have been doing with the PACE trial, and in my opinion it is what we need to do with P2P.

Providing formal input to P2P allows us to expose the “science” of “CFS” for the scientific sham that it is. AHRQ (Agency for Healthcare Research and Quality, part of HHS) must respond to our comments, which become part of the public record that we can use later. Providing such input is as valid and necessary a form of protest as boycotting or writing letters directly to HHS leaders. All forms of opposition are needed.

But given the history of this disease, we should be under no delusion that, left to its own devices, HHS will listen to our P2P opposition, whether it takes the form of letters to HHS leaders, boycotting the meeting, or the submission of comments on the evidence review. Each can be dismissed by those who have chosen not to listen.

And much more fundamentally, we need to remember that P2P is just one event in a string of utter failures in HHS’ public policy toward ME that stretches back to Incline Village. You all know the issues – lack of research funding, harmful medical guidelines, abysmal medical care, lack of a strategy, the nightmare of insurance, disability and school accommodations and an agency hell-bent on acting unilaterally and with complete disregard of both disease experts and patients.

Ultimately, the real question is not what specific form our opposition to P2P should take. There is a place for all actions that shine a light on this travesty. Whatever you choose, make sure your voice is heard. Do not let your silence be construed as consent.

The real question is what else we are going to do to protest, not only P2P but every other aspect of HHS’ handling of this disease for the last thirty years.

If ever there was a time for us to revolt as a community, by whatever means available, it is now.

Contact your congressional leaders and ask every one of your family and friends to do the same. Call your local and/or national media. Take to Twitter. Sue the government. Contact the ACLU. Conduct a lie-down demonstration. Protest at P2P. Whatever means of opposition you can think of and are able to do, just do it!

 

NIH Says No, and Also No

September 23rd, 2014 19 comments

With no announcement or fanfare, the CFS Advisory Committee has posted a response from HHS to the June 2014 recommendations. My information is that – inexplicably – even CFSAC members were not notified when the response was posted. I urge you to read the entire response, but I am going to focus on just a few sentences. There are very serious implications for the future of ME/CFS research, but despite NIH’s entrenched position, there are still things we can do about it.

No Data Sharing Platform For You

The first recommendation was that NIH create and maintain a data sharing platform for ME/CFS research. NIH’s response? No. But their reasoning is remarkable:

[D]eveloping and maintaining a unique ME/CFS database is cost prohibitive in light of the small number of researchers . . . the cost of developing and maintaining an ME/CFS database would significantly reduce funds available for funding research on ME/CFS . . .

Translation: There are not enough of you to make this platform idea worth the money.

But the implication of that last sentence is astounding: maintaining such a database would reduce the funds available for research. Translation: NIH will only spend a fixed amount of money on ME/CFS. Even if NIH decided to create a database, there would be no increase in funds to cover the cost – that money would simply be reallocated from grants.

The background document to the recommendation specifically states that a central data sharing platform would “greatly accelerate research discovery” and foster “opportunities for new scientists to enter the field.” The platform would lower barriers to conducting ME/CFS research. But NIH responds: No, because there aren’t enough researchers and we won’t increase our ME/CFS spending.

Put another way, ME/CFS has a problem because there are not enough researchers. CFSAC proposes a solution of a data platform that could attract the interest of new researchers. NIH says no, because you don’t have enough researchers.

Wait, what?

There Will Be No RFA

The second recommendation was that NIH fund an RFA to address the gaps in ME/CFS research. NIH’s response? No. And the reasoning on this one will make your head hurt, it is so circuitous.

Unfortunately there remains a lack of definitive evidence regarding the etiology, diagnosis, and treatment for ME/CFS. As such, issuing a Request for Applications (RFA) would not be an effective strategy as RFAs generally encourage a narrowly defined research area that addresses more specific gaps in scientific knowledge.

First of all, NIH issued an RFA for ME/CFS in 2006 and it was targeted at Neuroimmune Mechanisms and Chronic Fatigue Syndrome. So the gaps were obvious enough to issue an RFA eight years ago, and more gaps were identified at the 2011 State of the Knowledge meeting, but now we don’t know enough to target those gaps????

Second of all, why is there a “lack of definitive evidence”? Obviously, because NIH funding at $5 million a year is not likely to produce much in the way of definitive evidence on etiology, diagnosis and treatment.

It seems to me that what NIH is actually saying is: we haven’t provided enough funding to identify definitive evidence, and because you haven’t identified definitive evidence we can’t provide you with more funding. If that doesn’t qualify as circular reasoning, I don’t know what does.

What this response tells us is that if NIH persists in this approach, we will be waiting a long time for an RFA or an increase in funding. We will have to wait either for a) a miracle discovery on etiology, diagnosis and treatment, or b) the 10 to 15 years it will take for the career development idea to produce more researchers doing ME/CFS research.

Despite the thorough background and support for the recommendation provided by CFSAC, despite letters from members of Congress in support of an RFA, despite the pleas of advocates and organizations like IACFS/ME, NIH is steadfastly refusing to provide the one thing that we know would accelerate research progress: the money. UNACCEPTABLE.

What You Can Do

The NIH response leaves the door open just a crack – and that crack could make all the difference. The response says that RFAs are “designed to build upon recommendations . . . that incorporate findings from workshops and conferences.” Remind you of anything? Think P2P.

This makes the P2P Workshop more mission critical than ever, especially now that the draft systematic review has been published. The P2P report is supposed to identify gaps in ME/CFS research. NIH has left the door open to an RFA that incorporates findings from workshops. So we need to do everything possible to make sure the P2P report identifies accurate and appropriate gaps.

The systematic review says that CBT is moderately effective. It treats all the case definitions as equivalent. Remember that this review is the single piece of evidence given to the P2P Panel in advance of the Workshop. Do you want the P2P Panel report to incorporate those findings? Do you want an RFA based on findings like that?

I don’t. So here is what you can do:

Now is not the time to lie down. NIH says No? I say push back. This is a critical moment. If we slip and fall now, the consequences will affect us for many years to come.

 

Draft Systematic Review is UP

September 22nd, 2014 7 comments

The draft systematic evidence review on the Diagnosis and Treatment of ME/CFS has been published.

This review is extraordinarily important because it is being presented to the P2P Panel in a closed-door session any day now. This review will be the only evidence presented to the P2P Panel in advance of the Workshop on December 9-10, 2014. Expert presentations at the Workshop may support, refute or expand upon the review, but it is likely that the Panel will give very heavy weight to this report.

I have not read the report yet, and will hold off on commenting until I do. A group of advocates is working together to review the material and prepare highlighted issues that others can use in their comments.

Public comment on the review will be accepted through October 20th. Regardless of whether you plan to submit comment, please read at least the executive summary of this report if you are able to do so. It will be one of the most important documents on ME/CFS published by the government this year.

 

P2P Participation, Part 2

September 18th, 2014 14 comments

I have new information on participation in the Pathways to Prevention ME/CFS Workshop:

The Office of Disease Prevention confirmed via telephone that the public will be able to participate in discussion at the P2P Workshop, in person and online. ODP explicitly said that people attending in person can ask questions or make comments via microphones or computers in the room. Webcast viewers can type in comments and questions in a comment box on the webpage. There is a total of 3.5 hours of “Discussion” time noted on the draft agenda, and this is when public input will be addressed. The ME/CFS meeting will follow a procedure very similar to the upcoming P2P meeting on opioid use, so we will be able to see how it works. While there is no guarantee of how much we will be included in the discussion, I am very glad that we finally got some clarity on this issue.

Dr. Susan Maier (NIH) confirmed via email that the comment period on the P2P final report will be extended. Originally, we were going to have from December 12 to December 26th to submit comment on this vital report on the direction of ME/CFS research. This is the worst possible timing for a population as disabled as ME/CFS patients, falling right at the holidays. Multiple groups and individuals requested an extension of this time as an accommodation of our disability. Dr. Maier has confirmed that the comment deadline will be extended to 30 days, meaning the new deadline should be around January 12, 2015. This is a fair and reasonable period of time, and I thank NIH for making this accommodation.

So here is where I repeat my plea for as many people as possible to attend the meeting on December 9-10th, watch it via webcast, and comment on the draft report. Register for the meeting here.

I know that some advocates believe that watching the meeting or submitting comments is some kind of endorsement of the process, and that this participation will be used against us. I strongly disagree. Silence will be interpreted as consent. This is especially true given that we now have better opportunities to participate (although it remains to be seen how many of our questions are actually addressed, of course). We have been complaining for years that NIH needs to do more about ME/CFS, and now they believe they are taking a big step to do more.

I am on record as saying that I believe the P2P Workshop is fundamentally flawed in its present form. But I will attend this meeting, I will ask questions, and I will submit comment. I am not doing so because I think I can fix the fundamental flaw by myself. I am doing so – I am doing all the P2P work I have done – because at the very least, I will make sure that this process is conducted in the light. I will make sure that people know what is being done, how and by whom.

P2P is offering us a tiny itty bitty piece of a microphone. I say hold on, and speak up.

 

Charter Changes

September 16th, 2014 Comments off

It came down to the wire, but HHS Secretary Sylvia Burwell has renewed the charter of the CFS Advisory Committee. While there are no sweeping changes to the charter, some of the changes may have you scratching your head.

CFSAC is a chartered advisory committee, meaning that it is created by the head of HHS and must, by law, be reauthorized every two years. The charter is the operational framework for the committee, defining its purpose and the basics of its functioning. Regulations and HHS policy run in the background, but the charter sets many of the rules. I did a line-by-line comparison between the old and new charters to see what will be different for the next two years.

All in a Name

Throughout the charter, the word CFS has been replaced with ME/CFS. On the one hand, this reflects the overall change in how people refer to this illness. But the name of the committee is the same; it is not the ME/CFS Advisory Committee. It is still CFSAC, despite the changes in the document itself.

The other puzzler here is the fact that the IOM study includes a recommendation on the name of the disease. What will happen if IOM says the disease should be called ME or Ramsay’s Disease or something entirely new? Will we have to fight HHS all over again for them to use the appropriate terminology?

Purpose

The purpose of the CFSAC is unchanged: to provide advice and recommendations on a broad range of issues related to ME/CFS. As a side note, it is interesting to see how the areas covered by CFSAC have been stripped away by other initiatives. The CFSAC is supposed to advise on the state of knowledge and gaps in research, but that’s being done by P2P. Impact and implications of diagnostics and treatment is partly covered by IOM. Development of education programs is partly IOM and partly CDC (which has strongly resisted CFSAC’s attempts to influence here). Partnering to improve patient quality of life is about the only thing still solidly CFSAC.

Report Structure

As in previous charters, CFSAC makes its recommendations to the Secretary through the Assistant Secretary. Management and support services are provided by the Office of the Assistant Secretary, as before. The Office of Women’s Health (OWH) has never been mentioned by name in the charter, but there is little doubt that the CFSAC will remain in that office. The new DFO, Barbara James, is a staff member in OWH, and Dr. Nancy Lee has said she will remain available to assist in the transition.

Money

The most significant change in the charter is the committee budget. The annual cost of operating the committee, which includes the travel stipend but excludes the cost of staff support, has decreased by 47%. This probably reflects the move to only one in-person meeting per year.

However, the cost of staff support has gone up. The estimated staff time is 1.5 full-time equivalent staff for the year. This does not mean that one person works only on CFSAC, though; it is an estimate of combined staff time, and presumably includes the contractor cost for the meetings. The cost of that staff time has increased by almost 52%.

Overall, the budget for CFSAC has increased by about 12%. That sounds reasonable, but it comes at the cost of an in-person meeting. If the travel stipend had been retained for two meetings per year, the increase would have been at least 33%.
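For readers who want to check that the stated percentages hang together, here is a minimal arithmetic sketch. The dollar figures are hypothetical (the actual amounts appear in the charter and are not reproduced here); they assume staff support costs roughly 1.5 times the committee's operating budget, which is the split under which a 47% operating cut and a 52% staff increase net out to about a 12% overall increase:

```python
# Hypothetical dollar figures, chosen only to illustrate the arithmetic;
# they assume staff support costs roughly 1.5x the operating budget.
old_operating = 100_000   # annual operating cost, travel stipend included
old_staff = 147_500       # annual cost of staff support

new_operating = old_operating * (1 - 0.47)  # operating cost down 47%
new_staff = old_staff * (1 + 0.52)          # staff support up 52%

overall_change = (new_operating + new_staff) / (old_operating + old_staff) - 1
print(f"{overall_change:.0%}")  # about a 12% overall increase
```

With a different split between operating and staff costs, the same two percentage changes would produce a different overall figure, which is why the ratio between the two budget lines matters.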

The More Things Change . . .

What difference will any of these changes make? Probably not much. We already knew that we were going to lose an in-person meeting, given the trend over the last year. The CFSAC is still lodged in OWH, with a member of Dr. Lee’s staff in the role of DFO. We don’t know much about Barbara James at this point. Her public health career has focused on women’s and minority health issues, including a project to include gender focus in the Healthy People 2010 initiative. The fall meeting of the CFSAC will be our first opportunity to assess how she will approach her new role as DFO of the committee.

 

*My thanks to Denise Lopez-Majano for assisting with the research for this piece.