Stick a Fork In It
After three years of controversy about the purported association between CFS and XMRV, and after two years of waiting for the definitive Lipkin study to be finished (full text of the paper is here), we have our answer. Stick a fork in it, people, because XMRV is done. There are plenty of places to get summaries (such as here and here and here, and a quite revealing interview with Dr. Lipkin here). I want to focus my analysis on the issues that come up the most in patients’ discussions about XMRV.
Did the Lipkin study use the right patients? As far as I can tell, yes. The Lipkin study used the Fukuda and Canadian Consensus Criteria, and only selected patients who had a sudden viral-like onset. Dr. Lipkin said today that this was done to choose subjects with a high likelihood of having an infection. The patients were selected by six clinicians around the country (to avoid any limitations based on geography): Dr. Cindy Bateman, Dr. Nancy Klimas, Dr. Anthony Komaroff, Dr. Susan Levine, Dr. Jose Montoya, and Dr. Dan Peterson. If you know anything about CFS, then you recognize these clinicians’ names. Cases were further examined for exclusionary conditions, like hepatitis, HIV, and thyroid dysfunction. Again, this was done to avoid confounding the results by introducing too many variables. Finally, cases had to demonstrate a required level of functional impairment based on responses to four clinical questionnaires. By comparison, the patients in the original Lombardi study met both the Fukuda and Canadian criteria, and presented with severe disability (Lombardi Supplemental). Those samples came from the Whittemore Peterson Institute’s national tissue repository (Lombardi), and included samples from geographically diverse areas, including some cluster outbreaks. The Lombardi paper does not specify whether testing was done to exclude other infections, how long the samples had been stored prior to use in the study, or the mean length of illness. The Lipkin study appears to have done everything possible to identify a clean cohort of severely ill CFS patients who were likely to have evidence of infection, and we have much more information about this cohort than we do about the original Lombardi cohort.
Why not test the same patients as Lombardi et al.? That’s been done. The Singh and Levy studies both retested reported positives from WPI. The Blood Working Group study also used reported positives from WPI and Lo et al. Those studies were all negative. Dr. Lipkin did not explicitly address this issue in the press conference, but I assume that the point was to start with a fresh cohort and try to replicate the association between these viruses and CFS.
Were the samples handled correctly? As far as I can tell, yes. Dr. Lipkin spent significant time at the press conference this morning addressing the issue of why blood was used in this study. This is important, because after the negative replication attempts began to pile up, XMRV-theory supporters began claiming that the virus could not be detected in blood, but could be found in tissues. The question was raised again this morning by both Hillary Johnson and Deborah Waroff. Dr. Lipkin and Dr. Alter both said that the reason blood was used in this study is because both the Lombardi and Lo studies found XMRV/pMLVs in blood. I’m not sure why this gets lost in discussions about XMRV. The Lombardi paper found XMRV in 67% of the CFS blood samples they analyzed. It wasn’t tissue; it wasn’t lymph; it was blood. So an attempt to confirm the Lombardi and Lo findings has to look at the blood.
For the Lipkin study, blood was drawn fresh from patients and controls between 10am and 2pm, within the same season (to control for possible communal infections and diurnal variations). The blood was treated with EDTA (an anticoagulant) and shipped overnight to Columbia University where the coding and sample splitting was done. The Lombardi Supplemental says that blood samples were treated with sodium heparin, a different anticoagulant. There is no information provided about the time of day or time of year that samples were collected, nor the length of time samples were stored before the study. I’ve seen some claims online that the type of anticoagulant used makes a difference, but I have no information to say for sure either way.
After coding, each sample in the Lipkin study was divided into multiple aliquots. There were four testing sites (more on this in the next section): CDC, FDA, Dr. Hanson’s lab at Cornell, and Dr. Ruscetti’s lab at NIH. Each group received a double set of samples – 2 each of every patient and control. In addition, the groups received artificially spiked positive controls and known negative controls.
Did they use the right tests? Yes. Each of the four testing sites used their own assays in the Lipkin study. CDC used assays previously described in earlier papers to perform multiple rounds of PCR. FDA used a modified version of the assays described in the Lo paper to run PCR. Dr. Hanson performed PCR on samples that had first been cultured in the Ruscetti lab. Finally, the Ruscetti lab used a serologic assay that was slightly modified from what had been reported in Lombardi. Each lab used both negative and positive controls and got accurate results with those samples. Is it a weakness that these labs did not use the precise assays used in Lombardi and Lo? I suppose one could make that argument, but the flip side is that refinements in testing should be used to produce (hopefully) better results. If the labs had been required to use the exact tests used in Lombardi, and the results were negative, it would be a fair question why they were prevented from applying what had been learned since Lombardi. With this design, each lab could use the technique that it felt was most likely to produce accurate results.
Were the positive/negative results reliable? Yes. The Lipkin study testing involved PCR of plasma, PCR of PBMCs, PCR of cultured PBMCs, and serology testing. Remember that each subject sample was not only split among the labs (so subject x was tested by each group), but each sample was sent to each location twice (so subject x was tested twice by lab A, twice by lab B, etc.). For a subject to be counted as positive, the Lipkin study states: “Subjects with two positive results in the same sample type were considered positive for XMRV/pMLV.” In other words, subject x had to test positive in two plasma samples or two PBMC samples, etc., to be counted positive. The original Lombardi and Lo studies did not use this redundancy.
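To make that redundancy rule concrete, here is a minimal sketch in Python of the “two positives in the same sample type” criterion. The data structures and function name are my own illustration, not anything from the paper:

```python
# Hypothetical illustration of the Lipkin study's positive-call rule:
# a subject counts as positive only if the SAME sample type
# (plasma, PBMC, cultured PBMC, serology) tests positive at least twice.

from collections import Counter

def is_positive(results):
    """results: list of (sample_type, hit) tuples, e.g.
    [("plasma", True), ("plasma", False), ("pbmc", True)]."""
    positives = Counter(sample_type for sample_type, hit in results if hit)
    return any(count >= 2 for count in positives.values())

# One positive plasma result plus one positive PBMC result does NOT count:
print(is_positive([("plasma", True), ("pbmc", True)]))    # False
# Two positive plasma results DO count:
print(is_positive([("plasma", True), ("plasma", True)]))  # True
```

The point of the rule is to filter out one-off hits (contamination, assay noise): a stray positive in a single aliquot never makes a subject count as positive unless it is reproduced in the same sample type.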
The results were clear: no positives were found by CDC in plasma; none by FDA in plasma or PBMCs; none by Hanson in cultured PBMCs (Lipkin study, Table 3). Zero, zip, nada, nyet, nothing.

Why do they think that PCR is reliable? Because Lombardi used PCR. This is another fact that has frequently been forgotten or swept under the rug in online discussions of XMRV. People have claimed that you can’t find XMRV with PCR, when the original study used PCR. The Lipkin study included positive and negative controls and checks for contamination, and PCR found NOTHING (except in the positive controls).
The serology results were not clear cut. Approximately 6% of both the patients and the controls were found to be positive using a slightly modified version of the serology assay used in Lombardi. However, the Lipkin study points out that this antibody response cannot be validated against a sample known to be positive for clinical XMRV infection, since there is no confirmed case of human XMRV infection. Furthermore, the antibody used may be cross-reactive, meaning it can appear positive in the case of a non-XMRV infection. The CFIDS Association article on the study explains this nicely. In the paper, the authors state, “We posit that positive results represent either nonspecific or cross-reactive binding.” The fact that positives were found at the same rate in both patients and controls strongly suggests that the result is not associated with CFS. In his interview with Nature, Dr. Lipkin said, “If you consider this in the context of the work that shows that XMRV originates in the laboratory, then I think we can probably close the door on this once and for all.”
The consensus reached by all of the study authors could not be clearer: “Our results definitively indicate that there is no relationship between CFS/ME and infection with either XMRV or pMLV.”
Where do we go from here? The Lipkin study ends as follows:
We remain committed to investigating the pathogenesis of CFS/ME and to ensuring that the focus on this complex syndrome is maintained. Studies under way include the search for known and novel pathogens and biomarkers through deep sequencing and proteomics.
There was much discussion at the press conference about the promising avenues for further inquiry, including additional pathogen discovery work, examining host response, and looking at protein and gene products. Dr. Lipkin is involved in some of that work through the Chronic Fatigue Initiative. In his interview on This Week in Virology, Dr. Lipkin also said that he is professionally invested in the search for the pathogenesis of CFS.
Much was also made of the fact that the samples gathered for the Lipkin study represent an extraordinary resource for future research. The samples are stored in freezers at NIH and are available to qualified investigators. Applications to use these samples must be made through NIH, and Dr. Lipkin stated that two investigators had already received samples and were working on them (although he offered no more specific information). In the TWiV interview, Dr. Lipkin said that it was the panel of investigators on this study that approved applicants for samples, but he didn’t explain how that works. He also said that there would be NIH money available for an RFA to use the samples, but this is the first I had heard of that. Obviously, an RFA from NIH with money attached would be big news and tremendous progress – so I hope we can get that confirmed by NIH. Finally, he said there is enough plasma stored for 50 labs to conduct studies. I think this is a huge positive outcome; NIH spent more money on this study than on any other CFS study in 2011. I am thrilled at the prospect that we might learn more from these samples.
Is XMRV really over? Yes. On top of the definitive statement in the paper, Dr. Mikovits and Dr. Alter firmly closed the door on XMRV/pMLV and CFS today. Dr. Alter said this study was “quite definitive.” Dr. Mikovits said the study rigorously excluded the earlier findings and that XMRV is “simply not there.” Furthermore, she said she was “100% confident in the results.” It can’t be said any plainer than that. Dr. Mikovits is on record as denying an association between XMRV/pMLVs and CFS. I’m fairly certain there will be some people who are still not convinced, but they will have to make their claims in spite of what Dr. Mikovits said herself.
What about the prostate cancer papers? That’s done, too. A study published today in PLoS ONE concludes, “In summary, our findings do not support any association between XMRV infection and prostate cancer, and by extension indicate that XMRV has never replicated outside of the laboratory setting. The initial discovery linking XMRV to prostate cancer in 2006 arose from laboratory contamination of clinical samples by an XMRV-infected LNCaP cell line. In turn, the LNCaP cells were most likely previously infected by 22Rv1, from which XMRV almost certainly originated through in vivo passaging of the CWR22 xenograft in mice.” Update at 8:10 pm: the original XMRV and prostate cancer study from 2006 has now been retracted.