
Sunday, March 24, 2013

Why Most Clinical Trials Fail: The Case of Eritoran and Immunomodulatory Therapies for Sepsis

The experimenter's view of the trees.
The ACCESS trial of eritoran in the March 20, 2013 issue of JAMA can serve as a springboard to consider why every biological and immunomodulatory therapy for sepsis has failed during the last 30 years.  Why, in spite of extensive efforts spanning several decades, have we failed to find a therapy that favorably influences the course of sepsis?  More generally, why do most clinical trials, when free from bias, fail to show benefit of the therapies tested?

For a therapeutic agent to improve outcomes in a given disease, say sepsis, a fundamental and paramount precondition must be met:  the agent/therapy must interfere with part of the causal pathway to the outcome of interest.  Even if this precondition is met, the agent may not influence the outcome favorably for several reasons:
  • Causal pathway redundancy:  redundancy in causal pathways may mitigate the agent's effects on the downstream outcome of interest - blocking one intermediary fails because another pathway remains active
  • Causal factor redundancy:  the factor affected by the agent has both beneficial and untoward effects in different causal pathways - that is, the agent's toxic effects, exerted through other pathways, may outweigh or counteract its beneficial ones
  • Time dependency of the causal pathway:  the agent interferes with a factor in the causal pathway that is time dependent and thus the timing of administration is crucial for expression of the agent's effects
  • Multiplicity of agent effects:  the agent has multiple effects on multiple pathways - e.g., HMG-CoA reductase inhibitors both lower LDL cholesterol and have anti-inflammatory effects.  In this case, the agent may influence the outcome favorably, but it's a trick of nature - it's doing so via a different mechanism than the one you think it is.
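The first of these failure modes, causal pathway redundancy, can be made concrete with a toy simulation.  In the sketch below (all probabilities are invented for illustration and are not sepsis data), an upstream insult activates two redundant pathways, either of which is sufficient to cause death.  A hypothetical drug that completely blocks pathway A leaves mortality unchanged, because pathway B remains active:

```python
import random

random.seed(42)

def simulate_mortality(block_pathway_a, n=100_000, p_insult=0.3):
    """Toy model of causal pathway redundancy (illustrative numbers only).

    An upstream insult activates two redundant pathways, and death
    follows if EITHER pathway signals.
    """
    deaths = 0
    for _ in range(n):
        insult = random.random() < p_insult
        pathway_a = insult and not block_pathway_a
        pathway_b = insult  # redundant route, untouched by the drug
        if pathway_a or pathway_b:
            deaths += 1
    return deaths / n

control = simulate_mortality(block_pathway_a=False)
treated = simulate_mortality(block_pathway_a=True)
# Mortality is essentially identical in both arms: blocking pathway A
# accomplishes nothing while pathway B remains active.
```

In this toy world a perfectly executed, perfectly unbiased trial of the drug would still be negative, because the drug never interrupts a non-redundant part of the causal pathway to death.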

Monday, January 28, 2013

Coffee Drinking, Mortality, and Prespecified Falsification Endpoints

A few months back, the NEJM published this letter in response to an article by Freedman et al in the May 17, 2012 NEJM reporting an association between coffee drinking and reduced mortality found in a large observational dataset.  In a nutshell, the letter said that there was no biological plausibility for mortality reductions resulting from coffee drinking so the results were probably due to residual confounding, and that reductions in mortality in almost all categories (see Figure 1 of the index article) including accidents and injuries made the results dubious at best.  The positive result in the accidents and injuries category was in essence a failed negative control in the observational study.

Last week in the January 16th issue of JAMA, Prasad and Jena operationally formalized this idea of negative controls for observational studies, especially in light of Ioannidis' call for a registry of observational studies.  They recommend that investigators mining databases establish a priori hypotheses that ought to turn out negative because they are biologically implausible.  These hypotheses can therefore serve as negative controls for the observational associations of interest, the ones that the authors want to be positive.  In essence, they recommend that the approach to observational data become more scientific.

At the most rudimentary end of the dataset analysis spectrum, investigators just mine the data to see what interesting associations they can find.  In the middle of the spectrum, investigators have a specific question that they wish to answer (usually in the affirmative), and they leverage a database to try to answer that question.  Prasad and Jena suggest going a step further toward the ideal end of the spectrum:  specifying both positive and negative associations that should be expected, as a more holistic assessment of the dataset's ability to answer the question of interest.  (If an investigator were looking to rule out an association rather than to find one, s/he could use a positive control rather than a negative one [a falsification end point] to establish the database's ability to confirm expected differences.)
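The logic of a failed falsification endpoint can be sketched with a toy cohort.  In the simulation below (all numbers are invented for illustration), an unmeasured healthy-lifestyle trait makes people both more likely to drink coffee and less likely to die in accidents.  Coffee itself has no causal effect on anything, yet the naive comparison "finds" that coffee protects against accidental death, exactly the kind of biologically implausible association that should flag residual confounding:

```python
import random

random.seed(1)

def simulate_cohort(n=200_000):
    """Toy cohort (invented numbers): an unmeasured 'healthy lifestyle'
    trait raises the chance of drinking coffee and lowers the chance of
    accidental death.  Coffee itself has no causal effect on either."""
    rows = []
    for _ in range(n):
        healthy = random.random() < 0.5
        coffee = random.random() < (0.7 if healthy else 0.4)
        accident_death = random.random() < (0.01 if healthy else 0.03)
        rows.append((coffee, accident_death))
    return rows

rows = simulate_cohort()

def death_rate(drinks_coffee):
    group = [dead for coffee, dead in rows if coffee == drinks_coffee]
    return sum(group) / len(group)

relative_risk = death_rate(True) / death_rate(False)
# relative_risk comes out well below 1: coffee appears to "protect"
# against accidental death.  Since no mechanism is plausible, this is a
# failed falsification endpoint pointing to residual confounding.
```

Had "coffee and accidental death" been prespecified as a falsification endpoint, the apparent protection would have been recognized as a warning about the dataset rather than reported as a finding.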

I think that they are correct in noting that the burgeoning availability of large databases (of almost anything) and the ease with which they can be analyzed poses some problems for interpretation of results.  Registering observational studies and assigning prespecified falsification end points should go a long way towards reducing incorrect causal inferences and false associations.

I wish I had thought of that.

Added 3/3/2013 - I just realized that another recent study of dubious veracity had some inadvertent unspecified falsification endpoints, which nonetheless cast doubt on the results.  I blogged about it here:  Multivitamins caused epistaxis and reduced hematuria in male physicians.