
Tuesday, April 4, 2017

Tipping the Scales of Noninferiority: Abbott's "Emboshield and Xact Carotid Stent System"

I just stumbled across this and think it's worth musing over a bit.  The recently published ACT I trial by Rosenfield et al. is a noninferiority trial of an already approved device, the "Emboshield embolic protection system," used in conjunction with the "Xact carotid stent system," both proprietary devices from Abbott.  I'm scrutinizing this trial (and others) to determine whether adequate justification is given for the noninferiority hypothesis around which the trial is designed.  One thing I'm looking for is evidence of clear secondary advantages of the novel or experimental therapy that justify accepting some degree of worse efficacy, compared with the active control, falling within the prespecified margin of noninferiority.  This is what the authors (or their ghosts) write in the introduction:
"Most carotid revascularization procedures in the United States are carotid endarterectomies performed for the treatment of asymptomatic atherosclerotic disease. Revascularization is also performed by means of stenting with devices to capture and remove emboli (“embolic protection” devices).3,4 In the Carotid Revascularization Endarterectomy versus Stenting Trial (CREST), no significant difference was found between carotid endarterectomy and stenting with embolic protection for the treatment of atherosclerotic carotid bifurcation stenosis with regard to the composite end point of stroke, death, or myocardial infarction.5 CREST included both symptomatic and asymptomatic patients, and it was not sufficiently powered to discern whether the carotid endarterectomy and stenting with embolic protection were equivalent according to symptomatic status. The primary aim of the Asymptomatic Carotid Trial (ACT) I was to compare the outcomes of carotid endarterectomy versus stenting with embolic protection in patients with asymptomatic severe carotid-artery stenosis who were at standard risk for surgical complications."
That's a mouthful, to say the least, and probably ought to be expectorated.
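For concreteness, here is a minimal sketch of the arithmetic behind a noninferiority comparison - the counts and the 3-percentage-point margin are hypothetical illustrations, not the ACT I data.  The point is that a new therapy can be numerically worse than the active control and still be declared noninferior, so long as the confidence bound stays inside the margin:

```python
import numpy as np
from scipy import stats

def noninferiority_test(events_new, n_new, events_ctrl, n_ctrl,
                        margin=0.03, alpha=0.05):
    """One-sided noninferiority test on a risk difference.

    The new therapy is declared noninferior if the upper bound of the
    one-sided (1 - alpha) confidence interval for
    (risk_new - risk_ctrl) falls below the prespecified margin.
    """
    p_new, p_ctrl = events_new / n_new, events_ctrl / n_ctrl
    diff = p_new - p_ctrl
    # Wald standard error for a difference in proportions
    se = np.sqrt(p_new * (1 - p_new) / n_new +
                 p_ctrl * (1 - p_ctrl) / n_ctrl)
    upper = diff + stats.norm.ppf(1 - alpha) * se
    return diff, upper, upper < margin

# Hypothetical counts: the new device arm does numerically WORSE
# (3.5% vs 2.6% event rate), yet clears a 3-point margin.
diff, upper, noninferior = noninferiority_test(35, 1000, 13, 500)
print(f"risk difference = {diff:+.3f}, upper bound = {upper:+.3f}, "
      f"noninferior: {noninferior}")
```

The "secondary advantages" question is whether anything about the new therapy justifies tolerating that worse point estimate in the first place.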

Thursday, January 5, 2017

RCT Autopsy: The Differential Diagnosis of a Negative Trial

At many institutions, Journal Clubs meet to dissect a trial after its results are published, looking for flaws, biases, shortcomings, and limitations.  Beyond disseminating the informational content of the articles under review, Journal Clubs serve as a reiteration and extension of the limitations section of the article's discussion.  Unless they result in a letter to the editor, or a new peer-reviewed article about the limitations of the trial that was discussed, the debates of Journal Club begin a headlong recession into obscurity soon after the meeting adjourns.

The proliferation and popularity of online media has led to what amounts to a real-time, longitudinally documented Journal Club.  Named “post-publication peer review” (PPPR), it consists of blog posts, podcasts and videocasts, comments on research journal websites, remarks on online media outlets, and websites dedicated specifically to PPPR.  Like a traditional Journal Club, PPPR seeks to redress any deficiencies in the traditional peer review process that lead to shortcomings or errors in the reporting or interpretation of a research study.

PPPR following publication of a “positive” trial, that is, one where the authors conclude that their a priori criteria for rejecting the null hypothesis were met, is oftentimes directed at identifying a host of biases in the design, conduct, and analysis of the trial that may have led to a “false positive” trial.  False positive trials are those in which either a type I error has occurred (the null hypothesis was rejected even though it is true and no difference between groups exists), or the structure of the experiment was biased in such a way that the experiment and its statistics cannot be informative.  The biases that cause structural problems in a trial are manifold, and I may attempt to delineate them at some point in the future.  Because it is a simpler task, I will here attempt to list a differential diagnosis that readers may use in PPPRs of “negative” trials.
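As a preview of one entry on that differential, here is a minimal simulation sketch (all parameters hypothetical) of the most familiar cause of a negative trial: a type II error from an underpowered design.  The therapy truly works, yet most trials of it come up "negative":

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def fraction_negative(p_ctrl=0.30, true_rrr=0.20, n_per_arm=150,
                      alpha=0.05, n_sims=10_000):
    """Fraction of 'negative' two-arm trials when the therapy truly
    confers a 20% relative risk reduction but the sample is small."""
    p_trt = p_ctrl * (1 - true_rrr)
    ctrl = rng.binomial(n_per_arm, p_ctrl, n_sims) / n_per_arm
    trt = rng.binomial(n_per_arm, p_trt, n_sims) / n_per_arm
    se = np.sqrt(ctrl * (1 - ctrl) / n_per_arm +
                 trt * (1 - trt) / n_per_arm)
    z = (ctrl - trt) / se
    p_two_sided = 2 * (1 - norm.cdf(np.abs(z)))
    return np.mean(p_two_sided >= alpha)

# With roughly 20% power at this sample size, about four of five
# trials of a genuinely effective therapy are 'negative'.
print(f"'negative' trials despite a real effect: {fraction_negative():.0%}")
```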

Wednesday, July 22, 2015

There is (No) Evidence For That: Epistemic Problems in Evidence Based Medicine

Below is a PowerPoint presentation that I have delivered several times recently, including one iteration at the SMACC conference in Chicago.  It addresses epistemic problems in our therapeutic knowledge and calls into question all claims of "there is evidence for ABC" and "there is no evidence for ABC."  Such claims cannot be taken at face value; they need deeper consideration and evaluation considering all possible states of reality - gone is the cookbook or algorithmic approach to evidence appraisal as promulgated by the Users' Guides.  Considered in the presentation are therapies for which we have no evidence but which undoubtedly work (Category 1 - Parachutes), and therapies for which we have evidence of efficacy or lack thereof (Category 2), but that evidence is subject to false positives and false negatives for numerous reasons, including: the Ludic Fallacy; study bias (see: Why Most Published Research Findings Are False); type I and II errors; the "alpha bet" (the arbitrary and lax standard used for alpha, namely 0.05); Bayesian interpretations; stochastic dominance of the null hypothesis; and inadequate study power, both in general and owing to delta inflation and subversion of double significance hypothesis testing.  These are all topics that have been previously addressed to some degree on this blog, but this presentation assembles them into a framework for understanding the epistemic problems that arise within our "evidence base."  It also provides insights into why we have a generation of trials in critical care whose results converge on the null, and why positive studies in this field cannot be replicated.
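To make one of those failure modes concrete, here is a minimal sketch (hypothetical numbers) of delta inflation: a trial powered for an optimistic effect size has ample power on paper, but little power against the effect size that is actually plausible:

```python
import numpy as np
from scipy.stats import norm

def power_two_proportions(p_ctrl, delta, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion test to detect
    an absolute risk reduction `delta` from control rate `p_ctrl`."""
    p_trt = p_ctrl - delta
    se = np.sqrt(p_ctrl * (1 - p_ctrl) / n_per_arm +
                 p_trt * (1 - p_trt) / n_per_arm)
    return 1 - norm.cdf(norm.ppf(1 - alpha / 2) - delta / se)

n = 350  # per arm: roughly 80% power for the inflated delta below
# Powered for an optimistic 10-point absolute risk reduction...
print(f"power at assumed delta = 0.10: "
      f"{power_two_proportions(0.40, 0.10, n):.0%}")
# ...but nearly powerless against a plausible 4-point reduction.
print(f"power at plausible delta = 0.04: "
      f"{power_two_proportions(0.40, 0.04, n):.0%}")
```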

Sunday, April 6, 2014

Underperforming the Market: Why Researchers are Worse than Professional Stock Pickers and A Way Out

I was reading in the NYT yesterday a story about Warren Buffett and how the Oracle of Omaha has trailed the S&P 500 in four of the last five years.  It was based on an analysis done by a statistician who runs a blog called Statistical Ideas, which has a post on p-values linking to a Nature article from a couple of months back describing how we can be misled by p-values.  And all of this got me thinking.

We have a dual problem in medical research:  a.) conceiving alternative hypotheses which cannot be confirmed in large trials free of bias;  and b.) failing to replicate the findings of positive trials.  What are the reasons for this?
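One way to see both problems at once is a back-of-the-envelope simulation in the spirit of Ioannidis - the prior, power, and alpha below are hypothetical choices, not estimates.  When few of the hypotheses we test are true, many "positive" trials are false positives, and the expected replication rate of positive findings is correspondingly dismal:

```python
import numpy as np

rng = np.random.default_rng(1)

def ppv_of_positives(prior_true=0.10, power=0.50, alpha=0.05,
                     n_hypotheses=100_000):
    """Among hypotheses whose trials reach p < alpha, the fraction
    that are actually true (positive predictive value)."""
    truth = rng.random(n_hypotheses) < prior_true
    # True hypotheses are detected with probability `power`;
    # false ones reach significance with probability alpha.
    detect_prob = np.where(truth, power, alpha)
    significant = rng.random(n_hypotheses) < detect_prob
    return truth[significant].mean()

power, alpha = 0.50, 0.05
ppv = ppv_of_positives(power=power, alpha=alpha)
# A replication attempt at the same power succeeds with probability
# `power` for true effects and `alpha` for false ones.
print(f"PPV of a positive trial: {ppv:.0%}")
print(f"expected replication rate: {ppv * power + (1 - ppv) * alpha:.0%}")
```

With these (made-up but not implausible) inputs, only about half of "positive" trials reflect true effects, and fewer than a third of replication attempts would be expected to succeed.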