LSE PH500 Meeting

The (Vaccine) Booster Wars: A prepost

[We are experimenting with Twitter threads.]

We’re always reading about how the pandemic has created a new emphasis on preprints, so it stands to reason that non-reviewed preposts would now have a place in blogs. Maybe then I’ll “publish” some of the half-baked posts languishing in draft on errorstatistics.com. I’ll update or replace this prepost after reviewing.

The Booster Wars

Like most wars, the recent “booster wars” have (unintended) casualties. I refer, of course, to the disagreement about whether third shots of Covid vaccines are called for, given the evidence of waning protection after 6 or so months, coupled with the more virulent delta variant. Last week’s skirmish, in which the FDA advisory committee voted 16 to 2 against approving a third shot of Pfizer’s vaccine (for anyone 16 and older), seemed to be more of a backlash by some members of the FDA’s Office of Vaccines against being sidelined by the White House, which had already announced last month that a booster shot was forthcoming for all. (Two members, including the director of the Office of Vaccines, are leaving, presumably as a result; at least that’s how it was described in the press.) The FDA advisory committee claimed there was not enough evidence of benefit to recommend boosters for all, given unknown risks such as myocarditis (although the data Pfizer presented included just 1 case, I believe).

I watched the last 3 hours of the day’s session on Friday, September 17. (It was oddly reassuring that the FDA had at least as many technical glitches with Zoom as the rest of us, but not reassuring to see the seemingly cavalier attitude of some members.) Right after voting the booster plan down, the panel immediately turned around and approved the booster for anyone over 65 or in a severe risk group. Then, 15 minutes later, they broadened that to include anyone “at high risk of occupational exposure” to Covid—at any level of exposure—such as healthcare workers, teachers and many others.

Do our health experts realize how detrimental their infighting is to the rest of us? Couldn’t they have come to some agreement—at least as to how to explain their opposed standpoints—before making rival pronouncements and issuing dueling preprints? For at least one whole month now, we’ve been witnessing squabbling agencies. It came as a surprise to hear Biden/Fauci announce in mid-August that boosters were necessary; it was only a matter of time. “Even among government scientists, the idea has been met with skepticism and anger,” we read in the NYT. Fauci said last month it would probably be 8 months, no, make that 6 months, after the last vaccine dose.[1] Fauci said he was “certain” that Americans would need booster shots of the COVID-19 vaccine, possibly at 5 months! Today he even strengthened that view, asserting that “the third shots should be viewed as a part of the COVID-19 vaccine regimen, just like the first and second shots,… I think that three shots will be the actual correct regimen”.

Does that mean he’s prepared now, despite the analysis of the FDA panel, to have the booster mandated wherever the others currently are? Apparently.

We’d all love to see the plan

So what would the plan be then? Boosters every 6 months? Israel is preparing for a 4th dose already. What about the development of boosters for Delta and other variants? Is that in the works in the U.S.? And if boosters are to be recommended on the basis of declining levels of neutralizing antibodies (correlated, it is thought, with breakthrough cases), why not recommend people test their levels? I did go out and get my levels tested a couple of weeks ago but it was anything but routine. Friday’s FDA panel voted to wait until more evidence is in. If our numbers are high, then, is it advisable to wait, even if we fall into the FDA’s (vague) permissible category? The answers we are getting are simplistic, defensive, and, to my knowledge, don’t address this and other fairly obvious conundrums for an anxious public.

The main basis for the rejection by the FDA panel was described in a Lancet article appearing right before the meeting: the authors find the available evidence pointing to the need for boosters to be weak, based, they claim, on observational studies of just a few weeks:

“Randomised trials are relatively easy to interpret reliably, but there are substantial challenges in estimating vaccine efficacy from observational studies undertaken in the context of rapid vaccine roll-out.

Although the benefits of primary COVID-19 vaccination clearly outweigh the risks, there could be risks if boosters are widely introduced too soon, or too frequently, especially with vaccines that can have immune-mediated side-effects (such as myocarditis, which is more common after the second dose of some mRNA vaccines, or Guillain-Barre syndrome, which has been associated with adenovirus-vectored COVID-19 vaccines). If unnecessary boosting causes significant adverse reactions, there could be implications for vaccine acceptance that go beyond COVID-19 vaccines. Thus, widespread boosting should be undertaken only if there is clear evidence that it is appropriate.”[2]

This seems a sensible precautionary stance, unfortunately obscured by the feeling that it reflected agency power dynamics.[3] Maybe the U.S. would have more of its own data if the CDC had not stopped recording breakthrough infections in May 2021 (except for those who are hospitalized or die). Anyway, Fauci does not address the panel’s concerns about limited data. But, given those concerns, it does make one wonder why the same panel turned around and recommended approval of the booster for various occupations, rather than recommending waiting for more data. The FDA’s misgivings will doubtless also give grounds for the unvaccinated to point out that even the FDA is worried about the safety of the approved vaccines. After all, a third dose, 6 months after the second, does not seem substantially riskier, especially given the lack of caveats when telling those who have had Covid to get fully vaccinated in addition. Actually, it now appears that getting vaxxed after having Covid provides “superhuman” Covid immunity.

Will our immunity (from vaccinations) evolve, or be obstructed?

In a study published online last month, [Paul] Bieniasz and his colleagues found antibodies in these individuals that can strongly neutralize the six variants of concern tested, including delta and beta, as well as several other viruses related to SARS-CoV-2, including one in bats, two in pangolins and the one that caused the first coronavirus pandemic, SARS-CoV-1. (see link)

In fact, these antibodies were even able to deactivate a virus engineered, on purpose, to be highly resistant to neutralization. This virus contained 20 mutations that are known to prevent SARS-CoV-2 antibodies from binding to it. Antibodies from people who were only vaccinated or who only had prior coronavirus infections were essentially useless against this mutant virus. But antibodies in people with the “hybrid immunity” could neutralize it.

Understandably, many are excited about the possibility that a booster shot will create, in vaccinated people, the kind of “super-human” immune response seen in those who followed Covid infections with vaccines (i.e., those with hybrid immunity). Covid, it is thought, would then become like the common cold. Even though the study included only 14 people, that they all showed this is impressive. (There isn’t information on the reverse order: vaccine, then infection.) Throughout the pandemic, I have found that Paul Bieniasz, who led this study, is doing some of the most interesting and path-breaking work.

But other researchers worry that repeated infection with one strain can actually reduce the development of immunity to novel strains—although you don’t typically hear about this.

In the case of Covid, some scientists are concerned that the immune system’s reaction to the vaccines being deployed now could leave an indelible imprint, and that next-generation products, updated in response to emerging variants of the SARS-CoV-2, won’t confer as much protection. (Stat News)

Immunologists call this ‘original antigenic sin’, and it is apparently a key obstacle to creating immunity to flu variants—although, again, we don’t hear about it in the yearly prodding to get flu shots.

The concern is that even when a booster for a new variant comes along, our immune systems, having repeatedly encountered the early Covid variant, will largely trigger neutralizing antibodies to it rather than to the novel variant. As such, I’ve heard some doctors advise people to try to go as long as they can on the primary shots. To know how long to wait, we’d need to know our (approximate) neutralizing antibody levels. As of now, if you do manage to get a quantitative test (no dichotomania), you have to go through non-standard channels to find an interpretation of the numbers. Not even doctors seem to know. The public is capable of understanding that, at present, there is no clear “correlate of protection”, as it is called, between neutralizing antibodies and infection; that’s not a reason to obscure or bury the information, especially as policy decisions that affect them rely on precisely these numbers. We should also be conducting studies to test what those numbers mean in terms of infection, disease and transmission (V. Prasad).


It may be argued that future variant boosters are going to be so loaded up that they will force our immune systems to pay attention to the new variant, but are we sure? And do we want to get to that point? Of course, like many of you, I’m just a member of the non-expert lay population whose life is affected by Covid policy decisions that are made without my input. (I don’t know what P. Bieniasz thinks of this, but he’s convincing on the need for boosters.)

A simple first step

We’re bound to hear, any day now—perhaps even before I put up this prepost—of the FDA’s ruling on Pfizer, based on Friday’s panel. Presumably it will concur with the panel, and a similar approval seems likely for Moderna in a few weeks (although I hear Moderna wants the booster to be a half dose of the original[4]). But these narrow rulings will not address the broad and legitimate questions people have, and without answers to those questions, people cannot wisely decide whether to take up any opportunity to get a booster. This just increases the feeling that agencies and politicians have their agendas, and we have to fend for ourselves. As a simple first step, how about calling all of the point people on Covid vaccines together—being particularly sure to include representatives of rival positions—to address these specific questions, and to reveal the uncertainties that are the engine behind their policies but are generally kept under wraps? Not one of those hour-long glitzy “roundtables”, but an extended (and perhaps ongoing) forum, where answers are challenged by others and by data.

Lest people start to have hesitations about this new policy, why not give the public the information they need to critically navigate the pandemic for themselves? It’s fairly clear that our agencies aren’t doing it for us.

What do you think? Please write with your thoughts and corrections. I’d also be interested to hear what questions you’d like the vaccine and virology experts to answer.

[1] In mid-August, CDC director Walensky, agreeing with Fauci, gave these reasons for boosting: “First, vaccine-induced protection against SARS-CoV-2 infection begins to decrease over time. Second, vaccine effectiveness against severe disease, hospitalization and death remains relatively high. And third, vaccine effectiveness is generally decreased against the delta variant.” (Washington Post)

[2] An additional shot has already been approved for anyone considered immunocompromised. Several other countries are either contemplating or already giving boosters.

[3] Perhaps the disagreement is about the weight to be given to infections vs. severe disease. Or perhaps it’s about which is worse: that announcing boosters would increase vaccine hesitancy, or that the declining anti-virus potency of vaccines will increase transmission. They also felt it would be more beneficial to increase global vaccination, but the committee announced at the start that such considerations would not be deemed relevant.

[4] So are half-doses being manufactured, or will people who want Moderna boosters have to wait until they’re produced?

June 24: “Have Covid-19 lockdowns led to an increase in domestic violence? Drawing inferences from police administrative data” (Katrin Hohl)

The tenth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

24 June 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

For information about the Phil Stat Wars forum and how to join, click on this link.


“Have Covid-19 lockdowns led to an increase in domestic violence? Drawing inferences from police administrative data” 

Katrin Hohl

Abstract: This applied paper reflects on the challenges in measuring the impact of Covid-19 lockdowns on the volume and profile of domestic violence. The presentation has two parts. First, I present preliminary findings from analyses of large-scale police data from seven English police forces that disentangle longer-term trends from the effect of the imposing and lifting of lockdown restrictions. Second, I reflect on the methodological challenges involved in accessing, analysing and drawing inferences from police administrative data. 

Katrin Hohl (Department of Sociology, City University London). Dr Katrin Hohl joined City University London in 2012 after completing her PhD at the LSE. Her research has two strands. The first revolves around various aspects of criminal justice responses to violence against women, in particular: the processes through which complaints of rape fail to result in a full police investigation, charge, prosecution and conviction; the challenges rape victims with mental health conditions pose to criminal justice; and the use of victim memory as evidence in rape complaints. The second strand focusses on public trust in the police, police legitimacy, compliance with the law, and cooperation with the police and courts. Katrin has collaborated with the London Metropolitan Police on several research projects on the topics of public confidence in policing, police communication and neighbourhood policing. She is a member of the Centre for Law, Justice and Journalism and the Centre for Crime and Justice Research.


Readings: 

Journal article
Piquero et al. (2021) Domestic violence during the Covid-19 pandemic – Evidence from a systematic review and meta-analysis, Journal of Criminal Justice, 74 (May-June). (PDF)
 
Blog post: 
Hohl, K. and Johnson K. (2020) A crisis exposed – how Covid-19 is impacting domestic abuse reported to the police. 
https://campaignforsocialscience.org.uk/news/a-crisis-exposed-how-covid-19-is-impacting-domestic-abuse-reported-to-the-police/

Slides & Video Links: 

Katrin Hohl presentation (Video Link)
Link to paste into browser: https://philstatwars.files.wordpress.com/2021/07/hohl-presentation-edited.mp4


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 18 of our general Phil Stat series, which began with the LSE Seminar PH500 on May 21

May 20: “Objective Bayesianism from a philosophical perspective” (Jon Williamson)

The ninth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

20 May 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

For information about the Phil Stat Wars forum and how to join, click on this link.

“Objective Bayesianism from a philosophical perspective” 

Jon Williamson

Abstract: This talk addresses the ‘statistics war’ between frequentists and Bayesians, and argues for a reconciliation of sorts. We start with an overview of Bayesianism and a divergence that has taken place between Bayesianism as adopted by philosophers and Bayesianism as adopted by statisticians. This divergence centres around the use of direct inference principles, which are now widely advocated by philosophers. I consider two direct inference principles, Reichenbach’s Principle of the Narrowest Reference Class and Lewis’ Principal Principle, and I argue that neither can be adequately accommodated within a standard Bayesian framework. A non-standard version of objective Bayesianism, however, can accommodate such principles. I introduce this version of objective Bayesianism and explain how it integrates both frequentist and Bayesian inference. Finally, I illustrate the application of the approach to medicine and suggest that this sort of approach offers a very natural solution to the statistical matching problem, which is becoming increasingly important.
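For readers new to the two direct inference principles named in the abstract, here are standard textbook glosses. These are my own summary of the usual formulations, not taken from Williamson’s talk:

```latex
% Lewis' Principal Principle (usual formulation): an agent's initial credence
% Cr in A, given that the objective chance of A is x (plus any "admissible"
% background evidence E), should equal x:
\[
  Cr\big(A \mid Ch(A) = x \wedge E\big) = x .
\]
% Reichenbach's Principle of the Narrowest Reference Class: assign a single
% case a the relative frequency of A in the narrowest reference class R
% containing a for which reliable statistics exist:
\[
  Cr(A \mid a \in R) = \mathrm{freq}(A \mid R) .
\]
```

The talk’s claim is that neither of these deferential principles can be adequately accommodated within a standard Bayesian framework.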

Jon Williamson (Centre for Reasoning, University of Kent) works in the area of philosophy of science and medicine. He works on the philosophy of causality, the foundations of probability, formal epistemology, inductive logic, and the use of causality, probability and inference methods in science and medicine. Williamson’s books Bayesian Nets and Causality and In Defence of Objective Bayesianism develop the view that causality and probability are features of the way we reason about the world, not a part of the world itself. His books Probabilistic Logics and Probabilistic Networks and Lectures on Inductive Logic apply recent developments in Bayesianism to motivate a new approach to inductive logic. His latest book, Evaluating Evidence of Mechanisms in Medicine, seeks to broaden the range of evidence considered by evidence-based medicine. Jon Williamson’s webpage.


Readings: 

(1)  Christian Wallmann and Jon Williamson: The Principal Principle and subjective Bayesianism, European Journal for the Philosophy of Science 10(1):3, 2020. doi:10.1007/s13194-019-0266-4 (Link to PDF)

(2) Jon Williamson: Why Frequentists and Bayesians Need Each Other, Erkenntnis 78:293-318, 2013.  (Link to PDF)


Slides & Video Links: 

J. Williamson’s “Objective Bayesianism from a Philosophical Perspective” Slides.
His full talk is in this Presentation Video.

D. Mayo Casualties Slide
J. Williamson response to Casualties video


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 17 of our general Phil Stat series, which began with the LSE Seminar PH500 on May 21

April 22 “How an information metric could bring truce to the statistics wars” (Daniele Fanelli)

The eighth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

22 April 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

For information about the Phil Stat Wars forum and how to join, click on this link.

“How an information metric could bring truce to the statistics wars”

Daniele Fanelli

Abstract: Both sides of debates on P-values, reproducibility, and other meta-scientific issues are entrenched in traditional methodological assumptions. For example, they often implicitly endorse rigid dichotomies (e.g. published findings are either “true” or “false”, replications either “succeed” or “fail”, research practices are either “good” or “bad”), or make simplifying and monistic assumptions about the nature of research (e.g. publication bias is generally a problem, all results should replicate, data should always be shared).

Thinking about knowledge in terms of information may clear a common ground on which all sides can meet, leaving behind partisan methodological assumptions. In particular, I will argue that a metric of knowledge that I call “K” helps examine research problems in a more genuinely “meta-“ scientific way, giving rise to a methodology that is distinct, more general, and yet compatible with multiple statistical philosophies and methodological traditions.

This talk will present statistical, philosophical and scientific arguments in favour of K, and will give a few examples of its practical applications.

Daniele Fanelli is a Fellow in Quantitative Methodology in the Department of Methodology, London School of Economics and Political Science. He graduated in Natural Sciences, earned a PhD in Behavioural Ecology and trained as a science communicator, before devoting his postdoctoral career to studying the nature of science itself – a field increasingly known as meta-science or meta-research. He has been primarily interested in assessing and explaining the prevalence, causes and remedies of problems that may affect research and publication practices, across the natural and social sciences. Fanelli helps answer these and other questions by analysing patterns in the scientific literature using meta-analysis, regression and any other suitable methodology. He is a member of the Research Ethics and Bioethics Advisory Committee of Italy’s National Research Council, for which he developed the first research integrity guidelines, and of the Research Integrity Committee of the Luxembourg Agency for Research Integrity (LARI).


Readings: 

Fanelli D (2019) A theory and methodology to quantify knowledge. Royal Society Open Science – doi.org/10.1098/rsos.181055. (PDF)

(Optional) Background: Fanelli D (2018) Is science really facing a reproducibility crisis, and do we need it to? PNAS – doi.org/10.1073/pnas.1708272114. (PDF)


Slides & Video Links: 

D. Fanelli “How an information metric could bring truce to the statistics wars” and D. Mayo’s “Casualties” (Video link).


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 16 of our general Phil Stat series, which began with the LSE Seminar PH500 on May 21

February 18 “Testing with models that are not true” (Christian Hennig)

The sixth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

18 February, 2021

TIME: 15:00-16:45 (London); 10-11:45 a.m. (New York, EST)

For information about the Phil Stat Wars forum and how to join, click on this link. 


Testing with Models that Are Not True

Christian Hennig

ABSTRACT: The starting point of my presentation is the apparently popular idea that in order to do hypothesis testing (and more generally frequentist model-based inference) we need to believe that the model is true, and the model assumptions need to be fulfilled. I will argue that this is a misconception. Models are, by their very nature, not “true” in reality. Mathematical results secure favourable characteristics of inference in an artificial model world in which the model assumptions are fulfilled. For using a model in reality we need to ask what happens if the model is violated in a “realistic” way. One key approach is to model a situation in which certain model assumptions of, e.g., the model-based test that we want to apply, are violated, in order to find out what happens then. This, somewhat inconveniently, depends strongly on what we assume, how the model assumptions are violated, whether we make an effort to check them, how we do that, and what alternative actions we take if we find them wanting. I will discuss what we know and what we can’t know regarding the appropriateness of the models that we “assume”, and how to interpret them appropriately, including new results on conditions for model assumption checking to work well, and on untestable assumptions.
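To make the starting question concrete, here is a minimal simulation sketch; it is my own illustration, not from Hennig’s talk or the reading below. It checks the actual size of a nominal 5% one-sample t-test when the normality assumption is violated in one “realistic” way, by skewed data:

```python
# Minimal sketch: actual rejection rate of a nominal 5% t-test when the
# normality assumption is violated (skewed exponential data, H0 true).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, rejections = 10, 20_000, 0
for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n)    # true mean = 1, so H0 holds
    _, p = stats.ttest_1samp(x, popmean=1.0)  # test assumes normality
    rejections += p < 0.05
print(f"actual size: {rejections / reps:.3f}")  # typically drifts from 0.05
```

Whether one should therefore pretest the assumption before running the model-based test is exactly the question taken up in the Shamsudheen and Hennig reading below.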

Christian Hennig has been a Professor in the Department of Statistical Sciences “Paolo Fortunati” at the University of Bologna since November 2018. Hennig’s research interests are cluster analysis, multivariate data analysis incl. classification and data visualisation, robust statistics, foundations and philosophy of statistics, statistical modelling and applications. He was Senior Lecturer in Statistics at UCL, London, 2005-2018. Hennig studied Mathematics in Hamburg and Statistics in Dortmund. He received his doctorate at the University of Hamburg in 1997 and habilitated in 2005. In 2017 Hennig obtained his Italian habilitation. After obtaining his PhD, he worked as research assistant and lecturer at the University of Hamburg and ETH Zürich.


Readings:

M. Iqbal Shamsudheen and Christian Hennig (2020) Should we test the model assumptions before running a model-based test? (PDF)

Mayo D. (2018). “Section 4.8 All Models Are False” excerpt from Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars, CUP. (pp. 296-301)


Slides and Video Links: 

Christian Hennig’s slides: Testing In Models That Are Not True

Christian Hennig Presentation

Christian Hennig Discussion


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 14 of our general Phil Stat series, which began with the LSE Seminar PH500 on May 21

January 7: “Putting the Brakes on the Breakthrough: On the Birnbaum Argument for the Strong Likelihood Principle” (D.Mayo)

The fourth meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

January 7, 16:00 – 17:30  (London time)
11 am-12:30 pm (New York, ET)**
**note time modification and date change

Putting the Brakes on the Breakthrough,

or “How I used simple logic to uncover a flaw in a controversial 60-year-old ‘theorem’ in statistical foundations”

Deborah G. Mayo


ABSTRACT: An essential component of inference based on familiar frequentist (error statistical) notions such as p-values, statistical significance and confidence levels, is the relevant sampling distribution (hence the term sampling theory). This results in violations of a principle known as the strong likelihood principle (SLP), or just the likelihood principle (LP), which says, in effect, that outcomes other than those observed are irrelevant for inferences within a statistical model. Now Allan Birnbaum was a frequentist (error statistician), but he found himself in a predicament: he seemed to have shown that the LP follows from uncontroversial frequentist principles! Bayesians, such as Savage, heralded his result as a “breakthrough in statistics”! But there’s a flaw in the “proof”, and that’s what I aim to show in my presentation by means of 3 simple examples (a formal statement of the SLP follows the list below):

  • Example 1: Trying and Trying Again
  • Example 2: Two instruments with different precisions
    (you shouldn’t get credit/blame for something you didn’t do)
  • The Breakthrough: Don’t Birnbaumize that data my friend
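As promised above, a formal statement of the SLP; this is my gloss of the standard textbook formulation, not taken from the presentation:

```latex
% Strong Likelihood Principle: if outcomes x* from experiment E1 and y* from
% experiment E2, both modelling the same parameter theta, have proportional
% likelihood functions, i.e.
\[
  f_{1}(x^{*} \mid \theta) \;=\; c \, f_{2}(y^{*} \mid \theta)
  \qquad \text{for all } \theta \text{ and some fixed } c > 0,
\]
% then x* and y* should license identical inferences about theta. Error
% statistical accounts violate this, since the two experiments' sampling
% distributions may differ.
```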

As in the last 9 years, I posted an imaginary dialogue (here) with Allan Birnbaum at the stroke of midnight, New Year’s Eve, and this will be relevant for the talk.

Deborah G. Mayo is professor emerita in the Department of Philosophy at Virginia Tech. Her Error and the Growth of Experimental Knowledge won the 1998 Lakatos Prize in philosophy of science. She is a research associate at the London School of Economics: Centre for the Philosophy of Natural and Social Science (CPNSS). She co-edited (with A. Spanos) Error and Inference (2010, CUP). Her most recent book is Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). She founded the Fund for Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (E.R.R.O.R Fund), which sponsored a 2-week summer seminar in Philosophy of Statistics in 2019 for 15 faculty in philosophy, psychology, statistics, law and computer science (co-directed with A. Spanos). She publishes widely in philosophy of science, statistics, and philosophy of experiment. She blogs at errorstatistics.com and phil-stat-wars.com.

For information about the Phil Stat Wars forum and how to join, click on this link. 


Readings:

One of the following 3 papers:

My earliest treatment via counterexample:

A deeper argument can be found in:

For an intermediate Goldilocks version (based on a presentation given at the JSM 2013):

This post from the Error Statistics Philosophy blog will get you oriented. (It has links to other posts on the LP & Birnbaum, as well as background readings/discussions for those who want to dive deeper into the topic.)


Slides and Video Links:

D. Mayo’s slides: “Putting the Brakes on the Breakthrough, or ‘How I used simple logic to uncover a flaw in a controversial 60-year-old ‘theorem’ in statistical foundations’”

D. Mayo’s  presentation:

Discussion on Mayo’s presentation:


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting.

You may wish to look at my rejoinder to a number of statisticians: Rejoinder “On the Birnbaum Argument for the Strong Likelihood Principle”. (It is also above in the link to the complete discussion in the 3rd reading option.)

I often find it useful to look at other treatments. So I put together this short supplement to glance through to clarify a few select points.

*Meeting 12 of our general Phil Stat series, which began with the LSE Seminar PH500 on May 21

November 19: “Randomisation and control in the age of coronavirus?” (Stephen Senn)

The third meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

November 19: 15:00 – 16:45  (London time)
10-11:45 am (New York, EST) 

“Randomisation and Control in the Age of Coronavirus”

Stephen Senn

ABSTRACT: Many critics of randomisation have assumed that it is supposed to guarantee balance of prognostic factors, proceeded to show that this is impossible and then concluded that the theory is flawed. However, the shocking truth about randomisation is exactly the opposite of what they suppose. If we knew that all prognostic factors in a randomised clinical trial were balanced, the standard analysis of such trials would be wrong. The analysis that Fisher proposed for randomised experiments makes an allowance for factors being unbalanced. I shall show how this fundamental misunderstanding of how the randomisation and analysis combination deals with error is the origin of a serious error in interpreting trials. I shall illustrate the points with a game of chance and an actual trial. I conclude by recommending that would-be commentators should not presume to analyse the logic of trials until they have analysed some results.
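As a concrete companion to the abstract, here is a minimal simulation sketch; it is my own illustration, not Senn’s example. In any single trial the prognostic factor is unbalanced between arms, yet the standard analysis, whose standard error allows for such imbalance, attains roughly its nominal 95% coverage over repeated randomisations:

```python
# Minimal sketch: the standard two-sample analysis is calibrated over
# randomisations even though each single randomisation is unbalanced.
import numpy as np

rng = np.random.default_rng(0)
n, true_effect, reps, z = 50, 0.0, 5_000, 1.96
covered = 0
for _ in range(reps):
    prognostic = rng.normal(0.0, 1.0, 2 * n)   # unbalanced in any one trial
    treated = rng.permutation(np.r_[np.ones(n), np.zeros(n)]).astype(bool)
    y = true_effect * treated + prognostic + rng.normal(0.0, 1.0, 2 * n)
    diff = y[treated].mean() - y[~treated].mean()
    se = np.sqrt(y[treated].var(ddof=1) / n + y[~treated].var(ddof=1) / n)
    covered += (diff - z * se) <= true_effect <= (diff + z * se)
print(f"coverage over randomisations: {covered / reps:.3f}")  # close to 0.95
```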

Stephen Senn is a consultant statistician in Edinburgh. His expertise is in statistical methods for drug development and statistical inference. He consults extensively for the pharmaceutical industry in the UK, Europe and the USA on: planning of clinical trials and drug development programmes, project evaluation and prioritization, regulatory advice and representation, data safety monitoring board advice, specialist analyses, and statistical training. Stephen Senn has worked as a statistician but also as an academic in various positions in Switzerland, Scotland, England and Luxembourg. From 2011-2018 he was head of the Competence Center for Methodology and Statistics at the Luxembourg Institute of Health in Luxembourg. He was a Professor in Statistics at the University of Glasgow (2003) and University College London (1995-2003). He received the George C Challis Award of the University of Florida for contributions to biostatistics in 2001, and the PSI Award for most interesting speaker in 25 years of PSI in 2002. In 2009, he was awarded the Bradford Hill Medal of the Royal Statistical Society. In 2017 he gave the Fisher Memorial Lecture. He is an honorary life member of PSI and ISCB.

Information about the Phil Stat Wars forum and how to join is here. 


Readings:

For related posts on randomization by Stephen Senn, see these guest posts from the Error Statistics Philosophy blog:

Slides and Video Links:

Stephen Senn’s slides: Randomisation and Control in the Age of Coronavirus

Stephen Senn’s presentation:

Discussion on Senn’s presentation:

 


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting.

*Meeting 11 of our general Phil Stat series, which began with the LSE Seminar PH500 on May 21

September 24: Bayes factors from all sides: who’s worried, who’s not, and why (R. Morey)

The second meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

September 24: 15:00 – 16:45  (London time)
10-11:45 am (New York, EDT) 

“Bayes Factors from all sides:
who’s worried, who’s not, and why”

Richard Morey


Richard Morey is a Senior Lecturer in the School of Psychology at Cardiff University. In 2008, he earned a PhD in Cognition and Neuroscience and a Master’s degree in Statistics from the University of Missouri. He is the author of over 50 articles and book chapters, and in 2011 he was awarded a Veni Research Talent grant under the Netherlands Research Organization’s Innovational Research Incentives Scheme for work in cognitive psychology. His work spans cognitive science, where he develops and critiques statistical models of cognitive phenomena; statistics, where he is interested in the philosophy of statistical inference and the development of new statistical tools for research use; and the practical side of science, where he is interested in increasing openness in scientific methodology. Morey is the author of the BayesFactor software for Bayesian inference and writes regularly on methodological topics at his blog.

Readings:

R. Morey: Should we Redefine Statistical Significance

Relevant background readings for this meeting, covered in the initial LSE PH500 Phil Stat Seminar, can be found on the Meeting #4 blogpost:
     SIST: Excursion 4 Tour II
     Megateam: Redefine Statistical Significance

Information and directions for joining our forum are here.

Slides and Video Links:

Morey’s slides “Bayes Factors from all sides: who’s worried, who’s not, and why” are at this link: https://richarddmorey.github.io/TalkPhilStat2020/#1

Video Link to Morey Presentation: https://philstatwars.files.wordpress.com/2020/09/richard_presentation.mp4

Video Link to Discussion of Morey Presentation: https://philstatwars.files.wordpress.com/2020/09/richard_discussion.mp4


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting.

*Meeting 9 of our general Phil Stat series, which began with the LSE Seminar PH500 on May 21

Meeting 7 (July 30)–Discussion of JSM 2020 Panel on P-values & “Statistical Significance”

All: On July 30 (10 a.m. EDT) I will give a virtual version of my JSM presentation, delivered remotely like the one I will actually give on Aug 6 at the JSM. Co-panelist Stan Young may as well. One of our surprise guests tomorrow (not at the JSM) will be Yoav Benjamini! If you’re interested in attending our July 30 practice session,* please follow the directions here. Background items for this session are in the “readings” and “memos” of session 5.

Members: Materials resulting from Meeting 7:

“Work of renowned UK psychologist Hans Eysenck ruled ‘unsafe’”, The Guardian (Oct 11, 2019) (LINK).

*unless you’re already on our LSE Phil500 list

JSM 2020 Panel Flyer (PDF)
JSM online program (w/ panel abstract & information):

Slides & Video Links for Meeting 7:

DRAFT OF Mayo JSM 2020 SLIDES (PDF)

FINAL Mayo JSM 2020 SLIDES (PDF)

 

Meeting 4 (June 11)


IV. (June 11) Rejection Fallacies: Do P-values exaggerate evidence? The Jeffreys-Lindley paradox, or the Bayes/Fisher disagreement (a numerical sketch follows the readings below):

Reading:

SIST: Excursion 4 Tour II

Recommended (if time): Excursion 4 Tour I: The Myth of “The Myth of Objectivity” 
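As quick orientation to the Jeffreys-Lindley paradox, here is the numerical sketch promised above. It is my own illustration, assuming a point null H0: θ = 0 against a N(0, τ²) prior under H1, with σ known; the session readings are the authoritative treatment. Holding the result just significant at z = 1.96 (p ≈ 0.05), the Bayes factor swings toward the null as n grows:

```python
# Minimal sketch of the Jeffreys-Lindley paradox: fix z = 1.96 (p ~ 0.05) and
# watch BF01 (H0: theta = 0 vs H1: theta ~ N(0, tau^2)) grow with n.
import numpy as np

def bf01(z, n, sigma=1.0, tau=1.0):
    r = n * tau**2 / sigma**2                # prior-to-sampling variance ratio
    return np.sqrt(1 + r) * np.exp(-0.5 * z**2 * r / (1 + r))

for n in (10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9}: BF01 = {bf01(1.96, n):.2f}")
# BF01 < 1 (favours H1) at n = 10, but the same p-value yields BF01 ~ 15 at
# n = 10,000: the Bayes/Fisher disagreement of the session title.
```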


Mayo Memos for Meeting 4

–Souvenirs for Meeting 4: Q: Have We Drifted From Testing Country? (Notes From an Intermission); R: The Severity Interpretation of Rejection (SIR)

FUN! Take a look at Richard Morey’s newly updated SEV app. It will display P-values, power and SEV (click display options). You can change the default by clicking the “details” tab and then using that link. Don’t forget to change the range of parameter values. If you change n to 25, you’ll get the answers to the example I gave in meeting #2. (A minimal computational sketch of these quantities follows the list below.)

  1. Solutions to problems given in Meeting #2: With X̅ = 154 (PDF); with X̅ = 152 (PDF)
  2. Using the app for simple P-values: I wasn’t able to use the board to draw the curves for different P-values in meeting #2. Here’s how you can view them using Morey’s app for simple P-values. 
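Here is the minimal computational sketch promised above of the three quantities the app displays. The setup is my assumption, not extracted from Morey’s app: a one-sided test T+ of H0: μ ≤ 150 vs H1: μ > 150 with σ = 10 known, so that n = 25 gives standard error 2, matching the X̅ = 152 and X̅ = 154 solutions linked above.

```python
# Minimal sketch (assumed setup: H0: mu <= 150 vs H1: mu > 150, sigma = 10, n = 25).
from scipy.stats import norm

mu0, sigma, n = 150, 10, 25
se = sigma / n ** 0.5                       # standard error = 2

def p_value(xbar):
    """One-sided p-value: P(Xbar >= xbar; mu = mu0)."""
    return norm.sf((xbar - mu0) / se)

def power(mu1, alpha=0.025):
    """P(test rejects at level alpha; mu = mu1)."""
    cutoff = mu0 + norm.isf(alpha) * se
    return norm.sf((cutoff - mu1) / se)

def severity(xbar, mu1):
    """SEV for inferring mu > mu1, given xbar: P(Xbar <= xbar; mu = mu1)."""
    return norm.cdf((xbar - mu1) / se)

for xbar in (152, 154):
    print(f"xbar = {xbar}: p = {p_value(xbar):.3f}, "
          f"SEV(mu > 151) = {severity(xbar, 151):.3f}")
```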

How do you interpret it? This just came out in NEJM (in defending policies based on antibody tests). “In the world of randomized clinical trials, statisticians test scientific hypotheses by requiring a probability of less than 5% that the observed result could have occurred by chance.” (Waiting for Certainty on Covid-19 Antibody Tests — At What Cost?)  https://www.nejm.org/doi/full/10.1056/NEJMp2017739?source=nejmtwitter&medium=organic-social
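For contrast, recall the standard definition that the NEJM sentence garbles: with test statistic d(X) and observed value d(x0),

```latex
\[
  p \;=\; \Pr\big( d(\mathbf{X}) \ge d(\mathbf{x}_0) \,;\, H_0 \big),
\]
```

the probability, computed under H0, of results at least as extreme as the one observed. It is not the probability that H0 is true, nor the probability that the observed result “occurred by chance”.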

-See details on Bonus Meeting: June 25.


Slides & Video Links for Meeting 4

Slides: (PDF)