Foundations of Statistics

January 7: “Putting the Brakes on the Breakthrough: On the Birnbaum Argument for the Strong Likelihood Principle” (D. Mayo)

The fourth meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

January 7, 16:00 – 17:30 (London time)
11 am – 12:30 pm (New York, ET)**
**note time modification and date change

Putting the Brakes on the Breakthrough,

or “How I used simple logic to uncover a flaw in a controversial 60-year-old ‘theorem’ in statistical foundations” 

Deborah G. Mayo

.

ABSTRACT: An essential component of inference based on familiar frequentist (error statistical) notions, such as p-values, statistical significance, and confidence levels, is the relevant sampling distribution (hence the term sampling theory). This results in violations of a principle known as the strong likelihood principle (SLP), or just the likelihood principle (LP), which says, in effect, that outcomes other than those observed are irrelevant for inferences within a statistical model. Now Allan Birnbaum was himself a frequentist (error statistician), but he found himself in a predicament: he seemed to have shown that the LP follows from uncontroversial frequentist principles! Bayesians, such as Savage, heralded his result as a “breakthrough in statistics”! But there is a flaw in the “proof”, and that is what I aim to show in my presentation by means of 3 simple examples:

  • Example 1: Trying and Trying Again
  • Example 2: Two instruments with different precisions
    (you shouldn’t get credit/blame for something you didn’t do)
  • The Breakthrough: Don’t Birnbaumize that data, my friend
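The force of Example 1 (“Trying and Trying Again”) can be previewed with a quick simulation: under optional stopping, the actual probability of eventually reporting a nominally significant result far exceeds the 0.05 level, even though the likelihood function for the final data is blind to the stopping rule. This is a generic illustrative sketch of my own, not material from the talk; the choices of `n_max`, the 1.96 cutoff, and the trial count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def try_and_try_again(n_max=100, z_cut=1.96):
    """Sample N(0,1) data one point at a time under a TRUE null (mu = 0);
    report True if |sqrt(n) * xbar| ever exceeds z_cut before n_max,
    i.e. if persistence eventually yields nominal 'significance'."""
    x = rng.standard_normal(n_max)
    n = np.arange(1, n_max + 1)
    z = np.cumsum(x) / np.sqrt(n)   # z_n = sqrt(n) * running mean
    return bool(np.any(np.abs(z) > z_cut))

trials = 5000
rejections = sum(try_and_try_again() for _ in range(trials))
print(f"P(eventual 'significance') ~ {rejections / trials:.2f} (nominal level: 0.05)")
```

With `n_max = 100` the simulated rate typically lands well above the nominal 0.05. That gap is precisely the sampling-distribution information the (strong) likelihood principle would have us ignore, since the likelihood of the final data does not register the stopping rule.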

As I have done for each of the last 9 years, I posted an imaginary dialogue (here) with Allan Birnbaum at the stroke of midnight on New Year’s Eve, and it will be relevant for the talk.

Deborah G. Mayo is professor emerita in the Department of Philosophy at Virginia Tech. Her Error and the Growth of Experimental Knowledge won the 1998 Lakatos Prize in philosophy of science. She is a research associate at the London School of Economics: Centre for the Philosophy of Natural and Social Science (CPNSS). She co-edited (with A. Spanos) Error and Inference (2010, CUP). Her most recent book is Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). She founded the Fund for Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (E.R.R.O.R Fund), which sponsored a 2-week summer seminar in Philosophy of Statistics in 2019 for 15 faculty in philosophy, psychology, statistics, law, and computer science (co-directed with A. Spanos). She publishes widely in philosophy of science, statistics, and philosophy of experiment. She blogs at errorstatistics.com and phil-stat-wars.com.

For information about the Phil Stat Wars forum and how to join, click on this link. 


Readings:

One of the following 3 papers:

My earliest treatment via counterexample:

A deeper argument can be found in:

For an intermediate Goldilocks version (based on a presentation given at the JSM 2013):

This post from the Error Statistics Philosophy blog will get you oriented. (It has links to other posts on the LP & Birnbaum, as well as background readings/discussions for those who want to dive deeper into the topic.)


Slides and Video Links:

D. Mayo’s slides: “Putting the Brakes on the Breakthrough, or ‘How I used simple logic to uncover a flaw in a controversial 60-year-old “theorem” in statistical foundations’”

D. Mayo’s presentation:

Discussion on Mayo’s presentation:


Mayo’s Memos: Any info or events that arise and seem relevant to share with y’all before the meeting.

You may wish to look at my rejoinder to a number of statisticians: Rejoinder “On the Birnbaum Argument for the Strong Likelihood Principle”. (It is also above in the link to the complete discussion in the 3rd reading option.)

I often find it useful to look at other treatments. So I put together this short supplement to glance through to clarify a few select points.

*Meeting 12 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

The Statistics Debate

October 15, 2020: Noon – 2 pm ET
(17-19:00 London Time)

Website: https://www.niss.org/events/statistics-debate
(Online webinar debate, free but must register to attend on website above)

 

Debate Host: Dan Jeske (University of California, Riverside)

Participants:
Jim Berger (Duke University)
Deborah Mayo (Virginia Tech)
David Trafimow (New Mexico State University)

Where do you stand?

  • Given the issues surrounding the misuse and abuse of p-values, do you think p-values should be used?
  • Do you think the use of estimation and confidence intervals eliminates the need for hypothesis tests?
  • Bayes Factors – are you for or against?
  • How should we address the reproducibility crisis?
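To make the debate questions concrete, here is a minimal sketch (my own, not from the debate materials) that computes the three quantities at issue (a p-value, a confidence interval, and a Bayes factor) for the same normal-mean data. The prior scale `tau`, the function name, and the example numbers are arbitrary assumptions, chosen only to show that the summaries can point in different directions.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def compare_approaches(xbar, n, sigma=1.0, tau=1.0):
    """For a normal mean with known sigma, contrast three summaries of the
    same data: a two-sided p-value for H0: mu = 0, a 95% CI for mu, and the
    Bayes factor BF01 for H0: mu = 0 vs H1: mu ~ N(0, tau^2)."""
    se = sigma / math.sqrt(n)
    z = xbar / se
    p = 2 * (1 - normal_cdf(abs(z)))
    ci = (xbar - 1.96 * se, xbar + 1.96 * se)
    # Closed-form BF01 under the conjugate normal alternative:
    # BF01 = sqrt(1 + r) * exp(-(z^2 / 2) * r / (1 + r)), with r = n*tau^2/sigma^2
    r = n * tau**2 / sigma**2
    bf01 = math.sqrt(1 + r) * math.exp(-(z**2 / 2) * r / (1 + r))
    return p, ci, bf01

p, ci, bf01 = compare_approaches(xbar=0.2, n=100)
print(f"p = {p:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), BF01 = {bf01:.2f}")
```

With these illustrative numbers the p-value falls just under 0.05 and the 95% CI excludes 0, yet BF01 mildly favors the null: an instance of the Jeffreys–Lindley tension that fuels exactly these debate questions.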

If you are intrigued by these questions and have an interest in how they might be answered – one way or the other – then this is the event for you!

Want to get a sense of the thinking behind the practicality (or not) of various statistical approaches?  Interested in hearing both sides of the story – during the same session!?

This event will be held in a debate-style format. The participants will be given selected questions ahead of time so they have a chance to think about their responses, but this is intended to be much less a set of presentations and more a give-and-take between the debaters.

So – let’s have fun with this!  The best way to find out what happens is to register and attend!