April 22 “How an information metric could bring truce to the statistics wars” (Daniele Fanelli)

The eighth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

22 April 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

For information about the Phil Stat Wars forum and how to join, click on this link.

“How an information metric could bring truce to the statistics wars”

Daniele Fanelli

Abstract: Both sides of debates on P-values, reproducibility, and other meta-scientific issues are entrenched in traditional methodological assumptions. For example, they often implicitly endorse rigid dichotomies (e.g. published findings are either “true” or “false”, replications either “succeed” or “fail”, research practices are either “good” or “bad”), or make simplifying and monistic assumptions about the nature of research (e.g. publication bias is generally a problem, all results should replicate, data should always be shared).

Thinking about knowledge in terms of information may clear a common ground on which all sides can meet, leaving behind partisan methodological assumptions. In particular, I will argue that a metric of knowledge that I call “K” helps examine research problems in a more genuinely “meta-” scientific way, giving rise to a methodology that is distinct, more general, and yet compatible with multiple statistical philosophies and methodological traditions.

This talk will present statistical, philosophical and scientific arguments in favour of K, and will give a few examples of its practical applications.

Daniele Fanelli is a London School of Economics Fellow in Quantitative Methodology, Department of Methodology, London School of Economics and Political Science. He graduated in Natural Sciences, earned a PhD in Behavioural Ecology and trained as a science communicator before devoting his postdoctoral career to studying the nature of science itself – a field increasingly known as meta-science or meta-research. He has been primarily interested in assessing and explaining the prevalence of, causes of, and remedies for problems that may affect research and publication practices, across the natural and social sciences. Fanelli investigates these and related questions by analysing patterns in the scientific literature using meta-analysis, regression and any other suitable methodology. He is a member of the Research Ethics and Bioethics Advisory Committee of Italy’s National Research Council, for which he developed the first research integrity guidelines, and of the Research Integrity Committee of the Luxembourg Agency for Research Integrity (LARI).


Readings: 

Fanelli D (2019) A theory and methodology to quantify knowledge. Royal Society Open Science – doi.org/10.1098/rsos.181055. (PDF)

(Optional) Background: Fanelli D (2018) Is science really facing a reproducibility crisis, and do we need it to? PNAS – doi.org/10.1073/pnas.1708272114. (PDF)


Slides & Video Links: 


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 16 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

March 25 “How should applied science journal editors deal with statistical controversies?” (Mark Burgman)

The seventh meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

25 March, 2021

TIME: 15:00-16:45 (London); 11:00-12:45 (New York, NOTE TIME CHANGE)

For information about the Phil Stat Wars forum and how to join, click on this link.

How should applied science journal editors deal with statistical controversies?

Mark Burgman

Mark Burgman is Director of the Centre for Environmental Policy at Imperial College London, where he holds the Chair in Risk Analysis & Environmental Policy, and is Editor-in-Chief of the journal Conservation Biology. Previously, he was Adrienne Clarke Chair of Botany at the University of Melbourne, Australia. He works on expert judgement, ecological modelling, conservation biology and risk assessment. He has written models for biosecurity, medicine regulation, marine fisheries, forestry, irrigation, electrical power utilities, mining, and national park planning. He received a BSc from the University of New South Wales (1974), an MSc from Macquarie University, Sydney (1981), and a PhD from the State University of New York at Stony Brook (1987). He worked as a consultant ecologist and research scientist in Australia, the United States and Switzerland during the 1980s, before joining the University of Melbourne in 1990. He joined CEP in February 2017. He has published over two hundred and fifty refereed papers and book chapters and seven authored books. He was elected to the Australian Academy of Science in 2006.

Abstract: Applied sciences differ in their focus. In environmental science, as in epidemiology, problems are often framed in the context of a crisis. Decisions are imminent, data and understanding are incomplete, and the ramifications of decisions are substantial. This context makes the implications of inferences from data especially poignant. It also makes the claims made by fervent and dedicated authors especially challenging. The full gamut of potential statistical foibles and psychological frailties is on display. In this presentation, I will outline and summarise the kinds of errors of reasoning that are especially prevalent in ecology and conservation biology. I will outline how these things appear to be changing, providing some recent examples. Finally, I will describe some implications of alternative editorial policies.

Some questions:

  • Would it be a good thing to dispense with p-values, either through encouragement or through strict editorial policy?
  • Would it be a good thing to insist on confidence intervals?
  • Should editors of journals in a broad discipline band together and post common editorial policies for statistical inference?
  • Should all papers be reviewed by a professional statistician?
  • If so, which kind?

Slides and Readings: 

*Mark Burgman’s Draft Slides:  “How should applied science journal editors deal with statistical controversies?” (pdf)

*D. Mayo’s Slides: “The Statistics Wars and Their Casualties for Journal Editors: Intellectual Conflicts of Interest: Questions for Burgman” (pdf)

*A paper of mine from the Joint Statistical Meetings, “Rejecting Statistical Significance Tests: Defanging the Arguments”, discusses an episode that is relevant for the general topic of how journal editors should deal with statistical controversies.


Video Links: 

Mark Burgman’s presentation:

D. Mayo’s Casualties:

Please feel free to continue the discussion by posting questions or thoughts in the comments section of this post below.

 


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 15 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

February 18 “Testing with models that are not true” (Christian Hennig)

The sixth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

18 February, 2021

TIME: 15:00-16:45 (London); 10-11:45 a.m. (New York, EST)

For information about the Phil Stat Wars forum and how to join, click on this link. 


Testing with Models that Are Not True

Christian Hennig

ABSTRACT: The starting point of my presentation is the apparently popular idea that in order to do hypothesis testing (and, more generally, frequentist model-based inference) we need to believe that the model is true and that the model assumptions are fulfilled. I will argue that this is a misconception. Models are, by their very nature, not “true” in reality. Mathematical results secure favourable characteristics of inference in an artificial model world in which the model assumptions are fulfilled. To use a model in reality, we need to ask what happens if the model is violated in a “realistic” way. One key approach is to model a situation in which certain assumptions of the model-based test we want to apply are violated, in order to find out what happens then. This, somewhat inconveniently, depends strongly on what we assume, how the model assumptions are violated, whether we make an effort to check them, how we do that, and what alternative actions we take if we find them wanting. I will discuss what we know and what we can’t know regarding the appropriateness of the models that we “assume”, and how to interpret them appropriately, including new results on conditions for model assumption checking to work well, and on untestable assumptions.
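As a small, self-contained illustration of the kind of question raised in the abstract (my own sketch, not taken from Hennig’s talk or paper; the helper function `rejection_rate` is hypothetical), one can simulate how far the actual type I error rate of a standard two-sample t-test drifts from its nominal 5% level when the “assumed” normal model is violated:

```python
# Illustrative sketch only: estimate the actual type I error rate of a
# two-sample t-test when both samples come from the same (possibly
# non-normal) distribution, so the null hypothesis of equal means is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(sampler, n=20, reps=20_000, alpha=0.05):
    """Estimate P(reject H0) when both samples are drawn from `sampler`."""
    rejections = 0
    for _ in range(reps):
        x, y = sampler(n), sampler(n)
        _, p = stats.ttest_ind(x, y)
        rejections += p < alpha
    return rejections / reps

# Only the shape of the error distribution changes between the two runs.
print("normal errors:   ", rejection_rate(lambda n: rng.normal(size=n)))
print("lognormal errors:", rejection_rate(lambda n: rng.lognormal(size=n)))
```

Swapping in other error distributions, sample sizes, or a preliminary assumption check turns this into the sort of experiment examined in the Shamsudheen and Hennig paper listed under Readings below.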

Christian Hennig has been a Professor in the Department of Statistical Sciences “Paolo Fortunati” at the University of Bologna since November 2018. His research interests are cluster analysis, multivariate data analysis including classification and data visualisation, robust statistics, foundations and philosophy of statistics, statistical modelling and applications. He was Senior Lecturer in Statistics at UCL, London, from 2005 to 2018. Hennig studied Mathematics in Hamburg and Statistics in Dortmund. He received his PhD from the University of Hamburg in 1997 and habilitated there in 2005; in 2017 he obtained his Italian habilitation. After his PhD, he worked as a research assistant and lecturer at the University of Hamburg and ETH Zürich.


Readings:

M. Iqbal Shamsudheen and Christian Hennig (2020). Should we test the model assumptions before running a model-based test? (PDF)

Mayo D. (2018). “Section 4.8 All Models Are False” excerpt from Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars, CUP. (pp. 296-301)


Slides and Video Links: 

Christian Hennig’s slides: Testing In Models That Are Not True

Christian Hennig Presentation

Christian Hennig Discussion


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 14 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

January 28 “How can we improve replicability?” (Alexander Bird)

The fifth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

28 January, 2021

TIME: 15:00-16:45 (London); 10-11:45 a.m. (New York, EST)

“How can we improve replicability?”

Alexander Bird

ABSTRACT: It is my view that the unthinking application of null hypothesis significance testing is a leading cause of a high rate of replication failure in certain fields.  What can be done to address this, within the NHST framework?

Alexander Bird is President of the British Society for the Philosophy of Science, Bertrand Russell Professor in the Department of Philosophy, University of Cambridge, and Fellow and Director of Studies at St John’s College, Cambridge. Previously he was the Peter Sowerby Professor of Philosophy and Medicine in the Department of Philosophy, King’s College London; before that he held the chair in Philosophy at the University of Bristol, and earlier was a lecturer and then reader at the University of Edinburgh. His work is principally in those areas where philosophy of science overlaps with metaphysics and epistemology. He has a particular interest in the philosophy of medicine, especially regarding methodological issues in causal and statistical inference. Website: http://www.alexanderbird.org

For information about the Phil Stat Wars forum and how to join, click on this link. 


Readings:

Bird, A. Understanding the Replication Crisis as a Base Rate Fallacy, The British Journal for the Philosophy of Science, axy051, (13 August 2018).

A few pages from D. Mayo, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (SIST), pp. 361-370: Section 5.6 “Positive Predictive Value: Fine for Luggage”.
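For readers who want the base-rate arithmetic behind these readings in a concrete form, here is a minimal sketch (my own illustration, not taken from Bird’s paper or from SIST; the function name `ppv` and the numbers are mine):

```python
# Illustrative sketch of the "replication crisis as a base-rate fallacy"
# arithmetic. Assume a field in which only a fraction `prior_true` of tested
# hypotheses are true, tests run at significance level `alpha` with power
# `power`. The positive predictive value (PPV) is the share of "significant"
# results that are true positives.

def ppv(prior_true: float, alpha: float, power: float) -> float:
    """P(hypothesis true | test rejects the null)."""
    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    # With 10% of tested hypotheses true, alpha = 0.05 and power = 0.8,
    # PPV is about 0.64, i.e. roughly a third of "significant" findings are
    # false positives even without any questionable research practices.
    print(f"PPV = {ppv(prior_true=0.10, alpha=0.05, power=0.80):.2f}")
```

On these purely illustrative numbers, a substantial rate of replication failure is expected from the base rates alone, which is the sense in which the readings connect replicability to the positive predictive value.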


Slides and Video Links: (to be posted when available)

Alexander Bird Presentation:

Alexander Bird Discussion:

 


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 13 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

January 7: “Putting the Brakes on the Breakthrough: On the Birnbaum Argument for the Strong Likelihood Principle” (D. Mayo)

The fourth meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

January 7, 16:00 – 17:30  (London time)
11 am-12:30 pm (New York, ET)**
**note time modification and date change

Putting the Brakes on the Breakthrough,

or “How I used simple logic to uncover a flaw in a controversial 60-year old ‘theorem’ in statistical foundations” 

Deborah G. Mayo


ABSTRACT: An essential component of inference based on familiar frequentist (error statistical) notions – p-values, statistical significance and confidence levels – is the relevant sampling distribution (hence the term sampling theory). This results in violations of a principle known as the strong likelihood principle (SLP), or just the likelihood principle (LP), which says, in effect, that outcomes other than those observed are irrelevant for inferences within a statistical model. Now Allan Birnbaum was a frequentist (error statistician), but he found himself in a predicament: he seemed to have shown that the LP follows from uncontroversial frequentist principles! Bayesians, such as Savage, heralded his result as a “breakthrough in statistics”! But there’s a flaw in the “proof”, and that’s what I aim to show in my presentation by means of 3 simple examples:

  • Example 1: Trying and Trying Again
  • Example 2: Two instruments with different precisions
    (you shouldn’t get credit/blame for something you didn’t do)
  • The Breakthrough: Don’t Birnbaumize that data my friend

As I have done for the last 9 years, I posted an imaginary dialogue (here) with Allan Birnbaum at the stroke of midnight, New Year’s Eve, and this will be relevant for the talk.
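For readers new to this literature, the second example above (two instruments with different precisions, reminiscent of Cox’s well-known two measuring instruments case) can be made concrete with a small simulation; this is my own sketch, not part of Mayo’s materials:

```python
# Illustrative sketch: a fair coin decides whether each measurement is taken
# with a precise instrument (sigma = 1) or an imprecise one (sigma = 10).
import numpy as np

rng = np.random.default_rng(2)
reps = 100_000
sigmas = np.where(rng.random(reps) < 0.5, 1.0, 10.0)   # coin picks instrument
x = rng.normal(loc=0.0, scale=sigmas)                   # one measurement per trial

print("marginal SD over both instruments:", round(float(x.std()), 2))
print("SD given the precise instrument:  ", round(float(x[sigmas == 1.0].std()), 2))
print("SD given the imprecise instrument:", round(float(x[sigmas == 10.0].std()), 2))
# An error report based on the marginal (averaged) spread describes neither
# of the measurements one could actually have made; conditioning on the
# instrument actually used gives the error probabilities relevant to the
# case at hand.
```

The relevant standard error is the one for the instrument actually used, which is what the parenthetical “you shouldn’t get credit/blame for something you didn’t do” is pointing at.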

Deborah G. Mayo is professor emerita in the Department of Philosophy at Virginia Tech. Her Error and the Growth of Experimental Knowledge won the 1998 Lakatos Prize in philosophy of science. She is a research associate at the London School of Economics: Centre for the Philosophy of Natural and Social Science (CPNSS). She co-edited (with A. Spanos) Error and Inference (2010, CUP). Her most recent book is Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). She founded the Fund for Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (E.R.R.O.R. Fund), which sponsored a 2-week summer seminar in Philosophy of Statistics in 2019 for 15 faculty in philosophy, psychology, statistics, law and computer science (co-directed with A. Spanos). She publishes widely in philosophy of science, statistics, and philosophy of experiment. She blogs at errorstatistics.com and phil-stat-wars.com.

For information about the Phil Stat Wars forum and how to join, click on this link. 


Readings:

One of the following 3 papers:

My earliest treatment via counterexample:

A deeper argument can be found in:

For an intermediate Goldilocks version (based on a presentation given at the JSM 2013):

This post from the Error Statistics Philosophy blog will get you oriented. (It has links to other posts on the LP & Birnbaum, as well as background readings/discussions for those who want to dive deeper into the topic.)


Slides and Video Links:

D. Mayo’s slides: “Putting the Brakes on the Breakthrough, or ‘How I used simple logic to uncover a flaw in a controversial 60-year old ‘theorem’ in statistical foundations’”

D. Mayo’s  presentation:

Discussion on Mayo’s presentation:


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting.

You may wish to look at my rejoinder to a number of statisticians: Rejoinder “On the Birnbaum Argument for the Strong Likelihood Principle”. (It is also above in the link to the complete discussion in the 3rd reading option.)

I often find it useful to look at other treatments. So I put together this short supplement to glance through to clarify a few select points.

*Meeting 12 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

November 19: “Randomisation and control in the age of coronavirus?” (Stephen Senn)

The third meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

November 19: 15:00 – 16:45  (London time)
10-11:45 am (New York, EST) 

“Randomisation and Control in the Age of Coronavirus”

Stephen Senn

ABSTRACT: Many critics of randomisation have assumed that it is supposed to guarantee balance of prognostic factors, proceeded to show that this is impossible and then concluded that the theory is flawed. However, the shocking truth about randomisation is exactly the opposite of what they suppose. If we knew that all prognostic factors in a randomised clinical trial were balanced, the standard analysis of such trials would be wrong. The analysis that Fisher proposed for randomised experiments makes an allowance for factors being unbalanced. I shall show how this fundamental misunderstanding of how the randomisation and analysis combination deals with error is the origin of a serious error in interpreting trials. I shall illustrate the points with a game of chance and an actual trial. I conclude by recommending that would-be commentators should not presume to analyse the logic of trials until they have analysed some results.
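As a rough numerical companion to the abstract (my own sketch, not part of Senn’s talk; the setup and variable names are mine), the following simulation shows that randomisation leaves a prognostic covariate somewhat unbalanced in every single trial, while the standard unadjusted analysis still attains roughly its nominal type I error rate averaged over randomisations, because its standard error already allows for that imbalance:

```python
# Illustrative sketch: two-arm randomised trial with a prognostic covariate
# and no true treatment effect. Randomisation does not balance the covariate
# in any single trial, yet the unadjusted t-test holds its nominal level
# on average over randomisations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, alpha = 40, 10_000, 0.05
rejections, imbalances = 0, []

for _ in range(reps):
    prognostic = rng.normal(size=n)                            # prognostic factor
    treat = rng.permutation([0, 1] * (n // 2)).astype(bool)    # randomise 20:20
    outcome = prognostic + rng.normal(size=n)                  # null: no effect
    imbalances.append(prognostic[treat].mean() - prognostic[~treat].mean())
    _, p = stats.ttest_ind(outcome[treat], outcome[~treat])
    rejections += p < alpha

print("mean |covariate imbalance| per trial:", round(float(np.mean(np.abs(imbalances))), 3))
print("type I error rate of unadjusted test:", rejections / reps)
```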

Stephen Senn is a consultant statistician in Edinburgh. His expertise is in statistical methods for drug development and statistical inference. He consults extensively for the pharmaceutical industry in the UK, Europe and the USA on planning of clinical trials and drug development programmes, project evaluation and prioritization, regulatory advice and representation, data safety monitoring board advice, specialist analyses, and statistical training. Stephen Senn has worked as a statistician but also as an academic in various positions in Switzerland, Scotland, England and Luxembourg. From 2011 to 2018 he was head of the Competence Center for Methodology and Statistics at the Luxembourg Institute of Health. He was a Professor in Statistics at the University of Glasgow (2003) and University College London (1995-2003). He received the George C Challis Award of the University of Florida for contributions to biostatistics in 2001, and the PSI Award for most interesting speaker in 25 years of PSI in 2002. In 2009, he was awarded the Bradford Hill Medal of the Royal Statistical Society. In 2017 he gave the Fisher Memorial Lecture. He is an honorary life member of PSI and ISCB.

Information about the Phil Stat Wars forum and how to join is here. 


Readings:

For related posts on randomization by Stephen Senn, see these guest posts from the Error Statistics Philosophy blog:

Slides and Video Links:

Stephen Senn’s slides: Randomisation and Control in the Age of Coronavirus

Stephen Senn’s presentation:

Discussion on Senn’s presentation:

 


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting.

*Meeting 11 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

The P-Values Debate

 

National Institute of Statistical Sciences (NISS): The Statistics Debate (Video)

 

The Statistics Debate

October 15, 2020: Noon – 2 pm ET
(17-19:00 London Time)

Website: https://www.niss.org/events/statistics-debate
(Online webinar debate, free but must register to attend on website above)

 

Debate Host: Dan Jeske (University of California, Riverside)

Participants:
Jim Berger (Duke University)
Deborah Mayo (Virginia Tech)
David Trafimow (New Mexico State University)

Where do you stand?

  • Given the issues surrounding the misuses and abuse of p-values, do you think p-values should be used?
  • Do you think the use of estimation and confidence intervals eliminates the need for hypothesis tests?
  • Bayes Factors – are you for or against?
  • How should we address the reproducibility crisis?

If you are intrigued by these questions and have an interest in how these questions might be answered – one way or the other – then this is the event for you!

Want to get a sense of the thinking behind the practicality (or not) of various statistical approaches?  Interested in hearing both sides of the story – during the same session!?

This event will be held in a debate-type format. The participants will be given selected questions ahead of time, so they have a chance to think about their responses, but this is intended to be much less of a presentation and more of a give-and-take between the debaters.

So – let’s have fun with this!  The best way to find out what happens is to register and attend!

September 24: Bayes factors from all sides: who’s worried, who’s not, and why (R. Morey)

The second meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

September 24: 15:00 – 16:45  (London time)
10-11:45 am (New York, EDT) 

“Bayes Factors from all sides:
who’s worried, who’s not, and why”

Richard Morey


Richard Morey is a Senior Lecturer in the School of Psychology at Cardiff University. In 2008, he earned a PhD in Cognition and Neuroscience and a Master’s degree in Statistics from the University of Missouri. He is the author of over 50 articles and book chapters, and in 2011 he was awarded a Veni Research Talent grant from the Netherlands Research Organization’s Innovational Research Incentives Scheme for work in cognitive psychology. His work spans cognitive science, where he develops and critiques statistical models of cognitive phenomena; statistics, where he is interested in the philosophy of statistical inference and the development of new statistical tools for research use; and the practical side of science, where he is interested in increasing openness in scientific methodology. Morey is the author of the BayesFactor software for Bayesian inference and writes regularly on methodological topics at his blog.

Readings:

R. Morey: Should we Redefine Statistical Significance

Relevant background readings for this meeting, covered in the initial LSE PH500 Phil Stat Seminar, can be found on the Meeting #4 blogpost:
  • SIST: Excursion 4 Tour II
  • Megateam: Redefine Statistical Significance

Information and directions for joining our forum are here.

Slides and Video Links:

Morey’s slides “Bayes Factors from all sides: who’s worried, who’s not, and why” are at this link: https://richarddmorey.github.io/TalkPhilStat2020/#1

Video Link to Morey Presentation: https://philstatwars.files.wordpress.com/2020/09/richard_presentation.mp4

Video Link to Discussion of Morey Presentation: https://philstatwars.files.wordpress.com/2020/09/richard_discussion.mp4


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting.

*Meeting 9 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

August 20 (meeting 8) of the Phil Stat Seminar: Preregistration as a Tool to Evaluate Severity (D. Lakens)


We begin our new Phil Stat forum:

The Statistics Wars
and Their Casualties

August 20: TIME: 15:00-16:45 (London); 10:00-11:45 a.m. (New York, EDT)

“Preregistration as a Tool to Evaluate
the Severity of a Test”

Daniël Lakens

Eindhoven University of Technology

Reading (by Lakens)

“The value of preregistration for psychological science: A conceptual analysis”, Japanese Psychological Review 62(3), 221–230, (2019).

Optional editorial: “Pandemic researchers — recruit your own best critics”, Nature 581, p. 121, (2020).

Information and directions for joining our forum are here.


SLIDES & VIDEO LINKS FOR MEETING 8:

Prof. D. Lakens’ slides (PDF)

 

VIDEO LINKS (3 parts):
(Viewing in full screen mode helps with buffering issues.)

Part 1: Mayo’s Introduction & Lakens’ presentation
Part 2: Lakens’ presentation continued
Part 3: Discussion