The Stat Wars and Their Casualties

March 25 “How should applied science journal editors deal with statistical controversies?” (Mark Burgman)

The seventh meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

25 March, 2021

TIME: 15:00-16:45 (London); 11:00-12:45 (New York, NOTE TIME CHANGE)

For information about the Phil Stat Wars forum and how to join, click on this link.

How should applied science journal editors deal with statistical controversies?

Mark Burgman

Mark Burgman is the Director of the Centre for Environmental Policy at Imperial College London, Chair in Risk Analysis & Environmental Policy, and Editor-in-Chief of the journal Conservation Biology. Previously, he was Adrienne Clarke Chair of Botany at the University of Melbourne, Australia. He works on expert judgement, ecological modelling, conservation biology and risk assessment. He has written models for biosecurity, medicine regulation, marine fisheries, forestry, irrigation, electrical power utilities, mining, and national park planning. He received a BSc from the University of New South Wales (1974), an MSc from Macquarie University, Sydney (1981), and a PhD from the State University of New York at Stony Brook (1987). He worked as a consultant ecologist and research scientist in Australia, the United States and Switzerland during the 1980s before joining the University of Melbourne in 1990. He joined CEP in February 2017. He has published over two hundred and fifty refereed papers and book chapters and seven authored books. He was elected to the Australian Academy of Science in 2006.

Abstract: Applied sciences come with different focuses. In environmental science, as in epidemiology, problems are often framed in a context of crisis. Decisions are imminent, data and understanding are incomplete, and the ramifications of decisions are substantial. This context makes the implications of inferences from data especially poignant. It also makes the claims made by fervent and dedicated authors especially challenging. The full gamut of potential statistical foibles and psychological frailties is on display. In this presentation, I will outline and summarise the kinds of errors of reasoning that are especially prevalent in ecology and conservation biology. I will outline how these things appear to be changing, providing some recent examples. Finally, I will describe some implications of alternative editorial policies.

Some questions:

  • Would it be a good thing to dispense with p-values, either through encouragement or through strict editorial policy?
  • Would it be a good thing to insist on confidence intervals?
  • Should editors of journals in a broad discipline band together and post common editorial policies for statistical inference?
  • Should all papers be reviewed by a professional statistician?
  • If so, which kind?

Slides and Readings: 

*Mark Burgman’s Draft Slides:  “How should applied science journal editors deal with statistical controversies?” (pdf)

*D. Mayo’s Slides: “The Statistics Wars and Their Casualties for Journal Editors: Intellectual Conflicts of Interest: Questions for Burgman” (pdf)

*A paper of mine from the Joint Statistical Meetings, “Rejecting Statistical Significance Tests: Defanging the Arguments”, discusses an episode that is relevant for the general topic of how journal editors should deal with statistical controversies.


Video Links: 

Mark Burgman’s presentation:

D. Mayo’s Casualties:

Please feel free to continue the discussion by posting questions or thoughts in the comments section of this post below.

 


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.

*Meeting 15 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

The Statistics Debate

October 15, 2020: Noon – 2 pm ET
(17-19:00 London Time)

Website: https://www.niss.org/events/statistics-debate
(Online webinar debate, free but must register to attend on website above)

 

Debate Host: Dan Jeske (University of California, Riverside)

Participants:
Jim Berger (Duke University)
Deborah Mayo (Virginia Tech)
David Trafimow (New Mexico State University)

Where do you stand?

  • Given the issues surrounding the misuse and abuse of p-values, do you think p-values should be used?
  • Do you think the use of estimation and confidence intervals eliminates the need for hypothesis tests?
  • Bayes Factors – are you for or against?
  • How should we address the reproducibility crisis?

If you are intrigued by these questions and have an interest in how they might be answered – one way or the other – then this is the event for you!

Want to get a sense of the thinking behind the practicality (or not) of various statistical approaches?  Interested in hearing both sides of the story – during the same session!?

This event will be held in a debate format. The participants will be given selected questions ahead of time, so they have a chance to think about their responses, but this is intended to be much less of a presentation and more of a give and take between the debaters.

So – let’s have fun with this!  The best way to find out what happens is to register and attend!

September 24: Bayes factors from all sides: who’s worried, who’s not, and why (R. Morey)

The second meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

September 24: 15:00-16:45 (London time)
10:00-11:45 am (New York, EDT)

“Bayes Factors from all sides:
who’s worried, who’s not, and why”

Richard Morey


Richard Morey is a Senior Lecturer in the School of Psychology at Cardiff University. In 2008, he earned a PhD in Cognition and Neuroscience and a Masters degree in Statistics from the University of Missouri. He is the author of over 50 articles and book chapters, and in 2011 he was awarded the Netherlands Research Organization Veni Research Talent grant (Innovational Research Incentives Scheme) for work in cognitive psychology. His work spans cognitive science, where he develops and critiques statistical models of cognitive phenomena; statistics, where he is interested in the philosophy of statistical inference and the development of new statistical tools for research use; and the practical side of science, where he is interested in increasing openness in scientific methodology. Morey is the author of the BayesFactor software for Bayesian inference and writes regularly on methodological topics at his blog.

Readings:

R. Morey: “Should we Redefine Statistical Significance?”

Relevant background readings for this meeting, covered in the initial LSE PH500 Phil Stat Seminar, can be found on the Meeting #4 blogpost:
     SIST: Excursion 4 Tour II; Megateam: “Redefine Statistical Significance”

Information and directions for joining our forum are here.

Slides and Video Links:

Morey’s slides “Bayes Factors from all sides: who’s worried, who’s not, and why” are at this link: https://richarddmorey.github.io/TalkPhilStat2020/#1

Video Link to Morey Presentation: https://philstatwars.files.wordpress.com/2020/09/richard_presentation.mp4

Video Link to Discussion of Morey Presentation: https://philstatwars.files.wordpress.com/2020/09/richard_discussion.mp4


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting.

*Meeting 9 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21

August 20 (meeting 8) of the Phil Stat Seminar: Preregistration as a Tool to Evaluate Severity (D. Lakens)


We begin our new Phil Stat forum:

The Statistics Wars
and Their Casualties

August 20: 15:00-16:45 (London); 10:00-11:45 am (New York, EDT)

“Preregistration as a Tool to Evaluate
the Severity of a Test”

Daniël Lakens

Eindhoven University of Technology

Reading (by Lakens)

“The value of preregistration for psychological science: A conceptual analysis”, Japanese Psychological Review 62(3), 221–230, (2019).

Optional editorial: “Pandemic researchers — recruit your own best critics”, Nature 581, p. 121, (2020).

Information and directions for joining our forum are here.


SLIDES & VIDEO LINKS FOR MEETING 8:

Prof. D. Lakens’ slides (PDF)

 

VIDEO LINKS (3 parts):
(Viewing in full screen mode helps with buffering issues.)

Part 1: Mayo’s Introduction & Lakens’ presentation
Part 2: Lakens’ presentation continued
Part 3: Discussion

 

New Phil Stat Forum

The Statistics Wars
and Their Casualties

The workshop (rescheduled for 24-25 September 2021*) is now a monthly remote forum.**

*London School of Economics (CPNSS)

Alexander Bird (King’s College London), Mark Burgman (Imperial College London), Daniele Fanelli (London School of Economics and Political Science), Roman Frigg (London School of Economics and Political Science), David Hand (Imperial College London), Christian Hennig (University of Bologna), Katrin Hohl (City University London), Daniël Lakens (Eindhoven University of Technology), Deborah Mayo (Virginia Tech), Richard Morey (Cardiff University), Stephen Senn (Edinburgh, Scotland), Jon Williamson (University of Kent)*

While the field of statistics has a long history of passionate foundational controversy, the last decade has, in many ways, been the most dramatic. Misuses of statistics, biasing selection effects, and high-powered methods of Big-Data analysis have made it easy to find impressive-looking but spurious results that fail to replicate. As the crisis of replication has spread beyond psychology and the social sciences to biomedicine, genomics and other fields, people are getting serious about reforms. Many are welcome (preregistration, transparency about data, eschewing mechanical uses of statistics); some are quite radical. The experts do not agree on how to restore scientific integrity, and these disagreements reflect philosophical battles, old and new, about the nature of inductive-statistical inference and the roles of probability in statistical inference and modeling. These philosophical issues simmer below the surface in competing views about the causes of problems and potential remedies. If statistical consumers are unaware of the assumptions behind rival evidence-policy reforms, they cannot scrutinize the consequences that affect them (in personalized medicine, psychology, law, and so on). Critically reflecting on proposed reforms and changing standards requires insights from statisticians, philosophers of science, psychologists, journal editors, economists and practitioners from across the natural and social sciences. This workshop will bring together these interdisciplinary insights, from speakers as well as attendees.

Workshop Organizers: D. Mayo and R. Frigg

Logistician (chief logistics and contact person): Jean Miller 

**FORUM: This will be both a continuation of our LSE PH500 Seminar and a link to our delayed (but future) workshop. For information about how to join, see this pdf.

For an explanation about the meaning of statistical crises and their casualties see here.

Past & Future Meetings:

[For information about how to join, see this pdf]

June 25, 2020 (LSE PH500 Bonus Meeting/Phil Stat Wars forum): Professor David Hand (Imperial College, London (mini-bio)) “Trustworthiness of Statistical Analysis”. (Abstract; For slides, recording & readings from this meeting, see this post.)

August 20, 2020 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT): Professor Daniël Lakens (Eindhoven University of Technology (mini-bio)) “Preregistration as a Tool to Evaluate the Severity of a Test”. (For slides, recording & readings from this meeting, see this post.)

September 24, 2020 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT):  Professor Richard Morey (Cardiff University (mini-bio)). “Bayes factors from all sides: who’s worried, who’s not, and why”. (For slides, recording & readings from this meeting, see this post.)

October 15, 2020. Statistics (P-value) Debate. Sponsored by the National Institute of Statistical Science: https://www.niss.org/events/statistics-debate. (For a recording, see this article.)

November 19, 2020 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT): Stephen Senn (Statistical Consultant, Scotland (mini-bio)). “Randomisation and control in the age of coronavirus?” (Abstract; For slides, recording & readings from this meeting, see this post.)

January 7, 2021 (16:00-17:30 (London); 11-12:30 (New York) EDT): D. Mayo (Philosophy, Virginia Tech (mini-bio)) “Putting the Brakes on the Breakthrough, or ‘How I used simple logic to uncover a flaw in a controversial 60-year-old “theorem” in statistical foundations’”. (Abstract).

January 28, 2021 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT): Alexander Bird (Dept. of Philosophy, University of Cambridge (mini-bio)). “How can we improve replicability?” (For slides, recording & readings from this meeting, see this post.)

February 18, 2021 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT): Christian Hennig (Dept. of Statistics, University of Bologna (mini-bio)). “Testing with Models that are Not True” (For slides, recording & readings from this meeting, see this post.)

March 25, 2021 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT): Mark Burgman (Centre for Environmental Policy, Imperial College London (mini-bio)). “How should applied science journal editors deal with statistical controversies?” (For slides, recording & readings from this meeting, see this post.)

April 22, 2021 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT): Daniele Fanelli (Dept. of Methodology, LSE (mini-bio)). “How an information metric could bring truce to the statistics wars”.

May 20, 2021 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT): Jon Williamson (Centre for Reasoning, University of Kent (mini-bio)). “Objective Bayesianism from a philosophical perspective”.


June 24, 2021 (15:00-16:45 (London); 10-11:45 a.m. (New York) EDT): Katrin Hohl (Department of Sociology, City University London (mini-bio)).