March 25 “How should applied science journal editors deal with statistical controversies?” (Mark Burgman)
The seventh meeting of our Phil Stat Forum*:
The Statistics Wars
and Their Casualties
25 March, 2021
TIME: 15:00-16:45 (London); 11:00-12:45 (New York, NOTE TIME CHANGE)
For information about the Phil Stat Wars forum and how to join, click on this link.
“How should applied science journal editors deal with statistical controversies?”
Mark Burgman is the Director of the Centre for Environmental Policy at Imperial College London, where he holds the Chair in Risk Analysis & Environmental Policy, and Editor-in-Chief of the journal Conservation Biology. Previously, he was Adrienne Clarke Chair of Botany at the University of Melbourne, Australia. He works on expert judgement, ecological modelling, conservation biology and risk assessment. He has developed models for biosecurity, medicine regulation, marine fisheries, forestry, irrigation, electrical power utilities, mining, and national park planning. He received a BSc from the University of New South Wales (1974), an MSc from Macquarie University, Sydney (1981), and a PhD from the State University of New York at Stony Brook (1987). He worked as a consultant ecologist and research scientist in Australia, the United States and Switzerland during the 1980s before joining the University of Melbourne in 1990. He joined CEP in February 2017. He has published over 250 refereed papers and book chapters and seven authored books. He was elected to the Australian Academy of Science in 2006.
Abstract: Applied sciences have different focuses. In environmental science, as in epidemiology, problems are often framed in a context of crisis: decisions are imminent, data and understanding are incomplete, and the ramifications of decisions are substantial. This context makes the implications of inferences from data especially poignant. It also makes the claims made by fervent and dedicated authors especially challenging. The full gamut of potential statistical foibles and psychological frailties is on display. In this presentation, I will outline and summarise the kinds of errors of reasoning that are especially prevalent in ecology and conservation biology. I will describe how these things appear to be changing, providing some recent examples. Finally, I will discuss some implications of alternative editorial policies.
- Would it be a good thing to dispense with p-values, either through encouragement or through strict editorial policy?
- Would it be a good thing to insist on confidence intervals?
- Should editors of journals in a broad discipline band together and post common editorial policies for statistical inference?
- Should all papers be reviewed by a professional statistician?
- If so, which kind?
Slides and Readings:
*Mark Burgman’s Draft Slides: “How should applied science journal editors deal with statistical controversies?” (pdf)
*D. Mayo’s Slides: “The Statistics Wars and Their Casualties for Journal Editors: Intellectual Conflicts of Interest: Questions for Burgman” (pdf)
*A paper of mine from the Joint Statistical Meetings, “Rejecting Statistical Significance Tests: Defanging the Arguments”, discusses an episode that is relevant for the general topic of how journal editors should deal with statistical controversies.
Mark Burgman’s presentation:
- Link to paste into browser: https://philstatwars.files.wordpress.com/2021/03/burgman-main-presentation_v2.mp4
D. Mayo’s Casualties:
- Link to paste into browser: https://philstatwars.files.wordpress.com/2021/03/burgman_mayo-casualities-and-reply.mp4
Please feel free to continue the discussion by posting questions or thoughts in the comments section of this post below.
Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting. Please check back closer to the meeting day.
*Meeting 15 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21.