Review of Brand et al. — MetaPsychology osf.io/6s29n
Overall Comments
This is a very interesting paper and a very interesting method, one that seems easy to integrate into current Bayesian modeling practice.
It took me a while to figure out what you meant by posterior passing. It might be worthwhile explaining the method more simply in the abstract, e.g.: “posterior passing, where the posterior found in a past analysis is used as the Bayesian prior for the subsequent analysis.” This seems simpler to me; others may disagree.
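For concreteness, here is a minimal sketch of what I understand posterior passing to mean, assuming a simple conjugate normal-normal model with known observation variance; the effect size, sample sizes, and number of studies below are illustration values of my own, not the paper's simulation settings.

```python
# Hypothetical sketch: posterior passing with a conjugate normal-normal model.
# Each study estimates a mean effect; the posterior (mu, var) from study k
# becomes the prior for study k + 1. All numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def update(prior_mu, prior_var, data, obs_var):
    """Conjugate update of a normal prior with normal data of known variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + data.sum() / obs_var)
    return post_mu, post_var

true_effect, obs_var = 0.3, 1.0
mu, var = 0.0, 10.0                 # diffuse prior for the first study
for study in range(5):
    data = rng.normal(true_effect, np.sqrt(obs_var), size=30)
    mu, var = update(mu, var, data, obs_var)   # this posterior is then
    # passed on as the prior of the next study in the sequence
    print(f"study {study + 1}: posterior mean {mu:.3f}, sd {np.sqrt(var):.3f}")
```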
Methodological Comments
1) If the simulation was intended to replicate how the various methods are used in practice, why is the study size fixed for the NHST methods? The posterior passing method allows the Bayesian approach to take advantage of prior data, but in NHST prior data is incorporated at least partly via power calculations; the simulation should therefore vary the sample size based on the previously observed effect size (see the sketch below).
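As a concrete illustration of what I mean, each new study's per-group sample size could be set by a power calculation on the previously observed effect size. The sketch below assumes a two-sample t-test design and the conventional 80% power / alpha = .05 targets (my assumptions, not values from the paper), using statsmodels' TTestIndPower.

```python
# Hypothetical sketch: size the next NHST study from the last observed effect.
# Power and alpha targets are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

def next_sample_size(observed_d, power=0.80, alpha=0.05):
    """Per-group n for a two-sample t-test powered on the last observed d."""
    analysis = TTestIndPower()
    n = analysis.solve_power(effect_size=observed_d, power=power,
                             alpha=alpha, alternative="two-sided")
    return int(round(n))

print(next_sample_size(0.4))   # roughly 99 per group for d = 0.4
```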
2) If the intent is for posterior passing to be used in place of meta-analysis, shouldn’t the analysis of the frequentist methods include a meta-analysis of the results from the 80 trials, to compare against the result found with posterior passing? A sketch of such a pooling is given below.
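Even a simple fixed-effect, inverse-variance pooling of the 80 per-trial estimates would serve as a comparison point. The sketch below uses simulated placeholder estimates and standard errors of my own, not the paper's outputs.

```python
# Hypothetical sketch: fixed-effect, inverse-variance meta-analysis of the
# per-trial estimates. Inputs are simulated placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_trials, true_effect = 80, 0.3
effects = rng.normal(true_effect, 0.15, size=n_trials)   # per-trial estimates
ses = np.full(n_trials, 0.15)                            # per-trial standard errors

weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect {pooled:.3f}, 95% CI half-width {1.96 * pooled_se:.3f}")
```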
3) You note the importance of file-drawer bias. Would it be possible to re-run the posterior-passing analysis while only passing posteriors from studies whose results exceed some significance threshold, to account for this? A sketch of what that might look like follows.
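One way to implement such a check, purely as an illustration and not a claim about the paper's simulation code: gate the passing step on a per-study significance test, so that non-significant studies are “file-drawered” and do not update the running prior. The p < .05 one-sample t-test rule below is my assumption.

```python
# Hypothetical sketch: posterior passing with a file-drawer filter.
# A study's posterior is passed on only if that study reached p < .05;
# otherwise the next study reuses the unchanged prior. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def update(prior_mu, prior_var, data, obs_var=1.0):
    """Conjugate normal update, as in the earlier posterior-passing sketch."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + data.sum() / obs_var)
    return post_mu, post_var

mu, var = 0.0, 10.0
for study in range(5):
    data = rng.normal(0.3, 1.0, size=30)
    p = stats.ttest_1samp(data, 0.0).pvalue
    if p < 0.05:                      # "published": posterior gets passed on
        mu, var = update(mu, var, data)
    # else: file-drawered; the prior for the next study is unchanged
    print(f"study {study + 1}: p = {p:.3f}, running prior mean = {mu:.3f}")
```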
General comments on the conceptual introduction and the attempt to improve science overall:
The paper mentions that “the attempts of advocates of Bayesian methods of data analysis to introduce these methods to psychologists… have been without widespread success or response from the field.” To remedy this, some model of how adoption might change is necessary, and that model should also explain why adoption has not happened so far.
One plausible explanation is offered earlier in the paper (around line 65): “due to incentives for high numbers of publications, poorer methods that produced false positives persisted and proliferated.” Another plausible explanation is that newer methods are more complex, and people prefer not to learn new methods.
Ideally, the paper should at least comment on how the proposal would address the presented problems; the answer eludes me. Perhaps adoption of the proposed method needs to become a standard before it can fix the problem of people being incentivized to use simpler, easier-to-game methods, in which case: how and why would people start to use it?
Alternatively, the background should be cut significantly, and the problem presented should be restricted more closely to “what method would reduce false-positive rates and incorporate or replace reproducibility efforts?” (This seems to be what was actually done.)