Quantifying Independently Reproducible Machine Learning - Edward Raff, writing at The Gradient

This resource first appeared in issue #12 on 28 Feb 2020 and has tags Technical Leadership: Reproducibility, Technical Leadership: Software Development

We worry a lot about replication and reproducibility in research computing. In this article, the author describes attempting to independently replicate the results and basic methods of 255 (!!!) ML papers. Crucial here is independent replication: it's not enough to just run the released code; the method has to be reimplemented from the paper itself. He was successful 162 times.

That’s enough papers to do some quantitative analysis, and it’s interesting which aspects of the work were and weren’t correlated with successful independent replication. A clearly written paper and authors who answered emails mattered much more than published source code or worked sample problems.
