The Theory That Would Not Die by Sharon Bertsch McGrayne is an interesting book on the history of Bayesian logic and its contest with frequentist statistics.
I suspect Bayesian purists would consider the book too “light”, while novices will not gain proper insight into Bayesian statistics from it. The book’s strength lies in providing a historical narrative of how Bayes/Laplace’s logic has been used to make statistical inferences when, as is most often the case, there are few or no equivalent observations.
Many of the historical applications were new to me. I found the focus on code-breaking a little tiresome, as the book offers no real insight into the methodologies involved. The chapter on military accidents that could have resulted in nuclear detonations, and the calculation of the probability of such an event, is impressive. It seems the nuclear disarmament activists were right all along.
A number of famous quotes are worth repeating. Fisher, criticising equal priors, is credited with: “Thinking that you don’t know and thinking that the probabilities of all the possibilities are equal is not the same thing”. And Volinsky, defending Bayesian Model Averaging studies, makes the following statement: “when two models that are not highly correlated are combined in a smart way, the combination often does better than either individual model”. Also interesting to me is the admission from the noted psychologist Amos Tversky that he used frequentist methods as a ‘matter of expedience’, since he could not have published his research otherwise.
I found that this book can help me provide a good history of decision trees, and it might be useful for a course I teach on our MBA programme next year.