What both excited and shocked me in his writing is how he mined old knowledge (the KS statistic was introduced in the 1930s, the normalized subtraction method at the beginning of the 1980s, while his article was published in 1996) and developed it into a clear framework in which several algorithms share almost the same mathematical background. Interestingly, without knowing the framework beforehand, I had attempted to use Dmax in my project and obtained initial results from it. However, my collaborator's question, "Has anybody else ever used this method?", left me speechless, and I did not spend more time digging into the existing literature, including the KS statistic, which I had heard of dozens of times without ever thoroughly understanding its essence.
So later in my project I turned back to the most intuitive integration method (also described in the manuscript) and delivered a set of results based on it. Yet it kept troubling me whether my first attempts, those using the Dmax method (though I did not know its name), could hold up, and how they could be validated mathematically. Until today. So how did Bruce Bagwell succeed where my attempts fell short?
- Mining existing knowledge. The methodology Bruce used was almost the same as mine, at least at the start of the deduction: take an arbitrary example and elaborate the algorithms with it. However, Bruce is much better at mining existing knowledge, which gave him easy access to the KS statistic and the normalized subtraction method. Though I had also considered one of the advanced algorithms he introduced from a pragmatic point of view, I failed to tie it to existing mathematical and statistical models that would have given it authority.
- Following from the first point, I also could not establish a clear proof of my hypothesis, something Bruce managed very successfully. It reveals a weakness in how I learn: not thorough enough. Though I had heard of the KS statistic many times, I had never tried to figure it out carefully: what does it mean? Why is it one of the most widely used nonparametric tests? What statistic does it actually compute?
- A lack of sensitivity in formulating my ideas and getting more out of the formulation. The derivation of the enhanced Dmax method, as well as that of the enhanced normalized subtraction method, benefited greatly from the fact that Bruce formulated the algorithms cleanly and was alert to the points where improvements could be made.
- Finally, he presented an overview in which synthetic data (generated in Access!) were analyzed with all of the algorithms: a very intuitive, high-level comparison of the methods he had elaborated. It impressed me a lot, and a similar approach should be highlighted in my own work.
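To make concrete what Dmax actually measures: it is simply the two-sample Kolmogorov-Smirnov statistic, the largest vertical gap between the empirical CDFs of two samples (e.g. a control and a stained population). A minimal Python sketch; the sample data below are made up for illustration and are not from the manuscript:

```python
import bisect

def empirical_cdf(sample, grid):
    """Fraction of `sample` values <= each point of `grid`."""
    s = sorted(sample)
    n = len(s)
    return [bisect.bisect_right(s, x) / n for x in grid]

def ks_dmax(sample_a, sample_b):
    """Two-sample KS statistic: sup over x of |F_a(x) - F_b(x)|.
    The supremum is attained at one of the observed values, so
    evaluating on the union of both samples is sufficient."""
    grid = sorted(set(sample_a) | set(sample_b))
    cdf_a = empirical_cdf(sample_a, grid)
    cdf_b = empirical_cdf(sample_b, grid)
    return max(abs(a - b) for a, b in zip(cdf_a, cdf_b))

# Illustrative fluorescence-like measurements (arbitrary units).
control = [1.0, 1.2, 1.5, 2.0, 2.2, 2.8]
treated = [1.8, 2.1, 2.5, 2.9, 3.3, 3.6]
print(ks_dmax(control, treated))  # -> 0.5
```

The appeal of Dmax as a test statistic is visible even in this toy case: it needs no assumption about the shape of either distribution, only the ordering of the pooled observations.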
Honestly, the impact of this manuscript is the greatest of anything I have read in years: it warns me against every kind of laziness, and it teaches me much more than immunofluorescence analysis alone. Formulate and visualize your ideas, and do it well; you will benefit greatly. Never stop thinking, mining, and learning.