In research, the old principle of "publish or perish" has long been replaced by a more quantitative performance metric: publishing in journals with a high impact factor. For example, in the case of the last paper on which I am a coauthor (today papers rarely have a single author; you could not keep up with rankings and funding if you did), we did not look for the journal that would be the best fit for our paper.
Instead, we compiled a list of journals that would potentially publish our paper given its subject matter and the importance of its contribution to the advancement of science. Then we sorted the list by impact factor and submitted the manuscript to the New Journal of Physics, with an impact factor of 3.264. We are obviously very objective, and since our manuscript will go through a second round after revision, we evidently picked the correct journal.
Being at the end of my tour of duty as associate editor for a journal with an impact factor of 0.757, I am very aware of the current bad habit of shopping one's manuscript around: a manuscript is submitted to a top-tier journal like Nature (28.751) or Science (26.372) and then resubmitted again and again to journals with progressively lower impact factors until it is no longer rejected.
Not only does this tax the system by requiring many unnecessary reviews (handling a rejected paper costs about $500, a cost that is not offset by page charges and download fees), but it backfires when a reviewer receives the same manuscript through a different journal. In fact, a good associate editor will seek out the reviewers most familiar with the research described in the paper, so two associate editors at two different journals will draw on the same small pool of potential reviewers. The manuscript then gets rejected as a resubmission of an old manuscript.
It is important to understand that the impact factor is a relative quantity. For example, the Cancer Journal for Clinicians has an impact factor of 69.026, but this does not mean that at 0.757 my journal sucks. On the contrary, compared with other journals in the field it performs quite well: its sister journal from the same publisher has an impact factor of 0.455, and the journal of another society comes out at 0.220. These journals tend not to publish many survey papers, which inflate the impact factor.
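To see why survey papers matter, recall how the number is computed: the impact factor of a journal for a year y is, roughly, the citations received in year y by items the journal published in the two preceding years, divided by the number of citable items it published in those years. In symbols (the notation below is mine, not a standard one):

    \[
      \mathrm{IF}_y \;=\; \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
    \]

where C_y(k) is the number of citations received in year y by items published in year k, and N_k is the number of citable items published in year k. A handful of heavily cited survey papers adds a lot to the numerator while adding little to the denominator, which is why journals that publish many surveys tend to have higher impact factors.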
Message 1: take the impact factor with a grain of salt.
As my wife says, even if you win the rat race, you are still just a rat. In reality, today you do not need an impact factor to make an impact. While we can dream of publishing in Science and winning the Nobel Prize (or the Judd Award for us color scientists), we can now shower our incremental knowledge onto humanity through publishing media like personal web sites and blogs. You are reading this post, which proves that it works!
Granted, a blog is very informal, but there are some easy steps up. The traditional, more formal medium is the technical report, which can be freely downloaded from the institution's web site, like HPL-2008-109 in the case of my most recent collaboration with co-blogger Nathan Moroney.
For those who want to publish something glossier, HP Labs is incubating MagCloud, which will satisfy all your vanity requirements. From this link you can order the same technical report, but this time printed on heavy glossy stock and laid out nicely in InDesign instead of LaTeX.
There is even a cloud service for publishing the slides of your public presentations. For example, we wrote the above report in the Beamer class, which lays out the same content in a format suitable for digital projectors (a projector is called a Beamer in German). You can then make your slides available on SlideShare, or even embed them in your blog, like so: http://www.slideshare.net/berettag/cognitive-aspects-of-color-presentation
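For readers who have not used it, a Beamer source file is ordinary LaTeX; the minimal sketch below shows the general shape (the title and frame contents are placeholders, not the actual report):

    \documentclass{beamer}
    \usetheme{default}

    \title{Placeholder Talk Title}
    \author{Placeholder Author}

    \begin{document}

    % Title slide
    \frame{\titlepage}

    % Each frame environment is rendered as one projected slide
    \begin{frame}{A first slide}
      \begin{itemize}
        \item First point of the talk.
        \item Second point of the talk.
      \end{itemize}
    \end{frame}

    \end{document}

Compiling this with pdflatex produces a PDF that can be uploaded to a service like SlideShare or embedded in a blog.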
Finally, since in these difficult economic times everybody is jumping on LinkedIn, you can use a LinkedIn application to post your slides directly on your profile page.
Message 2: use informal media to give impact to your research.