A Network-Based Scientific Evaluation Approach
Here is a scheme that requires little beyond some formalization, some thinking, and Google Scholar or similar technology.
With the internet, anybody can publish anything. Whether anybody else will actually read it is another question entirely (but the same is true of a large number of scientific publications). So any researcher is free to publish papers, reports, and articles that are readily available to the scientific community at large. In a scheme reminiscent of the good old days of personal correspondence, each researcher should then convince other researchers to read their reports and comment on them, write about them, and reference them in their own writings.
This solves the question of the number of publications: do not limit it; encourage proliferation (it is very low cost). If something is really good, it will gain acceptance through readership and references. This is the model followed by blogs, and it also abolishes the somewhat artificial boundaries delimiting "fields." This model does not preclude journals and conference proceedings as they exist now: it only suggests that publication in this or that venue cannot, and should not, be taken at face value, and that counting publications is no longer an acceptable metric. The key structure here is the network of scientific relationships that researchers build. This is already somewhat the case in practice, but it is currently hindered by field and subfield boundaries. Why not use this network to compute scientific impact?
Observing that the significance of a scientific result is correlated with its relevance to a larger (scientific) audience suggests a new metric for evaluating the scientific merit of publications. Intuitively, the metric is the dispersion, or reach, of the network of references that a particular scientific result generates. For example, an incremental result on some very specific technique, although worthwhile within a small group, will have little impact beyond that limited community. On the other hand, a result suggesting a deep paradigm shift that affects similar categories of problems across several fields will gain links from very dispersed sources, as the sketch below illustrates.
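As an illustration only, one could approximate this "reach" by counting how many distinct fields cite a result, directly or transitively. The Python sketch below does this with networkx; the toy graph, the field labels, and the scoring rule are all invented assumptions, not part of the proposal.

# A toy "dispersion" computation: how many distinct fields cite a paper,
# directly or transitively. Graph and field labels are invented examples.
import networkx as nx

def dispersion_score(citations: nx.DiGraph, paper: str) -> int:
    """Number of distinct fields from which `paper` receives citations."""
    # nx.ancestors returns every node with a path to `paper`, i.e. every
    # paper that cites it directly or through a chain of citations.
    citing = nx.ancestors(citations, paper)
    return len({citations.nodes[p]["field"] for p in citing})

# An edge A -> B means "A cites B".
g = nx.DiGraph()
g.add_nodes_from([
    ("deep-result", {"field": "physics"}),
    ("followup-bio", {"field": "biology"}),
    ("followup-cs", {"field": "cs"}),
    ("niche-result", {"field": "physics"}),
    ("niche-followup", {"field": "physics"}),
])
g.add_edges_from([
    ("followup-bio", "deep-result"),
    ("followup-cs", "deep-result"),
    ("niche-followup", "niche-result"),
])

print(dispersion_score(g, "deep-result"))   # 2: reached two other fields
print(dispersion_score(g, "niche-result"))  # 1: stayed within its own field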
The technology required seems very similar to that developed for search on the internet, and tools that are already emerging could be put to good use in building such a system. With adequate rules, this approach might benefit both scientific dissemination and evaluation; abandoning bin-counting of publications cannot be bad. For example, everyone would be officially free to write as many versions of a paper explaining the same idea as it takes to communicate it to as many people as possible, without any fear of "multiple counting." This would maximize the dissemination of scientific knowledge where appropriate while discouraging unnecessary over-publication. It would also make so-called cross-disciplinary research easier to evaluate (and in fact encourage it).
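To make the search-engine analogy concrete, here is one plausible instantiation, again an assumption rather than a prescription: running PageRank over the same kind of citation graph with networkx.

# The toy graph from above, scored with PageRank. A citation edge already
# points from the citing paper to the cited one, so rank flows to cited
# papers just as it flows to linked-to web pages.
import networkx as nx

g = nx.DiGraph([
    ("followup-bio", "deep-result"),
    ("followup-cs", "deep-result"),
    ("niche-followup", "niche-result"),
])

scores = nx.pagerank(g, alpha=0.85)
for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")

Note that plain PageRank measures the volume of endorsement rather than its dispersion; a working system would presumably need to combine both signals.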
Note: the proposed scheme would not prevent self-sustained communities from arising and thriving. In a well-designed system, however, their reach would remain limited.