October 15, 2006

Reality is Created in the Brain

"In any good game the great graphics are happening in your imagination and not on the screen."

Sid Meier, interviewed by Richard Rouse III in his book Game Design: Theory and Practice, second edition, p. 35.

August 24, 2006

Objectives

"Ambition is the last refuge of failure." -- Oscar Wilde

July 28, 2006

Algorithms and Interaction

"Algorithms are 'sales contracts' that deliver an output in exchange for an input, while objects are ongoing 'marriage contracts.' An object's contract with its clients specifies its behavior for all contingencies of interaction (in sickness and in health) over the lifetime of the object (till death do us part)."

Peter Wegner, "Why interaction is more powerful than algorithms," Communications of the ACM, 40(5):80-91, May 1997.

July 12, 2006

Human thought is not like logic

"Many scientists who study artificial intelligence use the mathematics of formal logics--the predicate calculus--as their major tool to simulate thought.
But human thought--and its close relatives, problem solving and planning--seem more rooted in past experience than in logical deduction. Mental life is not neat and orderly. It does not proceed smoothly and gracefully in neat, logical form. Instead it hops, skips and jumps its way from idea to idea, tying together things that have no business being put together; forming new creative leaps, new insights and concepts. Human thought is not like logic; it is fundamentally different in kind and in spirit. The difference is neither worse nor better. But it is the difference that leads to creative discovery and to great robustness of behavior."

Donald A. Norman, The Design of Everyday Things, Basic Books, 1988. (p. 115 in 2002 paperback edition; highlights added for this post)

June 28, 2006

Teaching and coffee

"As with the tantalizing aroma of freshly ground coffee beans, the implied promises may suggest more than can be delivered."

Dan Pratt, "Personal Philosophies of Teaching: A False Promise?" in ACADEME, American Association of University Professors, 91(1), 32-36, January-February, 2005.

May 05, 2006

Computer Science Education

"[...] the great advances in physics, chemistry, biology and the other hard sciences didn't come about because they were wildly promoted, they came about because, as Feynman said, of the joy of finding things out. There is incredible underlying beauty in software, and teaching that is something that I think universities could do to encourage this field [...]."

Grady Booch (Computer Science Education post on IBM developerWorks)

March 04, 2006

Accountability in peer-review systems

In most peer-review systems, the identity of the reviewers is kept secret. This is supposed to protect the reviewers from pressure or retaliation. Double-blind peer review, in which the identity of the authors is also kept from the reviewers, is a popular variant. The justification here is fairness, though in practice true anonymity is difficult to ensure.

In any case, the authors of a conference paper or journal article sign their names to their work. The reviewers, on the other hand, who have a major influence on whether the work will be published, are never held publicly accountable for their reviews. While many reviewers take their role very seriously, many more hide behind their anonymity to produce inappropriate output. Worse, the program chairs and editors who rely on the reviewers' work have no real incentive system to correct the situation.

The quality of reviews could be dramatically improved by bringing the activity into the mainstream of research output, and counting it as such in performance evaluations. For example, in the free publication scheme I proposed earlier, reviews could be published in the same structure and linked to the reviewed article (either by text analysis, or with an explicit typed link). The review article would be signed by the reviewer and be part of his/her publications. This would also allow reviewing of reviews, because, just as an author benefits from reviews to improve the presentation of their research, a reviewer may improve his/her reviewing skills through peer review...
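To make the idea of signed, typed links a little more concrete, here is a minimal sketch of what such a structure might look like. The names (Article, Review, link_type) are purely illustrative assumptions for this post, not an existing system or format:

```python
# Illustrative sketch: reviews published as signed, first-class documents
# linked to the article they evaluate by an explicit typed link.
# All class and field names here are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class Article:
    article_id: str
    authors: List[str]
    title: str


@dataclass
class Review:
    review_id: str
    reviewer: str                 # the review is signed, not anonymous
    target_id: str                # explicit typed link to the reviewed item
    link_type: str = "reviews"    # e.g. "reviews" or "reviews-a-review"
    text: str = ""


# A review of a review uses the same structure, applied recursively.
article = Article("a1", ["A. Author"], "Some result")
review = Review("r1", "R. Reviewer", target_id="a1", text="Clear and sound.")
meta_review = Review("r2", "S. Reviewer", target_id="r1",
                     link_type="reviews-a-review", text="Constructive review.")
```

Because each review is itself a signed publication pointing at its target, counting and evaluating reviews would use exactly the same machinery as counting and evaluating articles.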

February 25, 2006

Learner-centered

Here is a personal thought on what "learner-centered" could mean: learning is so much easier and more natural than teaching that teachers should focus on developing and nurturing their students' interest in learning.

February 22, 2006

Theory and Practice

"In theory, there is no difference between theory and practice. But, in practice, there is."

Jan L.A. van de Snepscheut

February 05, 2006

Pleasure

Pleasure is a fine balance between met expectations and surprise.

January 23, 2006

A Network-Based Scientific Evaluation Approach

Here is a scheme that requires little beyond some formalization, thinking, and Google Scholar or similar technology.

With the internet, anybody can publish anything. Whether anybody else is actually going to read it is another question entirely (but the same is true of a large number of scientific publications). So any researcher is free to publish papers, reports, and articles that are readily available to the scientific community at large. In a scheme reminiscent of the good old days of personal correspondence, each researcher should then convince other researchers to read their reports and comment on them, write about them, and reference them in their own writings.

This solves the question of the number of publications: do not limit it; encourage proliferation (it is very low cost). If something is really good, it will gain acceptance through readership and references. This is the model followed by blogs, and it also abolishes the somewhat artificial boundaries delimiting "fields." This model does not preclude journals and conference proceedings as they exist now: it only suggests that publication in this or that context cannot, and should not, be taken at face value, and that counting publications is no longer an acceptable metric. The key structure here is the network of scientific relationships that researchers can build. This is already somewhat the case in practice, but it is currently hindered by field and subfield boundaries. Why not use the idea to compute scientific impact?

Observing that the significance of a scientific result is correlated with its relevance to a larger (scientific) audience suggests a new metric for evaluating the scientific merit of publications. Intuitively, the metric is the dispersion, or reach, of the network of references that a particular scientific result generates. For example, an incremental result on some very specific technique, although worthwhile within a small group, will have little impact beyond this limited community. On the other hand, a result suggesting a deep paradigm shift that affects similar categories of problems across several fields will gain links from very dispersed sources.
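As a rough sketch of how such a dispersion metric might be computed, suppose each citing publication carries a field or community label (a simplifying assumption; the citation graph, labels, and function below are hypothetical, not part of the proposal itself). The reach of a result could then be approximated by counting how many distinct fields appear among the papers that cite it, directly or transitively:

```python
# Hypothetical sketch: approximate the "reach" of a result as the number of
# distinct fields found among its direct and transitive citers, up to a
# chosen depth. The citation graph and field labels are assumed inputs.
from collections import deque
from typing import Dict, List, Set


def reach(paper: str,
          cited_by: Dict[str, List[str]],
          field_of: Dict[str, str],
          max_depth: int = 2) -> int:
    """Count distinct fields among papers citing `paper` within `max_depth` hops."""
    seen: Set[str] = {paper}
    fields: Set[str] = set()
    queue = deque([(paper, 0)])
    while queue:
        current, depth = queue.popleft()
        if depth == max_depth:
            continue
        for citer in cited_by.get(current, []):
            if citer not in seen:
                seen.add(citer)
                fields.add(field_of.get(citer, "unknown"))
                queue.append((citer, depth + 1))
    return len(fields)


# Toy example: a niche result cited only within one community versus a
# result picked up across several fields.
cited_by = {"niche": ["n1", "n2"], "broad": ["b1", "b2", "b3"], "b1": ["b4"]}
field_of = {"n1": "X", "n2": "X", "b1": "Y", "b2": "Z", "b3": "W", "b4": "V"}
print(reach("niche", cited_by, field_of))  # 1 field reached
print(reach("broad", cited_by, field_of))  # 4 fields reached
```

Weighting citers by graph distance, or propagating scores in a PageRank-like fashion, would be a natural refinement, in line with the analogy to web search drawn below.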

The technology seems very similar to that developed for search on the internet, and tools that are already emerging could be put to good use in developing such a system. With adequate rules, this approach might be beneficial to scientific dissemination and evaluation. Abandoning bin-counting (of publications) cannot be bad. For example, everyone would be officially free to write as many versions of a paper explaining the same idea as necessary to communicate it to as many people as possible (without any fear of "multiple counting"). This would both maximize the dissemination of scientific knowledge where appropriate, and discourage unnecessary over-publication. It would also make so-called cross-disciplinary research easier to evaluate (and in fact encourage it).

Note: the proposed scheme would not prevent self-sustained communities from arising and thriving. In a well-designed system, however, their reach would remain limited.