April 28, 2009

Time in Computation

This is the last post in the series Brain - Time - Music - Computing.
Previous: Time and the Brain

The notion of computation was explicitly created for use outside of the flow of MWT. Models of computation provide primitives for describing processes in a purely timeless context (computability), or in an artificial and abstract flow of time marked by computational operations (algorithmic complexity). The resulting abstract manifestation of time in computing is enforced as a strong invariant, universally and implicitly relied upon.

This state of affairs has serious implications for the design and use of computing artifacts in MW. Music exists at the confluence of creation, representation and performance, where the inherent limitations of time representations in computation become evident. Music systems fall into two broad categories: online or real-time systems, and off-line systems. The significance of these categories in the context of interaction is explored in [1]. Henkjan Honing more specifically analyzes and classifies the issues related to the representation of time in music computing [2].

Process-oriented systems, which address real-time needs such as sound synthesis or performance (sequencing), adopt either tacit or implicit representations of time. Tacit representations restrict the notion of time to the “now,” the flow of MWT. Implicit representations tie the flow of time in the system to that of MWT in a direct, fixed and inescapable relation. For example, the Max paradigm in its various forms, e.g. Max/MSP and Pure Data, embraces this approach. Composition and editing systems, on the other hand, manipulate static representations of music, with explicit modeling of time relationships in their mathematical abstraction, outside of any flow of time. OpenMusic is a representative member of this category. The dataflow model of OpenMusic maps to the functional programming model, and does not provide for the creation and manipulation of a flow of time in relation to MWT.
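The contrast between the two categories can be sketched in a few lines of Python. This is only an illustration of the two attitudes toward time, not how Max/MSP or OpenMusic actually work: in the explicit style, time is just data that can be manipulated outside any flow; in the implicit style, the computation is chained to the wall clock.

```python
import time

# Explicit (composition-style): time is data, manipulated outside any flow.
score = [(0.0, "C4"), (0.5, "E4"), (1.0, "G4")]  # (onset in beats, pitch)
stretched = [(onset * 2, pitch) for onset, pitch in score]  # a timeless transform

# Implicit (process-style): the program's time IS the flow of MWT.
def play(score, beat_seconds=0.05):
    start = time.monotonic()
    for onset, pitch in score:
        # Block until the wall clock reaches the event's onset.
        while time.monotonic() - start < onset * beat_seconds:
            time.sleep(0.001)
        print(pitch)

play(score)
```

The first form can be transposed, stretched, or analyzed at leisure; the second only exists while it runs.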

These orthogonal representations of time, and the computational models they support, serve their particular needs well. However, the dichotomy they introduce with respect to time representations appears irreconcilable. Puckette, for example, denounces the divide between processing and representation paradigms as a major obstacle to creating a comprehensive system for music making [3]. Musical improvisation requires a seamless blend of composition (representation) and real-time performance (processing): the ability to move freely in and out of flows of PT and MWT. The design of a real-time interactive musical improvisation system, such as the multimodal interactive improvisation system MIMI, requires bridging the representation/processing gap. Past efforts in this area have leveraged existing programming models to produce ad hoc solutions; MIMI benefits from a new computational paradigm that offers a general solution.

The Hermes/dl design language

Hermes/dl adopts an asynchronous concurrent computation model. Its data model distinguishes volatile data, in a flow of time, from persistent data outside of any flow of time. As a result, the language primitives afford the modeling of concurrent flows of time and computation, and of processes that take information in and out of these flows.

Arrangements of language primitives consistently associated with specific functionalities or behaviors constitute patterns of the language; they can be used as models or guides for system design. Patterns of primitives and interactions that transfer information from a flow of time into persistent form are instances of the Aggregator pattern; those that transfer information from persistent form into a specific flow of time are instances of the Sampler pattern. The next section introduces the Hermes/dl design language in the wider context of the motivations that led to its creation.
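As a rough illustration of the two patterns, here is a plain-Python sketch (the class names follow the patterns, but these are not Hermes/dl primitives): an Aggregator moves timestamped, volatile events into a persistent record, and a Sampler re-creates the recorded temporal pattern inside a new flow of time.

```python
import time

class Aggregator:
    """Collects volatile (timestamped) events into a persistent record."""
    def __init__(self):
        self.record = []  # persistent: exists outside any flow of time
    def receive(self, timestamp, event):
        self.record.append((timestamp, event))

class Sampler:
    """Replays persistent data into a specific flow of time."""
    def __init__(self, record):
        self.record = sorted(record)
    def replay(self, emit, speed=1.0):
        start = time.monotonic()
        t0 = self.record[0][0] if self.record else 0.0
        for timestamp, event in self.record:
            # Re-create the recorded temporal pattern in the current flow,
            # possibly at a different rate than the original.
            due = (timestamp - t0) / speed
            delay = due - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            emit(event)

agg = Aggregator()
agg.receive(0.00, "C4")
agg.receive(0.01, "E4")
out = []
Sampler(agg.record).replay(out.append, speed=2.0)  # replay twice as fast
```

Note that the same persistent record can feed many Samplers, each defining its own flow of time in relation to MWT.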

[1] Alexandre R.J. Francois and Elaine Chew. An architectural framework for interactive music systems. In Proceedings of the International Conference on New Interfaces for Musical Expression, Paris, France, June 2006.
[2] Henkjan Honing. Issues in the representation of time and structure in music. Contemporary Music Review, 9:221–239, 1993.
[3] Miller S. Puckette. A divide between ‘compositional’ and ‘performative’ aspects of Pd. In Proceedings of the First International Pd Convention, Graz, Austria, 2004.

April 18, 2009

Time and the Brain

This is part 4 of 5 in the series Brain - Time - Music - Computing.
Previous: Time and Perception
Next: Time in Computation

If the flow of MWT is immutable, the human brain hardly perceives it as such. Gooddy, in his book Time and the Nervous System [3], distinguishes between Personal Time (PT) and Government Time (GT). The former marks the flow of time (MWT) as perceived by the individual brain; the latter refers to the passage of time as measured by a collectively recognized reference clock, from the brain’s perspective an external synchronization device to MWT. Alterations in the perception of time, of the flow of one’s PT, occur within the perceiving agent’s mind.

Fraisse emphasizes the necessity to separate the perception of duration, which takes place in the psychological present, from the estimation of duration, which “takes place when memory is used either to associate a moment in the past with a moment in the present or to link two past events” [2]. Memory frees the mind from the continuous, irreversible flow of MWT. The mind can manipulate memories, events taken out of MWT, and place them in flows of time that extend into the past and the future (including the flow of MWT). Gooddy captures this ability in his observation that “the [human] brain is the place or mechanism or medium by which time is converted into space and space into time.”

Musical tasks constantly engage the ability of the brain to move information in and out of the flows of time (going back and forth between time and space): memorizing a piece of music, performing it from memory, writing a piece of music as a score, performing a piece of music from a score. A musical notation system affords spatial representation of the perception of time through music; a performance is the re-creation of this perception from its spatial representation. The goal of a music notation system that effectively supports total communication between the composer and the performer remains elusive. Only relatively recently has technology afforded exact recording and re-creation of performances. But a recording captures only one single, permanently frozen, physical manifestation of the musical material, with no place for re-creation or re-interpretation. The recording is supernaturally faithful to the performance, but does not explicitly encode, nor allow the full recovery of, the depth of intent of the source material. Furthermore, the exactness of the reproduction is not necessarily a significant, or desirable, feature from the listener’s point of view. The human brain is approximate; memory is event-based and selective. Each experience, each performance, is different, and brings the prospect of renewed excitement to the listening brain. Successive re-experiences of a recording only remain interesting to a listener as long as she herself keeps changing from the experience.

Considerations about the nature and meaning of notation extend to the performing arts, such as dance and theater, and beyond. The thinking brain in MW finds itself in a constant struggle between the desire to stop time and the necessity to live (and experience) in the present. Bamberger has studied extensively the evolution of the spatialization (notation) of temporal patterns (rhythms) during child development. She reflects [1]:
We necessarily experience the world in and through time. How and why, then, do we step off these temporal action paths to selectively and purposefully interrupt, stop, and contain the natural passage of continuous actions/events?

How do we transform the elusiveness of actions that take place continuously through time, into representations that hold still to be looked at and upon which to reflect?

Perhaps the very notion of complexity lies in engaging the resilient paradoxes that emerge when we confront the implications of our static, discrete symbolic conventions with our immediate experience of always “going on.”
The ability to control the flow of one’s PT allows the brain to take temporal experiences out of the immutable flow of MWT, contemplate or otherwise manipulate these experiences outside of MWT, and later re-create them in the flow of MWT. This ability enables individuals to adapt, learn, generalize, create. Without it, symbolic thought would be impossible, or at least completely detached from, and therefore irrelevant to, life in MW.

[1] Jeanne Bamberger. Evolving meanings: Revisiting Luria and Vygotsky. In G. O. Mazur, editor, Thirty Year Commemoration to the Life of A. R. Luria. Semenenko Foundation, New York, 2008.
[2] Paul Fraisse. Perception and estimation of time. Annual Review of Psychology, 35:1–36, 1984.
[3] William Gooddy. Time and the Nervous System. Praeger Pub, 1988.

April 08, 2009

Time and Perception

This is part 3 of 5 in the series Brain - Time - Music - Computing.
Previous: The Brain in Middle World
Next: Time and the Brain

Scales of time in MW play a crucial role in the brain’s recognition and interpretation of temporal patterns as symbolic relationships such as causality and synchrony.

Events that are perceived as shortly following each other in time tend to be interpreted in a causality relationship. Brains learn the range of latencies that may separate an action and the perception of its effect in MW. The quantitative characterization of acceptable latencies is crucial to the understanding of interaction. Human-computer interaction researchers [4][1] categorize acceptable time delays into three orders of magnitude, which coincide with Newell’s cognitive band in his time scale of human actions [5]: the 0.1s (100ms) scale characterizes perceptual processing, perceived as instantaneous reaction; the 1s scale characterizes immediate response, the continuous flow of thought (consistent with the notion of psychological present [3]); and the 10s scale characterizes unit tasks, continued and sustained attention. These orders of magnitude define relatively narrow ranges of applicability for different levels of cognitive activity, especially when compared with the longer time ranges that characterize other activities in MW: the rational band (100-10,000s, i.e. minutes to hours), the social band (100,000-10,000,000s, i.e. days to months) and the historical band (100,000,000-10,000,000,000s, i.e. years to millennia).
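Since the bands are defined by orders of magnitude rather than hard boundaries, a minimal sketch of the categorization can simply round the base-10 logarithm of a delay (the cutoffs this rounding implies are illustrative, not part of Newell's taxonomy):

```python
import math

# Orders of magnitude from Newell's cognitive band; labels paraphrase
# the scales discussed above.
BANDS = {
    -1: "perceptual processing (~0.1 s): perceived as instantaneous",
    0: "immediate response (~1 s): continuous flow of thought",
    1: "unit task (~10 s): continued, sustained attention",
}

def band(delay_seconds):
    """Classify a positive delay by its nearest order of magnitude."""
    order = round(math.log10(delay_seconds))
    return BANDS.get(order, "beyond the cognitive band")

band(0.08)  # perceptual processing
band(12)    # unit task
band(500)   # beyond the cognitive band (rational band and above)
```

The same rounding extends naturally to the rational, social and historical bands by adding entries for orders 2 through 10.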

Ensemble musical performance requires both interaction and synchronization. Synchrony, defined as the exact co-occurrence of several observations, is a perceptual abstraction. The different, and finite, speeds at which light and sound travel imply that events whose percepts occur in absolute synchrony would hardly ever have occurred synchronously in MW, or would never have occurred naturally at all. For example, explosions in movies usually unfold as if sound and light traveled at the same speed in air. In MW, sound travels quite slowly, about 33cm in 1ms, whereas light travels so fast that its travel times are negligible. Events that do occur simultaneously cause in an observer a variety of percepts that are not received synchronously, but that exhibit specific, and predictable, temporal patterns, which brains learn to understand as signs of a common cause, and thus of a synchronous origin in MWT. This is especially relevant in music making (and enjoying), due to the relatively low speed of sound in air.
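The arithmetic behind these figures is simple enough to check; a short sketch, with the speed of sound chosen to match the "33cm in 1ms" rule of thumb (air at roughly 20 °C gives closer to 343 m/s):

```python
# ~330 m/s matches the post's "33 cm in 1 ms" rule of thumb.
SPEED_OF_SOUND_M_S = 330.0

def acoustic_latency_ms(distance_m):
    """Milliseconds for sound to cover distance_m in air."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

acoustic_latency_ms(0.33)  # 1 ms: the rule of thumb itself
acoustic_latency_ms(10.0)  # ~30 ms across a mid-sized stage
```

Light covers those same 10 meters in about 33 nanoseconds, six orders of magnitude faster, which is why only the acoustic delay matters perceptually.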

Fraisse offers a comprehensive analysis of psychophysical experiments that aim to characterize the perceptual limits of time properties: event succession and duration [3]. Exact numbers depend on many factors, including task modality, and stimulus type, intensity, and duration. Pierce places the time resolution of the ear on the order of 1ms [6], which leads him to question whether humans’ acute time resolution is actually of any use in music. Certainly the brain perceives as simultaneous auditory events that the ear detects as distinct. Pierce cites, among others, an experiment by Rasch, which revealed that synchronization in performed small-ensemble music is only accurate to 30-50ms [7]. Incidentally, this range also characterizes the travel time of sound between opposite ends of an orchestral stage. Each individual musician in the orchestra experiences the performance in a necessarily specific and unique way. Yet the musicians are collectively capable, under the direction of the conductor, of producing an expert and consistent ensemble rendering of a musical piece. Recent experiments by Chew et al. [2] on sound latency in ensemble performance have shown that, under favorable conditions, professional musicians can deliver a meaningful musical performance while experiencing delays as high as 65ms. This number recalls the experimental fact, also reported by Pierce, that humans perceive no echo when a strong reflected sound occurs within 60-70ms of the direct sound.

These experimental results illustrate the impressive plasticity of brain processes with respect to the perception of MWT. Abstract temporal concepts, such as synchrony, bear limited relevance to the modeling of the MW perceptual and cognitive phenomena that inspired their invention.

[1] Stuart K. Card, George G. Robertson, and Jock D. Mackinlay. The information visualizer, an information workspace. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), pages 181–186, 1991.
[2] E. Chew, A.A. Sawchuk, R. Zimmermann, V. Stoyanova, I. Tosheff, C. Kyriakakis, C. Papadopoulos, A.R.J. Francois, and A. Volk. Distributed Immersive Performance. In Proceedings of the 2004 Annual NASM Meeting, San Diego, CA, USA, November 2004.
[3] Paul Fraisse. Perception and estimation of time. Annual Review of Psychology, 35:1–36, 1984.
[4] Robert B. Miller. Response time in man-computer conversational transactions. In Proceedings of the AFIPS Fall Joint Computer Conference, volume 33, pages 267–277, 1968.
[5] Allen Newell. Unified Theories of Cognition. Harvard University Press, 1990.
[6] John R. Pierce. The nature of musical sound. In Diana Deutsch, editor, The Psychology of Music, Second Edition (Cognition and Perception), pages 1–24. Academic Press, 1998.
[7] R. A. Rasch. Synchronization in performed ensemble music. Acustica, 43:121–131, 1979.