Research Goals in Digital Music: A Ten Year View
High-Level Parametric Control
A number of powerful algorithms and systems for sound synthesis, digital signal processing and composition have been designed. In general, however, these algorithms are either too complex (they have an overwhelming number of simultaneous control variables), non-intuitive (their variables are not perceptually meaningful), or both. This limits their usability.
As an illustrative example, consider a typical audio equalizer. This provides a large set of faders for modifying the spectrum of a sound.
Normally, an equalizer is used to modify the auditory quality of the music, for example to achieve a specific tone quality. The larger the number of faders, the more precise the control, but the more complex the operation, since the number of possible combinations of fader positions grows accordingly. The current state of audio technology could support a high-level parametric equalizer with far fewer controls, each governing a particular aural attribute: a degree of "weightiness", a degree of "sharpness" of a sound, and so on. These would not be the equivalent of presets of fader positions, but proper controllers with meaningful variation within given perceptual frameworks.
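As a minimal sketch of what such a control might look like, the fragment below maps a single hypothetical "sharpness" parameter onto the gains of a ten-band graphic equalizer. The band layout, the tilt-based mapping and its range are illustrative assumptions, not an established psychoacoustic model.

```python
import numpy as np

# Hypothetical 10-band graphic equalizer: octave-spaced centre frequencies.
BANDS_HZ = np.array([31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000])

def sharpness_to_gains(sharpness: float, max_tilt_db: float = 6.0) -> np.ndarray:
    """Map one perceptual 'sharpness' control in [-1, 1] to ten band gains.

    Sharpness is modelled here (an assumption, not an established formula)
    as a spectral tilt pivoting around 1 kHz: positive values boost high
    bands and cut low ones; negative values do the opposite.
    """
    tilt = np.log2(BANDS_HZ / 1000.0)       # octaves above/below 1 kHz
    tilt /= np.abs(tilt).max()              # normalise to [-1, 1]
    return np.clip(sharpness, -1.0, 1.0) * max_tilt_db * tilt

# One perceptual control in place of ten independent faders.
print(np.round(sharpness_to_gains(0.5), 1))  # gains in dB per band
```

In a complete system the mapping would be derived from perceptual research, for example from psychoacoustic models of sharpness, rather than from a fixed spectral tilt; that is precisely where the integration with psychoacoustics discussed below comes in.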
The provision of high-level parametric control of sound and music is an important research challenge.
The key to progress is to develop effective methods that integrate cutting-edge developments in audio and music processing technology with ongoing research on auditory and music perception. This can only be achieved by combining research and developments from:
- Psychoacoustics
- Psychology of Music
- Auditory Neuroscience
- Neuroscience of Music
- Acoustics
- Audio Signal Processing
- Music Analysis
Although Psychoacoustics and Psychology of Music have contributed enormously to our understanding of subjective human musical perception, we still do not know what is happening in the brain when we listen to sound and music. The field of Auditory Neuroscience and the emerging field of Neuroscience of Music have started to address this question. Within the next 10 years, advances in these emerging areas will contribute to the development of new technology for Digital Music, most notably new models of machine listening [see Machine Listening], new techniques for coding audio [see Music Exploitation and Distribution] and new approaches to the organization of musical material [see Musical Innovation]. Recent advances in Auditory Neuroscience research have already begun to inform the development of DSP chips modelled after the functioning of the auditory cortex.
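To make the link from auditory science to machine listening concrete, the sketch below implements a crude cochlea-inspired front end: the signal is split into channels by bandpass filters whose centre frequencies are spaced on the ERB-rate scale of Glasberg and Moore. The choice of second-order Butterworth filters and the 20% bandwidth are simplifying assumptions; genuine auditory models use gammatone or more elaborate filters.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def erb_space(f_low: float, f_high: float, n: int) -> np.ndarray:
    """Centre frequencies spaced evenly on the ERB-rate scale
    (Glasberg & Moore), as commonly used in auditory modelling."""
    erb = lambda f: 21.4 * np.log10(1.0 + 0.00437 * f)
    erb_inv = lambda e: (10 ** (e / 21.4) - 1.0) / 0.00437
    return erb_inv(np.linspace(erb(f_low), erb(f_high), n))

def cochlear_front_end(signal: np.ndarray, fs: float, n_channels: int = 16):
    """Split a signal into ERB-spaced bands -- a crude stand-in for the
    cochlear filterbank (Butterworth bandpass instead of gammatone)."""
    outputs = []
    for fc in erb_space(100.0, min(8000.0, 0.45 * fs), n_channels):
        bw = 0.2 * fc  # assumed bandwidth: 20% of centre frequency
        sos = butter(2, [fc - bw / 2, fc + bw / 2], btype="bandpass",
                     fs=fs, output="sos")
        outputs.append(sosfilt(sos, signal))
    return np.array(outputs)  # shape: (n_channels, n_samples)

fs = 16000
t = np.arange(fs) / fs
bands = cochlear_front_end(np.sin(2 * np.pi * 440 * t), fs)
print(bands.shape)
```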
It is important to emphasize, however, that such developments will not happen in isolation, but rather as the result of transdisciplinary research that brings together psychologists, neuroscientists, engineers and musicians.
This research will lead to the development of a new generation of sophisticated technology for:
- Computer Aided Sound Design
- Music Production Tools
- Sonification Methods
- New Digital Musical Instruments and performance tools
- Consumer Applications
Sonification is the use of audio to convey information or to render data perceivable aurally. Because of the characteristics of auditory perception, such as its fine temporal and amplitude resolution, it is an interesting alternative to visualization techniques and conveys temporal information particularly well. Higher-level sound-design parameters informed by our cognitive abilities will have a great impact on Sonification Methods, especially for the auditory monitoring of complex simultaneous events.
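As a minimal sketch of the idea, the fragment below performs the simplest parameter-mapping sonification: each value in a data series becomes a short tone whose pitch rises with the value. The frequency range, note duration and log-frequency mapping are illustrative choices.

```python
import numpy as np

def sonify(values, fs=16000, note_dur=0.15, f_low=220.0, f_high=880.0):
    """Parameter-mapping sonification: each data value becomes a short
    sine tone whose pitch rises with the value (assumed mapping of the
    normalised value onto a log-frequency axis)."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    v = (v - v.min()) / span if span > 0 else np.zeros_like(v)
    freqs = f_low * (f_high / f_low) ** v        # log-frequency mapping
    t = np.arange(int(fs * note_dur)) / fs
    env = np.hanning(t.size)                     # avoid clicks between notes
    return np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])

# Example: render a noisy upward trend as a rising melody.
audio = sonify(np.linspace(0, 1, 40) + 0.1 * np.random.randn(40))
```

Even this crude mapping makes a trend in the data audible at once; higher-level parameters would let the same series drive perceptual attributes such as roughness or brightness instead of raw pitch.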
Digital Musical Instruments are composed of a gestural controller and a sound generation unit. The gestural controller is the device that transduces the performer's physical actions into inputs for the instrument; it is where the physical interaction between performer and instrument takes place. The sound generation unit comprises the sound production methods and their controls. The relationship between gestural variables and synthesis parameters is far from obvious and can be fruitfully varied in each musical application: a single gesture may control a number of synthesis variables simultaneously, as the sketch below illustrates. Research into high-level parametric integration methods is paramount for the design of New Digital Musical Instruments and for augmented and extended instruments.
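The sketch below illustrates such a one-to-many mapping: a single gestural variable, say a normalised pressure reading from a sensor, drives several synthesis parameters at once. All of the mapping curves are invented for illustration; a real instrument would tune them empirically, by ear or from performer studies.

```python
import math

def map_gesture(pressure: float) -> dict:
    """One gestural variable drives several synthesis parameters at once
    (a one-to-many mapping). Every curve below is an illustrative
    assumption, not a prescribed design."""
    p = min(max(pressure, 0.0), 1.0)
    return {
        "amplitude":    p ** 0.5,                # fast initial rise
        "brightness":   0.2 + 0.8 * p ** 2,      # filter opens with pressure
        "noise_mix":    0.05 + 0.3 * p,          # breathier at high pressure
        "vibrato_rate": 4.0 + 2.0 * math.sin(math.pi * p),  # peaks mid-range
    }

print(map_gesture(0.7))
```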
Finally, the audio equalizer example cited earlier is a good illustration of the impact that new developments in high-level control will have on Consumer Applications. Most current domestic Hi-Fi systems provide only three controls for modifying the sound: volume, treble and bass. A new generation of "active" Hi-Fi systems will emerge from this research, in which users will be able to control not only the audio quality of the music but also the way in which it is interpreted; for example, playing back a piano piece in the style of different interpreters, or using sophisticated search mechanisms based on the musical attributes of audio recordings.