Science and Music

The Science and Music Research Group (SMRG) conducts research on the application of computing and engineering techniques to current issues in music. This work is pursued by an interdisciplinary, inter-faculty consortium of researchers drawn from many disciplines, institutions, and countries.

Traditionally, engineers and computer scientists have often confused audio and written representations of music with the musical artefact itself. In addition, most of the technological tools available to musicians are solutions based on well-known engineering principles such as signal processing and typesetting. The next significant step for this research is a concentration on music processing, as opposed to audio and graphical processing.

The SMRG collaborates with performers from music conservatories and with psychologists specialising in perception to deliver solutions based on signal processing and applied computing platforms.

Research topics

Empirical Musicology: Gesture and Structure
Rehearsal and Analysis Tools for Microtonal Performance
Computer Representations of Musical Performances

Staff

Dr Nicholas Bailey
Prof Graham Hair

Empirical Musicology: Gesture and Structure

Dr Nicholas Bailey, Dr Jennifer MacRitchie, Prof Graham Hair, Carola Boehm, Prof Bruce Mahin

Empirical musicology involves measuring the physical production of sound during musical performance. It is of significance historically (informing authentic performance practice), didactically (helping expert teachers identify particular attributes of a performer's technique) and informatically (by discovering musical structure implicit in expert performers' gestures). The measurements range from whole-body profiling to establishing rapid and subtle gestures used in sound production, and the engineering challenge is to undertake these measurements without any significant perturbation of the performance. (Video)

Rehearsal and Analysis Tools for Microtonal Performance

Dr Nicholas Bailey, Prof Graham Hair, Prof Jane Ginsborg, Dr Ingrid Pearson, Prof Richard Parncutt

Microtonal music refers to works notated with divisions of the octave other than the usual twelve equal semitones. Although a conservatory-trained performer can readily resolve pitch differences much smaller than a semitone (usually about 20 times smaller), performing such work presents many and varied challenges. What are the extent and limits of microtonal variation in musical performance? How, if at all, can instrumentalists perform microtonal music on instruments designed around the usual 12 divisions of the octave? What are the compositional consequences of scales with intervals which do not necessarily exist in “normal” music? How can such structures be represented and analysed? In the following video, composer Graham Hair talks to Alex South of the Scottish Clarinet Quartet about the practicalities of rehearsing and performing songs written with 19 divisions of the octave. Alex is seen using microtonal rehearsal software which gives him real-time feedback on the pitch nuances of his performance. (Video)
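As a rough illustration of what an equal division of the octave (EDO) means in frequency terms (this sketch is not part of the SMRG software), each step of an N-fold equal division multiplies frequency by the 2^(1/N), so a 19-EDO step is noticeably smaller than a 12-EDO semitone:

```python
import math

def edo_frequency(step, divisions, ref_freq=440.0):
    """Frequency of a pitch `step` steps above the reference
    in an equal division of the octave into `divisions` parts."""
    return ref_freq * 2 ** (step / divisions)

def cents(f1, f2):
    """Interval from f2 up to f1 in cents (1200 cents per octave)."""
    return 1200 * math.log2(f1 / f2)

# One step of 19-EDO versus one 12-EDO semitone:
semitone_12 = cents(edo_frequency(1, 12), 440.0)  # exactly 100 cents
step_19 = cents(edo_frequency(1, 19), 440.0)      # 1200/19 ~ 63.2 cents
print(f"12-EDO semitone: {semitone_12:.1f} cents")
print(f"19-EDO step:     {step_19:.1f} cents")
```

The difference of roughly 37 cents per step is well within the pitch resolution of a trained performer, which is why such repertoire is performable at all.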

Computer Representations of Musical Performances

Dr Nicholas J Bailey, Prof Graham Hair, Dr Jennifer MacRitchie, Dr Margaret McAllister

Computer representations of performance data are often too naïve to be of much use to performance practitioners.

The reason for this is that while computers (and the engineers using them) are very good at representing quantities and timings, the kind of information sought by the music analyst even at the most basic level contains terms which are not easy to interpret algorithmically. “Find the degree by which all leading notes in this performance are sharpened” might sound like a simple signal processing question, until one turns to consider the predicate: “which notes are leading notes?”
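The point can be made concrete with a toy sketch (the data layout and annotations here are hypothetical, not the SMRG's actual format): once an analyst has supplied the judgement of which notes are leading notes, measuring their sharpening reduces to simple arithmetic on pitch ratios; it is the labelling itself that resists algorithmic treatment.

```python
import math

def cents_deviation(measured_hz, nominal_hz):
    """Deviation of a measured pitch from its nominal (score) pitch, in cents."""
    return 1200 * math.log2(measured_hz / nominal_hz)

# Hypothetical annotated performance data: the `is_leading_note` flags
# encode an analytical judgement that no simple algorithm can supply.
notes = [
    {"measured_hz": 494.9, "nominal_hz": 493.88, "is_leading_note": True},
    {"measured_hz": 523.1, "nominal_hz": 523.25, "is_leading_note": False},
    {"measured_hz": 371.2, "nominal_hz": 369.99, "is_leading_note": True},
]

sharpening = [cents_deviation(n["measured_hz"], n["nominal_hz"])
              for n in notes if n["is_leading_note"]]
print(f"mean leading-note sharpening: {sum(sharpening) / len(sharpening):.1f} cents")
```

The easy part (the two-line calculation) and the hard part (deciding which notes qualify) are exactly the division of labour described above.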

The SMRG addresses the problem of storing and querying its growing corpus of performance data by developing two tools: Performance Mark-up Language (a superset of the ubiquitous MusicXML) for information interchange, and an extension to the PostgreSQL database called ARNE (Annotated Retrieval of Note Events). Both have been designed for maximum flexibility and for applicability to as wide a variety of performance scenarios as possible.

A key feature of these technologies is the ability to present performance information alongside a musical score, making experimental results much more accessible to musicians.