Music research papers

Sources

The following journals use the rip-off "pay to view" model:

CMJ is on JSTOR. I looked at 1990-2019.

Nuance analysis

I use the following 'tags' for papers:

"Statistical": studies the statistics of the nuance.

"Modeling": identifies parameterized functions that fit the data.

"Perception": studies how listeners perceive music.

"Physiological": studies factors involving the performer's physical and cognitive processes, or physical properties of instruments.

"Algorithmic": systems that deterministically add nuance to a work based on an (externally defined) structural decomposition, e.g. putting volume hairpins over phrases and a ritard at the end. In some cases there are additional high-level controls, e.g. an 'emotion' knob you can turn. It seems to me that this approach can only generate nuance that is repetitive and predictable; it's not sufficient for e.g. virtual performance. It also implies that there is a single ideal interpretation of a work, which is contradicted by the fact that people keep playing and recording the same works.
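A minimal sketch of the kind of algorithm this tag covers (all names and scale factors here are invented for illustration, not taken from any paper): given a phrase as a note list plus an externally supplied phrase boundary, put a volume hairpin over the phrase and a ritard on its final beats.

```python
# Illustrative only: deterministic nuance from a given structural decomposition.
# Notes are (start_beat, duration_beats, velocity) tuples; numbers are made up.

def apply_hairpin(notes, lo=0.7, hi=1.0):
    """Scale velocities in a crescendo-decrescendo arc over the phrase."""
    span = max(n[0] for n in notes) or 1.0
    out = []
    for start, dur, vel in notes:
        x = start / span                              # position in phrase, 0..1
        arc = lo + (hi - lo) * (1 - abs(2 * x - 1))   # triangle, peak at middle
        out.append((start, dur, vel * arc))
    return out

def apply_final_ritard(notes, last_beats=2.0, slowdown=1.5):
    """Stretch the durations of notes in the last `last_beats` of the phrase."""
    end = max(s + d for s, d, _ in notes)
    return [(s, d * slowdown if s >= end - last_beats else d, v)
            for s, d, v in notes]

phrase = [(i * 0.5, 0.5, 80) for i in range(8)]       # eight even eighth notes
shaped = apply_final_ritard(apply_hairpin(phrase))
```

The weakness noted above is visible here: run it twice on the same phrase and you get the identical "interpretation".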


Basics

Bruno Repp

Most of the important work in this area is by Bruno Repp. His work is amazingly thorough and meticulous.

Physical tempo models

These papers assert that tempo derives from physical movement, e.g. that ritardandi correspond to slowing down with a constant braking force. This implies a quadratic slowness function.

Comment: their curve fits are noisy; it's not clear that a quadratic fits the data better than an exponential or some other decreasing function.
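To make the claim concrete, here is my paraphrase of the kinematic idea (a sketch, not code from the papers): if tempo decreases linearly in time (constant deceleration), the cumulative beat count is a quadratic function of time, and inverting it gives the onset time of each beat of the ritard.

```python
import math

# Sketch of a constant-deceleration ritardando (my paraphrase of the model).
# Tempo v(t) = v0 - a*t beats/sec, so beats elapsed b(t) = v0*t - a*t**2/2.

def ritard_onsets(n_beats, v0=2.0, v1=1.0):
    """Onset times (sec) of n_beats+1 beats while tempo falls v0 -> v1."""
    # pick deceleration a so tempo reaches v1 exactly at beat n_beats:
    # beats elapsed = (v0**2 - v1**2) / (2*a)  =>  a = (v0**2 - v1**2) / (2*n_beats)
    a = (v0 ** 2 - v1 ** 2) / (2 * n_beats)
    onsets = []
    for n in range(n_beats + 1):
        # solve v0*t - a*t**2/2 = n for t (the smaller root)
        t = (v0 - math.sqrt(v0 ** 2 - 2 * a * n)) / a
        onsets.append(t)
    return onsets

onsets = ritard_onsets(8)
iois = [b - a for a, b in zip(onsets, onsets[1:])]   # inter-onset intervals
```

The inter-onset intervals grow monotonically, and the whole shape is fixed by just (v0, v1, n_beats), which is why fitting it against noisy performance data can't easily distinguish it from an exponential.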

Algorithms

Algorithms that generate nuance based on some structural aspect (perhaps manually specified) of the score. Goal: automatically generate a human-sounding rendition (why??).

Other

Manfred Clynes

Manfred Clynes had a theory where nuance comes from 'pulse patterns': multiple levels, one for 16ths in a beat, others for beats in 4/4 and 3/4 measures. Separate patterns for volume and duration.
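A toy illustration of the idea (the scale factors below are invented, not Clynes' published values): per-position multipliers for volume and duration, applied cyclically across a 4/4 measure.

```python
# Toy "pulse pattern": hypothetical per-beat multipliers, not Clynes' numbers.
AMP_4_4 = [1.00, 0.85, 0.92, 0.80]   # volume pattern for beats of a 4/4 bar
DUR_4_4 = [1.05, 0.97, 1.02, 0.96]   # duration pattern for the same beats

def apply_pulse(notes, amp=AMP_4_4, dur=DUR_4_4):
    """notes: (beat_index, duration, velocity) tuples; patterns cycle per bar."""
    out = []
    for i, d, v in notes:
        k = i % len(amp)                 # position within the measure
        out.append((i, d * dur[k], v * amp[k]))
    return out

bar = [(i, 1.0, 100) for i in range(4)]
shaped = apply_pulse(bar)
```

A nested level (a 16th-note pattern inside each beat) would just multiply a second, shorter pattern on top.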

Where do these patterns come from? He gathered a few empirically by having famous performers tap their fingers along with music and recording the result. But mostly he just made them up.

He claimed that each composer (not performer!) had a particular set of pulse patterns that applied equally well to all their work. He had a vague theory of acoustic aesthetics/emotion called "Sentic forms".

This work seems like BS. Repp calls it 'analysis through synthesis': claiming that a model is valid because it produces plausible results in a few cases (though in this case the results aren't even plausible). Also, all of Clynes' material (including his self-authored Wikipedia page) has an uncomfortably high level of self-promotion.

He developed a Windows program called SuperConductor (not related to SuperCollider, AFAIK). SuperConductor does audio synthesis (e.g. of bowed-string sounds). It provides GUI control of both high-level nuance (e.g. crescendi) and note-level nuance (e.g. vibrato). It supports nonlinear curves (exponentials, cubic splines?) and (presumably) pulse patterns. It looks rigid and limited. It comes with a few pieces by "The Great Composers" (in a proprietary format) that you can add nuance to. It can also import/export MIDI.

There are a bunch of examples on YouTube, mostly from string quartets. These are not bad - they have nuance with appropriate general properties. However, the nuance is excessive, and it's clear within a few seconds that it's computer-generated.

Papers:

Other

Comments

No one has compared multiple performances by the same performer.

Sources of performance data

  • MAESTRO: a corpus of piano performances (MIDI file and audio; no score info).

Visualizing nuance, virtual conductor

  • Visualizing Expressive Performance in Tempo-Loudness Space. Langner, Goebl. CMJ 27(4), winter 2003.

    Unified display of volume and tempo. Could be used for virtual conductor.
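The underlying computation is simple; here is my reconstruction of the idea (not the paper's actual algorithm, which also smooths and animates the display): pair a local tempo estimate with a local loudness value at each beat, giving a 2-D trajectory over time.

```python
# Sketch (my reconstruction, not the paper's code): a tempo-loudness path.

def tempo_loudness_path(onsets, loudness):
    """onsets: beat onset times (sec); loudness: per-beat loudness proxies.
    Returns a list of (bpm, loudness) points, one per inter-onset interval."""
    path = []
    for i in range(1, len(onsets)):
        ioi = onsets[i] - onsets[i - 1]   # inter-onset interval
        path.append((60.0 / ioi, loudness[i]))
    return path
```

A real display would low-pass filter both dimensions before plotting.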

Spatialization

Programming Languages

  • ChucK: a strongly timed computer music language. Wang, Cook, Salazar. CMJ 39(4) winter 2015.
  • A Max Library for Musical Notation and Computer-Aided Composition. Agostini, Ghisi. CMJ 39(2).

    textual score specification for Max.

    Where's the beef?

Score representation

  • Electronic Scores for Music: The Possibilities of Animated Notation. Cat Hope. CMJ 41(3), fall 2017.

    Use of color, animation etc.

  • Scores, Programs, and Time Representation: The Sheet Object in OpenMusic. Bresson, Agon. CMJ 32(4) winter 2008.

    A score editor that's integrated with a visual programming environment. Could be relevant to nuance generation.

  • Expressive Notation Package. Kuuskankare, Laurson. CMJ 30(4) winter 2006.

    A Lisp-based music representation that handles non-metric avant-garde stuff and associated nuance markings.

Audio alignment to score

  • Bayesian Audio-to-Score Alignment Based on Joint Inference of Timbre, Volume, Tempo, and Note Onset Timings. Maezawa, Okuno. CMJ 39(1) spring 2015.

    map audio notes to score notes.
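The paper does joint Bayesian inference; the standard simple baseline for this task is dynamic time warping over feature sequences. A minimal sketch (the features and cost function below are placeholders, not the paper's):

```python
# Minimal DTW alignment baseline (not the paper's Bayesian method).
# a, b are feature sequences (here just numbers); cost() is a placeholder.

def dtw(a, b, cost=lambda x, y: abs(x - y)):
    """Return (total_cost, path) aligning sequence a to sequence b."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = cost(a[i - 1], b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # backtrack from the corner, always taking the cheapest predecessor
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, (i, j) = min((D[i - 1][j - 1], (i - 1, j - 1)),
                        (D[i - 1][j], (i - 1, j)),
                        (D[i][j - 1], (i, j - 1)))
    path.reverse()
    return D[n][m], path
```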

Automated accompaniment

Bells

  • Cymatic synthesis of a series of bells, Barrass.

Indexing sites

Do they have PDFs?

Taylor & Francis: no

Semantic Scholar: sometimes

ResearchGate: sometimes

booksc.me: yes (Arabic)

vdocuments.net

    Copyright 2025 © David P. Anderson