12/08/16 CogMIR 2016

I presented a poster at the 6th Annual Seminar on Cognitively Based Music Informatics Research (CogMIR) 2016. I was awarded best poster for this work.

Adaptivity in Oscillator-based Pulse and Metre Perception

Beat induction is the perceptual and cognitive process by which we listen to music and perceive a steady pulse. Computationally modelling beat induction is important for many MIR methods and is in general an open problem, especially when processing expressive timing, e.g. tempo changes or rubato.

Large et al. (2010) have proposed a neuro-cognitive model, the Gradient Frequency Neural Network (GFNN), which can model the perception of pulse and metre. GFNNs have been applied successfully to a range of 'difficult' music perception problems (see Angelis et al., 2013; Velasco and Large, 2011).

We have found that GFNNs perform poorly when dealing with tempo changes in the stimulus. We have thus developed the novel Adaptive Frequency Neural Network (AFNN), which extends the GFNN with Righetti et al.'s (2006) Hebbian learning rule. Two new adaptive behaviours (attraction and elasticity) increase entrainment and improve model efficiency by allowing a large reduction in the size of the network.
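
The kind of frequency adaptation the AFNN builds on can be illustrated with a minimal, self-contained sketch: a single Righetti-style adaptive Hopf oscillator, Euler-integrated. All parameter values here are illustrative, not those used in the poster:

```python
import math

def adaptive_hopf(forcing, omega0, dt=0.001, gamma=8.0, mu=1.0, eps=2.0):
    """One Hopf oscillator with Righetti-style frequency adaptation.

    forcing: sampled input F(t); returns the trajectory of omega (rad/s).
    """
    x, y, omega = 1.0, 0.0, omega0
    omegas = []
    for F in forcing:
        r2 = x * x + y * y
        r = math.sqrt(r2) or 1e-9
        dx = gamma * (mu - r2) * x - omega * y + eps * F
        dy = gamma * (mu - r2) * y + omega * x
        domega = -eps * F * y / r          # Hebbian frequency adaptation
        x += dt * dx
        y += dt * dy
        omega += dt * domega
        omegas.append(omega)
    return omegas

# Drive a 1.2 Hz oscillator with a 1.5 Hz sinusoid; omega should drift
# towards 2*pi*1.5 as the oscillator entrains to the stimulus.
dt, T = 0.001, 60.0
forcing = [math.sin(2 * math.pi * 1.5 * i * dt) for i in range(int(T / dt))]
traj = adaptive_hopf(forcing, omega0=2 * math.pi * 1.2, dt=dt)
print(round(traj[-1] / (2 * math.pi), 2))   # adapted frequency in Hz
```

This is the attraction behaviour in miniature: the oscillator's natural frequency is pulled towards a nearby area of resonance, which is what lets an AFNN cover the metrical range with far fewer oscillators than a fixed-frequency GFNN.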

We will extend our comparative study presented at this year’s ISMIR with a new set of results incorporating different forms of expressive timing and we will discuss the biological and cognitive plausibility of the AFNN.

View Poster »

poster, software, music


08/08/16 ISMIR 2016

I presented a poster at the 17th International Society for Music Information Retrieval Conference (ISMIR) 2016.

Adaptive Frequency Neural Networks for Dynamic Pulse and Metre Perception

Beat induction, the means by which humans listen to music and perceive a steady pulse, is achieved via a perceptual and cognitive process. Computationally modelling this phenomenon is an open problem, especially when processing expressive shaping of the music such as tempo change. To meet this challenge we propose Adaptive Frequency Neural Networks (AFNNs), an extension of Gradient Frequency Neural Networks (GFNNs).

GFNNs are based on neurodynamic models and have been applied successfully to a range of difficult music perception problems including those with syncopated and polyrhythmic stimuli. AFNNs extend GFNNs by applying a Hebbian learning rule to the oscillator frequencies. Thus the frequencies in an AFNN adapt to the stimulus through an attraction to local areas of resonance, and allow for a great dimensionality reduction in the network.

Where previous work with GFNNs has focused on frequency and amplitude responses, we also consider phase information as critical for pulse perception. Evaluating the time-based output, we find significantly improved responses of AFNNs compared to GFNNs to stimuli with both steady and varying pulse frequencies. This leads us to believe that AFNNs could replace the linear filtering methods commonly used in beat tracking and tempo estimation systems, and lead to more accurate methods.

Read Paper » View Poster »

poster, paper, software, music


27/07/16 MUME 2016

I presented a paper at the 4th International Workshop on Musical Metacreation (MUME16), held at the Seventh International Conference on Computational Creativity (ICCC) 2016.

Metrical Flux: Towards Rhythm Generation in Continuous Time

This work-in-progress report describes our approach to expressive rhythm generation. So far, music generation systems have mostly focused on discrete time modelling. Since musical performance and perception unfold in time, we see continuous time modelling as a more realistic approach. Here we present work towards a continuous time rhythm generation system.

In our model, two neural networks are combined within one integrated system. A novel Adaptive Frequency Neural Network (AFNN) models the perception of changing periodicities in metrical structures by entraining and resonating nonlinearly with a rhythmic input. A Recurrent Neural Network models longer-term temporal relations based on the AFNN's response and generates rhythmic events.

We outline an experiment and evaluation method to validate the model and invite the MUME community's feedback.

Read Paper »

talk, paper, software, music


27/07/15 SMC 2015

I presented a paper at the 12th Sound and Music Computing Conference (SMC) 2015.

Perceiving and Predicting Expressive Rhythm with Recurrent Neural Networks

Automatically following rhythms by beat tracking is by no means a solved problem, especially when dealing with varying tempo and expressive timing.

This paper presents a connectionist machine learning approach to expressive rhythm prediction, based on cognitive and neurological models. We detail a multi-layered recurrent neural network combining two complementary network models as hidden layers within one system.

The first layer is a Gradient Frequency Neural Network (GFNN), a network of nonlinear oscillators which acts as an entraining and learning resonant filter to an audio signal. The GFNN resonances are used as inputs to a second layer, a Long Short-term Memory Recurrent Neural Network (LSTM). The LSTM learns the long-term temporal structures present in the GFNN's output, the metrical structure implicit within it. From these inferences, the LSTM predicts when the next rhythmic event is likely to occur.
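The resonant-filter idea in the first layer can be illustrated with a heavily simplified, self-contained sketch. It uses damped linear resonators as a stand-in for the GFNN's nonlinear oscillators, and the frequencies, Q factor and pulse train are illustrative choices, not values from the paper:

```python
import math

def resonator_bank(freqs_hz, onset_times, T=20.0, dt=0.002, q=50.0):
    """Drive a bank of damped linear resonators with an impulse train and
    return each resonator's RMS amplitude over the second half of the run."""
    n = int(T / dt)
    onsets = {int(round(t / dt)) for t in onset_times}
    amps = []
    for f in freqs_hz:
        w = 2 * math.pi * f
        beta = w / (2 * q)               # light damping; q sets selectivity
        x, v, energy = 0.0, 0.0, 0.0
        for i in range(n):
            force = 1.0 / dt if i in onsets else 0.0   # unit impulses
            v += dt * (force - 2 * beta * v - w * w * x)
            x += dt * v                  # semi-implicit Euler step
            if i >= n // 2:
                energy += x * x
        amps.append(math.sqrt(energy / (n - n // 2)))
    return amps

# A strict 2 Hz pulse train: the strongest responses appear at the pulse
# rate (2 Hz) and its harmonic (4 Hz), sketching a metrical hierarchy.
onsets = [k * 0.5 for k in range(40)]
freqs = [0.7, 1.0, 1.4, 2.0, 2.8, 4.0]
amps = resonator_bank(freqs, onsets)
for f, a in zip(freqs, amps):
    print(f, round(a, 3))
```

Only the resonators at the pulse rate and its harmonic respond strongly, hinting at the hierarchy of strong and weak periods. A real GFNN uses nonlinear oscillators, which can also resonate at subharmonics and integer ratios of the stimulus frequency; a linear bank like this cannot.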

We train the system on a dataset selected for its expressive timing qualities and evaluate the system on its ability to predict rhythmic events. We show that our GFNN-LSTM model performs as well as state-of-the-art beat trackers and has the potential to be used in real-time interactive systems, following and generating expressive rhythmic structures.

Read Paper »

talk, paper, software, music


31/12/14 RI Robot Orchestra

I took part in this year's Royal Institution (RI) Christmas Lectures, presented by Professor Danielle George under the theme 'Sparks Will Fly'. The lecture was broadcast on BBC Four.

I collaborated with robotics and engineering experts from universities across the UK to present a mixed robot and human orchestra rendition of the Doctor Who theme. I took on the role of conductor, providing central control of the robot orchestra in an exciting musical and technological performance.

View Code » Read More »

software, music


12/10/14 NYUAD Rhythm Workshop II

NYU - Abu Dhabi

I attended NYU Abu Dhabi's International Workshop on Crossdisciplinary and Multi-cultural Perspectives on Musical Rhythm and Improvisation II. The event brought together some 25 leading scholars, invited to lecture on and discuss their research and to investigate musical rhythm and improvisation in non-Western music from a variety of academic standpoints. The list of participants included researchers from the fields of music theory and composition, musicology, music information retrieval, music perception and neuroscience.

I was one of the lucky few awarded a travel grant to attend, briefly present my own research, and join the discussion.

talk, software, music


04/10/14 MUME 2014

I presented a paper at the 3rd International Workshop on Musical Metacreation (MUME'14), in conjunction with the Tenth Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE'14) 2014. MUME brings together artists, practitioners and researchers interested in developing systems that autonomously (or interactively) recognize, learn, represent, compose, complete, accompany, or interpret music.

Studying the Effect of Metre Perception on Rhythm and Melody Modelling with LSTMs

In this paper we take a connectionist machine learning approach to the problem of metre perception and melody learning in musical signals. We present a multi-layered network consisting of a nonlinear oscillator network and a recurrent neural network. The oscillator network acts as an entrained resonant filter to the musical signal. It `perceives' metre by resonating nonlinearly to the inherent periodicities within the signal, creating a hierarchy of strong and weak periods. The neural network learns the long-term temporal structures present in this signal. We show that this network outperforms our previous approach of a single layer recurrent neural network in a melody and rhythm prediction task.

We hypothesise that our system is enabled to make use of the relatively long temporal resonance in the oscillator network output, and therefore model more coherent long-term structures. A system such as this could be used in a multitude of analytic and generative scenarios, including live performance applications.

Read Paper »

talk, paper, software, music


14/09/14 ICMC-SMC 2014

International Computer Music Conference

I presented a paper at the joint ICMC-SMC conference, 2014. For the first time, two well-established events were brought together: the 40th International Computer Music Conference (ICMC) and the 11th Sound & Music Computing Conference (SMC).

Beyond the Beat: Towards Metre, Rhythm and Melody Modelling with Hybrid Oscillator Networks

In this paper we take a connectionist machine learning approach to the problem of metre perception and learning in musical signals. We present a hybrid network consisting of a nonlinear oscillator network and a recurrent neural network. The oscillator network acts as an entrained resonant filter to the musical signal. It ‘perceives’ metre by resonating nonlinearly to the inherent periodicities within the signal, creating a hierarchy of strong and weak periods. The neural network learns the long-term temporal structures present in this signal. We show that this hybrid network outperforms our previous approach of a single layer recurrent neural network in a melody prediction task.
We hypothesise that our hybrid system is enabled to make use of the relatively long temporal resonance in the oscillator network output, and therefore model more coherent long-term structures. A system such as this could be used in a multitude of analytic and generative scenarios, including live performance applications.

Read Paper » View Poster »

paper, poster, software, music


17/12/13 DMRN+8

I presented a poster at DMRN+8. The Digital Music Research Network (DMRN) aims to promote research in the area of Digital Music, by bringing together researchers from UK universities and industry in electronic engineering, computer science, and music.

Deep Rhythms: Towards Structured Meter Perception, Learning and Generation with Deep Recurrent Oscillator Networks

View Poster »

talk, poster, software, music


11/07/13 Marinus Gig

Marinus

Marinus have been busy writing a brand new set. We're playing at Sticky Mike's Frog Bar, doors at 8!

View Soundcloud »

gig, music


06/06/13 OSC

Open Sound Collective

Matt Garland and I founded Open Sound Collective (OSC) as a new monthly meet up for anyone involved with or interested in producing electronic music.

The collective is open to anyone, be you amateur or professional, working with software or hardware. So if you know your MSPs from your DSPs, your quarks from your reciprocal of qs, or your SuperCollider from your Ultranova, come along and say hi!

The meet up is primarily a social gathering, but there are also sets from local musicians, an open jack for anyone who wants to air their new tune, and a hackspace if you feel inspired or want to show off your latest patch.

OSC takes place on the first Thursday of every month at The Brunswick Pub in Hove. Look for us upstairs in the White Room from around 7:30pm.

Be there or be a square wave.

talk, software, music


09/09/12 ICMC 2012

International Computer Music Conference

I presented a paper at ICMC 2012, a major international forum for the presentation of outcomes from technical and musical research, both practical and theoretical, related to the use of computers in music.

A Stigmergic Model for Oscillator Synchronisation and its Application in Music Systems

Non-linear and chaotic dynamics, predominantly used in engineering, have become a pervasive influence in contemporary culture. Artists, philosophers and commentators are increasingly drawing upon the richness of these systems in their work. This paper explores one area of this territory: the synchronisation of a population of non-linear oscillators used for the generation of rhythm as applied in musical systems.
Synchronisation is taken as a basis for complex rhythmic dynamics. Through stigmergy, a self-organisation principle in which entities influence one another indirectly via their environment, local field coupling is introduced as a qualitatively stigmergic alternative to the Kuramoto model, incorporating noise, distance, delay and influence.
An interactive system of stigmergically synchronised oscillators was developed that can be applied across many fields. The user can become part of the stigmergy by influencing the environment. The system is then applied to music, generating rhythms and sounds by mapping its state.
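
For context, the classic Kuramoto model that the paper takes as its baseline can be sketched in a few lines of mean-field coupling; the population size, coupling strength and frequency spread below are illustrative only:

```python
import math
import random

def kuramoto_order(n=50, K=4.0, dt=0.01, steps=4000, seed=1):
    """Integrate the mean-field Kuramoto model and return the final
    order parameter r (0 = incoherent, 1 = fully synchronised)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2 * math.pi) for _ in range(n)]
    omega = [rng.gauss(2 * math.pi, 0.5) for _ in range(n)]  # natural freqs
    r = 0.0
    for _ in range(steps):
        sx = sum(math.cos(t) for t in theta) / n
        sy = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(sx, sy), math.atan2(sy, sx)
        # each oscillator is pulled towards the population's mean phase
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return r

print(round(kuramoto_order(), 2))
```

Once the coupling K exceeds the critical value set by the spread of natural frequencies, r grows close to 1. The stigmergic model in the paper replaces this direct, global mean-field coupling with indirect local coupling through a shared environment, which the user can also perturb.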

Read Paper » View Software »

talk, paper, software, music


16/04/12 SC.art Exhibition

The Art Pavilion

I took part in an exhibition of interactive and audio-visual art hosted at The Art Pavilion in Mile End Park, East London.

Other artists included Curtis and Chad McKinney, Andre Bartetzki, Jan Trutzschler von Falkenstein and Mike Rijnierse, Bernd Schurer, Till Bovermann, Simon Katan, Yorgos Diapoulis and Scott Nobles.

exhibition, software, music


16/04/12 SuperCollider Symposium 2012

SuperCollider

I presented at sc2012, an international event where musicians, artists and coders come together for a week of talks, music, art and workshops.

My presentation explained how you can use Crickets to interface with SuperCollider for music generation.

View Software »

talk, paper, software, music


© Andrew Elmsley 2017 | andy [at] andyroid.co.uk