Speaker: Elaine Chew, Queen Mary University of London
UCL Contact: Nicholas Gold (Visitors from outside UCL please email in advance).
Date/Time: 23 January 2019, 10:30–11:30
Venue: John Adams Hall G27
The explosion in digital music information has spurred the development of mathematical models and computational algorithms for accurate, efficient, and scalable processing of music information. Total global recorded music revenue was US$17.3b in 2017, 41% of which was digital (2018 IFPI Report). Industrial-scale applications such as Shazam have over 150 million monthly active users, and Spotify over 140 million. With such widespread access to large digital music collections, there is substantial interest in scalable models for music processing. Optimisation concepts and methods thus play an important role in machine models of music engagement, music experience, music analysis, and music generation. In the first part of the talk, I shall show how optimisation ideas and techniques have been integrated into computer models of music representation and expressivity, and into computational solutions to music generation and structure analysis.
Advances in medical and consumer devices for measuring and recording physiological data have given rise to parallel developments in computing in cardiology. While the information sources (music and cardiac signals) share many rhythmic and other temporal similarities, the techniques of mathematical representation and computational analysis have developed independently, as have the tools for data visualisation and annotation. In the second part of the talk, I shall describe recent work applying music representation and analysis techniques to electrocardiographic sequences, with applications to personalised diagnostics, cardiac-brain interactions, and disease and risk stratification. These applications represent ongoing collaborations with Professors Pier Lambiase and Peter Taggart (UCL), and Dr. Ross Hunter at the Barts Heart Centre.
Elaine Chew is Professor of Digital Media in the School of Electronic Engineering and Computer Science at Queen Mary University of London. Before joining QMUL in Fall 2011, she was a tenured Associate Professor in the Viterbi School of Engineering and Thornton School of Music (joint) at the University of Southern California, where she founded the Music Computation and Cognition Laboratory and was the inaugural honoree of the Viterbi Early Career Chair. She has also held visiting appointments at Harvard (2008-2009) and Lehigh University (2000-2001), and was Affiliated Artist of Music and Theater Arts at MIT (1998-2000). She received PhD and SM degrees in Operations Research at MIT (in 2000 and 1998, respectively), a BAS in Mathematical and Computational Sciences (honors) and in Music (distinction) at Stanford (1992), and FTCL and LTCL diplomas in Piano Performance from Trinity College, London (in 1987 and 1985, respectively).
She was awarded an ERC Advanced Grant (ADG) in 2018 for the project COSMOS: Computational Shaping and Modeling of Musical Structures. She is a past recipient of a 2005 Presidential Early Career Award for Scientists and Engineers (the highest honour conferred by the US government on early-career scientists and engineers), a Faculty Early Career Development (CAREER) Award from the US National Science Foundation, and 2007 and 2017 Fellowships at Harvard's Radcliffe Institute for Advanced Study. She is an alum (Fellow) of the (US) National Academy of Sciences' Kavli Frontiers of Science Symposia and of the (US) National Academy of Engineering's Frontiers of Engineering Symposia for outstanding young scientists and engineers.
Her research, centred on computational analysis of music structures in performed music, performed speech, and cardiac arrhythmias, has been supported by the ERC, EPSRC, AHRC, and NSF, and has been featured on BBC World Service and BBC Radio 3, and in Smithsonian Magazine, the Philadelphia Inquirer, Wired Blog, MIT Technology Review, and The Telegraph, among others. She is at the centre of one of nine publication clusters with five or more women in the international music information retrieval community (2016 ISMIR infometric study). She has given more than 30 keynote lectures on her research and more than 125 performances as a pianist. Her spiral array model, documented in a monograph in Springer's International Series in Operations Research & Management Science, is widely regarded as one of the most successful mathematical models of tonal perception. The spiral array underlies the latest tension model, which constrains long-term structure and narrative in the MSCA-funded MorpheuS project on fully automatic polyphonic music generation.
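To give a concrete flavour of the spiral array model mentioned above, the sketch below illustrates its core geometric idea: pitches are placed a quarter turn apart along a helix ordered by perfect fifths, and a collection of sounding pitches is summarised by a weighted centroid, the "centre of effect". This is a minimal illustration, not Chew's reference implementation; the radius `r`, vertical rise `h`, the line-of-fifths indexing, and the weights are illustrative assumptions.

```python
import math

def pitch_position(k, r=1.0, h=math.sqrt(2 / 15)):
    """Position of the k-th pitch along the line of fifths (C = 0, G = 1, ...).

    Each perfect fifth advances a quarter turn around the helix and rises
    by h. The defaults for r and h are illustrative, not canonical.
    """
    return (r * math.sin(k * math.pi / 2),
            r * math.cos(k * math.pi / 2),
            k * h)

def centre_of_effect(indices, weights):
    """Weighted centroid of a set of pitch positions on the spiral."""
    total = sum(weights)
    points = [pitch_position(k) for k in indices]
    return tuple(
        sum(w * p[i] for w, p in zip(weights, points)) / total
        for i in range(3)
    )

# A C major triad as line-of-fifths indices relative to C = 0:
# C = 0, G = 1, E = 4 (E lies four fifths above C).
ce = centre_of_effect([0, 1, 4], [0.6, 0.2, 0.2])
```

The centre of effect is the key device: distances from it to chord and key representations on the spiral can then be compared, which is how the model supports tasks such as key finding and the tension computations used in MorpheuS.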