Matt Prockup

Music, Machine Learning, Interactive Systems

I am currently a scientist at Pandora working on methods and tools for Music Information Retrieval at scale. I received my Ph.D. in Electrical Engineering from Drexel University. My research interests span a wide range of topics, including audio signal processing, machine learning, and human-computer interaction. I am also an avid percussionist and composer, having performed in and composed for various ensembles large and small. I have also studied floral design and wheel-thrown ceramics.

Research Overview

Download my C.V. here!



Modeling Genre with Musical Attributes

Genre provides one of the most convenient groupings of music, but it is often regarded as poorly defined and largely subjective. In this work, we seek to answer whether musical genres can be modeled objectively via a combination of musical attributes, and whether audio features can mimic the behavior of those attributes. This work is done in collaboration with Pandora, and evaluation is performed using Pandora’s Music Genome Project® (MGP).
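As a rough illustration of this kind of comparison, the sketch below trains the same classifier once on human-labeled attribute vectors and once on audio feature vectors, then compares cross-validated genre accuracy. The arrays are random placeholders standing in for MGP-style data (which is not public), and the feature dimensions are made up for the example.

```python
# Hypothetical sketch: compare genre classifiers trained on human-labeled
# musical attributes vs. audio features. All data below is random
# placeholder data; real experiments would use MGP labels and song audio.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_songs = 500
attributes = rng.random((n_songs, 48))    # e.g., swing, vocal grit, distortion
audio_feats = rng.random((n_songs, 120))  # e.g., MFCC/rhythm feature statistics
genres = rng.integers(0, 10, n_songs)     # 10 hypothetical genre labels

for name, X in [("attributes", attributes), ("audio features", audio_feats)]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, X, genres, cv=5).mean()
    print(f"{name}: {acc:.3f} mean CV accuracy")
```

If audio features truly mimic the human-labeled attributes, the two accuracies should be close; a large gap would suggest the features miss something the human labels capture.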



Modeling Rhythmic Attributes in Music

Musical meter and attributes of rhythmic feel, such as swing, syncopation, and danceability, are crucial in defining musical style. In this work, we propose a number of tempo-invariant audio features for modeling meter and rhythmic feel. This work is done in collaboration with Pandora, and evaluation is performed using Pandora’s Music Genome Project® (MGP).
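The sketch below shows one simple way to get tempo invariance, assuming librosa: autocorrelate the onset-strength envelope, then sample it at lags that are fixed fractions and multiples of the estimated beat period, so songs at different tempos yield comparable vectors. This illustrates the idea only; it is not the exact feature set proposed in the paper.

```python
# A minimal sketch of a tempo-invariant rhythm feature: sample the
# onset-envelope autocorrelation at metrically meaningful beat fractions.
import numpy as np
import librosa

y, sr = librosa.load(librosa.example("trumpet"))  # any audio file works here
hop = 512
oenv = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)
tempo, _ = librosa.beat.beat_track(onset_envelope=oenv, sr=sr, hop_length=hop)
tempo = float(np.atleast_1d(tempo)[0])

# Beat period measured in onset-envelope frames.
beat_period = (60.0 / tempo) * sr / hop

ac = librosa.autocorrelate(oenv)
ac = ac / (ac[0] + 1e-8)  # normalize so the feature is level-invariant

# Sample at fixed fractions/multiples of a beat (half beats, triplets, bars).
fractions = np.array([0.25, 1/3, 0.5, 2/3, 0.75, 1.0, 1.5, 2.0, 3.0, 4.0])
lags = np.clip((fractions * beat_period).astype(int), 0, len(ac) - 1)
rhythm_feature = ac[lags]  # same length and meaning regardless of tempo
print(rhythm_feature)
```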


Percussion Excitation

In this work, we present a system that seeks to classify different expressive articulation techniques independently of the percussion instrument on which they are played.
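A minimal sketch of how instrument independence might be evaluated, assuming scikit-learn: hold out one instrument at a time, so the model must recognize a technique on an instrument it never saw during training. The features and labels below are random placeholders for per-stroke audio descriptors.

```python
# Hypothetical sketch: leave-one-instrument-out evaluation of an
# articulation-technique classifier, using placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_strokes = 600
X = rng.random((n_strokes, 40))            # e.g., MFCC + envelope statistics
technique = rng.integers(0, 5, n_strokes)  # e.g., strike, buzz, rimshot, ...
instrument = rng.integers(0, 4, n_strokes) # e.g., snare, tom, kick, cymbal

# Each fold trains on three instruments and tests on the fourth.
scores = cross_val_score(SVC(kernel="rbf"), X, technique,
                         groups=instrument, cv=LeaveOneGroupOut())
print("per-held-out-instrument accuracy:", scores)
```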


Dataset of Expressive Percussion Technique

In this work, we outline a newly recorded dataset that encompasses a wide array of percussion performance expressions on a standard four-piece drum kit.


LiveNote: Orchestral Performance Companion

We have developed a system that guides users through an orchestral performance in real time using a handheld application (an iPhone app). Using audio features, we align the live performance audio with a previously annotated reference recording. The aligned position is transmitted to users’ handheld devices, and pre-annotated information about the piece is displayed synchronously. [video] [official PhilOrch page]
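For illustration, here is a minimal offline sketch of the underlying alignment idea, assuming librosa: DTW over chroma features maps positions in the live audio to positions in the reference recording. The actual LiveNote system performs this alignment online in real time, and the file names below are hypothetical.

```python
# Offline sketch of audio-to-reference alignment via DTW on chroma features.
import numpy as np
import librosa

ref, sr = librosa.load("reference.wav")    # hypothetical annotated reference
live, _ = librosa.load("live.wav", sr=sr)  # hypothetical live performance

hop = 2048
C_ref = librosa.feature.chroma_cqt(y=ref, sr=sr, hop_length=hop)
C_live = librosa.feature.chroma_cqt(y=live, sr=sr, hop_length=hop)

# DTW returns a warping path of (reference_frame, live_frame) index pairs.
D, wp = librosa.sequence.dtw(X=C_ref, Y=C_live, metric="cosine")
wp = wp[::-1]  # path is returned end-to-start; reverse to chronological order

# Map a live timestamp to the reference position, where pre-annotated
# program notes would be looked up and pushed to the handheld app.
live_frame = int(10.0 * sr / hop)  # e.g., 10 seconds into the performance
ref_frame = wp[np.argmin(np.abs(wp[:, 1] - live_frame)), 0]
print("reference position: %.1f s" % (ref_frame * hop / sr))
```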


Related Published Work

  • Prockup, M., Ehmann, A., Gouyon, F., Schmidt, E., Celma, O., Kim, Y., "Modeling Genre with the Music Genome Project: Comparing Human-Labeled Attributes and Audio Features." International Society for Music Information Retrieval Conference, Malaga, Spain, 2015. [PDF]

  • Prockup, M., Asman, A., Ehmann, A., Gouyon, F., Schmidt, E., Kim, Y., "Modeling Rhythm Using Tree Ensembles and the Music Genome Project." Machine Learning for Music Discovery Workshop at the 32nd International Conference on Machine Learning, Lille, France, 2015. [PDF]

  • Prockup, M., Ehmann, A., Gouyon, F., Schmidt, E., Kim, Y., "Modeling Rhythm at Scale with the Music Genome Project." IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, New York, 2015. [PDF]

  • Prockup, M., Scott, J., Kim, Y., "Representing Musical Patterns via the Rhythmic Style Histogram Feature." Proceedings of the ACM International Conference on Multimedia, Orlando, Florida, 2014. [PDF]

  • Prockup, M., Schmidt, E., Scott, J., Kim, Y., "Toward Understanding Expressive Percussion through Content Based Analysis." Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 2013. [PDF]

  • Prockup, M., Grunberg, D., Hrybyk, A., Kim, Y. E., "Orchestral Performance Companion: Using Real-Time Audio to Score Alignment." IEEE MultiMedia, vol. 20, no. 2, pp. 52-60, April-June 2013.

  • Scott, J., Dolhansky, B., Prockup, M., McPherson, A., Kim, Y. E., "New Physical and Digital Interfaces for Music Creation and Expression." Proceedings of the 2012 Music, Mind and Invention Workshop, Ewing, NJ, 2012.