MuMe – IJCAI 2018 – Tutorial

Musical Metacreation: AI for Generative Music

Slides: IJCAI-2018-Tutorial-v25

This tutorial offers a survey of AI techniques that have been applied to music generation. From stochastic Markovian approaches to recurrent neural networks, we will discuss representations and algorithms, and listen to their outputs across a variety of musical genres.

Musical Metacreation (MUME) involves using tools and techniques from artificial intelligence, artificial life, and machine learning, themselves often inspired by cognitive and life sciences, to endow machines with musical creativity. Concretely, the field brings together artists, practitioners and researchers interested in developing systems that autonomously (or interactively) recognize, learn, represent, compose, complete, accompany, or interpret musical content.

This tutorial aims at introducing the field of musical metacreation and its current developments, promises, and challenges, with a particular focus on IJCAI-relevant aspects of the field. After a brief introduction to the field, its history, and some definitions of the problems addressed, the tutorial will focus on presenting the various AI and ML algorithms and architectures used in the design of generative music systems. The tutorial is illustrated with many examples of successful generative systems. We will also review the evaluation methodologies used to establish the performance of these systems, and discuss their current and future applications in the creative industries.

Tutorial Outline

The tutorial introduces musical metacreation themes, theory, algorithms, and state-of-the-art systems to those who are less familiar with the field or are interested in catching up on new developments. Talks will take the form of literature surveys that offer an objective and broad snapshot of the work in the field.

The tutorial will be divided into two sessions separated by a coffee break (see schedule below). The first session will focus on introducing key topics in musical metacreation, with a survey of the field's history and motivations. We will then go through the various algorithms commonly used in these systems, and detail the empirical studies showing that these systems are human-competitive. Examples of existing and applied systems will illustrate the presentation. Time for interaction is woven into the structure of the tutorial.

Part I : “An Introduction to MuMe” (1.5 hours)

  • Name that MuMe: Introduction to Musical Metacreation
  • MuMe and Variation: Classification, Ontology, and History
  • Walking on the MuMe: Survey of families of approaches, with examples: Stochastic Systems, Grammars, Cellular Automata, and Complex Systems.
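As a concrete taste of the stochastic family covered in this session, a first-order Markov model can be trained on a note sequence and then sampled to generate new melodies. The sketch below is a minimal, hypothetical illustration (the function names and the toy corpus are invented for this example, not taken from any particular MuMe system):

```python
import random

def train_markov(notes):
    """Build a first-order Markov transition table from a note sequence."""
    table = {}
    for cur, nxt in zip(notes, notes[1:]):
        table.setdefault(cur, []).append(nxt)
    return table

def generate(table, start, length, seed=None):
    """Sample a melody by repeatedly choosing a successor of the last note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1])
        if not choices:  # dead end: the last note was never followed in the corpus
            break
        melody.append(rng.choice(choices))
    return melody

# Toy corpus of note names; real systems train on large symbolic corpora (e.g. MIDI).
corpus = ["C4", "D4", "E4", "C4", "E4", "G4", "E4", "D4", "C4"]
model = train_markov(corpus)
print(generate(model, "C4", 8, seed=42))
```

Because repeated successors appear multiple times in each list, sampling with `random.choice` reproduces the corpus transition frequencies; higher-order models simply key the table on tuples of preceding notes.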

[Coffee Break, 30 mins]

Part II: “MuMe Systems and Evaluation” (2 hours)

  • Fruits of the MuMe: Current approaches, including Evolutionary Computation, Neural Networks, Multi-agent Systems
  • A Kind of MuMe: Evaluation of MuMe Systems, Past, Present and Future
  • MuMe Over: Critical discussion of societal issues
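To give a flavor of the evolutionary computation approaches discussed in this session, the following sketch evolves a melody toward a fixed target line with a simple mutation-and-selection loop. The target, pitch range, and fitness function are invented for illustration; real MuMe systems typically use musically informed or interactive fitness measures rather than note-for-note matching:

```python
import random

TARGET = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale as MIDI pitches

def fitness(melody):
    """Score: number of notes matching the target line position-wise."""
    return sum(a == b for a, b in zip(melody, TARGET))

def mutate(melody, rng):
    """Replace one randomly chosen note with a random pitch in range."""
    m = list(melody)
    i = rng.randrange(len(m))
    m[i] = rng.randint(55, 79)
    return m

def evolve(pop_size=30, generations=200, seed=1):
    """Elitist (mu + lambda) loop: keep the fitter half, refill with mutants."""
    rng = random.Random(seed)
    pop = [[rng.randint(55, 79) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Keeping the survivors unchanged (elitism) guarantees the best fitness never decreases across generations, which is why even this crude mutation operator converges on short targets.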

Bibliographic References:

  • An Introduction to Musical Metacreation
    Philippe Pasquier, Oliver Bown, Arne Eigenfeldt, Shlomo Dubnov. ACM Computers in Entertainment, 2016.
    https://dl.acm.org/citation.cfm?id=2930672
  • Algorithmic Composition: Paradigms of Automated Music Generation,
    Gerhard Nierhaus. Springer Verlag, 2009.
  • Deep Learning Techniques for Music Generation,
    Jean-Pierre Briot, Gaëtan Hadjeres, François Pachet. Springer Verlag, 2017.
  • A Functional Taxonomy of Music Generation Systems,
    Dorien Herremans, Ching-Hua Chuan, Elaine Chew. ACM Computing Surveys, Vol. 50, No. 5, 2017.
  • Proceedings of the First International Workshop on Musical Metacreation (MUME 2012), in conjunction with The Eighth Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AAAI Technical Report WS-12-16, 88, 2012.  proceedings
  • Proceedings of the Second International Workshop on Musical Metacreation (MUME 2013). In conjunction with The Ninth Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), AAAI Technical Report WS-12-16, AAAI Press, 91, 2013. proceedings
  • Proceedings of the Third International Workshop on Musical Metacreation (MUME 2014), AAAI Technical Report WS-14-18, AAAI Press, Palo Alto, 2014. proceedings
  • Proceedings of the 4th International Workshop on Musical Metacreation (MUME 2016), 2016. ISBN: 978-0-86491-397-5. proceedings
  • Proceedings of the 5th International Workshop on Musical Metacreation (MUME 2017), 2017. ISBN: 978-1-77287-019-0. proceedings


Speaker Bio:

Philippe Pasquier works on creative processes and generative systems. He is a scientist specialized in artificial intelligence, a multidisciplinary artist, and an educator. His contributions range from theoretical research in multi-agent systems, computational creativity and machine learning to applied artistic research and practice in digital art, computer music, and generative art. Philippe is an associate professor in the School for Interactive Arts and Technology and an adjunct professor in Cognitive Science at Simon Fraser University, in Vancouver, Canada.

Philippe’s artistic work has been shown in prominent venues on all five continents, including at the Centre Pompidou (France), IRCAM (France), GMEA (France), Musée d’art Contemporain de Montréal (Canada), Mutek Festival (Canada), Biennale of Sydney (Australia), Earzoom Festival (Slovenia), ISEA2012 (Turkey), ISEA2014 (Dubai), ISEA2016 (Hong Kong), ZKM (Germany), Vooruit (Belgium), Plus One Gallery (New York, USA), Rio Olympics cultural program (Brazil), ISEA2017 (Colombia), Ars Electronica (Austria), Space One (Korea)…

Philippe is the chair and investigator of the International Workshop on Musical Metacreation (MUME), the MUME concert series, and the International ACM Conference on Movement and Computing (MOCO), and he was director of the Vancouver edition of the International Symposium on Electronic Arts (ISEA2015). He has co-authored over 120 peer-reviewed contributions presented in the most rigorous scientific venues. His online class on Generative Art and Computational Creativity for SFU in partnership with Kadenze is the most popular of its kind.

Philippe’s projects have gained support and recognition from more than 20 scientific or cultural institutions including the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), the Canada Council for the Arts (CCA), the French Ministère de la Culture et de la Communication, the Australian Research Council, and the Australia Council for the Arts.

More about Philippe: http://philippepasquier.com/

More about the Metacreation Lab: http://metacreation.net/

Online classes: 

Generative Art and Computational Creativity: https://www.kadenze.com/courses/introduction-to-generative-arts-and-computational-creativity/info

Advanced Generative Art and Computational Creativity: https://www.kadenze.com/courses/advanced-generative-art-and-computational-creativity

Visuals generated by the REVIVE generative music system during a live performance at SAT, Montreal, 2018.