Call for Participation
1st International Workshop on Musical Metacreation (MUME 2012)
Held at the Eighth AAAI Conference on Artificial Intelligence and
Interactive Digital Entertainment (AIIDE’12)
Stanford University, Palo Alto, California, October 9, 2012.
We are delighted to announce the 1st International Workshop on Musical Metacreation (MUME 2012) to be held October 9, 2012, in conjunction with the Eighth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE’12).
Computer technology has become a valuable tool in the modern composition, production and performance of music. Thanks to continued progress in artistic and scientific research, a new possibility is emerging in our musical relationship with technology: Generative Music or Musical Metacreation, the design and use of computer music systems that are “creative on their own”. Metacreation involves using tools and techniques from artificial intelligence, artificial life, and machine learning, themselves often inspired by the cognitive and life sciences. Musical Metacreation suggests exciting new opportunities in creative music making: discovery and exploration of novel musical styles and content, collaboration between human performers and creative software “partners”, and design of systems in gaming and entertainment that dynamically generate or modify music.
The workshop will bring together artists, practitioners and researchers interested in developing software and systems that autonomously (or interactively) recognize, learn, represent, complete, accompany, compose or interpret music. In particular, emphasis will be put on systems with real-time aspects, since these are relevant to both art and entertainment communities.
Our Motivation for Initiating MUME 2012
We have observed a strong and sustained growth of the field of generative music and, more generally, Musical Metacreation. Until this point, the work has been presented across a range of venues in related fields, including the International Computer Music Conference (ICMC), the International Conference on Computational Creativity (ICCC), Sound and Music Computing (SMC), EvoMusArt, Generative Art, the conference of the International Society for Music Information Retrieval (ISMIR) and other AI-, entertainment-computing, and computer-music conferences. We feel it is time to gather experts and specialists in a more focused arena, to define, explore and push forward the boundaries of Musical Metacreation. We hope the MUME 2012 workshop will be the first in an ongoing series.
The workshop will be a one-day event including:
- Presentations of Technical papers (published version must be 8 pages or less)
- Presentations of Position papers relevant to Musical Metacreation and its future: we encourage reports from individual researchers as well as groups (published version must be 6 pages or less).
- Presentations of Demonstrations (published Demonstration papers must be 3 pages or less).
- One or more Panel Sessions (potential topics include international collaborations, evaluation methodologies, industry engagement, generative music in art vs. games)
All contributions (technical, position, and demonstration) will be presented in a single track, with the different types interspersed. We believe this will allow for a higher-quality and more focused appreciation of the sonic material.
Presentations (technical, demo, and position) will be 20 minutes long, including time for questions and answers. In addition, reviewers will be asked to propose one or two critical but general questions related to the submission that could be put to the authors after their presentation to stimulate discussion.
We encourage paper and demo submissions on topics including the following:
- Novel representations of musical information
- Systems for autonomous or interactive music composition
- Systems for automatic generation of expressive musical interpretation
- Systems for learning or modelling music style and structure; exploring or transforming a musical space
- Systems for intelligently remixing or recombining musical material
- Advances or applications of artificial intelligence, machine learning, and statistical techniques for musical purposes
- Advances or applications of evolutionary computing or agent and multiagent-based systems for musical purposes
- Computational models of human musical creativity
- Techniques and systems for supporting human musical creativity; intelligent agents that support the user in being more creative musically
- Online musical systems (i.e. systems with a real-time element)
- Adaptive music in video games
- Methodologies for and studies reporting on evaluation of musical metacreations
- Emerging musical styles and approaches to music production and performance involving the use of AI systems
Academics and artists interested in presenting at the workshop are asked to submit complete papers for review. Demo presentations will be evaluated based on submission of a shorter (maximum 3 page) paper describing the system to be demonstrated. Submissions will be reviewed by an international program committee of experts.
Academics and artists interested in participating in a panel discussion are invited to contact the Workshop Chair (listed below).
We also welcome those who would like to attend the workshop without presenting. Workshop registration will be available through the AIIDE’12 conference system.
Submissions are now closed. Papers were submitted via the EasyChair system.
All papers should be submitted as completed works. Demo systems should be tested and working by the time of submission, rather than speculative. We encourage audio and video material to accompany and illustrate the papers (especially for demos). We ask that authors arrange for their own web hosting of audio and video files, and give URL links to all such files within the text of the submitted paper.
Paper length requirements are flexible and are given as maximum limits only. For demo submissions especially, authors may prefer shorter, high-quality submissions that clearly explain their systems or findings.
Length requirements:
- Max 8 pages for technical papers
- Max 6 pages for position papers
- Max 3 pages for demo papers
Papers should be prepared using Word or LaTeX and submitted as PDF files. Papers should be prepared in the same AAAI format as will eventually be required for the camera-ready copy. The AAAI formatting instructions, as well as Word template and LaTeX macro files, can be found here: www.aaai.org/Publications/Author/author.php
Submissions do not have to be anonymized.
AAAI will compile the accepted workshop papers into an AAAI technical report, an informal publication that allows materials to be quickly available to a wider audience after the workshop. There will be an opportunity to revise accepted papers, based on reviewer comments, before the final document is submitted to AAAI.
Presentation and Multimedia Equipment:
We plan to provide two video projection systems as well as a stereo audio system for use by presenters at the venue. Additional equipment required for presentations and demonstrations should be supplied by the presenters. Contact the Workshop Chair to discuss any special equipment and setup needs/concerns.
Questions & Requests
Please direct any inquiries, suggestions, or special requests to the Workshop Chair (contact details below).
Important Dates:
Notification of acceptance: August 3, 2012
Accepted author camera-ready copy (CRC) due to AAAI Press: August 15, 2012
Workshop Organizers:
Dr. Philippe Pasquier
Assistant Professor, School of Interactive Arts + Technology (SIAT),
Simon Fraser University
Dr. Arne Eigenfeldt
Associate Professor, The School for the Contemporary Arts,
Simon Fraser University
Dr. Oliver Bown
Lecturer in the Design Lab, Faculty of Architecture, Design and Planning,
The University of Sydney
Administration & Publicity Assistant
PhD Student, School of Interactive Arts + Technology (SIAT),
Simon Fraser University
Program Committee:
Tim Blackwell – Goldsmiths, University of London – UK
Oliver Bown – The University of Sydney – Australia
Andrew Brown – Queensland Conservatorium, Griffith University – Australia
Jamie Bullock – Integra Lab, Birmingham Conservatoire – UK
Michael Casey – Bregman Music and Auditory Research Studio, Dartmouth College – USA
Karen Collins – University of Waterloo – Canada
Darrell Conklin – University of the Basque Country – Spain
Arne Eigenfeldt – Simon Fraser University – Canada
Morwaread Mary Farbood – New York University – USA
Rebecca Fiebrink – Princeton University – USA
Judy Franklin – Computer Science Department, Smith College – USA
Jason Freeman – Georgia Institute of Technology – USA
Luke Harrald – Elder Conservatorium of Music, The University of Adelaide – Australia
Robert Keller – Harvey Mudd College – USA
James Maxwell – Simon Fraser University – Canada
Graeme McCaig – School of Interactive Arts and Technology, Simon Fraser University – Canada
Jon McCormack – Centre for Electronic Media Art, Faculty of Information Technology, Monash University – Australia
Peter McIlwain – Monash University – Australia
Gordon Monro – Department of Fine Arts, Monash University – Australia
Kia Ng – ICSRiM, University of Leeds – UK
François Pachet – Sony CSL Paris – France
Philippe Pasquier – School of Interactive Arts and Technology, Simon Fraser University – Canada
Marcus Pearce – Queen Mary, University of London – UK
Robert Rowe – New York University – USA
Richard Stevens – Leeds Metropolitan University – UK
Peter Todd – Indiana University – USA
Dan Ventura – Brigham Young University – USA