Call for Participation
3rd International Workshop on Musical Metacreation (MUME 2014)
Held at the Tenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE’14)
North Carolina State University, Raleigh, North Carolina, October 3-4, 2014
Important Dates:
- Submission deadline (extended): July 20, 2014
- Notification date: August 6, 2014
- Accepted author CRC due to AAAI Press: August 14, 2014
- Workshop dates: October 3-4, 2014
We are delighted to announce the 3rd International Workshop on Musical Metacreation (MUME2014), to be held October 3-4, 2014, in conjunction with the Tenth Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE’14). MUME2014 builds on the enthusiastic response and participation we received for the first two workshops, held in 2012 and 2013.
Thanks to continued progress in artistic and scientific research, a new possibility has emerged in our musical relationship with technology: Generative Music or Musical Metacreation, the design and use of computer music systems which are “creative on their own”. Metacreation involves using tools and techniques from artificial intelligence, artificial life, and machine learning, themselves often inspired by cognitive and life sciences. Musical Metacreation suggests exciting new opportunities to enter creative music making: discovery and exploration of novel musical styles and content, collaboration between human performers and creative software “partners”, and design of systems in gaming and entertainment that dynamically generate or modify music.
MUME brings together artists, practitioners and researchers interested in developing systems that autonomously (or interactively) recognize, learn, represent, compose, complete, accompany, or interpret music. As such, we welcome contributions to the theory or practice of generative music systems and their applications in new media, digital art, and entertainment at large. Join us at MUME2014 and take part in the exciting spirit of this growing community!
Our Motivation for Initiating the MUME Series
We have observed strong and sustained growth in the field of generative music and, more generally, Musical Metacreation. Until this point, the work has been presented across a range of venues in related fields, including the International Computer Music Conference (ICMC), the International Conference on Computational Creativity (ICCC), Sound and Music Computing (SMC), AudioMostly, EvoMusArt, Generative Art, the conference of the International Society for Music Information Retrieval (ISMIR), and other AI-, entertainment-computing, and computer-music conferences. We felt it was time to gather experts and specialists in a more focused arena, to define, explore, and push forward the boundaries of Musical Metacreation. Our first two editions were met with enthusiasm and received 31 and 32 submissions respectively, of which half were accepted. With MUME2014 we continue the vision of MUME as an ongoing series.
The workshop will be a two-day event including:
- Presentations of FULL TECHNICAL PAPERS (8 pages maximum)
- Presentations of POSITION PAPERS and WORK-IN-PROGRESS PAPERS (5 pages maximum)
- Presentations of DEMONSTRATIONS (3 pages maximum)
- One or more PANEL SESSIONS (potential topics include international collaborations, evaluation methodologies, industry engagement, generative music in art vs. games)
- Presentations by INDUSTRY PARTNERS on Musical Metacreation-related work and challenges
One of the main innovations of MUME is that we allow more time for interaction, in the spirit of a workshop that brings together experts with much to discuss. To foster interaction between workshop participants, presentations of full papers will be 20 minutes long with 15 minutes for questions and answers (position papers: 15+10; demos: 10+10).
We encourage paper and demo submissions on MUME-related topics, including the following:
- Representation and Algorithms for MUME
- Novel representations of musical information
- Advances or applications of AI, machine learning, and statistical techniques for generative music
- Advances or applications of evolutionary computing or agent and multiagent-based systems for generative music
- Systems and Applications of MUME
- Systems for autonomous or interactive music composition
- Systems for automatic generation of expressive musical interpretation
- Systems for learning or modelling music style and structure
- Systems for intelligently remixing or recombining musical material
- Online musical systems (i.e. systems with a real-time element)
- Adaptive and generative music in video games
- Techniques and systems for supporting human musical creativity
- Applications of musical metacreation for digital entertainment: sound design, soundtracks, interactive art, etc.
- Evaluation of MUME
- Methodologies for qualitative or quantitative evaluation of MUME
- Studies reporting on the evaluation of MUME
Theory and Socio-economic Impact of MUME
- Computational models of human musical creativity
- Emerging musical styles and approaches to music production and performance involving the use of AI systems
Socio-economic impact of MUME
- Authorship and legal implications of MUME
We invite companies and businesses involved in Musical Metacreation and its applications to present their work and challenges to the MUME community. Each selected industry partner will be given a timeslot to present or demo during the workshop. Interested industry representatives can find more information at: www.metacreation.net/mume2014/industry
Please make submissions via the EasyChair system at:
https://www.easychair.org/conferences/?conf=mume2014 (in configuration).
All papers should be submitted as completed works. Demo systems should be tested and working by the time of submission, rather than speculative. We encourage audio and video material to accompany and illustrate the papers (especially for demos). We ask that authors arrange for their own web hosting of audio and video files, and give URL links to all such files within the text of the submitted paper.
Paper length requirements are flexible and given as maximum limits only. For demo submissions especially, authors may prefer shorter, high-quality submissions that clearly explain their systems or findings.
- Length requirements:
- Max 8 pages for technical papers
- Max 5 pages for position papers
- Max 3 pages for demo papers
Each submission will be reviewed by at least three program committee members.
Papers should be prepared using Word or LaTeX and submitted as PDF files. Papers should be prepared in the same AAAI format as will eventually be required for the camera-ready copy. The AAAI formatting instructions, as well as Word template and LaTeX macro files, can be found here: www.aaai.org/Publications/Author/author.php
Submissions do not have to be anonymized.
AAAI will compile the accepted workshop papers into an AAAI technical report, an informal publication that makes materials quickly available to a wider audience after the workshop. There will be an opportunity to revise accepted papers, based on reviewer comments, before the final document submission to AAAI.
Presentation and Multimedia Equipment:
We plan to provide two video projection systems as well as a stereo audio system for use by presenters at the venue. Additional equipment required for presentations and demonstrations should be supplied by the presenters. Contact the Workshop Chair to discuss any special equipment and setup needs/concerns.
It is expected that at least one author of each accepted submission will attend the workshop to present their contribution.
We also welcome those who would like to attend the workshop without presenting. Workshop registration will be available through the AIIDE’14 conference system.
Questions & Requests
Please direct any inquiries, suggestions, or special requests to the Workshop Chair:
Dr. Oliver Bown
Lecturer, Design Lab,
The University of Sydney
Administration & Publicity Assistant
Nicolas Gonzalez Thomas
MSc Student, School of Interactive Arts + Technology (SIAT),
Simon Fraser University
Program Committee:
Kingsley Ash – Leeds Metropolitan University – UK
Al Biles – Rochester Institute of Technology – USA
Oliver Bown – The University of Sydney – Australia
Karen Collins – University of Waterloo – Canada
Darrell Conklin – University of the Basque Country – Spain
Arne Eigenfeldt – Simon Fraser University – Canada
John Ffitch – University of Bath – UK
Rebecca Fiebrink – Princeton University – USA
Jason Freeman – Georgia Institute of Technology – USA
Bill Hsu – Department of Computer Science, San Francisco State University – USA
Kristoffer Jensen – Aalborg University – Denmark
Robert Keller – Harvey Mudd College – USA
Bill Manaris – College of Charleston – USA
Aengus Martin – Faculty of Engineering, The University of Sydney – Australia
James Maxwell – Simon Fraser University – Canada
James McDermott – University College Dublin – Ireland
Alex McLean – ICSRiM, University of Leeds – UK
Philippe Pasquier – School of Interactive Arts and Technology, Simon Fraser University – Canada
Robert Rowe – New York University – USA
Avneesh Sarwate – Princeton University – USA
Andie Sigler – McGill University – Canada
Richard Stevens – Leeds Metropolitan University – UK
Thomas Stoll – Dartmouth College – USA
Dan Ventura – Brigham Young University – USA