PWGL is a free, cross-platform visual programming language based on Common Lisp, CLOS, and OpenGL, specialized in computer-aided composition and sound synthesis. It currently runs under the Mac OS X and Windows XP operating systems.
PWGL (the successor to PatchWork) is the result of some 20 years of research. This activity has involved numerous expert users, such as composers, researchers, and musicians, and thus PWGL can be seen as an encyclopedia of different ideas and approaches. In order to support these ideas, PWGL integrates several programming paradigms (functional, object-oriented, constraint-based) with a high-level visual representation of data. The key points of PWGL can be summarized as follows: high-quality visual appearance; an ergonomic, uniform, and efficient graphical user interface; the ability to operate visually and directly on high-level musical data; powerful intermixing of textual and visual programming; tight integration of its major tools (i.e., music notation, sound synthesis, scripting, and constraint-based programming); and a cross-platform code base.
PWGL is developed at the Sibelius Academy in Finland by a research team consisting of Mikael Laurson, Mika Kuuskankare, and Vesa Norilo. Recently the team has been augmented by Kilian Sprotte (TU Berlin).
In this tutorial we demonstrate the current status of our system. We give several relatively complex examples that show how PWGL can be used in practice. Special attention is given to compositional and music-analytical applications, along with the music notation package ENP.
Biography of the presenter
Dr. Mika Kuuskankare, born in 1970 in Finland, is a composer and researcher. Kuuskankare received his doctorate in computer-assisted music notation in 2006 (opponent: Roger B. Dannenberg). His dissertation dealt with ENP, the music notation package of PWGL (the thesis can be downloaded from his home page). Kuuskankare is also one of the original authors of PWGL, along with Dr. Mikael Laurson. He is currently working on a research project, "Expressive Notation Package", supported by the Academy of Finland. Kuuskankare is also an avid trumpet player; the most notable band he has worked with is the Boston Promenade Big Band in Finland.
This tutorial provides a complete overview of MIRtoolbox, a Matlab toolbox for the design, extraction, and analysis of musical features from audio. An innovative environment integrated into the toolbox adds an extra layer on top of the Matlab programming environment, making it easier to use for both beginners and expert users, and for both pedagogical and research purposes. A synthetic tour of the complete set of signal-processing operators and musical feature extractors will be accompanied by concise explanations of the underlying concepts, illustrated with concrete examples.
One particularity of this tutorial is its emphasis on important aspects of the toolbox that have not been extensively documented so far: in particular, how to extract large sets of features from large batches of long audio files without running into memory overflow, thanks to implicit memory-management processes. The general architecture of the toolbox is presented, explaining how the new syntactic layer featuring implicit memory management has been integrated into the Matlab environment.
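The core idea behind such implicit memory management — decomposing a long signal into chunks so that only one chunk's worth of intermediate data is ever in memory — can be sketched outside of Matlab. The following is a minimal, illustrative Python sketch of the chunking strategy; the function names are hypothetical and are not the MIRtoolbox API:

```python
import numpy as np

def rms_energy(chunk):
    """Root-mean-square energy of one audio chunk (an illustrative feature)."""
    return float(np.sqrt(np.mean(chunk ** 2)))

def extract_in_chunks(signal, chunk_size, feature):
    """Apply a per-chunk feature extractor, holding only one chunk's
    worth of intermediate data in memory at any time."""
    results = []
    for start in range(0, len(signal), chunk_size):
        chunk = signal[start:start + chunk_size]
        results.append(feature(chunk))
    return results

# A one-hour file at 44.1 kHz would be ~160 million samples; as a
# stand-in, use one second of a 440 Hz sine split into 100 ms chunks.
signal = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
energies = extract_in_chunks(signal, chunk_size=4410, feature=rms_energy)
print(len(energies))  # 10 chunks
```

Real toolboxes must additionally handle features whose frames straddle chunk boundaries, which is precisely the kind of bookkeeping the implicit layer hides from the user.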
The audience will also have the opportunity to learn how to write new features and integrate them into the toolbox. Finally, a new MIRtoolbox Open-Source project invites users to collaborate actively in the further improvement of the environment.
Biography of the presenter
Olivier Lartillot is a researcher at the Finnish Centre of Excellence in Interdisciplinary Music Research at the University of Jyväskylä. He obtained an engineering degree from Supélec Grande École, France, and a PhD in Computer Science from Ircam / University of Paris 6 in 2004. He also holds a BA in Musicology from the University of Paris-Sorbonne. He designed MIRtoolbox with Petri Toiviainen and Tuomas Eerola within the context of a European Commission NEST project ("Tuning the Brain for Music", code 028570). From August, funded by an Academy of Finland Research Fellowship, he will initiate a five-year project called Music Mining Plant, aimed at the design of a comprehensive framework dedicated to feature, structure, and concept mining for Music Information Retrieval. Olivier Lartillot has published more than 50 scientific papers on these topics, serves as a reviewer for several international journals, and is a member of several program committees.
by Bob Sturm
Just as Fourier analysis is to additive synthesis, dictionary-based methods (DBMs) can be seen as the analytical counterpart of granular synthesis, providing a "score" for synthesizing a given sound from a combination of grains. Their applications, however, extend much further than this. In this tutorial we introduce the concepts of DBMs (also known as sparse approximation or atomic decomposition), with particular emphasis on applications to music and audio data. We first review the background and fundamentals of approximation theory, including a brief discussion of Fourier and wavelet theory. We then focus on the theory of and methods for sparse approximation and atomic decomposition, paying particular attention to iterative descent methods such as the family of matching pursuits. We discuss the importance and difficulties of dictionary selection, as well as interesting problems that can arise from the decomposition process. Finally, we present several applications and demonstrations of DBMs for problems in music and audio signals, for instance sound analysis and music information retrieval, source separation, music transcription, visualization, and transformation. We aim to provide participants with enough experience to use available software tools for sparse approximation, focusing on the C++ library Matching Pursuit Toolkit. Various MATLAB scripts will also be demonstrated and made available, along with a detailed set of notes, copies of the slides, and a list of references for further study.
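The greedy principle behind the matching-pursuit family mentioned above can be captured in a few lines: at each iteration, pick the dictionary atom most correlated with the residual, record its coefficient, and subtract its contribution. The following is a deliberately simplified Python sketch over a toy random dictionary, not the Matching Pursuit Toolkit API:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy matching pursuit. `dictionary` holds unit-norm atoms as
    columns; each step removes the best-correlated atom's projection
    from the residual, so the residual norm never increases."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_iter):
        correlations = dictionary.T @ residual      # inner products with all atoms
        k = int(np.argmax(np.abs(correlations)))    # best-matching atom index
        c = correlations[k]                         # its coefficient
        atoms.append(k)
        coeffs.append(float(c))
        residual -= c * dictionary[:, k]            # subtract the contribution
    return atoms, coeffs, residual

# Overcomplete toy dictionary: 32 random unit-norm atoms in 8 dimensions.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 32))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] + 0.5 * D[:, 17]                 # sparse ground truth
atoms, coeffs, r = matching_pursuit(x, D, n_iter=10)
print(np.linalg.norm(r))                           # residual norm after 10 steps
```

The `atoms`/`coeffs` pair is the "score" in the granular-synthesis analogy: resynthesis is simply the coefficient-weighted sum of the selected atoms plus the residual.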
Biography of the presenter
Bob L. T. Sturm received an undergraduate degree in physics from the University of Colorado, Boulder, USA (B.A. 1998), a graduate degree in computer music from Stanford University, USA (M.A. 1999), and further graduate degrees from the University of California, Santa Barbara, USA (M.S. 2004, M.S. 2007, Ph.D. 2009). He completed his Ph.D. in February 2009 in the Department of Electrical and Computer Engineering at UCSB, specializing in signal processing and sparse approximation. He co-authored the paper on DBMs that received the 2008 International Computer Music Conference Best Paper Award. He currently continues his research as a Chateaubriand Fellow post-doctoral researcher at the Université Pierre et Marie Curie, Paris 6.