Summary and Conclusions

In this chapter we have presented a thorough overview of audio and music processing environments. Although they differ in scope and motivation, we have classified them into a number of categories, which are summarized in the following list:

  1. General purpose signal processing and multimedia frameworks: software frameworks for manipulating signals or multimedia components in a generic way. The most important examples in this category are Ptolemy and ET++.
  2. Audio processing frameworks: software frameworks that offer tools and practices that are particularized to the audio domain.

    1. Analysis Oriented: Audio processing frameworks that focus on the extraction of data and descriptors from an input signal. Marsyas is the most important framework analyzed in this subcategory.
    2. Synthesis Oriented: Audio processing frameworks that focus on generating output audio from input control signals or scores. Here it is important to mention STK.
    3. General Purpose: General purpose audio processing frameworks offer tools for both analysis and synthesis. Of the ones presented in this subcategory, SndObj and CSL are in a similar position: each has its advantages and disadvantages, but neither is very mature.
  3. Music processing frameworks: These are software frameworks that, instead of focusing on signal-level processing, focus on the manipulation of symbolic data related to music. Siren is probably the most prominent example in this category.
  4. Audio and Music visual languages and applications: Some environments base most of their tools on a graphical metaphor that they offer as the interface to the end user. In this section we include important examples such as the Max family or Kyma.
  5. Music languages: In this category we present different languages that can be used to express musical information. We have excluded those based on a graphical metaphor, which are already covered in the previous category.

    1. Music-N languages: Music-N languages are based on the separation of musical information into static information about the instruments and dynamic information about the score, the score being understood as a time-ordered sequence of note events. Music-N languages are also built around the concept of the unit generator. The most important language in this section, because of its wide acceptance, is CSound. A small sketch illustrating this separation is given after this list.
    2. Score languages: These languages are simply ways of expressing the information contained in a musical score, usually in a textual, human-readable format.
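
To make the orchestra/score separation of the Music-N model more concrete, the following minimal C++ sketch is hypothetical and is not drawn from CSound or from any of the environments reviewed in this chapter: a sine oscillator plays the role of a unit-generator based instrument, and a list of time-ordered note events plays the role of the score.

  #include <cmath>
  #include <cstdio>
  #include <vector>

  // Hypothetical unit generator: a sine oscillator standing in for the
  // "instrument" (orchestra) side of the Music-N model.
  class SineOscillator
  {
  public:
      explicit SineOscillator(double sampleRate)
          : mSampleRate(sampleRate), mPhase(0.0) {}

      // Produce one sample and advance the phase.
      double Tick(double frequency, double amplitude)
      {
          const double kTwoPi = 6.283185307179586;
          double sample = amplitude * std::sin(mPhase);
          mPhase += kTwoPi * frequency / mSampleRate;
          return sample;
      }

  private:
      double mSampleRate;
      double mPhase;
  };

  // The "score" side: a time-ordered sequence of note events,
  // analogous to the note statements of a CSound score.
  struct NoteEvent
  {
      double start;      // seconds
      double duration;   // seconds
      double frequency;  // Hz
      double amplitude;  // linear gain
  };

  int main()
  {
      const double sampleRate = 44100.0;
      SineOscillator instrument(sampleRate);

      // A two-note score.
      std::vector<NoteEvent> score = {
          { 0.0, 0.5, 440.0, 0.5 },
          { 0.5, 0.5, 880.0, 0.5 }
      };

      // Render the score: for each event, run the instrument for its
      // duration. In this sketch the samples are simply discarded.
      for (const NoteEvent& note : score)
      {
          long numSamples = static_cast<long>(note.duration * sampleRate);
          for (long i = 0; i < numSamples; ++i)
              instrument.Tick(note.frequency, note.amplitude);
      }

      std::printf("Rendered %lu note events\n",
                  static_cast<unsigned long>(score.size()));
      return 0;
  }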

As a conclusion, we must observe that many different environments exist and, as already noted, most of them have different goals, motivations, and scope. Many of these environments are the result of a single person's effort and therefore offer a very personal view on music or audio processing. Few of them can truly qualify as software frameworks as defined in section 1.3, and very few employ software engineering methodologies or advanced programming techniques.

On the other hand, a few of them (namely Max/Pd and CSound) have been able to build relatively large user communities that constantly add new features to these environments, which can be seen as an added value.

The basis we have established in this analysis of the state of the art for our particular domain will be used both for constructing our proposals and for comparing the final results. In particular, in section 3.3 we will compare our CLAM framework to many of these environments.
