Breaking bars of musical communication
by Friedhelm Hartmann
Abstract
This article is based on the assumption that metrical fundamentals need to be audited for global redundancies that may subtly hinder important elements of further musical development. As a plea for real-time communication using contemporary electronic media, an implementation example demonstrates how frozen metrics could be transformed into a livelier platform enabling new musical solutions for various audiences. Before illustrating possible deployment opportunities through existing and proposed models based on this platform, the underlying metrical progression paradigm is generalized and presented within a unified stylistic approach that intends to support the establishment of new commonalities in musical structure across different communication functions. Finally, the author encourages any kind of feedback and cooperation wherever the suggested elements resonate.
About the hierarchical-periodic “snap-trap”
The number of sounds communicated in a musical context today that are simultaneously freed from their physical source, be it mechanical or electronic, is tremendous.
While sounds still carry their production-system characteristics as part of their identity, considering distribution via the World Wide Web and the nature of all kinds of electronic mixing and processing systems, sounds today are basically free to interact with each other in any way at their phenotypical level. Thus they seem completely free to enable the creation of any new kind or type of musical context or style.
The new possibilities are so rich and overwhelming that it is not surprising to see, on the other hand, a very high degree of reproduction of musical concepts originating from “older” times. Moreover, the new power of electronic media distribution, multiplied by the World Wide Web, seems even to reinforce and solidify those older concepts.
For example, the concept of organizing music in bars, which historically originates from the need to synchronize human players in a real-time performance at a defined location and time, is still by far the dominant method of rhythmical organization in a medium where such organization is not necessarily needed.
This raises the question of whether the hierarchical-periodic rhythmical organization (represented by the overwhelming use of 4/4 bars) also represents an optimal concept for non-instrumental music, for example in its ability to support dance (as shown by the reproduction of 3/4- and 4/4-based dance patterns in commercial synthesizers).
Or does our mindset, trained on instrumental solutions and now empowered by the high reproduction rate of electronic media, hinder us from discovering new solutions that would be more inherent and adequate to the new medium (represented by “free sound meetings”) while still satisfying our basic musical needs (for example, supporting dance elements)?
I tend to agree with the second statement:
· No matter which sounds we use, relating any kind of rhythmical creation to a hierarchical-periodic system represented by metrics like 4/4 bars seems to reproduce a disproportionately high number of well-known patterns that refer to known music rather than establish new or significantly extended experiences.
· As a result, an increasingly limiting factor seems to capture other musical elements as well, such as melodic development, resulting in similar recycling effects as described above.
· In reaction, music that wants to differentiate itself from the overuse of known patterns very often creates an opposite solution that solidifies the same situation even further while remaining in a “negative” or “opposed” dependency. For example, rhythmical dance elements are very often rejected together with their limited metric context, instead of being decoupled from this context and used in a new way.
· These contextual difficulties may indicate the strong force of systemic sedimentation, which goes far beyond our naturally given abilities of musical reception or reflection within our momentary social context. I suggest considering the hierarchical-periodic system and its implications as a specific “musical operating system”, as opposed to other musical operating systems that organize their elements through different principles: for example, Indian music, which organizes metrics by accumulating rhythmical cells with continuously varying numbers of pulses.
Focusing on those areas that might be hidden by these systemic communication issues could help enable a discussion of how areas that suffer from the recycling effects of the musical operating system could be improved, and of which musical values and opportunities would lie in such new areas.
To help make this discussion more efficient, I would like to illustrate the explored principles with early implementation instances built on top of a public sound community over the past years, and to gladly share them with anyone who finds relevance in this topic, be they musical artists, students, teachers, or researchers.
Posted communication versus animated communication
With its storage capabilities, electronic media created a new quality of communication: sounds or even entire pieces can be repeated in exactly the same way over and over again. Here are a few examples:
· Drum samples in a rhythmical loop of a pop song, played in a live performance
· Playlists holding various pieces of contemporary music on the Internet
Let’s call those communication units “fixed” and, in terms of the communication act, “posted”, just as you would pin a poster on the wall for everyone to visit and watch.
“Variable” communication units would appear in a slightly different shape each time without necessarily losing their identity. Again, a few examples:
· Drum sounds in a rhythmical loop of a pop song appearing each time with slightly different volume, filter settings and entry time in order to simulate a real drum being played
· A modular composition where the interpreter makes decisions about the order of the different parts
The communication act could be called “animated”, as it requires a real live experience to recognize the differences anew each time. This could also be interpreted as a sign of uniqueness, or of the sensual complexity of being non-repeatable in an exact manner, much like a living organism.
As soon as you record such an animated event and make it available as an exact copy for replay, you have, again, posted it.
The whole differentiation becomes even more interesting when looking into which layers of a musical communication are built of fixed elements and which of variable elements at the same time. For example:
· The score of an instrumental composition as a fixed layer (the score is posted), while its interpretation is a variable layer (the notes are played a bit differently every time, i.e. animated)
· A free improvisation with a sampler (animated layer) where the samples are repeated as exact copies (posted elements)
· A running drum machine with a static loop (posted) onto which unique melodic and effect elements are dropped (animated)
· An electronic piece of art played from a laptop using a standard sound sequencer (such as Pro Tools, which would be a posted approach), while an interpreter controls multiple output channels with a mixer in order to adapt the piece to a particular physical room (animated application of volume and filters in a live situation)
Posted sequencing versus animated sequencing
Standard sequencers such as the one mentioned in the example above are based on the traditional score structure. Each sound has its exact entry point, beginning, end, amplitude, envelope, etc. A composer edits these functions in relation to other sounds with similar parameters, aiming for a kind of final status of the sound score, which is considered an optimal (and thereby authorized) scenario.
Animated sequencers are suited to producing unique musical situations even though they also include posted elements. They exist in the form of various real-time configurations, as we know from the early days of analogue electro-acoustic music up to digital patches realized in software solutions such as Max, for example.
The animated sequencing model introduced in this article is built on so-called “Animated Sound” objects. These objects can be scheduled simultaneously in real time, each time in a new shape and order.
Figure 1: Animated Sound (animated mode) – credit: ERH
To achieve a reasonable performance with this approach, posted and animated elements are combined at the following levels:
1. At the lowest level, a posted sound sample is called in real time
2. At the level of an Animated Sound object, player, random and navigation functions are applied to the sample to allow a unique appearance each time it is called again
3. At the sequencer level, Animated Sound calls can be recorded and replayed with different degrees of determination:
1. Recording the exact sequence, including all random outputs, to replay a posted version
2. Recording the exact time schedule of the Animated Sounds while generating their random parameters anew with every replay
3. Recording a sequence of Animated Sounds while applying rules of new appearance, for example, combining four successive sounds into one simultaneous event
4. Interactively playing and recording additional elements while the original sequence is played (classic overdubbing)
5. Interacting with the sequence itself during play, for example, playing or repeating parts of the sequence
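The first two replay modes above can be sketched as a small recording loop. This is a hypothetical illustration, not the article’s actual implementation (which runs as a Shockwave application); the class and parameter names are assumptions chosen for clarity.

```python
import random

class Sequencer:
    """Sketch of recording Animated Sound calls with two degrees of determination."""

    def __init__(self):
        self.events = []  # list of (time, sound_id, frozen_params or None)

    def record(self, time, sound_id, params=None, freeze=True):
        # Mode 1 (freeze=True): store the random outputs, so replay is a
        # posted version. Mode 2 (freeze=False): store only the time
        # schedule; random parameters are regenerated on every replay.
        self.events.append((time, sound_id, params if freeze else None))

    def replay(self):
        for time, sound_id, params in sorted(self.events, key=lambda e: e[0]):
            if params is None:
                # Mode 2: fresh random parameters for each replay
                params = {"volume": random.uniform(0.5, 1.0),
                          "pan": random.uniform(-1.0, 1.0)}
            yield time, sound_id, params
```

Replaying a mode-1 event always yields the identical (posted) parameters, while a mode-2 event sounds slightly different on every pass.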
To allow a unique animated listening experience for each sound, a minimum set of sound variations is applied to each Animated Sound (see next section).
The current implementation of the Animated Sound model can be summarized as follows:
1. An Animated Sound contains at least one posted sound (simply a sound sample)
2. The Animated Sound includes player functions that allow one to
1. start the sound interactively at different index locations
2. stop it at any time
3. mute the sound while it is playing
4. set or discard a loop while it is playing
5. set loop start and end points while a loop is playing
6. use a volume slider to jump to a new volume or change it smoothly
7. use a panorama slider to jump to a new stereo position or change it smoothly
3. The Animated Sound includes pre-defined random functions that
1. change the index start point each time the sound is called
2. change the volume each time the sound is called
3. change the stereo position (panorama) each time the sound is called
4. jump to a different sound (via sound links) each time the sound is called
4. The Animated Sound includes pre-defined sound links that allow one to
1. access a number of sorted similar or different sounds
2. navigate from sound to sound based on a pre-defined network of sounds
5. The Animated Sound includes a pre-defined scheduler that allows it to
1. jump automatically to another sound, or to itself, to be played with new parameter settings
2. stop automatically, including a predefined decay, which may be randomized as well
3. jump automatically to predefined indices within a sound sample
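The core of this model, a posted sample whose every call is randomized and may follow a sound link, can be sketched in a few lines. All names, ranges and the link-jump probability below are illustrative assumptions; the real implementation runs inside the Shockwave player with the full function set listed above.

```python
import random

class AnimatedSound:
    """Sketch of an Animated Sound: a posted sample plus randomized
    start index, volume and panorama, and links to related sounds."""

    def __init__(self, sample, links=None):
        self.sample = sample        # the posted sound (a sample name)
        self.links = links or []    # pre-defined network of related sounds
        self.muted = False
        self.looping = False

    def call(self):
        """Each call yields a unique appearance of the same posted sample."""
        # Random functions: new entry index, volume and panorama per call
        params = {
            "sample": self.sample,
            "start_index": random.uniform(0.0, 1.0),
            "volume": random.uniform(0.3, 1.0),
            "pan": random.uniform(-1.0, 1.0),
        }
        # Sound links: occasionally jump to a linked sound instead
        if self.links and random.random() < 0.25:
            return random.choice(self.links).call()
        return params
```

Calling the same object repeatedly produces the “animated” behavior described above: the posted sample is identical on every call, but its appearance never is.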
The “Figure 1” online example above can be used to experience the described functions (available at www.cm-gallery.com/freesound/FleX/BreakingBars_01.htm). The example requires Internet Explorer version 6 or later together with the Adobe Shockwave player. The “Figure 1” online page also includes links to a complete interactive description of the Animated Sounds and further background information.
Where possible, it would be ideal to link the sample to its source generation algorithms as well, in order to edit its parameters, offer pitch or filter changes, etc., as known from Max, for example. This is, however, not included in the current implementation, as it would also raise further challenges for a web environment.
Posted Sounds versus Animated Sounds
The value of sound animation methods becomes evident when comparing them to their posted equivalents. With posted sounds, our listening experience focuses on the internal variations and relations, while the periodical context (posted metrics) works in the background:
Figure 2: Animated Sound (posted mode) – credit: fons
Now the same sound is used in an animated fashion. Our listening experience still focuses on the internal relationships of the sample, but now also on the relationships built when the sound is repeated in a different manner each time:
Figure 3: Animated Sound (animated mode) – credit: fons
In other cases, these two levels may be even closer to each other, especially on first listening (as learning effects also change the experience dynamically):
Figure 4: Comparison of Posted and Animated Sound – credit: walkerbelm
It is important to mention that the degree of difference between the posted and the animated effect of the same sound elements can vary greatly. This bridges posted and animated experiences on one side, keeping an anchor in known listening experiences, while smoothly opening the space for many additional musical values on the other.
Using a sample as a playable scale
On an abstract level, Animated Sounds differentiated by their index entry points play like an instrumental scale. Imagine a glissando sound as the base sample:
Figure 5: Animated Sound (posted mode) – complex glissando
This sound, obviously simpler in its posted continuous shape, can serve as a source for more sophisticated melodic expression, even more so when spectral and volume transformations are involved in the tone-shifting process of the original sound:
Figure 6: Animated Sound (animated mode) – melodic glissando elements
Animations can be generated both by pre-defined scheduled events and by real-time interaction with the sound graphics, which allows free control of the sample index. These “index clicks” can additionally be supported by automated volume and panorama changes at the same time.
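The idea of a glissando sample as a playable scale can be stated very simply: if the sample sweeps continuously through pitch, each normalized entry index corresponds to a “scale degree”. The functions below are an illustrative sketch under that assumption; the mapping and names are not part of the article’s implementation.

```python
def index_for_degree(degree, num_degrees=12):
    """Map a scale degree (0 .. num_degrees-1) to a normalized sample
    entry index (0.0 .. 1.0) within a continuous glissando sample."""
    return degree / float(num_degrees - 1)

def play_melody(degrees, num_degrees=12):
    """Turn a list of scale degrees into a list of sample entry indices,
    i.e. a melody played on the sample as if on an instrumental scale."""
    return [index_for_degree(d, num_degrees) for d in degrees]
```

A non-linear mapping (logarithmic, or derived from an analysis of the actual pitch curve of the sample) would give the “more complex non-linear scales” mentioned below.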
From this experience it is only a small step to imagine the potential of more complex non-linear functions or scales for which samples can be used to expand the melodic or rhythmical scope. Links between various Animated Sounds can also be included in the real-time scenario to further widen the musical spectrum:
Figure 7: Animated Sound (animated mode) – multiple sample calls
The complexity that can be applied here is extremely scalable. Just imagine using samples that are themselves the output of Animated Sounds and become controllable in the same manner.
Animated Sounds used in popular music environments
In popular music environments, the scalability of the “animation degree” may encourage experiments that keep most of the familiar sound experience while breaking with the fundamental operating system, or, so to speak, breaking the (4/4) bars. For example, the following static loop
Figure 8: Animated Sound (posted mode) – credit: AikiGhost
seems well suited to appearing in an animated way without suffering from too much discontinuity:
Figure 9: Animated Sound (animated mode) – credit: AikiGhost
The following complete set of Animated Sounds, which includes this animated loop as its rhythmical base, could be used as an element of a rather popular-oriented Sound Track:
Figure 10: Sound Scape 1 (4 Animated Sounds) – credits: AikiGhost, Corsica_S, Andrew Duke
In another example, a slow drum loop is transformed as part of a rather quiet sound scenario:
Figure 11: Sound Scape 2 (4 Animated Sounds) – credits: pentm450, hanstimm, CosmicD
To listen to the full piece in which this “Sound Scape” is included, you may visit the corresponding music video on YouTube:
Figure 12: “Breakin’ bars” #16 at www.youtube.com/watch?v=6eUf6m0zfzg
This example is part of a study of YouTube assemblies of various dance clips (including amateurs and professionals of all levels) called “Breakin’ Bars”, where non-periodical patterns are applied especially to break-dance figures, which themselves very often exceed periodical movements:
Figure 13: “Breakin’ bars” videos available at www.youtube.com/profile?user=FreesoundMusic&view=playlists
A non-periodic construction of musical phrases obviously has consequences for musical form as well. As a response to the binary formal structure that results from periodical rhythmical patterns and often constitutes multiple units of 4, 8, 16, etc., an animated structure could be accumulated through a continuously increasing and decreasing size of its formal parts. It might be a nice exercise to experience this suggested principle, which we call “Wave Form Design”, in a piece like this one taken from the same collection:
Figure 14: “Breakin’ bars” #10 at www.youtube.com/watch?v=cN7sDdW5HdE
For public places, the reduction to one static Animated Sound assembly, a kind of “Sound Image” representing music for environmental occasions, may be sufficient:
Figure 15: Sound Scape 3 (4 Animated Sounds) – credits: fons, ERH, hanstimm
Figure 16: “Breakin’ bars” #10 based on Sound Scape 3 at www.youtube.com/watch?v=HfdypUQ3T00
It might be worth considering the lack of attention or intentional listening when hearing background music as an advantage, not a constraint, for establishing new musical experiences just “by the way”, or as a “Trojan Horse”. For more examples and information about the related Sound Image project “Changes of Music”, you may visit the following site:
Figure 17: “Changes of Music” at freesound.ning.com/group/injections/forum/topics/changes-of-music-65-sound
Animated Sounds used in differentiated music environments
The reuse of more differentiated material through animation can lead to fairly sophisticated structures, allowing the listener to dive deeper into the material while still providing a quite consistent and stable scope of sound relationships.
The following sound exposes differentiated material in a posted fashion:
Figure 18: Animated Sound (posted mode) – complete sample
When animated, a continuous sound generator seems to produce the sound stream. In fact, the same sample as above is used over and over again in the following example:
Figure 19: Animated Sound (animated mode) – using a sample as a generator
Through its scalability, the Animated Sound model allows a high degree of flexibility, permitting highly differentiated rhythmical structures that enable counterpoint-like inclusion of melodic phrases. In this way, rhythmical elements can be transformed into new continuous listening experiences, realized through various floating degrees of rhythmical exposure:
Figure 20: Sound Scape 4 (4 Animated Sounds) – credits: jesges, ERH
Please note that the model shown can be directed toward change at a higher level (moving to further Sound Scapes using single play controls) as well as being differentiated at a lower level (interacting with the full set of Animated Sound functionality while the Sound Scape is playing). This basically allows the music to be “interpreted” at different levels in real time, or configured very conveniently for sound installations and similar occasions.
Obviously, the continuous model of Wave Form Design mentioned in the previous section seems even more suitable for sophisticated or differentiated developments of musical form. The following “Sound Drama”, built entirely from noise samples, may serve as an example of a continuous rhythmical stream that does not fall into any kind of periodical “snap-trap” while still providing the feeling of an ongoing, forward-moving “drive”:
Figure 21: “Movement” – first part of “Noise Symphony” at www.cm-gallery.com/gex01r04.htm#nsy01
More examples of this kind are available at the same site:
Figure 22: “Electric Symphonies” series at www.cm-gallery.com/gex01r04.htm
Looking into the musical commonalities of diverse musical genres or targeted experiences may encourage the discovery of important values and benefits of consistent stylistic transformations across musical ecosystems. These transformations can in return strengthen the distinct areas of targeted listener groups, while enriching the experiences toward wider acceptance and building the ground for a natural evolution freed from “unavoidable” recurring redundancies caused by the elements and dependencies of an overused musical operating system.
Ideas toward a value proposition of a holistic and scalable style
Needless to say, applying similar elements and principles to very popular and very sophisticated pieces of musical communication at the same time is not a new concept at all. Mozart will probably always serve as the best example of this attitude, but he is of course no exception in this respect.
On the other hand, in a musical world where sounds increasingly play key roles as foundational elements instead of tones with their discrete rhythmical and harmonic relationships, the question of how stylistic commonalities can be achieved, and whether this should be an objective at all, may look overambitious.
Today, the overwhelming number and variety of concepts seems rather to produce the opposite tendency, manifested in an enormous drive toward further differentiation through stylistic separation. But does this tendency automatically lead to more value or to more individuality?
We believe that in our current situation, successful differentiation can rather be achieved through stylistic unification serving as a basis for individualization, where individualization becomes more visible and powerful on top of a common ground.
This brings us back to the function of a musical operating system. As long as I feel the need to reject foundational elements because they seem overused, I will need to separate myself from them. As soon as I succeed in reusing foundational elements in my own way without falling into any kind of “snap-trap”, I might be able to create a musical communication that has much broader relevance for many different listeners even though I am still communicating through my very personal profile. Exactly this may eventually create the ground for a greater and lasting power of individuality.
To give these assumptions a try, some aspects of verification need to be visited. Within the framework of this article, I suggest looking into the following stylistic aspects:
1. Balancing “band” and “bend” sounds
2. Getting a clear understanding of possible musical deployment targets
3. Anthropological basics as a common ground for the semantic classification of musical elements and functions
4. The stylistic relevance of anthropologically oriented balance and derivation capabilities
Band and Bend sounds and structures
In electronic media, sounds or sound structures tend to be close to one of the following poles:
· Pure sound recording of the physical world, represented by the aesthetics of musique concrète: bound to reality, combinations of discrete elements, including instrumental sound tracks played by a band = “band” sounds
· Pure or “artificial” sound generation by electronic techniques, represented by the aesthetics of the Cologne school: pure sound parameter modulations, networked models of sound synthesis, etc. = “bend” sounds
Both poles may carry the following semantic implications:
· Recorded sounds are, as the process suggests, conservative and backward-looking. They are rather static in nature, unmodulated and discrete. They are directly linked to our non-electronic hearing experience, originating from nature or other sounds in our environment: voices, mechanical instruments, etc.
· Generated sounds are, as the process suggests, progressive and forward-looking, based on visionary elements. They are rather dynamic in nature, modulated or “bent” in a continuous physical space by various electronic parameters, not bound by the physical limitations of the real world. They are linked to “typical” electronic hearing experiences for which very often no other comparison pattern can be found, and which can be categorized according to the various known electronic sound generation and processing methods.
In the same way,
· posted sound experiences tend to cement band sound experiences, whereas
· animated sound experiences tend to support bend sound experiences.
These poles are of course abstractions, as recorded sounds may be modulated as well, and generated sounds may tap into known non-electronic recognition patterns (for example, bird phrases). In addition, the two worlds tend to converge, as follows:
· Sound recordings pass through electronic equipment and become more and more transformed into heavily processed, abstract “bend” sound material.
· Sound generation methods are used to simulate more and more real physical “band” sound events, for example when applying physical modeling methods.
Using sound animation methods,
· band sound experiences can be transformed into bend sound experiences, while
· new bend sound experiences can be solidified through a higher proportion of repeated (yet still varied or animated) elements.
This rough polarization can be extrapolated to musical structure and composition:
· On the discrete, conservative pole we find elements of the “old” musical operating system: besides band sounds, there are discrete tonal scales, relatively fixed relationships between tones, and a self-contained division of time values representing the binary 2-4-8-16 hierarchy, etc.
· On the continuous, progressive pole we find elements of new listening experiences: transitions from one sound into another, tonal modulations like glissandi, metric modulations, rather chaotic distributions, etc.
An accentuated dialogue and convergence between these two spheres may be beneficial:
1. It may provide a rich, fruitful and growing area of new discoveries.
2. It can combine conservative and progressive hearing experiences, or even bring them into a true synthesis.
Figure 23: Synthesis of a speech gesture performed on a violin sound
3. The free play between band and bend elements can support a smooth transition toward more progressive elements without losing general relevance and acceptance by the listener, and without risking narrowing and isolating communication tendencies. In this way it certainly also has relevance for educational purposes.
4. It may help to consolidate and formalize the elements and rules of a new musical operating system, gaining maximum leverage for instrumental or technical capabilities, deployment, etc. Posted and animated versions of the same sound can be offered, for example.
Stylistic determination in the field of band and bend sounds and structural elements should specifically focus on those elements that are suited to converging or shifting between both poles, to support integration capabilities wherever possible. It should then also be quite feasible to support musical areas that follow very different objectives.
The following deployment model is built upon two axes:
· Left to right: bend elements to band elements.
· Bottom up: popular to expert listening.
Figure 24: Generic musical deployment model
At a fairly high level, the four resulting poles can be characterized as follows:
1. Labmusic
This summarizes music with a high proportion of experimental (bend) sounds, including lots of noises and other extreme sound material, strongly processed natural sounds, etc. Labmusic often includes formal experiments as well.
2. Pubmusic
This summarizes music with a strong orientation toward physical and balanced sounds as known from common hearing experiences. The musical structure itself is more balanced, and the musical forms are rather simple and not too concentrated.
3. Clubmusic
This summarizes music with a high degree of stylistic determination due to specific utilization purposes, for example, music for dance. The musical structure is very concentrated, the sounds are advanced but balanced, and the musical forms are rather simple.
4. Hubmusic
This summarizes music with the highest degree of musical differentiation through an optimal balance of distinguished musical values. This music functions as a hub for all other musical areas, serving as an orientation for wise information reduction in advanced music design (the Labmusic convergence path) as well as for enrichment opportunities in more conservative music design (the Pubmusic and Clubmusic convergence paths).
At a lower level of genre-typical applications, the following derived model can be helpful:
Figure 25: Generic deployment model for musical genres
The different deployment areas can serve as a map for existing musical communication channels, putting them into a consolidated view of interrelationships that can be realized through common stylistic elements, as known from successful “older” musical operating systems. At the same time, the following terms can be suggested to provide a clear differentiation of the characteristics typical for each deployment area:
· Sound Image
Sound Images are "Occasional Music" that can be seen as a kind of "non-listening" music experience. The music is usually presented in the background, where attention is paid only occasionally. With Sound Images, a certain scenario of sounds remains static for a longer period of time. A listener can dive into it but also interrupt or leave it at any time, or keep it in the background like atmospheric wallpaper. The listener decides, however, how and for which occasion to use it, unlike "Environmental Music", which fills a place with music or sounds without any possibility of interaction by the listener (see “Sound Space”).
· Sound Track
Sound Tracks are basically songs (without reference to “movie soundtracks”) that stand for the most popular and common understanding of music, usually backed by a strong rhythmical fundament and often used as “Dance Music”. We suggest calling them "tracks" rather than "songs", since this also allows including pieces of pure sound assemblies such as techno styles. They are short in duration, concise and straight in their message, fast to consume, easy to enjoy, entertaining, and intended for full listening attention, even though they are often used in the background as well.
· Sound Trip
With Sound Trips, a listener intentionally enters deeply into an uninterrupted musical listening experience. Ongoing, long-lasting patterns, usually presented with high intensity, are suited to embracing the listener completely. Many kinds of "Trance Music" can be characterized by this underlying romantic character, be it music by Wagner, Tangerine Dream, jazz jam sessions, trance techno styles, etc. Often this kind of music is presented in live sessions in direct interaction with the audience.
· Sound Drama
“Dramatic Music” requires even more attention than Trance Music. It stands for a presentation of music with an extremely wide range of expression and capability. A listener needs to follow a Sound Drama as he would follow a theater piece or a movie. A kind of inner story or dramaturgy is fundamental to this kind of music and represents, beside enigmatic music (see below), the highest level of musical (not necessarily structural) complexity possible in musical communication.
· Sound Enigma
Sound Enigmas summarize musical creations based on "counterpoint"-like ways of musical construction, providing the opportunity for the highest unity of intellectual and emotional density that can probably be achieved (there is no better example than Joh. Seb. Bach). Any work toward a definition of what "sound counterpoint" means today would certainly be a great foundational contribution to further musical explorations of “Enigmatic Music”, with a strong impact on other areas as well.
· Sound Park
Where sounds are basically "parked", the music is built up in its potential rather than elaborated. Instead of being elaborated as a kind of story, or providing trance intensity, etc., "Prospective Music" functions very much like Sound Images in the sense that parts of it can be replaced or moved around without decreasing its overall quality. Unlike Sound Images, a Sound Park suggests a kind of sequential catalogue of sounds (more or less systematic). It can be a piece to listen to, or just a collection of sounds from which a listener may pick whatever he likes. An extreme idea of Sound Parks is known from the American composer John Cage, who lets the listener enter an environmental Sound Space and experience its arbitrary elements as music by himself (listening = composing).
·
Sound Game
The possible ways of involving a listener in following or changing the rules of a musical game are widespread. Mozart’s famous composition dice game belongs here, as do mechanical music machines and other systems of musical interaction. Obviously, electronic media are especially suitable for a wide range of truly “Interactive Music” environments. Sound Games may also be exposed in public environments, such as galleries, where interactive attendance would be required from the visitors.
·
Sound Space
Sound Spaces stand for musical installations which are usually placed in public spaces, literally as part of the architecture or interior environment, unchangeable by the visitors. “Environmental Music” could also be part of an exhibition where the installation is perceived without interaction.
For each of these musical communication forms, multiple subcategories can be found or created, with overlaps and combinations applying in addition.
In a nutshell, we may look at the bottom area of the model as open or experimental areas, including Sound Parks, Sound Games, Sound Spaces, and Sound Images, while the upper areas represent music in a more determined or productized fashion, including Sound Tracks, Sound Trips, Sound Dramas, and Sound Enigmas. The latter require a stronger involvement of the musical authorship, as well as a different dedication from the listener in the communication process. Listening examples mapped by the model are available at the author’s musical home page:
Figure 26: Examples for deployment model at www.cm-gallery.com
FleX
In order to apply common elements to such a variety of communication forms and objectives as shown above, and to do so with the capability of an ongoing and smooth overall transformation from band to bend areas, musical commonalities should be found that are deeply rooted in anthropological conditions.
Based on a phenotypical analysis of sounds and their musical application across the most various musical styles, the following high-level categorization may be helpful as a starting point:
1. Figures
Figures are musical structures dominated by permanent repetitive and rhythmical actions that keep the musical flow going. These structures are highly 'communicative': they allow other structures to dock on while appearing as a constitutive background.
Figure 27: Rhythmical
elements serving as “Figures” – credit: Corsica_S
Xmission and Integration
Repetitive figures are most suitable for causing a physical resonance in our body, creating the ultimate desire to move along with the sounds. At the same time, our spirit is kept awake, continuously analyzing – even without knowing – the ongoing relations and comparisons presented by the play of figures.
As a result, Figures create a ground for the transmission of musical information that can hardly be rejected and that transports other musical phenomena, while providing the potential to integrate with them.
2. Layers
Layers are musical structures of a rather static nature, or ones representing longer-lasting processes. They allow us to “dive” deeply into sounds and sound complexes, experiencing mental space and creating a kind of emotional environment. Layers are not always present.
Figure 28: Long lasting
sound serving as a “Layer” element – credit: nicStage
Enlarging Experiences
Only if sound events stay persistent for a certain period of time are we able to grasp their characteristics in greater depth. Layers provide the opportunity to dive deeply into musical structures and to develop a true emotional relationship with them.
Like Figures, Layers provide a kind of ongoing carrying functionality, while Events and Xpressions provoke rather particular and more focused reactions.
3. Events
Events represent musical actions of a singular and individual character that occur rather arbitrarily. Their appearance can vary widely, spanning from fill-ins representing a kind of “comment” to important major changes or occurrences in the musical structure, such as special breaks or the start of new sections.
Figure 29: Special voice
effect serving as an “Event” – credit: Jimbrowski-One
Leaving the Structure
The primary function of Events is to evoke the highest awareness in relation to what the current musical situation provides. Events deliver the “greater picture”.
They are the drivers for moving toward the exploration of further experiences. A musical structure that holds very few surprises will tend to cause very conservative and recurring behavior, while an overload of events cannot retain the inherent and optimum power of the surprising information, due to disorientation.
4. Xpressions
Xpressions are musical structures that lead the musical communication. Usually split into phrases, these structures are of an individual and expressive nature and are most responsible for the “message” of a certain piece of music. Xpressions are almost always present.
Figure 30: Ornamental
flute elements serving as “Xpressions” – credit: ERH
Focusing on Messages
The highest density and degree of joint emotional and intellectual feedback is caused by musical expressions. Without them, music is "playing" (Figures), "sounding" (Layers), or "attracting" (Events), but it is not "speaking" or "telling" anything.
We follow expressions as we follow somebody's speech, and we respond with a well-developed sense of interest that resides between a pole of being bored (too much repetition) and a pole of being overwhelmed (too much different information) – a sense that is of course very personal and depends on our own dynamic and individual hearing experiences.
Figure 31: Example of a
typical FleX sound assembly (Sound Scape) –
credits: Corsica_S,
nicStage,
Jimbrowski-One,
ERH
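For readers who think in code, the four FleX areas can be sketched as a small tagging structure for sound assemblies. This is purely an illustration; all class and field names are hypothetical and not part of any existing FleX implementation:

```python
from dataclasses import dataclass, field
from enum import Enum

class FlexCategory(Enum):
    """The four FleX areas described above."""
    FIGURE = "figure"        # repetitive, rhythmical, carrying
    LAYER = "layer"          # static or long-lasting, environmental
    EVENT = "event"          # singular, rather arbitrary occurrences
    XPRESSION = "xpression"  # phrase-like, message-carrying

@dataclass
class Sound:
    name: str
    category: FlexCategory

@dataclass
class SoundScape:
    """A sound assembly, in the spirit of Figure 31."""
    sounds: list[Sound] = field(default_factory=list)

    def coverage(self) -> set[FlexCategory]:
        """Which FleX areas does this assembly cover?"""
        return {s.category for s in self.sounds}

# Example assembly covering all four areas.
scape = SoundScape([
    Sound("rhythm loop", FlexCategory.FIGURE),
    Sound("drone pad", FlexCategory.LAYER),
    Sound("voice effect", FlexCategory.EVENT),
    Sound("flute ornament", FlexCategory.XPRESSION),
])
print(sorted(c.value for c in scape.coverage()))
```

A coverage check of this kind mirrors the observation below that certain communication forms would require at least Figures and Xpressions to be present.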
For educational purposes, these categories can be linked to the classical four temperaments, which helps to define music and its stylistic requirements as a coming together of different mentalities or mental beings, in order to create a complex and wide resonance through the musical communication.
Figure 32: Musical
character mapping with “Four Temperaments” (artist unknown)
Further, we may find that full coverage of those areas is needed in order to satisfy the various communication areas. For example, we may define Figures and Xpressions as mandatory elements at least for the determined communication forms Sound Track, Sound Trip, Sound Drama, and Sound Enigma, regardless of their characteristics as band- or bend-dominated elements.
In terms of the musical structure, FleX elements can also be combined in various numbers and arrangements, as can be seen from the following example:
Figure 33: “Web Song” #2 at
www.cm-gallery.com/FleX/SoundTrips/WebSongs/WS_0002.htm
(for credits, click [i] button at the respective sound in the web page; to
start, click white button)
“FleX” can be used as a logo summarizing this categorization model of the four main areas Figures, Layers, Events, and Xpressions, representing a “conservative” anthropological pool on the one hand, as opposed to “power” (flexing) and “flexibility”, expressing the adaptation efforts required to conquer new and progressive areas, on the other.
Once we agree to accept anthropological roots as a basic differentiator of sound semantics, we will start to be careful about mapping complex sound synthesis and composition rules onto linear parametric spaces, as often suggested by the layout of electronic sound-generating systems. In order to create and use sounds effectively in musical communication, it could be helpful to look at which fields of semantic resonance need to be considered, as a starting point for further musical differentiation.
Moreover, accepting the musical communication model as a mandatory part of the structural definition of sound and music demands a semantic orientation for any sound generation and composition, in order to gain the appropriate and intended resonances from the audience.
The abstract model of a Semantic Sound Synthesis is built on two axes – spanning from the total of auditive sensations down to specific systemic sedimentations (the material level), while mapping these across human experiences, starting at anthropological levels and reaching up to the highest degree of individual resonance, happening in just a moment of an individual life.
Figure 34: Model of
Semantic Sound Synthesis
Together with the anthropological view introduced by the “FleX” model above, four semantic poles of sound and music recognition can be used to determine competency levels in the musical communication:
1. Color
On a first level, which should be valid even beyond human experience, we need to discuss the parameters of direct sensorial traction. The distinction of tone versus noise, harmonic versus disharmonic spectra, soft versus hard shaped sound envelopes, and much more belongs to this area.
2. Character
The behavior of sounds in time around the human heartbeat – the central anthropological determinant for the feeling of speed – adds another level to the meaning of sound as discussed above. Anthropological determinants are also given by the distortion of auditive resonance through other sensations, such as visual impressions, and so on.
3. Grid
The level of social differentiation specifies conventions and sedimentations that are the result of a collective selection process, very much like what we formerly called a “musical operating system”, on which further stylistic differentiations take place.
4. Mood
In the same way as the Grid determines specific relationships out of the sensorial total, the Mood determines the specific realization of the communication process by an individual, based on a broader mind-set determined by the Character. This actual communication is obviously biased by an enormous number of determinants dictated by the specific social, biological, and environmental conditions of the individual listener.
The model of Semantic Sound Synthesis is built to promote a holistic view of sound and composition rules, which can be achieved simply by acknowledging all the important areas of semantic or communicative relevance and by attempting to improve our understanding of their interdependencies.
Using
the above axes, we may be tempted to map supporting musical science areas as
follows:
Figure 35: Supporting
musical science areas
The framework of this article does not allow elaborating on the interrelationships of these areas in more depth. At this point we may just look at an example, where categories of a music theory would always be linked to corresponding categories of a sound theory. This way, a comparison of tonal music where certain sound classes are clearly invariant with tonal music where the “color” has an impact on tonal decisions could yield additional insights, etc.
Before exploring examples derived from implementation suggestions of the unified deployment approach, I would like to emphasize why a discussion rooted in the rhythmical structures of music may be particularly suitable to trigger a holistic discussion on how current conflicts of musical communication – indicated by redundancies, ineffectiveness, and other symptoms – could be proactively addressed, and how this discussion can further help to promote musical understanding as an open and unique communication value across different societies:
1. Rhythm and metric organization based on rhythmical units is the primary level for structuring musical communication. This can easily be seen with music that does not carry significant differentiation of sound events around the human heartbeat, which basically determines the center of rhythmical reception (determining slow, fast, etc.).
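The role of the heartbeat as the center of rhythmical reception can be illustrated with a toy classifier. The 72 bpm reference value and the threshold ratios below are assumptions chosen for illustration only, not part of the article's model:

```python
def tempo_feel(bpm: float, heart_rate: float = 72.0) -> str:
    """Classify a musical tempo relative to the resting heartbeat,
    treated here as the center of rhythmical reception."""
    ratio = bpm / heart_rate
    if ratio < 0.75:
        return "slow"       # clearly below the pulse
    if ratio <= 1.25:
        return "moderate"   # around the pulse
    return "fast"           # clearly above the pulse

print(tempo_feel(50))   # slow
print(tempo_feel(72))   # moderate
print(tempo_feel(140))  # fast
```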
2. Rhythm itself cannot be discussed sufficiently without its interrelationship with other elements, such as melodic elements, layers, etc. These other elements may be combined on a different track or integrated into a more complete figure. For example, a rhythmical pattern will have a totally different impact when linked to a different sequence of sounds or tones, creating different rhythmical sub-patterns depending on the similarity of the respective sound or tone elements.
3. The musical structure, including its rhythmical elements, is not complete and cannot be discussed without taking receptive targets (musical deployment) into consideration. This is based on the assumption that music itself is determined by its musical communication, which would be hard to define otherwise. This is also where we start to be able to define a “meaning” for the elements we are using.
Taking
the above into account, a stylistic solution could be targeted as follows:
·
If there is a way to create musical value by combining or synthesizing “old” or “common” or “understood” elements – represented by the “band” class of elements – with new “bend” elements, listeners could gain the opportunity to “learn” the new elements and get used to them on the basis of a sufficient pool of “known” elements, which would keep the communication channel alive.
·
In order to create the right mix or adaptation of the elements, the stylistic capabilities need to be flexible enough to modulate or change known characteristics seamlessly into unknown or new values.
·
The true semantic power or enablement of this approach lies, of course, in the ability to have:
1. credible
or authorized conservative elements (known to the recipient)
2. credible
or authorized new elements (new to the recipient)
3. a
credible or authorized combination or synthesis of these two or more elements
as a musical value in itself
This combination of known with new information is a model for true communication as a credible act of (creating or providing new) information: person A is represented by sound A (he loves this sound; he would communicate or listen to it over and over again), while person B identifies himself with sound B (no less engaged than person A). Both sounds originate from very different stylistic resources.
If a style now succeeds in binding sound A and sound B together into a credible piece of musical information, person A will meet person B, represented by his sound, and will probably accept him, since his own sound is still part of the message – and vice versa. Person C, who created the link, has learned from both and added his own view in the way the AB combination was set. This way, persons A and B are virtually meeting person C as well.
Figure 36: People-Sound
meeting communication model (“musical fusion reactor”)
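The meeting model of Figure 36 can be restated as a minimal sketch in which combining two sounds creates pairwise virtual meetings between their owners and the combining author. All names and the data layout are hypothetical illustrations:

```python
# Toy model of the "musical fusion reactor": person A is represented by
# sound A, person B by sound B; person C combines them, and all three
# end up virtually meeting each other through the combined piece.

owners = {"sound_A": "person_A", "sound_B": "person_B"}
meetings: set[frozenset[str]] = set()

def combine(sound_x: str, sound_y: str, combiner: str) -> str:
    """Combining two sounds links their owners and the combiner."""
    people = {owners[sound_x], owners[sound_y], combiner}
    for p in people:
        for q in people:
            if p != q:
                meetings.add(frozenset({p, q}))
    return sound_x + "+" + sound_y  # the new piece carries both identities

piece = combine("sound_A", "sound_B", "person_C")
print(piece)          # sound_A+sound_B
print(len(meetings))  # 3 pairwise meetings: A-B, A-C, B-C
```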
Sharing musical experiences inside a musical communication – as composition or moderation of those experiences – requires conversation rules that basically represent a musical style. Depending on the kind of “operating system” it is built on, this style is capable of obtaining a certain reach or scope of communication. The wider the reach, the more strongly the specific segment can be informed, and the more personal the musical message may become.
The slogan “when sounds meet – just as people” expresses this approach of a “musical fusion reactor” and is meant to encourage cross-linked semantics, creating a true win-win situation for authors and recipients in a dynamic environment.
When
sounds meet – just as people…
The International Computer Music Conference 2005 in Barcelona, with the conference theme “Free Sound”, was the trigger for “The Freesound Project”, which has become the most successful sound-sharing platform on the World Wide Web to date.
“The Freesound Project aims to create a huge
collaborative database of audio snippets, samples, recordings, bleeps, ...
released under the Creative Commons Sampling Plus License. The Freesound
Project provides new and interesting ways of accessing these samples, allowing
users to
·
browse the sounds in new ways using keywords, a
"sounds-like" type of browsing and more
·
up and download sounds to and from the database,
under the same creative commons license
·
interact with fellow sound-artists”
(www.freesound.org/whatIsFreesound.php)
“The Freesound Project” site is a good example of how sounds of the most different origins, structures, and aesthetic positions can freely relate to each other, supported by community features that position sounds almost as “personalities”, holding their own home page, profile, etc.
Figure 37: “Home page” of sound “balls.wav”
at www.freesound.org/samplesViewSingle.php?id=31497
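As a side note, keyword searches like the "sounds-like" browsing quoted above can nowadays be issued programmatically against the Freesound APIv2 text-search endpoint, which postdates the site described here. The sketch below only assembles a request URL; the token value is a placeholder, not a real key:

```python
from urllib.parse import urlencode

# Base of the Freesound APIv2 text-search endpoint (current API shape).
BASE = "https://freesound.org/apiv2/search/text/"

def search_url(query: str, token: str = "YOUR_API_KEY") -> str:
    """Return the request URL for a keyword search on Freesound."""
    return BASE + "?" + urlencode({"query": query, "token": token})

print(search_url("rhythm loop"))
# https://freesound.org/apiv2/search/text/?query=rhythm+loop&token=YOUR_API_KEY
```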
“The Freesound Project” is also a great example of the many aspects under which sounds can be set free today:
Free – How? | Free – What? | Free – Where?
Freely available | Free of cost; free access; free distribution; free editing | Creative Commons license; public availability; use for any purpose/audience; cooperative results
Free from physical constraints | Free generation and shaping; free distribution (broadcast) | Any sound generation source; sounding everywhere
Free from stylistic constraints | Free mixing and merging; free habits | Combine popular/lab sounds; not bound to metric scales
Free to communicate | Free behavior of sounds; free interaction of sounds | All anthropologic patterns apply; open semantic composition
Although “The Freesound Project” is not intended to restrict sound interchange to musical use, it can be shown that a significant sector of this resource is well suited to feeding musical intentions, especially when looking into the fusion of diverse musical elements.
The “Freesound Music” community site is built to support new musical experiences utilizing Freesounds supplied by “The Freesound Project”. The following musical user groups are currently targeted:
1. Freesound Explorations
This group is created for advanced musical students and music professionals who want to use Freesounds for their own musical work. This group may tend to focus on Sound Dramas and Sound Enigmas.
2. Freesound Injections
This group is open for people who use Freesounds to upgrade their tracks or who like to enjoy pure Freesound jam sessions. This group may tend to focus on Sound Tracks and Sound Trips.
3. Freesound Alchemy
This group is built to support interactive workshops that help to make any kind of Freesound Music.
4. Freesound Square
This group is arranged for people who love to listen to any kind of “Freesound Music” and who like to share their experiences with it.
Figure 38: “Freesound Music” user groups at
freesound.ning.com/groups
Besides these groups, the site offers many standard community features that allow intensifying the overall “Freesound Music” communication via:
·
Blog entries
·
Concert reports
·
Articles and discussions
·
Uploading of favored music, musical imagery, or
music videos of any kind
·
Messaging between the members
·
Playlists of music and music videos
·
Sharing of sound creation and mixing freeware, and
more.
Figure 39: Home page of “Freesound Music” at freesound.ning.com
One
of the most intense “Freesound Music” community experiences so far has been a
series of “Freesound Music” workshops held for pupils aged 14-15 at a public
school in Tel Aviv.
A set of 6 initial sessions was dedicated to introducing the concept of commonalities beyond stylistic constraints and to opening minds to new creative opportunities for combining free sounds obtained from “The Freesound Project”:
1. “Collecting”:
Of what does music consist?
Collect
the main ingredients to be boiled (looking into style).
2. “Shaping”: How is music built?
Prepare
the material and put it into the glass tube (looking into form).
3. “Heating”:
How does musical material function?
Put
fire under the glass tube (looking into the sound material).
4. “Vaporizing”:
What is fixed and fluid material? (see band and bend sounds)
Make
the material fluid (looking into atomic grids, releasing sounds).
5. “Mixing”: When are sounds a good fit?
Shake
the glass tube with the fluids (looking into new Sound Scapes).
Figure 40: “Music Alchemy” course Sound
Scape replies at freesound.ning.com/group/alchemy
6. “Pouring
Out”: Creating a sound trip and sending it away…
Turn
the tube and let the result stream out (looking into new musical shapes).
A second set of sessions gave the pupils the opportunity to collect sounds from “The Freesound Project” site in order to create their own sound compositions. The results were exciting, especially in the sense that every student put a completely different musical attitude into the work, while still following the general course guidelines.
Another interesting experience was to see the degree of freedom applied to the material of the pieces, in spite of the typical stylistic predetermination by pop and classical music.
All pieces were presented in a public concert, together with an introduction that informed the audience about the objectives and path of the course:
Figure 41: “Music Alchemy” concert
introduction at freesound.ning.com/group/alchemy/forum/topics/links-1
From a methodological standpoint, the use of the Freesound Music site and its Alchemy group was extremely well received by the pupils:
·
Each student got a personal home page where they could present themselves with their own favored musical taste.
Figure 42: “Music Alchemy” home page
example at freesound.ning.com/profile/Lilit
·
The presentation of students' work during sessions was simplified through the use of the site as an online repository for homework (see Figure 40).
·
Students could share their work and communicate with each other outside the regular weekly session meetings.
·
The relationship between teacher and students was significantly enhanced, since progress could be monitored throughout the whole week using the website as a communication tool, including audio verification, technical assistance, etc.
·
The intensified relationship supported by the website was certainly a factor that enabled positive and quite complex results in a very short time (the course was limited to only 13 sessions).
·
The students finished the course with a lasting reference and documentation of their achievements, together with the possibility of applying the learned topics in the same environment in the future and sharing them with further participants.
Figure 43: “Music Alchemy” student compositions at freesound.ning.com
The Sound Surf Project
At the level of sound creation, another course was held for a younger group of pupils (aged 12-13), in which many unique sound results were created and eventually uploaded to “The Freesound Project”.
The course topics were backed by dedicated worksheets that helped to examine the following items:
1. Elementary sound parameters (high/low, loud/soft) and the typology of Figures, Layers, Events, and Xpressions, using known musical examples
2. Determining basic sound origins (instrumental, voice, environment, synthetic) and spectral types (tone, tone mix, noise, hybrid), and combining those with the above sound typology while searching “The Freesound Project” online repository
Figure 44: “The Sound Surf Project”
worksheet excerpt for defining basic sound origins
at www.cm-gallery.com/Students/Workshops/IroniAlef/SSP-WS.htm
3. Preparing and performing recordings with physical sounds, with freely chosen combinations of sounds from “The Freesound Project”, and with explorations of sound generation models provided by MAX patches
4. Applying
multiple sound processing methods to the recorded sounds, publishing the
results at “The Freesound Project”, and sharing the sound experiences with each
other
Figure 45: Student sounds at www.freesound.org/usersViewSingle.php?id=263230
Through the platform of “The Freesound Project”, the pupils also learned a new social behavior of sharing musical values with a worldwide community, as their sounds have already been used by a wide number of users, including the students of the later Music Alchemy course, who discovered them completely on their own via “The Freesound Project” sound search features.
Figure 46: “The Sound Surf Project” page
with direct access to the pupils' sounds at
www.cm-gallery.com/Students/Workshops/IroniAlef/IroniAlef.htm
The “Freesound Music” community site has further served as a meeting point for educational workshops at universities, including the presentation of the Semantic Sound Synthesis model introduced above:
Figure 47: Video excerpts of “Semantic Sound Synthesis” workshops for
advanced students at freesound.ning.com/group/explorations
Freesounders joining Freesound Music
When defining musical communication as a part of the musical structure, it is obvious that the degree of progressiveness of musical elements cannot be derived solely from parametric values but must also be related to the current set of perceptional experiences of a listener. While a “less” and a “more” exist on the abstract level of parametric modulation, exactly the same musical element will have a lesser or higher degree of progressiveness depending on which musical experiences it meets in the musical communication cycle.
For that reason, posted versions of animated sounds are available in addition to the animated mode. The following example presents a well-known musical design where the rhythmical structure still remains in posted or static loops, while Xpressions and other elements are dropped into the static structure in an animated fashion.
Figure 48: “Images I” at freesound.ning.com/profile/Markus
– credits: (see album info)
Only the full scalability between the poles of band and bend elements and structuring can provide the true freedom that is required to drive a successful adaptation and learning process. Please note also that this collection of 6 Sound Images was created by a Freesounder with little computer experience, using the system of Animated Sounds for the first time and within just a couple of hours.
Other Freesounders develop different attitudes toward sharing their Freesounder experiences with the community, as the following statement suggests:
owl, mikesh, jace and Freed. All of these guys get
credit. All I did was added them into one. I used some very basic tools. I
recorded the samples, then I trimmed and edited the file. Here's something
interesting; When I was playing them I immediately thought that the 11 samples
I heard sounded like a "Hidden Track" used by some bands (Marilyn
Manson?). Anyways, when I played them all at the same time I said; "YES!”
So here you go. I present my rendition of a “Hidden Track". I am creating
these sounds for personal enjoyment. However, they MAY or COULD be used as a
"Hidden Track" for those that would like to add it to their Music or
CD's. Enjoy!
Figure 49: “Hidden Tracks” at freesound.ning.com/video?page=3
(page number may vary)
Freesounders have also joined the Freesound Music community upon finding out that their sounds had been used by other members, in order to learn more about the sound meetings with the other Freesounders – for which the attribution rule of the Creative Commons license is especially supportive.
Figure 50: Project announcement to
Freesounders at www.freesound.org/forum/viewtopic.php?p=20847
Community interrelationships are a great opportunity for the development of multimedia experiences and for raising awareness of Freesound Music related projects. The following set of pieces is a virtual meeting between the Freesound and the Electric Sheep (http://electricsheep.org/) communities, as introduced in the first chapter.
Figure 51: “Music of Changes” at www.youtube.com/profile?user=FreesoundMusic&view=playlists
Another example of the merging of creative communities is the “Breakin’ Bars” cycle, also mentioned in the first chapter. It connects the Freesound community with the YouTube community in the form of multiple video responses:
Figure 52: “Breakin’ Bars” at www.youtube.com/profile?user=FreesoundMusic&view=playlists
Posted publications, as presented in playlists, are fairly suitable for the productized approach of Sound Images or Sound Tracks. Other deployment areas, such as Sound Trips or Sound Dramas, may require more direct engagement in a real-time environment.
Due to the object-oriented design of Animated Sounds, which can be freely assembled within a standard browser environment, Freesound Music members are able to create and offer their own Sound Pages, including play controls, etc.:
Figure 53: Creating Sound Pages at freesound.ning.com/forum/topics/1486807:Topic:1563
Figure 54: Template based Sound Page
created at freesound.ning.com/forum/topic/show?id=1486807:Topic:1501
A more user-friendly approach, supported by a catalogue of specific Shockwave objects, has been implemented at another public location linked to the Freesound Music site:
Figure 55: Excerpt of Sound Page publishing
preferences at www.cm-gallery.com/FleX/Publish/Prefs.htm
Figure 56: Sound Page building User Guide at
www.cm-gallery.com/FleX/Docu/FleX/index_files/sheet007.htm#P_Publishing
Pre-defined Sound Pages are available as well, supporting the various deployment areas of the Freesound Music community. “The Sound Surf Beach” offers a kind of Sound Game for the “Freesound Square” and “Freesound Alchemy” groups, where recorded sound surf paths can be shared through the Internet simply by interchanging and modifying a “Surf Code” in text format, using email, a messenger, or the like.
Figure 57: Creating a “Sound Trip” at www.cm-gallery.com/FleX/SoundTrips/SR-103Frame.htm
Figure 58: Sending a “Sound Trip” as Surf Code
by email from www.cm-gallery.com
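The article leaves the “Surf Code” text format unspecified. Purely as an illustration, a recorded surf path could be serialized into pasteable text and restored along the following lines; the encoding shown here is entirely hypothetical:

```python
# Hypothetical sketch of a text "Surf Code": a recorded surf path is a
# sequence of (sound name, start time in seconds), serialized so it can
# be pasted into an email or instant message and replayed elsewhere.

Path = list[tuple[str, float]]

def encode(path: Path) -> str:
    """Serialize a surf path to a compact, text-only Surf Code."""
    return ";".join(f"{sound}@{t:g}" for sound, t in path)

def decode(code: str) -> Path:
    """Restore a surf path from its Surf Code."""
    path = []
    for step in code.split(";"):
        sound, t = step.split("@")
        path.append((sound, float(t)))
    return path

path = [("waves", 0.0), ("gull", 2.5), ("waves", 4.0)]
code = encode(path)
print(code)                  # waves@0;gull@2.5;waves@4
assert decode(code) == path  # round-trips losslessly
```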
The
open concept of the Sound Pages allows Freesound Music members to publish their
own Sound Pages at “The Sound Surf Beach”:
Figure 59: Freesound Music member Sound
Page at www.cm-gallery.com/FleX/SoundTrips/SR-136Frame.htm
A
similar project uses more advanced sounds that could be suitable for the “Freesound
Injections” group:
Figure 60: “The Sound S@nd Bank” at www.cm-gallery.com/FleX/SoundTrips/SoundBankFrame.htm
Users of the “Freesound Injections” group can also find a systematic catalogue of Freesounds for “injecting” specific sounds into their own tracks. Since the sounds are presented in a posted format, these pages can also be used for more traditional Sound Tracks or Sound Trips, while the freely available advanced animated handling can be integrated to any degree desired.
Figure 61: “Freesound Injections” Sound
Trip creation site at www.cm-gallery.com/freesound/inject.htm
Figure 62: “Freesound Injections” portal at
www.cm-gallery.com/FleX/freesound/INJ/ContentFrame.htm
Figure 63: Posted sound environment at www.cm-gallery.com/FleX/freesound/INJ/!StartSession.htm
A similar sound library has been created for the musically advanced users targeted by the “Freesound Explorations” group. Here, sounds are presented in the animated mode by default, and a larger selection of advanced bend sound structures is available.
Figure 64: “Freesound Explorations” Sound
Trip creation site at www.cm-gallery.com/freesound/explore.htm
Further
selections and assemblies can support the ability to effectively create “sound
meetings”. The “Workbench” holds a complete repository of all sounds that are
currently available as Animated Sounds:
Figure 65: “Workbench” with complete
repository at www.cm-gallery.com/FleX/Collections/!Start.htm
The special importance of rhythmical structures in building the ground for further musical decisions is reflected in the “Sound Roads” pages, which hold a catalogue of Figures sorted by tempo:
Figure 66: Catalogue of Animated Sounds
sorted by speed at www.cm-gallery.com/Projects/Envi_01c.htm
“Sound Scapes” holds a catalogue of suggested ‘sound
meetings’ of sounds taken from the four main categories Figures, Layers, Events
and Xpressions, from which users can start to create their own pieces:
Figure 67: “Sound Scapes” sorted by speed
at www.cm-gallery.com/Projects/Envi_01b.htm
The
“Matrix” is an example of an extended play console consisting of 88
simultaneous Animated Sounds, especially suitable for real-time performances using
a touch screen. It is recommended to use larger Sound Pages such as the
“Matrix” in offline mode (DVD):
Figure 68: “Matrix” of 88 Animated Sounds
at www.cm-gallery.com/Projects/Perf_01b.htm
A
more advanced Sound Page environment, called the “Freesound Navigator”, allows
creating, combining and playing sound assemblies in real time across the whole
repository by using Sound Scape palettes; its large number of Sound Scapes
can even be recorded and stored in real time as well:
Figure 69: Freesound Navigator with full
repository real-time control at
www.cm-gallery.com/FleX/freesound/EXP/Navigator00FrameCompact.htm
Figure 70: Demonstration of “Freesound
Navigator” within “Breakin’ Bars” video at www.youtube.com/watch?v=etyJa7WdGIs
As
can be seen in the video demonstration, the Freesound Navigator is also able
to display each sound’s author in real time during the Animated Sound
performance.
All
pages demonstrated in this section share the same functionality,
including:
·
Assembly of the currently displayed Animated Sounds
into a new Sound Page
·
Assembly of links to different Sound Pages to be
included in the new Sound Page
·
Session management for counting sound use (see next
section)
·
Technical extensibility through HTML-based editing
of Sound Page parameters
Figure 71: Technical documentation for Sound Page customizations at
www.cm-gallery.com/FleX/Docu/FleX/index_files/sheet007.htm#PU_HTML
·
Technical extensibility through Shockwave-based editing
of Animated Sound parameters and further resources
Figure 72: Technical documentation for Animated Sounds customizations
at
www.cm-gallery.com/FleX/Docu/FleX/index_files/sheet007.htm#PU_Shockwave
Royalty Inheritance Program
As
the use of Freesounds is bound to the Creative Commons Sampling Plus license,
all Sound Pages are equipped with the ability to register all Animated
Sounds used and to count their use, at any time, by actual sound duration and by
number of calls of the particular sound. Since Animated Sounds are always included in a
Sound Page and controlled by it, this capability allows fairly complete and
detailed tracking of the distributed use of the sounds, including:
1. Sound
Author Attribution
With
a mouse click, a complete report of all sound authors involved can be issued and
included in any piece of communication to disclose the required
attribution.
Figure 73: Automated Freesound
author attribution report launched through Sound Pages
2. Session
information
More
detailed information can also be issued in the reports, including links
to the original sounds at “The Freesound Project”, the duration of their use, the number
of calls, etc. Extended reports are also available that summarize the use over a
larger period of time.
Figure 74: Session reports, see
www.cm-gallery.com/FleX/Docu/SoundCount/SoundCountUserGuide.htm
3. Project
related assignments
For
projects that are produced with Animated Sounds but distributed through classic
music channels such as CDs or other posted environments, a project-related
“RIP-Key” can be applied that allows those distributions to be separated in the
calculation.
Figure 75: Project counting, see www.cm-gallery.com/FleX/Docu/SoundCount/SoundCountUserGuide.htm
Detailed
information about the report facility can be obtained from the “Royalty
Inheritance Program – User Guide” available through the above links. This User
Guide also explains how Freesound creators could benefit from commercial use
of their sounds through Freesound Music Sound Pages.
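The counting capability described above can be sketched in a few lines. The class name, method names and report format below are illustrative assumptions for this article, not the actual Sound Page implementation:

```python
from collections import defaultdict

class SessionCounter:
    """Sketch of per-session sound-use counting: per sound, the number
    of calls and the accumulated playback duration are recorded, and an
    author attribution report can be issued at any time."""

    def __init__(self):
        self.calls = defaultdict(int)      # sound id -> number of calls
        self.seconds = defaultdict(float)  # sound id -> accumulated seconds
        self.authors = {}                  # sound id -> Freesound author

    def register(self, sound_id, author):
        self.authors[sound_id] = author

    def play(self, sound_id, duration_s):
        self.calls[sound_id] += 1
        self.seconds[sound_id] += duration_s

    def attribution_report(self):
        # One line per sound used: author, sound id, calls, total duration.
        return [
            f"{self.authors[s]}: {s} ({self.calls[s]} calls, {self.seconds[s]:.1f} s)"
            for s in sorted(self.calls)
        ]

counter = SessionCounter()
counter.register("drone_01", "authorA")  # hypothetical sound ids and authors
counter.register("click_07", "authorB")
counter.play("drone_01", 12.5)
counter.play("drone_01", 3.0)
counter.play("click_07", 0.4)
print("\n".join(counter.attribution_report()))
```

In the real system this bookkeeping is performed by the Sound Page itself, which is why the tracking can remain complete even when sounds are reused across assemblies.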
Usually,
when it comes to commercial use of Freesounds, no commercial limitation applies
as long as Freesounds are used as presented by “The Freesound Project”,
in line with its version of the Creative Commons Sampling Plus license.
Figure 76: Sampling Plus 1.0 license for
Freesounds at creativecommons.org/licenses/sampling+/1.0/
This
changes when using Freesounds as Animated Sounds, since the additional value
provided by the Animated Sounds is covered by a different, non-commercial
Creative Commons license. Together with a waiver for commercial use, the basic
idea is still to allow completely free non-commercial use as granted by the
original Sampling Plus license, but to ask for a fair share from any party as
soon as that party uses the content for money, and to forward this share to the
parties involved in the value chain, starting from the originator of
the sound.
Figure 77: Simple value chain of a creative
network with basic Creative Commons license terms,
see White Paper at www.cm-gallery.com/FleX/Docu/SoundCount/SoundCountWhitePaper.htm
To
support this approach, a royalty sharing program (“Royalty Inheritance Program
– RIP”) is suggested, enabled by the ability of the system to track the
exact use of the sounds in relation to their original authors.
Figure 78: Royalty Inheritance model for
creative networks,
see RIP Waiver at www.cm-gallery.com/FleX/Docu/Legal/Waiver
for Commercial Use.htm
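The “fair share” idea of the value chain can be illustrated with a small sketch. Splitting a revenue share proportionally to tracked playback time is an assumption chosen here for illustration only; the actual RIP terms are defined in the Waiver for Commercial Use and the White Paper linked above:

```python
def distribute_royalties(revenue_share, usage_seconds):
    """Hypothetical illustration of the Royalty Inheritance idea:
    split a commercial revenue share among sound authors in proportion
    to the tracked playback time of their sounds in a session."""
    total = sum(usage_seconds.values())
    if total == 0:
        return {author: 0.0 for author in usage_seconds}
    return {
        author: revenue_share * seconds / total
        for author, seconds in usage_seconds.items()
    }

# Example: 10.0 units to forward, usage taken from a session report.
shares = distribute_royalties(10.0, {"authorA": 15.5, "authorB": 4.5})
print(shares)  # authorA receives 7.75, authorB receives 2.25
```

Because the tracking accumulates over all Sound Pages worldwide, even very small per-session amounts can add up for an individual author, which is the mechanism the RIP model relies on.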
The
RIP model could also be suggested for similar value chains in today’s World
Wide Web content development environments. It seems particularly
suitable for supporting and encouraging progressive or niche initiatives (for
example, authors of experimental sounds), since it allows an
ongoing worldwide accumulation of even small amounts that can still add up to a
significant figure for the particular initiator. More information can be
obtained from the “Waiver for Commercial Use” and the “Royalty Inheritance
Program – White Paper” (see links provided above).
The
intention of breaking bars of musical communication, as described in this
article, is a holistic platform approach that needs to cope with the challenge
of linking a personal vision with the interpersonal engagement and cooperation
required to create a significant impact.
Therefore,
depending on the degree of alignment and resonance that can be envisioned, the
author will be glad to
1. respond
to any request to share further information and insights, and engage deeply in
any cooperation opportunity with those readers who would like to
support the concept as a whole or even use it for their own occupation,
2. supply
assistance to those readers who do not agree with the complete concept
but see value in particular aspects or suggestions of the content,
3. help
to explain in more detail those parts of the content that could not be
communicated well enough,
4. and especially
respond to those readers who have strong concerns, criticism, or any
other comments deriving from different views on the subject, or on the
way the content has been presented.
Or
let’s meet sounds, just as people…
Rhythm as a key factor
Rather
than providing a conclusion, this article tries to offer a proposal. Meaningful
conclusions about musical communication can obviously only be found when
meeting the listeners themselves (including ourselves) in the daily, biased battle of
musical identification. As an integral part of musical education and
progression, the voice of the listener should always have a predominant place:
Figure 79: Listener feedback to Freesounds
and Freesound Music at www.cm-gallery.com/gex01gb.htm
Figure 80: Public poll “What do you think
about ‘flexible grooves’?” at www.freesound.org/forum/viewtopic.php?t=4465
Some of the questions I have
1. Which
technical platform would be ideal to proceed in an open-source environment,
allow users to add Animated Sounds, provide sound generation tools for Animated
Sounds, etc.?
2. Which
institution would be ideal or interested in addressing cross-linked musical
research and development of such a platform?
3. Which
commercial organization would want to engage in new ways of creative musical
networks and could benefit from “Freesound Music” features?
4. Which
artistic cooperative projects could be suitable for, or pursued using, “Freesound
Music” elements?
Bibliography
Volek, Jaroslav
Essay:
Musikstruktur als Zeichen und Musik als Zeichensystem
Aus: Henze,
Hans-Werner (Hrsg.): Die Zeichen. Neue Aspekte der musikalischen Ästhetik II.
Frankfurt a. M. (Fischer) 1981. S. 222-255.
ISBN 3-596-26900-8
Apresjan, Ju.D.
Ideen und Methoden
der modernen strukturellen Linguistik
Berlin 1971
Blaukopf, Kurt
Musical life in a changing society:
aspects of music sociology
Translated by David Marinelli. Portland,
Or. : Amadeus Press, c1992.
ML3795 .B6313 1992
Wißkirchen, Hubert
Essay: Kunst und
Popularität
Musik und Bildung,
Juni 1983
Rosen, Charles
The Classical Style, Haydn Mozart
Beethoven
The Viking Press, New York, 1971, 1976
ISBN 0486222942
Dahlhaus, Carl
The Idea of absolute music
trans. Roger Lustig.
Chicago:
Chicago University Press, 1989.
ISBN: 0 226 13487 3
Curotta, Laura
“An exploration of a student string quartet
as a model of cooperative learning”
Sydney
Conservatorium of Music, University of Sydney
2007
Busoni,
Ferruccio
"Sketch of a New Esthetic of
Music" in Three Classics In The Aesthetics Of Music
Dover Publications (1962); originally
published by G. Schirmer ca. 1911; translated from the German by Dr. Th. Baker.
Benjamin, Walter
The Work of Art in the Age of Mechanical
Reproduction
in SELECTED WRITINGS: 1927-1934
Harvard University Press
ISBN: 0674945867
Bolz, Norbert
Theorie der neuen
Medien
Raben Verlag von
Wittern, München 1990
ISBN
3-922696-67-8
Deutsche Sektion der internationalen Gesellschaft für
elektroakustische Musik (DecimE)
Die Analyse elektroakustischer Musik - eine
Herausforderung an die Musikwissenschaft?
Berlin 1991
Zwicker, E.; Fastl, H.
Psychoacoustics - Facts
and Models (1997)
Springer-Verlag New
York Heidelberg Berlin, 1982
ISBN 0-387-11401-7
Charles Dodge and Thomas A. Jerse
Computer Music, Synthesis, Composition,
and Performance
Schirmer Books, New
York, 1984
ISBN 0-02-873100-X
Bernstein, Leonard
The Unanswered Question
Harvard University Press, 1976
ISBN 3-442-033052-1
Hofstadter, Douglas R.
Gödel, Escher,
Bach: An Eternal Golden Braid
Basic Books; (January 1999)
ISBN 0465026567
Weber, Jürgen
Gestalt Bewegung
Farbe
Berlin 1978
Biography
Born in Reichenbach in Lower
Silesia/Germany in 1963.
From 1980 to 1985, he studied
composition under Udo Zimmermann and piano at the Carl Maria von Weber Academy
in Dresden.
During the following two years he was an
independent artistic employee at the "SEK'D - Studio für elektronische
Klangerzeugung Dresden" (Studio for Electronic Sound Generation in
Dresden), where he was involved in concerts using live electronics and, in addition,
sat in on informatics lectures at the Technical University in Dresden.
From 1987 to 1988 he was a master-class
student under Georg Katzer at the GDR Academy of Arts in Berlin.
From 1988 onwards he extended his
knowledge of musical electronics at the ICEM (Institute for Computer Music and
Electronic Media) in Essen under Dirk Reith and has developed his own
computer-aided compositional language "Celsyus".
From 1993 to 1995 he had a DAAD
scholarship and carried out research at the University of Tel Aviv under his
mentor Yitzhak Sadai into semiotics in electronic music.
From 1995 to 1997 he was secretary of
the composer’s league of the society for new music of the
"Ruhrgebiet" (GNMR), involved in the organization of concerts and
festivals for contemporary and electronic music.
He married in 1998, lives in Tel Aviv
with his wife and daughter, and has since continued to develop and carry
out his artistic concept of "Freesound Music".
His compositions have been performed in
various countries within and outside Europe.
Contact
Friedhelm Hartmann (Freed)
37 Spinoza, Tel Aviv
64516, Israel
+972-506-686-382
freed@cm-gallery.com
skype, MSN: freed.h