Own Notes (Anca)

Article Summary - Virtual 3D environments as composition and performance spaces

Article Link: https://doi.org/10.1080/09298215.2019.1703013


"...artistic research ... implies that knowledge and understanding is primarily gathered through artistic practice (Lüneburg, 2018)."


"Usually, the concept of ‘composing with space’ is understood as the purposeful positioning or movement of a sound event in a plane or three-dimensional space. My reflections are also concerned with the distribution of sound events in space, but here one important feature is that this spatial configuration can be experienced in a virtual space through performers’ movements."


"A 3D environment and the sound-generating object it contains can thus be seen either as an instrument or a musical score ... (in the sense of a legible record and temporal organisation of musical events)."


"While there are a number of different sonic scenarios in the ‘Kilgore’ piece, movements of the avatars always produce continuous sounds when moving in the horizontal plane and percussive sounds when jumping."


"If we want to enable the performers to experience virtuality as a place into which they are intensely embedded, it is important to design its digital realisation – also referred to as perceptually seductive technology (Waterworth, 2001)–in a way that enables the experience of presence. Parallels to the experience of the real world form the starting point for this."


"It has been known for some time that it is possible for virtual reality to achieve a kind of ‘sensory rearrangement’ resulting in modified experiences of one’s own body (Waterworth & Waterworth, 2014, p. 595). This is also referred to as ‘maximal binding’, ‘[which] implies that in cyberspace anything can be combined with anything and made to “adhere”’ (Novack, 1992). This is highly interesting and has hardly been investigated in regard to musical scenarios thus far. In an experimental setup (cf. Figure 4), for example, I replaced the third of the abovementioned elementary points, the filtering of sound when turning away, with a manipulation of pitch. This means that in this particular case, all sound sources located behind the avatar were transposed. As the sound sources in this model produced static pitches, rotating around one’s own axis in the virtual space resulted in a variation of the harmonic situation."
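The transposition experiment described above can be sketched in a few lines: each static-pitch source is transposed when it lies behind the avatar's facing direction, so rotating in place changes the harmony. This is only an illustration of the idea, not the author's implementation; the function and parameter names (`transposed_pitches`, `shift_semitones`) and the choice of a three-semitone shift are assumptions.

```python
import math

def transposed_pitches(avatar_pos, avatar_facing, sources, shift_semitones=3.0):
    """Transpose every source located behind the avatar.

    avatar_facing is a unit vector (fx, fy); sources is a list of
    (x, y, base_pitch_hz) tuples. Returns the effective pitch of
    each source. All names and the shift amount are illustrative.
    """
    fx, fy = avatar_facing
    ratio = 2.0 ** (shift_semitones / 12.0)  # equal-tempered interval
    out = []
    for sx, sy, pitch in sources:
        dx, dy = sx - avatar_pos[0], sy - avatar_pos[1]
        # Negative dot product: the source is behind the facing direction.
        behind = (dx * fx + dy * fy) < 0
        out.append(pitch * ratio if behind else pitch)
    return out
```

Because the test is a simple dot product against the facing vector, turning the avatar continuously moves sources in and out of the "behind" half-plane, which is what produces the varying harmonic situation the quote describes.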


"virtually embodied steering of parameters"


"When a 3D environment is designed with the aim of offering a certain arrangement of possible sounds, this environment can thus be understood as an instrument.

By contrast, the fact that a 3D environment can also be understood as a score may appear less self-evident. I would thus like to explain my thoughts on this by way of a concrete example. When designing Kilgore’s 3D environment, I spent some time smoothing bumpy sections of the paths and ravines along which the avatars usually move, as otherwise the avatars would frequently get stuck behind these uneven patches and need to perform a jump to continue on their path, interrupting the flow of movement. Initially, I sought to avoid the gaps produced by uneven sections. When rehearsing Kilgore, however, the following unexpected scenario occurred: at a certain point in the piece, one of the avatars has to move to a position that can only be reached by running through a long ravine. Furthermore, at this point in the piece a functionality is activated that causes objects to fall from the sky when the jump function is used. These objects produce feedback-like sounds when they land. During the rehearsals it became clear that I had not made this ravine smooth enough, which made the avatars get stuck and meant that they could only move forward by jumping. This triggered a large number of the falling objects and their feedback effects. What had initially been an oversight in developing the 3D landscape unexpectedly gave this particular part of the piece a special character of its own. There was no other part of the composition in which the mentioned feedback-producing objects were triggered so frequently. In this particular context, this design ‘flaw’ provided musical interest and became characteristic of the piece’s formal structure."


"In my compositional work with gaming elements, I have found it increasingly necessary to adopt the performers’ experiential perspective rather than searching for a certain sound phenomenon for a selected point in time. The question guiding my compositions is thus not which sound event I want to occur exactly when, but rather: how can I create a situation in which the performers are motivated to perform a certain musical act? Thus I compose situations and stimuli rather than sounds. I try to create a situation that on the one hand corresponds to a precise musical idea, without being able to directly shape the sound events that may occur within said situation. On the other hand, the situation provides a number of affordances, which in their entirety aim to create an interesting and stimulating situation for the performers. Thus motivated, the performers’ actions convey the intended musical quality. In the context of the design of game spaces, game theorist Michael Nitsche refers to this phenomenon as Attractors or Perceptual Opportunities (2008). He argues that ‘[…] spaces evoke narratives because the player is making sense of them in order to engage with them. Through a comprehension of signs and interaction with them, the player generates new meaning.’ (2008, p. 3)."


About musical performances involving a screen:

"In my experience, a single projection screen in the performance setup has a particularly strong absorbing effect. I describe this setting as a cinematic setup, in the sense that viewers are focused completely on the screen and block out their environment, similarly to a cinema screening. Interestingly, however, using two different projections already breaks the screen’s pull. I compare this with an ‘installative’ situation. Because viewers’ visual attention is no longer drawn to a single screen, we are dealing with a setup that is thus not simply distributed across two screens but includes and makes the audience more aware of the whole of the physical space. In other words, while a single screen remains a singular unquestioned attractor, two or more screens articulate a space that also includes the performers and the audience (Petersen, 2015). I have observed that the events and performers on stage are better integrated into the whole when two or more projections are involved."


About parameter mapping:

"The simplest configuration is one-to-one mapping. Here, a single input signal is allocated to a single parameter. For example, if I move a control and this changes only the volume of the given signal, we are dealing with one-to-one mapping. One-to-many mapping describes when an input signal affects several parameters. For example, we could imagine the same input signal changing not just the volume, but a filter setting, so that the sound becomes brighter as it grows in amplitude.

Besides one-to-one and one-to-many we also have many-to-one mapping, which is present for example in wind instruments, where changes in both blowing pressure and finger position can alter the pitch. Accordingly, several input data influence the same musical parameter.

Finally, there is also many-to-many mapping, which covers a large number of possible combinations. Another analogy to a traditional instrument serves as an example here: with string instruments, for example, a finely coordinated interaction of bow speed, bow pressure and the bow’s position on the string produces a certain timbre that affects both various sound parameters and the volume. What is distinctive here is that the individual input data can no longer be attributed directly to a single change in the sound, but that the complex process of precisely coordinating the parameters with one another leads to changes in the overall sound produced."
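The four mapping configurations in the quote can be made concrete with small functions, one per configuration. The function names and the specific formulas (cutoff range, pitch weights, bowing model) are my own illustrative assumptions, not from the article; only the input/output ratios follow the text.

```python
def one_to_one(control):
    # One input drives one parameter: volume only.
    return {"volume": control}

def one_to_many(control):
    # The same input drives volume and a filter cutoff, so the
    # sound grows brighter as it grows louder (as in the text).
    return {"volume": control, "cutoff_hz": 200.0 + 8000.0 * control}

def many_to_one(blow_pressure, finger_position):
    # Two inputs (wind-instrument analogy) jointly set one
    # parameter: pitch. Weights are illustrative only.
    return {"pitch_hz": 220.0 * (1.0 + finger_position) + 5.0 * blow_pressure}

def many_to_many(bow_speed, bow_pressure, bow_position):
    # String-instrument analogy: three coordinated inputs shape
    # several outputs at once, and no single input can be attributed
    # to a single change in the sound. Formulas are illustrative.
    return {
        "volume": bow_speed * bow_pressure,
        "brightness": bow_pressure / (bow_position + 0.1),
        "noisiness": abs(bow_speed - bow_pressure) * bow_position,
    }
```

The distinction the quote draws is visible in the signatures alone: `one_to_one` is 1 in/1 out, `one_to_many` is 1 in/2 out, `many_to_one` is 2 in/1 out, and `many_to_many` is 3 in/3 out with every output depending on more than one input.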


"In research on Human–Computer Interaction (HCI), the way that data are allocated to the parameters of a digital system is referred to as ‘mapping’. These allocations can be described using the ratio between the input and output signals.

Examples of two-to-many mappings. On the left, the relations between the avatar’s position and five objects in the environment are measured, resulting in a 2 in/5 out scenario; on the right, the distances between the objects are measured, resulting in a 2 in/15 out scenario."
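The two figure scenarios can be sketched as distance computations. The left case maps the avatar's (x, y) position to its distance from each of five objects (2 in/5 out). For the right case I assume the 15 outputs are the pairwise distances among six points, consistent with 6 choose 2 = 15 (e.g. the avatar plus the five objects); that reading, and all names here, are my assumptions rather than the article's code.

```python
import math

def avatar_to_objects(avatar_xy, objects_xy):
    """2 in / 5 out: the avatar position mapped to its distance
    from each of five objects (left-hand scenario)."""
    ax, ay = avatar_xy
    return [math.hypot(ox - ax, oy - ay) for ox, oy in objects_xy]

def pairwise_distances(points_xy):
    """2 in / 15 out: all pairwise distances among six points
    (right-hand scenario, assuming 6 choose 2 = 15 pairs)."""
    n = len(points_xy)
    return [math.hypot(points_xy[i][0] - points_xy[j][0],
                       points_xy[i][1] - points_xy[j][1])
            for i in range(n) for j in range(i + 1, n)]
```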
