Cage Variations II LIVE


I've adapted my Variations II installation software for performance by live ensembles. To better fit a performance context, the idea is to determine a multi-section structure in advance, with each section based on a specific number of sketches. Relative to Cage's straight-up Variations II instructions, it's like a sequence of realizations with no pauses in between. Under this approach, while any particular section is being heard, players in the ensemble are creating point/line sketches that will determine music in the section immediately following.

This brings some interesting performance elements into play: players have to develop a sense of the musical consequences associated with their graphical compositions; the ensemble may want to coordinate overall trends in their sketches in order to shape density, pitch range, and dynamics in a more controlled manner; there is a palpable feeling of risk (within the ensemble at least) around the issue of finishing the required number of sketches for the next section by the scheduled deadline.

The software is designed to allow an arbitrary multi-section structure, specified in a simple text file. Any number of players can create point/line sketches via laptops, while a central hub computer buffers the incoming data to generate and play the score and live animation for each section. Its first use was within a concert coordinated by the Music Technology Group at Pompeu Fabra University in Barcelona, Spain. The next performance will be at the 2013 International Computer Music Conference in Perth, Australia.

Check out this [demo video].


Cage Variations II: interactive installation

var-still-0 var-still-1

One of my projects for the John Cage Centennial Festival Washington, DC was an interactive realization of Variations II, a piece Cage originally presented as 6 transparency sheets bearing a single line each, and another 5 bearing a single point each. The idea is to arrange the points and lines however you like, then drop a perpendicular from each point to each line and measure its length. The 30 distances that result are mapped to various sound features that make up sonic events in the piece. The interface I'm working on, [seen here], lets visitors draw their own point/line sketch with a touch interface. It calculates all the distances and uses that information to play from a bank of over 500 percussive samples with no randomness: everything is determined from the sketches. Points and lines are displayed in sync with corresponding sonic events as they occur. Here's video of a 60-second test run with 10 sketches: [test-video]
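
The geometry behind the mapping is simple to sketch. Here's a minimal Python illustration (not the installation's actual code) of how 5 points and 6 lines yield the 30 perpendicular distances; the specific coordinates are made up for the example:

```python
import math

def point_line_distance(px, py, x1, y1, x2, y2):
    """Perpendicular distance from point (px, py) to the infinite line
    through (x1, y1) and (x2, y2): |cross product| / line length."""
    dx, dy = x2 - x1, y2 - y1
    return abs(dy * px - dx * py + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

# 5 points and 6 lines, as on Cage's transparencies; each of the
# resulting 30 distances would be mapped to a sound feature.
points = [(1, 1), (4, 2), (2, 5), (6, 6), (3, 3)]
lines = [((0, 0), (10, 0)), ((0, 0), (0, 10)), ((0, 10), (10, 0)),
         ((0, 5), (10, 5)), ((5, 0), (5, 10)), ((0, 0), (10, 10))]

distances = [point_line_distance(px, py, *a, *b)
             for (px, py) in points for (a, b) in lines]   # 30 values
```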


The Open Shaper

ostid-1 ostid-2

The Open Shaper is a gestural controller in which mappings between control streams and synthesis parameters are centered on the notion of shaping a four-sided polygon with the fingertips. It uses infrared blob detection to capture open-air gestures of two fingers on each hand. Though there are only four vertices, higher-level features drawn from these raw coordinates generate many different control streams. Reference to this simple virtual entity is also useful for encouraging more motivated movements in a completely open-air scenario. It provides grounding in the context of instrumental performance, where a player traditionally manipulates an external object (the instrument) through direct physical force. A further consequence of this strategy is that it imposes particular types of interdependence between control streams. For example, it is not possible to shorten even one edge of the polygon without also affecting the angles of other sides and the position and velocity of the vertices involved. In a complex mapping scheme where each of these control streams is utilized, even the most basic motion can have multidimensional consequences, creating idiosyncratic action-sound relationships for a performer to learn and exploit.
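
As a rough illustration of this interdependence, here is a Python sketch (not the Open Shaper's actual implementation) that derives a few control streams from four fingertip coordinates. Moving a single vertex changes one coordinate pair, yet the edge lengths, interior angles, and centroid all shift together:

```python
import math

def polygon_features(verts):
    """Derive several control streams from four fingertip vertices:
    edge lengths, interior angles (degrees), and the overall centroid."""
    n = len(verts)
    edges = [math.dist(verts[i], verts[(i + 1) % n]) for i in range(n)]
    angles = []
    for i in range(n):
        # Vectors from vertex i to its two neighbors
        ax = verts[i - 1][0] - verts[i][0]
        ay = verts[i - 1][1] - verts[i][1]
        bx = verts[(i + 1) % n][0] - verts[i][0]
        by = verts[(i + 1) % n][1] - verts[i][1]
        cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cos_t)))))
    centroid = (sum(x for x, _ in verts) / n, sum(y for _, y in verts) / n)
    return edges, angles, centroid

# Stretching one edge of the square alters the angles and centroid too.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
stretched = [(0, 0), (2, 0), (1, 1), (0, 1)]
```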

When browsing timbre spaces, an immediate benefit of the Open Shaper is polyphony. Depending on the mapping, up to nine browsing cursors can be active at once: the four vertices of the polygon, the centroids of certain pairs of points, and that of the polygon as a whole. Each active cursor drives a separate grain stream, and the streams relate by virtue of the polygon's shape and position. Because distances between grains arranged in a timbre space are tied to sonic similarity, contracting the area of the polygon makes the timbre of all grain streams more consistent, while stretching the shape to its extremes immediately diversifies the overall texture.
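
A sketch of the nine-cursor set, assuming the "certain pairs" are the polygon's four edges (the actual pairing may differ), along with the area whose contraction pulls all cursors together:

```python
def polygon_centroid(verts):
    """Average of the vertices (a common simplification of 'centroid')."""
    n = len(verts)
    return (sum(x for x, _ in verts) / n, sum(y for _, y in verts) / n)

def cursors(verts):
    """Nine browsing cursors: the four vertices, the midpoints of the
    four edges, and the centroid of the whole polygon."""
    n = len(verts)
    mids = [polygon_centroid([verts[i], verts[(i + 1) % n]]) for i in range(n)]
    return list(verts) + mids + [polygon_centroid(verts)]

def shoelace_area(verts):
    """Polygon area via the shoelace formula; contracting it clusters
    the cursors, making the grain streams' timbres more consistent."""
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1] -
            verts[(i + 1) % n][0] * verts[i][1] for i in range(n))
    return abs(s) / 2.0
```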

Pre-defined mappings of DSP chains used to process the four grain streams are chosen based on which of the four fingertips enters a particular side of the tracking area first, and different timbre spaces are brought up via a simple pedal trigger. With practice, strong relationships are formed between physical movements, visual characteristics of virtual elements, and the resulting audio output.

[A video] demonstrating the Open Shaper applied to 3 different timbre spaces.


The Gesturally Extended Piano

gep-2 gep-1

The Gesturally Extended Piano is an extended instrument that tracks four points on the pianist’s forearms and hands in order to control real-time processing and synthesis. Motions that extend relatively naturally from standard piano technique—such as flexing the wrists and rotation of the forearms—can be used to modulate sound transformation parameters. A very basic example is pitch-bending. By playing a note and rotating the wrist to the left and right, pitch can be bent downwards or upwards.

Motion tracking is achieved using the infrared (IR) blob tracking module from my open source [DILib] library, which can be downloaded from the research area of this website. Four reflective spheres are attached to my arms, and a PS3eye camera with an 850nm IR filter is mounted over the keyboard. Attached to the camera is an IR light bar shining downwards, so that the brightest light reaching the camera is that from the highly reflective markers. Extreme contrast is applied to the video stream so that the blob tracking algorithm can be fed a pure black and white image where the markers appear as large white circles that clearly stand out against the black background.
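
The thresholding and blob-finding stages can be sketched in a few lines of Python. This toy version (DILib itself runs on live video) binarizes a grayscale frame and locates blob centroids with a flood fill:

```python
def threshold(frame, cutoff):
    """Binarize a grayscale frame (rows of 0-255 values), mimicking the
    extreme-contrast stage that isolates the reflective markers."""
    return [[1 if px >= cutoff else 0 for px in row] for row in frame]

def blob_centroids(binary):
    """Find connected white regions with a flood fill and return the
    centroid (x, y) of each one."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys = sum(p[0] for p in pixels) / len(pixels)
                xs = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((xs, ys))
    return centroids
```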

In addition to making the blob tracking easy to tune (via a region of interest and thresholding parameters), the DILib module generates many pieces of relative data beyond the basic coordinates of the four points. Between any pair of points, the module provides distance, angle, and centroid. It also provides the delta values for coordinates between frames—an indicator of velocity.

[A video] demonstrating the pitch shifting/time stretching mapping.
[A video] demonstrating 4 different mappings.
[A video] of the overhead IR camera view (no sound).


Cartridge Music/Pools/One4


In October 2011, percussionist Ross Karre and I worked on realizations of three Cage percussion works that either call for or allow for live electronics. I wrote and operated audio processing/spatialization software for the event. In general, software for all of the pieces was designed to create relationships between the spatial positioning and transformation of sounds as they are played in real time. Based on chance operations, live audio is sent along specific spatial trajectories within the gallery via an array of speakers. Different types of sound transformation processes also exist and shift position within the space, awaiting any live audio that is placed nearby.

For Cartridge Music/Duet for cymbal, these shifts of position are sudden, and for the most part happen in relation to the onset of new notes or sound events. Audio from the microphone inputs jumps across the space; the spatial position of sound transformation processes, however, remains fixed, with the intensity of processing varying in proportion to the distance between these two elements.
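
The distance-to-intensity relationship might be sketched like this; the inverse-distance falloff law and radius parameter are illustrative assumptions, not the actual curve used in the software:

```python
import math

def processing_intensity(sound_pos, process_pos, radius=1.0):
    """Intensity of a sound transformation process applied to live audio,
    falling off as the distance between the audio's spatial position and
    the process's position grows. Full intensity when they coincide."""
    return radius / (radius + math.dist(sound_pos, process_pos))
```

With radius=1, a sound sitting on top of a process is transformed at full intensity (1.0), and one a unit away at half intensity (0.5).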

[Cartridge Music] (note: this video only contains raw low-quality audio from the camera)



Exurbia

Exurbia is a digital sound-editing program where users compose individual sound-works from a shared collection of sound samples. The program has four distinct features:

• The interface is TIME-INTENSIVE, being predominantly aural and executed in real time.
• Editing is DESTRUCTIVE (i.e. there is no 'undo' feature).
• All of the source materials (i.e. the sound samples) are SHARED among all users.
• Each edit on a single user's computer impacts every instance of that file throughout the Exurbia COMMUNITY.



Infrared fingertip tracking


I hope to develop this into an instrument over the next few months. The only hardware I needed to put together was an array of IR LEDs. The array shoots a cannon of IR light at me, and I send it straight back via reflective tape attached to my fingertips. Originally, this was picked up by the Wii remote's infrared camera, with the resulting data received at my laptop over Bluetooth and routed into Pd. The latest patch—which will be released with DILib this summer—drops the Wii remote altogether and feeds a PS3eye camera straight into GEM instead. This provides full 3-dimensional tracking (very reliable depth info), and can also be expanded to track as many points as you like.

[A short video] of the new tracking patch coming soon as part of DILib. This one has full XYZ coordinates.
[A short video] of me trying the tracking in timbreID's timbre-space example patch for the first time.
[A short video] of me trying the tracking with four phase vocoded samples.


Contrabass Improvisation Project


A new performance project involving timbreID objects is currently being developed in collaboration with bassist Matthew Wohl. The patch analyzes Matt's playing using a few different features in order to control a concatenative synthesis process. Here, the goal is not for the timbres of input and output to mirror each other. Bark-spectral analysis of the live bass signal is constantly compared against a training database. Then, rather than using the nearest match index as a read point into the training audio, the index is used to point to grains in entirely unrelated audio files. This provides consistent control that Matt is learning through practice. At this point, the audio files being explored are simply conventional instrument samples: piano, clarinet, and contrabass. The patch includes settings for making the synthetic output independent, so that it amounts to more than a real-time imitation of Matt's playing. Apart from classification functionality, additional timbreID objects are used to generate control streams for filtering, reverb, and spatialization.
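
The redirection at the heart of this mapping is easy to sketch in Python. This toy version (the real patch does its Bark-spectrum matching with timbreID in Pd) finds the nearest training frame and uses its index to select a grain from an unrelated corpus; the vectors and grain names are made up:

```python
def nearest_index(query, spectra):
    """Index of the training frame whose spectrum is closest to the
    query (squared Euclidean distance on Bark-like feature vectors)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(spectra)), key=lambda i: sqdist(query, spectra[i]))

# The twist: rather than reading the training audio at the matched
# index, use that index to select a grain from an unrelated corpus.
training_spectra = [[0.9, 0.1, 0.0], [0.2, 0.8, 0.1], [0.0, 0.2, 0.9]]
piano_grains = ["piano-grain-000", "piano-grain-001", "piano-grain-002"]

live_frame = [0.1, 0.9, 0.0]
grain = piano_grains[nearest_index(live_frame, training_spectra)]
```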

Matt will perform with the system in the CPMC black box on 6/9/10.

[Watch Matt exploring a few sounds in a practice session.]


False Ruminations

false_ruminations false_ruminations_02

[Ludbots] have been used in several live performance pieces, but False Ruminations marks their first application in a long-term installation. Over the course of their stay at [Open Space] in Victoria, BC, they will perform about 15,000 algorithmically generated variations of a single 4-5 minute piece I composed for this event, each time pausing for several minutes before proceeding to a new variation. These variations are directions that the piece may have moved toward had I not made the specific choices that I did. As the Ludbots make their way through various combinations of compositional materials, they may come upon an ideal configuration.

The instruments are wooden planks (poplar), cut in lengths between about 1 and 2.5 feet. After the nodes of each plank were found, the planks were suspended from the ceiling at those points so that the mounting doesn't interfere with their resonance. The sixteen planks are tuned in pairs, so that there are eight rough pitch categories, and the members of each pair differ in pitch by one to three quarter tones.

Compositionally, there are two streams of rhythm interpolation happening in parallel. Each stream moves between 5 different rhythms, using a specific group of pitches in each section. Because the two processes do not line up at the seams, the collection of pitches and rhythms in play is constantly changing in subtle ways.
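
One way to sketch such an interpolation stream in Python, assuming a rhythm is represented as a list of note durations (the actual representation in the piece may differ):

```python
def interpolate_rhythm(r1, r2, t):
    """Linear interpolation between two rhythms of equal length,
    each given as a list of note durations; t runs from 0 to 1."""
    return [(1 - t) * a + t * b for a, b in zip(r1, r2)]

def stream(rhythms, steps_per_pair):
    """One interpolation stream: pass through a list of rhythms,
    emitting evenly spaced intermediate rhythms between each pair."""
    out = []
    for r1, r2 in zip(rhythms, rhythms[1:]):
        for k in range(steps_per_pair):
            out.append(interpolate_rhythm(r1, r2, k / steps_per_pair))
    out.append(rhythms[-1])
    return out
```

Two such streams running with different step counts never line up at the seams, which is what keeps the combined collection of material in constant subtle flux.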

Thanks to Ted Hiebert for the photographs.

[Listen to variation 1100]
[Listen to variation 7717]

[View a 2 minute movie]



Ludbots

ludbot_image ludbot_image_2
Photo: Matt Jenkins

In 2008, I built a set of mechanisms for playing acoustic percussion instruments or found objects, and a [PIC microcontroller] circuit for controlling them. As the name suggests, Ludbots are very simple in terms of design, but they perform effectively. I’m grateful to [Bil Bowen] for taking the time to explain his very clean and efficient ModBot mechanisms.

On the [PIC] programming end of things, I worked with [Kevin Larke] to extend his MIDI parsing [PIC] code to translate the velocity component of note-on messages to pulse-width modulated voltages. The end result is a set of percussion-playing robots that respond to MIDI note-on messages from any source. Their first application ([Cobbled], described below) involved a live performer and real-time computer generated accompaniment. On a very practical level, Ludbots provide the ability to play difficult scores that cannot be achieved with human performers. Their small size and easy mounting also make them ideal for installation use—filling a room with immersive, acoustically produced sound.
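
The velocity-to-PWM translation might look like the following Python sketch; the duty-cycle bounds are illustrative, not the values baked into the actual PIC firmware:

```python
def velocity_to_duty(velocity, min_duty=0.2, max_duty=1.0):
    """Translate a MIDI note-on velocity (1-127) into a PWM duty cycle
    for a solenoid driver. min_duty guarantees enough force for an
    audible hit; both bounds here are assumptions for illustration."""
    v = max(1, min(127, velocity)) / 127.0
    return min_duty + v * (max_duty - min_duty)
```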

[View eight ludbots in action]


More recently, they have been used to realize a unique interpretation of David Lang’s Unchained Melody in collaboration with percussionist Steve Schick. The recording will be released on one of Schick’s upcoming albums. Pictured above is Steve playing with 16 ludbots in Zipper Hall, Los Angeles to open the [Carlsbad Music Festival] on 09/19/08. Mark Swed of the LA Times had this to say:

“The percussion standout was David Lang’s ‘Unchained Melody,’ a bopping single line played by Steven Schick on the glockenspiel, each note doubled by an electronically operated mallet hitting a noise-maker. This was made possible by William Brent . . . Operated by a laptop, the robot crew proved a flawless ensemble.”

[View a live performance of Unchained Melody]
[View an excerpt of the robot component of Unchained Melody]

lud09 lud09
The second generation of ludbots is almost done. They'll be nearly identical in terms of performance (though with a little less slop in the action), but the biggest change is the metal housing. I'm hoping it will be much more durable for road trips and general abuse, and also make repairs and part replacement much easier. Counting the original ludbots, this set will bring the total up to 32 units, meaning that some interesting spatialization and ensemble coordination will be possible soon. Their first use will be in Adam Wilson's "Duoquadragintapus" at Calit2 on 05/19/09.


Wii Remote Instrument


I’m beginning to explore using the Wii remote as a performance controller. The current instrument consists of layered phase vocoders in SuperCollider, where IR tracking scrubs through the sounds, pointing the Wii remote upwards affects pitch, and rotating the wrist left and right controls volume. Accelerometer information translates to parameter glides that are proportional to the action in each dimension. A major attraction of this setup in comparison with other things I’ve tried in the past is its compactness. It’s pretty refreshing to be able to carry everything in a small bag, open the laptop, and begin making sound.
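
A couple of these mappings can be sketched in Python; the axis conventions and ranges here are assumptions for illustration, not the actual SuperCollider code:

```python
import math

def roll_to_volume(accel_x, accel_z):
    """Map wrist roll, estimated from two accelerometer axes while the
    remote is roughly still, onto a 0-1 volume control."""
    roll = math.atan2(accel_x, accel_z)        # -pi .. pi radians
    return (roll + math.pi) / (2 * math.pi)    # normalized to 0 .. 1

def glide_time(delta, base=0.01, scale=0.5):
    """Parameter glide duration proportional to the amount of motion
    reported in one accelerometer dimension."""
    return base + scale * abs(delta)
```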

[Listen 01]
[Listen 02]



risk risk

This piece is another collaboration with percussionist [Ross Karre], and the first to make use of my percussive timbre tracking externals [bfcc~ & timbreID] for Pd. So far, the tracking in this 11 instrument setup is very reliable.

The score for the piece was generated algorithmically with scripts I wrote in MATLAB, according to a large-scale formal sketch by Ross. The MATLAB "score" was purely numerical; I realized it in SuperCollider with some percussion samples I had recorded previously. Ross then transcribed this audio file into conventional notation by hand. Though I also output the score as a MIDI file that could be imported into notation software, Ross preferred writing the part out himself for maximum flexibility, and simply as a chance to begin learning the piece.

In short, the timbre tracking is used to drive a variety of processes in the accompanying electronics. Some are immediate, and others take place over a larger window of time. All of the instruments that the performer strikes are classified and stored as statistical information. Because the piece is written so that the performer can choose one of a few instruments for each note (e.g., a "wood" instrument must be struck, but it doesn't matter which one), this information will be unique for each performance. The simplest example of an immediate process is for the electronics to play back a sound from a certain bank of sounds associated with the instrument hit. When the performer hits an instrument, the timbre classification is fast enough that a computer-generated sound can be produced with reasonable synchronicity. Certain instruments trigger bursts of granular synthesis, others trigger long drones, and none of these associations are hard-wired; they change along with the form of the piece. An example of a process that takes place over a longer time window is the bed of phase vocoded samples, composed of several samples of the instrument set that are drastically stretched in duration. A tally of timbre classifications reveals the most favored instrument within a certain amount of time; this information determines which sample will be added to the bed, and how it will be processed.
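
The favored-instrument tally might be sketched like this (a toy version; the actual statistics live in the Pd patch, and the instrument names are made up):

```python
from collections import Counter

def favored_instrument(window):
    """Return the most frequently struck instrument within a window of
    recent timbre classifications."""
    return Counter(window).most_common(1)[0][0]

# e.g. a window of classifications accumulated during one section
recent = ["woodblock", "gong", "woodblock", "cowbell", "woodblock"]
```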

Above are sample stills from Ross' video, which will be projected on the instruments. The first performance will be at UCSD's Conrad Prebys Music Center on 05/20/09.

[Test excerpt with percussion & electronics.]


Popol Vuh


Diagram: Jeffrey Treviño

A live video/electronics/percussion performance piece based on the Mayan creation myth. Popol Vuh was realized collaboratively by the Synchronism project. William Brent: software & sound design. Ross Karre: composition, video & percussion. Jeffrey Treviño: compositional structure.

The software involves synchronized video and audio between three computers over a network, an application of my composite rhythm flocking idea, and analysis-based mixture between the pot and voice sounds. Three computers were used to ensure maximum stability. One computer is responsible for sound, another for playback of the video projected onto the bass drum, and the last for playback of Ross’ video score within his percussion setup. It’s possible to run everything on my MacBook Pro, but doing so pushes the CPU close to its limit.
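
For reference, an OSC message is simple enough to assemble by hand, which is part of why cross-platform cooperation between environments is painless. This Python sketch encodes a minimal single-float message per the OSC 1.0 spec (in practice, SuperCollider and Max handle this encoding internally):

```python
import struct

def osc_message(address, value):
    """Hand-assemble a single-float OSC 1.0 message: null-terminated
    address padded to a 4-byte boundary, the type-tag string ',f'
    padded the same way, then a big-endian 32-bit float."""
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address) + pad(",f") + struct.pack(">f", value)
```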

Popol Vuh will be performed at the [Carlsbad Music Festival] on 09/19/08 at Zipper Hall in Los Angeles, and on 09/27/08 in Carlsbad.

The video here is rather large because it’s the entire piece. It may be best to download the 30MB+ file rather than waiting to view it in your browser.

[View Popol Vuh]



Cobbled

Photo: Matt Jenkins

The initial motivation for producing this hardware was a performance project entitled Cobbled with percussionist [Matt Jenkins] (pictured above) and composers Brian Griffeath-Loeb and Dan Tacke. The compositional structure of the piece is provided by short notated cells of musical material. Each cell has a part for the human player, and a separate part for the robotic player. The combinations exploit possibilities offered by robotic percussion, such as high speeds, playing several instruments at once, and erratic fluctuation in loudness. To explore the possibility of robotic performance that is more than simple execution of a given score, Cobbled employs improvisation and machine listening with the aim of eliciting unpredictable responses from both players. The performance scenario is straightforward: a percussionist plays from one composed cell to any other, making improvised transitions of various lengths based on the corresponding material. The robotic part adapts to these changes with appropriate accompaniment, and returns to playing a composed cell upon the percussionist’s cue.

[View a .mov of the performance]


Man With A Movie Camera


In 2006, percussionist [Ross Karre] began working on an accompanimental score to Dziga Vertov’s 1929 film, Man With A Movie Camera. His goal was to sonify the visual rhythm of the film’s transitional cuts—all of which are immediate “hard” cuts. This involved transcribing an hour of precise and irregular rhythms to conventional notation in order to produce a performable score. Each cut is represented by a single note. Early versions of this piece employed coordinated ensemble improvisation to support the raw one-note-per-cut instrumental part.

In 2007, Ross and I collaborated to create an electroacoustic version of the piece for solo performer. A variety of techniques are used to relate the cuts to computer generated sound events. Among them:

• Straightforward sample playback: at particular structural points in the piece, new pre-recorded rhythmic material is added to the pool of available samples. The samples vary in character. Some have very sharp initial attacks and correlate to the visual cuts very clearly. Others have very gradual attacks.
• Cluster of samples: each cut triggers a chime-like cloud of about 7 attacks.
• Partial glissando: an analysis-based resynthesis of a chime is played, and upper partials gliss slightly upwards or downwards. The effect is subtle, and can create a confusing mixture with the accompanying acoustic chime sound.
• Time stretching: straightforward phase vocoder time stretching of a chime sample is used to mark the beginning of each new section in the piece.
• Palindromes: a second or two before a cut occurs, a randomly generated rhythm is articulated, followed by another attack on the actual cut. After the cut, the newly generated rhythm is played backwards, creating a palindrome rhythm that pivots at the cut point.
• Delay buildups: in certain sections of the piece, windows of varying size (20-60 seconds) write any attacks that happen on cuts to a delay buffer. The contents of this buffer repeat, leaving a constantly building sonic residue of cut events. These composite rhythms slowly fade out when overlapping with the next section.
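
The palindrome construction above is straightforward to sketch, assuming the randomly generated rhythm is represented as a list of inter-attack intervals (an assumption for illustration):

```python
def palindrome_rhythm(pre_cut_intervals):
    """Mirror the randomly generated rhythm around the attack that lands
    on the cut itself: the intervals leading into the cut are replayed
    in reverse order after it."""
    return list(pre_cut_intervals) + list(reversed(pre_cut_intervals))
```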

[View excerpts]


Postures For Realignment


Photo: Serina & Jason Ponce

Often written PFR and pronounced “Fur,” this five-player ensemble has been a pleasure to play with. It is unique for a few reasons—the most significant being the odd combination of instruments. PFR features two laptop performers doing live processing, tuba, percussion, and electric guitar. During our heyday of performances and consistent rehearsals, this band provided the environment in which I felt like a performer on a laptop computer for the first time.

PFR is:
• Joe Bigham: Electric Guitar
• William Brent: Laptop
• Fabio Oliveira: Percussion
• Jonathan Piper: Tuba
• Jason Ponce: Laptop

Our culminating achievement was a live instrumental accompaniment to the 1986 blockbuster film, Top Gun. A studio recording session followed, resulting in the track below. It is also available on UCSD’s 2008 album entitled sound check three, which represents the state of the music department.

[Listen to Squirrel Pod Saskatchewan]



{Percept}

The most recent realization of {Percept}, at the 2006 Collision conference at the University of Victoria, was a large undertaking. Jason Ponce and I had to transport six computers, projectors, and various hardware for the installation, and setup took two solid days to complete.

The installation is based on a pen and paper game thought up by Jason Ponce that challenges players to communicate in an unconventional way through images. After playing this game together, we began talking about how it might take shape in the realm of sound rather than images, and ideas came quickly.

In short, {Percept} presents an immersive multimedia environment that is centered on sound recordings contributed by visitors. These recordings are made in an effort to represent something specific (which is very nearly impossible in the multi-valent language of sound). Visitors can also choose to interpret recordings left by others—a feedback process that shapes the state of sound and video playback in the space.

My contribution to the installation was the compilation, spatialization, and manipulation of visitor recordings, managed by software I wrote in SuperCollider. Using Max/MSP/Jitter, Jason Ponce did real-time processing of videos that he and I shot beforehand. This is what required the horsepower of multiple computers, which Jason networked to a master computer. Also worthy of note was the multi-platform communication: I used SuperCollider, he used Max/MSP. Through the wonders of OSC, cooperation was not an issue.

What I find most interesting about the installation is the capacity for social interaction through articulation of abstract sound. In the right setting, {Percept} is a slowly evolving social sound space.

[View a .mov of the installation]