earlier experiments in generative music

These are the demos I made for an Ableton device contest before realizing we weren’t meant to produce demos.

What you’ll hear here are snapshots of a dynamic system. The real versions come out differently each time they’re played, much as the repeating sections here aren’t really repeating. These will be longer clips than usual, to demonstrate that variation.

The idea, of course, is not to create endless loops like this, but to build a framework where the overly simple source material used here is replaced by live input from a real musician.

Again, they’re just experiments. Details are provided below each music player.

* * *

  • The various pitched instruments randomly select notes (within the same scale).
  • The drum part is one short loop, with many hits removed at random, and some of the remaining ones repeated as ghost notes.
  • The lower bell sounds use the same tricks as the drums, with different settings (and a synced delay to fill in gaps and give them some groove). They're inspired by Javanese gamelan.
  • The brighter bell sounds ring longer and play less often, as a melodic element to tie things together.
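The first two tricks can be sketched in a few lines of Python, outside Ableton entirely. The scale, probabilities, and ghost-note timing below are illustrative assumptions, not the actual device settings.

```python
import random

# An assumed scale (C major, one octave) as MIDI note numbers.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]

def random_note(scale=C_MAJOR):
    """Pitched instruments: pick any note, as long as it stays in the scale."""
    return random.choice(scale)

def thin_with_ghosts(hits, keep_prob=0.6, ghost_prob=0.25):
    """Drums: drop hits at random, and sometimes repeat a survivor
    quietly (a 'ghost note') a sixteenth later.

    hits is a list of (time_in_beats, velocity) pairs."""
    out = []
    for time, velocity in hits:
        if random.random() < keep_prob:
            out.append((time, velocity))
            if random.random() < ghost_prob:
                out.append((time + 0.25, max(1, velocity // 4)))
    return out
```

Run the loop through `thin_with_ghosts` on every repeat and you get a part that never plays quite the same way twice.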

* * *

  • Rhythms are broken up with the same tricks as before.
  • The bass part is actually a stream of long steady notes, gated against the drums to create the illusion that a live bass player and drummer have played a lot of shows together.
  • The scribbly notes (for lack of a better term) only play within that same rhythm, but delayed by a beat.
  • The organ part follows similar rules to the bass, but is sidechained against unheard elements.
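The bass gating can be approximated at the note level. The real thing is an audio gate keyed by the drum bus, and the tolerance window below is a guess, but the effect is the same: a steady stream inherits the drums' rhythm.

```python
def gate_against_drums(bass_notes, drum_times, window=0.1):
    """Let a bass note through only if it lands close (within `window`
    beats) to a drum hit; everything else is silenced by the gate."""
    return [(t, pitch) for (t, pitch) in bass_notes
            if any(abs(t - d) <= window for d in drum_times)]
```

Since the drum part itself is randomized, the bass ends up locked to a rhythm that changes every pass.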

* * *

  • The pitches of a repeating pattern are randomized on the piano.
  • Drums and piano both drop notes at random, to vary the rhythm.
  • I might have overdone it on the FX.
  • Heavy reverb fills the space where no drums are playing.
  • The piano swaps between two audio chains when the drums are not playing. One version is recognizable as the piano. The other becomes the repeating flute sounds you hear.
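The piano trick, in sketch form: the rhythm of the pattern stays fixed while each pitch is re-rolled on every pass, and steps drop out at random. The scale and keep probability here are assumptions for illustration.

```python
import random

def rerolled_pattern(pattern, scale, keep_prob=0.8):
    """Keep each step's timing, randomize its pitch from the scale,
    and drop some steps at random to vary the rhythm.

    pattern is a list of (time_in_beats, pitch) pairs."""
    out = []
    for time, _pitch in pattern:
        if random.random() < keep_prob:
            out.append((time, random.choice(scale)))
    return out
```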

generative nonsense in C Major

Here’s a bunch of random notes. I established some rules for the virtual ensemble to follow, pressed play, and recorded the results. Parts of it sound decent, but it isn’t very natural.

Here, I’ve mapped some effects controls, as well as the velocity values of each note, to a motion sensor. As I rotate it around, parameters adjust accordingly. When I lunge at the screen, stabby accents occur. It’s very satisfying.
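A minimal sketch of that mapping: one normalized sensor reading scaled into the MIDI velocity range. The sensor range and the linear curve are invented for illustration; the real mapping runs through Max.

```python
def sensor_to_velocity(reading, lo=0.0, hi=1.0):
    """Scale a sensor reading in [lo, hi] into MIDI velocity 1-127,
    clamping anything outside the expected range."""
    frac = (reading - lo) / (hi - lo)
    frac = min(1.0, max(0.0, frac))
    return max(1, round(frac * 127))
```

A sudden lunge pushes the reading to its ceiling, which is where those stabby accents come from.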

Things sound more human, until they don’t.

And here, I forgo the wacky repeat effects in favor of tempo control. Same basic control paradigm, but the results should be more subtle.

EDIT: Just noticed my “thin out incoming notes” routines weren’t working on most of the instruments. Fixed now.

More to come, surely. But I’m happy with the progress.

GTZ Hydra

GTZ Hydra is a MIDI routing utility. It is currently only available for Ableton Live.

There will be other versions (some day), including a basic hardware solution, but their features may be cut down to meet the limits of those other platforms.

The Ableton version requires Live 8.1 or higher, and the Max For Live add-on.

GTZ Hydra v1.11

Now. What the heck are we looking at? What does it do, and how can you use it?

Here’s the first part of that:

Explanation - Part 1 from GreaterThanZero on Vimeo.

The “how can you use it” bit is up next. Watch this space.

my first monome video

Explanation Pending from GreaterThanZero on Vimeo.


Ableton Live provides a somewhat non-linear, modular approach to creating music. It’s popular amongst DJs in particular, and producers of hip-hop, but it has some great tools for my workflow as well.

Max/MSP/Jitter is a visual scripting language which creates and manipulates audio and video. It’s traditionally been popular in experimental avant-garde circles, but a new generation of electronic musicians has adopted it, thanks in part to the monome.

The monome is a grid of buttons that light up, allowing you to tangibly manipulate any idea that can be expressed in a two-dimensional grid over time. It’s minimalist in design, and open-ended in function. This makes it an ideal interface for something as open-ended as Max/MSP/Jitter, and in many cases, the ideal instrument for musicians who work with samples. This creates a strange overlap in the user base, which makes their community a fun place to be.

Recently, these worlds have merged further with the release of Max for Live, which allows Max/MSP/Jitter developers to re-imagine what Ableton Live can be used for, and build new interfaces inside it.

I’m involved pretty heavily in the monome community, mostly helping people with Max for Live. I’ve created some tools of my own in it, and several of those are at work in this video.

Technical discussion of those will be found here. At least in theory. Thus far, it’s just me in there.

(This happens a lot when I post something too far outside of the norm. Does silence convey reverence, or pity? I’ve left the world dumbfounded.)