These are the demos I made for an Ableton device contest before realizing we weren’t meant to produce demos.
What you’ll hear are snapshots of a dynamic system. The real versions come out differently each time they’re played; even the sections here that seem to repeat aren’t really repeating. These clips are longer than usual, to demonstrate that variation.
The idea, of course, is not to create endless loops like this, but to build a framework where the overly simple source material used here is replaced by live input from a real musician.
Again, they’re just experiments. Details are provided below each music player.
* * *
The various pitched instruments are randomly selecting notes (within the same scale).
The drum part is one short loop, with many hits removed at random, and some of the remaining ones repeated as ghost notes.
The lower bell sounds use the same tricks as the drums, with different settings (and a synced delay to fill in gaps and give them some groove). They're inspired by Javanese gamelan.
The brighter bell sounds ring longer and play less often, serving as a melodic element to tie things together.
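For the curious, the drop-and-ghost trick reduces to a few lines. Here's a minimal Python sketch of the logic, not the actual device (which lives inside Ableton); the scale, probabilities, and velocities are illustrative assumptions:

```python
import random

SCALE = [60, 62, 64, 67, 69]  # assumed scale: C major pentatonic, as MIDI pitches

def random_scale_note():
    """Pick the next pitch for a pitched instrument, constrained to the scale."""
    return random.choice(SCALE)

def thin_and_ghost(loop, keep_prob=0.5, ghost_prob=0.3, ghost_velocity=30):
    """Drop hits from a drum loop at random, echoing some survivors as ghosts.

    `loop` is a list of (step, velocity) hits. Each hit survives with
    probability keep_prob; each survivor may repeat one step later at a
    low velocity with probability ghost_prob.
    """
    out = []
    for step, velocity in loop:
        if random.random() > keep_prob:
            continue  # hit removed at random
        out.append((step, velocity))
        if random.random() < ghost_prob:
            out.append((step + 1, ghost_velocity))  # quiet ghost-note repeat
    return out
```

Run it over the same loop twice and you get two different bars, which is why the "repeating" sections never quite repeat.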
* * *
Rhythms are broken up with the same sort of tricks as before.
The bass part is actually a stream of long steady notes, gated against the drums to create the illusion that a live bass player and drummer have played a lot of shows together.
The scribbly notes (for lack of a better term) only play within that same rhythm, but delayed by a beat.
The organ part follows similar rules to the bass, but is sidechained against unheard elements.
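Sketched in the same hypothetical Python terms, the gating idea is simple: the bass only sounds where the drums do, and the scribbly part borrows that rhythm a beat late. The step grid and delay amount here are assumptions, not the actual settings:

```python
def gate_bass(drum_hits, bar_length=16):
    """Carve a sustained bass note into the drum part's rhythm.

    The bass is conceptually one long, steady note; it is only let
    through on steps where a drum hit lands, so the two parts always
    lock together. Returns the steps on which the bass is audible.
    """
    drum_steps = {step for step, _velocity in drum_hits}
    return [step for step in range(bar_length) if step in drum_steps]

def scribble_steps(gated_steps, delay=4, bar_length=16):
    """The scribbly part plays the same rhythm, delayed by a beat
    (four sixteenth-note steps, under these assumptions)."""
    return [(step + delay) % bar_length for step in gated_steps]
```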
* * *
The pitches of a repeating pattern are randomized on the piano.
Drums and piano both drop notes at random, to vary the rhythm.
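That variation boils down to something like this sketch (again hypothetical Python rather than the actual Ableton patch, with a made-up drop probability):

```python
import random

def vary_pattern(pattern, scale, drop_prob=0.2):
    """Re-pitch a repeating pattern and thin it out.

    `pattern` is a list of (step, pitch, velocity) notes. The rhythmic
    placement is kept, each pitch is re-rolled from the scale, and
    notes drop out at random so the rhythm differs on every pass.
    """
    out = []
    for step, _pitch, velocity in pattern:
        if random.random() < drop_prob:
            continue  # dropped note varies the rhythm
        out.append((step, random.choice(scale), velocity))
    return out
```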
I might have overdone it on the FX.
Heavy reverb fills the space where no drums are playing.
The piano swaps between two audio chains when the drums are not playing. One version is recognizable as the piano. The other becomes the repeating flute sounds you hear.
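Reduced to a toy decision per step, the routing looks like this (send levels and chain names invented for illustration):

```python
def route_piano(step_has_drums):
    """Per-step routing for the piano and the reverb.

    While the drums play, the piano stays on its recognizable chain and
    the reverb send stays low; in the gaps, the reverb opens up to fill
    the space and the piano is diverted into the heavily processed
    chain that produces the flute-like repeats.
    Returns (reverb_send, chain_name).
    """
    if step_has_drums:
        return 0.1, "piano"  # mostly dry, recognizable
    return 0.9, "flute"      # reverb fills the space; processed chain takes over
```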
Here’s a bunch of random notes. I established some rules for the virtual ensemble to follow, pressed play, and recorded the results. Parts of it sound decent, but it isn’t very natural.
Here, I’ve mapped some effects controls, as well as the velocity values of each note, to a motion sensor. As I rotate it around, parameters adjust accordingly. When I lunge at the screen, stabby accents occur. It’s very satisfying.
Things sound more human, until they don’t.
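In spirit, the mapping from the previous paragraph looks something like this; the readings, ranges, and threshold are stand-ins for whatever the sensor actually reports:

```python
def sensor_to_controls(rotation, lunge_accel, lunge_threshold=2.0):
    """Map motion-sensor readings onto effect and note parameters.

    `rotation` is a 0..1 orientation value steering an effect control;
    a forward acceleration spike past lunge_threshold reads as a lunge
    and pushes note velocity into stabby-accent territory.
    """
    effect_amount = max(0.0, min(1.0, rotation))  # clamp to the control's range
    if lunge_accel > lunge_threshold:
        velocity = 120  # accent when lunging at the screen
    else:
        velocity = int(60 + 40 * effect_amount)  # velocity follows the motion
    return effect_amount, velocity
```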
And here, I forgo the wacky repeat effects in favor of tempo control. Same basic control paradigm, but the results should be more subtle.
EDIT: Just noticed my “thin out incoming notes” routines weren’t working on most of the instruments. Fixed now.
More to come, surely. But I’m happy with the progress.
I made this track as accompaniment for the ballad to end a friend’s stage show. It was, however, decided that her show shouldn’t end on a ballad, so the song was scrapped.
Two years later, this mix came about mostly because, in exporting the stems for a new collaboration, I realized that the drums were straight out of GarageBand. One loop, no variation. That won’t do. So I glitched out the drums a bit; they sounded pretty good afterward, but they no longer matched the song at all. Now they do.
Stems, you say? A new collaboration? That’d be here.