Here’s a bunch of random notes. I established some rules for the virtual ensemble to follow, pressed play, and recorded the results. Parts of it sound decent, but it isn’t very natural.
Here, I've mapped some effects controls, as well as the velocity values of each note, to a motion sensor. As I rotate it around, parameters adjust accordingly. When I lunge at the screen, stabby accents occur. It's very satisfying.
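If you're curious what that mapping looks like, here's a rough sketch of the shape of it. This is hypothetical, not my actual patch: the function names, angle range, and "lunge intensity" scale are all made up for illustration.

```python
def rotation_to_cc(angle_deg: float) -> int:
    """Map a sensor angle in [-180, 180] degrees to a MIDI CC value in [0, 127]."""
    angle_deg = max(-180.0, min(180.0, angle_deg))  # clamp out-of-range readings
    return round((angle_deg + 180.0) / 360.0 * 127)

def accent_velocity(base_velocity: int, lunge: float) -> int:
    """Push velocity toward 127 as lunge intensity (0..1) increases.

    A lunge of 0 leaves the note alone; a full lunge gives a maximum-velocity
    stab, which is where the satisfying accents come from.
    """
    lunge = max(0.0, min(1.0, lunge))
    return min(127, round(base_velocity + (127 - base_velocity) * lunge))
```

The clamping matters more than it looks: motion sensors spike, and an unclamped spike makes every knob slam to its extreme at once.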
Things sound more human, until they don't.
And here, I forgo the wacky repeat effects in favor of tempo control. Same basic control paradigm, but the results should be more subtle.
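The tempo version is the same idea, just pointed at BPM instead of effects, plus some smoothing so the tempo drifts rather than jumps. Again a sketch under made-up assumptions (normalized tilt input, a 70–130 BPM range, a simple exponential smoother), not my actual setup:

```python
def tilt_to_bpm(tilt: float, lo: float = 70.0, hi: float = 130.0) -> float:
    """Map a normalized tilt (0..1) linearly onto a tempo range in BPM."""
    tilt = max(0.0, min(1.0, tilt))
    return lo + (hi - lo) * tilt

class SmoothedTempo:
    """Ease the current tempo toward a target instead of snapping to it."""
    def __init__(self, bpm: float = 100.0, alpha: float = 0.1):
        self.bpm = bpm      # current tempo
        self.alpha = alpha  # 0..1; smaller = lazier, more subtle response
    def update(self, target_bpm: float) -> float:
        self.bpm += self.alpha * (target_bpm - self.bpm)
        return self.bpm
```

The smoothing is what buys the subtlety: raw sensor wobble becomes a gentle push and pull on the tempo rather than a seasick lurch.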
EDIT: Just noticed my "thin out incoming notes" routines weren't working on most of the instruments. Fixed now.
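For the record, "thin out incoming notes" is about as simple as it sounds. Something in the spirit of the following, though the probability-based approach and the numbers here are my illustration, not the exact routine:

```python
import random

def thin_notes(notes: list, keep_prob: float = 0.6, seed=None) -> list:
    """Randomly drop a fraction of incoming notes to loosen up a busy part.

    keep_prob is the chance each note survives; a seed makes the
    thinning repeatable across renders.
    """
    rng = random.Random(seed)
    return [note for note in notes if rng.random() < keep_prob]
```

When a routine like this silently fails, the symptom is exactly what I had: every instrument playing every note, which is why nothing sounded as loose as intended.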
More to come, surely. But I'm happy with the progress.
I made this track as accompaniment for the ballad to end a friend’s stage show. It was, however, decided that her show shouldn’t end on a ballad, so the song was scrapped.
Two years later, this mix came about mostly because, in exporting the stems for a new collaboration, I realized that the drums were straight out of GarageBand. One loop, no variation. That won’t do. So I glitched out the drums a bit, and they sounded pretty good, but they no longer matched the song at all. Now they do.
Stems, you say? A new collaboration? That’d be here.