Programmed in C++ using openFrameworks and ofxFft.
In this project I created an application that lets users interact with nine different audio filters: Lowpass, Highpass, Bandpass 1 (0 dB peak gain), Bandpass 2 (normalized gain), Notch, Allpass, Peaking EQ, Lowshelf, and Highshelf. I designed all my own UI elements and wrote my own methods to filter the audio stream in real time. For more information check out the full post.
The filters were designed using a second-order recursive (biquad) difference equation with an adjustable Q-factor that lets the user specify the bandwidth of the filter’s falloff (lower Q = wider bandwidth, higher Q = narrower bandwidth). With a medium-to-high Q-factor, the characteristic frequency sweep can be heard when dragging the cutoff slider from low to high.
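To make the filter design concrete, here’s a minimal sketch of the lowpass case: a second-order (biquad) filter with adjustable cutoff and Q, using the widely used Audio EQ Cookbook coefficient formulas. The struct and member names are illustrative, not the app’s actual code:

```cpp
#include <cmath>

// Minimal second-order (biquad) lowpass sketch, Audio EQ Cookbook form.
// fc = cutoff in Hz, q = Q-factor, fs = sample rate in Hz.
struct BiquadLowpass {
    double b0, b1, b2, a1, a2;              // normalized coefficients
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;  // delay-line state

    void setup(double fc, double q, double fs) {
        const double kPi = 3.14159265358979323846;
        double w0    = 2.0 * kPi * fc / fs;
        double alpha = std::sin(w0) / (2.0 * q);  // lower q -> wider bandwidth
        double cosw0 = std::cos(w0);
        double a0    = 1.0 + alpha;               // normalize by a0
        b0 = (1.0 - cosw0) / 2.0 / a0;
        b1 = (1.0 - cosw0) / a0;
        b2 = b0;
        a1 = -2.0 * cosw0 / a0;
        a2 = (1.0 - alpha) / a0;
    }

    // Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
    //                     - a1*y[n-1] - a2*y[n-2]
    float process(float x) {
        double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return (float)y;
    }
};
```

In a real-time app, `process()` would be called once per sample inside the audio callback; the other eight filter types differ only in their coefficient formulas.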
Audio Recording / Playback:
Instead of manipulating the live audio stream coming into the microphone, I programmed a history buffer that records and plays back 15 seconds of audio. Once the audio has been recorded and is playing back, the user can scroll back and forth through it by clicking and dragging inside the audio waveform box (easiest to see in the demo videos below).
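A history buffer like this is commonly implemented as a fixed-length ring buffer. Here’s a minimal sketch of that idea; the class, method names, and the normalized scrub position are illustrative assumptions, not the app’s actual code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of a fixed-length audio history (ring) buffer: continuously
// records the most recent N seconds, then allows random-access playback
// so a scrub position can be dragged through the recording.
class HistoryBuffer {
public:
    HistoryBuffer(double seconds, int sampleRate)
        : buf_(static_cast<size_t>(seconds * sampleRate), 0.0f) {}

    // Called once per sample from the audio-input callback.
    void record(float sample) {
        buf_[writePos_] = sample;
        writePos_ = (writePos_ + 1) % buf_.size();
        filled_ = std::min(filled_ + 1, buf_.size());
    }

    // pos in [0, 1): 0 = oldest recorded sample, near 1 = newest.
    float playAt(double pos) const {
        if (filled_ == 0) return 0.0f;
        // Before the buffer wraps, the oldest sample is at index 0;
        // afterwards it sits just past the write position.
        size_t oldest = (filled_ < buf_.size()) ? 0 : writePos_;
        size_t offset = static_cast<size_t>(pos * filled_) % filled_;
        return buf_[(oldest + offset) % buf_.size()];
    }

private:
    std::vector<float> buf_;
    size_t writePos_ = 0;
    size_t filled_ = 0;
};
```

Dragging inside the waveform box would then just map the mouse x-position to `pos` and read samples from there.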
Programmed in C++ using openFrameworks and ofxFft.
In this program I explored frequency and amplitude modulation and their applications to audio programming and DSP. Users can modify the frequencies of the initial sine, FM, and AM waveforms to produce all sorts of crazy tones. In addition, I used ofxFft for the frequency-domain plotting and visualization. For this project I wanted anyone using this app to be able to understand exactly what happens in the frequency domain when we modulate a signal’s amplitude and phase. The end result is actually quite aesthetically pleasing, though I didn’t round the frequencies to note values as in my last program. I’ve completely switched over from Matlab to C++ and openFrameworks for my music and audio algorithmic prototyping and programming.
Here’s how it works:
This technique works really well for low frequency oscillation (LFO) and creating tones with rich harmonic spectrums using only 2 sine waves!
This technique works really well for tremolo effects at low frequencies and, like FM, works great for generating harmonically interesting tones at higher frequencies.
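To make both techniques concrete, here’s a minimal per-sample sketch of FM and AM applied to a sine carrier. The function and its `index`/`depth` parameter names are illustrative, not taken from the app:

```cpp
#include <cmath>
#include <vector>

// Sketch of per-sample FM and AM of a sine carrier.
// fc = carrier Hz, fm = modulator Hz,
// index = FM modulation index, depth = AM depth in [0, 1].
std::vector<float> renderFMAM(double fc, double fm, double index,
                              double depth, double fs, int nSamples) {
    const double kTwoPi = 6.283185307179586;
    std::vector<float> out(nSamples);
    for (int n = 0; n < nSamples; ++n) {
        double t = n / fs;
        double mod = std::sin(kTwoPi * fm * t);
        // FM: the modulator offsets the carrier's phase, spreading
        // energy into sidebands around fc.
        double fmSample = std::sin(kTwoPi * fc * t + index * mod);
        // AM: the modulator scales the amplitude -- tremolo at low fm,
        // sidebands at fc +/- fm at audio-rate fm.
        out[n] = (float)((1.0 + depth * mod) * 0.5 * fmSample);
    }
    return out;
}
```

Feeding blocks of this output through an FFT (e.g. ofxFft) is exactly how you’d visualize the sidebands the post describes.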
Low Pass Filter:
I also designed and programmed a 2-pole low-pass filter so the user can roll off some of the higher frequencies above the LP cutoff. Depending on the relationship between the phases of the FM, AM, and initial waveforms, the higher-frequency components can have a piercing quality to them.
In this application I generated various waveforms (sine, sawtooth, square, triangle, and white noise) in C++ using openFrameworks. The goal was to better understand how audio travels through the computer and to take a step back and understand the base waveforms before diving into more complex applications that use variations of them in additive, FM, and PM synthesis. I also gained a better grasp of UI design and real-time streaming of arrays of values. For the novice audio programmer this would be my recommended starting point for making practical sense of sound buffers, interleaved arrays (left and right channels), and synthesizer basics.
Here’s a screenshot showing the actual generation of the waveforms using phases.
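For readers following along, phase-based generation works roughly like the sketch below: a phase accumulator advances by `freq / sampleRate` each sample and wraps in [0, 1), and each waveform is a different function of that phase. The struct and names are illustrative, not the app’s actual code:

```cpp
#include <cmath>
#include <cstdlib>

// Sketch of generating the base waveforms from a running phase.
struct Oscillator {
    double phase = 0.0;  // normalized phase in [0, 1)

    // shape: 0 = sine, 1 = sawtooth, 2 = square, 3 = triangle, 4 = noise
    float next(double freq, double sampleRate, int shape) {
        const double kTwoPi = 6.283185307179586;
        float s = 0.0f;
        switch (shape) {
            case 0: s = (float)std::sin(kTwoPi * phase); break;         // sine
            case 1: s = (float)(2.0 * phase - 1.0); break;              // saw
            case 2: s = (phase < 0.5) ? 1.0f : -1.0f; break;            // square
            case 3: s = (float)(4.0 * std::fabs(phase - 0.5) - 1.0); break; // tri
            default: s = 2.0f * (float)std::rand() / RAND_MAX - 1.0f;   // noise
        }
        phase += freq / sampleRate;       // advance by one sample's worth
        if (phase >= 1.0) phase -= 1.0;   // wrap the phase
        return s;
    }
};
```

In an openFrameworks audio callback, each output sample would come from one `next()` call, written to both interleaved channels for mono playback.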
Here’s a video of the user interface:
Below is the final application you can download and play with. You will only be able to use it if you are using a Mac…
“This is me mutilating a plastic bottle.” – The inspiration behind this piece (originally a slate in one of my Foley tracks for my upcoming Watchmen sound re-design trailer). I found a lot of my previous dubstep songs were lacking in excitement, depth, and a balanced frequency spectrum. Hence, I worked especially hard on creating a multi-layered bassline in Massive to fix this problem. I’ve also used many of my own sound effects that I’ve been developing for my next trailer, including glass breaking, watermelon explosions, car sounds, and, of course, a bottle getting the hell squeezed out of it. I included a couple of the sound effects underneath the main piece to give a little breakdown of the various parts. Enjoy!
I recently took the trailer of Tangled and redid the sounds in my own style to explore sound design in regard to film and other visual media. This was a wonderful experience and I will be doing one or two more movie/game/animation trailers to show off some more of my sounds and processing techniques.
I introduce to you, my finest work yet! This piece includes all the recent production techniques I’ve learned…ghost vocal fade-ins, 5-part harmonies, vocals (singing lessons paid off!), dubstep-inspired wobbly leads, and pogo-inspired Disney samples…the whole shebang. I’d love to get some feedback!
The last two weeks have been pretty awesome. In addition to graduating from Johns Hopkins (on the JHU website’s front page you can see me shaking hands with President Daniels!), I’ve also had time to code a ton of stuff in C++ and figure out how to translate some of my Matlab audio algorithms into more usable code for future projects on smartphones and such. Today I stumbled upon a dance beat I created two years ago when I first learned how to use my Motif’s arpeggiator, and decided to turn that 10-second clip into something longer: I cut it up into pieces, remixed sections, reversed stuff, created a fat bass, and shrunk and stretched it into a new form. Let me know what you guys think!
I finally got my pitch and beat detections working in Matlab and I’ve made a visualization of it using the Arduino Uno and an array of LEDs.
To understand what you are seeing…
Here’s the video!
First the song plays from a separate sound source (the room’s speakers) and the computer takes in a couple seconds of audio in Matlab (kinda like Shazam) and creates the audio object. The board initializes by turning all the LEDs off and on, and then the yellow light signifies that the algorithms are calculating the beats per minute (BPM) and key (fundamental frequency). Once the yellow light turns off, the results of the algorithms are displayed: the tempo light blinks at the BPM, and the last four LEDs show one of the twelve key patterns described above.
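One common way to estimate the key’s fundamental frequency from a recorded buffer is time-domain autocorrelation: the lag where the signal best correlates with a shifted copy of itself corresponds to the pitch period. The sketch below is an illustrative C++ version of that general idea, not the actual Matlab algorithm used here:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch: estimate the fundamental frequency of a buffer by finding the
// lag (within a plausible pitch range) that maximizes autocorrelation.
double estimateF0(const std::vector<float>& x, double fs,
                  double fMin = 50.0, double fMax = 1000.0) {
    int lagMin = (int)(fs / fMax);  // shortest period to consider
    int lagMax = (int)(fs / fMin);  // longest period to consider
    int bestLag = lagMin;
    double bestR = -1.0;
    for (int lag = lagMin; lag <= lagMax && lag < (int)x.size(); ++lag) {
        double r = 0.0;
        for (size_t n = 0; n + lag < x.size(); ++n)
            r += x[n] * x[n + lag];  // correlation at this lag
        if (r > bestR) { bestR = r; bestLag = lag; }
    }
    return fs / bestLag;  // period in samples -> frequency in Hz
}
```

A BPM estimate can be built the same way, by autocorrelating an onset-strength envelope over lags of roughly 0.3–2 seconds instead of raw samples.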
So here’s a preview of the kind of vocals I’ll be doing in the new song. I haven’t recorded my voice in 3 years so it’s kinda crazy to be singing again! I won’t be able to complete the song today because of academia, but it’s definitely coming along slowly…(I may increase the tempo and mix the whole thing differently in the end)
Just wanted to announce the release of a new song I’ll be uploading by tomorrow night, with more complexity than my previous pieces. It incorporates a couple of new drum processing techniques, electric guitar power chords and shredding, and more foley-type percussive recordings of my environment to take a break from beatboxing. As usual, it will have a medley of dubstep elements, acoustic instruments, and retro synths (custom, of course). I want to offer more rhythmic variation and a more well-defined song structure (a more typical rock or pop Intro -> Verse -> Chorus type of thang).