Hello! My name is Haris, and I'm a full-stack software developer specializing in iOS and OSX development, web applications, and audio programming. On my site you'll find a portfolio of my projects, both commercial and open source.
Mentioned is a note-taking app that lets you quickly capture recommendations as notes and have them automatically looked up for you.
I am the sole founder and developer of Mentioned. I am currently adding new features and smarter recommendations to the web service and extending the application to additional platforms, including the web and Android.
For Detour I created a shared audio engine used by the iOS and Mac applications to perform dynamic, location-based audio playback with multi-effect and multi-track support (up to 32 simultaneous tracks). It was built on top of my EZAudio framework and extended to respond to location triggers.
In addition, I wrote the Mac application, Descript, used by the content team to create and simulate Detours in a movie-script-like fashion, with the ability to drop in location triggers, music, voice-overs, sound effects, images, and comments. It also features a git-like version control tool that allows authors to check out remote Detours and collaborate simultaneously.
I also helped with the iOS app, integrating Descript bundles smoothly and extending the audio functionality for real-time playback support.
For Beats Music I wrote two separate applications, one with the Special Projects team for South By Southwest (SXSW) 2014, and one in collaboration with the Analytics team to provide a dashboard for the platform's metrics.
The first project was a Mac application built to demo the new Beats Music Developer API. The application was an audio player, inspired by the old-school Winamp players, that allowed users to search and stream audio tracks from the Beats Music platform.
This served as one of the official example projects for the Beats Music Developer API and premiered at the Beats Music tent at SXSW 2014 in Austin, Texas. The project is still live and used by developers around the world learning how to use the Beats Music Developer API to develop native Mac and iOS applications.
The second project was a Mac application that provided a dashboard for viewing real-time metrics for the Beats Music platform, used by various members of the team including Dr. Dre, Ian Rogers, and Jimmy Iovine.
The application provided a native OSX interface with a dynamic layout to accommodate screen sizes ranging from 13" MacBook Airs to 60" displays. Working alongside the Analytics team, we simultaneously created a new backend to provide the real-time data for the application to consume.
Shortly after my time at Beats, the company was acquired by Apple and integrated into the new Apple Music streaming service.
A simple, intuitive audio framework for iOS and OSX.
EZAudio is a popular iOS and Mac audio framework I created after leaving Pulselocker in the fall of 2013. I initially wrote EZAudio as a playground where I could quickly use common audio components at a high level while still having access to the underlying audio data. For example: getting audio input from a device's microphone in one line of code, reading compressed and uncompressed audio files in a consistent way, or creating great-looking waveforms.
As of the 1.0.0 release, EZAudio contains the following components:
EZAudioDevice - available input/output devices
EZAudioFile - reading from audio files and waveform data generator
EZAudioPlayer - flexible audio file player
EZMicrophone - generic audio input
EZRecorder - writing to audio files
EZOutput - generic audio output
EZAudioPlot - CoreGraphics based waveform graph
EZAudioPlotGL - OpenGL based waveform graph
EZAudioFFT & EZAudioFFTRolling - flexible FFTs using Accelerate
EZAudioFloatConverter - converting audio data to a stereo float format
Getting audio input and plotting the samples (CoreGraphics)
Getting audio input and plotting the samples (OpenGL)
Playing an audio file and plotting the samples, with the ability to seek through the file, adjust the volume, and modify the length of the rolling waveform
Recording audio input to an audio file and playing it back
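The first demo, microphone input feeding a CoreGraphics waveform plot, can be sketched roughly like this. This is a sketch based on the 1.x API; the delegate and plot setup details may differ slightly depending on the EZAudio version you're using:

```objc
#import "EZAudio.h"

@interface PlotViewController () <EZMicrophoneDelegate>
@property (nonatomic, strong) EZMicrophone *microphone;
@property (nonatomic, weak) IBOutlet EZAudioPlot *audioPlot;
@end

@implementation PlotViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Rolling plot type gives the scrolling waveform history.
    self.audioPlot.plotType = EZPlotTypeRolling;
    // One line to get microphone input delivered to this delegate.
    self.microphone = [EZMicrophone microphoneWithDelegate:self];
    [self.microphone startFetchingAudio];
}

// Called on the audio thread as buffers arrive, so hop to the
// main queue before touching any UI.
- (void)microphone:(EZMicrophone *)microphone
  hasAudioReceived:(float **)buffer
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    __weak typeof(self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
        [weakSelf.audioPlot updateBuffer:buffer[0]
                          withBufferSize:bufferSize];
    });
}

@end
```

Swapping the EZAudioPlot for an EZAudioPlotGL follows the same shape and covers the OpenGL variant of the demo.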
Core Audio is awesome and insanely flexible, but, like OpenGL for graphics, it tends to scare away new iOS programmers with its low-level C interface, sparse documentation, and cryptic error codes. Most of the examples Apple provided with user interfaces heavily utilized the C++ CA-prefixed Core Audio public utility classes and led you to believe that anything you wrote would require a fair share of C++ and Objective-C++ files to achieve what you'd see in a real-world application. EZAudio was designed to be the Objective-C layer in between AVFoundation, which was too high-level to modify and visualize the audio samples directly, and the Audio Unit API, which was too low-level for the casual developer to make use of in a timely fashion.
While working on EZAudio over the last few years I dealt with a good amount of frustration while trying to better understand Audio Units. This included constant threading issues between the audio and UI threads (which is why Apple provides so many command-line example apps), differences in how components behave in isolation versus when connected, and managing the API to keep it easy to use yet flexible enough to allow customized behavior. Even the underlying Audio Unit architecture changed as of 1.0.0, using AUGraphs instead of singular Audio Units in specific cases to deal with issues of format conversion. Overall, however, EZAudio was a joy to develop, and I'm glad so many developers have made good use of it in their applications. If you have used EZAudio to build anything cool, please send it to me so I can add it to the wall of fame.