The video below is documentation for a live video compositor project developed in collaboration with James Moore; I was the sole AS3 programmer on the project. The AIR app takes advantage of FFMPEG to break videos uploaded to the system down into individual frames, and eventually to render the finished composition back out (the render stage is still in development). FFMPEG is accessed via the NativeProcess functionality of the AIR runtime. The overall installation was set up using a Matrox TripleHead driving three projectors, so the project is a lot bigger than it looks in the video.
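For a rough idea of what the NativeProcess call does under the hood, here is a minimal Python sketch (standing in for the AS3 code, which isn't shown here) that builds the FFMPEG argument list for dumping a video out as JPEG frames. The file names are illustrative, not taken from the actual project.

```python
# Sketch only: the real app launches ffmpeg from AS3 via NativeProcess;
# this mirrors the command line it would construct. Paths are hypothetical.

def frame_extract_cmd(src, out_pattern="frames/frame%04d.jpg"):
    """Build an ffmpeg argument list that writes every frame as a JPEG.

    ffmpeg expands %04d in the output pattern into a zero-padded frame
    index (frame0001.jpg, frame0002.jpg, ...).
    """
    return ["ffmpeg", "-i", src, out_pattern]

# To actually run it (requires ffmpeg on the PATH):
# import subprocess
# subprocess.run(frame_extract_cmd("clip.mov"), check=True)
```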
I really enjoyed the collaboration with James. It was nice to take a back seat in the decision making and just concentrate on how the application would be coded.
The left projection functions as a media viewer, allowing the user to browse through files in the assets folder. The centre screen allows the user to edit the content before it is played back. The edit choices are stylised and restrictive to fit in with the overall aesthetic of the system, which is based around a minimalist grid. The right-hand projection shows a playback of the video being edited, with the changes appearing in real time.
A beautiful way of getting your music out into the community.
Recent experiments with the AIR runtime, NativeProcess and FFMPEG got me thinking about digital video. The digital strands of narrative intertwine in a confusion of noughts and ones. Auditory vibes harmonise with the luminosity of performing pixels, conducted by semiconductors fluent in machine. FFMPEG is a decoder/encoder for digital content capable of converting one video format to another, separating audio from video, breaking video up into frames as JPEGs and much more.

One of its most basic features, and the one that interests me most, is the -s (size) parameter, which FFMPEG uses to scale the video as it's being converted. Being a person of great ability in the art of procrastination, instead of the task at hand I began contemplating the consequences of encoding a video down to a containing dimension of 1×1 pixels.

After some experimentation I disproved my first naive/romantic hypothesis of what this 1×1 video might produce. Without considering the repercussions in depth, I had imagined the scaling would produce a colour narrative: a timeline of mood brought forth by hue, saturation and brightness, presented by a single pixel against an axis of time. In reality, FFMPEG would only scale the video to dimensions that are multiples of 2, so next I tried a 2×2 pixel square. The notion of a colour narrative was still far out of reach: once the encoding of the 2×2 video was complete, the playback was a grayscale blur that was definitely not a consequence of the colours within the video. I decided to try the process one more time at 4×4 pixels, so that more of the colour detail was kept. I was extremely pleased with the result: at 4×4 the mood of the video was very apparent, even though the detail had definitely dissolved. I enjoyed the extra bonus that the audio had been preserved to complement the mood of the frames.
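The downscale-then-upscale experiment boils down to two FFMPEG invocations using the -s parameter. The sketch below (Python again, file names hypothetical) builds both command lines; the comment notes the even-dimension constraint that made the 1×1 attempt fail.

```python
def scale_cmd(src, width, height, dst):
    """Build an ffmpeg argument list that rescales src to width x height.

    ffmpeg's -s WxH option sets the output frame size; many encoders
    reject odd dimensions, which is why a 1x1 target was a non-starter.
    """
    return ["ffmpeg", "-i", src, "-s", "%dx%d" % (width, height), dst]

# Crush the clip down to the 4x4 "mood" version, then blow the result
# back up to 320x240 for viewing (run each via subprocess with ffmpeg
# on the PATH):
down = scale_cmd("clip.mov", 4, 4, "tiny.mov")
up = scale_cmd("tiny.mov", 320, 240, "mood.mov")
```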
I intend to follow this up with some experimental data visualisations of the pixel colour over time, very similar to this example by Brendan Dawes, but for now see below for the result of the 4×4 square scaled back up to 320×240.