
Melvin the Magical Mixed Media Machine (or just Melvin the Machine) is best described as a Rube Goldberg machine with a twist. Besides doing what Rube Goldberg machines do best – performing a simple task as inefficiently as possible, often in the form of a chain reaction – Melvin has an identity. In fact, the only purpose of this machine is promoting its own identity.

SITE

Yet another Rube Goldberg machine, but I really love the social media twist on this one.


Well worth a watch.

Our camera uses 36 fixed-focus 2 megapixel mobile phone camera modules. The camera modules are mounted in a robust, 3D-printed, ball-shaped enclosure that is padded with foam and handles just like a ball. Our camera contains an accelerometer which we use to measure launch acceleration. Integration lets us predict rise time to the highest point, where we trigger the exposure. After catching the ball camera, pictures are downloaded in seconds using USB and automatically shown in our spherical panoramic viewer. This lets users interactively explore a full representation of the captured environment.
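The trigger logic described above – integrate the launch acceleration to get release velocity, then predict the time to the highest point – can be sketched in a few lines. This is my own reconstruction of the idea, not the project's actual firmware; the sample rate and the constant-acceleration throw are assumptions for illustration.

```python
# Sketch of the ball camera's exposure trigger (reconstruction, not the
# authors' code). Integrating the accelerometer over the launch phase
# gives the release velocity v; for a vertical throw the ball reaches
# its apex after t = v / g, which is when the exposure would fire.

G = 9.81  # gravitational acceleration, m/s^2

def launch_velocity(samples, dt):
    """Integrate accelerometer readings (m/s^2, gravity already removed)
    taken every dt seconds during the throw."""
    return sum(a * dt for a in samples)

def rise_time(v):
    """Time for a vertically launched ball to reach its highest point."""
    return v / G

# Toy launch: 0.1 s of constant 50 m/s^2 sampled at 1 kHz -> v = 5 m/s
samples = [50.0] * 100
v = launch_velocity(samples, 0.001)   # 5.0 m/s at release
t = rise_time(v)                      # ~0.51 s until the shutter fires
```

In the real device the integration would run on live sensor data and stop at the moment of release (when acceleration drops to free fall), but the arithmetic is the same.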

 

Amazing idea

Zach Lieberman is one of the minds behind the openFrameworks software. Here he is talking about some of his projects. The main reason I have posted this video is for the project that is introduced six minutes in. Zach has been collaborating with an old-school graffiti artist who goes by the name of Tempt. The twist is that Tempt now suffers from Lou Gehrig’s disease and is paralysed. Zach Lieberman and a few others created a system for tracking Tempt’s eyes and relaying the movements back to a drawing package. The accuracy is really quite amazing and the project is very inspirational.

 

Life Writer by Christa Sommerer and Laurent Mignonneau.

Website

We are artists working since 1991 on the creation of interactive computer installations for which we design metaphoric, emotional, natural, intuitive and multi-modal interfaces. The interactive experiences we create are situated between art, design, entertainment and edutainment. One of our key concepts is to create interactive artworks that constantly change and evolve and adapt to the users’ interaction input [1]. For several of our interactive systems we have therefore applied Artificial Life and Complex Systems principles and used genetic programming to create open-ended systems that can evolve and develop over time through user interaction.


Recent experiments with the AIR runtime environment, native processes and FFMPEG got me thinking about digital video. The digital strands of narrative intertwine in a confusion of noughts and ones; auditory vibes harmonise with the luminosity of performing pixels, conducted by semiconductors fluent in machine. FFMPEG is a decoder/encoder for digital content capable of converting one video format to another, separating audio from video, breaking video up into frames as JPEGs and much more. One of the most basic features, and the one that interests me most, is -s (the size parameter), which FFMPEG uses to scale the video as it is being converted.

Being a person of great ability in the art of procrastination, instead of the task at hand I began contemplating the consequence of encoding a video into a containing dimension of 1×1 pixels. After some experimentation I disproved my first naive/romantic hypothesis of what this 1×1 video might produce. Without considering the repercussions in depth, I had thought that this scaling might produce a colour narrative: a timeline of mood brought forth by hue, saturation and brightness, presented by a single pixel against an axis of time. The reality is that FFMPEG would only resize and scale the video in multiples of 2, so next I tried a 2×2 pixel square. Still the notion of a colour narrative was far out of reach: once encoding of the 2×2 video was complete, the playback result was a grayscale blur, and definitely not a consequence of the colours within the video. I decided to try the process one more time with a 4×4 pixel result so that more of the colour detail was kept. I was extremely pleased with the outcome: at 4×4 the mood of the video was very apparent, though the detail had definitely been dissolved. As an extra bonus, the audio had been preserved to complement the mood of the frames.
I intend to follow this up with some experimental data visualisations of the pixel colour over time, very similar to this example by Brendan Dawes, but for now see below for the result of the 4×4 square scaled back up to 320×240.
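In the meantime, here is a minimal sketch of the colour-narrative idea: reduce each frame to a single average colour, giving one (r, g, b) value per point on the timeline. The frame data below is synthetic; in practice you would first export tiny frames with something like `ffmpeg -i input.mp4 -s 4x4 frames/%04d.png` and load them with an image library (the file names and sizes are just placeholders).

```python
# Collapse each video frame to one average colour, producing a
# "colour narrative": a list of (r, g, b) values over time.

def average_colour(frame):
    """Mean (r, g, b) of a frame given as a list of pixel tuples."""
    n = len(frame)
    r = sum(p[0] for p in frame) / n
    g = sum(p[1] for p in frame) / n
    b = sum(p[2] for p in frame) / n
    return (r, g, b)

def colour_narrative(frames):
    """One average colour per frame, in playback order."""
    return [average_colour(f) for f in frames]

# Two toy 2x2 frames: a red-ish one followed by a blue-ish one.
frames = [
    [(255, 0, 0), (255, 0, 0), (128, 0, 0), (128, 0, 0)],
    [(0, 0, 255), (0, 0, 255), (0, 0, 128), (0, 0, 128)],
]
timeline = colour_narrative(frames)
# timeline -> [(191.5, 0.0, 0.0), (0.0, 0.0, 191.5)]
```

Plotting each tuple as a thin vertical stripe against time would give exactly the hue/brightness timeline the 1×1 experiment was chasing.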

I love Kate’s outlook on life. I think we should all try to be more glacier-like!

Kate Hartman @ TED Talks

Here’s a home CNC/drawing project with a twist. The drawing machine uses the time it takes for ink to bleed into blotting paper to create a grayscale image. More information can be found here, along with some other awesome projects like a street art quadcopter and a gesture-driven drawing machine.

Over the next academic year I intend to show a lot more of the UCF student work on this blog. The staff and students put their heart and soul into the work that is produced at Falmouth. Personally, I get a massive sense of pride from what I do and want to share the results with the world. So here is the first of many: a video documenting the first Kinect-based student project to come from UCF.

I am pretty sure that I have posted about this technique before, but this is a particularly good example. The waterfall is by far the biggest installation of this type I have seen, and the lighting really emphasises the lines produced by the falling water.

Another example only this time wrapped into a cylinder:
