
Category Archives: video

 

A beautiful way of getting your music out into the community.

Android 1 – iPhone 0 (eventually sorted though)

I have used PhoneGap before without the aid of Dreamweaver, and although it works well the setup process is a little complex and I never really got it working for iPhone on my Mac. A lot of students have been asking for training sessions on writing apps, so I thought that the CS5.5 / PhoneGap integration might streamline the whole process. As with most things Adobe does, the workflow is very simple, and when using the jQuery Mobile template and exporting out to Android everything worked really well. The problems came when I started trying to export for the iPhone (surprise, surprise). The main issue was that once I pointed the setting to the developer folder at the root of my hard drive and clicked Build and Emulate I got an error:

The build of application failed to complete successfully

After hunting around in the settings for a while I decided to hit up Google. I found the solution in the Adobe forums; here is the link:

http://forums.adobe.com/message/4170843#4170843

I was unsure if I was supposed to change the camera.h file at

/Users/alcwynparker/Documents/DW_NAF

or

/Applications/Adobe Dreamweaver CS5.5/Configuration/NativeAppFramework/DWPhoneGap

So I changed them both for good measure and everything now works fine.

Just bought one of these!

I shall be posting videos of it up here as soon as it arrives!

EXCITED

Here are some useful links that I have been using for reference:

How to get a video feed from the new toy (manual insists the quadrocopter is not a toy!)

http://rcexplorer.se/Educational/FPV/FPV.html

A really informative forum thread

http://www.rcgroups.com/forums/showthread.php?t=1171103

The Conrad 450 ARF Quadrocopter manual

http://www.rcgroups.com/forums/showa…4&d=1270284725

ROUGH NOTES

Possible Camera setup 1:

Other configs:
500 metres ready to use camera:

Roughly two years ago there was big hype around Augmented Reality. The main driving force behind it was the shift from augmented reality being bound to desktop software to the ARToolkit being ported to Flash and therefore able to run in the browser. I jumped on the bandwagon and wrote some samples in AS3 using the FLARToolkit, an AS3 port of the ARToolkit, and the results were very nice. That's pretty much where it ended.

In the last couple of weeks I have had more and more requests for code that involves augmented reality, most notably a project involving a wide open space being mapped as an AR maze. The challenge is to have a persistent 3D virtual maze to walk around. I am not sure if it is even possible with the equipment we have here, but I have begun testing some ideas.

The video above is the first experiment in a long time involving AR. I wanted to check how well Processing would handle the ARToolkit instead of Flash. I used the wrapper class of the NyARToolkit as a base, but I found the adaptation by cpbotha.net, who has added multiple-marker functionality to the library. The end result is a lot slower than I remember AS3 being, but I am loading an .STL file with 97,000 triangles. I am using the unlekker library to load the .STL file, but I am thinking about using the OBJLoader library instead because of the texture support.

The .STL was a download from thingiverse.com. I can’t find the name of the person to credit anymore, but the file is called doneShell.stl.

Recent experiments with the AIR runtime environment, native process and FFMPEG got me thinking about digital video. The digital strands of narrative intertwine in a confusion of noughts and ones. Auditory vibes harmonise with the luminosity of performing pixels conducted by semiconductors fluent in machine. FFMPEG is a decoder/encoder for digital content capable of converting one video format to another, separating audio from video, breaking video up into frames as JPEGs and so much more. One of the most basic features that interests me the most about FFMPEG is -s (the size parameter), which FFMPEG uses to scale the video as it is being converted.

As a person of great ability in the art of procrastination, instead of the task in hand I began contemplating the consequence of encoding a video into a containing dimension of 1×1 pixels. After some experimentation I disproved my first naive/romantic hypothesis of what this 1×1 video might produce. Without considering the repercussions in depth, I thought that the result of this scaling might produce a colour narrative: a timeline of mood brought forth by hue, saturation and brightness, presented by a single pixel against an axis of time. The reality is that FFMPEG is only able to resize and scale video in factors of 2, so next I tried a 2×2 pixel square. Still the notion of a colour narrative was far out of reach, as once encoding of the 2×2 video was complete the playback result was a grayscale blur, definitely not a consequence of the colours within the video.

I decided to try the process one more time, only with a 4×4 pixel result so that more of the colour detail was kept. I was extremely pleased with the result: at 4×4 the mood of the video was very apparent, but the detail had definitely been dissolved. I enjoyed the extra bonus that the audio had been preserved to complement the mood of the frames. I intend to follow this up with some experimental data visualisations of the pixel colour over time, very similar to this example by Brendan Dawes, but for now see below for the result of the 4×4 square scaled back up to 320×240.
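For reference, here is a minimal sketch of that pipeline as two FFMPEG calls driven from Python's subprocess module. The file names are placeholders, and the nearest-neighbour flag on the upscale is my own assumption (to keep the 4×4 blocks as crisp squares); the post doesn't say how the final 320×240 version was produced.

import subprocess

SRC = "input.mp4"         # placeholder source clip
TINY = "tiny_4x4.mp4"     # the 4x4 intermediate
OUT = "mood_320x240.mp4"  # the blown-back-up result

# Step 1: scale the source down to a 4x4 pixel square using -s.
# The audio stream is re-encoded alongside the video, so the soundtrack survives.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-s", "4x4", TINY], check=True)

# Step 2: scale the 4x4 video back up to 320x240. Nearest-neighbour
# scaling (-sws_flags neighbor) keeps each pixel as a hard-edged block
# rather than smearing them back into a blur.
subprocess.run(["ffmpeg", "-y", "-i", TINY, "-s", "320x240",
                "-sws_flags", "neighbor", OUT], check=True)

For the colour-over-time visualisation, something along the lines of "ffmpeg -i tiny_4x4.mp4 frames/%06d.png" would dump every 4×4 frame as an image ready to be sampled and plotted.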
