A beautiful way of getting your music out into the community.
Recent experiments with the AIR runtime environment, native process and FFMPEG got me thinking about digital video. The digital strands of narrative intertwine in a confusion of noughts and ones. Auditory vibes harmonise with the luminosity of performing pixels conducted by semiconductors fluent in machine.

FFMPEG is a decoder/encoder for digital content capable of converting one video format to another, separating audio from video, breaking video up into frames as JPEGs and so much more. One of the most basic features that interests me the most about FFMPEG is -s (the size parameter), which FFMPEG uses to scale the video as it's being converted. As a person of great ability in the art of procrastination, instead of the task in hand I began contemplating the consequence of encoding a video into a containing dimension of 1×1 pixels.

After some experimentation I disproved my first naive/romantic hypothesis of what this 1×1 video might produce. Without considering the repercussions in depth, I had thought that the result of this scaling might produce a colour narrative: a timeline of mood brought forth by hue, saturation and brightness, presented by a single pixel against an axis of time. The reality is that FFMPEG is only able to resize and scale video in factors of 2, so next I tried a 2×2 pixel square. Still the notion of a colour narrative was far out of reach: once encoding of the 2×2 video was complete, the playback result was a grayscale blur, definitely not a consequence of the colours within the video. I decided I would try the process one more time, only with a 4×4 pixel result so that more of the colour detail was kept. I was extremely pleased with the result: at 4×4 the mood of the video was very apparent, though the detail had definitely been dissolved. I enjoyed the extra bonus that the audio had been preserved to complement the mood of the frames.
I intend to follow this up with some experimental data visualisations of the pixel colour over time very similar to this example by Brendan Dawes but for now see below for the result of the 4×4 square scaled back up to 320×240.
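For anyone wanting to reproduce the experiment, the whole pipeline is just two FFMPEG passes. Here is a rough Python sketch that builds the commands; the filenames are placeholders, and I've used the -vf scale filter (which does the same job as -s) so the upscale can ask for nearest-neighbour scaling to keep the blocks blocky:

```python
import shutil
import subprocess

def build_scale_cmd(src, dst, width, height, upscale_filter=None):
    """Build an ffmpeg argv that rescales src to width x height.

    Uses the -vf scale filter (equivalent to the -s flag mentioned
    in the post). Filenames are placeholders for your own clips.
    """
    vf = f"scale={width}:{height}"
    if upscale_filter:
        # e.g. "neighbor" keeps hard pixel edges instead of smoothing
        vf += f":flags={upscale_filter}"
    return ["ffmpeg", "-y", "-i", src, "-vf", vf, dst]

# Shrink a clip to 4x4, then blow it back up to 320x240.
down = build_scale_cmd("input.mp4", "tiny.mp4", 4, 4)
up = build_scale_cmd("tiny.mp4", "blocky.mp4", 320, 240,
                     upscale_filter="neighbor")

if shutil.which("ffmpeg"):  # only run if ffmpeg is actually installed
    subprocess.run(down, check=True)
    subprocess.run(up, check=True)
```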
Recently I attended a 3-day training session on building home-brew CNC machines, run by Dave Turtle from the RCA. It was an amazing 3 days and I will post the photos and videos of the results as soon as possible. One of the hurdles that came crashing in on day two was the limitation of running the kit from the parallel port. There really aren't that many computers these days that still roll out with a parallel port as standard, and my nice shiny Mac certainly does not come equipped with one. I was 100% sure the solution to this problem was the trusty Arduino. There have been many projects where the Arduino has already been used as the heart of a CNC machine; the first that comes to mind is the RepRap. There is also another CNC project called Contraptor which utilises the Arduino at its heart. The home site for the Contraptor project has a lot of useful information, and it was there I came across Grbl.
I tried the RepRap G-code interpreter, FiveD, but I could not get it to compile for the Arduino (any tips would be gratefully received). I also tried a few other interpreters with varying success: teapot, rsteppercontrol and arduino-gcode-interpreter-new. I really struggled, probably partly due to my lack of understanding when it comes to G-code. I had no success over the three days of training, but I did find Grbl, though I didn't have the kit to test it. Grbl seemed like a very simple solution, but the main hurdle when it comes to implementing it is that you need to use avrdude to flash it to the Arduino; you can't just send it via USB direct to the Arduino. I had never done this before, so I let the Arduino rest for the remainder of the training with a mind to try it as soon as possible.
Today I started messing around with flashing Grbl to the Arduino and was caught out by several issues which slowed my progress. There are already several sites with information on how to do this, but I found I needed bits from all my sources to get the job done. I thought I would document my process in case anyone else finds it useful.
First off the sites that proved to be most useful:
I started by downloading the prebuilt hex files for Grbl here.
I then downloaded CrossPack-AVR from here, which installs a version of AVRDUDE (used to handle flashing the data to the Arduino).
The Arduino that is going to act as a programmer needs to have the programming firmware uploaded to it. This is a very simple task as it is all built into the Arduino IDE. Open up the Arduino IDE, go to File -> Examples -> ArduinoISP, then upload the sketch to the Arduino. The Arduino is now fully set up to flash another Arduino.
The next step was to wire one Arduino to another to use as a programmer. I found the wiring diagram from Sparkfun here, and the picture below is my version of the wiring. One thing that Sparkfun didn't explain is that you must disable auto-reset on serial connection; I found out how to do this here. I could not find a cable to suit, so unfortunately I had to solder directly to the ICSP headers (not pretty).
Now all the setup is done it is time to put AVRDUDE to work; on a Mac this is done via Terminal.
I found the terminal commands for AVRDUDE on Sparkfun here, about halfway down the page.
Command one (make sure the Arduino is ready for Grbl):
avrdude -P /dev/tty.usbserial-A9007VP6 -b 19200 -c avrisp -p m328p -v -e -U efuse:w:0x05:m -U hfuse:w:0xD6:m -U lfuse:w:0xFF:m
Change /dev/tty.usbserial-A9007VP6 to the name of the serial port that your Arduino is plugged into.
Command two (load Grbl):
avrdude -P /dev/tty.usbserial-A9007VP6 -b 19200 -c avrisp -p m328p -v -e -U flash:w:grbl.hex -U lock:w:0x0F:m
Change grbl.hex to the location of the Grbl hex file on your computer.
Hopefully that's it: Grbl is now installed!
If you want to test that Grbl is working properly, you can download CoolTerm, a Mac GUI for sending and receiving information over serial ports.
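The same sanity check can be scripted. Here is only a sketch of the idea: it assumes the pyserial package, reuses the placeholder port name from the commands above, and guesses at Grbl's baud rate, so check both against your own setup:

```python
# A minimal sketch of talking to Grbl over serial from Python.
# The port name and baud rate below are assumptions for illustration.
import time

def frame(line):
    """Grbl reads one command per newline-terminated line."""
    return line.strip() + "\n"

def send_test_command(port="/dev/tty.usbserial-A9007VP6", baud=9600):
    import serial  # pyserial: pip install pyserial
    with serial.Serial(port, baud, timeout=2) as conn:
        time.sleep(2)  # give Grbl a moment to finish booting
        conn.write(frame("$").encode())  # '$' asks Grbl for its help text
        print(conn.read(200).decode(errors="replace"))
```

If Grbl answers the `$` command with its help/settings text, the flash worked.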
We were very lucky recently to have Kim Cascone visit UCF.
Wikipedia says it better than I ever could: Kim Cascone
Kim was a very intense and provocative speaker who, there was no doubt in my mind, had tremendous passion for his work and field of expertise. He seemed to be hyper-observant, at a level where no detail was left unscrutinised. He took us on journeys through past memories, recalling the tiniest details, from the intrusive tones of coins dropping onto the hard sidewalk to the noise of birds, agitated and overactive. I was really impressed by his work with World Building. Never before had I thought about the complexities of the sound design behind films. Kim explained what he called scope and focus as key concepts to understanding the situation of the listener. From his explanation, my interpretation of these concepts goes as follows:
Focus is a directional aim of attention from the listener on certain points in the environment. Scope is almost like the circumference around the focus point: the bigger the scope, the larger the area in which the listener is able to hear sounds. I am sure that my definition is not quite right, but the way I imagine this to look visually is almost like a cone protruding away from the listener, with the wide end furthest away. As the scope and focus get larger and less specific, the end of the cone becomes larger, allowing a lot more sounds to be heard. If the cone's base becomes smaller then the listener can really focus in on very specific sounds.
I was very interested in the battle that seemed to be persistent in Kim's work between the auditory field, which is 3D, and the stereo recording, which exists only in 2D. Kim used the term 'grain' to explain how, if done well, a stereo 2D signal can be amplified into a 3D experience by the listener. Grain follows the listener: past experiences and sensations amplify and reconstruct the 2D signal.
A small blog entry won’t do this man justice so if you ever get the chance to see Kim talk then it is well worth going.
For a while now I have been messing around with what I have called a domestic appliance sequencer. I think I have photos of the start of it on this blog. I have not had time to improve it or work on the software, and now YouTube user ArcAttack has beaten me to it. The project is very effective though; there are some very impressive percussive sounds being produced.
Touché, Mr ArcAttack!
A morning spent murdering Nirvana's 'Come as You Are'.
A couple of months ago a good friend bought me a Stylophone for my birthday. I had a blast snarling out noises that were close to songs we all know and love. Unfortunately, due to my clumsy, inaccurate nature, I never managed to play a whole song at the correct tempo without playing wrong notes. The novelty soon wore off and the Stylophone was left to gather dust on a shelf in my office.
I have a list of tasks as long as my arm to do at work, but this morning when I got in, motivation levels were at an all-time low. Instead of doing anything useful I decided it was time to put the Stylophone to good use. Knowing that my ability to manipulate the stylus over those circuit-board keys was never going to improve, I decided I would cheat and automate the circuit connections that are made when the stylus connects with one of the keys.
The video attached shows the result of today's procrastination. So far I have only automated 10 of the keys direct from the Arduino. If I get time in the near future I will extend its functionality using an 8-bit shift register so that the Arduino can play all the keys. I also intend to write a Processing sketch interface so that inputting songs is easier and more intuitive (and not murderous to classic rock songs).
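The idea behind the automation is simple enough to sketch in a few lines of Python. The note-to-pin mapping below is made up for illustration; the real assignments depend entirely on how the keys are wired:

```python
# A rough model of the automation: each Stylophone key is wired to an
# Arduino output pin (these pin numbers are placeholders). Given a tune
# as (note, beats) pairs, work out which pin to close and for how long.
NOTE_TO_PIN = {"A": 2, "A#": 3, "B": 4, "C": 5, "C#": 6,
               "D": 7, "D#": 8, "E": 9, "F": 10, "F#": 11}

def schedule(tune, bpm=120):
    """Return (pin, seconds) steps in playing order."""
    beat = 60.0 / bpm
    return [(NOTE_TO_PIN[note], beats * beat) for note, beats in tune]

# Each step closes one key's circuit for its duration, as the stylus would.
steps = schedule([("A", 1), ("C", 0.5), ("E", 0.5)], bpm=120)
```

On the Arduino side, each step becomes a digitalWrite HIGH on the key's pin, a delay for the step's duration, then a digitalWrite LOW.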
Written using Swype on HTC feature HD
Just found this video on nextNature.net:
I love the lengths Diego goes to in order to get interesting sounds.
I recently stumbled upon a rather interesting concept defined by Wolfgang Köhler in 1929. The bouba/kiki effect is based on an experiment whereby viewers are shown two shapes: one shape is curvy and cloud-like, and the other is jagged and angular. The viewer is then asked which of the two shapes should be called bouba and which should be called kiki. The results were very conclusive, in that 95% to 98% of the people asked said the curvier shape should be called bouba. This result points towards the notion of a synaesthetic remapping of the senses, where audio characteristics have an underlying link to visual characteristics.
It's not important; I just thought it was interesting.
This project was built using Arduino and processing.org. The project has a beautifully unique and playful take on sound manipulation. I especially love the bucket, but you will have to watch it to know what I mean!
I have been busy running through ideas for automated instruments I could use to enrich my performances at open mic nights. One of the main points of interest for me is percussion, as it is usually quite overlooked at open mic nights apart from the occasional set of bongos. I have been drawing up sketches for a snare drum played using dropping marbles and also for a cassette player hack. The main hurdle for any automated instrument is how it will be sequenced to play itself. Last night I sat down for a while and coded a very, very basic sequencer in Processing that controlled an Arduino with Firmata installed. There is nothing fancy about the code, but I believe this will be a good solid starting point for most of the automated instruments I could ever imagine. There are some images below of the basic setup and a video of the sequencer on the screen and the Arduino carrying out the sequence using LEDs. I am quite happy to publish the source code on request.
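The sequencer itself boils down to very little: a step grid, a BPM clock, and a callback that fires the pins. Below is a Python sketch of the same idea; the real version is a Processing sketch driving Firmata, so the pin numbers here are arbitrary and the callback stands in for the digitalWrite calls:

```python
# A step sequencer reduced to its core: one row of on/off steps per
# output pin, stepped through on a BPM clock.
import time

class StepSequencer:
    def __init__(self, pins, steps=8, bpm=120):
        self.pins = pins
        self.grid = [[False] * steps for _ in pins]  # one row per pin
        self.step_time = 60.0 / bpm / 2  # two steps per beat (eighths)

    def toggle(self, row, step):
        """Flip one cell in the grid, like clicking it in the GUI."""
        self.grid[row][step] = not self.grid[row][step]

    def run(self, on_step, loops=1):
        """Walk the grid, calling on_step with the pins to fire."""
        for _ in range(loops):
            for s in range(len(self.grid[0])):
                fired = [p for p, row in zip(self.pins, self.grid)
                         if row[s]]
                on_step(fired)  # in the real sketch: digitalWrite HIGH
                time.sleep(self.step_time)

seq = StepSequencer(pins=[8, 9, 10], bpm=240)
seq.toggle(0, 0)   # pin 8 fires on step 0
seq.toggle(2, 4)   # pin 10 fires on step 4
hits = []
seq.run(lambda fired: hits.append(fired))
```

Swapping the callback for Firmata writes (or MIDI, or anything else) is all it takes to point the same grid at a different instrument.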