Over the next academic year I intend to show a lot more of the UCF student work on this blog. The staff and students put their heart and soul into the work that is produced at Falmouth. Personally, I get a massive sense of pride from what I do and want to share the results with the world. So here is the first of many: a video documenting the first Kinect-based student project to come from UCF.
We were very lucky recently to have Kim Cascone visit UCF.
Wikipedia says it better than I ever could: Kim Cascone
Kim was an intense and provocative speaker who, there was no doubt in my mind, had tremendous passion for his work and his field of expertise. He seemed hyper-observant, at a level where no detail was left unscrutinised. He took us on journeys through past memories, reminiscing over the tiniest details, from the intrusive tones of coins dropping onto the hard sidewalk to the noise of agitated, overactive birds. I was really impressed by his work on world building. Never before had I thought about the complexities of the sound design behind films. Kim explained what he called scope and focus as key concepts for understanding the situation of the listener. From his explanation, my interpretation of these concepts goes as follows:
Focus is a directional aim of attention from the listener on certain points in the environment. Scope is almost like the circumference around the focus point: the bigger the scope, the larger the area in which the listener is able to hear sounds. I am sure my definition is not quite right, but the way I imagine this visually is almost like a cone protruding away from the listener, with the wide end furthest away. As the scope and focus get larger and less specific, the end of the cone becomes larger, allowing a lot more sounds to be heard. If the cone's base becomes smaller, then the listener can really focus in on very specific sounds.
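My cone analogy can even be sketched in code. This is purely my own toy model of the idea, not anything Kim described; the function name and parameters are invented for illustration:

```python
import math

def audible(listener, facing_deg, sound, scope_deg):
    """Return True if a sound falls inside the listener's cone of attention.

    listener, sound: (x, y) positions; facing_deg: the direction of focus;
    scope_deg: total angular width of the cone (a wider scope admits
    more of the environment's sounds).
    """
    dx, dy = sound[0] - listener[0], sound[1] - listener[1]
    angle_to_sound = math.degrees(math.atan2(dy, dx))
    # Smallest angular difference between the focus direction and the sound
    diff = abs((angle_to_sound - facing_deg + 180) % 360 - 180)
    return diff <= scope_deg / 2

listener = (0, 0)
# A narrow scope picks out one sound; widening it lets more in.
print(audible(listener, 0, (10, 1), 20))   # roughly straight ahead -> True
print(audible(listener, 0, (0, 10), 20))   # 90 degrees off to the side -> False
print(audible(listener, 0, (0, 10), 200))  # a much wider scope -> True
```

Narrowing `scope_deg` is the "small base of the cone" case: only sounds very close to the focus direction get through.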
I was very interested in the battle that seemed to be persistent in Kim's work between the auditory field, which is 3D, and the stereo recording, which exists only in 2D. Kim used the term 'grain' to explain how, if done well, a stereo 2D signal can be amplified into a 3D experience by the listener. Grain follows the listener; past experiences and sensations amplify and reconstruct the 2D signal.
A small blog entry won’t do this man justice so if you ever get the chance to see Kim talk then it is well worth going.
For a while now I have been messing around with what I have called a domestic appliance sequencer. I think I have photos of it (the start of it) on this blog. I have not had time to improve it or work on the software, and now YouTube user ArcAttack has beaten me to it. The project is very effective, though; some very impressive, percussive sounds are being produced.
Touché, Mr ArcAttack!
I recently stumbled upon a rather interesting concept defined by Wolfgang Köhler in 1929. The bouba/kiki effect is based on an experiment whereby viewers are shown two shapes: one shape is curvy and cloud-like, and the other is jagged and angular. The viewer is then asked which of the two shapes should be called bouba and which should be called kiki. The results were very conclusive, in that 95% to 98% of the people asked said the curvier shape should be called bouba. This result points towards the notion of a synaesthetic remapping of the senses, where audio characteristics have an underlying link to visual characteristics.
It's not important; I just thought it was interesting 😛
This made me smile, so I thought I would post it. One of the projects I am working on at the minute is a snare drum that is played using marbles dropping from chutes. I had been wondering how to return the marbles to the top of the chute until I saw this video posted on hackaday.com. It's such a simple idea, I can't wait to see if it will work for my project!
I have almost finished a project which I have been working on for the last few months, so I thought it was about time I posted it up here. The website was written for Peter Cusack of CRiSAP (Creative Research in Sound Art & Performance). He approached me looking for an online database to showcase sounds around major cities in the UK (which eventually turned into the world), and here is the result: The Sound Database
There are a few bugs and usability features that I am working on, but the base of the site is pretty much finished.
The site uses a number of different languages and APIs to achieve the end result.
- Regular Expression
This is the first time I have coded something this complicated, and I am fairly pleased with the results. Next I hope to add some extra functionality to the site:
- A randomizer which pans around the world randomly playing sounds as it goes.
- A walk-through feature whereby the user can draw a line through the sounds. An icon will drive the path of the line, playing sounds in close proximity. Sounds will pan to the left if they are on the left and to the right if they are on the right.
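The panning in that walk-through idea could work something like this sketch. This is just my guess at the maths, not the site's actual code: pan in proportion to the sound's horizontal offset from the walking line, clamped to ±1, then convert to equal-power left/right gains.

```python
import math

def pan_for_sound(path_x, sound_x, max_offset=100.0):
    """Map a sound's horizontal offset from the walking line to a stereo
    pan position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    offset = (sound_x - path_x) / max_offset
    return max(-1.0, min(1.0, offset))

def stereo_gains(pan):
    """Equal-power panning: convert a pan position to (left, right) gains."""
    angle = (pan + 1) * math.pi / 4  # maps -1..+1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)

print(pan_for_sound(0, -50))  # a sound 50 units to the left -> -0.5
print(stereo_gains(0.0))      # centred: equal gain on both channels
```

Equal-power panning keeps the perceived loudness roughly constant as a sound sweeps from one side to the other, which would matter as the icon drives past each sound.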
There are three main reasons why I love this project. Firstly, the switches are ingenious. They look to be made out of washers cut in half and separated; as the ball bearing is placed down, it connects the two halves and completes the circuit. It's so simple that it's brilliant.
The second reason I like this project so much is that it uses the old CRT-type screen. If you ever go to a dump you will see plenty of these thrown to waste, replaced by the more convenient TFT monitor. It's nice to see the old monitors being put to use; I have always thought that a CRT screen would make an awesome base for a coffee table (project coming soon).
I also love that the interface is placed directly above the screen, making it possible for the interface to react and change colour throughout the experience. I found this project on the MAKE magazine blog. The Make blog has more information on the build and the techniques used for communicating with the computer, so if this interests you, take a look there as well.
This is a really interesting project. A sewing machine has been modified using 24 servos to knit sound levels recorded from a microphone. Not only does it look epic, but the end result has been used to create some quite interesting garments. Check out the main website here, though I found the project on the Make site.
After my post on subconscious tapping a while ago, I have been pondering my own version. I have been developing a very simple robot that bounces back and forth between two objects; the further apart the objects, the slower the beat. I plan to make quite a few of these little bots so that different beats can be made by having each robot bounce between different distances. Here's a quick mock-up of how the robots will be made –
I have nicknamed him Gaz the Destroyer. I made a mock-up out of cardboard before this one, which was not quite so successful. He was called Baz the Racing Slug! I do, in fact, need to get out more.
I am quite happy with the overall performance of the mock-up. There are a few things to take into consideration. The robot produces a slight wheel spin on the return journey because of bad weight distribution. I have decided to counteract this by making the robot four-wheel drive. I also want to consider what material to construct the mechanism from, so that when the robot bashes against an object it makes a good noise.
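The further-apart-equals-slower relationship is simple enough to sketch in code. The speed and distances here are made-up numbers purely to illustrate the idea:

```python
def beats_per_minute(distance_cm, speed_cm_per_s):
    """A bot bouncing between two objects hits an object once per traversal,
    so the beat interval is the travel time for one length of the gap."""
    seconds_per_beat = distance_cm / speed_cm_per_s
    return 60.0 / seconds_per_beat

# The further apart the objects, the slower the beat:
print(beats_per_minute(20, 10))  # 20 cm gap at 10 cm/s -> 30.0 BPM
print(beats_per_minute(40, 10))  # double the gap -> half the tempo, 15.0 BPM
```

So with a handful of bots bouncing across different gaps, each one contributes its own tempo to the overall rhythm.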
This is a very poetic project and a very beautiful idea. The author and creator of the program decided it would be nice to have a way to convert the beauty of the retina of an eye into music. He's using Processing as the backbone to create OSC messages, which are then picked up by SuperCollider.
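Out of curiosity, here is a sketch of what one of those OSC messages actually looks like on the wire, following the OSC 1.0 encoding (null-padded address string, a type tag string, then big-endian 32-bit floats). The address `/eye/freq` and the value are invented for illustration; they are not taken from the project:

```python
import struct

def osc_string(s):
    """OSC-strings are null-terminated and padded to a multiple of 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *floats):
    """Build a binary OSC message carrying float32 arguments."""
    typetags = "," + "f" * len(floats)
    return (osc_string(address)
            + osc_string(typetags)
            + b"".join(struct.pack(">f", f) for f in floats))

msg = osc_message("/eye/freq", 440.0)
# This byte string could be sent over UDP to SuperCollider's listening port.
print(len(msg))  # address (12) + type tags (4) + one float (4) = 20 bytes
```

Processing's oscP5 library and SuperCollider hide all of this, of course, but it is nice to know how little is actually travelling between the two programs.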
I have not looked at SuperCollider yet but it does look like an excellent piece of software for producing real-time audio synthesis and algorithmic composition. So watch out in the future for experiments on my blog using this software.
Here's the video showing the EyeSequencer:
This project was found on Makezine.com.
Makezine found this project on: http://blog.califaudio.com