Archive for March, 2012

Control Methods

For the reactive system I intend to create, I want a very visual method of control – with expansive movements involving the whole body if possible, to fully connect the visual and auditory aesthetics of the performance. This relates to what I said before about Ginko’s performance – when the sounds AND the visuals are interesting, the result is far greater than either would be separately. With that in mind, I intend to look at various methods of gestural control.

I’ve looked at several types of control:

Mocap – The uni apparently has a mocap (motion capture) suit. This would easily be the most interesting option to work with, offering the largest number of accurately tracked attributes to map to sound. It would, however, be very complicated to set up, and it’s not certain that I’d be able to use it.

Accelerometers – Tracking with an accelerometer on each hand could work. I don’t know how accurate they’d be as a practical performance tool, so it would be interesting to experiment with some.

Visual tracking – This is potentially the cheapest and most readily available method, ranging from:

Eyetoy – which is apparently relatively easy to hack and fairly cheap now.

Kinect – which I’ve already talked about and deemed impractical for this module.

Infrared tracking – which is relatively simple at a base level. The EyeToy is apparently easy to convert to an IR receiver by removing the IR filter from the camera and adding a filter made from the inside of a floppy disk or a piece of film negative, which does the opposite job of only letting IR wavelengths through. IR tracking has the issue of not being able to distinguish between different objects if they cross or get too close, making multiple controls inaccurate.

Visual blob tracking – working the same way IR tracking would, but with the visible spectrum. This means simple coloured objects can be used, so much less specialist equipment is needed – a webcam would work. This method does obviously put big constraints on light levels in a performance, though.

I think blob tracking, while not hugely accurate or flexible, would be the quickest and easiest system to implement, especially as I used it in a personal project last year – tracking a ball around my room and synthesising sounds based on its position. That project was purely tech-based, though; there was never much focus on the performance element or on making it sound good. Blob tracking can follow multiple objects, allowing multiple parameters to be controlled at once (see the sketch below). If some form of foot pedal is used as well, it would make a large array of control available with simple movements.
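As a concrete illustration of the principle (not the setup from last year’s project), here’s a minimal single-object blob tracking sketch in Python with OpenCV. The HSV colour range, the camera index, and the idea of mapping the blob’s horizontal position to a 0–1 control value are all placeholder assumptions that would need tuning to the actual object and lighting:

```python
# Minimal colour-blob tracker sketch: follow one brightly coloured object
# with a webcam and map its horizontal position to a 0-1 control value.
import cv2
import numpy as np

# Placeholder HSV range for a bright green object; this would need tuning
# to the real object and the lighting in the room.
LOWER = np.array([40, 100, 100])
UPPER = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    if contours:
        blob = max(contours, key=cv2.contourArea)   # largest blob only
        x, y, w, h = cv2.boundingRect(blob)
        cx = x + w / 2
        control = cx / frame.shape[1]               # normalise to 0-1
        print(f"control value: {control:.2f}")      # would be sent to the synth

    cv2.imshow("mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In practice the control value would be sent on to the synthesis patch (over OSC or MIDI, say) rather than printed, and a second colour range would give a second independent control.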

Thursday, March 29th, 2012 2nd year, Emergent Tech, Live No Comments

Kinect as music controller

There has been a reasonable amount of development recently in Microsoft Kinect hacks for controlling music. From simple MIDI control demos –

…to slightly more complex attempts at live control of prewritten music –

The video that has most caught my attention though, is this one –

It very successfully combines audio and visual performance. The visuals and the audio seem equally important, complementing each other. I think that maintaining a soundscape-esque feeling gives it a lot more weight, as motion-tracking controls generally have problems with precise timing – even beyond lag, the fact that there’s no tactile response to a gesture that creates sound makes it difficult to accurately trigger a sound at a specific point in time. The soundscape feel gets around this problem by remaining fairly free of rhythm.

This is definitely a route I would like to look into, though focusing more on my visual performance strengths in circus-based arts.

There is extensive information written about using the Kinect with computers (http://openkinect.org/), but from what I’ve read about it so far, the technical expertise required goes well beyond the scope of anything I’d be able to accomplish in this Emergent Tech module. It could well be worth looking into as a possible project for next year, when I’ll be able to dedicate a large amount of time and effort to the task.

Thursday, March 29th, 2012 Emergent Tech, Inspiration No Comments

Initial idea? Dialogue.

I went to Dialogue last night – a showcase of music MA students at Newport Uni. My brain started running off on different tangents throughout the entire thing.

Jauge’s set particularly got me thinking about how pre-programmed music can be manipulated in a live setting, turning what’s fundamentally a pre-composed piece into a live experience, unique to each performance. I started thinking of different ways this could be achieved.

Through the rest of the night these ideas solidified into one concept, solid in principle if not in specifics:

A system that ranks each note or beat in a piece of music by how ‘important’ it’s deemed to be (based on where it lies in relation to the beat, or its velocity). An instrument can then be ‘turned up or down’ by adding or removing the less important notes, rather than by the more standard volume change. This system could also allow transitioning between melodies or beats by swapping levels of importance between them – the least important notes changing first, for example. This would result in interesting melodies coming out of the mix that weren’t specifically written in in the first place, and they would be different each time if these parameters were controlled live. A rough sketch of the idea is below.
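Purely as a sketch of the idea – the note format, tick resolution and scoring weights below are all made up for the example – the ranking and thinning might look something like this in Python:

```python
# Sketch of the 'importance' idea: each note gets a score from its velocity
# and how close it falls to the beat, then a density control (0-1) decides
# how many of the least important notes are dropped.

def importance(note, ticks_per_beat=480):
    # Distance from the nearest beat: 0 on the beat, 0.5 exactly off-beat.
    pos_in_beat = (note["tick"] % ticks_per_beat) / ticks_per_beat
    beat_distance = min(pos_in_beat, 1 - pos_in_beat)
    on_beat_score = 1 - 2 * beat_distance          # 1 on the beat, 0 halfway
    velocity_score = note["velocity"] / 127
    return 0.6 * on_beat_score + 0.4 * velocity_score   # arbitrary weighting

def thin(notes, density):
    """Keep only the most important fraction of notes (density between 0 and 1)."""
    ranked = sorted(notes, key=importance, reverse=True)
    keep = ranked[: max(1, round(len(ranked) * density))]
    return sorted(keep, key=lambda n: n["tick"])   # back into time order

melody = [
    {"tick": 0,   "pitch": 60, "velocity": 100},
    {"tick": 240, "pitch": 62, "velocity": 60},
    {"tick": 480, "pitch": 64, "velocity": 90},
    {"tick": 600, "pitch": 65, "velocity": 40},
    {"tick": 960, "pitch": 67, "velocity": 110},
]

print(thin(melody, 0.5))   # roughly half the notes, least important dropped first
```

Driving the density value from a live controller would then thin or thicken a part in real time, with the least important notes always the first to go.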

With most projects I have a tendency to dither over a huge array of ideas for far too long and run out of time to actually implement them, so I’ve decided to buck the trend and run with this idea – should it keep seeming feasible – for the rest of the project.

Also worth mentioning, though perhaps less relevant – Ginko’s set was amazing. The subtle soundscapes he created, using lights to control the sound, produced an overwhelming sensory overload. Beautiful.

Wednesday, March 28th, 2012 Emergent Tech, Inspiration, Stuff I've Done, Uni No Comments

Arduino Workshop

Today we had a workshop on interfacing Arduinos with Max/MSP. It’s much simpler than I was expecting, and I’m looking forward to exploring what I can make!
I’m immediately considering many different ideas for interfaces – I’ve been wanting to create my own for a long time, and it now seems much more achievable.

All we did in the workshop was attach a single potentiometer, and get Max to read it. Everything’s well documented online though, and it all seems relatively simple.
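The workshop patch read the value with Max’s serial object; just to sketch the same principle outside Max, this is roughly how the computer side could look in Python with pyserial, assuming the Arduino does an analogRead() of the potentiometer and prints the 0–1023 value over USB serial, one reading per line (the port name is a placeholder):

```python
# Read potentiometer values sent by an Arduino over USB serial and
# normalise them to a 0-1 control value.
import serial

PORT = "/dev/ttyUSB0"   # placeholder; the actual port name varies by machine
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as port:
    while True:
        line = port.readline().strip()
        if not line:
            continue                      # nothing arrived within the timeout
        try:
            raw = int(line)               # Arduino prints an integer 0-1023
        except ValueError:
            continue                      # ignore partial or garbled lines
        value = raw / 1023                # normalise to 0-1
        print(f"pot: {value:.3f}")        # would be mapped to a synth parameter
```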

Tuesday, March 20th, 2012 Emergent Tech, Uni No Comments

Work Experience – Journal Day 2

9:30 – 11:30

Sit in on finishing the initial mix of the glass armonica program.

It now just needs one last interview, and then it enters the final mixing process, which takes a week or so.

11:30 – 12:20

Sit in on meeting planning technical practicalities for a radio play with a live audience.

An interesting point mentioned here: when recording audience sounds, it’s best to avoid putting the microphones too close to the edges of the stage, as the wide stereo image of the audience exaggerates any pocketed laughter – e.g. where one side of the audience is laughing more than the other.

Room areas (doors etc) are taped on the floor to ensure people enter and exit in the same places.

A lot of sound effects are left out live, and added in in post production.

They were discussing experimenting with staging – using lights and props that don’t create sound – though this is very much not the norm for radio plays.

13:30 – 15:00

Finished the Ed Byrne mix from yesterday.

15:00 – 16:30

Sit in on a meeting discussing concepts for a proposal of a series of BBC documentaries on WWI. I found it interesting to see how the system for proposing programs works, and how the smaller companies have to compete to get program slots.

16:30 – 17:15

Crit session/discussion about my Ed Byrne mix.

Apparently I did a much better job than they were expecting, so my mix will be broadcast! Woo!

 

Overall, I have enjoyed the experience, and have gained valuable knowledge of working within Sadie, as well as a much better understanding of how the broadcasting industry operates.

Friday, March 16th, 2012 PDAR, Stuff I've Done, Uni, Work Experience No Comments

Max/MSP – Keyboard Control

While I have used keyboard control before, some of the ways I needed to implement it for this task were new to me, which was interesting. I would have liked to compose a new track to fit the format well, but due to time constraints I ended up just using the sounds I created for the generative patch. Because of this I don’t think the patch is very effective overall. While I didn’t compose new music, I did think about how I would, and I’d have some good places to start if I were composing for an interactive, trigger-based patch.



Thursday, March 15th, 2012 Emergent Tech, Stuff I've Done, Uni No Comments

Work Experience – Journal Day 1

9:30 – 13:30

I was thrown in at the deep end a bit because they’d been sent an urgent long program that needed changes implemented that morning, so they couldn’t spare me much time until the afternoon.

LIVE BRIEF: Edit Ed Byrne’s interview of Ian Hislop

This required getting to grips with Sadie – audio editing software. I picked up most of the techniques needed very quickly, as the program is fairly simple, being very streamlined for a specific set of tasks.

The trickiest – and as such, most interesting – part of this process was learning how to edit speech on its own. You have to be very careful, otherwise the speech sounds unnatural where bits are cut out. Cutting on consonant sounds helps mask the cut. Ed Byrne was particularly tricky for this, as he consistently changes pitch gradually over the course of a sentence, making it extremely obvious if any of that sentence is removed.

I roughly finished cutting the interview down from 25 minutes towards the target of around 10, though I didn’t quite manage to get it that low at this point.

 

14:15 – 15:00

Sat in on a studio session recording Dame Evelyn Glennie reading a scripted part for a program on the glass armonica. I found it interesting to see how the technician made quick edits live, while recording – deleting all the silences and bad takes as he went along, and keeping all the other audio neatly together.

15:00 – 17:00

Sat in on recording voice actors for other parts in the glass armonica program. It was interesting to see how all the acting parts were covered by very few people – the shorter and simpler parts played by people in-house, while the longer or harder parts used a professional voice actor.

We also started sorting the audio, loading in location clips and music that would be interspersed with the studio recordings.

Thursday, March 15th, 2012 PDAR, Stuff I've Done, Uni, Work Experience No Comments