Automated DMX using face tracking / motion zoning


interactiveartsNI

Hi,

 

I have developed several systems using Open Sound Control (OSC) with face, motion, and color tracking, and have a few videos on YouTube:

 

http://www.youtube.com/watch?v=a9-1x4xxNoI

http://www.youtube.com/watch?v=beGXoSrODeA&feature=fvw

 

I do not know that much about lighting, but I have just ordered an Enttec USB Pro adapter so I can (hopefully) send control data from my software apps. The idea is that face tracking, zone tracking, color tracking, etc. can be done with cheap cameras, and the resulting data can then be processed and sent out as DMX.

 

I would really like to know whether lighting people would be interested in a system like this, and whether anyone would like to test it in a club venue when I finish it.

 

I had the idea of using an RGB moving head, so you could, for example, program the system to follow faces and put a spotlight on them, or use motion in the crowd to alter the lighting, that kind of thing. Or maybe there is already software that does this sort of thing?

Let me know your views, as I am new to the forum. The data collected from the cameras could also be used for visuals, e.g. you could use face tracking to extract the faces of people in the crowd. Let me know what you think. Thanks,

 

liam


Hi

 

Off the top of my head, the only thing I can think of is Wybron Autopilot, which uses radio triangulation to find an object in 3D space (in this instance a small beltpack transmitter).

 

I think you might have an issue with distance perception, because you'd need to find x, y, and z before you could translate that into valid pan and tilt commands.

 

On the plus side, DMX is quite easy to get to grips with; the protocol itself is about as simple as it gets. The only thing to worry about is that just about every moving head responds to DMX differently. Each fixture uses a block of channels, and the pan and tilt data is somewhere in that block. Pan and tilt are usually 16-bit, each using two channels: a coarse byte and a fine byte (one DMX channel = 256 values = 8 bits).
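A minimal sketch of that coarse/fine split in Java; the slot positions here are purely illustrative, since which slots carry pan depend entirely on the fixture's DMX chart:

```java
public class PanTilt16Bit {
    // A 16-bit pan value split across two DMX slots: the high byte on the
    // coarse channel, the low byte on the fine channel.
    static int coarse(int value16) { return (value16 >> 8) & 0xFF; }
    static int fine(int value16)   { return value16 & 0xFF; }

    public static void main(String[] args) {
        int pan = 32768;               // roughly the centre of travel
        int[] universe = new int[512]; // one DMX universe, values 0..255
        universe[0] = coarse(pan);     // e.g. slot 1 = pan coarse (illustrative)
        universe[1] = fine(pan);       // e.g. slot 2 = pan fine
        System.out.println(universe[0] + "," + universe[1]); // 128,0
    }
}
```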

 

Lighting desks work by using personality or fixture profiles, which map features such as dimmer, pan, tilt, gobo, etc. onto the correct channels so the fixture understands them. Every fixture type has its own profile (even ones of the same make): I couldn't control a Martin MAC 2000 with a Martin MAC 700 profile, and vice versa.

 

My advice would be to download the DMX specification for the moving head you are experimenting with, and set its start address to 1. Somewhere in there it will tell you which channel in the block controls which function. With the start address at 1 the offset is 0, so the channel numbers in the manual are the real-world channel numbers. If you change the start address, add the start address minus one to the channels listed in the specification.
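That addressing arithmetic can be sketched as a one-liner, assuming the manual lists channels 1-based (as fixture manuals usually do):

```java
public class DmxAddressing {
    // Absolute DMX channel for a function listed at `offsetInManual`
    // (1-based) when the fixture's start address is `start`.
    static int absoluteChannel(int start, int offsetInManual) {
        return start + offsetInManual - 1;
    }

    public static void main(String[] args) {
        // Manual says tilt is channel 3: addressed at 1 it stays 3,
        // readdressed to 101 it becomes 103.
        System.out.println(absoluteChannel(1, 3));   // 3
        System.out.println(absoluteChannel(101, 3)); // 103
    }
}
```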

 

It sounds like a really good, challenging project and I wish you all the best. If you have any queries about DMX don't hesitate to come on here, we're all quite friendly.

 

All the best

Timmeh


Hi, and welcome to the BR.

 

Just wondering, what sort of situation are you envisaging your project being used in, in the end?

 

I've done some fairly basic computer vision work before and know that lighting can cause very large problems for some forms of recognition. You've got constant lighting in your videos, but you'll need a system that is very invariant to lighting changes for it to work reliably in a club environment. Remember, computer vision fails when you're in a blackout!

 

Something you've probably also noticed in your videos: there is quite a large amount of lag between you moving your head in the frame and the object moving. I don't know whether this is down to some form of spring coefficient in the movement, or whether it's just laggy, but if you have someone moving in a spot you need to minimise the lag severely. Moving heads are usually not used as followspots because they already have enough lag in physically moving to point in the right direction, so any software method needs to add as little lag as possible. A human spot operator can also anticipate where the person is likely to move, whereas a computer system usually doesn't grasp this quite so well!

 

You mention other things about face tracking in crowds. What sort of end product are you thinking of using this for? Some more detailed context about what you're trying to achieve might get you better responses.

 

There's a company I've seen at PLASA over the last few years who have a similar system, using a camera and projector to do ripple effects when people walk across an image projected on the floor, which seems quite similar to what you have shown in the video in your post. I've seen a few other installations using a similar approach as well.


Thanks Timmeh,

 

I think I will get something simple working first to prove it. Maplin have a KAM LED PAR56 that suits my budget, so I'm going to write the library in Java, because it is really just serial programming: write the commands out to the USB port. I need to read up on the protocols and then simply write the bytes out to the port. For example, RGB values run from 0 to 255, so a loop from 0 to 255 on the red value gives you a basic dimmer, that kind of logic. And with OSC you can route the data anywhere that has a network.
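That dimmer loop might look something like this in Java; `sendFrame` here is a hypothetical stand-in for whatever actually writes the frame to the Enttec widget (the real device wraps each frame in its own serial packet format):

```java
public class RedFade {
    // One DMX universe: 512 slots, each 0..255. The red fade is just a
    // loop that bumps one slot and re-sends the whole frame.
    static byte[] frame = new byte[512];

    // Hypothetical stand-in: the real serial write to the USB port
    // would go here, wrapped in the widget's packet format.
    static void sendFrame(byte[] f) { /* serial write goes here */ }

    public static void main(String[] args) {
        int red = 0; // assume red = slot 1 (index 0) on this fixture
        for (int level = 0; level <= 255; level++) {
            frame[red] = (byte) level;
            sendFrame(frame); // a full universe tops out around 44 fps
        }
        System.out.println(frame[red] & 0xFF); // 255 after the loop
    }
}
```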

 

I really want to develop a simple library and bridge it with OSC, so the program acts as an exchange mechanism between OSC and DMX, and of course it will run on any machine: Mac, PC, Linux, etc.

 

As for the Wybron Autopilot: I was thinking more of the audience than the performer. I have used cheaper ways to track the performer: I used the IR camera in the Wiimote to read IR bulbs and got accurate tracking (there's a video I made at www.youtube.com/watch?v=QNZr9S5G-Yg). It came about by pure chance, and it is very accurate, but you need the performers to wear mini IR LED bulbs with about 5 volts and a 90 ohm resistor. Crazy, but crazy enough to work. Or use color tracking and get people to wear brightly colored helmets, that would be cool...


If you are commercially serious, then get your original work IP-protected in appropriate ways, then go to the PLASA show (plasashow.org) and see if anyone will offer you a co-development agreement. That way you can have some saleable IP rights and a product that works in real life.

 

Alternatively (or additionally), get the IP protection and liaise with CCTV surveillance companies about following faces in crowds and passing info on to the next few cameras. Big Brother has big pockets and can buy expensive things if they do the job.


Hi,

 

The lag is programmed in to smooth out movements and give the spring effect (watch the mouse pointer in relation to the rectangle). Lighting can be compensated for in real time, i.e. if it is dark then increase the RGB pixel values; there are so many things you can program. But you are right, club scene lighting is anything but constant.
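For what it's worth, that spring-style smoothing can be sketched as a damped spring chasing the tracked target; the stiffness and damping numbers below are illustrative, and real values trade lag against jitter:

```java
public class SpringFollow {
    // Damped spring: the light's position chases the tracked target.
    // Higher stiffness means less lag but more jitter; these numbers
    // are illustrative, not tuned for any real fixture.
    static double pos = 0, vel = 0;

    static void step(double target, double stiffness, double damping, double dt) {
        double accel = stiffness * (target - pos) - damping * vel;
        vel += accel * dt; // semi-implicit Euler keeps this stable
        pos += vel * dt;
    }

    public static void main(String[] args) {
        // 200 steps at 30 fps chasing a target at 100
        for (int i = 0; i < 200; i++) step(100.0, 40.0, 12.0, 1.0 / 30);
        System.out.printf("%.1f%n", pos); // has settled at the target
    }
}
```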

 

DMX lighting seems to normally be used in conjunction with the audio output and under the control of the performer (the DJ). I think you could capture the audience and use their input to the performance as feedback, to themselves and to the performer, in the form of lighting and effects. The question then is how to capture the audience's input. There are many ways to do this: cameras for zone detection, face tracking, background subtraction, etc.; sensors on the floor (like the Wii Fit); hands raised above a certain height detected with vision processing. You get the idea: it's capturing human gesture and feeding it back.

 

These ideas are not specifically geared to the club scene; in fact they suit any scene where people interact with something, e.g. someone walks past a shop window and a video projection interacts with them, or interactive installation art. A good example given in this thread was a face extraction system, although people might not like the idea that I could set a camera up and extract their faces. Really, I posted here to get feedback from lighting people, as my experience is minimal.

 

The ripple effect is an algorithm which can be implemented in any language, so you just substitute the mouse movements with OSC triggers, hang a projector from the roof with some good projection film (GlassVu on Perspex, a guy called Michael Wolf sells it in Canada, or 3M's film), mount the camera on the ceiling as well, and run the program. You need even light for this though; shadows will affect it. Ripples appear to be path-based. You could also integrate Boids, the flocking algorithm, add a few fish, and chase them around; this is how those guys do it. Again, Boids can be implemented in any language.
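For anyone curious, the classic two-buffer ripple algorithm looks roughly like this (grid size and damping factor are illustrative):

```java
public class Ripple {
    // Classic two-buffer water ripple: each new cell value is the average
    // of its four neighbours in the previous frame minus the cell's own
    // current value, scaled by a damping factor that bleeds energy away.
    static final int W = 64, H = 64;
    static float[][] cur = new float[H][W], prev = new float[H][W];

    static void step() {
        for (int y = 1; y < H - 1; y++)
            for (int x = 1; x < W - 1; x++) {
                float v = (prev[y - 1][x] + prev[y + 1][x]
                         + prev[y][x - 1] + prev[y][x + 1]) / 2 - cur[y][x];
                cur[y][x] = v * 0.96f; // damping
            }
        float[][] t = prev; prev = cur; cur = t; // swap buffers
    }

    public static void main(String[] args) {
        prev[32][32] = 255f; // a "touch": mouse position or an OSC trigger
        for (int i = 0; i < 30; i++) step();
        int active = 0; // count the cells the ripple has reached
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                if (Math.abs(prev[y][x]) > 0.01f) active++;
        System.out.println(active);
    }
}
```

In the installation version, the OSC trigger position replaces the mouse, and the height field modulates the projected image.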


> The lag is programmed to smooth out movements and give the spring effect (watch the mouse pointer in relation to the rectangle). [...]

 

Try IR if all else fails!

 

> DMX lighting seems to be normally used in conjunction with the audio output and control of the performer (the DJ). [...]

 

Fair enough. It sounds like an interesting concept, and I would be interested to see how you get on with this. I'm sure with some form of LED floor or ceiling, and pressure sensors or image processing, you could create some quite funky-looking installs and interactions between people and the environment. You could get some good beat detection just based on how much, and when, people jump! Although I guess for the style of music that's usually in a club, beat detection is fairly easy!

 

There was another company at PLASA a few years ago, whose name I can't remember, who had a pressure-sensitive LED floor. It did various visual effects when people stood on it: triggering droplets and ripples, sparks between two people, or even a primitive game of dodgeball! I'm sure something like this shouldn't be too hard to achieve with some good processing.

 

Also, a side note: it's usually not worth quoting the entirety of someone else's post in your replies. It can bloat a thread quite quickly!


> Try IR if all else fails!

 

My thought exactly.

 

I read a while ago about one of the Cirque du Soleil shows that has a massive tilting/revolving/lifting stage that they project onto. The projections interact with the performers, and IIRC they use a massive stage-sized touch pad to track the performers' movements. If you could track movement visually it'd be a hell of a lot cheaper.

 

Another thought: if you mount the camera coaxially with the mover you're controlling, you get away from having to define where the target is in x, y, and z. You only need pan and tilt, which is easy enough to derive from the camera image. I've never even seen one in person, but I imagine if you took the video feed from the camera on a DL3, ran it through your program, and sent the results back to the DL3 as DMX, you might be able to track faces on stage or whatever. Your position updates would then just be based on how far the target is from the boresight of the video.
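That coaxial idea amounts to a simple proportional controller: nudge pan and tilt by an amount proportional to the target's pixel offset from the frame centre. A rough sketch, where the frame size and gain are assumptions rather than real fixture values:

```java
public class BoresightTracker {
    // Proportional correction: nudge 16-bit pan/tilt by an amount
    // proportional to the target's pixel offset from the camera's
    // boresight. Frame size and gain are assumptions, not real values.
    static final int FRAME_W = 640, FRAME_H = 480;
    static final double GAIN = 12.8; // 16-bit steps per pixel of error

    static int[] update(int pan, int tilt, int targetX, int targetY) {
        double errX = targetX - FRAME_W / 2.0; // +ve: target right of centre
        double errY = targetY - FRAME_H / 2.0; // +ve: target below centre
        pan  = clamp(pan  + (int) Math.round(GAIN * errX));
        tilt = clamp(tilt + (int) Math.round(GAIN * errY));
        return new int[] { pan, tilt };
    }

    static int clamp(int v) { return Math.max(0, Math.min(65535, v)); }

    public static void main(String[] args) {
        int[] pt = update(32768, 32768, 320, 240); // target dead centre
        System.out.println(pt[0] + "," + pt[1]);   // no correction: 32768,32768
    }
}
```

Run once per video frame, the error shrinks as the head swings onto the target, so no absolute x, y, z position is ever needed.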

 

Ben


take a look at vvvv.org

 

it can talk DMX and OSC, and it can object-track from a live cam.

 

I have had a play with using 2 webcams to drive a moving head

 

I put one cam overhead to get x and y, and one cam at the front to get x and z.

 

cheers

 

ian


Thanks for your ideas, guys; I will leave the quotes out in future.

 

I want to steer away from vvvv as it is mainly a PC-based technology (it uses DirectX), and having programmed in .NET for too long I've hit serious problems. Java is by far the most flexible language for hooking stuff up, and it will run on any OS, which widens the usefulness of any software written.

 

I have finalized my idea: I'm going to use the iPod Touch / iPhone (same OS) to hook into a Java Wi-Fi server which can read the accelerometer data (roll, pitch, yaw) and use those values to alter the DMX commands (and visuals too). I figured people will be moving and jumping around (because that's what people do), so why not translate that into DMX? Accelerometers will soon be in everything, so why not make use of them. I have already read the accelerometers in the Wii over Bluetooth (yet another video), using a technique called hidden Markov models.
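Mapping an accelerometer angle onto a DMX value is just a linear rescale. A sketch, where the -90 to +90 degree input range is an assumption, not a property of any particular device:

```java
public class TiltToDmx {
    // Linear rescale of an accelerometer angle in degrees onto a DMX
    // slot value 0..255. The -90..+90 range is an assumption; use
    // whatever span of roll or pitch you actually want to map.
    static int angleToDmx(double degrees) {
        double clamped = Math.max(-90, Math.min(90, degrees));
        return (int) Math.round((clamped + 90) / 180.0 * 255);
    }

    public static void main(String[] args) {
        System.out.println(angleToDmx(-90)); // 0
        System.out.println(angleToDmx(0));   // 128
        System.out.println(angleToDmx(90));  // 255
    }
}
```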

 

I have ordered an iPod Touch 3G, and just need the RGB DMX light from Maplin. There is also an app for the iPod Touch / iPhone called TouchOSC which, yes, you guessed it, uses OSC, which brings me to the conclusion that OSC is of real use for visuals and lighting, as developers can get stuff up and running pretty quickly. I'm still going to do something with the camera though, maybe grab faces from the crowd and project them; I tried it on the TV last night and it worked fine, we'll see.

 

I will check out this PLASA thing too; it seems a good show. I have learned a lot about lighting systems over the last few days, and I can say I will use this forum again.

 

 

 

liam


  • 1 month later...
> Tracking someone on stage can be done cheaply using IR cameras or movement zoning; it does not need to be 3D, as the cameras are above the performer.

You can do it even cheaper if you get a member of the local crew to do it for you... plus they don't have to be rigged above the performer!

 

Don't get me wrong, some of the stuff you've described / linked to looks and sounds quite cool and interesting, I just don't really see it entering the live / nightclub market. It's a great project though!

 

Just my 2p


Thanks for comments.

 

I think accelerometers (the things inside the Wii, iPhone, and iPod Touch that measure motion), ZigBee radio (used to replace Bluetooth), Open Sound Control (which will replace MIDI), and camera technology, combined with other sensors, will definitely be used in live performance. This technology is emerging, and it will make live performance more interactive for performer and audience. You could hook a heart rate monitor up to a performer (using ZigBee, which can also give position as it is radio) and output the signal to a DMX light (via Open Sound Control) to pulse it: way more interesting than sound-to-light. To me, DMX is simply a protocol that is under-utilized; there is so much more you could do with sound and light by creating extensions of the performer and the audience.

 

Here is an example of an extension in performance:

(it uses color tracking to control playback; a bit contemporary for me, but a good example). It's not all rock and roll...

Archived

This topic is now archived and is closed to further replies.
