
Is Ableton Live useful for theatre performance?


andy_s

Recommended Posts

I haven't used it in theatre, but combined with one of the specialized control surfaces it would be an ideal system for triggering sound FX / pre-recorded samples. I'm not sure what else you would use it for, though.

Ableton Live is software designed for "writing, producing and performing music" (quote stolen from their web site). Yes, you could work out ways to use it for triggering effects and tracks during a theatre show, but you'd be wasting the majority of its capabilities to achieve something not as good as some of the examples you list.

 

Bob


Thanks for your contributions - well, yes, that's what I thought. I'd not heard of it, so I googled it as well and couldn't see any theatre-related info. Went to Press Night yesterday to see my pal who is using it to run the sound effects / incidental music. Not his choice; he's got SFX and QLab available to him. Guess I'll find out at first hand tomorrow; I'm learning the show so I can cover him when he goes on paternity leave... I wasn't very keen on the lack of a nice big "GO" symbol to aim for...

The only time I've thought of using something like Ableton over QLab/SFX is when trying to mix a pre-recorded/sequenced backing track with live music from the pit, so you can give the MD control of the backing track without the sound op having to press go.

Well I've now done a few performances of the show using Ableton, and it does a job, but wouldn't be anywhere near the top of the list as far as I'm concerned.

 

I ended up doing the show cold, as my colleague's partner went into labour a week early on the day he was supposed to teach me the show. Luckily it's not a complicated or fast-moving show, as there seemed to be no cue sequence facility - each cue clicked on individually to start. But no mishaps, though a couple of instances of misunderstanding the cue sheet meant we had no interval music... which is nothing to do with the software, of course...

 

I was initially using the sound designer's laptop, which was a MacBook with a tiny screen - not the easiest of interfaces. The theatre had bought the software and loaded it onto their Mac Mini, but there were some crash issues, so for safety the designer left his machine behind. The crash issues are now resolved (it still crashes when we load the show, but you empty the cache and reopen, and it works until the next time you shut it down), so we swapped back to the Mac Mini today, and I must say this is a major improvement - bigger screen, easier to read, and we now discover that the cue list will sequence - just hit return and the next cue fires. Why the laptop wouldn't do this I don't know, and as far as I know nothing special has been done to the software on the Mini to enable this facility; it seems to be the default. Which is good. It is a more recent software version, so maybe that has something to do with it...

 

But for a theatre operating system, give me a B77 any day...! (Or failing that, QLab - cheaper than SFX and Ableton Live, easier to operate, clearer interface. Cheaper than a B77 too - even without allowing for inflation!) I'm not a gigging musician, and I have no desire to be a DJ. I just want to hit a big button marked "GO" when the DSM tells me to!

 

I suppose there is a time-saving to be had if you don't have to take files from your creating programme and load them into your operating software, so this might be an argument in its favour.

 



This is certainly interesting to hear. I have a copy of Ableton which shipped with my interface and I've never used it or thought about using it for this purpose. Unfortunately I don't have any of the decent software out there, as the school I work in doesn't see why we can't just 'play a CD'.

I think a quick install of Ableton is on the cards this afternoon.


I have a vague idea that Gareth Fry (the only London sound designer to win two Oliviers for Sound Design) uses Ableton as part of his workflow. I'm afraid I'm not sure exactly in what way, but I have an idea he used to integrate it with SFX.

 

Indeed he has and does. It is used to different effect on different shows; however, the general gist is that Live takes care of the music and underscoring of the performance. Its ability to loop, vamp and be manipulated in real time enables the operator to respond to the performance as it happens.

Its integration with SFX (or now indeed QLab) works by combining QLab's linear cue-list system - which gives the operator overall control and a list of events to follow - with the manipulative capabilities of Ableton Live, allowing the show to remain dynamic.

With Cat In The Hat last year, Live was also used to provide spot effects using the trigger pad, as someone previously mentioned.

 

Dom


  • 2 months later...
>I have a vague idea that Gareth Fry (the only London sound designer to win two Oliviers for Sound Design) uses Ableton as part of his workflow. I'm afraid I'm not sure exactly in what way, but I have an idea he used to integrate it with SFX.

 

Ableton Live is immensely useful for theatre performance. I use it in a few different ways... and I pretty much always use it in combination with QLab or SFX, because it does things that they just can't do. As an analogy, SFX & QLab are most closely related to multiple linear playback devices like CD players whilst Ableton is closer to a sampler.

 

To take Cat in the Hat and Beauty and the Beast, both at the National Theatre, as examples:

These shows have recorded music running continuously through the piece. The performers work off the audience's reactions to a degree, so sequences never last the same amount of time. We knew that we wanted the music to flow through the show and transition on the beat, so we split the music into sections - a mix of holding loops and transitional bits of music. Over the course of 5 minutes we'd go through maybe 15 pieces of music, vamping sections as needed, then moving into the next bit of transitional music leading into another holding loop, and so on. The pieces of music are often at different tempos and time signatures too. Ableton Live can very easily handle all that vamping and changing of tempo with minimal programming, and does it in a musical way. All we have to do is send a MIDI note from QLab or SFX to Ableton, and Ableton takes care of the rest. We can also do live volume automation, filter sweeps, crossfades to reverb and complex audio effects processing simply by sending MIDI control change information from QLab.
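As a sketch of what that MIDI plumbing looks like at the byte level - the channel, note and CC numbers below are illustrative assumptions, not the actual mappings from these shows - a clip launch is just a Note On message, and volume or filter automation is a stream of Control Change messages:

```python
# Illustrative sketch of the MIDI bytes a cue system like QLab might send
# to Ableton Live over an IAC bus. Channel, note and CC numbers are
# assumptions for this example, not the real show mappings.

def note_on(channel: int, note: int, velocity: int = 127) -> bytes:
    """Note On: status byte 0x90 | channel, then note and velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def control_change(channel: int, cc: int, value: int) -> bytes:
    """Control Change: status byte 0xB0 | channel, controller number, value."""
    return bytes([0xB0 | (channel & 0x0F), cc & 0x7F, value & 0x7F])

# Launch the clip that's MIDI-mapped to note 60, channel 1:
launch = note_on(0, 60)

# Ride a mapped volume (CC 7 here) down to silence in steps:
fade = [control_change(0, 7, v) for v in list(range(127, 0, -8)) + [0]]
```

In practice QLab builds these messages for you in a MIDI cue; the point is that a single note or CC is all Ableton needs to trigger a vamp, a transition or a filter sweep.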

 

In addition to running the music side of things, Ableton is also taking care of some of the spot effects. Both shows involve a degree of the performers playing things differently each performance. We have a Novation Launchpad connected to Ableton with a bunch of sound effects on it, so that the operator can respond quickly to the performer.

 

One of the big drawbacks of SFX and QLab is that once a show is programmed it's difficult to change things. I recently had a director, who had been working with another sound designer, come up to me and ask whether the following was true:

 

"My sound designer tells me that the software won't let them change the time of a fade out on a nightly basis"

 

In the olden days of theatre we might often specify that you start fading out a piece of music on word X of a sentence, and that you finish the fade on the last word of the sentence. Very easy with an operator and a fader. Not easy with QLab, SFX etc - you can only specify a fade time in seconds and you can't speed that up or slow it down if the performer is going faster or slower than normal.
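The distinction can be put in code: a programmed fade makes gain a function of clock time, while an operator's fade makes it a function of a live fader position. A minimal sketch, assuming an equal-power taper (the taper choice is my assumption):

```python
import math

# Sketch: gain as a function of fader position (0.0 = down, 1.0 = up),
# using an equal-power taper. An operator tracking the performer's pace
# simply moves through these positions faster or slower; a timed fade
# cannot adjust once fired. The taper is an assumption for illustration.

def equal_power_gain(position: float) -> float:
    """Map fader position 0.0-1.0 to linear gain via a sine (equal-power) law."""
    return math.sin(position * math.pi / 2)

# The operator lands on 0.0 exactly on the last word, whatever the pace:
positions = [1.0, 0.8, 0.5, 0.2, 0.0]
gains = [round(equal_power_gain(p), 3) for p in positions]
```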

 

Likewise we would often ask the operator to ride a piece of music under a scene. This was often the real test of an operator, how well they could respond to the dynamics in the piece of music and the dynamics of the performance, simultaneously, and make the music complement the performance. You just can't programme that into QLab etc because the performance (very rightly) changes from night to night in every genre of theatre.

 

It is common practice to route QLab outputs to speakers, i.e. outputs 1&2 of QLab go to FOH L&R, 3&4 to Upstage L&R, etc. If you have more than one thing playing at a time, that means your operator can't even ride the physical desk faders to take the music up and down, because then they'd be affecting whatever else was playing too. The alternative way to use QLab outputs is to assign them by content, i.e. outputs 1&2 are Music, outputs 3&4 are Atmos, etc. But this reduces some of the power and flexibility of using QLab.

 

To counter this problem, I sometimes run the music component of a show off Ableton. The SFX, VOs, etc come out of QLab, and QLab also triggers starting, stopping, effects automation and re-routing of the music in Ableton via MIDI control changes. As far as the operator is concerned it's a normal show with a Go button, and they never touch the software in Ableton - except they will have an additional piece of kit, often a Behringer BCF2000 fader control surface. On that they will often have a fader that just lets them ride the music up and down, and fade it out in a musical, performance-responsive manner. Sometimes I will take it a step further: with the composer we'll have broken the music down into stems - drums, guitar, piano, etc - each with its own fader on the control surface. This means that the operator can mix the recorded music as if it were a live multi-mic'd band, and we can lift the piano for a certain section, or dip the drums for another. Essentially it puts the operator back in control. Rather than just a Go button, they get a Go button and 8 faders that let them respond to the performance.
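A minimal sketch of the stem-fader idea - purely an assumption about how a BCF2000's CC values (0-127) might be mapped to per-stem gains; the -60 dB floor and linear-in-dB taper are illustrative, not the actual mapping:

```python
# Hypothetical mapping from a BCF2000 fader's CC value (0-127) to a dB
# gain for one stem (drums, guitar, piano...). The -60 dB floor and the
# linear-in-dB taper are assumptions for illustration.

def cc_to_db(value: int, floor_db: float = -60.0) -> float:
    """0 -> silence, 127 -> unity (0 dB), linear in dB between."""
    if value <= 0:
        return float("-inf")  # fader fully down: mute the stem
    return floor_db * (1.0 - value / 127.0)

# One fader per stem lets the operator mix the recording like a live band:
stems = {"drums": 100, "guitar": 127, "piano": 64}
levels = {name: round(cc_to_db(v), 1) for name, v in stems.items()}
```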

 

Live and QLab will quite happily operate on the same computer in my experience, talking to each other via the IAC MIDI bus, though I usually use separate computers for each to improve reliability and ease of programming. Ableton is a very efficient bit of software and everything I've done so far has happily run on a Mac Mini.

 

I pretty much always use Ableton in conjunction with QLab or SFX for running shows, but it's worth noting that Ableton on its own is considerably more popular in Europe for show playback than QLab, SFX, CSC and SCS combined. I'm not keen on this myself. Ableton is also used on its own for dance shows, and again you'll see Ableton on a dance show far more than you'll see any other bit of software.

 

Ableton is also an immensely powerful tool for creating sound designs - DAWs like PT and Logic are only just beginning to catch up with Ableton in terms of pitch/time manipulation. It is also particularly useful for sitting in rehearsals and throwing in sound effects. I will often busk sequences in Ableton in rehearsals then transfer them into QLab when they lock down.

 

I could harp on about Ableton for much longer. It really is worth checking out.

 

Gareth Fry

Sound Designer

www.garethfry.co.uk

 

PS Paul Arditti also has two Olivier awards for Sound Design :-)


Thanks for that Gareth - an interesting read.

 

Currently we are running a dedicated Mac Mini QLab setup, with ADAT outs into an LS9. We generally mix with component output as you suggest - different sources to each output, mixed down on an LS9. I'm interested in how you say this reduces some of the functionality - we don't have any problems with the fades etc working across multiple outputs, or being specific to just one.

 

One thing we also do which you haven't touched on is that we use the MIDI Scene recall on the LS9 with recall cues in QLab - so we can recall a cue with the levels set (or even with a fade time) at the top of a cue group, adjust as needed, and save the scene down if appropriate to make the adjustments permanent. This also gives us the ability to ride the levels on a physical fader, as you suggest is missing when mixing down to L&R, SL, SR etc.

 

You're right about dropping out of loops - QLab simply can't do this. I have had to do something similar once, for an off-stage piano: about ten minutes into a scene the piano picks up pace, and we obviously couldn't time this, so what we had was a second sound file with about ten seconds of the first state before the cue, and we fired the messy (not synchronous) crossfade under a very loud action a couple of lines before the cue point. Firing a MIDI cue from QLab to drop out of an Ableton loop sounds much neater in some situations.

 

For the quicker or slower fades fitting to the lines, what I've taken to doing is making the first cue slightly longer than usually needed, with a second 'kill' cue just to cut that last second short - fired later in the line/paragraph that the fade needs to span. We could also just fade manually on the LS9 and have a fade/stop cue to stop the audio once it's silent.

 

I'm interested in the use of the BCF2000 - using it to control levels within Ableton, fired from within QLab, sounds very neat, and gives the level of control that we have without needing the LS9. With your BCF, and a setup with VO and SFX in QLab and music in Ableton, do you have any control over the QLab effects via the BCF2000, or would you have to put all the audio in Ableton to have that control? (Sounds messy, but if not you could probably route the audio from QLab into Ableton via either Soundflower or an aggregate sound device to gain this control.)

 

I'm interested in the pitch/time manipulation - this isn't something we've had a need for yet, but we do have a show coming up where we are hoping to have a background radio playing, and then the radio speeds up as the action onstage speeds up - I was going to do this through a messy QLab crossfade into a faster file, or possibly with a 'hidden' crossfade a few lines before the required point, but your setup with Ableton controlled from MIDI cues in QLab would give a better result.

 

Again, thanks for your post and hope you will come back to follow this up with some further info - QLab's something I've been getting really involved with but it's been rare to see any feedback from any real high-level use.


>Currently we are running a dedicated Mac Mini QLab setup, with ADAT outs into an LS9. We generally mix with component output as you suggest - different sources to each output, mixed down on an LS9. I'm interested in how you say this reduces some of the functionality - we don't have any problems with the fades etc working across multiple outputs, or being specific to just one.

 

So the two approaches are: (1) QLab outputs routed directly to mixer outputs (i.e. outputs 1&2 of QLab routed to FOH L&R, 3&4 to Upstage L&R, etc.) versus (2) having QLab send specific content to specific mixer channels which are then routed to a variety of outputs (outputs 1&2 are Music, outputs 3&4 are Atmos, etc).

 

In approach (1), if you have a piece of music you can send it to outputs 1-16 as you see fit for that specific cue, and if halfway through you want to, you can easily shift the balance to other speakers. In approach (2) you have your music coming into the mixer on two channels, which are then routed to a bunch of outputs. It's trickier then to change where that audio is going whilst it is playing. That is the key flexibility you lose with approach (2), but of course approach (2) has a bunch of benefits too, and there are workarounds to these limitations.
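The trade-off can be sketched in a few lines (speaker names are assumed for illustration): in approach (1) each cue carries its own per-output levels, so shifting the balance mid-cue is just a matter of sending new levels.

```python
# Sketch of approach (1): the cue itself holds a level for every output,
# so the playback software can rewrite the speaker balance mid-cue.
# Output names are assumptions for illustration.

OUTPUTS = ["FOH L", "FOH R", "Upstage L", "Upstage R"]

def cue_routing(levels: dict) -> dict:
    """Fill in 0.0 for any output the cue doesn't address."""
    return {out: levels.get(out, 0.0) for out in OUTPUTS}

# Start the music front-of-house...
start = cue_routing({"FOH L": 1.0, "FOH R": 1.0})
# ...then move it upstage halfway through, simply by sending new levels.
# Under approach (2) this would mean re-routing desk channels instead.
shifted = cue_routing({"Upstage L": 1.0, "Upstage R": 1.0})
```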

 

In the world of QLab, where more than 16 output channels is a reality, it is getting easier to use approaches (1) and (2) simultaneously and get the best of both worlds.

 

>One thing we also do which you haven't touched on is that we use the MIDI Scene recall on the LS9 with recall cues in QLab - so we can recall a cue with the levels set (or even with a fade time) at the top of a cue group, adjust as needed, and save the scene down if appropriate to make the adjustments permanent. This also gives us the ability to ride the levels on a physical fader, as you suggest is missing when mixing down to L&R, SL, SR etc.

 

That is exactly what I was trying to touch on! With approach (2) you are putting your playback audio volume and routing into the hands of the mixing desk rather than the hands of QLab. Whilst mixing desks give you great hands-on control of audio levels, they are rubbish at adjusting where sounds are going over the course of a cue, and that of course is what QLab is great at.

 

>You're right about dropping out of loops - QLab simply can't do this. ... Firing a MIDI cue from QLab to drop out of an Ableton loop sounds

>much neater in some situations.

 

Exactly, you can create musically seamless joins between bits of music.

I once did a show where the director wanted a 5-minute piece of Philip Glass to last through a 60-minute act, looping parts of the music, then moving onto the next bit of music at certain cue points in the text. Ableton let me do this seamlessly.

 

>I'm interested in the use of the BCF2000 - using this to control levels within Ableton fired from within QLab sounds very neat, and gives the

>level of control that we have without needing the LS9.

 

Indeed, QLab plays back the audio that needs to be exactly repeatable from night to night. On shows with a lot of microphones QLab will often be routed directly into the DME64 that the mixing desk is routing into, bypassing the mixing desk altogether.

 

>With your BCF, and a setup with VO and SFX in QLab, and music in Ableton, do you have any control over the QLab effects via the BCF2000?

Nope, none, and I prefer to keep them separate, because as you can imagine it gets very messy otherwise.

 

>I'm interested in the pitch/time manipulation...

 

You can very easily do real-time adjustment of pitch by automating the global song tempo via MIDI control changes.

Some examples of where this is useful:

- Stopping a piece of music with the record player slowing down effect

- Going into slow-motion, or fast-forwarding through

- Subtly slowing something down over a long period of time to create variation in a looped track

- Creating momentary glitches in the soundtrack
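The first of these - the record-player slowdown - reduces to sending a ramp of global tempo values; a sketch, with start/end tempos and step count chosen purely for illustration:

```python
# Sketch of a "record player slowing down" effect as a ramp of global
# tempo values, the sort of thing one might automate via MIDI into Live.
# The tempos and step count here are illustrative assumptions.

def tempo_ramp(start_bpm: float, end_bpm: float, steps: int) -> list:
    """Evenly spaced tempo values from start_bpm to end_bpm inclusive."""
    return [round(start_bpm + (end_bpm - start_bpm) * i / (steps - 1), 2)
            for i in range(steps)]

slowdown = tempo_ramp(120.0, 0.01, 5)  # grinds almost to a halt
```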

 

And of course you can change the pitch of any given audio file with the click of a mouse, with or without affecting its duration.
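The arithmetic behind that mouse click: a shift of n semitones corresponds to a playback-rate (frequency) ratio of 2^(n/12); with warping, the pitch ratio is applied while the audio is time-stretched so the duration stays fixed.

```python
# Standard equal-temperament pitch arithmetic: a shift of n semitones is
# a frequency (or playback-rate) ratio of 2 ** (n / 12).

def semitone_ratio(semitones: float) -> float:
    """Frequency ratio for a pitch shift of the given number of semitones."""
    return 2.0 ** (semitones / 12.0)

octave_up = semitone_ratio(12)          # doubles the frequency
fifth_up = round(semitone_ratio(7), 4)  # ~1.4983, close to a just 3/2
```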

 

It's also worth saying that a lot of what Ableton can do for playback of audio it can also do to live inputs, for example microphones routed into its inputs. Ableton works as a very sophisticated effects processor for microphones and has a bunch of other creative effects (like looping or freezing incoming audio) that can be automated from QLab too. Ableton will quite happily sit there putting Altiverb on whatever you send to it, if that's all you want.

 

regards,

Gareth Fry

 

Sound Designer

www.garethfry.co.uk


>In approach (2) you have your music coming into the mixer on two channels, which are then routed to a bunch of outputs. It's trickier then to change where that audio is going whilst it is playing. That is the key flexibility you lose with approach (2), but of course approach (2) has a bunch of benefits too, and there are workarounds to these limitations.

For that, I'd program two scenes into the LS9 from the component ins, with the two alternate output mixes, and set a fade time between them. We've also done similar manipulation using direct MIDI control of parameters on the digital desk - in one setup (again the piano in the adjoining room), to create a 'lift' in the sound when anyone entered through the adjoining door, we set up a MIDI fade controlling the mid/high EQ gain on that channel, and via QLab gave a 6 dB(ish) lift on the EQ when the door opened. Looked and sounded very natural, but this was using MIDI control of the EQ on the LS9 for what you would be doing in Ableton.

 

Alternatively, we could use a mix of method 1 and 2 as you suggest - mix mostly in component, but in the cues where routing is needed to change over the cue, set up a scene on the LS9 with two different output mixes, and mix between the outputs on QLab to achieve the move.

 

>It's also worth saying that a lot of what Ableton can do for playback of audio it can also do to live inputs, for example microphones routed into its inputs. Ableton works as a very sophisticated effects processor for microphones and has a bunch of other creative effects (like looping or freezing incoming audio) that can be automated from QLab too.

Again, this is interesting - we've used MIDI control (as in the example above) to automate reverb on live inputs as well. We had a show with some live inputs from lapel mics, and we automated (via MIDI, within a QLab cue) the mix levels of the reverb inserted across those channels.

 

Ableton seems like it would reduce our desk space considerably, as we could do just about everything we do on the LS9 within Ableton. We also can't replicate the dropping out of loops, or the speed or pitch shifting, within our current setup.

 

Again thanks for your input - it's a very interesting read.

