Opera Australia’s Madama Butterfly on Sydney Harbour requires plenty of audio innovation to stay afloat.
Text: Robert Clark
Since the Handa Opera on Sydney Harbour (HOSH) series began four years ago, Opera Australia’s lavish outdoor productions have created complex live-sound scenarios for sound designer Tony David Cray. Not to mention the wind, rain and a few oddly tuned ferry horns. This year’s production of Puccini’s Madama Butterfly once again took some real ingenuity to pull off.
From the beginning, questions around how to amplify the voice in such an environment were the most important for Cray and the team of engineers from Norwest Audio, engaged specifically for these productions. “An operatic voice is one of the most powerful instruments,” says Cray, “so we had to ask: what technology do we use? What mics do we use? What transmitters do we use in that chain to try and capture the nature of the voice and share it with the audience?” Headset mics with in-ear monitoring were really the only option, and Cray admits that this is actually a pretty radical thing in opera – not just due to the vocal amplification aspect, but also because wearing in-ears creates unique problems for operatic performers: “A couple of singers found that the headset’s closed acoustic was very isolating, and the way an opera singer will create the node, there’s a lot of resonances; there’s very physical processes in generating that sound. Having the ears blocked creates an inward pressure and it’s very disorientating. It would reinforce certain frequencies, too.”
For many of the singers, having time to rehearse with the in-ear monitoring systems – a combination of Sennheiser G2s, Shure PSM200s and UR4Ds using Shure UR1M transmitters – was enough to overcome the discomfort, but for others who still experienced difficulties, a creative solution was available thanks to one of the more experienced singers. As Cray explains, “Jonathan Summers devised the idea of using the generic in-ears, taking the foam cover off and just getting the transducer taped in his ear. So he had like a little piece of spaghetti going into his ear but he could still hear ambient sound. It gave him enough present sound of the orchestra to time and to pitch to, but it was open.” This technique is now lovingly referred to as the ‘Jonathan Summers Method’, and some singers opt for it, while the majority persevere and wear the headsets as is.
Hyeseoung Kwon as Madama Butterfly at her wedding to Pinkerton. Photographer: James Morgan
Handling the audio feed from all these headsets, in what could only be described as a ‘guerrilla encampment’ under the stage, is Norwest’s John Watterson. His role as monitoring engineer also includes piping audio from the orchestra, which is enclosed in an acoustically isolated pit behind him. There is more room in the pit than in previous years, but once performances begin, there is still no chance of any technicians squeezing in to make adjustments. Thinking about ways to minimise the likelihood of that scenario, Cray and the Norwest team installed Aviom A-16II personal mixers on the musicians’ stands, and a lot of groundwork was laid to coach the players to be responsible for their own microphones. The string and brass players all have DPA 4099s clamped to their instruments, while other members of the orchestra have a combination of Schoeps CMC 6-MKs, Neumann TLM103s and Royer R-122 ribbon microphones. Cray describes the scenario as “like a close mic studio gig”, and not just because of limited space. Previous experience has shown that such a boxed-in environment typically creates a build-up of low-to-mid frequencies that can be tricky to eliminate down the signal path. Better to mic close and add space later (with the help of an Altiverb reverb Cray modelled on the Opera House Concert Hall years ago).
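The “mic close, add space later” approach rests on convolution reverb: Altiverb works by applying a sampled room impulse response to the dry close-miked signal. As a rough illustration of the underlying operation (toy numbers and function names of my own, not anything from the production), the principle is plain discrete convolution:

```python
# Sketch of convolution reverb: each dry input sample launches a scaled
# copy of the sampled room's impulse response into the output.
# Toy numbers for illustration only.

def convolve(dry, impulse_response):
    """Discrete convolution of a dry signal with a room impulse response."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out

# A single click (unit impulse) through a toy 'room' returns the room's
# decay tail unchanged -- the defining property of convolution reverb.
room_ir = [1.0, 0.5, 0.25, 0.125]  # hypothetical decaying tail
print(convolve([1.0, 0.0], room_ir))  # [1.0, 0.5, 0.25, 0.125, 0.0]
```

In practice the impulse response is a multi-second recording of a real space (here, Cray’s sampled Concert Hall), and the convolution runs as an FFT-based process rather than this direct loop.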
In contrast to the pit, the immense size and odd shape of the auditorium created tricky coverage and delay issues for Cray and his team. It is, after all, far wider than it is deep, the opposite of most venues. This meant the task of finding a stereo centre was quite a challenge. They ultimately took a predominantly front-fill approach, with seven speakers embedded into the front of the stage at a very shallow angle, with the driver of each pointing “to about 60% up the auditorium”. The flown array of Adamson Y18s, in concert with left and right stacks at stage level, provides extra coverage for the sides and rear of the auditorium.
The expansive stage means the amount of delay is considerable. “If I’m standing down the front of the stage,” says Cray, “my voice is going to take 12ms to get to the first row, but if I’m standing towards the back it’s going to take 45ms”. The solution was to calibrate the throw of each speaker to an artificial zero point about 4.5m back from the front of the stage, which is where most of the cast perform. The delay from one side of the stage to the other isn’t exactly minimal either. Cray estimates the acoustic delay between singers on either side of the stage is 60-100ms. He further points out that at some tempos, that can represent a sixteenth note.
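The figures Cray quotes fall straight out of the speed of sound. As a back-of-envelope check (the helper functions are my own, assuming the standard ~343m/s at room temperature): 12ms corresponds to roughly 4m of throw, 45ms to roughly 15m, and at 150bpm a sixteenth note lasts exactly 100ms, the top of that 60–100ms cross-stage window.

```python
# Rough check of the delay figures quoted above, assuming the standard
# speed of sound (~343 m/s at around 20 degrees C). Helper names are mine.

SPEED_OF_SOUND = 343.0  # m/s

def acoustic_delay_ms(distance_m: float) -> float:
    """Time for sound to travel distance_m, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

def sixteenth_note_ms(bpm: float) -> float:
    """Duration of a sixteenth note at a given tempo (quarter note = beat)."""
    return 60.0 / bpm / 4.0 * 1000.0

print(round(acoustic_delay_ms(4.1), 1))   # 12.0 -> ~4 m of stage depth
print(round(acoustic_delay_ms(15.4), 1))  # 44.9 -> ~15 m of stage depth
print(sixteenth_note_ms(150))             # 100.0 -> a sixteenth at 150 bpm
```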
Foldback for the singers is provided via a combination of EAW JF80 and Adamson M15 low-profile wedges installed above the speakers at the front of the stage. This helps singers who aren’t relying on in-ears for timing and, as Cray puts it, adds a level of ‘energy’ to the performance space. It is a complex audio environment for singers to navigate, as mezzo-soprano Anna Yun – who plays Suzuki in the opera – explains: “We can hear the front-of-house speakers and there is a fraction of delay there, which is unavoidable. At times, depending on the position on the stage, we can also pick up the sound coming directly out of the orchestra pit [usually the brass], so there can be three different timings for the same phrase [including that of the in-ears].” Yun insists that these issues were not insurmountable, however, and that compensating for the delay became “second nature” by the end of the rehearsal period.
THERE’S AN APP FOR THAT
One thing pit musicians can never simply adjust to is loud percussion reverberating in a closed, tight space. This problem is especially acute in an opera like Butterfly, where a famously loud gong is an essential part of the score. “It would just cane the rest of the pit,” says Cray, “so I suggested to them that we record it and play it back, and they were open to that idea.” As time was running out to figure out how best to achieve this, he sat down one night and “made a little app on the iPhone as a joke”. This turned out to be just the right tool for the job, though. After he created an interface of a gong that is simply tapped on cue, the phone was mounted on a stand and routed into the foldback path via the Aviom system for any musicians wishing to hear it. It’s otherwise totally silent in the pit and, being pre-recorded, perfectly balanced in the front-of-house mix every time. “I think this is a good example of how we can just do things slightly differently to achieve a good outcome,” says Cray.
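The logic of the gong app is simple to sketch. The class and names below are invented for illustration (the real app’s internals aren’t described in the article); the point is that a tap always plays back the same pre-recorded buffer, which is why the level sits identically in the mix every night:

```python
# Hypothetical sketch of a sampled-gong trigger: one pre-recorded,
# pre-balanced buffer, returned identically on every tap.

class SampledGong:
    def __init__(self, sample):
        self.sample = sample  # pre-recorded audio, balanced once
        self.hits = 0

    def tap(self):
        """On-screen gong tapped on cue: hand the same buffer to the
        output routing (foldback and front-of-house) every time."""
        self.hits += 1
        return self.sample

gong = SampledGong(sample=[0.0, 0.9, 0.7, 0.4])  # toy waveform
first, second = gong.tap(), gong.tap()
print(first == second)  # True: identical balance, hit after hit
```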
AND FOR THAT…
It’s also a good example of the kind of creative thinking behind his decision to ‘outsource’ the show’s digital signal processing (DSP) to some unconventional platforms. “Primarily for the audiophile aspect,” says Cray, “the EQ and the compression algorithms on the [DiGiCo SD7] console are good, but they really get exposed when dealing with orchestral music and opera. The operatic voice is a fearsome instrument; it’s quite a challenge to deal with.” He decided to start ‘farming out’ the DSP using his own plug-ins of choice, particularly FabFilter’s Pro-Q, but then came across the problem of how to tie them all in to an interface he could easily use on the fly during performances. Eventually the highly-customisable Lemur platform was chosen, which allowed him to create a graphical environment on an iPad and map it into Ableton Live.
The key parameters on the Lemur interface were determined by the EQ and filters Cray uses most on the Euphonix System 5 console in the Opera House recording studio. This constrained the number of filters in the Pro-Q plug-in to four, crucially streamlining his process. In further service of creating an intuitive and efficient DSP environment, Cray added an Akai LPD8 hardware controller within easy reach in the control room, with dedicated EQ just for the orchestra. Cray recalls a night where the wind was particularly bad, and having such easily accessible and carefully chosen controls enabled him to respond quickly to a potentially ugly scenario. “I was dreading the notion of the geishas coming on,” he says, “because I knew it would be this wild flapping wind sound when I suddenly open 24 mics. But I was able to, in a moment, look at my little hardware controller and quickly assign a filter into the chorus bus. So as they came on stage I could instantly just have a steep high-pass filter and roll it up to a point where I almost lost them, but got rid of all of the wind. And that was during the show – seeing a massive problem and actually implementing a change that just required one little turn of a knob. It’s fantastic.”
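The steep high-pass Cray describes is a standard tool: wind rumble lives well below the voice, so a filter with its corner rolled up under the chorus removes the flapping while leaving the singers intact. Pro-Q is a commercial plug-in, but the core building block of such a filter can be sketched with the widely used biquad coefficients from the RBJ “Audio EQ Cookbook” (the code and numbers below are my own illustration, not the show’s settings):

```python
import math

def highpass_biquad(fc: float, fs: float, q: float = 0.707):
    """Second-order high-pass coefficients (RBJ Audio EQ Cookbook)."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    cosw0 = math.cos(w0)
    b0 = (1.0 + cosw0) / 2.0
    b1 = -(1.0 + cosw0)
    b2 = (1.0 + cosw0) / 2.0
    a0 = 1.0 + alpha
    # Normalise so the output coefficient a0 == 1
    return [b0 / a0, b1 / a0, b2 / a0, (-2.0 * cosw0) / a0, (1.0 - alpha) / a0]

def process(samples, coeffs):
    """Direct form I filtering of a block of samples."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

fs = 48000.0
coeffs = highpass_biquad(200.0, fs)  # roll off rumble below ~200 Hz

# One second each: a 30 Hz 'wind' tone is heavily attenuated,
# while 2 kHz vocal content passes essentially untouched.
wind = [math.sin(2 * math.pi * 30.0 * n / fs) for n in range(48000)]
voice = [math.sin(2 * math.pi * 2000.0 * n / fs) for n in range(48000)]
print(rms(process(wind, coeffs)) < 0.05)   # True
print(rms(process(voice, coeffs)) > 0.6)   # True
```

A second-order section like this gives 12dB/octave; stacking sections steepens the slope, which is what “steep high-pass” filters like Pro-Q’s do internally.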
SAVING FOR A RAINY DAY
Of course, filtering on the fly is one thing, but troubleshooting during a performance is another altogether. With wet weather an unavoidable reality, redundancy was essential. The DiGiCo SD7 console in the site control room (situated in a tower halfway up the auditorium) was configured by Norwest head of sound Adrian Riddell to run two simultaneous 64-channel drive chains divided into ‘Engine A’ and ‘Engine B’, which can be manually switched via a MADI bridge system in the event of a failure. And if the digital network goes down, they also have the option of switching to the console’s analogue outputs, which are fed into Dolby Lake Processors that handle both digital and analogue inputs. There is also comprehensive DSP redundancy, with two independent instances of Ableton Live (each with a full suite of plug-ins) running simultaneously off networked Mac Minis with RME cards.
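The redundancy scheme boils down to hot-standby pairs: both engines run the full signal path at all times, and the operator switches the output to the survivor when one fails. A minimal sketch of that selection logic (class and method names invented; the real switch is the MADI bridge hardware, not software):

```python
# Hypothetical sketch of the Engine A / Engine B hot-standby logic.
# Both drive chains process audio continuously; on a failure the
# operator manually switches the output to the healthy engine.

class DriveChainSelector:
    def __init__(self):
        self.healthy = {"A": True, "B": True}
        self.active = "A"

    def report_failure(self, engine: str):
        self.healthy[engine] = False

    def switch_over(self) -> str:
        """Operator-initiated changeover to the standby engine,
        permitted only if the standby is itself healthy."""
        standby = "B" if self.active == "A" else "A"
        if self.healthy[standby]:
            self.active = standby
        return self.active

sel = DriveChainSelector()
sel.report_failure("A")
print(sel.switch_over())  # B
```

The same pattern repeats one layer down (dual Ableton Live instances) and one layer out (digital network vs analogue outputs into the Dolby Lake Processors), so no single failure silences the show.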
Of course, such a long and complex processing chain comes at the expense of latency. Cray says the round trip takes 12ms. “But on this crazy site,” he adds, “the vocal stems themselves need to be delayed at least 15ms, so I was in a window that allowed me to do that. Which is just as well, because it’s pretty scary when you take it out of line and listen to what’s going on.”
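The arithmetic of that window is worth spelling out: the external processing only works because its latency is smaller than the delay the vocal stems need anyway, so the round trip "hides" inside the alignment delay. A tiny check with the figures from the text (constant and function names are mine):

```python
# Latency budget for the external DSP chain, using the figures quoted
# in the article. Names are illustrative, not from the production.

PROCESSING_ROUND_TRIP_MS = 12.0  # console -> Mac Mini DSP -> console
MIN_VOCAL_STEM_DELAY_MS = 15.0   # alignment delay the site needs anyway

def residual_delay_ms(required: float, processing: float) -> float:
    """Delay still to be added after processing latency is absorbed."""
    return required - processing

# Positive result: the round trip fits, with 3 ms of delay left to add.
print(residual_delay_ms(MIN_VOCAL_STEM_DELAY_MS, PROCESSING_ROUND_TRIP_MS))
```

Had the round trip exceeded 15ms, the processed vocals would have landed later than the alignment delay allows, and the whole outboard scheme would have been unusable.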
THE SHOW WILL GO ON
The technical experience of Cray and his Norwest team certainly comes to the fore in these large-scale scenarios, but refreshingly, his emphasis is always on the big picture elements of his job. The extensive research into third-party apps and plug-ins, the programming, the sophisticated redundancy, the intricate DSP; all of this serves ultimately to simplify his role to the point where detail fades into the background. “The main focus,” he says, “is to try and bring opera to a broader audience, and at the same time, to always remain as true as possible to the art form.” With the HOSH series recently confirmed for another three years, it’s good to know Cray & Co. will have more opportunities to refine and innovate in this genre.
Sound Designer and FOH Mix Engineer: Tony David Cray – Opera Australia / Sydney Opera House
Head of Sound: Adrian Riddell – Norwest Productions / Onset Audio
Systems Engineer: Matt Whitehead – Norwest Productions
Monitor Engineer: John Watterson – Norwest Productions
RF Engineer: Steve Caldwell – Norwest Productions
Stage Technician: Dane Cook – Norwest Productions
Radio Mic Fitter: Alison Bremner – MessageStick Productions
Radio Mic Fitter: Roy Jones – Norwest Productions
Secondment: Brittany Wright – Queensland University of Technology