From Concept to Screen: Virtual Production in Film and Television

Hi everyone, I'm Kali Bateman

here for Mixing Light.

And today I've got Bryn Morrow here,

who's the virtual production supervisor

for Steel Bridge Studios.

It's really exciting to have you here

to have a chat today about volume work.

We met grading some car commercials together,

which actually I didn't know were shot on a volume

until about halfway through the session.

And I turned off a resize and went,

oh, look, there's the edge of the universe.

And it was really impressive stuff.

Once you started to look for it,

you could start to say, oh yeah,

maybe that's something to do with the volume,

but it was pretty seamless stuff.

And we got to talking about how it all works.

And I realized how little I knew

about how volume production works.

So very excited to have you here

to shed a bit of light on that today.

Thanks so much for coming.

- Oh, thank you.

I'm very happy to be here.

So thanks for the nice introduction, Kali.

- Such a pleasure.

So can you just tell me,

basically from the very beginning,

you need to have an image to go on these volumes.

So just to try to explain what a volume is,

and you'll probably do a far better job than me,

but a large LED wall or walls

that serve as a background for motion pictures.

So cameras are in front of the volume

and usually actors or props,

some kind of integration of physical sets

in between the volume and the camera.

And then a whole bunch of data is recorded

and the backgrounds are happening in real time

through Unreal Engine.

Have I kind of got the basics down there?

- Yeah, actually, I think you did very well there.

I mean, there's obviously variations

to how virtual production is done,

but that is certainly one way of doing it.

And in the project that you were grading for us,

that was essentially how it was done.

Like the backgrounds were

projected onto an LED panel,

like a large sort of screen,

which is a combination of a

bunch of very small panels

that we combine together.

And that particular one,

if memory serves me right,

that was about 15 meters wide

by four and a half meters high for the main wall.

But yeah, so I mean, essentially a volume

is a combination of panels

that are sort of stacked together to make a wall.

And then whether you have a

ceiling LED panel in there

and a couple of sides

or reflection panels as well

that combine to create what we call a volume.
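To make the scale concrete, here's a quick back-of-envelope sketch of how small LED cabinets combine into a wall like the one described. The cabinet size, pixel pitch, and counts are illustrative assumptions, not Steel Bridge's actual hardware.

```python
# Illustrative only: assumed 500 mm cabinets with a 2.6 mm pixel pitch.
cabinet_mm = 500                               # one square LED cabinet
pitch_mm = 2.6                                 # spacing between LED pixels
px_per_cabinet = round(cabinet_mm / pitch_mm)  # ~192 px per cabinet side

cols, rows = 30, 9                             # cabinets across and up
width_m = cols * cabinet_mm / 1000             # 15.0 m wide
height_m = rows * cabinet_mm / 1000            # 4.5 m high
resolution = (cols * px_per_cabinet, rows * px_per_cabinet)
print(width_m, height_m, resolution)           # 15.0 4.5 (5760, 1728)
```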

And you don't necessarily have to have

digital assets in there.

So that particular example, yes,

those were all virtually created

from our virtual art department.

And so those were full 3D environments,

but we've done many projects

where we have live action plates that we filmed

and then those are projected onto the panels.

But essentially that's what's happening.

We're projecting the digital environment

onto the back, onto the LED panels.

And we have sort of a 3D space recognition

that we understand where everything should be

in the volume in terms of its 3D positioning.

And the idea is for us to translate that

to the real world set that we've created as well.

And then that needs to exist

in our digital environment.

So we have specific distances that correlate

to distances to the panels.

And essentially you're able to then get parallax

purely because the camera that we have,

which is obviously a production camera

and we could go into all the different sorts of cameras

that we would use for virtual production,

but that camera is being tracked in real time.

So we have a bunch of motion capture cameras

that we create our own little volume for.

And those are constantly tracking

the physical production camera in space.

And then that is translated

into our digital environment.

And we have a digital replica camera

in the digital environment.

And those two essentially are matched together.

So that's how when the practical camera moves,

then the background is moved in relation to that.

And its field of view changes based on lensing,

once we adjust those.

And so the parallax

shifting in the background,

that's where that all sort of happens.

But that would happen more

for digital environments.

But if it's a live action

shoot where we've captured plates,

it doesn't exist the same way.

Of course, that's not

a digital environment anymore.

So the parallax is still

sort of there in some senses

because we do a sort of

two-and-a-half-D workflow.

But in that sense, it's not quite as 3D

as the 3D virtual ones.
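To make the parallax mechanism concrete, here is a hedged sketch of the standard "generalized perspective projection" math used for camera-tracked walls. The geometry and numbers are illustrative, not the studio's actual code; the point is that the frustum onto the fixed wall is recomputed from the tracked camera position every frame.

```python
import numpy as np

def wall_frustum(eye, lower_left, lower_right, upper_left, near=0.1):
    """Off-axis frustum extents (left, right, bottom, top) at `near`
    for an eye (the tracked camera) looking at a rectangular wall."""
    vr = lower_right - lower_left
    vu = upper_left - lower_left
    vr = vr / np.linalg.norm(vr)          # wall's right axis
    vu = vu / np.linalg.norm(vu)          # wall's up axis
    vn = np.cross(vr, vu)                 # wall normal, toward the eye

    va = lower_left - eye                 # eye-to-corner vectors
    vb = lower_right - eye
    vc = upper_left - eye

    d = -np.dot(va, vn)                   # perpendicular eye-wall distance
    s = near / d
    return (np.dot(vr, va) * s, np.dot(vr, vb) * s,
            np.dot(vu, va) * s, np.dot(vu, vc) * s)

# A 15 m x 4.5 m wall in the z=0 plane; as the tracked camera slides
# sideways, the frustum (and the background rendered into it) shifts,
# which is what produces the parallax.
ll, lr, ul = (np.array(p, dtype=float) for p in
              [(-7.5, 0, 0), (7.5, 0, 0), (-7.5, 4.5, 0)])
for cam_x in (-2.0, 0.0, 2.0):
    print(wall_frustum(np.array([cam_x, 1.8, 3.0]), ll, lr, ul))
```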

- Wow, okay.

I think for me, it kind of comes together

when you tell us how the camera itself

and the motion capture of the camera

is linked to a 3D camera.

I think that's the bit

that I didn't quite understand

before our discussion,

because it all seems quite magical to me.

So the process is that you

would get those environments

digitized prior to the shoot, right?

So if it's a fully 3D environment,

that's quite a bit of work that's gone in

prior to actually cameras rolling.

Can you tell me a little bit

about how you take an image?

And it can be a 3D one or it can be a,

something that you've sourced in the real world.

How do you then get that into the volume?

- Yeah, I mean, if we could start

from sort of the 3D side of it,

our virtual art department,

obviously we've been getting a brief

as to what that environment wants to be

and how realistic it also needs to be.

Because of course, it's subjective,

working in an artistic sort of realm,

there could be multiple sort of environments

that you can make,

but in certain, I mean, in most cases,

it is realistic as we can make it.

And then of course, we very

much create the environments,

sort of we like to Hollywood the environment.

So in that sense, we build a full 360 environment,

but we do Hollywood it out

to a specific field of view,

at least for a range of motion for the camera

that we're expecting,

so that we don't create too

huge an environment because--

- What does that mean, Hollywood it out?

I've never heard that.

- It's a term from back in the day

in production, you know, you're building a set to camera,

you don't have to build the full environment,

you know, the whole world, if you're building sets.

And so we consider it just Hollywooding it

to the camera.

I don't know, it's a term

that I picked up way back.

So I don't know.

- I love it.

- So anyway, we ultimately build the environment

to the camera's, you know, like the range of motion

that we're expecting it to sort of go along

and we're able to obviously deduce that

by doing previs or techvis

and also obviously, you know, creating boards

and we have a, you know, we

get quite far into our previs

prior to, you know,

projecting it up onto the screen.

So we already sort of have a pretty clear idea

of how much of the environment's gonna be seen

during the production.

I mean, that doesn't mean that we're not giving

the director as much

opportunity to change the camera

as, you know, they generally would like

and also the DP for that matter, obviously.

So, but we certainly give

them a bit of a constraint

and we've already built it to sort of, you know,

we're expecting, for instance, you know,

we wanna see behind us,

we're gonna be shooting reverses,

we're gonna be shooting sort of POVs

and sort of wide vistas or whatever.

And so we'll build it to those constraints.

But again, there's many sort of factors

with regards to optimization and things like that,

which the reason why we don't build everything out

is purely because of optimization

and being able to run at the required frame rate

that we're gonna wanna roll at.
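In back-of-envelope terms, that optimization target is just a per-frame time budget; the numbers below are illustrative, not measured figures from any shoot.

```python
fps = 25                            # e.g. rolling at 25 fps, genlocked
budget_ms = 1000 / fps              # 40 ms: everything must fit in here

render_ms = 31.5                    # hypothetical measured scene render time
tracking_ms = 2.0                   # hypothetical camera-pose overhead
headroom_ms = budget_ms - render_ms - tracking_ms
print(f"{headroom_ms:.1f} ms headroom")   # negative => cut scene complexity
```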

And so again - we could go down the rabbit hole.

We could sort of get into

that later if you want to, but--

- Yeah, sure, I'd love to.

- But ideally, you know, the idea would be

once we've pre-vised out those environments

and, you know, we can have

multiple environments in a day

that we wanna shoot,

we'll then have a

pre-light at least with the camera

and we'll project those onto the wall.

Now, when I say project those onto the wall,

we, within Unreal Engine, for instance,

we have something called

nDisplay, which is a plugin,

which essentially is a projection plugin

where you create planes within the scene

which represent the panels,

represent the position of the panels.

And that nDisplay node, as we call it,

essentially means we have the environment all around,

and then with the camera,

it projects onto a very specific position

of where the panels should exist in space.

And that becomes, that's kind

of where the trick really lies

in understanding the real distance

that things relate to the real world,

from the digital to the real world,

which is why we tend to LIDAR scan our sets.

So we will go ahead and do a full LIDAR scan

of the set environment,

whether it's just in a studio,

where the panels are in position,

and where our camera is expected to be;

we create something called a home base

so we know where the camera wants to always be

at the beginning of a shoot.

And so we then do a LIDAR scan

and from that point, we filter

that back into Unreal Engine.

So we actually have real world scale positioning

of everything.

And once we've done that,

we'll then align our nDisplay node in Unreal

to where the panels are, and

then where that set exists.

And then we can work out distances

from practical positioning.

Like obviously, for instance, if we had the art department

bump in some foreground elements or whatever,

those elements need to exist

in the space digitally as well.

Otherwise what'll end up happening

is things will overlap on the screen,

and then something in the background

is supposed to be in front, et cetera, et cetera.

And so you need to be very

clear about where the opening is

and how you align that.

And it's all about

calculation, real world calculation.

And then that needs to be translated

into the digital space and vice versa.
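A minimal sketch of that real-world-to-digital translation, assuming metre-based LiDAR survey points and Unreal's centimetre units; the points themselves are made up for illustration.

```python
import math

def lidar_to_unreal(point_m):
    """Survey metres -> Unreal's native centimetre units."""
    return tuple(axis * 100.0 for axis in point_m)

home_base = lidar_to_unreal((0.0, 0.0, 0.0))      # camera's "home base" mark
wall_centre = lidar_to_unreal((0.0, 4.2, 1.5))    # surveyed panel position
pavement_edge = lidar_to_unreal((0.0, 3.1, 0.0))  # practical set piece

# The digital twin of the set piece must sit at this same distance from
# the panels, or foreground and background will overlap incorrectly.
print(math.dist(pavement_edge, wall_centre))      # target for the digital set
print(math.dist(home_base, wall_centre))          # camera-to-wall reference
```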

- Is that why it's called the volume?

Because it's all about the space,

the actual depth and distance between things?

Is that where it gets

its name, or is it from something else?

- For me, I think volume

because it's a volume of light.

So when you have multiple panels

and generally it sort of

encompasses you as the environment

and you're creating

essentially an exterior environment

inside an interior environment,

that becomes a volume of

light, its own environment.

And so that's how I...

That's where I'm getting it from.

I still like what you're saying though.

I don't mind that.

I can think of it that way.

- I've been trying to just,

as I've thought about speaking with you about this,

trying to imagine where the term came from.

So I'm looking for it everywhere.

It sounds so futuristic, but that is fascinating.

So you've kind of got these projected elements

as well as physical elements.

And that was certainly the case

in what we worked on together.

There was a foreground that was physically bumped

into the space.

And then on top of those two

things just existing together,

you've also done a LiDAR scan

of those elements in the space

and you fed them back into the digital simulation

of the environment.

- Yes, that's right.

- That must be the trickiest

thing, getting that seam

between the digital and the

real working seamlessly.

- Yeah, that's definitely the trick.

And we tend to build a lot

of, in our digital world,

and depending on what sort of practical set

that we have available to us,

we tend to build into our

digital set little tricks.

Like, for instance, in that Sydney scene,

I know it's hard to talk

about it when you can't see it,

but we built in a pavement edge

where we're expecting the

edge of the panels to exist.

And so we build in where

that edge of the pavement wants to be,

which happened to be in the digital set.

And then we decided, okay,

we'll replace that pavement

with the real pavement

edge, so that becomes practical.

And then we align that up to our digital pavement

and we delete our digital pavement,

and then that becomes the new pavement position.

And so when the camera

moves, it should stay aligned.

I mean, I think there's just,

in those instances, you're then locked into

a distance to the panels.

And so I think one of the main advantages,

I think, in volume and in sort of quick shooting,

if you wanna do that,

is to be able to cheat the

distance backwards and forwards

and move things around digitally quite quickly,

So in other words,

if you build something into your set,

that is now physically connected to the set,

and it's very heavy, for instance,

when you start spinning the environment around,

and I guess we haven't

really kind of gone into this,

but for instance, if we wanted to do a reverse,

there's no panels behind us,

so we spin the environment around digitally,

and then suddenly that's the reverse of the shot

and keep the camera in the same position,

so that you're kind of cheating it in that sense.

But if you've now built a practical floor

which wants to connect to your panels,

that's incorrect, right?

So then that would need to move too,

so you'd have to flip that around,

or you don't always wanna be doing wide shots

that need to obviously

align with each other in the cut

and things like that, so you just need to be quite,

you don't wanna build too much infrastructure

into your foreground,

which doesn't have the ability to move,

I guess is what I'm saying.

- Yeah, unless you have it on a turntable.

- That's right, yeah.

And a lot of that is the case,

and people do tend to do that

when it comes to much

larger virtual production shoots,

which is a massive advantage,

and if there's that sort of budget thrown at it,

then I will say yes and put my hand up

and say let's do it every single time,

because it just helps out,

and helps the production designer out too,

and of course, we also then,

every time there's new props or anything like that,

we try to scan those as quickly as we can,

but those are just sort of

photogrammetry style scans,

not full LiDAR scans,

because, like I said,

when we spin the environment around,

there could be elements that were created

as the bumped-in practical set

that then suddenly need to exist in the digital world,

you'll actually see them through the screens,

and if you haven't created those exact props,

then they won't be able to be extended out

into the digital world,

so that's why we also scan and recreate all those,

and match them if we can.

- That is absolutely full on,

like the level of precision that you would need.

I know that in the color grading world,

we're typically not working

with that level of precision

in terms of getting

everything measured and accurate,

it's sort of the approach is

more about getting a feeling,

and instead of doing like really tight shapes,

we're typically doing softer shapes,

and sort of being a little

bit more gentle in our approach

to manipulating the image

and trying to do a bit less,

but this is sort of the opposite

where you're doing everything,

you're generating everything.

I imagine you get into the uncanny valley

pretty easily doing that sort of thing.

- Yeah, I mean, and look,

and having sort of broad power windows,

I mean, that stuff is great,

and it certainly lends itself to what it is

that you guys are doing,

when all you have is a 2D image

to manipulate,

but I think for us, it's just,

it can very easily go wrong,

and so we're just always

trying to stay physically accurate

as much as we can before the cheating,

and even though it is a full cheat anyway,

the only way for that to be a reality

is for us to be very strict and constrained

to the physical aspects of what's going on,

and so that needs to sort

of loop back the whole time.

And yeah, I mean, once you create a pipeline

for what it is that you're doing,

these sorts of things don't

have to always be thought about

as intricately.

I mean, ultimately, I will be the one

that needs to keep an eye on that sort of stuff,

and if it goes awry,

because everybody within our brain bar,

which I haven't really sort of discussed,

but they have individual jobs.

- Yeah, but it has now become a universal term

for some reason,

so everybody uses it.

They call it the brain bar.

- That's cool.

So I suppose I'll just do

a little bit of background

about the actual studios themselves.

So Steelbridge is quite a new facility in Brisbane,

and it's been really popular,

and Australia's actually been quite a large adopter

of virtual production worldwide.

So we've got some

facilities in Melbourne as well at,

jeez, I can't remember the name, but--

- NantStudios.

- That's the one. - NantStudios

in the Docklands, yeah.

- That's right, yes.

And VCA have put in some

virtual production facilities there

in Melbourne as well, but Brisbane has Steelbridge,

and then there's some volumes available

on the Gold Coast in the studios there as well.

But Steelbridge are probably

the most active in the region

in terms of doing a lot of work.

They've got their pipeline pretty well down.

They've done quite a lot of commercials.

I'm not so sure about long form,

but I'll have to ask about that,

what you guys have been involved in,

if anything, in that space.

But definitely in terms of commercials,

they would be the ones that

you would go to in our area.

And some of the advantages of

it in the work that I've seen

is we worked on a car commercial together,

and the environments were quite varied

because our landscape in Australia is quite varied.

So they were able to do sort of red desert dirt

and city driving and coastal driving,

all of these environments in one day.

So obviously the practicality of that is great.

You only have to take the cars into one space.

You don't have to freight them around,

and you can go to a lot of places quite quickly.

So, although you're doing

quite a lot more in pre-production,

I think in terms of production,

everything's a bit more sewn together.

But yeah, so in Brisbane,

they've got this great facility

and the pipeline's pretty well sorted out now

and some pretty seasoned staff on board,

including the friend who

we're talking to right now.

- Yes.

Yeah, look, I mean, Steelbridge has been

certainly pushing really hard,

and I think it's been,

I've been very fortunate to

be able to work with them.

I mean, initially, it was a bit of a brainchild

between myself and Colin,

who's the founder and owner of AltVFX.

And, you know--

- That's Colin Renshaw.

- That's Colin Renshaw, yeah.

And essentially,

obviously the new tech was coming in.

This was quite a few years ago now,

but it was just sort of something that we knew

it was gonna become a thing.

And at the same time, AltVFX

were pushing their animation,

which is obviously quite

integral to the work that they do,

a lot of creature work that they do.

And so motion capture was certainly a thing

that they used,

and they used a lot of Xsens suits.

- They were very good at motion.

Very, very good at that.

I remember doing some grades,

and occasionally I'd see people walking past

in those suits.

- Yeah.

- So those suits are sort of like a,

almost like a wetsuit with dots all over them.

- That's right, yeah.

- And then as the character moves around,

those points are captured,

and then they can be transposed onto a 3D model.

Like you could see a

dancing polar bear or something,

but it's just,

I love seeing that behind the scenes thing

of the person doing the silly dance,

and then the polar bear does it behind them.

I think that's always a cool party trick.

- Exactly, yeah.

And so I think that Alt just wanted to push

their motion capture facility,

and actually buy and look into the Vicon cameras,

which are the very high end

motion capture cameras for,

to again, push their

pipeline, their animation pipeline,

but at the same time,

find a permanent place for that as well.

And I think that's where

this sort of started growing

from there, and of course,

with having the motion capture facility,

where it's not an Xsens one

but more a camera-driven one.

You can have multiple people wearing suits

at the same time interacting.

So, you know, there's certainly a lot of fun things

going on here, there,

and we do a lot of that here at the moment as well.

So it's fantastic to be

able to push both parts of it,

and it's quite integral in the

visual effects side of things.

Of course, I'm a visual effects supervisor too,

so it's a very important

aspect of visual effects too.

So that in itself drove the volume,

motion capture volume,

and in that sense, you know,

we're tracking a camera at the same time.

And so we decided that, you know,

with that technology working out and, you know,

moving into the real time

sort of software like Unreal,

those obviously go hand in hand.

And so we just decided to push

onto the virtual production side of things as well.

And while both those streams

continue the motion capture,

and then also the virtual production,

and then it's developed into

where Steelbridge, you know,

creates a lot of virtual production commercials.

And yes, there has been some long form inquiries,

and we talk about a lot of

long form projects with producers.

And I think, and again, look,

it's always gonna be something that producers

and/or production people, you know,

it's a learning process for everybody.

And I think it's hard

sometimes for people to commit

to a project, a virtual production project,

especially if it's a long form project,

and it's just a really big

part of what they're doing.

And there's so many things that can go wrong

that they're not in control of anymore.

And I think it's certainly

hard to get people on board,

but I don't think that it's far off.

And we're talking to a lot

of people is all I can say.

- Do you think that's why, you know,

the kinds of things that have been shot

in the long form space in volumes,

like Mandalorian and the

Batman and things like that,

that have used them quite a lot,

they've had really big

visual effects studios behind them

that are actually responsible for the production.

So in the case of the Mandalorian,

it would be people who

intimately understand the technology,

who are kind of funding

and controlling the decision making process,

as opposed to more traditional productions,

going out on that limb and

having to have that leap of faith

and trust in someone else's,

like obviously amazing abilities,

but just, it can be

difficult when you don't understand

what's involved, right?

- That's very true.

And also, and this is certainly something

that Colin mentions a lot,

and it's very true is that

if you have a post-production

facility that's behind a

virtual production facility,

you have the ability to fall

back on sort of visual effects

to fix sort of things.

And that sort of happens a

lot in virtual production.

I mean, as much as we're all

trying to get 100% in camera

out of the virtual production techniques,

I mean, it's most probably more like, you know,

it could be up to 60 to 70 percent.

If it's a long form project,

the problem is that so many changes can happen

further down the pipeline.

You know, it's very hard to make all the decisions

at the beginning of the project,

and that is essentially what is happening

in virtual production.

So I think you kind of then look at it as,

what percentage are you

gonna actually get in camera

that you're happy with,

and it's not gonna have changed

when you've suddenly decided

you're changing your character

or you're changing your monster, whatever it is,

later on in the process,

then of course everything that you shot

gets thrown out the window.

- But it's the same in

regular production, isn't it?

Like it's exactly the same, you know,

possibly more visual effects required

for things that are shot photographically

in the real world, right?

Like nearly every shot in a long form show

can have a hidden visual effect in it

that the audience would never know about.

- That's true, but the problem is

that you don't wanna pay twice, you know?

And so the issue in that sense

is that if you're shooting it,

that's fine, but then you

might have a visual effects budget

on top of that and that works out.

But then if you're kind of, you know,

already doing a virtual production budget,

which in some cases can be higher

than just going out and shooting on location,

and then you need to

do the visual effects process

back on top of that,

then that can become, like, a double cost.

I think that there certainly is some sort of,

you know, you have to think about that

and make sure that everybody's on board

with what you're gonna get

and the fact that you probably have to build

a contingency plan into the budget

that there will be shots that need to be fixed

or shots that need to change or extended, you know,

and that tends to always be the case.

- So can you talk to me a little bit about

how you do that pre-visualization

and how you might digitally scout a location

and how do you get that world built digitally

before you go ahead and, you know, storyboard and,

yeah, how do you take the director

through these environments?

Because you're not physically location scouting

and looking for physical props here.

- Well, I mean, you say that,

but sometimes that is the case

and certainly in the project

that we worked on together,

all those locations were

essentially based

on real-world locations.

And then we went and, you know,

sometimes went and LiDAR scanned those sets

and converted them to digital sort of sets.

Or we just took heaps and heaps of reference photos

from that and then just

digitally replicated those places

because those needed to be iconic locations

that people needed to recognize.

But again, in say, you know,

we've done a lot of interiors and things like that,

you know, in that sense,

we will actually initially

talk to the production designer

and/or the client,

depending on the size of the project

and, you know, come up with

the locations with, you know,

treatments, directors boards, all the rest of it

that you generally come up with

and send that to our VAD team

who will start building it out

and will do sort of large renders

as we're moving along in the process

and showing those to the

client and director initially

for them to sort of kind of

enjoy what they're looking at.

And then once we know we're on board

that the environment we're building is right,

or at least the correct creative on that,

then we will go in and we'll

do sort of digital scouting,

which is, you know, you can either do that

with the virtual headset,

like a VR headset at least,

or, you know, we'll just,

you know, get a director in,

you know, sit over one of the artists' shoulders

or whoever it is, and then

we'll actually lens it up

for them, you know, with the

proper camera and sensor sizes

and then we can sort of look at finding

those right frames that they like.
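A small sketch of what that "lensing it up" amounts to in the digital scout: deriving the horizontal field of view from the focal length and sensor width so the virtual frames match the real camera package. The sensor width here is an illustrative assumption.

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """Horizontal angle of view for a given lens/sensor pairing."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

sensor_mm = 36.7                    # assumed large-format sensor width
for focal_mm in (24, 35, 50, 85):
    print(f"{focal_mm}mm -> {horizontal_fov_deg(focal_mm, sensor_mm):.1f} deg")
```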

And then those can become part of the storyboards

and then we'll go ahead and

either have digital storyboards

or we'll kind of draw back over top of them

and add people in, or we

could do that in Unreal as well.

I mean, that's also sort of

quite a few methods in that sense.

And then after we've sort

of built these environments

that I think the director's

also sort of enjoying and liking

and we know now what to build,

we'll then do something called techvis

where we create the panel layout,

which we've already sort of worked out.

Well, it's kind of like a

mutual sort of decision there

because here at Steelbridge,

we tend to build a volume out

based on the needs of the shoot.

It's not always a general LED

volume layout that you would have.

You know, we wouldn't always have a floating wall

that can be maneuvered if we don't have a car

or some sort of object

that needs to be reflected on all the time.

So there's different ways to build the volume.

We won't always have a ceiling in.

We probably won't always build a curve to our wall.

It might be a flat wall.

I mean, there's multiple sort of constraints

and also reasons why we would

build them in a certain way.

So then in that sense, we would then work out

what size volume we need based on those

that tech scout essentially that we've done,

or sorry, that virtual scout that we've done.

And then we'll add in the

panels that we've built digitally

into those environments.

And you can actually then work

out the distances prior to actually,

you know, your art

department comes in or anything like that

when you're starting to build out the sets.

And you can then work out

the scale inside the studio

and what your best focal length could be

or what the constraints are on

the lensing that you need,

you know, and you can sort

of then inform the DOP.

We want to talk to and engage the DOP

obviously early on as well,

and also the director,

about what sort of shots he wants

and what the constraints are within the studio

or whether we need to go to another

studio and build it out over there.

And then sort of all those

decisions are informed by this techvis

that you end up doing after

sort of building the environments.

- Yeah, that's cool.

And then the DP, like, do

you find that there are certain

DPs who are becoming specialised

in shooting in these environments

or can sort of anybody with an

understanding of how to shoot

in the physical world

shoot in a digital environment?

Or is it a bit of a specialised thing?

- No, I mean, I think that, you know,

for us, we love getting new DPs in

to certainly to train them up.

And also when I say train them up,

we don't obviously have to train them up on this

craft because, you know,

it's all very much translatable, like

everything's very much translatable.

It's more just about understanding the process

so he can relate it to what he's used to.

And then he just essentially

needs to know who he needs to talk to,

to be able to fix something that he needs to fix.

And at the same time, so it's still his eye.

It's still what he wants to do.

There certainly are some

constraints on lensing that will be

purely based on the framing

and the boards, and then also,

you know, the aperture like, you

know, we really need to always be

on that fine line of

wide open if we can,

purely because of the scale

of our studio and also the

distance that the panels are from

our main actors or whatever.

So the idea is that we can't

allow those panels to come into focus

because, of course, we have

this moire issue that can happen.

And so we always need to have

quite a shallow depth of field.

Like we don't want to have

too much depth of field.

We need to limit that a little.

And so unfortunately, that's sort of something that

not all DPs want to be working with.

And it does give a certain look.

But we we always try to work

right on that edge of moire

because more focus in the

background, more realism creeps back into it.

You know, if everything feels a bit sort of

underwater-y, it's just not looking right.

You know, so it's always a very fine line; that's why it

becomes quite mathematical.

And we have to profile

the lenses and really test the lenses

to make sure that we're not

suddenly on the edge of moire.

And in that project that you and I

worked on, which was super, super crazy,

the lenses had a weird pincushion

scenario where the outer edges were

more in focus than the inner edges.

And so it was very strange.

And, you know, these were sort

of old school lenses.

And we really didn't do

our homework on those lenses.

And so it kind of, you know, essentially meant that

we had to push everything further

away from the screens even more,

which meant that we then were limited

by the framing that we wanted to

achieve.
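That moire constraint can be sketched with standard depth-of-field math: the risk appears once the defocus blur at the wall gets smaller than the LED pixel pitch as imaged on the sensor. All figures here are illustrative, not numbers from that shoot.

```python
def background_blur_mm(f_mm, t_stop, focus_m, wall_m):
    """Defocus blur-disk diameter on the sensor for the wall plane."""
    s1, s2 = focus_m * 1000.0, wall_m * 1000.0
    m1 = f_mm / (s1 - f_mm)                 # magnification at focus plane
    return (f_mm * m1 / t_stop) * abs(s2 - s1) / s2

def pitch_on_sensor_mm(pitch_mm, f_mm, wall_m):
    """LED pixel pitch as imaged onto the sensor."""
    s2 = wall_m * 1000.0
    return pitch_mm * f_mm / (s2 - f_mm)

f, stop, pitch = 50.0, 2.0, 2.6             # 50 mm at T2, 2.6 mm panels
for wall_m in (3.2, 4.0, 6.0):              # subject focused at 3 m
    blur = background_blur_mm(f, stop, 3.0, wall_m)
    pix = pitch_on_sensor_mm(pitch, f, wall_m)
    status = "moire risk" if blur < pix else "safe"
    print(f"wall at {wall_m} m: blur {blur:.3f} vs pitch {pix:.3f} -> {status}")
```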

- So, yeah, it's not something

that I typically think about:

the amount of focus across the lens.

You know, usually you're thinking about depth of

field and, you know, your F stop

and all of that and how much

light you've got in the scene.

But, yeah, I mean, I suppose when

you're really on the edge there,

if it's a bit sharper in one area of

the lens and a bit softer in the other,

you don't want that sharp part of

the lens to start to see the actual LED

points, right? Because that's

what creates the moire pattern.

Yes, you've got this, you know, grid

essentially of points that will strobe.

- Exactly, yeah.

I mean, that's sort of what we're always working

to make sure we don't see.

- Yeah. Yeah. Wow. There's so

many considerations there.

And on top of that, you're actually

creating either a 3D or a two-and-a-half-D

environment, and I take it

that there's actually quite a bit of

like color pipeline that has to happen between the

start and the end of that,

because you're actually projecting an image that is

quite final at that point.

- That's right. Yeah.

- How do you design the

pipe there to get the colors accurate?

- Well, I mean, for us, you know, the most important

thing is to make sure that

anything on the screen just

looks realistic, you know, and natural.

So if you're looking at the screen with your eye,

it needs to sort of feel real

as well. And I know it sounds sort of, you

know, you think that's what you would

expect, you know, but the problem is,

that when you're creating digital

environments, you can very easily get illegal

colors, as we'd like to call them,

which are not really real life kind of colors.

And that all is dependent on the

renderer if you're doing digital content.

And that can be kind of quite tricky.

So we do employ an ACES pipeline.

And for us, we've got, you know, the OCIO

library, which we output

from Unreal in ACES color.

And, you know, whether we then

re-project that out onto the panels

as an ACES profile, and then that gets converted

with a look up on the processor

before it hits the panel.

That's sort of something that we sometimes shift

between whether we actually do

the conversion in

Unreal and then onto the screen.

But the thing is that obviously

the panels are all slightly different

in some instances, like as in

different brands have got like different color

with different looks, you know.

And then the problem with the LEDs is that they,

you know, of course, it's an RGB

light, essentially, each one

of those little pixels on there.

And you can get something called

color shift very easily at certain angles

that you look at a panel and, you know, you get

these kind of weird magenta reds

or you get the greens if

you're looking at opposite angles.

And so that's kind of

got multiple issues with it.

- Is that how it translates onto skin tones as well?

- That can be a real problem.

And if you start sort of shifting color, overall

color, for instance, on a panel

and what that actually does to skin tone is

quite remarkable, actually,

in some instances, you just

sort of hope that it just works.

Everybody's skin tones are slightly different and

the LED panels can really

just just warm it up or just make things look red.

Some skins just goes red under panels.

And it's very, very tricky.

And some skin tones just do not,

some just look fine, you

know, and so you have to manage that.

It's quite a tricky sort of

science to get that right.

So our main aim is to make sure everything's sort

of linear as it comes out of Unreal

and into the processor.

So we want to try and make sure

that there's not too much done.

Like we don't want to do much color management

prior to it going to the Brompton

processor, which is the processors that we use to

drive a signal to the panels.

And on the Brompton processor, there is quite a bit

of color management that we

tend to adjust and use to help merge,

you know, merge the foreground like the practical

and the background together.

And then, of course, you really

only want the camera to be driving

the color after that fact, you know, with its color

temperature and then the lights

that then change accordingly.

So, you know, just like all lights,

panels can be warmed up and cool down.

And most LEDs run quite cool.

Like, as a default, they're

quite a cool sort of temperature.

So you can warm them up to

try and balance it out.

But in doing so, you know,

it's very easy to break things.

And so that's why, like I said, we just try and

make sure that everything coming

out of Unreal is very linear.

And so like, again, we transcode

everything, as in textures and/or

footage that we're using

into the ACES color workflow.

And then we will then convert it out onto the

panels through the Brompton.

And then we could possibly look at

it in any other look up after that.

But we just want to have as much color gamut as we

can have, you know, and make sure

that those panels are working correctly.

And so, oh, sorry, you go.
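A hedged sketch of the ACES/OCIO idea described above, using the PyOpenColorIO bindings. It assumes an ACES OCIO config is pointed to by the OCIO environment variable; the exact colour-space names below depend on that config and are illustrative, not Steel Bridge's setup.

```python
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromEnv()

# Working space stays scene-linear (e.g. ACEScg) the whole way through;
# the display transform is applied as late as possible in the chain.
processor = config.getProcessor("ACES - ACEScg", "Output - Rec.709")
cpu = processor.getDefaultCPUProcessor()

middle_grey = [0.18, 0.18, 0.18]            # scene-linear middle grey
display_rgb = cpu.applyRGB(middle_grey)     # only now leaves linear
print(display_rgb)
```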

- Oh, I was just going to ask -

So about that linear signal, I mean, my

understanding of the usefulness of linear

for visual effects is that you can do more

realistic things because that's the way

that light kind of functions in the real

world, is that it does work linearly,

Not like logarithmic curves.

You don't you know, you don't end up with those

soft roll offs that we then put on once

we're starting to work in the log space afterwards.

Is that why the linear pipe works so well for this

or is it just because it's such a

robust pipeline for visual

effects working in ACES and OCIO?

- Well, I mean, I think just obviously keeping it

linear just allows us not to add any sort

of look ups on top so that you can kind of keep the

color information there for as long

as possible and then try and only, you know,

convert it later down the pipe.

And again, like you said, if we have some sort of

roll off that's happening prior to

it hitting the panels, then it's very hard to

invert that, you know, the other way around.

So keeping the signal as linear as possible all the

way through, in terms of what you were

saying digitally, it makes total sense that we

like to linearize everything so that

when our final look up happens in the renderer, we

actually then convert it so we can look

at it. Essentially, we'll look at it in, you know,

Rec 709 or Rec 2020, or whatever it is

that we tend to look at it in.

But we will still be working

linearly the whole way through.

And it's also because of the renderers that we use.

So our renderers, the way that they actually

render, you know,

it depends on if it's an unbiased

renderer, which means essentially it just shoots

out light rays just like you would have in reality,

you know, with photons, you know,

essentially, your materials are

real-world balanced as well.

So they just work far better in terms of a linear

workflow, because essentially computers

work linearly initially, and you don't want to bake

in any of the color or anything

like that within the digital

component of your renderer.

If you start introducing some sort of look up, it

just breaks and then you have to invert it

somehow down the line. So we stay linear as much as

possible all the way up until the end.

And then, of course, as you know, like, you know,

the camera itself will be adding to that

as well. And so, you know, you just have to

mitigate too many things that can change all

the way up until that point.
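A tiny illustration of why the signal stays scene-linear until the end: light is additive in linear space, so the energy math only stays physical before a display curve is baked in. Pure Python, with the display curve simplified to a 2.2 gamma.

```python
def encode_display(x):           # simplified display curve (gamma 2.2)
    return x ** (1 / 2.2)

def decode_display(x):
    return x ** 2.2

a, b = 0.10, 0.90                # two linear light contributions

linear_sum = a + b                                 # physically correct: 1.0
baked_sum = encode_display(a) + encode_display(b)  # summed after the curve
print(linear_sum)                                  # 1.0
print(decode_display(baked_sum))                   # ~1.79: wrong energy
```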

- Yes, yeah, because the camera itself is going to

have, you know, its own ISO natively and it's

going to have its own range of stops that it can

render onto the actual image and capture.

Wow, so much can go wrong,

so much needs looking at.

So do you essentially color

the material in that process?

I know that you're saying that you

are trying not to do very much to it.

But like when I looked at the raw ungraded footage

for the car commercial that we looked at

together, there was nothing like there weren't any

parts of the image that were wildly different

from others. It did feel as though somebody with an

eye had integrated and composited a world

together that was kind of

harmonious and worked color wise.

Choices had been made, I

felt, on the way to the volume.

Is there somebody like a compositor or a colorist

or a visual effects artist who is kind of

responsible for that or is it

something that just happens?

- No, I mean, to answer your question,

that's probably me that's doing that.

But I think, you know, again, like if we've got a

realistic environment in the background, the aim

is to make sure that that looks like a realistic

environment with your eye looking at it and so that

it needs to actually, you know, it needs to react

correctly to light, digital

light, at least. And then that light needs to sort

of feel realistic, you know, as you're looking at

it. So that's the first part of it. And then

secondly, when you start

integrating your foreground

elements into that, you know, there's only so

much light wrap, for instance, and this depends

on the size of your volume that is encompassing

your person or objects that you've got that are

being lit as well. And so essentially, a lot of

that can be matched, you know, from the lighting

that's coming from the volume. But again, like, for

instance, if you consider that project, we have

the floor, you know. If we look

at the outback scene, you know, it was

like a reddish color, and we

never quite got that one working quite right.

Essentially, the texture and color of our

digital environment needs to match

the foreground environment. So

there is a level of that happening in real time.

And in our brain bar of people that we have,

they'll be making those adjustments,

looking at the screen, looking through the split to

make sure that that sort of feels more in line,

especially if it's like a wide global, a global

adjustment to a particular material, for instance,

not a global color adjustment that you do to a 2D

image, but to a specific piece, you know, like

the sand will change the color of the sand to match

to the sand that we have in the foreground.

Those are things that we can sort of do on the day,

and we'll try to get those as close as

possible. And that generally would happen during

the pre-light day, which is you'd hope to be in

that position. And then afterwards, there's sort of

subtle adjustments in color, overall color of

the panels that want to then match. I

would be working directly

with the DOP to understand

when he exposes in a certain way what we want to do

to our highlights in our scene, for instance,

to make those work and sort of make them feel

correct as well. And then we have something else,

which is called ICVFX, which is

in-camera visual effects,

but essentially it's a new

projection, kind of like a, you know, power windows

that you can add to the background panel,

and you can actually isolate areas and draw mats

around certain areas and grade those independently

to try and fix that scene where you might see the

integration of the foreground elements hitting the

panel at the background, and so that those can be

graded independently. But we try to stay away from

that for as long as possible until there's just

like a scene that we need to fix because those,

you know, because they actually exist there all the

time, like they're stuck to that piece of the

panel. And so when you start moving your camera

around, you can imagine

that that doesn't necessarily

track with the volume, per se,

as in volume tracking, so it

might then stay in a position

and just not be in the right position after that.

So we try to avoid that if we can, but sometimes

you have to actually use that. And also, you know,

this ICVFX option also allows us to add, like,

you know, hotspot lights that we want to use for

reflections and added reflections and animate

those, kind of like what you would do

with sort of like a certain LED lighting that

the gaffers use to be able to actually have some

running lights over cars and things like that,

which is quite cool.
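An illustrative numpy sketch of the idea behind those windows and light cards: a matte over part of the wall image, adjusted independently of the rest. Unreal's actual ICVFX tooling is its own system; this only shows the underlying math, with made-up values.

```python
import numpy as np

# Wall content in scene-linear, constant grey for illustration
frame = np.full((1080, 1920, 3), 0.25, dtype=np.float32)

# Matte: a rectangle where foreground meets panel (real tools draw
# soft-edged shapes, much like a power window)
matte = np.zeros((1080, 1920, 1), dtype=np.float32)
matte[700:1080, 300:900] = 1.0

gain, offset = 1.4, 0.02                      # local grade for that region
graded = frame * (1 + (gain - 1) * matte) + offset * matte
print(graded[0, 0, 0], graded[800, 500, 0])   # 0.25 outside, ~0.37 inside
```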

- That's really cool. Yeah, that's fun. I mean,

it does seem like there's an opportunity there for

a colourist on the set to be part of that

process. And I know that visual effects artists and

compositors are extremely good colourists,

because that's sort of essentially what a

compositor does is matching elements and making

them integrate seamlessly. Can you tell me about

the brain bar and who's on that and what's

happening there?

- Yeah, I mean, before I do that, I

mean, you're 100% right. And you know, having

a colourist's eye, you know, would be

amazing to have. And I think that, you know,

the DIT, after it's coming out of the

camera, I think is a better place for that

currently the way I see it. And that's purely

because, you know, we want the foreground to

stay married to it. I think what can

end up happening is that,

if the background panel is being colour adjusted to

match to what initially is being seen,

the problem with that is that the balancing of

reality can kind of really shift, you know,

in that sense. So when our background becomes sort

of separated, I mean, yeah, there could be very

easy adjustments that we can make. But when you

start making adjustments before it hits the panels,

you're then in this world of changing the

environment to match prior to doing it after

the fact when you have your integrated environment

in there. So it's just been hard to then work out

why certain things aren't working. You know, if we

have a grade that's happening prior to it,

then going, we've tried this, and then it goes into

the Brompton and then gets

projected up. And so when something is not

working, it can't all be fixed in grade before

it goes through, because some things, like I said,

for instance, we need to adjust the ground colour,

for instance, specifically on a material, like a 3D

material for that to work, or we need to adjust

the position, the sun position and what that then

ends up changing in the environment by just

changing the lights on everything and how then the

grade fights against that opposite, you know,

because we need to be quite fluid, essentially. But

I think just maybe before that would be okay,

but like, there's so many shots that we're doing

that sort of have to happen. I mean, I think

there's certainly, you know, heel and toe and

levels that need to, you know, mids that need to

be moved around every sort of shot. And that would

be amazing, you know, almost like a one light feel

that would be great, you know, for that to sort of

happen. And that is sort of something that is

happening every time just before we roll. But a DIT

approach after the fact where we have the

foreground in would just be amazing, you know, and

if that was sort of set with the director and the

DOP to have a look at, or more the DOP, I

suppose, at that point, I think that would be,

you know, amazing, you know, because what that does

is gives the client even more

you know, happiness when they see that. Yeah, it's

perfect. It's great. This is exactly what we want.

And then we're sort of designing a look

automatically on the day, and then suddenly,

you know, just like you'd normally have in a long

form project, it just becomes perfect, you know,

in that I say perfect, but you know what I mean?

And then the clients just even more into it,

you know, and then they, you

know, just feel like a weight is lifted

off them, because sometimes it might not feel

exactly like or they're not used to that process.

And it just, you know, sets them at ease a little

bit more.

- That's really interesting. So in terms

of the skills of that colorist

who could be part of the virtual production

pipeline, they would be more of a DIT to you than

somebody who is skilled in Unreal.

- Yeah, I mean, I think so. Again, skilled in Unreal,

if a colorist is skilled in Unreal,

I think that's an amazing thing to have. I

think integrating

that person would be sort of a priority.

I'd love to have somebody like that in the team

to be able to do that, because I think and I know

there's a lot of tools that are being developed

now where there is more control in grade that

happens in Unreal and as a, you know, as a wrapper

that goes to the next stage. And I think that that

that's sort of quite important. I think one

thing to keep in mind, especially in Unreal Engine,

if we're talking about digital sort of creation

here, the renderer is not a real-

world balanced renderer like you would get in, say,

some visual effects style renderers,

not new ones, old renderers that have been around

for a long time.

happen in Unreal Engine to be able to get that

renderer to, you know, work correctly for

optimization for shadows for bounces and all these

sorts of things. And the problem that ends up

happening is that you

need to actually understand

what the limitations are of the renderer in order

to look at an image as well and go, this is what's

missing. Because in reality, when you're looking at

something, you just expect things to look real

and when you see it on the wall, you're expecting

things to look real. But the reality is that

there's a lot of cheats that are happening. And you

need to be aware of those. And if you make

color adjustments to fix things to make it feel

visually appealing, the problem is that these

cheats that are actually happening are just

exacerbated, they keep getting worse, you know.

And so I think you're not starting, you have to

keep in mind that you're not starting from,

you know, something that looks real. You need to be

looking at something and understand

the fact that what you're cheating already to get

it to look real is happening, you know. And

I think that that's a broader thing to think about.

You know, it's like, oh, we've got one

light that's on and it's casting some beautiful

light on something, but we've turned off shadows

for some objects here and there because it's

costing too much money, sorry, too much in

processor, and it can't render correctly on the wall

because it's starting to get steppy. And so

then we go, well, there's a shadow missing over

there. We've done that on purpose so that we can

actually render it out in real time, you know, and

that needs to be hidden and translated a certain

way. And we're doing a lot of these sorts of things

all the time. And when you integrate sort of

somebody who, well, people need to be aware of all

of those sorts of aspects of what's actually

happening. And then on top of that, add to rather

than just change it in 2D, because of course,

as a grader, most of the time, everything

that you're grading is realistic already, is a real

environment, is real, you know, and so you just need

to keep that in mind. I think.

- Oh, look, that's so true. I mean, in terms of my

experience grading, I mentioned at the start that

I didn't realise that it was a 3D background. I

didn't realise that it was virtual production

until halfway through the grade because you're 100%

right. You expect to see reality and you're

very forgiving of what you're seeing because why

wouldn't it be real? But then once you start

looking, you start to see things. And I'm not

saying that there was anything glaring or major,

but I just started, you know, OK, so this is

constructed. I would like to see some evidence

of that. So I start looking around and then I find

little things. And then, you know, you start to

get in there surgically and shape and reduce

anything that might not look real. And the goal

is always realism as opposed to some kind of

stylised treatment or just like you say,

just changing sometimes as a colourist, you go,

well, we don't want it to look like the offline

did, so we better do something. But, you know, that

might not be the right approach to this.

Yeah, because I could break it. I could just break

it like immediately. And you're like,

oh, actually, now I can see why it's digital now.

- Yeah. So a light touch and not doing something

unless you have to would be probably the best way

forward as a colourist in that scenario

is not to do anything unless you have to. But

again, you guys add

this massive amount, so I'm not saying that, but,

you know, we can always

do the grade afterwards. I'm saying, what do

you do during the production? You know,

how do you add to that process? Because you can

always grade it afterwards.

- And I think there's

like, you know, every time I've ever graded

something that has been virtually produced,

whether it's a virtual production or

a 3D animation or something, you know,

it always adds to it because it gives it something

cinematic and it gives it something filmic that

it just needs to give it coherence and to make it

feel correct, to be part of our screen language.

But it's slightly different to the job of creating

the thing to begin with. It's 2D. It's about

it's just a different approach, right? Vignettes

and exposure and, you know, tones in different

parts of the curve and, you know, generating that

graceful curve. You're just thinking about a whole

different bunch of considerations as a colourist.

- Exactly, yeah.

- So what happens in the brain bar?

- Well, we've got an operations

controller, or head of

operations, is what we call him: Johnny. So I guess to

start from the beginning, I think Unreal has a

multi-user approach where you have one scene that

is the environment, for instance, and then you

have a server that's sort of hosting that

environment essentially. And then you have these

multiple machines that are all connected to that

environment or that scene. And those multi-users

then can be identified as separate machines that

exist working on the same scene at the same time.

So it's kind of slightly different to how we would

do visual effects projects in that sense,

because things, you know, when something changes on

a scene that changes and then, you know, you

have to publish that change down the line. So in

this instance, you kind of have a bunch of

collaborators were working on one scene at the same

time. And that's part of Unreal Engine's

architecture. And it's something that's sort of a

game driven architecture, but it works really,

really well in this instance. So we have someone

who's head of operations, and he or

she would be integral in hosting this scene. And as

the server, it would also be where all of our

other sort of peripherals attach to. So like, for

instance, the motion capture

Vicon system that then is plugged into its own

suite, and that has another computer running that.

And then that is then streamed into Unreal. So that

gives us the real world position of the

camera. So then again, in the scene, the operations

manager would then have this virtual camera that

we sort of liken to the real world camera. And that

has this crown, I don't know if you've

seen it, but obviously a little, you know, crown

that exists on the top of the camera, and has

little dots on top. And when that moves, the Vicon

system is sort of plugged into Unreal and

streamed into Unreal, and it tells it where that

position is in space. And then Unreal,

in that Unreal scene, we can move that camera

around in the scene to reposition where we want

it to be existing. So then those are sort of two
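A rough sketch of that hand-off, with the names and packet layout assumed for illustration (these are not the real Vicon or Unreal interfaces): the mocap stream reports the crown's position in stage space, and the engine applies a calibrated crown-to-sensor offset plus the operator's stage-to-world repositioning.

```python
# Illustrative only: not the actual Vicon or Unreal interfaces.
# The mocap system reports where the tracking "crown" is in stage space;
# the engine adds a fixed crown-to-sensor offset, then applies whatever
# world offset the operator has set to place the camera in the scene.

import numpy as np

def virtual_camera_position(crown_pos_stage, crown_to_sensor, stage_to_world):
    """crown_pos_stage: 3-vector from the mocap stream (metres, stage space).
    crown_to_sensor: calibrated offset from the marker crown to the sensor.
    stage_to_world: 4x4 matrix repositioning the stage inside the scene."""
    sensor_stage = np.asarray(crown_pos_stage) + np.asarray(crown_to_sensor)
    p = stage_to_world @ np.append(sensor_stage, 1.0)   # homogeneous transform
    return p[:3]

# Example: the operator has slid the whole stage 10 m into the environment.
stage_to_world = np.eye(4); stage_to_world[0, 3] = 10.0
print(virtual_camera_position([2.0, 1.5, 1.8], [0.0, -0.1, -0.15], stage_to_world))
```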

So those are two roles there at that point. You've got the Vicon operator, who's also watching the cameras, because they overheat over time, and things in the scene that sit in front of those cameras, occluding them, can cause the tracking to mismatch. There's quite a few things to keep in mind there, and the tracking of the cameras is super, super important. That Vicon system is harsh on the gaffer and the grips: where they want to put up flags and big bounces, or a bunch of other stuff to help with the scene, those can get in the way of the Vicon cameras, so we have to work together to make sure that doesn't become an issue. So anyway, we have the Vicon controller, who operates independently, and that plugs into the main scene with the operations manager running it.

And then, generally, depending on the size of the project, we have two other positions. We have the head of the virtual art department, the VAD supervisor. Anytime there's an update to the scene, whether it's moving objects, or colour and things like that we just talked about, updating the sand or the plants, repositioning plants here and there, or maybe we want to move an object in the scene quite quickly, that's taken care of by the VAD supervisor, who repositions all those. And then we'll have an extra person working independently in another environment, who might be building new objects for the scene, brand new assets. Those would be published into the VAD supervisor's scene, and he would make a decision about when that gets published, before we change the setup.
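One way to picture that gatekeeping is a hypothetical publish check, sketched below. The budgets and field names are invented for illustration; the point is that a heavy new asset is vetted before it joins the live scene.

```python
# Hypothetical "publish gate" sketch: before a brand-new asset from the
# second VAD machine is merged into the live scene, check it against
# simple budgets. Thresholds and field names are made up for illustration.

def ok_to_publish(asset, max_tris=500_000, max_texture_mb=256):
    problems = []
    if asset["triangles"] > max_tris:
        problems.append(f"too many triangles ({asset['triangles']:,})")
    if asset["texture_mb"] > max_texture_mb:
        problems.append(f"textures too heavy ({asset['texture_mb']} MB)")
    return (not problems), problems

ok, problems = ok_to_publish({"name": "palm_tree_B", "triangles": 820_000,
                              "texture_mb": 128})
print("publish" if ok else f"hold back: {problems}")
```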

But of course, you can imagine that when you've started shooting something and then you add something new to it, it can affect everything: the optimisation, the speed, the lighting, absolutely everything. So we have to be very careful about that sort of stuff. But generally, that's the brain bar and those guys. And then I would be in direct contact with the operations manager, in general, and probably the VAD supervisor. Especially if we do lens changes, or make any changes like that, or if the motion capture volume is not capturing correctly, all those sorts of things, I would translate that information back to the first AD and/or the DOP and director, so that before we do a take, we can be sure that take is working. There are a few steps we need, to make sure we're locked and ready to go, before we can actually roll on something.
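Those lock-off steps could be imagined as a simple pre-roll gate. The sketch below is an assumption-laden illustration, not a real protocol; the check names are invented.

```python
# Illustrative pre-roll gate: roll only once every "locked and ready"
# check passes. The checks and status fields are assumptions.

PRE_ROLL_CHECKS = {
    "camera tracking streaming": lambda s: s["tracking_ok"],
    "lens profile loaded":       lambda s: s["lens"] == s["expected_lens"],
    "scene published & stable":  lambda s: not s["pending_edits"],
}

def ready_to_roll(status):
    failed = [name for name, check in PRE_ROLL_CHECKS.items()
              if not check(status)]
    return (not failed), failed

ok, failed = ready_to_roll({"tracking_ok": True, "lens": "35mm",
                            "expected_lens": "35mm", "pending_edits": []})
print("roll!" if ok else f"hold: {failed}")
```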

- Oh, cool. Wow. That's a pretty big team, without even thinking about the production's own team.

- Yeah. Well, digitally, yes, you're right, it is a biggish team. I mean, we also record every single camera. For every camera that we're filming with, we do an internal recording of its 3D positioning, in case we need to re-render things later on down the line. And that needs to be timecoded. We need to have things genlocked so that the frame rate is running optimally with our motion capture cameras and with our production camera, and so that the screens are all locked together at the same time. So we have quite a few things that can go wrong, and people in place to keep an eye on those particulars.
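As a worked illustration of the genlock and timecode point: every locked device should agree to within a fraction of a frame. This small sanity check flags any device that drifts more than half a frame from the production camera; the device names and tolerance are assumptions.

```python
# Sketch of a timecode sanity check across genlocked devices: production
# camera, mocap system, and LED processor should agree to within a
# fraction of a frame. Names and tolerance are illustrative.

FPS = 25.0
FRAME = 1.0 / FPS

def tc_to_seconds(tc, fps=FPS):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return h * 3600 + m * 60 + s + f / fps

def check_sync(timecodes, tolerance=FRAME / 2):
    secs = {dev: tc_to_seconds(tc) for dev, tc in timecodes.items()}
    ref = secs["production_camera"]
    return {dev: abs(t - ref) <= tolerance for dev, t in secs.items()}

print(check_sync({
    "production_camera": "11:08:00:12",
    "vicon":             "11:08:00:12",
    "led_processor":     "11:08:00:13",   # one frame late -> flagged False
}))
```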

- Oh, wow. I've got a couple more questions for you. I know I've already taken up an hour of your time, which I'm very grateful for. It's 11:08. Okay.

I would love to know a little bit about your background. What brought you to the position you're in, to be somebody who can understand how this emerging technology can be implemented in this way? What's your history?

- My history? Okay. Well, I was a painter, a traditional animation artist. Then I went to sculpting, traditional sculpting first, then digital sculpting, and became a character artist way back. I studied at the Vancouver Film School quite a long time ago now, and was in pre-rendered game cinematics back then, doing characters, lighting and texture work. Then I moved into the film side of things and became more of a lighter, a 3D lighter, lighting scenes for visual effects integration into live action plates. I continued doing that for a little while and became a CG supervisor, so I ran the floor of the 3D artists and implemented new techniques. Back in the day I was into fur and feathers and hair and all those sorts of things, and there weren't pipelines in place in those days, like plugins, to roll those out, so we had to use multiple bits of software to make them work. Then I transitioned from that CG supervisor role because I was more into practical filming as well. I was already a visual effects supervisor at that stage, and I went to AFTRS in Sydney and studied cinematography. I was still working with film back then, which was great, and then transitioned to digital cameras, like the RED One, in the early days of digital. I pushed cinematography for a little bit, went to a few different studios as a combination cinematographer while heading up their 3D departments, and got into virtual cinematography, doing full 3D environments but working with the camera, because I was quite skilled in practical cinematography at that point. Then I started directing a little more, and started my own production company as a visual effects production supervisor, working mainly on long-form projects, doing second unit directing, pretty much being on set all the time working in visual effects in long-form, and pushing motion capture as well. And then I got back into working with Steel Bridge when we pushed this tech. I had a LiDAR scanning company too, because I've always been into LiDAR, photogrammetry, drone photography and all those sorts of things, so creating digital assets. It all just worked together and put me in the right space to do this kind of job, which is pretty much all the things you need to know, sometimes.

- Yeah, so you've done a very good mix of physical work and digital equivalents along the way: physical painting and sculpting to digital sculpting, physical lighting to digital lighting, physical camera to digital camera, always working in emergent technologies as you go. It does seem like a pretty special combination that allows you to understand so many things that need to come together.

I mean, when you were doing the on-set work in visual effects, when you're talking about second unit VFX shooting: say you've got a shark movie and they've shot the principal photography, and then they've got a really important scene with a shark that needs to come up underwater or something, and you're there with the sort of grey shark head, capturing everything you need to create the 3D shark. Is that the sort of thing we're talking about?

- Yeah, exactly, and I still do that; it's kind of a big part of what I still do. Though I'd have wranglers that I'd get to go out and do that sort of stuff. I wouldn't be the one holding the shark head, but I would certainly be the one deciding on the methodology to accomplish the shot. And that's why, doing long-form, I might break down scripts and look at what needs to be traditional visual effects, integration in that sense, and what may require virtual production. I'm in a good position to break down scripts like that, because there are only certain circumstances where virtual production really is required, certain cases where it just works. In other cases you need locations, you need movement, you need much larger expanses to work with. Sometimes shooting on green screen is the best approach for something, and sometimes shooting in a location is the best approach, so there are quite a few reasons why you would and wouldn't want to do it. That's what I'm still doing. I work with big studios as well, and we'll talk about whether VP is an option, but inevitably I think it's not as mainstream here as it is in some studios overseas. There are certainly type cases for why you would want to use virtual production, and for commercials it actually lends itself really well, because you can bump out heaps and heaps of environments within a day, the cost works out far better, and the economy of scale works well in that sense too. But for long-form, again, like I said, I'm a production visual effects supervisor, so I look at those sorts of options as well and make those decisions.

- And so, if you're on set, you'd be looking at: okay, you've shot these lenses, we need to profile them; you've shot under these lights, so we need to get reflection passes and have the balls go out, and all of that; and do the LiDAR scanning. I can really see how that works right in with doing virtual production. But it's also really great that you know it's not right for everything. If you were the studio trying to sign off on something, you don't want the hard sell; you just want to know what's going to work best for the scene you're trying to shoot.

- Exactly, yeah. And no one really wants to put their hand up to decide to do virtual production and then have it just break and not work, either. So if you're trying to sell a technology, you'd better be in the middle of it, to really know whether it can be done, what the obvious problems are, and also the gotchas that can happen, and they can happen very quickly. Technology is reliable, but it's not always reliable, that's all I can say, so you have to have contingency plans and everything.

- Yeah, exactly. So, thank you for that. The final thing I wanted to ask you about: as a colourist, what do you see as the most useful things a colourist can bring to a virtual production? Whether that's a colourist who wants to get involved during the production side, or a colourist working on something that's already been shot and edited.

- Oh, I mean, I know that's a tricky question. I think, like I kind of alluded to earlier on, what LED panels do to skin tone is really, really tricky, and it's certainly something that needs to be given its time in fixing and looking at. If you as a colourist, and this is now post, at the end, grading a final spot or whatever, those are the things you need to look at and understand. Understanding the limitations, the things that can cause issues within a virtual production environment, and knowing what to look for in trying to fix them, would be super advantageous for you as a colourist. There's colour shift that happens too: if you look around certain areas, you'll notice some things looking greener or redder. Those things aren't real, they shouldn't be there, and they're things you want to grade out. So I think it's about understanding what can happen in a virtual environment, then looking at the pictures and being critical, not overly critical, but critical in the sense that if you automatically fix those bits, you're in a really good place to do your creative grading on top of that. The one-liner is almost: fix the LED issues that happened on the day. For me that would be great, for a colourist to actually understand what could be problematic. And then, moving ahead, being an earlier part of the pipeline: I've worked with a few colourists who've been really interested and just wanted to be there, who go, can I just be there while you guys are doing some testing, maybe we can get some panels up and some balls out, and see if we can plug into that, see what we can do with the image and try to understand it. All of that is invaluable for us as well, and it's the kind of experience that really helps you afterwards too, in the post process. Just being there on those shoots and understanding what's happening is actually a really good place to be, and it'll help you down the line fixing stuff.

- Yeah, of course. I mean, when it comes to the skin tones, we're looking at the light reflected from the panels that then hits the characters in the space, right? So instead of being hit with the full spectrum of colours in the light, there'd be certain colours that are enhanced, and other parts of the spectrum that might not even be there. So you might get a flatter skin tone instead of all the variation you'd get under natural light. Are those the kinds of issues?

- Yeah, that, and there's certainly a bit of colour shift that can happen too. You mentioned red skin tones: yes, for some people, and some will go greenish, or not change at all. And then you've got two people standing side by side. For instance, on the one job we were on, you had those two people on the beach: he kept going too pink, and she had a different tone, more melanin in her skin, and hers almost went green, so there was a shift you almost needed to do between the two of them. And when they walk through underneath the top panel, if you move your hand along there, the colour is slightly different over the top of the person all the way through, which is very problematic. We're always chasing that sort of stuff, so that's certainly something to keep an eye on. And also, as a colourist, realise that what's happening is not just a natural artifact; it's a mistake, well, something that shouldn't be happening, if you know what I mean. Don't just accept things that are happening: try to fix them. Because ultimately, when I'm in the grade, or if, you know, Colin or the director is in the grade, if it's Colin it's different, but a director who hasn't used virtual production before might miss those sorts of things. So then it needs to be, say, the virtual production supervisor, or somebody who already has an idea of what to expect in those instances, to try to correct for those.
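As a minimal illustration of quantifying that drift, a colourist might sample a patch that should be neutral (or the same skin patch) at two spots under the wall and compute per-channel gains that neutralise each one. This is a sketch of the idea, not how any particular grading tool works; the sample values are invented.

```python
# Illustrative: neutralising per-channel gains for a should-be-neutral
# patch sampled at two positions under the LED ceiling.

def neutralising_gains(rgb):
    """Gains that map a should-be-neutral RGB sample back to grey."""
    grey = sum(rgb) / 3.0
    return tuple(round(grey / c, 3) for c in rgb)

left  = (0.92, 0.80, 0.81)   # sampled under one panel section: too pink
right = (0.80, 0.88, 0.79)   # same target a few metres over: too green
print("left gains :", neutralising_gains(left))
print("right gains:", neutralising_gains(right))
```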

- Right, so it's just being aware that things aren't natural to begin with, I suppose. I'm thinking back to a story a friend of mine told about checking renders for quite a large 3D film that was very popular. The hardest thing about checking those renders was knowing that if a character had been removed from a scene, their matte might still be in there, so there might be a little ghost of a character lingering, and anything could happen anywhere in the shot. So I suppose you've got to be in that mindset a little when you're grading virtual production. If somebody moves through light, normally as a colourist I'd say, well, they're moving from daylight into tungsten, we want to see that, because obviously it's been done on purpose; if it's in front of the camera, it's meant to be there. But you're saying, no, we need to be a bit more critical here. Just because it's there doesn't mean it was meant to be. We need to really interrogate: if somebody's skin tone changes as they walk from one side of the screen to the other...

- That's right.

- ...is that because they're walking past a window, or because the panel was outputting a slightly different shade of white there?

- Yes, exactly.

And that happens on the floor too: if the practical floor is in shot, the edges might have slightly varying colour adjustments on them, and it's not something we can fix on the day. But look, panel technology is getting a lot better. They're introducing white pixels into the LEDs now, which really helps a lot, especially for overall exposure. The angle, the field of view essentially, of these lights is broader too, which helps with a lot of the colour shift, and a tighter pixel pitch also helps with colour shift, but then you run into issues with brightness levels. So the panel technology is getting a lot better, and that does help, but I'd always be on the side of keeping an eye on it. And then there's the panels themselves: there's a lot of light pollution that can hit the panels. You have a coating on the LEDs, and they've really improved that technology; the better the coating, the less light pollution from practical sources, or even from other panels in the scene, lights the panels themselves. That pollution loses you contrast, and when you start losing contrast in your panels, the blacks get lifted even more, and obviously the idea is to keep the blacks as black as possible. When you get that light pollution on there, we can get shadows: we've added a practical light in the shot, and it starts showing off the edges of the panels, because of the reflective quality of the physical panels themselves. Those are things you can end up seeing on the screen without really realising it, so you need to be aware of them, and look and go, oh, actually, the blacks are looking a little too milky here for some reason, and they weren't like that in the shot just before. It could be that we added a practical light in that instance. That's also something to really keep an eye out for.
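The milky-blacks effect can be put in rough numbers. Assuming the panel face reflects spill roughly like a Lambertian surface (reflected luminance ≈ illuminance × reflectance ÷ π), even a modest practical throwing light onto the wall lifts the effective black level dramatically. The figures below are illustrative, not measurements of any real panel.

```python
# Rough photometric sketch of spill lifting the panel's black level.
# reflected luminance ~= illuminance (lux) * reflectance / pi (Lambertian).
# All numbers are illustrative.

import math

def effective_contrast(peak_nits, native_black_nits, spill_lux, reflectance):
    lifted_black = native_black_nits + spill_lux * reflectance / math.pi
    return peak_nits / lifted_black, lifted_black

# No spill vs. a practical throwing 50 lux onto a slightly reflective face:
for spill in (0.0, 50.0):
    ratio, black = effective_contrast(1500.0, 0.05, spill, 0.02)
    print(f"spill {spill:>4} lux -> black {black:.3f} nits, "
          f"contrast {ratio:,.0f}:1")
```

With these assumed numbers the contrast collapses from roughly 30,000:1 to about 4,000:1, which is exactly the "milky blacks" a colourist would notice between adjacent shots.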

- Yeah. I suppose the colourist can then ask, as well, because you do have so many elements that can potentially be separated. You've got mattes for the foreground and the background, and if the backgrounds are starting to look a little milkier than the black point in the foreground, it'd be much simpler to request a matte and pop that in to adjust the background, rather than spending your whole session on some complicated animated shape that's a nightmare for everyone to sit through. Do you often have a lot of mattes?

- Well, actually, it's funny you say that. There's this new tech, and unfortunately it's only the RED camera that can actually do subframes at the moment, though maybe there are more cameras that can do it now. The subframe technology is something we're looking into quite heavily. It means your camera can pick up a subframe, and the Brompton processor can inject a subframe into the panel. You can't perceive it with your eyes, but essentially you're adding an extra frame that has green screen in the background, and your camera is picking up the subframes. So say, for instance, you're rolling at 25 frames per second: the camera is essentially picking up 50 frames a second, but still interpreting it with a 1/50th shutter, like a 25 frames per second film, so it's not ultra sharp; it still retains its motion blur across the frames. Then you can have separate outputs coming into your split, where on one you see straight green screen on the panels while the action is happening, and the other output is your beauty, which is what the panels look like at the same time, which is amazing. You can then use that to create your mattes and apply them to your beauty afterwards, so you always have a matte running at the same time.
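Downstream, the subframe scheme effectively gives you two interleaved streams to pull apart. This sketch of the de-interleave is illustrative only, not RED's or Brompton's actual interface.

```python
# Illustrative: the wall alternates beauty and green frames at 50 Hz
# while the show runs at 25 fps, so the recorded stream can be
# de-interleaved into a beauty pass plus a synchronised matte pass.

def deinterleave(frames_50hz):
    beauty = frames_50hz[0::2]   # even subframes: environment as shot
    green  = frames_50hz[1::2]   # odd subframes: green screen for mattes
    return beauty, green

frames = [f"beauty_{i}" if i % 2 == 0 else f"green_{i}" for i in range(8)]
beauty, green = deinterleave(frames)
print(beauty)  # 25 fps beauty stream
print(green)   # matching 25 fps matte source
```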

- Yeah, it's amazing. I can't see any reason not to do that; it just sounds so useful.

- I know, it sounds good. The problem is signal. We'd have to do it from Unreal, so we're spitting out 25 frames per second, which is fine, or we'd probably go to 50 and then just roll at 25. But when you inject an extra frame, you're adding signal: say we've got a 4K image being projected onto the screen, you're then running it at 50 hertz, 50 frames a second, so you need that much more signal going through the panels. These Brompton processors can only output a 4K signal across the panels, and then it's all about math to work out the signal ratio through them, and you want backup, and there's the power ratio as well. We have the ability to run 50 frames per second at 4K, which is good, but if you're trying to shoot at a different frame rate and it goes over a certain rate, the panels can't run that way, so you need another processor. Which is fine, it just means the cost goes up every time we add more of these processors, which are very expensive pieces of kit. So nothing comes for free in terms of options.
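The signal problem is easy to put in back-of-envelope numbers: doubling the wall's refresh doubles the pixel data a processor must push. The figures below are illustrative; real Brompton capacity depends on the model and configuration.

```python
# Back-of-envelope signal math for why the extra subframe can cost you
# another processor. Figures are illustrative, not vendor specs.

def gbit_per_second(width, height, fps, bits_per_pixel=30):   # 10-bit RGB
    return width * height * fps * bits_per_pixel / 1e9

for fps in (25, 50):
    print(f"4K wall at {fps} fps: {gbit_per_second(3840, 2160, fps):.1f} Gbit/s")
# 25 fps -> ~6.2 Gbit/s; 50 fps -> ~12.4 Gbit/s. Past the processor's
# ceiling you simply need another unit, plus the power and budget for it.
```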

- No, nothing comes for free, exactly. So, yeah, that's super interesting. Well, as useful as it would be, perhaps it can still be somebody's job to clip around those elements in roto.

- Yeah, look, I think it's actually super, super helpful, and I'm sure there are going to be projects where we just go, we need it, and we'll pay for that extra processor and make sure we've got the power for it, and I think we'll be good.

- You know, don't take down Brisbane's grid.

- Yeah, well, here at Steel Bridge we've just had, like, another, we've got three 32-amp three-phase outlets, or inlets, or whatever, so we've really bumped it up a little bit more, for a small studio.

- So, also a good place to go and charge a Tesla if you need to.

- Well, yeah, let's not tell too many people. None of us are driving Teslas out here. We should do that, it's a great idea, we'll take that on. It'd be a very quick charge.

- Yeah, look, thank you so much, Bryn, for your time. I really appreciate it. I think it's a very complicated bit of subject matter, and I'm sure we've only just scratched the surface of understanding how it all works, but I definitely know a lot more now, having spoken to you, than I did beforehand, because it's a totally different universe to me. So I can't thank you enough for explaining some of these concepts, and I really look forward to seeing how people can integrate the world of colour grading a little bit more into virtual production as the technology becomes more mainstream and more widely adopted.

- Yeah, well, it was a pleasure chatting to you, Kali. Like I said, the more people that are on board, especially colourists, the better I think the final results will be, and that's what we should all be striving for. So, happy with that, for sure.

- Absolutely. Well, thank you so much. For Mixing Light, this is Kali Bateman. See you next time.
