Virtual Production in UE4 | SIGGRAPH 2019 | Unreal Engine


>>David: Hi. Welcome to this talk about virtual production in Unreal Engine. I hope you are having a good SIGGRAPH so far.
I am David Hurtubise, Technical Account Manager at Epic Games in Montreal, and this is Philippe Rebours, Technical Account Manager at Larkspur. We have been playing around with virtual production for a long time at Epic Games. In this talk, we will give an overview of the tools for virtual production, and then we will do a case study with Joji and Jacob on the Fortnite Shorts. First, what is virtual production? Virtual production is a next-gen workflow built on a set of real-time tools. It allows filmmakers to turn their vision into reality, and it redefines what can be achieved in narrative storytelling. It is a blending of live action with virtual worlds, and it can be used for performance capture, previs, body and face motion capture, virtual cinematography, and so on.
In 2016, Epic showed one of its greatest achievements with Hellblade: Senua's Sacrifice. With Senua, they wanted to demonstrate the use of Sequencer, a new cinematic tool for capturing and editing, and it was the beginning of the experimental tools that we know today. After this, as Andy Serkis said, "Video-making tech is a movie-making tool." So we made him part of the Engine with a real-time digital human of himself. Epic demoed the digital [INAUDIBLE] cameras, and that was the early version of the VP tool we now know as VCam.
To go deeper on the animation side, at GDC 2018 Epic Games teamed up with 3Lateral, Cubic Motion, Vicon, and Tencent to make a digital human, Siren, a high-fidelity real-time digital Character. Working with live animation gave Epic Games the idea to do more and more work on the LiveLink Plugin. And in 2018, we also dived into an exciting new technology for Unreal: ray tracing.
In a joint venture with NVIDIA and ILM, Epic Games unveiled Reflections, a live-action-style short film set in the Star Wars universe. Our Real-Time Live! presentation in 2018 with Reflections walked through all the virtual production tools, and we also had experimental work on the VR scouting tool that we now have. At GDC 2019, Goodbye Kansas and Deep Forest Films revealed Troll, a short Unreal demo that raised the bar with unprecedented levels of cinematic quality, again featuring the full VP toolset and ray tracing. Here are some videos
on this project. But why would cinematographers want to work in a game engine to make movies? I can answer that easily: the Unreal Engine VP tools are very accessible, and maybe because we can make a super cool baby like that. It was a really cool baby; it won Real-Time Live! at SIGGRAPH 2018. And there are many, many more examples. If you want to dig deeper into some projects, you can go to our website, under the Production tag, where you can also find the Virtual Production Field Guide that has just been released, with many cool features and a lot of information.
I am pretty sure you want to know more about those tools, so here is an overview of what we will talk about. It will cover all of these sections in order; take a little time to look through it. We will not have time to dive into all the VP tools, though, so there are other tools that would be interesting to know about, especially around nDisplay. We have many, many things with [INAUDIBLE] and everything that works with nDisplay; nDisplay enables in-camera VFX, driving multiple displays that can show your content. First we need to link the machines together, so we will talk about Multi-User, that is, how to connect multiple instances of Unreal together to work collaboratively. In this example you see VR Scouting, VCam, mocap, and Sequencer all working in a Multi-User session, and we bring all of that together with our teammates and colleagues to work in real time. You can see an example in this video. If there is a shutdown where you did not save your changes, the next time you open your project you are shown a list of the transactions made since the last save, so you can restore those changes up to any recovery point you need.
The big new thing in 4.23 is the Multi-User UX and UI, which have changed a lot. First, the Go Live icon is no longer on the toolbar; you now open the Multi-User Browser under Windows > Developer Tools. If you want to, you can put the Go Live button back on the toolbar. So here is the Multi-User browser. You can see there are a lot of useful, straightforward buttons: Create Session, Launch Server, Join Selected Session, Archive, Restore, and Delete. You have the Settings, where you can see the display name and color used for your presence, and you can see the archived content there as well. We also minimized the chance of losing sessions to user error: the server automatically archives all live sessions, and you can easily recover an archive at any time. You select the archive with the Archive button, pick the session you archived, and just restore it. Also, you now receive a notification when you try to modify an Asset. It is much like Perforce: if another user is working on something, it is already locked, and you see the notification.
Now, for LiveLink: we have extended the LiveLink Plugin in 4.23, and it is now more than just animation. The big thing is that you now have save and load for presets: instead of setting up the sources every time, you can save them as a preset, which is really useful. You have a status indicator that shows at a glance which LiveLink sources are currently streaming data. You can also see the role there; there are already built-in roles for Character animation, cameras, lights, and 3D transforms, but you can create a new role if you want to. You can now also pre-process the data coming through LiveLink before it gets applied in the scene. In this example, I just add the built-in axis-switch pre-processor, so the Y-up to Z-up conversion is already handled. With that approach you can create your own pre-processors and translators, and map [INAUDIBLE] one role to another. LiveLink is also easier to use in Game Mode: instead of the Blueprint setup you needed before, there is now an Apply To Client Blueprint node. You just take your preset, target it, and it is really easy to use.
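If you script your stage setup, the same preset can also be applied without touching Blueprint. Here is a minimal sketch run from the Editor's Python console; the asset path is hypothetical, and it assumes the preset's ApplyToClient call is exposed to Python as apply_to_client, so check the API reference for your Engine version.

```python
import unreal

# Hypothetical path to a LiveLink preset saved from the LiveLink window.
PRESET_PATH = "/Game/LiveLink/StagePreset"

preset = unreal.load_asset(PRESET_PATH)
if preset:
    # Recreate the saved sources and subjects on this Editor instance
    # (assumed Python name for the ApplyToClient Blueprint call).
    preset.apply_to_client()
else:
    unreal.log_warning("LiveLink preset not found: {}".format(PRESET_PATH))
```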
You also have the LiveLink Controller Component, which lets you drive an Actor in Unreal more easily; for example, you can take a Skeletal Mesh transform and apply it to a Static Mesh transform, using whichever roles and parameters you need. Also, a big thing is the LiveLink Pose node: in an Anim Graph you now have a Picker, so instead of typing the subject name, you can just select a subject that is already available from the live sources. After that, I will pass this over to Philippe for VR Scouting.

[APPLAUSE]

>>Philippe: Thank you.
So in the real world, directors are used to going on location with their team to scout: to find the right camera angles and figure out how to light the environment. Our VR Scouting tools let them do the same in the virtual world. So for now it only works
with Vive, but we intend to
add other headsets in future Unreal versions. You have various
navigation tools. You can fly,
you can teleport yourself. We have grip navigation where, with hand movements, you pull yourself along. You can also scale
the World down, if you want to have
a bird's-eye view of your environment. So the idea is to find camera points of view, right? So with the viewfinder tool, you can attach a viewfinder
to your hand so you can see, and you can change
the lenses as well. So when you are happy
with a position, when you like
what you are seeing, you can spawn a camera
that will be bookmarked, and then everyone can
see it and go there. You can also interact with it, and you can go back
to that position later on. In the Interaction Mode,
you can grab and move Objects. And we have
the Laser Context Menu, so you see those laser lines that show you where you are pointing in the scene. We keep a history
of all the moves, so you can always go back to previous versions
in the undo system. You can also bookmark location
that looks interesting, or when you want to be sure that you can teleport
yourself later on. We also have a measuring tool
that display the distance between two points
within the scene. And now David will talk about
Rest API. And then I will come back. [APPLAUSE]>>David: I will talk
about, I think, one of the coolest feature
in the 4.23 — it is the ability
to controlling remotely over HDP,
I think it is Insights. Rest API, in that example,
I use Server Control, to remotely send from Unreal. We use view.gs and node.gs and
we can control the Actor we have to translate. So with that you can easily
make remote controls. Let us check on
the principal command that you have
on the output plug. You have a Start Server, if you want to enable server
on startup, this will let you
check about that. above that.
By default will be all set on Port 8080, but you can change
the default if you want. The Media IO, it is from 4.20, but I want just to
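To make that concrete, here is a minimal sketch of calling the HTTP interface from Python instead of Node.js. The /remote/object/call endpoint, the PUT verb, and the payload shape follow the Remote Control API conventions but should be treated as assumptions, and the object path, function, and parameter names are purely illustrative; check the plugin documentation for your Engine version.

```python
import json
import urllib.request

# Assumed Remote Control endpoint for calling a function on a UObject.
URL = "http://localhost:8080/remote/object/call"

payload = {
    # Hypothetical path to an Actor in the loaded Level.
    "objectPath": "/Game/Stage/Main.Main:PersistentLevel.CineCameraActor_1",
    "functionName": "SetActorLocation",
    "parameters": {"NewLocation": {"X": 0.0, "Y": 200.0, "Z": 150.0}},
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",  # the Remote Control API expects PUT for object calls
)

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))
```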
Media IO has been there since 4.20, but I just want to cover it a bit. We have the Proxy and Profile modes that let you have SDI inputs and outputs, so you have dual-link and quad-link inputs and outputs, and key and fill if you want to do alpha work. Media Profiles give you easy access, under the toolbar, to profiles where you can just set your inputs and outputs. We have Blackmagic and AJA Media plugins already on the Marketplace, which give you the bundles; the Blackmagic plugin works with most of their different cards.

Next, the Timecode Synchronizer. With [INAUDIBLE] 2.4 live motion data (in this example we have tracked cameras, face mocap, and body mocap timecode, each at a different frame rate), we pick a master provider and can sync each of these sources to the same frame rate. To do so, we need to enable the Timecode Synchronizer Plugin, and after that you have access to the Timecode Synchronizer Asset. It works with LiveLink, so you just select the LiveLink source you have as the timecode source, and then check the provider you want to use as the master.
Since 4.22 we also have OpenColorIO. I just want to go a bit deeper on that, because many people do not know about it. We enable the OpenColorIO plugin, and we create a new OpenColorIO configuration under Create Advanced Asset > Miscellaneous; you will see the OpenColorIO Configuration there. In this example, I used the default configuration from the OpenColorIO.org website, so you get a zip file with the config that OCIO can use. You point to that config file directly in your configuration Asset. After that, you will be able to set up the desired Color Space settings. These Color Space settings work mostly with Composure for now, but the idea is to make them work with nDisplay in the future. For now, you can use the OpenColorIO conversion on a Composure element, or in the CG compositing. I will hand back to Philippe to talk about the Virtual Camera. Thank you.

>>Philippe: So our Virtual Camera tool lets you use an iPad like a camera.
your point of focus. We have also a
pink plane that is a helper,
so you can be more precise where you want
your focus point. You can also play
with the aperture, and the proper depth of field
is then calculated. Another tool on the left
is the focal length, and you can swipe,
going from one to the other one. So you can move
in the real world and the camera is getting–
the position of the iPad is getting recorded. But you also can direct
your camera using the joypad. So the one on the left
is for dolly and track, and the one on the right
is the boom. So some of the settings
that you have, because the iPad is quite light,
it can be jerky a little bit, so you might want to stabilize
your movements, move it. Or you can lock some axis,
for example, you can lock the three
translation axes, and you get another move.
Or you might want to lock the Dutch to be sure that
your camera stays horizontal. You can also play with the
motion scale by increasing it. You can do those large
movements like a flyover, or by decreasing,
by scaling down, you can be more precise if you
have a very, very close-up. So this interface is done
using a Blueprint Widget. That means that
you can modify it, you can add
your own parameters that you would like to control
through the iPad. Now, Take Recorder is our window, our interface where you define which Actors from your scene, and which of their parameters, you want to record. Once you have created
this list of Actors, you can save it
by creating a preset that you can reuse anytime. You can also define
default tracks. So for a specific Actor class,
in this particular example, the Cinema Camera Actor,
Cine Camera Actor, you can set which parameters
you always want to record. For instance, you are going to
want to record the focal length, or the aperture, even though you might not
change it during the take. But also, you can define which parameters
you never want to record. So when you hit Record, the Take Recorder saves a Level Sequence file. A Level Sequence file is a container: a list of tracks representing parameters that can be modified over time. We have multiple types of tracks: audio tracks, event tracks, shot tracks (that is for the editing), and also the Subscenes track, which can reference other Level Sequence files as sub-sequences.
So Sequencer is our video editing tool, and the visualizer
of those Level sequence files. For instance, here we are looking
at our master sequence file that contains the shot tracks
and the audio tracks. We can see its name
on the right, and the shot
that we are looking at is a different file
that we are referencing. By double-clicking on it,
I visualize the shot file, and this is where
we do our shot work. I can, for instance,
modify light. It will automatically
add a key. The interest
of the sublevels is, you can split the work
and have multiple artists working on a shot
at the same time. When you want to export
your movie, you click on the clapper icon, and then you can
set the parameters and capture the movie. Besides rendering AVIs, JPEGs, and EXRs, you can now render out videos encoded with the Apple ProRes encoder. If you use the Timecode Synchronizer, the keys will automatically be set at the proper frame on the timeline. And by implementing the Python FBX animation import and export, we have now finished the Python implementation for Sequencer, so you should be able to do everything related to Sequencer through Python, or, thanks to Python, through an Editor Utility Widget. So with that you can create Level Sequence files, add tracks, keys, et cetera.
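As a rough illustration of that scripting surface, here is a minimal sketch that creates a Level Sequence, binds a selected Actor, and gives it a transform track, run from the Editor's Python console. The asset name and path are made up, and the exact extension-method names can vary slightly between Engine versions, so treat this as a starting point rather than a recipe.

```python
import unreal

# Create a new Level Sequence asset (name and path are hypothetical).
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
sequence = asset_tools.create_asset(
    asset_name="MyShot_010",
    package_path="/Game/Cinematics",
    asset_class=unreal.LevelSequence,
    factory=unreal.LevelSequenceFactoryNew())

# Bind the first selected Actor in the Level and add a transform track with one section.
actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]
binding = sequence.add_possessable(actor)
transform_track = binding.add_track(unreal.MovieScene3DTransformTrack)
section = transform_track.add_section()
section.set_range(0, 120)  # frames, at the sequence display rate

unreal.EditorAssetLibrary.save_loaded_asset(sequence)
```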
We also have a new Curve Editor, which now has its own window. We implemented the re-timing tool: by double-clicking you add those vertical lines, and if you move them you can compress or stretch the keys locally. We also added
the transform tool, where you can select
multiple keys that you can then translate
and scale in 2D. We also have different
viewing modes, so there is
the Absolute view mode, the Stack view mode,
and the Normalized view mode. Also, you can apply filters,
and we provide some of them. But you can easily
create new ones; you just need to create a new Class that derives from UCurveEditorFilterBase and override the appropriate function. And now I will let Jacob
and Joji present a case study
in virtual production.

[APPLAUSE]

>>Jacob: Hi. I'm Jacob Buck.

>>Joji: I am Joji Tsuruga,
and we are technical artists on this Special Projects team.
And we are actively using and advancing these tools
that we just saw. Fortnite Shorts —
so Fortnite Shorts are a result
of an ongoing special project that combines the latest
technology in the Unreal Engine with the world of Fortnite to explore and streamline
virtual production workflows through the creation
of sketch comedy shorts. Believe it or not, the hardest
part of the process is comedy, but we will leave
that for another talk. The first public release
of some of these shorts went live last month at the Fortnite Celebrity Pro
Am competition. Let us watch one now. This one is called
“Desert Island Flare.” [VIDEO PLAYS]>>Oh, hey. Hey, hey! [FLARE GUN SHOT] [EXPLOSION]
[GASPS] [SPLASHING]
[CRASH] [MUSIC]>>Joji: So, how did we make
these shorts? What is our workflow?
As with most projects, we generally have
three large sections: pre-production, production,
and post-production. We begin with pre-production, where we come up
with the concept, as well as prepare everything
we need to move into production. Writing scripts
is the foundation of all of our sketches. Select scripts will be
translated into storyboards, which help with the framing
and pace of the story. We also create all the key Assets necessary
to tell our story, including Characters,
props, environments, and FX. Creating a stage layout
helps us be prepared once we are on the stage, to know where we want performers
and hero cameras. We also create a Stage Proxy
to view in Engine alongside our environment, to understand
our physical volume constraints. The transparent black here represents the walls
of our stage, and the green outline
represents our capture volume. We then move into production, which encompasses everything
that happens on set. This is where virtual
production shines. Jacob will now explain the
details of performance capture.>>Jacob: To capture performance,
we need to record audio, body, and facial performances. We used a passive optical system
for body capture, and we targeted our performances
onto our Fortnite Characters. When looking for a facial
mocap solution, we needed a system
that would allow us to produce content quickly,
with a very small crew. The goal of these shorts
is to be topical. This led us to look
for our facial solution, which met the following
requirements: We needed a real-time solver
to see results live in Engine; the solver needs to be generic, as we may not have a consistent cast of Actors; it needs to have little to no
training or calibration time, and must be able
to send timecode directly into the Engine. And it’s always nice
to have reference footage. The solution we felt best
matched our requirements was the iPhone X. They are incredibly easy
to deploy, and use a generic solver
that works well across
many different face types. There is no training
or re-targeting, which allows us
to put the device in front of anyone’s face, and allows it to drive
any Character in Fortnite. We wrote a lightweight app
that streams the ARKit face poses over LiveLink,
stamps each pose with timecode, and also captures
the reference footage. Here you will see reference
footage on the left, and the LiveLink curves
streaming into the Engine on the right.>>Oh, oh, oh. Hey! Hey,
hey!>>Three, two, one,
it hits the balloon! [GASP]>>The bus is on fire!
And, cut.>>Jacob: Now I want to
talk to you a little bit about our stage setup.
To capture performance, again, we needed the body,
face and audio. We capture the performance
in Unreal on the record box. The body and facial performances are streaming into the Engine
via LiveLink, and the audio is piped directly into the Engine. And Sequencer is what
we use to record a take. All the capture sources
in the Engine are kept in sync using Timecode. A stage with this setup allows you to capture
a live performance in Unreal, but once you introduce
multiuser into the equation, a number of possibilities
open up. I’m going to pass it
back to Joji to talk about how we use
multiuser in Fortnite Shorts.>>Joji: So by adding
a multiuser server, several Unreal clients
are able to collaborate on the same project, live through
shared transactions. A simple, yet heavily-used role
on set is the monitor. This is an Unreal client connected to a display that is visible on set.
Through signal splitters, we also used this role
for video village. Here is a clip of what gets
displayed on the screens. We created a quad-view tool that can easily swap between
cameras that exist in a Level. It can also go full screen
on a single view, simply by clicking
on a quadrant. There is also some extra
information that we need, like timecode
and slate information. We built this tool
using Editor Utility Widgets. It is an extremely powerful
feature that was added in 4.22. If you are familiar with UMG, you will be happy to see
that it has the same designer and graph tabs for building
complex interfaces that can run in Editor.
Here you can see the director using the monitor on set
to review takes and give notes
on cameras and framing. Speaking of cameras, we created Blueprint-driven
procedural cameras and used several of them
on every sketch; every camera visible here
is moving procedurally. The logic and configuration
are kept simple, so that we could easily
drop these cameras into a scene on the fly, and quickly have basic framing
on all the key subjects. You can see that
with a few dialable controls, such as target selection
and motion easing, you can have endless variety
with little effort.
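To make the idea concrete, here is a small, purely illustrative sketch of the kind of motion easing such a procedural camera might use (this is not the actual Blueprint logic from the shorts): each update, the camera exponentially eases toward a framing offset from the selected target and returns an aim direction for a look-at.

```python
import math

def ease_toward(current, goal, dt, smoothing_time=0.5):
    """Exponentially ease a scalar toward its goal; smaller smoothing_time is snappier."""
    alpha = 1.0 - math.exp(-dt / max(smoothing_time, 1e-6))
    return current + (goal - current) * alpha

def tick_camera(cam_pos, target_pos, offset, dt, smoothing_time=0.5):
    """One update of a hypothetical procedural camera.

    cam_pos, target_pos, and offset are (x, y, z) tuples. The goal position is
    the target plus a framing offset, and the camera eases toward it each tick.
    """
    goal = tuple(t + o for t, o in zip(target_pos, offset))
    new_pos = tuple(ease_toward(c, g, dt, smoothing_time)
                    for c, g in zip(cam_pos, goal))
    # Aim vector for a look-at constraint (normalization left to the camera rig).
    aim = tuple(t - p for t, p in zip(target_pos, new_pos))
    return new_pos, aim

# Example: one 30 fps tick, easing toward a point 300 units behind and 80 above the target.
pos, aim = tick_camera((0, 0, 180), (500, 0, 100), (-300, 0, 80), dt=1 / 30)
```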
We also created a role called VPRC, or Virtual Production
Remote Control, which pixel streams
an interface to a tablet, and can be controlled
by touch input. To better understand
the idea behind this tool, let us take a look
at the various human roles that may be necessary on-set for a traditional
film production. There are performers, directors,
ADs, DPs, script sups, VFX sups, gaffers, grips, et cetera —
the list can be very long, and often times several of these
roles are multi-person teams. Each role also comes with its own set of equipment. When creating VPRC,
we wanted a tool that can give the user
the ability to wear the hat
of any of these roles by consolidating varying functionality
into a single interface. This is especially useful
for small teams that may want to run
a full production with a very low head count. Here, you can see that a user
can have control over slating, adjusting the views
on the monitor, firing off recordings. They can use the tabs on the top
to gain access to other roles. This is the Actor tab,
where you can inspect the performer
and Character properties, and verify facial performance. You can also swap out
Characters. The lighting tab gives
you control over time of day and the direction of the sun. Through VCam integration, you can also drive virtual
cameras around your scene. With additional tablets
and MU clients, you can even have multiple
VPRCs running at the same time. Through MU, there are also several additional roles
that can be added. A generic Editor role
is always handy to have on-set. Other roles may include
dedicated Virtual Camera roles, a rendering box,
and a machine role dedicated
to saving and uploading. We plan to continue to explore
more in this area to find the best combination
of roles for our workflow.>>Jacob: While multiuser
enables collaboration between all of our Unreal boxes, we needed a tool
that would enable us to control all the peripherals
on the stage. This now includes the body, face, and audio, but also the reference cameras. Control Panel is the hub
Control Panel is the hub that allows all the different
components on the stage to communicate with one another.
Our design goal was simple: We wanted to set the slate
in Unreal, hit record, and then have every device
capturing on stage. Hitting Record in Unreal sends
a message to the Control Panel, which then relays that message
to all peripheral devices. Likewise, someone could be
walking on set using VPRC, hit Record, and then
trigger the entire system.
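As a purely illustrative sketch of that relay pattern (not Epic's actual Control Panel code), a record trigger could be rebroadcast to every registered peripheral as a small timestamped message, for example over UDP:

```python
import json
import socket
import time

# Hypothetical list of peripheral listeners (reference cameras, audio recorder, ...).
PERIPHERALS = [("192.168.1.20", 9000), ("192.168.1.21", 9000)]

def broadcast_record(slate, take, start=True):
    """Send a start/stop record message to every registered peripheral."""
    message = json.dumps({
        "event": "record_start" if start else "record_stop",
        "slate": slate,
        "take": take,
        "timestamp": time.time(),  # a real system would carry SMPTE timecode here
    }).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for address in PERIPHERALS:
        sock.sendto(message, address)
    sock.close()

broadcast_record(slate="DesertIslandFlare_sc01", take=3)
```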
Since Control Panel knows everything
information about a take to a project
management database. This automatic tracking of take
data is central and critical for enabling
post-production automation. While Control Panel mostly
relays messages from one Unreal session
to other peripherals, it can also trigger
takes directly. The system is modular,
and was designed for a stage to experiment
with different machine roles. It is easy to add new multiuser
box into the configurations. Control Panel also helps
manage a multiuser session. It communicates with
the multiuser machines, allows you to pick
a change list, sync and build your project. It also acts as a launcher
that starts a multiuser server, and can bring all boxes
up into a Level ready to edit
in the multiuser world. This diagram represents
the machine setup that we used on our stage. I’m going to pass it
back to Joji to talk about post-production.>>Joji: In post-production,
this is where we polish the work and go through the finishing
stages for delivery. For some sketches, rather than
breaking them down into shots, we explored
a multi-cam workflow, where we have
a single performance covered by multiple cameras.
By having a master performance, all the layering, cleanup,
animation, lighting and FX can be contained
into a single sequence and be shared by
different camera sequences. This still allowed us
to have the ability to create
camera-specific overrides, without the constraint of shot
order or shot lengths. Most importantly,
it gave us huge flexibility with our storytelling, as we were able to keep
the edit fluid throughout the entire process. One major advantage
of virtual production is the ability
to adjust specific sections of the performance
non-destructively, without affecting any of
the action you’d like to keep. Here, we can see
the creation of a new camera on an existing take,
through the use of VCam. Utilizing layered takes, we were also able to enhance
hand and finger motion. We even layered in new dialog
and facial performance, on top of the existing
body motion to try alternate lines
in the script. Editing can be done
directly in Sequencer, but there is also a feature
that allows roundtripping through the use of EDL
or XML files. You can export these file types
directly from the shots track, which can then be imported into the editing package
of your choice. In this case, you can see a
matching edit in Adobe Premiere. For the return trip, you can
also export from Premiere. Once imported into the Engine,
you can see that the edit will match
the changes made in Premiere, including track hierarchy. Once picture is locked,
we add in the final audio mix, balance the colors, ensure that
it is broadcast-safe, and then transcode
for various delivery formats.>>Jacob: What are our future goals?
We always strive for no post. As we continue to push more
functionality into the Engine, our goal is to be able
to go into production and walk off set
with an edit and final pixels. Going forward, we hope
to share best practices and the tools we shared here. We wanted to end our talk
by showing the other short we shared at the program,
it’s called, “Team Outfits.” [VIDEO PLAYS]>>Insano Death Squad unite!
Let’s do this!>>Ready for vengeance!
>>We reclaim our destiny.>>Hello! What?>>I thought we
talked about this.>>Talked about what?
>>Your outfit, man. It’s just not screaming, “Insano
Death Squad!”>>You’re crazy.
This is the best!>>We’re trying to be
intimidating, here.>>I am a fish walking on land,
breathing the air. That is intimidating!>>No. Go change!
>>Are you serious?>>Go change,
or you’re off the squad.>>Fine!>>I’m so tired of this.>>It’s like he’s not
taking this seriously.>>He gets one more chance.>>Behold the Mullet Marauder! [GROANS]>>Now what?
>>I’m out.>>I just can’t even… [MUSIC]>>Jacob: Thank you
to everyone who came. [APPLAUSE]
