In recent years, visual effects in movies and television have become both more sophisticated and more ubiquitous. Audiences have grown used to elaborate environments and fantastical fight scenes, historical dramas and futuristic worlds. As audience expectations rise, visual effects technology has to evolve to keep pushing the envelope without breaking budgets.
Since the 1940s, one of the most common methods of creating visual effects shots where multiple elements are composited together has been chroma keying – also known as blue or green screen. We have all seen images of actors performing in front of a green screen set that later becomes a fabulous environment with dragons or spaceships. Chroma keying works by making pixels of a particular color – usually a bright green – transparent. Once the green areas of the footage are invisible, other elements can be combined with it – other actors, virtual sets, real background footage or weather maps – resulting in a composite image that could not be filmed directly.
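The keying step described above can be sketched in a few lines of numpy. This is a minimal illustration, not production-grade keying (real keyers work in color spaces better suited to separating the screen color and handle soft edges and spill); the function name and the simple color-distance threshold are assumptions for the sake of the example.

```python
import numpy as np

def chroma_key(foreground, background, key_color=(0, 255, 0), tolerance=80):
    """Composite foreground over background, treating pixels near key_color as transparent.

    foreground, background: HxWx3 uint8 RGB arrays of the same shape.
    key_color: the screen color to remove (default: bright green).
    tolerance: maximum Euclidean RGB distance from key_color to count as "screen".
    """
    fg = foreground.astype(np.int16)
    # Per-pixel distance from the key color; pixels within tolerance become transparent.
    dist = np.linalg.norm(fg - np.array(key_color, dtype=np.int16), axis=-1)
    keep = dist > tolerance          # True where the foreground pixel is kept
    out = background.copy()
    out[keep] = foreground[keep]
    return out

# Tiny demo frame: the left column is green screen, the right column is an "actor".
fg = np.array([[[0, 255, 0], [200, 50, 50]],
               [[0, 255, 0], [200, 50, 50]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 10, dtype=np.uint8)   # a flat dark background plate
result = chroma_key(fg, bg)
# Green pixels are replaced by the background; the "actor" pixels survive.
```

A hard threshold like this produces jagged edges; real systems compute a soft matte (a per-pixel alpha) instead of a binary mask, which is one reason even, well-lit screens matter so much on set.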
Image credit: WBGU TV – Mbrickn
Filming this kind of green screen content comes with many challenges. For example, when a character must interact with something that isn't there, the actor usually needs a physical stand-in, such as another actor in a green screen suit, or padded and covered props on sticks representing a robot or creature to be added later.
Similarly, it can be difficult to line up a shot correctly against a large virtual background – for the director and camera crew, it is hard to tell whether a camera movement might cut off the top of a tower, or exclude important environmental features that should be in frame. Careful planning and pre-production can mitigate this, but mistakes still happen, and reshoots can be very costly.
A third challenge with chroma key is lighting; all the elements placed in the shot must share the same lighting direction and quality in order to feel like they genuinely belong in the same scene. Likewise, any camera movements must be accurately recorded and reproducible between shots, otherwise elements will appear to float or move incorrectly in relation to each other.
All these challenges can be made easier with more intelligent and innovative solutions. Today, Ncam Technologies announced a new model of its award-winning camera bar. This completely redesigned camera bar features Intel® RealSense™ technology, adapted and integrated into the Ncam Mk2 camera bar to meet the requirements of the media and entertainment industries. The system also includes the latest Ncam Reality 2019.3 software running on standard HP workstations for reliable processing power.
The Mk2 camera bar mounts on any camera and uses Intel® RealSense™ technology to capture spatial data that feeds back to the Ncam Reality server. The system allows filmmakers to see their pre-built visual effects in real time as they shoot. Ncam combines computer vision and computer graphics techniques to show – on a monitor and through the viewfinder – how the final shot could look, with virtual assets and real actors composited live as shooting continues.
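One thing spatial data makes possible in a live composite like this is depth-aware layering: at each pixel, whichever element is closer to the camera wins, so virtual assets can pass in front of or behind real actors. The sketch below illustrates that idea with plain numpy arrays standing in for a camera frame and its depth map; the function name and the synthetic data are assumptions for illustration, not Ncam's or the RealSense SDK's actual API.

```python
import numpy as np

def composite_with_depth(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Depth-aware composite: per pixel, keep whichever layer is nearer the camera.

    real_rgb, virtual_rgb: HxWx3 uint8 arrays. real_depth, virtual_depth: HxW
    arrays of distances in meters (smaller = closer; use np.inf for "no object").
    """
    occludes = virtual_depth < real_depth      # True where the virtual asset is in front
    out = real_rgb.copy()
    out[occludes] = virtual_rgb[occludes]
    return out

# Demo: the real scene sits 2 m away; a white virtual object covers the left
# column at 1 m (in front) and the right column at 3 m (behind, so hidden).
real_rgb = np.zeros((2, 2, 3), dtype=np.uint8)
real_depth = np.full((2, 2), 2.0)
virtual_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)
virtual_depth = np.array([[1.0, 3.0],
                          [1.0, 3.0]])
frame = composite_with_depth(real_rgb, real_depth, virtual_rgb, virtual_depth)
```

Per-pixel depth comparison like this is what lets a live preview handle occlusion automatically, rather than requiring artists to rotoscope actors in post.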
A good example is letting an actor see what the geography they are standing in will eventually look like. Imagine a shot where one character points out objects in the distance to another – standing in front of a green screen, the actor would have to estimate where each feature sits in the landscape, and in post-production, artists might then have to adjust the environment to match the actor's gestures convincingly. With the Ncam system, the actor can see the virtual assets on the monitor, understand where they are positioned in relation to the virtual set, and gesture and act accordingly. This not only gives the actor more confidence in the performance, it also saves time and money, since those post-production changes are no longer needed.
The Mk2 camera bar also makes it easier for camera operators to see on the monitor what they are framing in shots. In some productions, combinations of real and virtual assets are necessary, for example, a real street or building façade that then merges into a virtual street. By allowing the camera operators to view what is virtually around them in all directions, live as the camera moves, they can more accurately and precisely move where they need to – again, avoiding the need for costly adjustments after the fact.
With this additional information on the day of shooting, directors can stage the action and position characters and cameras much more quickly, since the monitor shows the pre-built virtual sets all around them. This new kind of hybrid virtual production can and will lead to much deeper and richer content creation, with pre-production assets easy to place in scenes and move as needed without slowing down shooting, and with screenings showing much more clearly what final shots will eventually look like. Ncam's focus on developing innovative approaches to AR, in order to offer its customers comprehensive, industry-leading solutions, has enabled – and will continue to enable – amazing productions for us all to enjoy.
Read more about the Ncam Mk2 camera bar here.