Eva and Space Spider: Bringing you mind-blowing visual effects
The Goal: To use lightweight, structured-light 3D scanners to create highly realistic 3D models of people and props for animation and other VFX, use these models in a variety of ways during production, and archive them for later use, including sequels.
Tools Used: Artec Eva, Space Spider
Ever wondered how a movie character can burst into pieces, turn into a beast or jump from a rooftop and land on two feet completely unharmed? Stunts like these would be impossible without companies like TNG Visual Effects. Armed with Eva and Spider, TNG produce 3D models of characters and props that are later incorporated into movies to explode, shatter, transform – you name it. You can see TNG’s work in Man of Steel, The Twilight Saga, The Girl with the Dragon Tattoo and many other top-grossing Hollywood blockbusters. Want to take a peek behind the scenes?
3D scanning in the film industry
“3D scanning brings the ability for directors to do anything they want,” says the founder of TNG, Nick Tesi, who started in 3D computer graphics back in 1986. “It’s a new era for filmmaking, and today directors can really follow their vision in the storytelling process, having the chance to achieve any thought process or concept they have in mind.”
Filmmakers are storytellers and not typically the ones deciding what will be used to create a scene, Tesi adds. That is most commonly decided by the Visual Effects Supervisor, who chooses whether or not to use 3D scanning for the job. The budget for the project plays a big part. If it’s a tighter budget, the Supervisor could elect to use practical effects instead, meaning they might actually blow up a house or car rather than have a completely computer generated scene created. This may save a few dollars if the first shot taken is perfect, but you could end up losing money if you need to reshoot the scene. Using a digital double of a vehicle, prop or even a character gives the director more control over the scene by allowing multiple takes until it looks just right.
How they do it
TNG have been working with Artec scanners for five years now, calling themselves “early adopters.” They usually use Eva for heads and bodies, while Spider comes in handy for scanning props and any detail on the body that calls for a finer scan. Among Eva and Spider’s strengths, Tesi names their small footprint, which makes them easy to travel with, better data with each new software release, high accuracy, and the fact that they come pre-calibrated.
One of the most challenging jobs performed by TNG has been the scanning of a character wearing a lot of intricate armor. For that, TNG used a combination of Eva and Spider which helped them deliver a quality digital double to the customer.
To begin the process of 3D scanning an object, reference pictures are taken first; these aid the 3D modeler and the 3D texture artist, and help during the final quality check to make sure the 3D object matches its real-life counterpart. After the object is 3D scanned, the scan data is aligned and fused together. Once the 3D scan technician confirms they have as close to 100% coverage as feasibly possible, the scan is complete and can be processed.
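The article doesn’t say which software TNG use for this step (in practice, Artec’s own Artec Studio handles alignment and fusion), but as a rough illustration of what “aligned and fused together” means, here is a minimal sketch using the open-source Open3D library. The file names and numeric parameters are hypothetical.

```python
# Illustrative only: align two overlapping scan passes with ICP, merge them,
# and reconstruct a single mesh. Not TNG's actual pipeline.
import numpy as np
import open3d as o3d

# Two overlapping scan passes of the same prop (hypothetical files)
source = o3d.io.read_point_cloud("pass_front.ply")
target = o3d.io.read_point_cloud("pass_back.ply")

# Align: refine the alignment of one pass onto the other with point-to-point ICP
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.01,   # meters; depends on scan scale
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(result.transformation)

# Fuse: merge the aligned passes, thin out duplicate points, reconstruct a mesh
fused = (source + target).voxel_down_sample(voxel_size=0.002)
fused.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(fused, depth=9)
o3d.io.write_triangle_mesh("fused_scan.ply", mesh)
```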
The time it takes to process the data depends on the resolution needed. Once the data has been processed, the 3D modeler can begin their work. Using the images captured during the professional photo shoot, along with the perfect silhouette and scale of the item provided by the 3D scan, the modeler creates a 3D model out of the many components and unwraps the geometry for the 3D texture artist.
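Unwrapping can be done in any DCC package; as one hedged example of how the step might be scripted, here is a short sketch using Blender’s Python API (bpy). The object name is hypothetical, and this is a generic illustration rather than TNG’s actual toolchain.

```python
# Minimal sketch: auto-unwrap the UVs of a fused scan mesh in Blender.
import bpy

obj = bpy.data.objects["scan_model"]           # hypothetical name of the scan mesh
bpy.context.view_layer.objects.active = obj

bpy.ops.object.mode_set(mode='EDIT')           # UV unwrapping works on edit-mode geometry
bpy.ops.mesh.select_all(action='SELECT')       # unwrap every face
bpy.ops.uv.smart_project()                     # auto-generate UV islands and seams
bpy.ops.object.mode_set(mode='OBJECT')
```

In production, seams would usually be placed by hand so the texture artist gets clean, predictable UV islands; an automatic projection like this is only a starting point.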
The texture artist then paints the object (or projects images onto it) and also paints over the seams where the UV coordinates were cut. The completed texture is handed back to the 3D modeler, who brings out further detail through sculpting. As the model is finished off, normal maps and displacement maps are generated to provide multiple ways for the 3D model to be viewed.
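As a small illustration of what generating a normal map from sculpted detail involves, the sketch below derives a tangent-space normal map from a displacement (height) map using NumPy. This is a common technique in general, not necessarily the exact tooling TNG use, and the data here is made up.

```python
# Convert a height/displacement map into an RGB normal map (illustrative sketch).
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """height: 2D float array (H x W). Returns an (H x W x 3) uint8 normal map."""
    # Finite-difference gradients of the height field
    dz_dy, dz_dx = np.gradient(height * strength)
    # The surface normal points against the slope in x and y, with a unit z component
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Remap from [-1, 1] to the usual 0-255 RGB encoding
    return ((normals * 0.5 + 0.5) * 255).astype(np.uint8)

# Example: a random height field stands in for a sculpted displacement map
normal_map = height_to_normal_map(np.random.rand(512, 512), strength=2.0)
```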
Motion capture
“As a 3D scanning company who primarily scans human bodies and human heads, it was only a natural progression for us to branch out into motion capture,” says Tesi. This technology brings static, dormant objects to life. A 3D scan captures the surface of a person’s skin and clothing. “Once it’s been put together, we unwrap the UVs and remesh it to prep it for texturing,” says Tesi. “After this step we may render the model, but the next step is to insert joints (a skeleton) so that there is something to drive the skin of the character into movement. This process is called rigging.”
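Under the hood, a rig is essentially a hierarchy of joints, each storing a transform relative to its parent. The plain-Python sketch below shows that underlying data structure; the joint names and offsets are made up purely for illustration.

```python
# Minimal sketch of a joint hierarchy: world transforms accumulate down the chain.
import numpy as np

class Joint:
    def __init__(self, name, parent=None, local_offset=(0.0, 0.0, 0.0)):
        self.name = name
        self.parent = parent
        self.local = np.eye(4)
        self.local[:3, 3] = local_offset          # translation relative to the parent joint

    def world_matrix(self):
        # World transform = parent's world transform composed with this joint's local transform
        if self.parent is None:
            return self.local
        return self.parent.world_matrix() @ self.local

# A tiny arm chain: shoulder -> elbow -> wrist
shoulder = Joint("shoulder", local_offset=(0.0, 1.5, 0.0))
elbow = Joint("elbow", parent=shoulder, local_offset=(0.3, 0.0, 0.0))
wrist = Joint("wrist", parent=elbow, local_offset=(0.25, 0.0, 0.0))
print(wrist.world_matrix()[:3, 3])                # world-space position of the wrist
```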
Once the skeleton is sitting nicely within the computer generated digital double, a process called weighting takes place. This allows you to work out how much skin each specific joint will drive. Dialed-in weights will visually create lifelike movements when the character is animated. To animate these joints without having to actually grab the joints themselves, a GUI is created, which is connected to the joints via orientation constraints. This allows an animator to more easily and intuitively animate a 3D character. After animation, the video is rendered frame by frame on a network of computers called a render farm.
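The weighting step described above is what drives the skin: each vertex follows a weighted blend of the joints that influence it. The sketch below shows a minimal linear blend skinning calculation, a standard way this is done; the vertex data and weights are illustrative only, not taken from any TNG asset.

```python
# Minimal linear blend skinning sketch: weights decide how much each joint moves each vertex.
import numpy as np

def skin_vertices(rest_vertices, weights, joint_transforms):
    """
    rest_vertices:    (V, 3) vertex positions in the rest pose
    weights:          (V, J) per-vertex influence weights, rows summing to 1
    joint_transforms: (J, 4, 4) matrices mapping the rest pose to the current pose
    returns:          (V, 3) deformed vertex positions
    """
    homogeneous = np.hstack([rest_vertices, np.ones((len(rest_vertices), 1))])  # (V, 4)
    deformed = np.zeros_like(rest_vertices)
    for j, matrix in enumerate(joint_transforms):
        # Each joint's transform contributes in proportion to its weight on the vertex
        deformed += weights[:, j:j+1] * (homogeneous @ matrix.T)[:, :3]
    return deformed

# Two joints, three vertices: vertex 0 follows joint 0, vertex 2 follows joint 1,
# and vertex 1 is split evenly between them (a crude crease at a bend).
verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
transforms = np.stack([np.eye(4), np.eye(4)])
transforms[1, 1, 3] = 0.2                      # raise the second joint slightly on the y axis
print(skin_vertices(verts, w, transforms))
```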
Creating a digital character vs. developing a digital character
You can have the character modeled from scratch, have a maquette of the character created, use a stunt person or animatronics, or 3D scan the person. The fastest and most effective way is to 3D scan.
Fully developing a character, especially if a well-known actor is in the scene, involves a much larger effort: creating facial performance, cyber hair and cloth, and applying character lighting and final scale, all the while making sure it is an exact match to the original actor to keep the transition smooth. For example, if the digital copy you’re working on is the main character and needs to be on screen for a long period of time, a lot of time would be spent on matching the digital copy to the actor.
There are also those characters that will be some distance from the camera or appear for a very short time on the screen. In this case, less time would be spent on perfecting the details and making sure it was an exact match, but it would retain the illusion of being perfect.
In the event a shot is needed and the actor is unavailable, another approach is insurance scanning: scanning everything that may prove necessary and archiving it until the need arises. This ensures that there is always a way to finish the project. Archived assets can also be reused and carried over into interactive media, for example a video game based on a film concept, or vice versa.
What makes a digital double look true to life?
Tesi says there are three components to creating a great 3D character. The first one is making sure the character design itself looks like a real human. The overall structure of the body and the look of the skin all need to look like a real person. From the hair to the cyber clothing, every part needs to flow naturally. The real-life character and the virtual character need to make the viewer wonder which is which – well, it shouldn’t even be a question.
The second component is full body animation, seeing how it moves – whether it’s stiff, jerky, ill-timed, or even too smooth like an astronaut on the moon. In working with live action, the character needs to have fluid movement to match that of a live character.
Last but not least, facial animation must deliver a strong, realistic performance. We all know what faces look like because we see them every day; it’s easy to read a person’s emotion from their face, but in 3D that must be created. If not done correctly, the upper part of the face, in particular dead, flat eyes with a thousand-yard stare and a lower forehead, can give it all away. Eyes need to have the correct proportions and require as much detail as possible, especially in the corners. The same goes for the mouth and ears. Finally, the volume and flare of the nose needs proper nostril placement that is non-symmetrical.
The future of visual effects
As 3D scanning and related technology evolves, it will become nearly impossible to tell which characters are computer generated and which are the real thing, making for a very personal and real experience. We’ll see movies and television, like video games, becoming completely digital – the backgrounds, foregrounds, characters, etc. – providing fully digital entertainment across all media, on a timeframe no one can predict.
“As more visual effects are used through 3D scanning, motion capture and other types of visual effects, the more the world will follow suit,” says Tesi.
Given that the cost and turnaround time of 3D scanning services and equipment keep falling (while the quality of the output stays high), it’s highly likely that filmmakers outside Hollywood and the U.S., including in Asia, Europe and Latin America, will catch up with the trend of using 3D scanning to create movie visual effects. The technology will become more accessible through dropping prices, rental agencies and service bureaus, and as it becomes easier to use without a 3D scanning technician who knows the entire pipeline for producing 3D models.
“In the past there were a lot less visual effects on television compared to the film world, but as time has moved on, television episodics and even commercials started to include visual effects,” says Tesi. “They’re giving a big budget look when they use the most up-to-date technology and will stay relevant and exciting in this market.”
If 3D scan data could be stitched together automatically, in full color, into a high-quality digital asset usable in production, it would be a huge breakthrough. The possibilities for computer generated assets would be endless because delivery time would no longer be such a factor.
Tesi believes that in the near future, movies, commercials, and television programs will be 90-100% digital, which could lead to the viewer being able to create their own ending.
“Before you know it, there will be a world of digital images accessible to the entire industry through a virtual library,” he says. “Shows and projects will be completely computer generated from the characters to the weapons to even the locations. The human element will exist only through motion capture and voiceovers. Story generators exist today, but maybe in a dozen years such programs will develop blockbuster scripts.”
Today, 3D scanning is used as a starting point to create a digital asset, but if these predictions become reality, are you ready for a completely digital world?
Scanners behind the story
Try out the world's leading handheld 3D scanners.