When we hear the word "deepfake", most of us react negatively. We live in an era of misinformation, and nowadays we must be very careful before believing any video or audio material (please refer to my previous article about deepfakes for the basics on this subject). But there are areas of our lives where deepfakes can actually be helpful: education, art, and the movie industry, to name just a few.
I love watching movies and highly value the role they play in our lives and culture. I get deeply invested in a film's story and strive for immersion, and every aspect of being immersed in a movie matters to me. For that very reason, I prefer practical, in-camera effects over CGI (computer-generated imagery): they fit better into the movie and they don't age as fast as CGI does. There are plenty of examples of that, but the most shining star (indeed) is my beloved franchise, Star Wars. Everything works great when it comes to physical objects and props. The original Star Wars trilogy (Episodes IV, V, and VI) is still an iconic masterpiece, and you can hardly say it is getting old. If anything, it has aged more in its narrative than in its visual consistency.
Problems start when we try to recreate a human being, since even an untrained eye can tell the difference between a real actor and a CGI one. The most telling examples are Grand Moff Tarkin and Leia Organa in Rogue One. Peter Cushing had died more than two decades before the movie was shot, and Carrie Fisher could no longer play her 1977 self, so the filmmakers decided to recreate both characters using digital effects. The output was... weird. Leia came out a bit better than Tarkin, but you could still tell it was fake. And that was only about five years ago.
We are already technologically capable of creating a realistic-looking human being using AI and machine learning.
Synthetic Media = deepfake
Before we get further into the article, one note on wording. We mostly use the word "deepfake" with a negative connotation, but there is a term we can use for deepfakes in a more neutral manner: synthetic media.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated processes, especially through artificial intelligence algorithms, in order to recreate existing media or create new media for various purposes.
If you want to dig deeper into the details of synthetic media, I highly recommend visiting the Paperspace Blog. There are tons of materials on this topic.
Let's get back to the movie industry. There is deepfake software, such as DeepFaceLab, that relies on machine learning: given just a set of images, it can recreate a real human face and its expressions. Look at this comparison made by a YouTuber named Shamook. Using such software and sets of pictures of young Carrie Fisher that he found on the internet, he was able to push the scene from Rogue One to a whole different level. The output is unbelievable. The light, the details, the actress's facial expressions: all of it is great. Just take a look at the video below.
The difference between the original CGI (created by Disney/Lucasfilm) and the deepfake is shocking. Everything in this example is better: the skin tone, the overall facial expressions, the proportions, the eyebrows. Look how full of life her face becomes with the synthesized media: the eyes sparkle, and the skin catches light and shadow much better. The contrast of the sharpness, especially around the eyes, could be a bit lower, but that's a minor thing. And now consider that all of this was made with just a bunch of photos found on the internet and an $800 personal computer. It's truly unbelievable what is doable with deepfakes used in the right way, by people who understand the technology. They can truly bring back our beloved icons of the past.
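For the technically curious, the core idea behind face-swap tools such as DeepFaceLab is an autoencoder with a shared encoder and one decoder per face: the encoder learns a common "face latent", and each decoder learns to reconstruct one specific person from it. The sketch below is a toy illustration of that architecture only; all sizes and weights are made up, and a real model is a deep convolutional network trained on thousands of face crops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared-encoder / twin-decoder setup (illustrative, untrained):
# - encode() maps a flattened face crop to a shared latent vector
# - decoder A reconstructs actor A, decoder B reconstructs actor B
# After training, running a frame of A through decoder B would render
# A's expression and lighting on B's face -- that is the "swap".
D_IN, D_LATENT = 64 * 64, 128                    # flattened 64x64 crop

W_enc   = rng.normal(0, 0.01, (D_LATENT, D_IN))  # shared encoder
W_dec_a = rng.normal(0, 0.01, (D_IN, D_LATENT))  # decoder for face A
W_dec_b = rng.normal(0, 0.01, (D_IN, D_LATENT))  # decoder for face B

def encode(x):
    """Project a face crop into the shared latent space."""
    return np.tanh(W_enc @ x)

def swap_a_to_b(frame_a):
    """Encode a frame of actor A, decode it with actor B's decoder."""
    return W_dec_b @ encode(frame_a)

frame = rng.random(D_IN)        # stand-in for a real face crop
out = swap_a_to_b(frame)
print(out.shape)                # same size as the input crop
```

In practice this is why "just a set of pictures" is enough: the shared encoder forces both faces into the same representation, so expressions transfer from one decoder to the other.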
Another, more recent example. As you can imagine, I was thrilled to watch The Mandalorian, created by Jon Favreau and Dave Filoni (there were plenty of other directors on the series, but I think those two are pivotal for the show). Favreau is, in my opinion, a visionary who really understands technology and knows how to apply it for the good of a movie. He proved it in The Jungle Book, where the CGI was already ahead of its time. Filoni, on the other hand, knows the Star Wars franchise probably better than George Lucas himself and is an icon in the community. Those two gentlemen were indeed a perfect match, and they made a truly memorable spinoff set in the Star Wars universe. Favreau used many modern techniques, like VR scene blockout and direction, huge projected environments, and the blending of real props with wall projections. They also reached back to old-school puppeteering: little Grogu is a puppet animated by four puppeteers, since their tests with a CGI Grogu looked lifeless. All of this to achieve an unbelievable immersion of VFX with real actors and to create a believable picture. For the first time, I felt like the VFX blended with reality. I even said to my wife: this is the first time I can't tell the difference (and my roots are in a creative production studio, so CGI is nothing new to me). When it comes to the visuals, everything there is excellent.
In the last scene of the second season of The Mandalorian…
(WARNING: MAJOR SPOILERS AHEAD!)
when Luke shows his face, I must say they did a decent job, but you could instantly feel that something was not right. It's very hard to fool the audience with close-ups of a face, and we could tell that Luke was recreated in CGI. It didn't ruin the moment, since the scene was highly emotional, but of course: how much better would it be to see the real Luke in that scene? Again, the same guy I mentioned above, Shamook, using only an $800 computer (!), two weeks of work, and a set of pictures, improved the quality of the scene. Look at the difference in light and the overall impression of life below.
Those are just two examples of what is doable with deepfakes. If you go to Shamook's profile, you will find many more materials from movies, all using synthetic media/deepfakes. Now let's think ahead: what implications could this have for the film industry?
Implications for the film industry
Licensing an actor's image
We can imagine that, in the future, actors could license their image for use in movies.
Of course, right now it's difficult to make anything more complicated than a simple appearance by the actor, but in the future, using AI and machine learning, we could make a digital imprint of an actor. It sounds futuristic, but I believe it will be possible. There is a limited number of human behaviours, and AI can learn those little facial expressions and body gestures. If not, we could imagine a body double doing the motion capture, with the face of another actor applied by AI (more on that below).
Some people say: if it's not broken, don't fix it. But there are already plenty of examples of broken CGI effects in movies. In fact, there is a group of VFX artists with a YouTube channel dedicated to exactly that. Visit the Corridor Crew channel and see for yourself; they combine VFX with deepfakes to fix bad scenes in movies.
Also, improving films after release is nothing new in the movie industry; it's actually quite a common process. I can imagine that, with time, movies will be improved with deepfakes, making the actors more lifelike and the VFX scenes more consistent with the rest of the film.
Do you want Robert De Niro or John Travolta in your movie? Why not? If you buy a licence, all you need to do is find a double (I'm sure there are already actors specializing in recreating the acting styles of various iconic performers). You get a full-body stand-in, and then, using facial-expression capture and a deepfake, you can bring your beloved iconic actor into the movie. It might sound bizarre, but it's one of the possibilities.
Synthetic media in art galleries
To end this article, I just wanted to highlight one more really interesting area where deepfakes are used: art. If you found this article about the film industry interesting, the next one, about deepfakes and their use in art, is coming soon. Stay tuned!