In the age of deepfakes, could virtual actors put humans out of business?
In film and video games, we’ve already seen what’s possible with ‘digital humans’. Are we on the brink of the world’s first totally virtual acting star?
When you’re watching a modern blockbuster such as The Avengers, it’s hard to escape the feeling that what you’re seeing is almost entirely computer-generated imagery, from the effects to the sets to the fantastical creatures. But if there’s one thing you can rely on to be 100% real, it’s the actors. We might have virtual pop stars like Hatsune Miku, but there has never been a world-famous virtual film star.
Even that link with corporeal reality, though, is no longer absolute. You may have already seen examples of what’s possible: Peter Cushing (or his image) appearing in Rogue One: A Star Wars Story more than 20 years after his death, or Tupac Shakur performing from beyond the grave at Coachella in 2012.
We’ve seen the terrifying potential of deepfakes – manipulated footage that could play a dangerous role in the fake news phenomenon. Jordan Peele’s remarkable fake Obama video is a key example. Could technology soon make professional actors redundant?
Like most of the examples above, the virtual Tupac is a digital human, produced by special effects company Digital Domain. Such technology is becoming increasingly sophisticated. Darren Hendler, the company’s digital human group director, explains that it is in effect a “digital prosthetic” – like a suit that a real human has to wear.
“The most important thing in creating any sort of digital human is getting that performance. Somebody needs to be behind it,” he explains. “There is generally [someone] playing the part of the deceased person. Somebody that’s going to really study their movements, facial tics, their body motions.”
And a performance generated entirely by a machine, with no human behind it? “That’s still pretty far away,” says Hendler. “Artificial intelligence is expanding so rapidly that it’s really hard for us to predict ... I would say that within the next five to ten years, [we’ll see] things that are able to construct semi-plausible versions of a full-facial performance.”
Voice is also a consideration, says Arno Hartholt, director of research and development integration at the University of Southern California’s Institute for Creative Technologies, as it would be very difficult to artificially generate a new performance from clips of a real actor’s speech. “You would have to do it as a library of performances,” he says. “You need a lot of examples, not only of how a voice sounds naturally, but how does it sound while it’s angry? Or maybe it’s a combination of angry and being hurt, or somebody’s out of breath. The cadence of the speech.”
Nor is it as simple as collecting a huge amount of existing performance data because, as Hartholt points out, a range of characters played by the same actor won’t produce a consistent dataset of their speech. The job of the human actor is safe – for now, at least.
Nonetheless, Hendler believes that the advancement of digital and virtual human technology will become more visible to the public, coming much closer to home – literally – within the next 10 years or so: “You may have a big screen on your wall and have your own personal virtual assistant that you can talk to and interact with: a virtual human that’s got a face, and moves around the house with you. To your bed, your fridge, your stove, and to the games that you’re playing.”
“A lot of this might sound pretty damn creepy, but elements of it have been coming for a while now.”