Digital twins: Engineering a path out of the uncanny valley

7 Mar 2023


Huawei’s Thomas Poulet explains his work in enhancing human rendering and the challenges these digital avatars have to overcome.


In recent years, the benefits of creating digital twins – virtual copies of real-world objects – have been discussed in multiple sectors.

Experts have discussed how this technology can shake up networks and manufacturing, while the EU is working to make a digital twin of Earth to predict climate impacts.

The concept also has a place in the metaverse, as creating virtual avatars of people has the potential to transform how people interact digitally.

However, there is still a lot of work to be done before these avatars can be viewed as true digital copies of people, as certain examples showed last year.

Thomas Poulet is the principal graphics engineer at Huawei’s Ireland research centre, leading the company’s efforts to improve human rendering.

Speaking to SiliconRepublic.com, Poulet explained that one of the most interesting parts of his role is balancing the technical aspects of the job with the creative process.

“We are not only engineers answering to hard requirements that can be calculated or estimated, a lot of our work is about understanding vague ideas, chasing intuition, and translating visions,” Poulet said.

Enhancing human rendering

Poulet explained the various challenges of making these digital avatars more realistic. One example he gave was a hand with a light shining behind it.

He said that for this effect to be captured properly, estimates have to be made on how the light travels as it moves under the skin, from the back of the hand towards the front. He added that this is something animated movies can do “by taking hours for a single image”.

“Having less than a few milliseconds for one image in real-time rendering, we have to approximate,” Poulet said. “Finding these good-looking approximations is the challenging bit.”
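The article doesn't say which approximation Huawei uses, but a classic example of the kind of cheap, "good-looking" shortcut Poulet describes is wrap lighting, which fakes light diffusing under the skin by letting illumination bleed past the shadow terminator. A minimal sketch, purely illustrative:

```python
def wrap_diffuse(n_dot_l: float, wrap: float = 0.5) -> float:
    """Cheap 'wrap lighting' approximation of subsurface scattering.

    Standard Lambertian diffuse clamps to zero where the surface
    faces away from the light (n_dot_l <= 0). Wrapping shifts that
    cutoff so light appears to bleed around and under the surface,
    mimicking skin, at the cost of one add and one divide per
    shaded point -- well within a real-time frame budget.
    """
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# A point just past the terminator: standard diffuse is black,
# wrapped diffuse is still softly lit.
print(max(0.0, -0.2))      # Lambertian: no light
print(wrap_diffuse(-0.2))  # wrapped: some light remains
```

An offline renderer would instead simulate the light transport itself, which is why, as Poulet notes, film pipelines can spend hours on a single image while real-time rendering must settle for approximations like this.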

Another example he gave was rendering hair, which is traditionally done using cards or, in simple terms, “long rectangles painstakingly placed by expert artists on the character’s head”.

Poulet explained the benefits that new forms of rendering can achieve, as this traditional method can be both expensive and lacking in quality.

“What we are developing is a system that treats the hair as what they are in real life, hundreds of thousands of little strands,” Poulet said. “That way we can offer familiar tools to artists (digital combs and scissors), and improve the quality of the rendering with very fine details down to individual strands and accurate physics.”
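To make the card-versus-strand distinction concrete: where card-based hair is a handful of textured rectangles, a strand-based system keeps one record per fibre, which is what lets a “digital comb” act on individual hairs. A toy sketch of that idea (the names and structure here are illustrative assumptions, not Huawei’s system):

```python
from dataclasses import dataclass


@dataclass
class Strand:
    # Control points along one hair fibre, ordered root to tip.
    points: list


def comb(strand: Strand, offset: tuple) -> Strand:
    """Toy 'digital comb': displace every point except the root,
    which stays anchored to the scalp."""
    root, *rest = strand.points
    moved = [tuple(p + o for p, o in zip(pt, offset)) for pt in rest]
    return Strand([root] + moved)


# One two-point strand, combed sideways: the root stays put.
s = comb(Strand([(0, 0, 0), (0, 1, 0)]), (1, 0, 0))
print(s.points)
```

Because every fibre is addressable, the same representation also supports per-strand physics, which is the quality gain Poulet points to over hand-placed cards.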

While working on these specific details, Poulet said his team’s work is also contributing to a Linux Foundation project called O3DE, or the Open 3D Engine.

This is an open-source project to let developers create their own 3D worlds, which Poulet believes has various applications.

“O3DE could revolutionise the market by catering to the needs not only of games but also of the industry, movies, enthusiasts, immersive trainings, and many others,” Poulet said. “Our role is to help expand that ecosystem’s capabilities and offer more options for realistic human rendering.”

The future of digital twins

When discussing the rise of 3D avatars, Poulet said that the metaverse has started making its way into people’s lives “whether we like it or not”, with games like Roblox and Fortnite being a “lifeline” for children worldwide during the Covid-19 pandemic.

Many tech giants have also been looking at 3D avatars as a means to enhance remote working and keep teams together digitally. Last year, Engage XR released its enterprise-focused metaverse to let companies engage digitally with their clients and suppliers.

However, Poulet said these avatars still have a long way to go before they can be viewed as identical copies. He said the technology is currently at the “uncanny valley” point, where these virtual models look “human but deeply unsettling at the same time”.

“A decade ago we were nowhere near that valley; our characters were polygonal and unrealistic,” Poulet said. “So, the challenge for us in the coming years will be to climb back out of that valley, by capturing and studying all the micro details that we unconsciously expect to find in a human face.

“Many applications will benefit from it, but ones such as real-time sign language translation of any content using a virtual avatar are great reminders and motivational boosters to take on the next decade of challenges.”


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com