TCD professor looks to a more realistic virtual world

2 Dec 2016

Prof Carol O’Sullivan. Image: Trinity College Dublin

Prof Carol O’Sullivan is making simulated crowds and characters more realistic, with applications in gaming, movies and even health. Claire O’Connell reports.

Humans do things every day that we don’t even notice. We move our hands as we talk, we walk a particular way if we are in a crowd and we use facial expressions to capture someone’s attention or build their trust. Such human actions give texture to how we perceive a scene, and if they are ‘wrong’, then it jars.  

So for computer games and simulations, how can we translate that rich ‘human-ness’ into animated or virtual characters, and help viewers avoid that gnawing feeling that something is not quite right?

Those are central questions for Prof Carol O’Sullivan, who has carried out research at Trinity College Dublin (TCD), at Seoul National University in South Korea, and with Disney Research in California to figure out how to make virtual crowds, avatars and scenes more believable to us real humans.

Capturing motion

A graduate in mathematics from TCD, O’Sullivan did her PhD with Dr Steven Collins, who went on to co-found Havok, a hugely successful middleware company whose physics engine helps to make special effects in computer games and animations more realistic.

Working with Collins sparked O’Sullivan’s interest in how our brains perceive animated environments, and particularly, how to make crowd scenes more realistic. “My initial foray into crowds was in realistic games and environments,” recalled O’Sullivan at the recent Watch! Video Everywhere event hosted by the Adapt Centre, where she is a principal investigator.

O’Sullivan, now a professor of visual computing at TCD, and her colleagues use motion-capture equipment to gather information on the subtleties of how humans move. Participants don Lycra suits dotted with ‘markers’ and are videoed as they talk in groups, gesture, walk and run.

The resulting footage is raw material for computer analysis that picks up on both gross and fine movements, and that analysis informs how characters move on screen. “We use motion capture to try and make the humans in our crowds and animations as realistic as possible,” she explained.
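To give a flavour of what that analysis involves, here is a minimal sketch of one standard motion-capture calculation: recovering a joint angle from three tracked markers. The marker names, positions and the simple angle-from-vectors approach are invented for illustration; this is the general technique, not the TCD lab’s actual pipeline.

    # Minimal sketch: estimating an elbow angle from three motion-capture
    # markers (shoulder, elbow, wrist). Marker names and positions are
    # hypothetical; real pipelines track full skeletons over many frames.
    import numpy as np

    def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
        """Angle at joint b (in degrees) formed by the markers a-b-c."""
        u = a - b
        v = c - b
        cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    # One frame of (hypothetical) marker positions, in metres.
    shoulder = np.array([0.0, 1.4, 0.0])
    elbow = np.array([0.3, 1.1, 0.0])
    wrist = np.array([0.5, 1.3, 0.1])

    print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")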

‘I’m interested in the physics of objects, and how real people interact with virtual objects in the real world’
– PROF CAROL O’SULLIVAN

Crowds and the city

Cities are home to concentrations of humans, and what better setting in which to study crowds? One of O’Sullivan’s long-term projects is Metropolis, a virtual model of Dublin city centre, including the TCD campus.

By putting crowds into this virtual Dublin, the researchers test all sorts of aspects that make such scenes more credible to us, including motion, background sound, the ‘personalities’ of individuals, the physics of how people move and talk to each other, and even the lighting of the scenes.

“In real crowds, there is a lot of very complex and subtle behaviour,” said O’Sullivan. “You don’t want zombie-like characters talking, not talking or looking at each other and not expressing emotion – real crowds exhibit different behaviours and we want to get that without the need for a huge amount of computational time.” 
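One cheap way to get that kind of variety, sketched below with invented parameters rather than as the Metropolis system itself, is to give every simulated agent its own ‘personality’ values, such as a preferred walking speed and a personal-space radius, and let a single simple steering rule read them.

    # Minimal sketch of per-agent variety in a crowd: each agent gets
    # 'personality' parameters (preferred speed, personal space) that
    # modulate one simple steering rule. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(42)
    N = 50
    pos = rng.uniform(0, 100, (N, 2))         # positions in a 100 m square
    goal = rng.uniform(0, 100, (N, 2))        # each agent's destination
    pref_speed = rng.normal(1.3, 0.2, N)      # m/s, varies per 'personality'
    personal_space = rng.normal(1.0, 0.3, N)  # m, how close others may get

    def step(dt: float = 0.1) -> None:
        for i in range(N):
            # Head toward the goal at the agent's preferred speed...
            to_goal = goal[i] - pos[i]
            desired = to_goal / (np.linalg.norm(to_goal) + 1e-9) * pref_speed[i]
            # ...and push away from neighbours inside the personal-space radius.
            offsets = pos[i] - pos
            dists = np.linalg.norm(offsets, axis=1)
            near = (dists > 0) & (dists < personal_space[i])
            if near.any():
                desired += (offsets[near] / dists[near, None] ** 2).sum(axis=0)
            pos[i] = pos[i] + desired * dt

    for _ in range(100):  # one hundred 0.1 s simulation steps
        step()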

Being able to simulate crowds has other advantages too, noted O’Sullivan. For example, her work on crowd simulation has fed into Disney Research analysis of how visitors move through theme parks. “It is important to know what trajectories people take and where the bottlenecks are, because you need to ensure good flow,” she said. “So we developed computer vision techniques to analyse the flows in more detail.”
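As an illustration of the general idea rather than Disney Research’s actual method, a first pass at that kind of flow analysis can be as simple as binning tracked trajectories into a coarse grid and flagging the busiest cells as candidate bottlenecks. The trajectories and grid size below are made up.

    # Minimal sketch of flow analysis from tracked visitor trajectories:
    # bin each trajectory into a coarse grid over the park map and report
    # the cells crossed by the most people. All data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(7)
    # 200 hypothetical trajectories, each 50 (x, y) points in a 100 m square.
    trajectories = rng.uniform(0, 100, (200, 50, 2))

    GRID = 10  # divide the map into 10 x 10 cells of 10 m each
    counts = np.zeros((GRID, GRID), dtype=int)
    cells = np.clip((trajectories / (100 / GRID)).astype(int), 0, GRID - 1)
    for traj in cells:
        # Count each cell once per trajectory so loiterers don't dominate.
        for cx, cy in {(int(x), int(y)) for x, y in traj}:
            counts[cy, cx] += 1

    # The three busiest cells are the candidate bottlenecks.
    for idx in np.argsort(counts, axis=None)[::-1][:3]:
        cy, cx = np.unravel_index(idx, counts.shape)
        print(f"cell ({cx}, {cy}) crossed by {counts[cy, cx]} trajectories")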

Believable hands and faces

A big challenge for individual simulated characters is getting us real humans to ‘believe’ them. There is a place for Max Headroom-style fiction, but if you are getting instructions from an avatar, playing a game or watching an animated film, then credibility can be key to keeping you engaged. So, O’Sullivan and her colleague Rachel McDonnell are looking at how to efficiently render believable hand movements and facial animation.

“We look at what makes human emotions and gestures appealing or recognisable,” said O’Sullivan, who also coordinated a European project called Verve, working with clinicians and neuroscientists to develop virtual reality for health applications.

“We wanted to use virtual environments and characters to help older people and people with neurological conditions address fears of crowds and a lack of confidence due to memory lapses,” she said. “So we developed these crowded virtual environments and we made games to help people build their confidence.”   

Vision of the future

One of O’Sullivan’s latest projects is on how to clue in ‘social robots’ about our intentions. “I’m looking at interactions with ‘friendly’ robots, and how to make them more aware of human intentions,” she explained. “Path-planning algorithms for robots often assume co-operation [with the human] but our work is trying to predict whether someone has the intention to get out of the robot’s way or to block the robot.”
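A minimal sketch of that prediction problem, using invented features and thresholds rather than O’Sullivan’s actual algorithm, might watch a short track of a person’s positions and ask whether they are drifting away from the robot’s line of travel (yielding) or holding it (blocking).

    # Minimal sketch of intention prediction for a 'friendly' robot:
    # classify a tracked person as yielding or blocking from how quickly
    # they move away from the robot's straight-line path. The feature
    # (lateral drift) and the threshold are invented for illustration.
    import numpy as np

    def lateral_offsets(track: np.ndarray, path_start: np.ndarray,
                        path_dir: np.ndarray) -> np.ndarray:
        """Distance of each tracked position from the robot's line of travel."""
        rel = track - path_start
        # The 2D cross product with a unit direction gives the lateral part.
        return np.abs(rel[:, 0] * path_dir[1] - rel[:, 1] * path_dir[0])

    def predict_intention(track, path_start, path_dir,
                          yield_rate: float = 0.05) -> str:
        offs = lateral_offsets(track, path_start, path_dir)
        # Fit a line to the offsets over time; a rising slope means the
        # person is moving out of the robot's way.
        drift = np.polyfit(np.arange(len(offs)), offs, 1)[0]  # metres/frame
        return "yielding" if drift > yield_rate else "blocking"

    robot_start = np.array([0.0, 0.0])
    robot_dir = np.array([1.0, 0.0])  # unit vector along a corridor
    # Hypothetical 1 s track at 10 Hz: the person steps sideways off the path.
    person = np.column_stack([np.linspace(5.0, 4.0, 10),
                              np.linspace(0.1, 1.2, 10)])
    print(predict_intention(person, robot_start, robot_dir))  # -> yielding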

O’Sullivan also envisages plenty more for her research on human and computer perception in the fields of augmented and virtual reality, whether we are looking through glasses or engaging with projected environments. “I’m interested in the physics of objects, and how real people interact with virtual objects in the real world.”

Dr Claire O’Connell is a scientist-turned-writer with a PhD in cell biology and a master’s in science communication

editorial@siliconrepublic.com