New software analyses words, gestures to detect lies


WASHINGTON: Researchers are developing unique lie-detecting software that considers both the speaker’s words and gestures and, unlike a polygraph, does not need to touch the subject in order to work.

By studying videos from high-stakes court cases, researchers at the University of Michigan (UM) are building the lie-detecting software based on real-world data.

It was up to 75 per cent accurate in identifying who was being deceptive (as defined by trial outcomes), compared with humans’ scores of just above 50 per cent, researchers said.

The researchers found that lying individuals moved their hands more. They tried to sound more certain. They also looked their questioners in the eye a bit more often than those presumed to be telling the truth, among other behaviours.

To develop the software, the team used machine-learning techniques to train it on a set of 120 video clips from media coverage of actual trials.

“In laboratory experiments, it’s difficult to create a setting that motivates people to truly lie,” said Rada Mihalcea, professor at UM, who leads the project with Mihai Burzo, assistant professor at UM-Flint.

“We can offer a reward if people can lie well – pay them to convince another person that something false is true. But in the real world there is true motivation to deceive,” Mihalcea said.

The videos include testimony from both defendants and witnesses. In half of the clips, the subject is deemed to be lying. To determine who was telling the truth, the researchers compared their testimony with trial verdicts.

The researchers transcribed the audio, including vocal fill such as “um,” “ah” and “uh.” They then analysed how often subjects used various words or categories of words.
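In outline, that word-frequency step might look like the following sketch. This is not the UM team’s code, and the word categories shown are illustrative assumptions; the article does not list the actual ones.

```python
# Sketch: count how often words from each category appear in a transcript,
# normalised by transcript length. Categories here are hypothetical.
from collections import Counter

CATEGORIES = {
    "filler":    {"um", "ah", "uh"},                     # vocal fill
    "certainty": {"definitely", "absolutely", "never"},  # sounding certain
    "self":      {"i", "me", "my"},                      # self-reference
}

def category_frequencies(transcript: str) -> dict:
    """Return per-category word counts divided by total word count."""
    words = transcript.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return {
        cat: sum(counts[w] for w in vocab) / total
        for cat, vocab in CATEGORIES.items()
    }

freqs = category_frequencies("um i definitely never took it uh i swear")
```

Each clip’s transcript then yields one row of numeric features, one column per category.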

They also counted the gestures in the videos using a standard coding scheme for interpersonal interactions that scores nine different motions of the head, eyes, brow, mouth and hands.
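The gesture-coding step can be sketched similarly, assuming each clip comes annotated with labels from a nine-motion scheme covering head, eyes, brow, mouth and hands. The labels below are placeholders, not the actual scheme’s vocabulary.

```python
# Sketch: turn a clip's gesture annotations into a fixed-order count vector.
# The nine motion labels are illustrative stand-ins for the coding scheme.
from collections import Counter

MOTIONS = [
    "head_nod", "head_shake", "gaze_at_questioner", "gaze_away",
    "brow_raise", "brow_furrow", "mouth_grimace", "one_hand", "both_hands",
]

def gesture_counts(annotations: list) -> list:
    """Count how often each coded motion occurs in one clip."""
    counts = Counter(annotations)
    return [counts[m] for m in MOTIONS]

vec = gesture_counts(["head_nod", "both_hands", "both_hands", "mouth_grimace"])
```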

The researchers fed the data into their system and let it sort the videos. When it used input from both the speaker’s words and gestures, it was 75 per cent accurate in identifying who was lying.
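The article does not name the learning algorithm used, so as a stand-in, the sorting step can be illustrated with a simple nearest-centroid classifier over combined word-and-gesture feature vectors. The feature values below are toy numbers, not data from the study.

```python
# Sketch: a nearest-centroid classifier as a stand-in for the unnamed
# machine-learning method. Each clip is one vector of combined features,
# e.g. [filler rate, eye-contact rate, two-hand gesture count].

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(truthful_clips, deceptive_clips):
    """Model = one centroid per class."""
    return centroid(truthful_clips), centroid(deceptive_clips)

def predict(model, clip):
    """Label a clip by its nearer class centroid."""
    t_cen, d_cen = model
    return "deceptive" if distance(clip, d_cen) < distance(clip, t_cen) else "truthful"

# Toy training data: [filler rate, eye-contact rate, two-hand gestures]
truthful_clips  = [[0.01, 0.60, 1], [0.02, 0.55, 0]]
deceptive_clips = [[0.05, 0.72, 3], [0.06, 0.68, 2]]
model = train(truthful_clips, deceptive_clips)
```

Accuracy would then be measured by comparing such predictions against the trial-outcome labels, as the researchers did.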

In the clips of people lying, the researchers found some common behaviours, such as scowling or grimacing of the whole face. This was seen in 30 per cent of lying videos vs 10 per cent of truthful ones.

Other common behaviours included looking directly at the questioner (in 70 per cent of deceptive clips vs 60 per cent of truthful) and gesturing with both hands (in 40 per cent of lying clips vs 25 per cent of the truthful).

Speaking with more vocal fill such as “um” was more commonly seen during deception.

“We are integrating physiological parameters such as heart rate, respiration rate and body temperature fluctuations, all gathered with non-invasive thermal imaging,” Burzo said. (AGENCIES)