An AI deception detector at airports may confuse your confusion with suspicion

The US Transportation Security Administration has funded research into whether micro-expressions are useful indicators of deception

04 August 2019 – 07:36 Camilla Hodgson

San Francisco — A group of researchers are quietly commercialising an artificial intelligence (AI)-driven lie detector, which they hope will be the future of airport security.

Discern Science International is the start-up behind a deception detection tool named the Avatar, which features a virtual border guard that asks travellers questions.

The machine, which has been tested by border services and in airports, is designed to make screening at border security more efficient, and to weed out people with dangerous or illegal intentions more accurately than human guards can. But its development also raises questions about whether an algorithm can accurately measure a person’s propensity to lie.

The Avatar — whose “face” appears on a screen — asks travellers a series of pre-configured questions and decides whether they are lying. It films a person’s responses, analysing information including their facial expressions, tone of voice and verbal responses, and looks for “deception signals” such as involuntary “micro-expressions” that could be triggered by the cognitive stress of trying to deceive.

The Avatar then makes a decision about an interviewee’s truthfulness, categorising them as green, yellow or red. In a border control scenario, those classified as green would proceed without further checks, and everyone else would be questioned by a human guard.
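Discern has not published how the Avatar combines its signals, but the triage described above amounts, conceptually, to mapping a fused risk score onto three bands. Below is a minimal hypothetical sketch in Python; the signal names, the averaging step and the thresholds are all invented for illustration, not taken from Discern.

```python
# Hypothetical illustration of the green/yellow/red triage step.
# Signal names, weighting and thresholds are invented; Discern has not
# disclosed how the Avatar actually fuses its inputs.

def triage(signal_scores: dict[str, float]) -> str:
    """Map per-channel deception scores (0 = no signal, 1 = strong signal)
    to a traffic-light category."""
    # Assumed fusion rule: a plain average over channels such as facial
    # micro-expressions, vocal stress and verbal-content analysis.
    risk = sum(signal_scores.values()) / len(signal_scores)
    if risk < 0.3:       # invented threshold
        return "green"   # proceed without further checks
    if risk < 0.6:       # invented threshold
        return "yellow"  # referred to a human guard
    return "red"         # referred to a human guard

print(triage({"micro_expressions": 0.1, "voice": 0.2, "verbal": 0.1}))
# -> green
```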

The Avatar — the name stands for “automated virtual agent for truth assessments in real-time” — is based on federally funded academic research and could be on the market in six months, the company says.

‘There is no Pinocchio’s nose’

Discern was founded last year by scientists from the University of Arizona as a way to monetise the technology they had developed in an academic setting over the past decade. In October 2018, the company entered into a joint-venture agreement with a partner in the aviation industry — which Discern describes as a well-established company it cannot yet name — to sell the “credibility assessment” tool to airports.

The partner is currently testing prototypes, and will begin marketing the machines in the coming months, says Discern. Final tweaks are being made to the wording of questions, some of which have prompted unexpected responses.

The Avatar’s development comes amid increasing interest from the security and aviation sectors in behavioural analysis. At a conference in May, representatives from the US Transportation Security Administration (TSA), London’s Gatwick airport and Israel’s Airport Authority discussed the implementation of behavioural technologies to enhance passenger safety and reduce risk.

However, doubts have been raised about the reliability of AI lie detectors, and whether what they measure is deception — which encompasses intent — or something else. Last week, a similar AI-driven lie detector, which has been piloted in the EU, failed a test by a journalist for The Intercept when it judged a quarter of their entirely honest answers to be lies.

One challenge is false positives: a machine might flag a micro-expression as suspicious when the person is merely in pain or confused. Similarly, a traditional polygraph test might register that someone is lying when they are in fact stressed or straining to remember something. In the US, polygraph results are not generally admissible in court.

“People lie on a continuum,” says Mircea Zloteanu, a psychology lecturer at Teesside University. “There is no Pinocchio’s nose, no one thing that we know reliably predicts lying all the time.” An algorithm cannot measure or understand intent, or the rationale behind someone’s behaviour, he adds.

But David Mackstaller, Discern’s chief strategy officer, says the amalgamation of so many different factors, which encompass deliberate and unconscious behaviour, makes the tool powerful. “You as a human may be able to manage some of your signals but you can’t manage all of them, and that’s why the algorithm works.” 

Discern claims an accuracy rate of 80% to 85% for the Avatar, which it says “far exceeds the average accuracy by humans of 54%”.
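Even taken at face value, an accuracy figure in that range sits uneasily with the base rates at a busy airport, where genuine deceivers are rare. A back-of-envelope check in Python, assuming (hypothetically) that the 85% figure holds both for catching deceivers and for clearing honest travellers, and that one traveller in 1,000 is actually deceptive:

```python
# Back-of-envelope base-rate check. All numbers below are assumptions
# for illustration, not figures disclosed by Discern.
sensitivity = 0.85   # assumed: share of deceivers correctly flagged
specificity = 0.85   # assumed: share of honest travellers correctly cleared
prevalence = 0.001   # assumed: 1 in 1,000 travellers is actually deceptive

travellers = 100_000
deceivers = travellers * prevalence            # 100
honest = travellers - deceivers                # 99,900

true_positives = deceivers * sensitivity       # 85 deceivers flagged
false_positives = honest * (1 - specificity)   # 14,985 honest travellers flagged

precision = true_positives / (true_positives + false_positives)
print(f"Flagged travellers who are actually deceptive: {precision:.1%}")
# -> about 0.6%
```

At those assumed rates, more than 99% of the people flagged would be honest travellers, which is precisely the false-positive problem the critics describe.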

The rise of emotion recognition

Airport security checks must meet international standards, the exact details of which are not made public. Discern says it is confident the Avatar can meet these requirements, though it adds that responsibility for implementing the technology in a compliant way rests with its customers.

In the past few years, the machine has been tested by the Canada Border Services Agency (CBSA), by an airport in Bucharest, Romania, and at a US border port in Nogales, Arizona.

However, CBSA only tested the machine in a laboratory setting and has no plans to go beyond this. It says there were “a number of significant limitations to the experiment in question and to the technology as a whole which led us to conclude that it was not a priority for field testing”.

Immigration authorities in Nogales said they have no plans to pilot the system. But other sectors have already begun using emotion recognition as part of their security. Emotion analytics company Neurodata Lab says its behavioural analysis services are used for fraud detection by banks and call centres.

There is also US federal interest: the TSA has funded research into whether micro-expressions are useful indicators of deception, and the US departments of defence and homeland security helped fund the research that led to the Avatar’s creation.

While most experts do not write off the project, they recommend proceeding with caution: the technology remains in its infancy.

“I’d be very worried if any of these things became the be-all and end-all of decision-making,” says Brian Wood, project lead at The SilverLogic, which is working with emotion recognition for a digital marketing platform.

As emotion-detection and behavioural-analysis technology improves, it may become more reliable as a tool for lie detection, says Neurodata Lab. But it should be restricted to “a very specific range of applications. We cannot talk about some universal technology that, like a magic wand, would recognise a lie in all its manifestations”.

© The Financial Times Limited 2019

Source: https://www.businesslive.co.za/ft/world/2019-08-04-an-ai-deception-detector-at-airports-may-confuse-your-confusion-with-suspicion/
