AI lie detector developed for airport security

Virtual border guard that asks travellers questions and assesses their answers trialled in airports


Jay Nunamaker, director of the University of Arizona’s Center for the Management of Information, who led the development of the deception detection tool the AVATAR. The machine features a virtual border guard that asks travellers questions © Handout


A group of researchers is quietly commercialising an artificial intelligence-driven lie detector, which they hope will be the future of airport security.

Discern Science International is the start-up behind a deception detection tool named the Avatar, which features a virtual border guard that asks travellers questions.

The machine, which has been tested by border services and in airports, is designed to make the screening process at border security more efficient, and to weed out people with dangerous or illegal intentions more accurately than human guards are able to do. But its development also raises questions about whether a person’s propensity to lie can be accurately measured by an algorithm.

The Avatar — whose “face” appears on a screen — asks travellers a series of pre-configured questions and decides whether they are lying. It films a person’s responses, analysing information including their facial expressions, tone of voice and verbal responses, and looks for “deception signals” such as involuntary “microexpressions” that could be triggered by the cognitive stress of trying to deceive. The Avatar then makes a decision about an interviewee’s truthfulness, categorising them as green, yellow or red. In a border control scenario, those classified as green would proceed without further checks, and everyone else would be questioned by a human guard.

The Avatar — it stands for Automated Virtual Agent for Truth Assessments in Real-time — is based on federally funded academic research and could be on the market in six months, the company said.

‘There is no Pinocchio’s nose’

Discern was founded last year by scientists from the University of Arizona as a way to monetise the technology they had developed in an academic setting over the past decade.
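The triage described above — fusing several behavioural signals into a single judgment, then binning it into green, yellow or red — can be sketched in code. This is purely illustrative: the feature names, weights and thresholds below are invented for the example and do not reflect Discern’s actual algorithm, which has not been made public.

```python
# Hypothetical sketch of multimodal "deception signal" fusion.
# Each channel produces a score in [0, 1]; a weighted average is
# thresholded into the green/yellow/red categories the article describes.
# All names, weights and cut-offs here are illustrative assumptions.

def triage(signals: dict[str, float],
           weights: dict[str, float],
           yellow_at: float = 0.4,
           red_at: float = 0.7) -> str:
    """Fuse per-channel scores into a single triage category."""
    total_weight = sum(weights.values())
    risk = sum(signals[k] * w for k, w in weights.items()) / total_weight
    if risk >= red_at:
        return "red"      # refer directly to a human guard
    if risk >= yellow_at:
        return "yellow"   # flag for secondary questioning
    return "green"        # proceed without further checks

weights = {"microexpression": 0.5, "voice": 0.3, "verbal": 0.2}
print(triage({"microexpression": 0.1, "voice": 0.2, "verbal": 0.1}, weights))
# -> green
```

The interesting design point, and the crux of the critics’ objection, is that any such fusion still only measures correlates of stress, not intent: the thresholds decide how many confused or anxious travellers end up in the yellow and red bins.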
Last October, the company entered into a joint venture agreement with a partner in the aviation industry — which Discern said was a well-established company that it could not yet name — to sell the “credibility assessment” tool to airports. The partner is currently testing prototypes, and will begin marketing the machines in the coming months, said Discern. Final tweaks are being made to the wording of questions, some of which have prompted unexpected responses.


The Avatar’s development comes amid increasing interest from the security and aviation sectors in behavioural analysis. At a conference in May, representatives from the US Transportation Security Administration, London’s Gatwick airport and Israel’s Airport Authority discussed the implementation of behavioural technologies to enhance passenger safety and reduce risk.

However, doubts have been raised about the reliability of AI lie detectors, and whether what they measure is deception — which encompasses intent — or something else. Last week, a similar AI-driven lie detector, which has been piloted in the European Union, failed a test by a journalist for The Intercept when it judged a quarter of their entirely honest answers to be lies.

One challenge is false positives: a machine might register a microexpression as suspicious when someone is in pain or confused. Similarly, a traditional polygraph test might register that someone is lying when they are stressed or trying hard to remember something. In the US, polygraphs are not generally admissible in court.

“People lie on a continuum,” said Mircea Zloteanu, a psychology lecturer at Teesside University. “There is no Pinocchio’s nose, no one thing that we know reliably predicts lying all the time.” An algorithm cannot measure or understand intent or the rationale for someone’s behaviour, he added.

The Avatar dashboard on which an interviewee’s responses are recorded. The machine makes a decision about the subject’s truthfulness, categorising them as green, yellow or red © Handout


But David Mackstaller, Discern’s chief strategy officer, said the amalgamation of so many different factors, which encompass deliberate and unconscious behaviour, made the tool powerful.

“You as a human may be able to manage some of your signals but you can’t manage all of them, and that’s why the algorithm works,” said Mr Mackstaller. The Avatar’s accuracy rate is 80-85 per cent, which “far exceeds the average accuracy by humans of 54 per cent”, Discern claimed.

The rise of emotion recognition

Airport security checks must meet international standards, the exact details of which are not made public. Discern said it was confident the Avatar could meet these requirements, though it said it was the responsibility of its customers to implement the technology in a compliant way.

In the past few years, the machine has been tested by the Canada Border Services Agency, by an airport in Bucharest, Romania, and at a US border port in Nogales, Arizona.

However, the CBSA only tested the machine in a laboratory setting and has no plans to go beyond this. It said there were “a number of significant limitations to the experiment in question and to the technology as a whole which led us to conclude that it was not a priority for field testing”.

Immigration authorities in Nogales said they had no plans to pilot the system. But other sectors have already begun using emotion recognition as part of their security. Emotion analytics company Neurodata Lab said its behavioural analysis services were used for fraud detection by banks and call centres. There is also US federal interest: the TSA has funded research into whether microexpressions are useful indicators of deception, and the Departments of Defense and Homeland Security helped fund the research that led to the Avatar’s creation.

While most experts do not write off the project, they recommend proceeding with caution with a technology that remains in its infancy.
“I’d be very worried if any of these things became the be-all and end-all of decision-making,” said Brian Wood, project lead at The Silver Logic, which is working with emotion recognition for a digital marketing platform.

As emotion detection and behavioural analysis technology improves, it may become more reliable as a tool for lie detection, said Neurodata Lab. But it should be restricted to “a very specific range of applications. We cannot talk about some universal technology that, like a magic wand, would recognise a lie in all its manifestations”.

Source: https://www.ft.com/content/c9997e24-b211-11e9-bec9-fdcab53d6959
