Forensic interviews are intended to gather evidence from minors who may have information about a crime under investigation. But many children who have been traumatized—often by those in a position of (supposed) care and authority—are unable to express, or explain, what has happened to them. Even highly trained professionals are at times ill-equipped to decode what a child is really saying. As a result, cases fall apart and justice goes unserved.
Dr. Shri Narayanan, founder of USC’s Signal Analysis and Interpretation Laboratory (SAIL), wants artificial intelligence to step in. Ideally, AI would work alongside professionals as an extra “brain” to identify patterns in intonation, speech, and responses that could uncover what the child is unable to say. In a recent interview, Dr. Narayanan explained how that might be possible. Here are edited and condensed excerpts of our conversation.
Dr. Narayanan, while preparing to interview you, I happened to see The Children Act, starring Emma Thompson as a (UK) Family Court judge who must decide whether to overrule a minor's parents. Due to their religious beliefs, they refuse to let their son have a blood transfusion, but without it he will die of leukemia. The judge decides to go to the child's hospital bed and interview him. No spoilers here, but an AI in her laptop case would have been useful, as the minor in question gave off a highly charged and painful set of mixed verbal signals.
That’s such a good example. Yes, our research has found that AI can provide valuable insight into a child’s mental state by analyzing signals that a person unfamiliar with the child might well miss in a high-stakes situation such as a forensic interview.
Explain how the AI does this.
At the USC Signal Analysis and Interpretation Laboratory (SAIL), we conduct fundamental and applied research, using engineering methods and tools to understand the human condition and to create technologies, with direct societal relevance, that support and enhance human experiences. In this legal realm, the AI analyzes linguistic patterns and decodes the nuanced details of the dialogue, working alongside the professionals on the case. It identifies the level of detail and precision in the information a child provides, as well as the affect carried in word choice and vocal intonation. Right now, all of this is done in a very subjective way by humans.
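To make "vocal intonation" concrete: one of the most basic intonation features is pitch (fundamental frequency), which speech-analysis systems track over time. The sketch below is a minimal, self-contained illustration of pitch estimation by autocorrelation on a synthetic tone; it is not SAIL's actual pipeline (which is not described in detail here), and the function name and parameters are the author's own for illustration.

```python
# Minimal sketch of pitch (intonation) estimation via autocorrelation.
# Illustrative only -- real systems use robust trackers (e.g., pYIN) on
# framed, windowed speech, not a single synthetic tone.
import math

def estimate_pitch(samples, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of a voiced segment.

    Searches lags corresponding to the typical range of human speech
    pitch (fmin..fmax) and picks the lag with the highest
    autocorrelation.
    """
    min_lag = int(sample_rate / fmax)
    max_lag = int(sample_rate / fmin)
    best_lag, best_corr = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Stand-in for a voiced speech frame: a 220 Hz tone sampled at 8 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr // 4)]
print(estimate_pitch(tone, sr))  # an estimate close to 220 Hz
```

Tracking how this value rises and falls across a child's answers is one (highly simplified) example of the kind of vocal cue an interviewing-support tool could quantify.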
This isn’t automation but augmentation? You collaborated with Professor Lyon at USC’s Gould Child Interviewing Lab on this work, right?
Exactly. My colleague Dr. Lyon is an expert in this area, working with attorneys who interact with victims of maltreatment and abuse. Our AI is designed to complement human intelligence, because when we hear something, it is filtered through our own subjectivity and mental models. The AI serves as an objective training tool: it provides extra insight, gives guidance on how to pace questions, picks up on cognitive and affective aspects from behavioral cues, and suggests potential hypotheses. We propose that our AI be used to train interviewers to build better skills in these situations, developing more open questioning methods and, in effect, bringing reproducible analytics to what is very much a subjective realm today.
– S.C. Stuart, PCMag Australia