The Seattle MacArthur Fellow Who Teaches Computers Common Sense

COMET, an experimental text-based artificial intelligence web application, generates common-sense assumptions from simple statements; here it has been asked to consider the context behind the statement “[Person] wins a MacArthur Award.” Dr. Yejin Choi nods knowingly as she exits the app on her split Zoom screen. She’s demonstrating the program, whose name stands for COMmonsEnse Transformers, for Crosscut on Wednesday, October 19, a week after the John D. and Catherine T. MacArthur Foundation announced her as one of 25 new MacArthur Fellows.

Choi, a professor at the University of Washington’s Paul G. Allen School of Computer Science & Engineering, received the title and an $800,000 “genius grant” for her groundbreaking work in natural language processing, the subfield of artificial intelligence that explores machines’ ability to understand and respond to human language.

Natural language processing research concerns us all, whether or not we interact directly with artificial intelligence. Every time we ask a smart device like Siri or Alexa to remind us to buy milk, clumsily type an early-morning text while relying on autocorrect, or let Google autocomplete our search queries, we are asking artificial intelligence programs to analyze our voices and keystrokes and correctly interpret what we mean. And increasingly, this technology is critical to global business strategy, involved in everything from supply chain management to healthcare.

But computers always take our requests at face value, without understanding the “why” behind them. The systems behind AI assistants have no inherent understanding of ethics or social norms, slang or context.

“Human language, regardless of country or language, is fascinatingly ambiguous,” Choi said. “When people say, ‘Can you pass me the salt?’ I’m not asking you if you can do it, am I? So there are a lot of implied meanings.”

At worst, training AI models on content scraped from the internet can poison them with racism and misogyny. That means they can be not only useless at times, but actively harmful.

Choi works at the forefront of research intended to give artificial intelligence programs the context they need to understand what we really mean and to respond in ways that are both accurate and ethical. In addition to COMET, she helped develop Grover, an AI “fake news” detector, and Ask Delphi, an AI advice generator that judges whether particular actions or statements are moral, based on judgments crowdsourced from online advice communities.

Crosscut recently caught up with Choi to talk about her MacArthur honor, see demonstrations of some of her research projects, and discuss the responsibility she feels to help AI grow ethically. This conversation has been lightly edited for length and clarity.

Crosscut: How did you feel when you learned that you had won this award?
Choi: I’ve come a long way; that’s one way of putting it. I consider myself more of a late bloomer: a bit quirky, working on risky projects that may or may not be promising, but are certainly adventurous.

The reason I chose to work on that wasn’t necessarily because I anticipated a prize like this at the end, but rather because I felt like I was nobody, and if I tried something risky and failed, no one would notice. Even if I failed, maybe we would learn something from the experience. I felt that this way I could contribute more to the community than [by] working on what other, smarter people could do.

What first attracted you to AI research, especially the risky aspects you mentioned?
I wanted to study computer programs that could understand language. I was drawn to language and intelligence in a broad sense, as well as the role of language in human intelligence. We use language to learn, we use language to communicate, we use language to create new things. We conceptualize verbally and that was fascinating to me, maybe because I wasn’t very good with language growing up. Now my job requires me to write a lot and talk a lot, so I’ve become much better at it.

I had a hunch that intelligence is really important, but it was just a vague hunch. I was gambling with my career.

It turned out to be a lot more exciting than I expected.

How well does AI understand us right now?
Computers are like parrots in that they can repeat what humans have said – much better than a parrot – but they don’t really understand. Here’s the thing: if you stray a little from the common patterns, that’s when they start making weird mistakes that humans would never make.

Computers can seem creative, perhaps generating something a little weird and different, and humans tend to project meaning onto that. But the truth is that there is no sentience or understanding there.