With glass interior walls, exposed plumbing and a staff of young researchers dressed like Urban Outfitters models, New York University’s AI Now Institute could easily be mistaken for the offices of any one of New York’s innumerable tech startups. For many of those small companies (and quite a few larger ones) the objective is straightforward: leverage new advances in computing, especially artificial intelligence (AI), to disrupt industries from social networking to medical research.
But for Meredith Whittaker and Kate Crawford, who co-founded AI Now in 2017, it’s that disruption itself that’s under scrutiny. They are two of many experts working to ensure that, as corporations, entrepreneurs and governments roll out new AI applications, they do so in a way that’s ethically sound.
“These tools are now impacting so many parts of our everyday life, from healthcare to criminal justice to education to hiring, and it’s happening simultaneously,” says Crawford. “That raises very serious implications about how people will be affected.”
AI has plenty of success stories, with positive outcomes in fields from healthcare to education to urban planning. But there have also been unexpected pitfalls. AI software has been harnessed for disinformation campaigns, accused of perpetuating racial and socioeconomic biases, and criticized for overstepping privacy bounds.
BY ALEJANDRO DE LA GARZA