How does Artificial Intelligence actually think? Even its creators often don’t fully understand. But as AI becomes more and more important, you should at least have an idea of how algorithms arrive at their results. And they think totally differently from human beings, says Sara M. Watson, tech critic and writer at the Digital Asia Hub, Hong Kong. How can literature and journalism help us find a new perspective on AI?
“The biggest problem AI has is that even the engineers can’t really explain certain outcomes or certain decisions that go through an artificially intelligent system.”
In many cases, I think, engineers have been saying that they just can’t articulate how the output came about. They can try to reverse engineer it or model the logic tree. But in many cases you’re talking about layers and layers and layers of processing that happen in order to get to something like image recognition, tagging, or classification by these complex machines.
I still think the crux of the societal question of what AI is and what it could be really surfaces around trying to understand it. If we talk about AI mostly as a way of modelling human thought, we’ve almost limited ourselves to our imagination of what a machine is capable of. And so if we try to understand the way a machine is playing chess or Go or winning Jeopardy, we’re not really fully understanding the way it’s processing, because we’re trying to understand it the way we think.
People who are writing about this don’t know how else to write about it. But also the stories that scientists are telling, or that the PR firms around the science or the technology companies are pushing, are limited in the imaginative framing they use. So I’m actually really interested in thinking of other means of storytelling: not just focussing on the grand narrative of where AI is going, this man-versus-machine scenario in which the machines take over and man loses, but rather focussing on other ways of telling the story.
I’m really interested in talking about character development or first-person narratives to try to understand how an AI thinks. What would it look like to write a story that tries to explain, in the voice of an AI, what it is like to be an AI?
I think that’s something even engineers struggle with in trying to understand and explain how their own technology works, because we’re talking about systems that are basically very complex, multi-level decision-making trees that are, in many cases, beyond explanation, even to the engineers themselves.
I think there are a lot of different ways we can use storytelling to try to get at something that’s really hard to explain, or to imagine another intelligence: not just an intelligence that’s like man, but rather an intelligence that is different from man.
What to do with Facebook?
I always go back to the question of what these technology firms are optimising for. In most cases, for a platform like Facebook, the main driver is to get you to spend more time on Facebook. And if that’s the metric, it leads towards certain behaviours: it’s not about best serving the user, it’s about giving you more content that will keep you on the site longer.
I start to ask this question about any technology. And I think it’s the ultimate question about technology: What are we optimising for? If we can answer that question, then we can get to a very substantial conversation with the technology firms, the policy makers, and the users about what their interests are.
I’ve actually been pushing that as my underlying big-picture question for basically any technology we’re talking about. So in the case of the newsfeed, whether or not it’s skewed in one direction or another, I think users have to have the opportunity to express their intentions or desires about how the feed is optimised.
As a user, I don’t get to say: well, I keep seeing all these cat videos, but I would actually like my feed to be slightly more politicised because an election is happening, and I want to see a mix of this and that. A really concrete solution is to give users control over what the feed filters towards, and the ability to change that any given week or any given day, perhaps through third-party proxies. Say I just want the Associated Press to be responsible for whatever news shows up in my newsfeed.
But Facebook just hasn’t opened that as a possibility, though I can foresee a version of Facebook that allows multiple different ways of curating the feed. For now nobody’s asking for that, and Facebook really isn’t interested in exposing it or opening up that Pandora’s box of possibilities, right? They’re just interested in optimising towards one thing, and that’s what Facebook has determined as its ultimate reason for being.
How can journalism help to bridge the gap?
I look to places like the Berkman Klein Center for Internet & Society for that, because they are really interested in putting policy makers, technologists, users, activists, and journalists all together in the same room.
What I’ve learned from hanging out there for the last couple of years is that those conversations can be uncomfortable, and they can certainly get heated at times. But getting together round the same table, being in the same space, and talking to each other face to face actually changes the temper of the conversation a lot: there’s at least an interest in having a shared conversation instead of a kind of competitive interface.
I think conferences like this one are also often where that kind of conversation can happen, where you can put a tech CEO on the stage to be interviewed by somebody who comes from the policy side of things.
There’s a certain type of technology critic who tends towards just deconstructing what’s going on, pointing at something and saying: this is a neoliberal problem where Silicon Valley is just deciding what the standard is, or calling something capitalistic.
Those critics are important and interesting, but they often shut down conversation. I’m really interested in having a conversation that brings more people to the table to share vocabulary and a common framing of the problem. A lot of that means you have to be careful about the language you’re using, the framing you’re using, and also the questions you’re asking and proposing.
But really the goal is to get people to work towards something productive and constructive. So I’m also really interested in putting forth not only the deconstruction or the description of the problem, but really spending time on the potential solution: a policy intervention, a design change, anything along those lines that actually starts to explore what’s possible, what could be, or maybe what should be, rather than just describing the problem as a problem.
Sara M. Watson is a technology critic, an affiliate with the Berkman Klein Center for Internet and Society at Harvard University, and a writer in residence at Digital Asia Hub.
Sara writes and speaks about emerging issues at the intersection of technology, culture, and society. As a Fellow at the Tow Center for Digital Journalism at Columbia University, Sara researched the writers and publications contributing to the discussion about technology and society. She published a report in the Columbia Journalism Review that proposed a constructive approach to technology criticism, one that not only critiques but also offers alternatives. Her writing appears in The Atlantic, Wired, Gizmodo, Motherboard, Harvard Business Review, Al Jazeera America, and Slate. She presents at technology conferences around the globe, including SXSW and O’Reilly Strata.
Sara began her career as an enterprise technology analyst at The Research Board (Gartner, Inc.), exploring the implications of technological trends for Fortune 500 CIOs. She holds an MSc in the Social Science of the Internet with distinction from the Oxford Internet Institute, where her award-winning thesis examined the personal data practices of the Quantified Self community. She graduated from Harvard College magna cum laude with a joint degree in English and American literature and film studies. Her work continues to draw from media studies, science and technology studies, anthropology, and literature.