The Widening Gyre: Building Community in the Age of Artificial Intelligence
Facebook founder Mark Zuckerberg recently published an open letter titled “Building Global Community”, in which he asks, “Are we building the world we all want?” In the letter, Zuckerberg offers a hopeful vision of history, calling it the story of how people have learned to come together in “ever greater numbers”. He notes that we have reached a significant milestone in history where we must come together as a global community, because the opportunities and threats that face us occur on a global scale.
I believe the solutions he offers—namely, using artificial intelligence to improve Facebook as a communication service that builds community—will make its users not into better citizens, but into better consumers. Further, I believe that his trust in AI as the solution reveals an underlying assumption that the problems we face as a society are not structural ones but essentially matters of efficiency: to make the world a better place, we don’t need to change the way we do things so much as simply do them better. This veneration of technology as a panacea (Build more efficient cars and faster trains! Drink Soylent instead of wasting time on real food! Earth is dying? Let’s start over on Mars!) is part of the problem, not the solution, because it envisions humans as merely material beings, ignoring the moral and spiritual principles essential to the healthy functioning of any community.
Communities, Zuckerberg says,
provide all of us with a sense of purpose and hope; moral validation that we are needed and part of something bigger than ourselves; comfort that we are not alone and a community is looking out for us; mentorship, guidance and personal development; a safety net; values, cultural norms and accountability; social gatherings, rituals and a way to meet new people; and a way to pass time.
But the old models of community seem to be failing. Zuckerberg mentions a pastor who told him, “People feel unsettled. A lot of what was settling in the past doesn’t exist anymore.” The physical infrastructure of communities has been declining, with fewer and fewer people engaging meaningfully in traditional institutions like religion and the democratic process. Further, “today’s threats are increasingly global,” he says, “but the infrastructure to protect us is not.”
It is not until about a third of the way into the letter that Zuckerberg presents artificial intelligence as a solution. AI “can help provide a better approach”, he says, explaining that “right now [Facebook is] starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our service to recruit for a terrorist organization.” After that, he only mentions AI a few more times: first, stating that the discussion around AI in the tech community “has been oversimplified to existential fear-mongering” (a subtle push for normalizing more widespread use of AI); second, saying that the best way to create effective community content standards on Facebook is “to combine creating a large-scale democratic process to determine standards with AI to help enforce them”; and third, noting that “major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more”.
Those are the only times he mentions AI in the whole letter, so it may seem that I am overstating things when I call it his primary solution. But every example he provides of how Facebook can encourage its users to build more effective and just communities ultimately relies on AI to filter and prioritize content in order to present “better” content to users.
This is a troubling solution, particularly given the letter’s “the arc of the moral universe is long, but it bends towards justice” tone, because it removes any moral calculus from decisions about which content is promoted and which is removed, replacing it with decision-making based on profitability. Further, a solution like this only seems reasonable given some questionable underlying assumptions about ethics and civic engagement—namely, that we can determine what is good and what is bad solely by collecting and analyzing data, and therefore that the contemporary failure of communities stems simply from our not yet having gathered enough data to optimize their functioning.
Facebook is a Company, Not a Community
Like any company, Facebook’s purpose is to generate wealth. Whatever particular services it provides are ultimately only a vehicle towards this goal. Therefore, any change that Facebook makes to its service will be oriented towards making it more robust, more desirable, and more profitable.
I do not mean to suggest that Facebook as an organization or Mark Zuckerberg himself have malicious intent—certainly, providing a genuinely meaningful service that improves communities, and being profitable, are not goals that are inherently in opposition to each other. But any moral compass that Facebook might rely on to guide their decisions will always be secondary to their bottom line. The success of any system of artificial intelligence that Facebook employs will therefore first be measured by whether its use increases or decreases user engagement before any considerations of building a “better” community, in a moral sense, are considered.
There’s a saying: If you’re not paying for the product, you are the product. This idea is particularly worth keeping in mind when talking about Facebook, a service that describes itself as “free and always will be”, has around 1.86 billion active users, and generated over $10 billion in profit in 2016. Again, I want to emphasize that I do not believe this necessarily means that Facebook is tricking us or acting against our best interests by treating us as products—it simply means that their main goal will be to make their service as desirable as possible so that we use it more in order to become exposed to more advertising, the primary way that Facebook generates revenue.
I want to turn now to a story published in The New York Times Magazine all the way back in 2012 that explores how companies like Target increasingly rely upon algorithms and statistics to increase brand loyalty. This story might help us better understand why Facebook might want us to both view it as an essential service—a part of the “social infrastructure”, as Zuckerberg calls it—for the functioning of modern communities and also feel less apprehensive about its use of AI. Even now, the article reads like science fiction, with companies surreptitiously gathering enough information on their customers to predict personal events, like pregnancies, before even other family members are aware, and then using that information to manipulate buying habits. And that was five years ago, before the current wave of AI systems came into wide use. As Zuckerberg notes at the start of his letter, regarding the progress of technology, “We always overestimate what we can do in two years, and we underestimate what we can do in ten years.”
For now, let me just quote extensively from the story:
For decades, Target has collected vast amounts of data on every person who regularly walks into one of its stores. Whenever possible, Target assigns each shopper a unique code — known internally as the Guest ID number — that keeps tabs on everything they buy. …
Also linked to your Guest ID is demographic information like your age, whether you are married and have kids, which part of town you live in, how long it takes you to drive to the store, your estimated salary, whether you’ve moved recently, what credit cards you carry in your wallet and what Web sites you visit. Target can buy data about your ethnicity, job history, the magazines you read, if you’ve ever declared bankruptcy or got divorced, the year you bought (or lost) your house, where you went to college, what kinds of topics you talk about online, whether you prefer certain brands of coffee, paper towels, cereal or applesauce, your political leanings, reading habits, charitable giving and the number of cars you own. …
Almost every major retailer, from grocery chains to investment banks to the U.S. Postal Service, has a “predictive analytics” department devoted to understanding not just consumers’ shopping habits but also their personal habits, so as to more efficiently market to them. …
There is a calculus, it turns out, for mastering our subconscious urges. For companies like Target, the exhaustive rendering of our conscious and unconscious patterns into data sets and algorithms has revolutionized what they know about us and, therefore, how precisely they can sell.
Much of the article focuses on the work of a statistician who was hired by Target to “analyze all the cue-routine-reward loops among shoppers and help the company figure out how to exploit them”. One of his most effective efforts was assigning female shoppers a “pregnancy score”, which estimated, from their buying habits, how likely it was that they were having a baby soon, in order to market certain products to them more effectively. It had been found that major life events, particularly the birth of a child, marked a point when an individual’s brand loyalties—which in general are extremely difficult to change—were weakened.
But the company soon realized that such an invasion of personal space as predicting a woman’s pregnancy without her sharing that information could be a public-relations disaster. As noted in the story, “how could they get their advertisements into expectant mothers’ hands without making it appear they were spying on them? How do you take advantage of someone’s habits without letting them know you’re studying their lives?”
What this story makes painfully clear is that many of our decisions, from what necessities we buy and what art and media we consume to what spaces (both physical and digital) we choose to spend time in, are greatly determined by subconscious habits and subtle cue-routine-reward loops that can be extremely difficult to identify and change. Companies do not want one-time shoppers, but habitual, lifetime users.
Target was able to achieve their goals quite effectively by analyzing the shopping habits of their customers. What, then, can be accomplished by an organization like Facebook, which has information on not only our shopping habits, but our personal lives, our political views, what our favorite books and shows and bands are, who we love and who we hate, our hopes and dreams and most banal desires? What can be accomplished by an organization that seeks to analyze this information, not with people, but with artificial minds a thousand times more efficient, tireless, and lacking any hesitation or compunction? What sort of community will be built when these decisions are being made by a machine designed to maximize a user’s experience?
When we walk into a grocery store and buy Coke instead of Pepsi, we may believe that we have made this choice purely by free will in that moment—perhaps we simply like Coke better than Pepsi. But countless advertisements have influenced even that inconsequential preference. What will happen as decisions that actually matter—like what policies we should implement and actions we should take to protect the environment, educate our children, or eliminate racism—become increasingly influenced by advertising?
Zuckerberg states in his letter that “it is our responsibility to amplify the good effects and mitigate the bad — to continue increasing diversity while strengthening our common understanding so our community can create the greatest positive impact on the world.” He goes on to say, “In a free society, it’s important that people have the power to share their opinion, even if others think they’re wrong. Our approach will focus less on banning misinformation, and more on surfacing additional perspectives and information, including that fact checkers dispute an item’s accuracy”.
In short, people will see more of what they want to see, and less of what they don’t. The artificial intelligence that Facebook uses to determine this will judge based on user feedback and engagement—the more people like what they see on Facebook and continue to use it, the more prominent that kind of content will be.
Many people believed that the widespread adoption of the internet heralded a great democratization of the world. Conceived during the Cold War—its design shaped by research into hardened networks that could keep the military leadership of the United States communicating in the event of massive nuclear destruction—the internet has the potential to be the great equalizer of knowledge: no one person or group controls it, and everyone is free to share and create content on it. And although this has come true to some extent, with the modern internet massively increasing the amount of information available to the average person and uniting the world in an unprecedented way, it has also enabled a great fracturing of societies, resulting in what Zuckerberg refers to as “ideological bubbles”, a landscape of countless political tribes occupying innumerable hinterlands of isolated thought and belief.
If Facebook is serious about encouraging good journalism and removing “fake news”, perhaps relying on artificial intelligence to filter content will be effective. But I suspect it will have the opposite effect on these ideological bubbles. People will become even more sequestered, exposed to fewer viewpoints that differ from their own. According to any reasonable moral narrative, things will probably get worse, because Facebook and AI, like all technologies, are only as good as the people who use them, and no real effort is being made here to address a wider culture that treats the independent investigation of truth with such flippancy. A man is not any more a murderer because he holds a gun instead of a knife; the gun simply makes him more efficient.
On Facebook, we are consumers first and citizens second. If you disagree, if you think this letter shows that Mark Zuckerberg has transcended his role as the owner of a company and transformed into an altruistic public servant, that he only values the longevity of his product insofar as it creates better citizens, try this thought experiment: imagine that it has been conclusively determined that Facebook, by its very nature, has a negative impact on its users’ sense of civic duty and desire to build community in the real world. Do you think Mark would shut down Facebook?