Zeynep Tufekci and Machine Gatekeepers

Zeynep Tufekci's Sawyer presentation on February 4 looked at how algorithms, the seemingly objective decision makers of the future, have widespread and largely unaccounted-for social implications. She approached algorithms from three angles: watching, judging, and nudging.

They Watch:

Algorithms monitor certain types of online behavior to the benefit of all sorts of organizations. Although targeted advertising is far older than computers, algorithms can build a more personal profile of a potential consumer, often from information never directly revealed but inferred from other personal data, in order to sell products. Algorithms can also predict depression in an individual several months before the onset of symptoms, using only information gathered from that person's online presence.

They Judge:

During the Ferguson protests, Facebook's curatorial algorithm filled user feeds with videos of the Ice Bucket Challenge, while Twitter, which did not have a similar algorithm, was a hotbed of discussion about the events in Ferguson. Facebook's Year-in-Review feature proclaimed, "It's been a great year!" while selecting pictures of a man's recently deceased daughter for display.

They Nudge:

A study published in Nature demonstrated that a simple message on Facebook encouraging some users to vote in the 2010 elections resulted in over 300,000 additional ballots cast. In the recent Iowa caucus, the Ted Cruz campaign used an algorithm to create psychological profiles of voters and to tailor specific messages to individuals based on certain traits.

Each of these algorithms is an effective tool for solving a specific problem, but together they raise profound political and philosophical questions. To paraphrase Tufekci: "We are not ready for machines to be right or for them to be wrong." Who takes the blame if a drone, programmed to learn independently, bombs a group of innocents? And if Facebook wanted to sway an election, which in Tufekci's opinion is within the realm of possibility, we could do little about it, assuming we even noticed its influence at all.

On a less conspiratorial note, the people who produce these algorithms are usually well-intentioned but, given the complexity of the code they work with, cannot predict the consequences of their programs. In the case of machine learning, we have no way to know exactly what the machine is learning or why, which gives computers a sort of inscrutable agency. For instance, Tufekci said that the algorithmic censoring of the Ferguson protests embarrassed Facebook's developers. They did not anticipate that the values embedded within their programs, such as biases toward natively uploaded videos and highly tagged content, could lead to the Ice Bucket Challenge crowding out Ferguson.
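Tufekci did not walk through any code, but a toy sketch helps show how such values get embedded. The ranking function below is entirely hypothetical, with invented features and weights; it assumes only what the talk described, that the curation rewarded natively uploaded videos and highly tagged content.

```python
# A minimal, hypothetical feed-ranking sketch. The features and weights are
# invented for illustration and are not Facebook's actual system; the point is
# that innocuous-looking numbers quietly encode editorial values.

def rank_posts(posts):
    """Score posts by engagement-style signals and return them best-first."""
    def score(post):
        s = post["likes"] + 2.0 * post["comments"]
        s += 5.0 * post["tag_count"]        # reward highly tagged content
        if post["native_video"]:
            s *= 1.5                        # reward natively uploaded video
        return s
    return sorted(posts, key=score, reverse=True)

posts = [
    {"name": "Ice Bucket Challenge clip", "likes": 900, "comments": 150,
     "tag_count": 12, "native_video": True},
    {"name": "Ferguson protest report", "likes": 700, "comments": 400,
     "tag_count": 1, "native_video": False},
]

for post in rank_posts(posts):
    print(post["name"])
```

In this toy example the challenge clip outranks the protest report, not because anyone decided to suppress the news, but because the weights favor certain formats; that is the kind of unintended consequence the developers reportedly did not foresee.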

Algorithms also allow people to avoid examining the decision-making process itself. They shift accountability away from the humans in charge, and their impenetrability makes it easy to look the other way, discouraging a critical approach to the programs. For instance, a hiring algorithm might assemble a racially diverse team for a business, which would appear to be an ideal improvement over human judgment while concealing undesirable results. If the algorithm accounts for commute time, selecting only those who live near the workplace, it could effectively eliminate an entire economic class from consideration because of housing segregation. This outcome could be entirely unforeseen and unwanted by those in charge of hiring, but no procedure currently in place would ensure that the business even notices the problem, let alone addresses it.
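No actual hiring system figured in the talk, but a hypothetical filter makes the mechanism easy to see. The cutoff and candidate data below are invented; the sketch assumes only that commute time is used as a screening feature, as in the example above.

```python
# A hypothetical screening sketch showing how a neutral-looking feature can act
# as a proxy for economic class. All names, numbers, and the cutoff are invented.

def screen(candidates, max_commute_minutes=30):
    """Keep only candidates whose commute falls under the cutoff."""
    return [c for c in candidates if c["commute_minutes"] <= max_commute_minutes]

candidates = [
    {"name": "A", "skill": 9, "commute_minutes": 15},   # lives near the office
    {"name": "B", "skill": 10, "commute_minutes": 55},  # priced out of nearby housing
]

print([c["name"] for c in screen(candidates)])   # only "A" survives the filter
```

Nothing in the code mentions income or neighborhood, yet because housing is segregated, filtering on commute time screens by class anyway, and nothing in the hiring pipeline would flag it.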

Coming to terms with these machine gatekeepers, according to Tufekci, begins with developing a critical practice around the use of algorithms. For example, programmers need to be educated about the implications of their work, and humanists and social scientists need to become involved in testing algorithms for unforeseen consequences. Perhaps the main obstacle standing in the way of such a development lies in our contented relationship with technology.

Machines do not exercise easily recognizable, objectionable totalitarian control; instead, they cater to us. Why should we care about the exploitation of private information if it makes our lives easier and more comfortable and gives us what we want? The job of this new wave of researchers and activists would be to highlight the emergent problems of a system that also has so many positive effects. As Rex has said, "We need to move beyond good and evil" and stop trying to fit the effects of technology into conventional moral judgments. It should be clear from Tufekci's talk that the study of technology needs to become a more interdisciplinary effort if we want to understand our machines and how they affect us.