Taking Responsibility

In the new “Steve Jobs” movie (the new new one, with Fassbender, not Kutcher), there’s a running debate between Jobs and his ex-girlfriend Chrisann Brennan over whether or not Jobs is the father of her child – Jobs says no, Brennan says yes. At one point, Brennan attacks Jobs for a quote he gave TIME magazine in 1983: “28 percent of the male population of the United States could be the father.” Ouch.

Jobs defends himself by proclaiming that he used some algorithm to get that statistic. But the quote’s implication still stands, obviously. Jobs could try to hide behind this “algorithm,” but at the end of the day he still essentially called his ex a slut in one of the nation’s largest news publications.

As I read William G. Thomas’ “Computing and the Historical Imagination”, my mind returned to this part of the film. In particular, Jobs’ remarks strike me as awfully similar to Time on the Cross: The Economics of American Negro Slavery, which Thomas cites as an early example of computational methods colliding with and fueling historical argument. Thomas explains that Time on the Cross and its authors received intense criticism not just for the accuracy of their data, but also for their arguments, which seemed to paint slavery in a much softer light.

Thomas doesn’t say much about the authors’ reactions to this criticism, but I would imagine that like many researchers, and like Steve Jobs, their inclination was to point to the data, to shrug the blame off of their shoulders and onto the computer’s.

Certainly, computers have a certain crystal-ball aura about them that makes hiding behind their predictions incredibly tantalizing. Now more than ever, it is easy to feed data into a given program or website and receive, seconds later, some output that we can immediately spin into an argument. Often, the mere fact that the computer – the pinnacle of exactness and precision – handled the work is enough for us to accept its output with hardly a second glance. Or, as Kim wrote last week, that vague sense of digital creations being inherently “different” and unique alone gives them an air of authority. But the real danger, I think, comes not when we produce faulty data, but when we position arguments as the product of a computer or an algorithm so that we might absolve ourselves of our responsibility for them.

I know this is a vague topic, so I want to close with a final, brief anecdote. Last week, my electronic literature class talked with author and Twitter bot creator Darius Kazemi over Skype. He told us about the time that one of his bots happened to generate a tweet containing a racial slur, and how he promptly deleted the tweet and introduced a language filter to prevent this from happening again. But he still felt bad, and told us that it’s his “moral duty” to feel at least some responsibility for his bot’s actions. According to Kazemi, you don’t get to take credit for the work put into a digital project and then dodge the blame for whatever it generates. We’re guilty by association, I suppose.
