Is It Cool if I Present My Thesis in Morse Code

Certainly theory and making are inseparably bound, even if we don’t always realize or intend it. I think we tend to talk about artifacts in terms of their explicit and implicit arguments: a website may feature a concluding statement that touts one argument, but then endorse another via its presentation, layout, or medium. In the digital world, it becomes especially difficult to differentiate arguments generated intentionally by the author from those generated circumstantially by the tool or medium. And of course, the question still stands as to whether and when this distinction matters. Should the constraints or affordances of a medium be dismissed or excused as limiting reagents (“for something made in MS Paint, it makes a convincing argument”), or should they be weighed as deliberate and argumentative in their own right? Both? I’ve tried to pick a few artifacts that investigate this struggle.

1. InformationIsBeautiful.net’s “Snake Oil Supplements?” is one of my very favorite data visualizations, as it’s one of the few that serves up utility and visual appeal in equal measure. Yet I’m also critical of the piece, because it strikes me almost as a work of subliminal advertising. Before I even begin to read the words on-screen, my senses are flooded with buzzwords (“strong”, “good”, “evidence”, “scientific evidence”) and visual appeals to my affection – big, deeply colored bubbles are somehow better than tiny, pale ones. The entire thing reeks of careful plotting and strategy, and I don’t know whether to be repulsed or amazed. I think back to Leigh’s post “Memory of your loved ones: $3/day”, in which she voices her frustration at finding a paywall blocking her access to Fannie Brandon’s obituary. Both the Snake Oil visualization and Leigh’s paywall offer examples of interface and design coloring our understanding of an artifact before we even reach the “meat” of it.
2. My second example is actually one of my own. Last year, for Dr. Shrout’s HIS245 course, I compiled, organized, and visualized around 170 American love letters from 1768 to 1860. Once I had my data in a manipulable format, I plugged it into a number of data visualization platforms. Below is one that I made using a tool called WordIJ.
[Screenshot: my WordIJ visualization of the love-letter data]
Purty, ain’t it? I think it looks a little like a flower growing from two lungs. You might think it looks like spaghetti. In any case, I’m pretty unhappy with this visualization. Above all, it’s misleading. It suggests that my data is on the same level of beauty and eloquence as the image. Certainly, I could make a case for love letters being that beautiful, but I also have to concede that you could feed 1,000 lines of gibberish into WordIJ and end up with a visualization just as exquisite – a rough sketch of why appears below. The platform is built to make pretty things out of pretty and ugly things alike. Again, it’s up to us to determine whether and to what degree these kinds of affordances matter. Will recently touched on this topic when he wrote on “the issue of how much power curators have over the final interpretation of an artifact.” As we continue to examine the say that tools and platforms have in formulating theory, it seems the follow-up question to Will’s may be how much power curators have over the tools they use, and whether it is worth it to succumb to their affordances.
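To make the gibberish point concrete, here is a minimal sketch of the general technique behind visualizations like the one above: a word co-occurrence network. To be clear, this is my own rough approximation, not WordIJ’s actual algorithm; the function name, window size, and sample strings are all invented for illustration.

```python
# Rough sketch of a word co-occurrence network, the general idea behind
# tools like WordIJ (my approximation, not WordIJ's real algorithm).
import re
from collections import Counter

def cooccurrence_edges(text, window=5, top=10):
    """Count how often pairs of words fall within `window` words of each other."""
    words = re.findall(r"[a-z']+", text.lower())
    pairs = Counter()
    for i, word in enumerate(words):
        for neighbor in words[i + 1 : i + window]:
            if neighbor != word:
                pairs[tuple(sorted((word, neighbor)))] += 1
    return pairs.most_common(top)

# Feed it eloquence or feed it noise -- either way, out come weighted edges
# ready to be drawn as a pretty network.
letter = "my dearest love the hours apart from you stretch on without end my love"
gibberish = "blorp zath quux blorp zath quux blorp zath quux blorp zath quux"
print(cooccurrence_edges(letter))
print(cooccurrence_edges(gibberish))
```

Both calls return plausible-looking edge lists; nothing in the pipeline knows or cares which corpus was beautiful.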

Data Assessment: Cornelia Shaw

In researching Cornelia Shaw, I’ve come across several sources which cite her importance to the Davidson community – the Davidsonian wrote of her as a “most valuable friend to the college”, and her biographical page on the archives’ site emphasizes her close bonds with members of the student body.

The college’s perception of Shaw, while valuable, does not provide a very comprehensive picture of her. I’m curious what other groups and individuals – her family and her colleagues, for example – thought of Shaw, and I’ve tried to organize my database model around this question. For each of my sources, I asked myself whose opinion of Ms. Shaw the text mainly informs or reflects, then sorted the sources into categories based on my conclusion; a rough sketch of the model appears below. The sparsest category by far is “As Seen by Her Family,” since the college’s own sources on Shaw focus almost exclusively on Shaw herself, not her family members. I will likely need to look beyond the college archives to gather more information about the Shaw family. The last of my four groups is dedicated to capturing Shaw’s opinion of herself. Unless I magically stumble upon Shaw’s diary or autobiography, this category will likely be a tricky one to flesh out – for now, I’ve put in the things that she herself wrote. I may end up taking some poetic license here, as I try to tease out Shaw’s opinion of herself from her writings.
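For the curious, here is what that perspective-based model looks like in miniature. Only “As Seen by Her Family” and the self-perception group are named above; the other two labels and the sample entries are my own hypothetical stand-ins.

```python
# Miniature sketch of the perspective-based source model described above.
# Two of the four labels and both sample sources are hypothetical stand-ins.
from dataclasses import dataclass

PERSPECTIVES = [
    "As Seen by the College",     # hypothetical label
    "As Seen by Her Colleagues",  # hypothetical label
    "As Seen by Her Family",
    "As Seen by Herself",
]

@dataclass
class Source:
    title: str
    perspective: str  # whose opinion of Shaw the text mainly informs or reflects

sources = [
    Source("Davidsonian tribute", "As Seen by the College"),
    Source("Shaw's own writings", "As Seen by Herself"),
]

# Group sources by perspective -- the sparse lists show where to dig next.
by_perspective = {p: [s.title for s in sources if s.perspective == p]
                  for p in PERSPECTIVES}
print(by_perspective)
```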

Mainstream Memory

There’s a video that invariably gets posted to Reddit in the aftermath of tragedies with high death tolls, and Friday night’s attack in Paris was no exception. It’s a clip from English satirical journalist Charlie Brooker, in which he criticizes the mainstream media for their coverage of these events. Brooker points to the media’s tendency to produce a killer-centric, rather than victim-centric narrative as harmful for all who watch it. It’s panic-inducing for the peaceful individuals who only want to grieve, and propaganda for the disturbed viewers who may simply be waiting for a final spark of inspiration before launching their own attack.

It’s easy to lay the blame exclusively on CNN, Fox News, and other media outlets who flood our TV screens with images, videos, and trivia about the perpetrator(s). But as one perceptive redditor pointed out in the comments of a recent re-post of the Charlie Brooker clip, “mass murders = better ratings for CNN. Telling a network like CNN how to prevent these types of shootings is like a batter telling the pitcher where he likes the ball.” The killer-centric narrative persists because we consume it so readily, and then do little to actually challenge or dismantle its prominence.

I was compelled to write on this topic, no doubt, by the events in Paris this weekend. But Fuentes’ article for today, on the difficulties of altering and subverting deeply cemented mainstream narratives, is incredibly relevant here. Fuentes writes, “[Rachael Pringle] Polgreen’s story has in many ways been rendered impermeable, difficult to revise and over-determined by the language and power of the archive.” As Michael wrote last week, sometimes the mere accessibility of certain information can work to solidify a mainstream perspective – in his case, the history of Ralph Johnson as a businessman overshadows and overpowers the history of Johnson as a father or a writer. I am curious how this effect plays out on a much larger scale, as narrative patterns – from occupation-centric histories of individuals to killer-centric histories of tragedies – emerge as the ones we consume and remember.

Taking Responsibility

In the new “Steve Jobs” movie (the new new one, with Fassbender, not Kutcher), there’s a running debate between Jobs and his ex-girlfriend Chrisann Brennan over whether or not Jobs is the father of her child – Jobs says no, Brennan says yes. At one point, Brennan attacks Jobs for a quote he gave TIME magazine in 1983: “28 percent of the male population of the United States could be the father.” Ouch.

Jobs defends himself by proclaiming that he used some algorithm to get that statistic. But the quote’s implication still stands, obviously. Jobs could try to hide behind this “algorithm,” but at the end of the day he still essentially called his ex a slut in one of the nation’s largest news publications.

As I read William G. Thomas’ “Computing and the Historical Imagination”, my mind returned to this part of the film. In particular, Jobs’ remarks strike me as awfully similar to Time on the Cross: The Economics of American Negro Slavery, which Thomas cites as an early example of computational methods colliding with and fueling historical argument. Thomas explains that Time on the Cross and its authors received intense criticism not just for the accuracy of their data, but also for their arguments, which seemed to paint slavery in a far softer light.

Thomas doesn’t say much about the authors’ reactions to this criticism, but I would imagine that, like many researchers, and like Steve Jobs, their inclination was to point to the data, to shrug the blame off of their shoulders and onto the computer’s.

Certainly, computers have a certain crystal-ball aura about them that makes hiding behind their predictions incredibly tantalizing. Now more than ever, it is easy to feed data into a given program or website and receive, seconds later, some output that we can immediately spin into an argument. Often, the mere fact that the computer – the pinnacle of exactness and precision – handled the work is enough for us to accept its output with hardly a second glance. Or, as Kim wrote last week, that vague sense of digital creations being inherently “different” and unique alone gives them an air of authority. But the real danger, I think, comes not when we produce faulty data, but when we position arguments as the product of a computer or an algorithm so that we might absolve ourselves of our responsibility for them.

I know this is a vague topic, so I want to close with a final, brief anecdote. Last week, my electronic literature class talked with author and Twitter bot creator Darius Kazemi over Skype. He told us about the time one of his bots happened to generate a tweet containing a racial slur, and how he promptly deleted the tweet and introduced a language filter to prevent it from happening again. But he still felt bad, and told us that it’s his “moral duty” to feel at least some responsibility for his bot’s actions. According to Kazemi, you don’t get to take credit for the work put into a digital project, but then dodge the blame for whatever it generates. We’re guilty by association, I suppose.
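For anyone wondering what a “language filter” amounts to in a bot pipeline, here is a minimal sketch of the general wordlist approach. Kazemi has released a small open-source library along these lines called wordfilter, but I can’t say whether his bot’s filter matched this exactly; the blocklist entries and function names below are stand-ins.

```python
# Minimal sketch of a wordlist-style language filter for a Twitter bot.
# Generic illustration, not Kazemi's actual filter; BLOCKLIST and the
# posting function are hypothetical stand-ins.

BLOCKLIST = {"badword1", "badword2"}  # a real list would hold actual slurs

def is_safe(candidate: str) -> bool:
    """Reject a candidate tweet if any blocked term appears anywhere in it."""
    lowered = candidate.lower()
    return not any(term in lowered for term in BLOCKLIST)

def post(candidate: str) -> None:
    if is_safe(candidate):
        print("tweeting:", candidate)  # stand-in for a real Twitter API call
    else:
        print("suppressed:", candidate)  # never reaches the timeline

post("a perfectly harmless generated sentence")
post("a sentence containing badword1")
```

Substring matching is blunt and can over-block, but for a bot that generates thousands of candidate tweets, quietly discarding a few false positives costs nothing.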

“That belongs in a museum! . . . Or an Omeka exhibit.”

Overall, I like what McClurken has to say. I especially enjoyed his opening anecdote, about the initially skeptical student who eventually gained an appreciation for digital scholarship after encouragement from McClurken to embrace feeling “uncomfortable, but not paralyzed.”

Well, I should say that I like this version of the story – the one that McClurken is obviously touting, and that bears the morals ‘moments of epiphany and introspection can blossom from discomfort,’ and ‘digital technology has a place in academia,’ and ‘don’t knock it till you try it!’

And yet. . . my cynical mind is pushing me to imagine a slightly different telling of his tale, one a bit closer to my own experiences. What if the wary student was onto something? What if Omeka/WordPress/DOS/whatever digital tool McClurken had his students use just wasn’t the right tool for the job, at least not for this particular student? What if the “right tool” (again, perhaps only for this student) was a non-digital platform? It’s hard to imagine the student’s discomfort as anything but paralyzing if that were the case, and she had to trudge through building an entire project in a platform she didn’t understand, enjoy, or agree with.

I’m sure most of us are familiar with the experience of being pigeonholed into using a platform that just doesn’t ‘play nice’ with us, or with the material at hand. It isn’t fun, and it stifles creativity and learning. I’ll certainly grant McClurken that, more often than not, initial frustrations with technology can be attributed to that universally unpleasant feeling of stepping outside our comfort zones. But occasionally this discomfort has a more substantive root, and may be a sign that we’re trying to jam a square peg into a round hole. It’s the difference between jumping into a cold pool and getting used to the water, and dipping your feet into a green, radioactive pool and saying “nope, I’ll look for another.”

As Sherwood brings up in his post, our generation risks forfeiting the Internet’s “by the people, for the people” mantra if we continue slinking toward consumptive, rather than creative, behaviors. To keep the Internet in our own hands, we must look critically at the technologies we use – a process that involves, among many other things, learning and deciding which tools and platforms are best for which projects. As students, it’s also critical that we make this decision ourselves. I appreciate that in this course, we’re being given both the time and the freedom to do just that.
