Is It Cool if I Present My Thesis in Morse Code

Certainly theory and making are inseparably bound, even if we don’t always realize or intend it. I think we tend to talk about artifacts in terms of their explicit and implicit arguments: a website may feature a concluding statement that touts one argument, but then endorse another via its presentation, layout, or medium. In the digital world especially, it becomes difficult to differentiate between arguments generated intentionally by the author and those generated circumstantially by the tool or medium. And of course, the question still stands as to whether and when this distinction matters. Should the constraints or affordances of a medium be dismissed or excused as limiting reagents (“for something made in MS Paint, it makes a convincing argument”), or should they be weighed as deliberate and argumentative in their own right? Both? I’ve tried to pick a few artifacts that investigate this struggle.

1. InformationIsBeautiful.com’s “Snake Oil Supplements?” is one of my very favorite data visualizations, as it’s one of the few that serve up utility and visual appeal in equal measure. Yet I’m also critical of the piece, because it strikes me almost as a work of subliminal advertising. Before I even begin to read the words on-screen, my senses are flooded with buzzwords (“strong”, “good”, “evidence”, “scientific evidence”) and visual appeals to my affection – big, deeply colored bubbles are somehow better than tiny, pale ones. The entire thing reeks of careful plotting and strategy, and I don’t know whether to be repulsed or amazed. I think back to Leigh’s post “Memory of your loved ones: $3/day”, in which she voices her frustrations at finding a paywall that blocked her access to Fannie Brandon’s obituary. Both the Snake Oil visualization and Leigh’s paywall offer examples of interface and design coloring our understanding of an artifact before we even reach the “meat” of it.
2. My second example is actually one of my own. Last year, for Dr. Shrout’s HIS245 course, I compiled, organized, and visualized around 170 American love letters from 1768 to 1860. Once I had my data in a manipulable format, I plugged it into a number of data visualization platforms. Below is one that I made using a tool called WordIJ.
[Screenshot: WordIJ visualization of the love-letter data]
Purty, ain’t it? I think it looks a little like a flower growing from two lungs. You might think it looks like spaghetti. In any case, I’m pretty unhappy with this visualization. Above all, it’s misleading. It suggests that my data is on the same level of beauty and eloquence as the image. Certainly, I could make a case for love letters being that beautiful, but I also have to concede that you could feed 1,000 lines of gibberish into WordIJ and end up with a visualization just as exquisite (a rough sketch of this follows below). The platform is built to make pretty things out of pretty and ugly things alike. Again, it’s up to us to determine whether and to what degree these kinds of affordances matter. Will recently touched on this topic when he wrote on “the issue of how much power curators have over the final interpretation of an artifact.” As we continue to examine the say that tools and platforms have in formulating theory, it seems the follow-up question to Will’s may be how much power curators have over the tools they use, and whether it is worth it to succumb to their affordances.
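
To make that point concrete, here is a minimal sketch of the kind of word co-occurrence network a tool like WordIJ draws. This is not WordIJ itself; it assumes Python with the networkx and matplotlib libraries, and it feeds the grapher pure gibberish. The result is still a perfectly presentable flower-growing-from-lungs.

```python
# A rough stand-in for a WordIJ-style visualization (assumption: Python + networkx +
# matplotlib, not the actual tool). Gibberish in, "exquisite" network out.
import random
import string

import matplotlib.pyplot as plt
import networkx as nx

def cooccurrence_graph(text: str, window: int = 2) -> nx.Graph:
    """Link every word to the words appearing within `window` positions after it."""
    words = text.lower().split()
    graph = nx.Graph()
    for i, word in enumerate(words):
        for neighbor in words[i + 1 : i + 1 + window]:
            graph.add_edge(word, neighbor)
    return graph

# 1,000 "words" of random five-letter gibberish instead of love letters.
gibberish = " ".join(
    "".join(random.choices(string.ascii_lowercase, k=5)) for _ in range(1000)
)

nx.draw(cooccurrence_graph(gibberish), node_size=20, width=0.3)
plt.show()
```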

Data Assessment: Cornelia Shaw


In researching Cornelia Shaw, I’ve come across several sources which cite her importance to the Davidson community – the Davidsonian wrote of her as a “most valuable friend to the college,” and her biographical page on the archives’ site emphasizes her close bonds with members of the student body.

The college’s perception of Shaw, while valuable, does not provide a very comprehensive picture of her. I’m curious what other groups and individuals – her family and her colleagues, for example – thought of Shaw, and I’ve tried to organize my database model around this question. For each of my sources, I asked myself whose opinion of Ms. Shaw the text mainly informs or reflects, then organized my sources into categories based on my conclusion. The sparsest category by far is “As Seen by Her Family,” since the college’s own sources on Shaw focus almost exclusively on Shaw herself, not her family members. I will likely need to look beyond the college archives to gather more info about the Shaw family. The last of my four groups is dedicated to capturing Shaw’s opinion of herself. Unless I magically stumble upon Shaw’s diary or autobiography, this category will likely be a tricky one to flesh out – for now, I’ve put in the things that she herself wrote. I may end up taking some poetic license here, as I try to tease out Shaw’s opinion of herself from her writings.
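
For the curious, here is a very rough sketch of what that organization looks like in practice. The structure, and every label except “As Seen by Her Family,” is a hypothetical stand-in rather than my actual database schema.

```python
# Hypothetical sketch of the model: sources grouped by whose opinion of Shaw they
# mainly reflect. Only "As Seen by Her Family" is named in the post; the other
# category labels and entries are placeholders for illustration.
shaw_sources = {
    "As Seen by the College": [
        {"title": "Davidsonian tribute", "type": "newspaper"},
        {"title": "Archives biographical page", "type": "web"},
    ],
    "As Seen by Her Family": [],   # sparsest category; needs sources beyond the archives
    "As Seen by Colleagues": [],
    "As Seen by Herself": [
        {"title": "Shaw's own writings", "type": "manuscript"},
    ],
}

# Quick check of which perspectives still need fleshing out.
for category, sources in shaw_sources.items():
    print(f"{category}: {len(sources)} source(s)")
```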

Mainstream Memory

There’s a video that invariably gets posted to Reddit in the aftermath of tragedies with high death tolls, and Friday night’s attack in Paris was no exception. It’s a clip from English satirical journalist Charlie Brooker, in which he criticizes the mainstream media for their coverage of these events. Brooker points to the media’s tendency to produce a killer-centric, rather than victim-centric narrative as harmful for all who watch it. It’s panic-inducing for the peaceful individuals who only want to grieve, and propaganda for the disturbed viewers who may simply be waiting for a final spark of inspiration before launching their own attack.

It’s easy to lay the blame exclusively on CNN, Fox News, and other media outlets that flood our TV screens with images, videos, and trivia about the perpetrator(s). But as one perceptive redditor pointed out in the comments of a recent re-post of the Charlie Brooker clip, “mass murders = better ratings for CNN. Telling a network like CNN how to prevent these types of shootings is like a batter telling the pitcher where he likes the ball.” The killer-centric narrative persists because we consume it so readily, and then do little to actually challenge or dismantle its prominence.

I was largely compelled to write on this topic, no doubt, by the events in Paris this weekend. But Fuentes’ article for today, on the difficulties of altering and subverting deeply cemented mainstream narratives, is incredibly relevant here. Fuentes writes, “[Rachael Pringle] Polgreen’s story has in many ways been rendered impermeable, difficult to revise and over-determined by the language and power of the archive.” As Michael wrote last week, sometimes the mere accessibility of certain information can work to solidify a mainstream perspective – in his case, the history of Ralph Johnson as a businessman overshadows and overpowers the history of Johnson as a father or a writer. I am curious how this effect plays out on a much larger scale, as narrative patterns – from occupation-centric histories of individuals to killer-centric histories of tragedies – begin to emerge as the ones we consume and remember.

Taking Responsibility

In the new “Steve Jobs” movie (the new new one, with Fassbender, not Kutcher), there’s a running debate between Jobs and his ex-girlfriend Chrisann Brennan over whether or not Jobs is the father of her child – Jobs says no, Brennan says yes. At one point, Brennan attacks Jobs for a quote he gave TIME magazine in 1983: “28 percent of the male population of the United States could be the father.” Ouch.

 Jobs defends himself by proclaiming that he used some algorithm to get that statistic. But the quote’s implication still stands, obviously. Jobs could try to hide behind this “algorithm,” but at the end of the day he still essentially called his ex a slut in one of the nation’s largest news publications.

As I read William G. Thomas’ “Computing and the Historical Imagination”, my mind returned to this part of the film. In particular, Jobs’ remarks strike me as awfully similar to Time on the Cross: The Economics of American Negro Slavery, which Thomas cites as an early example of computational methods colliding with and fueling historical argument. Thomas explains that Time on the Cross and its authors received intense criticism not just for the accuracy of their data, but also for its arguments, which seemed to paint slavery in a much softer light.

Thomas doesn’t say much about the authors’ reactions to this criticism, but I would imagine that, like many researchers, and like Steve Jobs, their inclination was to point to the data, to shrug the blame off their own shoulders and onto the computer’s.

Certainly, computers do have a certain crystal-ball aura about them that makes hiding behind their predictions incredibly tantalizing. Now more than ever, it is so easy to feed data into a given program or website and receive, seconds later, some output that we can immediately spin into an argument. Often, the mere fact that the computer – the pinnacle of exactness and precision – handled the work is enough for us to accept its output with hardly a second glance. Or, as Kim wrote last week, that vague sense of digital creations being inherently “different” and unique alone gives them an air of authority. But the real danger, I think, comes not when we produce faulty data, but when we position arguments as the product of a computer or an algorithm so that we might absolve ourselves of our responsibility for them.

I know this is a vague topic, so I want to close with a final, brief anecdote. Last week, my electronic literature class talked with author and Twitter bot creator Darius Kazemi over Skype. He told us about the time one of his bots happened to generate a tweet containing a racial slur, and how he promptly deleted the tweet and introduced a language filter to prevent it from happening again. But he still felt bad, and said that it’s his “moral duty” to feel at least some responsibility for his bot’s actions. According to Kazemi, you don’t get to take credit for the work put into a digital project, but then dodge the blame for whatever it generates. We’re guilty by association, I suppose.
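
For a sense of what such a safeguard might look like, here is a minimal sketch of a blocklist-style language filter. It is not Kazemi’s actual code, and the BLOCKLIST entries are placeholders; a real bot would check its generated text against a carefully curated list right before calling the Twitter API.

```python
# Minimal sketch of a language filter for a text-generating bot (not Kazemi's code).
# BLOCKLIST is a placeholder; a real bot would maintain a much more careful list.
BLOCKLIST = {"badword1", "badword2"}

def is_safe(tweet: str) -> bool:
    """Return True only if none of the blocked words appear in the generated text."""
    words = {word.strip(".,!?;:'\"").lower() for word in tweet.split()}
    return BLOCKLIST.isdisjoint(words)

generated = "an automatically generated tweet"
if is_safe(generated):
    print("ok to post:", generated)   # in a real bot, the Twitter API call goes here
else:
    print("filtered, not posted:", generated)
```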

“That belongs in a museum! . . . Or an Omeka exhibit.”

Overall, I like what McLurken has to say. I especially enjoyed his opening anecdote about the initially skeptical student who eventually gained an appreciation for digital scholarship, after McLurken encouraged her to embrace feeling “uncomfortable, but not paralyzed.”

Well, I should say that I like this version of the story – the one that McLurken is obviously touting, and that bears the morals ‘moments of epiphany and introspection can blossom from discomfort,’ ‘digital technology has a place in academia,’ and ‘don’t knock it ’til you try it!’

And yet . . . my cynical mind is pushing me to imagine a slightly different telling of his tale, one a bit closer to my own experiences. What if the wary student was onto something? What if Omeka/Wordpress/DOS/whatever digital tool McLurken had his students use just wasn’t the right tool for the job, at least not for this particular student? What if the “right tool” (again, perhaps only for this student) was a non-digital platform? It’s hard to imagine the student’s discomfort as anything but paralyzing if this were the case, and she had to trudge through building an entire project on a platform she didn’t understand, enjoy, or agree with.

I’m sure most of us are familiar with the experience of being pigeonholed into using a platform that just doesn’t ‘play nice’ with us, or with the material at hand. It isn’t fun, and it stifles creativity and learning. I’ll certainly grant McLurken that more often than not, initial frustrations with technology can be attributed to that universally unpleasant feeling of stepping outside our comfort zones. But occasionally this discomfort has a more substantive root, and may be a sign that we’re trying to jam a square peg into a round hole. It’s the difference between jumping into a cold pool and getting used to the water, and dipping your feet into a green, radioactive pool, then saying “nope, I’ll look for another.”

As Sherwood brings up in his post, our generation risks forfeiting the Internet’s “by the people, for the people” mantra if we continue slinking toward consumptive, rather than creative, behaviors. To keep the Internet in our own hands, we must look critically at the technologies we use – a process that involves, among many other things, learning and deciding which tools and platforms are best for which projects. As students, it’s also critical that we make this decision ourselves. I appreciate that in this course, we’re being given both the time and freedom to do just that.


Speaker for the Dead

Just as I was starting to get a little bit bored reading about the politics and financial woes of Oak Hill Cemetery, the author reeled me back in with this troubling, yet instructive quote from board president George Hill:

“I am absolutely sure that Oak Hill, its board, and its patrons are not ready for ghost tours and dog-walking,” he says. “It’s just not who we are.”

For all the parties that Hill manages to somehow speak for, he also seems to have omitted one entirely… Ah, right, the residents of Oak Hill Cemetery. They should probably get some say in the future of their home, if you ask me – it is their resting place, after all. And if Neil Gaiman’s Graveyard Book is any indication, it really shouldn’t be that difficult to get their opinion, once we find a suitable intermediary.

In all seriousness, though, Hill’s failure (or refusal?) to account for those buried at Oak Hill and how they might have weighed in on ‘what’s best for Oak Hill’ did get under my skin a bit. Sure, we can only talk in “might haves” and “may haves” about the wishes of the dead, and we’ve talked a good deal in class about the dangers and pitfalls of this sort of speculation.

However, this article encouraged me to reflect on what we may gain from this exercise. As Hannah Grace writes, physical artifacts “risk being forgotten unless something or someone new comes along” to spark interest and reinvigorate the discussion – it takes considerable effort to keep the dead ‘alive’, so to speak. So, for all the flaws and limitations of wondering what the dead “might have wanted”, I think the fact that we even attempt this is hugely significant. It signals our dedication to consider and honor these wishes, as well as our refusal to let death extinguish entirely an individual’s voice. It is at once an effort to converse with the dead, and to keep them in the conversation. There’s something beautiful in that.

Coloring Outside the Lines

In my Electronic Literature class today, we had the chance to video-chat with a pretty famous e-lit creator, Jason Nelson. Nelson has worked with a dizzying variety of digital projects and mediums, from Flash games and smartphone apps to VR headsets and Roomba vacuums.

Nelson makes it difficult to sum up his style with a single word, but one that seems to at least describe a good number of his works is “messy.” In fact, we asked him directly during our call today if he harbors some kind of hatred toward clean, minimalist design. His response (paraphrased):

“Hell, yes! I can’t stand it. Modern design doesn’t feel natural; in fact, it feels hollow and artificial. Humans just aren’t built like that. We’re the opposite. We’re incredibly messy creatures. We have tons of thoughts that don’t lead anywhere, and we leak a ton of fluids every day. Humans are super messy.”

As soon as he said this, my own messy thoughts jumped back to our recent discussions in this class about the importance of historical accuracy, and where we draw the lines between fact, exaggeration, and fiction. And Nelson’s comment got me thinking: if humans are messy, why on earth shouldn’t we expect a field dedicated to telling the stories of humans throughout time to be pretty messy, too? Accuracy isn’t a bad ideal to strive toward, since it lets us pull together data to make reasonable arguments, just like any other science.

But there’s still a danger, I think, of getting so caught up in the facts and statistics that we lose sight of the people behind them. I, for one, have read (and likely written) way too many research papers that treat their subjects as variables in some big equation for making sense of someone’s life, as if showing that a person did X and Y can explain with complete certainty why he or she did Z. Most of the time, it doesn’t add up.

Michael contemplates in his post whether the ideals of data curation are too lofty to possibly achieve. I would say yes: if the ideal of data curation is a completely factual and accurate record, then it isn’t just impossible; it’s simply not something we should concern ourselves with. If humans defy cleanliness and straight lines, then history’s goal should be to revel in and account for this messiness – certainly not to correct it.

Mid-Semester Audit: Keeping the Bodies at Bay

During one of our first meetings, someone (head graveDIGer Dr. Shrout, I think) brought up humanity’s tendency to create distance between living bodies and lifeless ones. We may interact with the corpse at a funeral or other ceremony, but only briefly – soon, the dead are either cremated and scattered or buried beneath a narrow plot of land. Notably, this distance is a comfortable but not insurmountable one: with enough resolve, you can find the Davidson graveyard, and visit the tombstones of presidents past.

Digital technology and the internet offer yet another means of creating this arm’s-reach gap meant to keep the corpses at bay. It was only in preparing for this assignment that I fully realized this. A solid majority of our class’ blog posts so far have remarked in some way on the persisting centrality of the physical body in our experience of death, even in a completely non-physical space.
In his first post of the semester, Sherwood grounds this discussion in classic philosophy, boldly asserting that “Descartes was wrong,” and that our physical bodies have always been – and will always be – an integral part of life and death alike. Many of the Galindo reaction posts expand on and develop this very notion. A handful of us reflected on what we gained by actually walking around and ‘experiencing’ the exhibit, perhaps best summed up in Leigh’s poetic description of the effect: “the physical grabs us and keeps us.”

Meanwhile, others focused instead on what is lost in Galindo’s art when distances temporal, physical, and digital stand between us and her original performances.

Even as we transitioned into discussing new readings and topics, we quickly discovered further evidence of a persistent bond between death and the physical body. For example, in our attempt to understand the troubling culture of online death threats and shaming, we’ve frequently pointed to the internet’s capacity as a wall that shields us from ‘real-world’ (often a synonym for ‘physical’) harm and discomfort. Even if we’ve only recently begun to figure this particular topic into our answers to the ‘big idea’ questions for the course, several people have already given it thought in their blogs. Kim was among the first to call our attention to the veil-like powers of the internet, when she admitted near the beginning of the semester that she will not remember Galindo because digital things are but “shadows in mirrors on the grand stage.” This, I think, is just one instance of a topic from this course being constantly re-evaluated and re-explored in the context of new material. I’ve been in far too many classes where each week ends with a big “reset and forget” button. The fact that I can pick out a single topic mentioned during week one of DIG215 and track its development all the way up to our most recent meetings is surely a sign of our class’ commitment to treating this course as one large body of work, rather than a series of dismembered discussions. Puns very much intended, as always.

“Only One Life Left!”: Dying in Video Games

Though I’m a bit scared to don the “gamer” title after Monday’s readings on Gamergate, I won’t let a crazed few keep me from professing my love for Mario and Pokemon. Indeed, it’s hard to take Gamergate participants too seriously, since the ‘movement’ seems to bring a new effigy to the stake each week. In Gamergate’s drunken stumble through gamer culture, a common thread amongst the debates is one already well-trodden: whether, and to what degree, video games affect their players.

For a few years now, games have been the target of choice to explain away whatever we decide to hate about our youth. Kids are violent/sexist/racist? Must be Grand Theft Auto. Before games, the correct answer was rock music, and before that, well – take your pick from comic books, TV, movies, or any other medium.

If it’s not already clear, I’m not a fan of this blame game – but I won’t get any further down that rabbit hole. Because even though I seriously doubt any claim that video games actually inspire violent tendencies, I do think that games, with their often casual and dismissive attitude toward dying, may influence how we think and talk about death.

It’s not even the most violent games that could be at fault here – in fact, it’s often the family-friendly, arcade-style games that treat player (or enemy) death with the least reverence. In Mario, Pac-Man, Sonic, and any number of other games with a “lives” system, dying is quick and painless and readily forgiven. Touch a ghost in Pac-Man, and it’s only a matter of seconds before you’re “alive” again, as if nothing had ever happened. Mario even turns player death into a comedic moment, theatrically turning Mario’s sprite toward the player as he falls through the ground and off the screen.
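
To underline how little ceremony is involved, here is a toy sketch of the arcade “lives” logic described above. It isn’t drawn from any actual game’s source code; the point is simply that death decrements a counter and the player pops right back.

```python
# Toy sketch of an arcade-style "lives" system (not any real game's code).
lives = 3

def respawn() -> None:
    print(f"respawned with {lives} lives left, as if nothing had happened")

def on_player_hit() -> None:
    """Dying costs a life; the game only stops when the counter hits zero."""
    global lives
    lives -= 1
    if lives > 0:
        respawn()
    else:
        print("GAME OVER")

for _ in range(3):   # touch three ghosts in a row
    on_player_hit()
```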

I find this fascinating, especially considering that these are the games typically marketed towards children. Even the most violent “adult” games often treat death with more gravitas. The Grand Theft Auto series, for example, has become known for its cinematic “death-cam” which triggers a black and white filter and slow-mo effect as you watch your character collapse heroically.

I know that this is all just speculation, and that perhaps games are just a product of a generation already indifferent toward death. Still, I can’t help but wonder what the effects may be of games that portray death as comedic or reversible or trivial or all of the above. A casual attitude toward death might have partially inspired the language used in threats on Zoe Quinn’s life – threats that may not typically produce physical harm, but as Michael suggests, can effectively ‘kill’ an individual’s voice on the internet.

To close with a somewhat less concerning example: It’s always a bit weird to hear my younger brothers – just 13 and 14 years old – already tossing around words like “kill [the Goomba]” and “die[, Princess Peach!]” while they play Mario Kart or Super Smash Bros. Neither of them would hurt a fly, but they’ll sure talk with murderous intent about those damned blue shells.
