Certainly theory and making are inseparably bound, even if we don’t always realize or intend it. I think we tend to talk about artifacts in terms of their explicit and implicit arguments: a website may feature a concluding statement that touts one argument, but then endorse another via its presentation, layout, or medium. In the digital world especially, it becomes difficult to differentiate arguments generated intentionally by the author from those generated circumstantially by the tool or medium. And of course, the question still stands as to whether and when this distinction matters. Should the constraints or affordances of a medium be dismissed or excused as limiting reagents (“for something made in MS Paint, it makes a convincing argument”), or should they be weighed as deliberate and argumentative in their own right? Both? I’ve tried to pick a few artifacts that investigate this struggle.
In researching Cornelia Shaw, I’ve come across several sources that cite her importance to the Davidson community – the Davidsonian wrote of her as a “most valuable friend to the college,” and her biographical page on the archives’ site emphasizes her close bonds with members of the student body.
The college’s perception of Shaw, while valuable, does not provide a very comprehensive picture of her. I’m curious what other groups and individuals – her family and her colleagues, for example – thought of Shaw, and I’ve tried to organize my database model around this question. For each of my sources, I asked myself whose opinion of Ms. Shaw the text mainly informs or reflects, then organized my sources into categories based on my conclusion. The sparsest category by far is “As Seen by Her Family,” since the college’s own sources on Shaw focus almost exclusively on Shaw herself, not her family members. I will likely need to look beyond the college archives to gather more info about the Shaw family. The last of my four groups is dedicated to capturing Shaw’s opinion of herself. Unless I magically stumble upon Shaw’s diary or autobiography, this category will likely be a tricky one to flesh out – for now, I’ve put in the things that she herself wrote. I may end up taking some poetic license here, as I try to tease out Shaw’s opinion of herself from her writings.
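For the curious, here’s a rough sketch (in Python) of what this grouping looks like in practice. The source titles and the category labels beyond “As Seen by Her Family” are stand-ins for illustration, not my actual schema:

```python
from collections import defaultdict

# Hypothetical category labels – only "As Seen by Her Family" is a
# name I actually use; the others are stand-ins for illustration.
PERSPECTIVES = [
    "As Seen by the College",
    "As Seen by Her Colleagues",
    "As Seen by Her Family",
    "As Seen by Herself",
]

# Each source gets tagged with the perspective it chiefly reflects.
sources = [
    {"title": "Davidsonian tribute", "perspective": "As Seen by the College"},
    {"title": "Archives biographical page", "perspective": "As Seen by the College"},
    {"title": "Shaw's own writings", "perspective": "As Seen by Herself"},
    # ...one entry per source...
]

# Group sources by perspective to see at a glance which categories
# are well-populated and which (like the family's) are sparse.
by_perspective = defaultdict(list)
for source in sources:
    by_perspective[source["perspective"]].append(source)

for label in PERSPECTIVES:
    print(f"{label}: {len(by_perspective[label])} source(s)")
```

Nothing fancy – the point is just that tagging each source with a single ‘whose view is this?’ field makes the gaps in my evidence immediately visible.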
There’s a video that invariably gets posted to Reddit in the aftermath of tragedies with high death tolls, and Friday night’s attack in Paris was no exception. It’s a clip from English satirical journalist Charlie Brooker, in which he criticizes the mainstream media for their coverage of these events. Brooker points to the media’s tendency to produce a killer-centric, rather than victim-centric, narrative as harmful for all who watch it. It’s panic-inducing for the peaceful individuals who only want to grieve, and propaganda for the disturbed viewers who may simply be waiting for a final spark of inspiration before launching their own attack.
It’s easy to lay the blame exclusively on CNN, Fox News, and other media outlets who flood our TV screens with images, videos, and trivia about the perpetrator(s). But as one perceptive redditor pointed out in the comments of a recent re-post of the Charlie Brooker clip, “mass murders = better ratings for CNN. Telling a network like CNN how to prevent these types of shootings is like a batter telling the pitcher where he likes the ball.” The killer-centric narrative persists because we consume it so readily, and then do little to actually challenge or dismantle its prominence.
In the new “Steve Jobs” movie (the new new one, with Fassbender, not Kutcher), there’s a running debate between Jobs and his ex-girlfriend Chrisann Brennan over whether or not Jobs is the father of her child – Jobs says no, Brennan says yes. At one point, Brennan attacks Jobs for a quote he gave TIME magazine in 1983: “28 percent of the male population of the United States could be the father.” Ouch.
Jobs defends himself by proclaiming that he used some algorithm to get that statistic. But the quote’s implication still stands, obviously. Jobs could try to hide behind this “algorithm,” but at the end of the day he still essentially called his ex a slut in one of the nation’s largest news publications.
As I read William G. Thomas’ “Computing and the Historical Imagination,” my mind returned to this part of the film. In particular, Jobs’ remarks strike me as awfully similar to Time on the Cross: The Economics of American Negro Slavery, which Thomas cites as an early example of computational methods colliding with and fueling historical argument. Thomas explains that Time on the Cross and its authors received intense criticism not just for the accuracy of their data, but also for their arguments, which seemed to paint slavery in a much softer light than previous accounts had.
Certainly, computers have a crystal-ball aura about them that makes hiding behind their predictions incredibly tantalizing. Now more than ever, it is easy to feed data into a given program or website and receive, seconds later, some output that we can immediately spin into an argument. Often, the mere fact that the computer – the pinnacle of exactness and precision – handled the work is enough for us to accept its output with hardly a second glance. Or, as Kim wrote last week, that vague sense of digital creations being inherently “different” and unique alone gives them an air of authority. But the real danger, I think, comes not when we produce faulty data, but when we position arguments as the product of a computer or an algorithm so that we might absolve ourselves of responsibility for them.
Overall, I like what McLurken has to say. I especially enjoyed his opening anecdote about the initially skeptical student who eventually gained an appreciation for digital scholarship after encouragement from McLurken to embrace feeling “uncomfortable, but not paralyzed.”
Well, I should say that I like this version of the story – the one that McLurken is obviously touting, and that bears the morals ‘moments of epiphany and introspection can blossom from discomfort,’ and ‘digital technology has a place in academia,’ and ‘don’t knock it ’til you try it!’
And yet… my cynical mind is pushing me to imagine a slightly different telling of his tale, one a bit closer to my own experiences. What if the wary student was onto something? What if Omeka/WordPress/DOS/whatever digital tool McLurken had his students use just wasn’t the right tool for the job, at least not for this particular student? What if the “right tool” (again, perhaps only for this student) was a non-digital platform? It’s hard to imagine the student’s discomfort as anything but paralyzing if that were the case, and she had to trudge through building an entire project on a platform she didn’t understand, enjoy, or agree with.
I’m sure most of us are familiar with the experience of being pigeon-holed into using a platform that just doesn’t ‘play nice’ with us, or with the material at hand. It isn’t fun, and it stifles creativity and learning. I’ll certainly grant McLurken that more often than not, initial frustrations with technology can be attributed to that universally unpleasant feeling of stepping outside our comfort zones. But occasionally this discomfort has a more substantive root, and may be a sign that we’re trying to jam a square peg into a round hole. It’s the difference between jumping into a cold pool and getting used to the water, and dipping your feet into a green, radioactive pool, then saying “nope, I’ll look for another.”
As Sherwood brings up in his post, our generation risks forfeiting the Internet’s “by the people, for the people” mantra if we continue slinking toward consumptive, rather than creative, behaviors. To keep the Internet in our own hands, we must look critically at the technologies we use – a process that involves, among many other things, learning and deciding which tools and platforms are best for which projects. As students, it’s also critical that we make this decision ourselves. I appreciate that in this course, we’re being given both the time and freedom to do just that.
Just as I was starting to get a little bit bored reading about the politics and financial woes of Oak Hill Cemetery, the author reeled me back in with this troubling, yet instructive quote from board president George Hill:
For all the parties that Hill manages to somehow speak for, he seems to have omitted one entirely… Ah, right, the residents of Oak Hill Cemetery. They should probably get some say in the future of their home, if you ask me – it is their resting place, after all. And if Neil Gaiman’s The Graveyard Book is any indication, it really shouldn’t be that difficult to get their opinion, once we find a suitable intermediary.
In all seriousness, though, Hill’s failure (or refusal?) to account for those buried at Oak Hill, and how they might have weighed in on ‘what’s best for Oak Hill,’ did get under my skin a bit. Sure, we can only talk in “might haves” and “may haves” about the wishes of the dead, and we’ve talked a good deal in class about the dangers and pitfalls of this sort of speculation.
In my Electronic Literature class today, we had the chance to video-chat with a pretty famous e-lit creator, Jason Nelson. Nelson has worked with a dizzying variety of digital projects and mediums, from Flash games and smartphone apps to VR headsets and Roomba vacuums.
“Hell, yes! I can’t stand it. Modern design doesn’t feel natural; in fact, it feels hollow and artificial. Humans just aren’t built like that. We’re the opposite. We’re incredibly messy creatures. We have tons of thoughts that don’t lead anywhere, and we leak a ton of fluids every day. Humans are super messy.”
As soon as he said this, my own messy thoughts jumped back to our recent discussions in this class on the importance of historical accuracy, and where we draw the lines between fact, exaggeration, and fiction. And Nelson’s comment got me thinking: if humans are messy, why on earth shouldn’t we expect a field dedicated to telling the stories of humans throughout time to be pretty messy, too? Of course, accuracy isn’t a bad ideal to strive toward, since it lets us pull together data to make reasonable arguments, just like any other science.
During one of our first meetings, someone (head graveDIGer Dr. Shrout, I think) brought up humanity’s tendency to create distance between living bodies and lifeless ones. We may interact with the corpse at a funeral or other ceremony, but only briefly – soon, the dead are either cremated and scattered or buried beneath a narrow plot of land. Notably, this distance is a comfortable but not insurmountable one: with enough resolve, you can find the Davidson graveyard, and visit the tombstones of presidents past.
Though I’m a bit scared to don the “gamer” title after Monday’s readings on Gamergate, I won’t let a crazed few keep me from professing my love for Mario and Pokemon. Indeed, it’s hard to take Gamergate participants too seriously, since the ‘movement’ seems to bring a new effigy to the stake each week. In Gamergate’s drunken stumble through gamer culture, a common thread amongst the debates is one already well-trodden: whether, and to what degree, video games affect their players.
For a few years now, games have been the target of choice to explain away whatever we decide to hate about our youth. Kids are violent/sexist/racist? Must be Grand Theft Auto. Before games, the correct answer was rock music, and before that, well – take your pick from comic books, TV, movies, or any other medium.
If it’s not already clear, I’m not a fan of this blame game – but I won’t get any further down that rabbit hole. Because even though I seriously doubt any claim that video games actually inspire violent tendencies, I do think that games, with their often casual and dismissive attitude toward dying, may influence how we think and talk about death.
It’s not even the most violent games that could be at fault here – in fact, it’s often the family-friendly arcade-style games that treat player (or enemy) death with the least reverence. In Mario, Pac-Man, Sonic, and any number of other games with a “lives” system, dying is quick, painless, and readily forgiven. Touch a ghost in Pac-Man, and it’s only a matter of seconds before you’re “alive” again, as if nothing had ever happened. Mario even turns player death into a comedic moment, theatrically turning Mario’s sprite toward the player as he falls through the ground and off the screen.
I find this fascinating, especially considering that these are the games typically marketed toward children. Even the most violent “adult” games often treat death with more gravitas. The Grand Theft Auto series, for example, has become known for its cinematic “death-cam,” which triggers a black-and-white filter and slow-mo effect as you watch your character collapse heroically.
I know that this is all just speculation, and that perhaps games are just a product of a generation already indifferent toward death. Still, I can’t help but wonder what the effects may be of games that portray death as comedic or reversible or trivial or all of the above. A casual attitude toward death might have partially inspired the language used in threats on Zoe Quinn’s life – threats that may not typically produce physical harm, but, as Michael suggests, can effectively ‘kill’ an individual’s voice on the internet.
To close with a somewhat less concerning example: It’s always a bit weird to hear my younger brothers – just 13 and 14 years old – already tossing around words like “kill [the Goomba]” and “die[, Princess Peach!]” while they play Mario Kart or Super Smash Bros. Neither of them would hurt a fly, but they’ll sure talk with murderous intent about those damned blue shells.