Tag Archives: digital humanities

YOIT: The dark side of DH

Despite all my good intentions, I fell off the blogging wagon yesterday, thanks in large part to my annual early-January flu, which this year has coincided with MLA. Rather than being at yesterday’s panel on “The Dark Side of Digital Humanities,” as I had planned, I followed it via twitter from my hotel room, where I was fighting a fever. William Pannapacker’s Chronicle piece on the panel and Alexis Lothian’s notes both provide helpful summaries of the issues discussed.

One of the things the defenders of DH note in response to the admittedly provocative panel is that DH seems to be conflated in the minds of panelists with MOOCs, but that in actuality, nobody who does DH (and few people in English departments at large) is actually a fan of MOOCs. That may be true, but, as this conversation helpfully points out, that distinction is often not so clear in the eyes of administrators. Pannapacker cites Natalia Cecire’s succinct and accurate tweet: “1. DHers usually don’t see dh as a panacea. 2. Admins often do. 3. DHers often need for admins to have this erroneous belief.”

That’s something that bears continued discussion, because while DH’s emphasis on hacking can be seen as both transformative and subversive in relation to traditional academic practice and hierarchies, hacking can also mean doing more with less and making do with limited resources. While resourcefulness is a virtue, in a time of increased budget cuts and decreased respect for the humanities, the very buzzwords that make DH attractive to administrators–efficiency, productivity, even something as broad as “technology”–often imply a streamlining of resources and personnel that works to further marginalize the position of the humanities in relation to the rest of the university.


Learn to code?

As I’ve said, I’m often on the fringes of digital humanities. I try to follow what’s going on (although, in the interest of not being on the internet all the time, not as closely as I might), partly because digital archives are important to the way I work, and partly because I think there’s interesting and important stuff going on over there. But I have to admit, I’ve mostly ignored the calls to make 2012 the year of coding. I read references to Codecademy and moved on because, well, I’m busy.

And then there was a bit of a kerfuffle over gender and the exhortation that digital humanists learn to code. Miriam Posner said it first and best here. Follow-up is here. Both are right on, and I’m not going to bother linking to or recapping the inevitable back-and-forth on gender and coding that followed because, well, I’m lazy.

But Posner’s discussion of gender and coding, particularly the part about how men are more likely to have been given access to a computer and encouraged to learn to use it at a young age, got me thinking. I think she’s right. That’s certainly what I’ve observed, both among the people I know and in my own experience as a woman (girl at the time) who did have computer access and tech skills. It’s something I often forget about myself, but I do know how to code. I learned BASIC and C++ at computer camp in high school, Unix for my first job, HTML from years of using the internet, and tiny bits of CSS during the two years I was hosting my own knitting blog (I also know how to knit, spin, quilt, sew, and participate in the overwhelmingly female DIY online communities Posner talks about in her follow-up). My freshman year of high school, I won the state-wide math and science fair with a project on Benford’s law. I wrote some code to test data sets for first- and second-digit distribution, which would be beyond laughable today, but which was a bit more impressive before everyone had access to high-powered computers on their phones. I wrote the code in C++ and it was a pain in the butt, and then a year and a half later I learned Unix scripting and realized how much quicker it would have made the whole thing.
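For the curious: the kind of check that project performed is only a few lines in a modern scripting language. This is a minimal sketch in Python (not my original C++, which is long gone), comparing the digits of a data set against Benford’s expected distribution; the function names are my own invention here.

```python
import math
from collections import Counter

def benford_expected(digit, position=1):
    """Expected probability of `digit` in the given position under Benford's law."""
    if position == 1:
        # First digit d (1-9): log10(1 + 1/d)
        return math.log10(1 + 1 / digit)
    # Second digit d (0-9): sum over all possible first digits
    return sum(math.log10(1 + 1 / (10 * d1 + digit)) for d1 in range(1, 10))

def digit_distribution(values, position=1):
    """Observed frequency of each digit in the given position of the values."""
    digits = []
    for v in values:
        # Drop the decimal point and leading zeros, e.g. 0.045 -> "45"
        s = str(abs(v)).replace(".", "").lstrip("0")
        if len(s) >= position:
            digits.append(int(s[position - 1]))
    counts = Counter(digits)
    total = len(digits)
    if total == 0:
        return {}
    return {d: counts.get(d, 0) / total for d in range(0 if position > 1 else 1, 10)}
```

Comparing `digit_distribution(data)` against `benford_expected(d)` for each digit is the whole test: genuine naturally-occurring data tends to start with 1 about 30% of the time, and fabricated data usually doesn’t.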


Why I study reception: methodology(ish) edition

Still blocked on this chapter. I’ve resolved not to fight it too much, but to get some other things done, too, while I wait for things to work themselves out. In the meantime, what it is I’m doing when I study reception:

As I noted earlier, I want to know what people said about novels when they were first published, and what sort of strategies they had available for reading them. To be honest, that’s often enough for me—if my committee would let me stop at description, I’d read a whole bunch of reviews and describe away. But that generally falls a bit short of an actual argument, so the most important part of my job (and what’s been holding me up for a while on the new chapter, just like it did on the last one) is showing what it is about the reviews that’s significant to our understanding of the novel, the criticism, or some other broader issue.

In order to do that, the first thing I do is find every review I can of the book and read them all, labeling each one with the major themes that I see recurring. Scrivener is particularly good at handling the practical side of this, but that’s a different post. Then I read them again, and again, paying attention to the different strands running through them, and then I keep reading them until I have some sense of the story they tell. Then I try to tell it.

Not very systematic, right? It generally takes a whole lot of re-reading and writing a bunch of crappy notes before I figure out what it is about the reviews that’s interesting and significant. Once I see the seeds of the argument, it gets a bit easier. Then I’m putting together a case and providing evidence to support my claims that we should interpret the reviews in the way I’m proposing.

The evidence I provide is textual—direct quotation, paraphrase, and analysis of the reviews. And this is where it starts to get sticky, because how much evidence is enough evidence? None of the patterns I point out are going to be present in all the reviews. The impulse then is to count—tally up how many reviews deal with x theme and offer some percentages. But there are lies, there are damn lies, and there are statistics. What would the numbers actually show? That 36% of the reviews I have available evidence this trend? My data set is “reviews I am able to find.” I’m able to find enough reviews that I can make some fairly confident arguments about the patterns and trends I see across them, but that doesn’t make them a statistically significant sample size of all the reviews of the book that were published. In this case, statistics would be damn lies, provided to give a sense of significance that mostly just takes advantage of our cognitive biases about certain types of data.
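The tallying impulse itself is trivial to act on, which is part of why it’s so tempting. A sketch, assuming I’d already labeled each review with its themes (the review data here is entirely made up for illustration):

```python
from collections import Counter

# Hypothetical labeled reviews: each one tagged with the themes I noticed in it.
reviews = [
    {"id": 1, "themes": {"realism", "morality"}},
    {"id": 2, "themes": {"realism"}},
    {"id": 3, "themes": {"morality", "style"}},
]

def theme_percentages(reviews):
    """Percentage of the *available* reviews touching each theme.

    Note the denominator: this is a share of 'reviews I was able to find',
    not of all reviews ever published -- which is exactly the problem.
    """
    counts = Counter(theme for r in reviews for theme in r["themes"])
    n = len(reviews)
    return {theme: round(100 * c / n, 1) for theme, c in counts.items()}
```

The arithmetic is easy; what it can’t tell you is whether the surviving, findable reviews are representative of everything that was printed, which is why the percentages lend false precision rather than real significance.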

And that’s why the “so what” part of my argument is so important. I need to find compelling connections between the reviews, the novel itself, the historical context, and the criticism. If I can do that, then I don’t have to rely on some false sense of quantitative significance to justify my argument. Unfortunately, finding those connections takes a lot longer than crunching the numbers.

All of this puts me on the fringes of DH work and debates about quantitative analysis in literary study. I don’t do quantitative analysis, and despite the hypotheticals above, I’ve never attempted it, but I can’t say I haven’t thought about it. Most of the time, I have trouble seeing what data mining actually adds to the conversation, but in the face of curmudgeonly responses like Stanley Fish’s, I sometimes want to try it out just to be ornery. I’ve got some more thoughts on data mining that I want to work through at some point, but until then, Ted Underwood has a pretty thorough response to Fish’s grumpy rant.