While Stratford is gearing up for the Shakespeare birthday celebrations this weekend, I’m winding down as I prepare to go on leave for a little while. During that time, this blog will most likely remain silent, though I certainly hope to continue thinking about Shakespeare and digital culture while away! In the last couple of months I’ve had two journal articles published on the topic, and this summer I have a book chapter coming out. All three pieces of writing have been greatly informed by discussions begun on this blog, so I’d like to say a special thank you to everyone who’s taken the time to read my posts and comment on them, either online or in person. Here are the references to the publications if you’re interested:
I’ve also received some great emails from people creating their own digital Shakespeares. I leave you with a video from Ben Yackshaw, featuring a human and a robot sitting down to discuss Romeo and Juliet 🙂
I’ve been thinking lately about why I originally set up this blog, and why, more than three years later, I continue to post on it. In the very early days I think I was looking for a place to work through some emerging ideas about digital forms of performance, in particular live broadcasts. I was still in the midst of another research project on a different topic, and I knew that I wouldn’t be able to publish any work in this new area for quite some time. The blog seemed like a good way to document ideas as they came up, to get feedback on them, and then one day to put them together into something longer and more detailed — i.e. an academic publication.
The difference now is that ‘one day’ has finally arrived. The older project is finally done and dusted and the monograph out, and now my primary focus is on the publications that will come out of this research into digital technology and Shakespearean performance. At last, I’m able to devote the bulk of my research time to these ideas, and that time has also dramatically expanded, as I’m on study leave for about 7-8 months of this year. Hurrah!
But the thing I didn’t expect is that, now that I have the time and energy to focus solely on this digital research, I’ve actually started to blog less. In fact, I haven’t written a proper post on the subject for more than half a year. Instead, I’ve been writing up this research as a series of journal articles and chapters, and making plans for the book that will eventually come. Every research day has gone to this more publication-oriented mode of writing, and as a result the blog has fallen by the wayside.
So now that I am officially on sabbatical, I thought I’d take some time to reflect on what I’ve learned about both my research and myself as a researcher through blogging, and to think about what I hope to get from it in the future…
1) Blogging offers a way of working oneself into a new research area, especially when time is limited and has to be split among many other things.
The biggest difference for me between starting my first book project and my second one has been time. When you’re working on your PhD, at least in the UK, your main focus is your research. After I started my first job I was suddenly responsible for a lot more things and many more people. Extended periods of research time took a particularly painful hit: I went from spending 4-5 days a week on my research to 1 if I was lucky. So this blog became a way of stealing snatches of time in between teaching, meetings, proof checking, and everything else to start working my way into a new topic. I could have done all this privately, keeping my own personal research diary, but to be honest being able to share my ideas with others was more motivating. This might be useful for me to remember in other aspects of my life: if I really want to do something, do it publicly/socially.
2) Blogging has allowed me to work up chunks of writing (and thinking) that can become part of future publications.
This is true, but also a bit trickier than I originally expected. It’s definitely been the case that several of the details I focused on in blogs have become key points in articles that I’ve recently been drafting. But I’ve also found myself a bit unsure about how to draw on this previous writing without duplicating it. For the most part I’ve developed existing points in new terms, but there are instances in which I’m just really happy with the way I originally wrote it. So I’ve actually been thinking about redacting the occasional sentence from some of my posts, should it prove an issue. I’m still not sure about all of this: I think it’s a grey area and that feelings about it can differ depending on who you ask. About 80% of A Year of Shakespeare had been published online before it became a book, for instance, and all that material is still available through www.yearofshakespeare.com. But I know that others are understandably more wary about material previously posted online, and so I’ve started thinking more pragmatically about what can go on the blog as I come closer to getting some of my ideas more officially in print.
3) Blogging has helped me become part of a community of researchers in this field, both directly and indirectly.
This maybe seems like a no-brainer: blogging is social, responsive, immediate, conversational. You can respond to ideas in a few hours, whereas academic publishing would at best take a few months, and more realistically a few years. This doesn’t necessarily make blogging better than academic publishing–just different. I’ve been able to get talking to others in the field, both directly and indirectly, and to learn from them as I go. This has perhaps been the greatest benefit for me. The flip side is, now that I feel well connected and reasonably well read in the field, I kind of just want to get my head down and write my ideas up the old-fashioned way. Blogging has been a great way of getting started, but, as yet, not the most natural way of continuing on.
4) Blogging can take a lot of different forms and, presumably, they can change with time.
This is probably the most important thing for me right now. When I first started blogging, I was careful to post regularly and to make sure that those posts were in-depth pieces of writing that I would be happy to publish in more academic contexts. I still really value those posts, and I must say that they’ve been the most helpful in terms of generating feedback from others and establishing some of the key issues that have turned up again in longer publications. But shorter, more whimsical, more descriptive, and/or more irregular posts have their place too. I suspect that as I get further into the writing of this project, the blog posts will become more about the process of writing or the activities that surround and support the writing, rather than the writing itself. We’ll see; I might surprise myself. But given how precious having time to write is, I plan to make the most of it while I have it. This blog–or, who knows, maybe a future one–will always be there when it’s time for something different.
March was a big month for me – my first monograph, Beyond Melancholy, came out with Oxford University Press. The book focuses on the different ways in which Shakespeare and his contemporaries understood and thought about sadness, and how this influenced explorations of identity and self-experience. While my digital Shakespeare research is in many ways a world apart from this work on the history of emotions, there are some important connections in terms of how new technologies shape how we feel and how we experience our own sense of self. I wrote the short essay below for OUP’s blog last week, and while it’s mostly about Renaissance sadness, you’ll quickly see that 21st century digital technology has made its way in too…
In September 2013, the American comedian Louis C.K. talked to chat-show host Conan O’Brien about the value of sadness. His comments emerged from a discussion about mobile phones, and the way they may distract us from the reality of our emotions. ‘You need to build an ability to just be yourself and not be doing something. That’s what the phones are taking away, the ability to just sit there. That’s being a person.’
For Louis C.K., a large part of that ‘being there’, of being a person, is about being sad. ‘[S]ometimes when things clear away, you’re not watching anything, you’re in your car … it starts to visit on you. Just this sadness. Life is tremendously sad, just by being in it.’ And the best response to this, he suggests, isn’t to dodge the feeling by picking up a mobile phone, but rather to look at it head on, ‘and let it hit you like a truck … Sadness is poetic. You’re lucky to live sad moments.’
Four hundred or so years ago, around the time of Shakespeare, Queen Elizabeth I, John Donne, and King James I, people also talked about the meaning of sadness, and whether or not it brought any value to life. While few would have described the experience of sadness as ‘lucky’, many did suggest that, in the right contexts, the emotion could be seen as useful, productive, and even enlightening. Think of Shakespeare’s King Lear on the stormy heath, whose extraordinary sorrow helps him see life from a different point of view, to acknowledge the suffering of his impoverished subjects and ‘to feel what wretches feel’.
If we read much of the literature of this time – and perhaps any time – we discover a world of agonizing, and yet somehow also constructive, pain and sorrow. Emotion is repeatedly represented as an extension of the self, meaning that as characters start to know their feelings, they also start to understand themselves and the world that they’re a part of. At the same time, if we read much of the more formal and explanatory writing on emotion from this period, we get a rather different story. Here, writers frequently characterized emotion as a ‘malady’, a ‘perturbation’, and even a ‘disease of the soul.’ For emotion was believed to cause motion in the mind and body, which could destabilize rational thinking and jeopardize the harmony of the self.
This was nowhere truer than in the experience of sadness. Of all the emotions recognized and discussed at this time – or of all ‘the passions’, as they were called then – sadness or ‘grief’ was widely regarded as the most dangerous and damaging. Countless writers emphasized the physical ailments sad feelings could bring. ‘There is nothing more enemie to life, then sorrow’, the humanist and diplomat Thomas Elyot wrote in his best-selling medical regimen The Castell of Health, and the theologian Thomas Wright likewise advised readers in his The Passions of the Minde in Generall to ‘Expell sadnesse farre from thee; For sadnesse hath killed many, neither is there any profite in it.’
Physicians agreed, identifying the passions as one of the six ‘non-natural’ factors dramatically influencing health (the other five being diet, sleep, exercise, environment, and, to put it delicately, ‘evacuation’). Linked to the cold, dry humour of melancholy (literally meaning ‘black bile’ in Greek), sadness was seen as the harbinger of numerous bodily troubles, including stomach aches, light-headedness, heart palpitations, and wasting illnesses, which, in their most extreme forms, might even cause death.
Indeed, while we might now think that dying of sorrow is a rather sentimental idea fit only for the stage, in the seventeenth century ‘grief’ was regularly included as a cause of death in the London Bills of Mortality, which were one of the earliest forms of municipal record keeping. Though many of the Bills no longer survive, if we look through those that do remain, we can see that during the years 1629-1660 more than 350 people in the city of London were believed to have died from extreme sadness. Elyot and Wright’s comments, it seems, were not idle threats.
And yet, despite the palpable dangers posed by sadness at this time, many writers still suggested that it had important benefits, and even a kind of ‘poetry’, to harken back to Louis C.K.’s twenty-first-century observations. First and foremost, these writers insisted that there were different sorts of sadness, which had different effects on the mind, body, and soul. ‘Grief’ was not always identical to ‘melancholy’, which was certainly not the same as ‘godly sorrow’ or ‘despair’ – both of which had much more to do with theology and the immortal soul than physiology and the medical body.
Second, and perhaps most importantly, these different sorrows didn’t mean the same thing irrespective of the sufferer. Even a dangerous grief could be productive if the person experiencing it deemed it so. In the literature and historical records of the period we can find numerous instances of people defying the advice of doctors, priests, family, and friends, and persisting in sorrow due to a belief that it revealed something important to them about their own sense of self.
Many scholars have suggested that culture offers people ‘emotional scripts’ by which to make sense of and act out their feelings, but looking at responses to sadness in Renaissance England we can also see how people engaged in what I call ‘emotive improvisation.’ These wilful, and often defiant, responses took sufferers ‘off book’ and towards new ways of understanding emotional experience and self-discovery. They show us what happened when people ‘put the phone down’, as it were, and let life hit them like a truck.
Call me naive, but I would have assumed that commercial theatre (i.e. Broadway and the West End) would be better equipped to capitalize on new financial ventures such as live broadcasting. But this piece from the New York Times — ‘Off Off Off Broadway (at Your Multiplex)’ — suggests that this isn’t the case. On the contrary, established arts institutions like the Met Opera and the National Theatre have both the cultural heft and long-term structure to be able to develop an in-house broadcasting programme and to keep it supplied with a steady stream of productions:
‘The scattershot attempts to follow National Theater Live and the Met suggest that there is still plenty of head scratching about the financial and philosophical issues behind the idea of canning Broadway for mass consumption.
Julie Borchard-Young, who along with her husband, Robert Borchard-Young, runs BY Experience, said the sheer institutional might of companies like the Met and the National made the process far easier to navigate than it would be for an individual Broadway producer. Ms. Borchard-Young, who knew the Met’s general manager, Peter Gelb, from his days at Sony Classical, was the one who initiated conversations about bringing opera to the movie-theater masses.
“Broadway is not a single unified institution that can do all the legwork to prepare the marketplace,” she said. “Also, the serial nature is important. When you have a series of productions, everything from marketing to other costs are easier to handle.”’
The other point the article makes is about the manifold challenges of negotiating artists’ contracts and associated financial rights — something I’ve heard reiterated in smaller-scale digital projects as well.
Unsurprisingly, the question of money is a very, very important one, and one we perhaps hear far too little about. How much does it cost to produce a live broadcast, and to what extent are those costs recouped through cinematic and DVD/download distribution?
I did some reviewing work for Routledge in the autumn, and in return they very kindly sent me some new entries for my quickly growing digital Shakespeares reading list. Mostly these books have been decorating my desk over the holidays, but I have managed to get started with Steven E. Jones’s recently published The Emergence of the Digital Humanities. It’s a very helpful guide to the growth of digital humanities as an academic field over the last fifteen or so years, though Jones points out that ‘DH’ didn’t become ‘a thing’, as it were, until about 2009 — which apparently is 5 years ago now… (Side note: reading about digital technology and culture is involving a lot of moments in which I encounter dates that look very recent and then realize that, actually, they aren’t.)
Jones works his way through a number of interesting case studies that allow him to talk about the dimensions, people, places, things, publications, and practices that make up digital culture, with each word being the title of a different chapter in the book. His thesis is that digital life is material, located, and social, and most fundamentally that it can no longer be clearly separated from what we might be tempted to call ‘real life’. Virtuality, he suggests, is not a very useful way of thinking about what the digital is anymore, seeing as how digital tools are so enmeshed in so many very real aspects of modern daily life (think email, iPods, Google maps, smart phones, GPS, e-readers, swipe cards, and digitized records of all different varieties).
Rather than think of the digital as something that happens in a weird sort of cyberspace, Jones argues that we need to accept that it has ‘everted’, or exploded outward into the world at large. The result is an integrated, but nonetheless very mixed reality, in which we are constantly presented with ‘the paradox of living in two worlds at once.’ (Another side note: Jones tells us that the terms ‘cyberspace’ and ‘eversion’ both come from the sci-fi writer William Gibson, who coined them in 1982 and 2009, respectively. Which makes me wonder — what is Gibson writing about now??)
I find Jones’s central argument very helpful and persuasive, and I’m interested in how people like me approach the process of moving between the highly related but still distinctive worlds of analogue and digital (I’m tempted to use the word ‘toggle,’ but I feel like that must say something about my own digital coming of age). I think this process of oscillation is what I’m trying to understand better as I think about the experience of watching a performance live in-person versus live on-screen, or of trading Shakespeare quotes with a group of people through Twitter versus underlining passages in my own hard copy of the complete works.
These questions are put into sharper relief this week as my university resumes teaching, and I find myself giving the same lectures and leading the same seminars on-site for campus students and online for distance learning ones. My approach has always been to combine and blend the two groups as much as possible, extending the on-site into the online, and the online into the on-site. But it would be silly to say that differences don’t remain. Which makes me wonder, are there limits to eversion, or is it simply a matter of time?
The British Milton Seminar meets twice yearly to discuss papers on subjects relating to John Milton's life, work and times, together with his legacy and influence. The seminar is open to academic and academic-related staff and to postgraduate students.