Inside the Computer Clubhouse (Part One of Three)

The Computer Clubhouse is a worldwide network of digital literacy programs in after-school settings. The first Clubhouse opened in 1993 at The Computer Museum in Boston, an outgrowth of the work being done at the MIT Media Lab by Mitchel Resnick and Natalie Rusk. By 2007, there were more than one hundred Clubhouses worldwide. I have long admired the extraordinary impact of the Computer Clubhouse movement, having had the privilege to get to know Resnick and others associated with the project during my many years at MIT. Few other programs have had this kind of impact on learning all over the planet, getting countless young people more engaged with the worlds of programming and digital design through an open-ended, constructionist practice which respects each learner's goals and interests.

A new book, The Computer Clubhouse: Constructionism and Creativity in Youth Communities, pays tribute to the movement's fifteen-plus-year history, sharing some of its key successes and offering insights into what has made the Clubhouses work so well. The highly readable book, addressed to educators of all kinds who want to make a difference in addressing the digital divide and the participation gap, was produced for Teachers College Press by some key veterans of the movement -- Yasmin B. Kafai, Kylie A. Peppler, and Robbin N. Chapman. I know this book is going to be of great interest to many of you who follow this blog because of your interest in new media literacies. The publisher was nice enough to arrange an interview with the editors for this blog, and I will be sharing their perspectives over the next three installments. In this installment, they share something of the goals and history of the Clubhouse movement. In future installments, we will dig deeper into its global impact and its governing pedagogical assumptions.

Kafai's work will already be familiar to regular readers, since she participated in an interview I did a year or so back with the editors of Beyond Barbie and Mortal Kombat: New Perspectives on Gender and Gaming.

How would you describe the vision behind the Computer Clubhouse movement? What factors led to the creation of the first Computer Clubhouse?

YASMIN: It all started out in the Computer Museum. Yes, in the late 80's there was a museum in Boston with a walk-through computer (it has since moved into the Museum of Science). It sat right next to the Children's Museum, and its mission was to make information technology more accessible to the public. Many of the exhibits allowed visitors to take a closer look at the inner workings of a computer, and some even asked them to make things, like robots. Those turned out to be the really popular exhibits with kids; so popular that some kids would come back and sneak past admissions into the museum in order to play with the computers. Remember, computers at home or in school were rare in those days. This led Natalie Rusk, the education director at the Computer Museum, to talk with Mitchel Resnick and Stina Cooke and propose an after-school space youth could come to independent of the museum, with a special focus on creating things with technology.

The idea was to provide access to the tools professionals actually used to make graphics, robots, or games. There is a great paper titled "Access Is Not Enough" that recounts this history in more detail, as does a chapter in the book. What's important here, though, is that the focus was not just on giving access to computers but on promoting creative uses -- the very ones kids found so intriguing that they came back voluntarily (!) to the museum. All of this was a really bold proposal in the early 90's -- that kids would actually be interested in designing technology, in making things. But we had ample evidence that they were indeed interested in challenging activities.

At the time, I was working in a local elementary school where hundreds of students were designing video games to learn about programming and mathematics and science and writing stories and advertisements. They were spending months on it. We knew that these kinds of creative activities with computers were popular with kids not just in school but also in museums and after school clubs.

ROBBIN: The vision of the Clubhouse can be described as a response to the need for space where equity, opportunity, and learning community membership become resources for young people. I often express this idea as the function, Clubhouse Vision = f{equity(learning, creativity) + opportunity(self-development, new areas for growth) + learning community(participation, citizenship)}

Can you define constructionism? How has this philosophy shaped the work of the Computer Clubhouses?

ROBBIN: Constructionism is project-based learning that occurs through the building and rebuilding of projects that you share with others. I view constructionism as an organic learning model because it grows in depth and breadth as it is expressed in different local learning environments. This ability to adapt keeps the model regionally relevant and robust.

YASMIN: Seymour Papert, who coined the term 'constructionism,' clearly distinguished it from 'constructivism,' which emphasizes the construction of knowledge by learners. Papert emphasized that learners do indeed construct their own knowledge, but that they do so best by making things of social significance. In the end, you're constructing knowledge by constructing artifacts -- be it a computer program, a robot, or a game -- that represent your thinking. Equally important is the idea of 'social significance,' which means that you do so with others and for others. I believe these two aspects, the artifacts and the social context, are what make constructionism a pedagogy of the 21st century. Today we take it for granted that people socially interact and contribute via technology, but twenty years ago this was a bold assertion.

Give us a sense of the scale of the Computer Clubhouse movement. What has allowed this project to achieve this level of scalability and sustainability?

KYLIE: Currently, the Computer Clubhouse Network is an international community of over 100 Computer Clubhouses located in 21 countries around the world. The whole movement started with the opening of the Flagship Clubhouse in Boston in 1993 and grew with support from the Intel Foundation and several others to reach the point it's at now. In my opinion, there are three crucial ingredients that led to the success of the Computer Clubhouse movement.

First, the model establishes new Clubhouses within existing community organizations. This helps not just with management and advertisement in the local community, but also with long-term planning and additional funding support for the new Clubhouses. There are some challenges with this model of expansion, however. Primarily, local staff need training and support to adhere to the Clubhouse philosophy, which can be challenging for people coming from more traditional ways of thinking about informal learning spaces as "computer labs". Instead, Clubhouses are more like digital studios, with a wide array of tools available for youth beyond just computers. Of course, there are other issues of coordinators gaining the technical expertise to run a Clubhouse but, as the coordinators will tell you, those skills can be learned on the job. What helps coordinators uphold the ideals of the Clubhouse is an active central Network in Boston, which provides ongoing support in the form of training for new Clubhouse staff, in-person visits from Network staff, and a cutting-edge intranet that connects all of the Clubhouses and coordinators.

The Clubhouse intranet provides a worldwide social network to share ideas, projects, host social events, and share insights on how to run a successful Clubhouse. Of course, what really sets Clubhouses apart is that these spaces are really youth-organized and run. At local Clubhouses, the youth run for executive offices and oftentimes take on leadership roles in the local community. If youth didn't find the Clubhouses to be engaging, the Network would cease to exist. Youth really drive the Clubhouses and return even after they graduate to help mentor future generations of members - another key sign of their commitment to the long-term success of these programs.

ROBBIN: The core principles of the Clubhouse model, with its grounding in the constructionist learning framework, are important because various mechanisms can be wrapped around the model to facilitate learning. In the case of the Clubhouse, digital technology is one layer. Other layers include local customs, materials, and modes of engagement. The model doesn't exist because of the technology; instead the technology is another material being used by the model. Because of this layering characteristic, the model is very adaptable to local needs and resources.

Yasmin Kafai, professor of learning sciences at the Graduate School of Education at the University of Pennsylvania, has led several NSF-funded research projects that have studied and evaluated youth's learning of programming as designers of interactive games, simulations, and media arts in school and afterschool programs. She has pioneered research on games and learning since the early 90's and, more recently, on tweens' participation in virtual worlds, which is now supported by a grant from the MacArthur Foundation. She has also been influential in several national policy efforts, among them "Tech-Savvy: Educating Girls in the Computer Age" (AAUW, 2000). Currently, she is a member of the steering committee for the National Academies' workshop series on "Computational Thinking for Everyone". Kafai is a recipient of an Early Career Award from the National Science Foundation, a postdoctoral fellowship from the National Academy of Education, and the Rosenfield Prize for Community Partnership in 2007.

Kylie Peppler is an Assistant Professor in the Learning Sciences Program at Indiana University, Bloomington. As a visual and new media artist by training, Peppler engages in research that focuses on the intersection of the arts, media literacy, and new technologies. A Dissertation-Year Fellowship from the Spencer Foundation as well as a UC Presidential Postdoctoral Fellowship has supported her work in these areas. Her research interests center on the media arts practices of urban, rural, and (dis)abled youth in order to better understand and support literacy, learning, and the arts in the 21st century. Peppler is also currently a co-PI on two recent grants from the National Science Foundation to study creativity in youth online communities focused on creative production.

Dr. Robbin Chapman is currently the Manager of Diversity Recruitment for the MIT School of Architecture and Planning and Special Assistant to the Vice-Provost for Faculty Equity. She is responsible for strategic leadership and development of Institute-wide faculty development programs and graduate student recruitment initiatives. She is PI on a Department of Education grant project that is underway in schools in the Birmingham, Alabama public school system.

Writing Between the Cracks: An Interview with Interfictions 2 Contributors

This is the last installment in my series about the release of Interfictions 2. Today, I interview some of the authors who contributed short stories to the collection. Once again, the focus is on the complex and contradictory role of genre in contemporary popular fiction. (By the way, I am taking Friday off to spend time with my family so see you next week.) Do you consider yourself an interstitial artist?

Carlos Hernandez -- Definitely. I think there are two general impulses for the artist: the desire to innovate and the desire to communicate. Communication is vital, of course: art can't happen without it. But since communication requires a certain quorum of similarity between writer and reader -- e.g. a shared language -- it tends to be the more conservative impulse. Innovation, by contrast, is where genius lives. It is the only legitimate reason to make new art. Otherwise, we should all simply go and enjoy old art exclusively. For me, writers who cleave too closely to a genre -- and I would most definitely include "literary fiction" as a genre -- are favoring communication over innovation: to the point where they are neglecting the most important reason for this new art to exist.

Jeffrey Ford -- This has always been one of my problems with the term interstitial. I don't believe that artists are interstitial or not; I only believe that works of art can be interstitial. An artist should, of course, always be willing to go anywhere and do or say anything necessary for the creation of a work of art. Sometimes it's as daring to create something in a "traditional" structure and mode as it is to make something that could be labeled interstitial. Artists are artists -- sometimes their art is interstitial. Besides, it's the interaction of the materials and influences as they come together or play off each other in a work of art that is interstitial, not the artist. When artists operate in the marchland between genres or where media coincide, it can be as lame and stale as traditional approaches to creation. I think there is great potential energy in these places, but if one were to start out with the idea of "OK, now I'm going to get interstitial," that's not following an idiosyncratic vision -- which, for me, is what art's about. You shouldn't know something is going to be interstitial before it reveals itself.

Alaya Johnson -- I think I probably do a lot of work in interstitial spaces. I definitely see Jeff Ford's point, though, about it being tricky to identify the artist as interstitial, as opposed to the art. And to some extent, perhaps one could argue that all art is interstitial in some way. The difference might be that in more conventional work, the interstices are narrower gradations of sub (or sub-sub) genres. For example: a novel about chain-smoking biker elves who fight vampires is pretty unequivocally interstitial, but the genres it straddles are so commonplace and predictable that it wouldn't be particularly innovative. (And now I'm going to negate the foregoing entirely, and say that if you do something with enough intelligence and self-awareness, even the tritest plot line can be made into something innovative.)

What motivated your participation in this project?

Carlos Hernandez -- I had written a strange story that I loved, and I wasn't sure what to do with it. Cue the Interfictions 2 call for submissions. I nearly swooned for pleasure. The philosophy of the project was not only in line with this particular story's aesthetic, but with pretty much everything I believe about art. The arrival of that call in my life, just when I needed it: it's enough to make you believe in Flying Spaghetti Monsters.

ALAYA JOHNSON -- That's essentially what happened to me, too. I'd written a strange, fractured story I had a hard time finding a home for, and then I saw that Interfictions was doing a second anthology. I jumped with joy -- especially when they bought it!

Brian Slattery: Like Jeff, I'd written my story before I knew that Interfictions 2 was coming out, and I knew I liked it (which is unusual for me regarding my own stuff) but didn't know what to do with it, because I also knew that it was, well, kind of weird. Meanwhile, I'd read the first Interfictions collection and really enjoyed it; reading the stories, I had the sense that these were my people. So I was delighted to learn that Interfictions 2 was happening, and of course I'm even more delighted to be a part of it.

Jeffrey Ford -- I'd written the story before I'd heard that there was going to be an Interfictions 2 anthology. I made some cursory attempts to publish it, but had no luck. It was too wacky. I liked it, though. I thought it had something going for it, so when I heard about Interfictions I gave it a shot. The fact that the editors of this anthology accepted it gave me great encouragement to believe in the fiction I feel a personal connection to, even if it is perceived as too different by others.

Do you see the interstitial as a way of cutting across genres or escaping them altogether?

Carlos Hernandez -- I don't think we ever escape genre. Genre is deeper than what our publishing industry has defined for us. I can meaningfully oppose the "personal essay" to a work of journalism because we all recognize the different ways each has of producing truth and beauty. But I can cut across those genres, remix them, mash them up: I can write gonzo journalism or creative nonfiction or a hundred different other flavors of nonfiction subgenres. And fiction is, and should be, even slipperier, because fiction always begins with the following premise: "What I am about to tell you is a lie."

ALAYA JOHNSON -- I realize that my answer to this question is annoying, but here goes. I'm with Carlos -- I don't think genre is something anyone can escape (even literary artists who pretend they don't work in one). It's very fundamental to how humans perceive the world, I think: categories and taxonomies are how we structure information. It's crucial to our creative process. I see the purpose of defining the cross-genre impulse as "interstitial" as largely one of pollination -- of getting out of the constantly rehashed and well-plowed ideas in our most comfortable categories and exploring how others see things. My story cross-pollinates with a great deal of non-fiction, mostly of the political, polemical variety. I could never have written that story four years ago, before I started reading that sort of work. I became so immersed in it that, eventually, that story became something that I not only could write, but felt utterly compelled to create.

Brian Slattery: First off, I don't have nearly enough of a sense of the breadth of literature these days to make a sweeping generalization about any genre. And second, the question of genre seems more important to critics and marketing folks -- who need genres for very good reasons -- but I'm not a critic or a marketing guy. From where I stand as a dude who writes stuff, the genres aren't really well-defined enough to be cut across or escaped. The borders are so fuzzy that they aren't really borders at all. In the past, I've used the analogy of genres as neighborhoods in a city with no walls between them. But here's another one: if the genres were countries, they'd be sort of quasi-failed states. The capital cities might be under control, but the farther from the seat of government you get, the less governed (governable?) the territory becomes. And there are wide stretches of land that are disputed, argued over, but never claimed outright; no genre has the power to assert its dominion over the others.

I understand the idea of genre better when it comes to certain types of plot or plot devices, character archetypes, or certain styles of writing, but even in that case, if I'm writing something, I see genre choices more as tools in a toolbox than as a series of constraints that one must work to stay within or break out of. Which is probably why the label "interstitial" applies so well to what I do.

Jeffrey Ford -- Again, for me, the interstitial is not a method but only a product. I guess it can cut across genres. I don't really know enough about genre to know if it can be escaped. I know about it, and many would say I've worked in specific genres for years, but genre has many facets and secrets, and if you were to escape genre, would your work then be in the genre of works that escaped genre?

Have you worked in genre fiction previously? If so, what was your experience?

Carlos Hernandez -- Yes, I've published several short stories in genre magazines and anthologies. But again, I think the question is tricky, since I think we are never outside of genre. The experience of writing genre fiction and of working with editors to publish it has easily been the best writing experience of my life. Never have I learned more, nor felt more like a writer doing honest work, than when working with an editor on a story. Also, let me add that genre editors are, to my mind, the best readers a body could ask for. They tend to be these prose-devouring polymaths who deal equally well in the nuances of art and the practicalities of production. It takes a kind of double genius to do the job of editor well -- a genius of both left brain and right -- and I've been very lucky to meet and work with some of them.

ALAYA JOHNSON -- If by "in genre fiction" you mean, "outside of the literary genre" then yes, that's largely where I've found myself working. The tropes of fantasy (and, to a lesser extent, Science Fiction) speak to me most powerfully of all the various literary forms. Every once in a while, I contemplate stories that lie firmly in what we call "literary" but they never feel as compelling as the ones that cross more boundaries.

Brian Slattery: Yes. My first short story was published in Glimmer Train, and probably by anyone's generic definition, it was an example of straight-up literary fiction. At the time, I had given myself the assignment of seeing if I could cram a typical literary-fiction novel's story arc into the shortest story I could write. I have no idea if I succeeded, but I was completely psyched that Glimmer Train's editors saw fit to publish the result, and am still grateful to them today.

When I finished my first book, though, I really had no idea what I had written. It was either a science-fiction novel with literary-fiction stylings, or a literary-fiction novel with lots of science-fictional elements in it, and I was in no position to make the call. So I figured I'd let someone else decide, and submitted it to a few different places. The feedback I got taught me a lot. The literary-fiction people I talked to seemed to like the quality of the writing (which was nice), but were very confused by the plot, the number of characters, and other things that science-fiction readers don't get thrown by. The science-fiction people, meanwhile, seemed to just like the book. So I figured that I'd written a science-fiction novel and didn't think all that much more about it. My second book came out as even more clearly science fiction, which delights me. But in the future? Who knows?

Can we produce works which appeal to popular readers without falling into genre formulas?

Carlos Hernandez -- This is a deceptively difficult question, because it calls for action from both writers and readers. I feel that, in the United States at least, the media conglomerates and mega-bookstores that have taken over the publishing world have taught the American public to narrowly define genres and, more importantly, to narrowly enjoy works of specific genres. Readers now will often check the spine of a book to make sure it is of a genre they will enjoy -- "if it doesn't say 'mystery' on the spine, I won't like it, because it's not a mystery." I know people speak disparagingly of Oprah's Book Club, but c'mon, people! She got millions of folks to read Beloved, which is not only one of the greatest books of the 20th century, but as interstitial a work of art as we could ask for. What's unfortunate is that it takes a decree from Oprah to give popular readers the confidence and motivation to try something outside of their comfort zone. So, solutions? 1) Get publishers and mega-bookstores to stop insisting on narrowly defined genres (near-impossible); 2) get Oprah to endorse a lot more interstitial art (not exactly a reliable method); 3) get writers to keep trying to write innovative work that eschews pretension while at the same time challenging readers -- and hope for the best.

ALAYA JOHNSON -- I agree that the issue of connecting readers to fiction has a great deal more to do with the juggernaut of the industry behind books than with the works actual readers might enjoy if they could find them. Why else, for example, would bookstores ghettoize literature by black writers in the "African American" or "urban" section? Not necessarily because white readers won't buy these books, but because segregation is how the industry has constructed itself. I've had the experience of hunting through the Science Fiction section for a single Octavia Butler novel, finding none present, and eventually discovering them shelved next to True to the Game III or whatever in Urban Lit. In that example, the segregation is based on race, but it's the same principle that means readers who enjoy Atwood's The Handmaid's Tale will never find Sheri S. Tepper's Grass without venturing into a completely different section of the bookstore. The ostensible reason for this segregation is that it makes it easier for readers to find works they like, but as both examples show, it also makes it harder for readers to find works they like. So, to answer your question, I don't think "genre formulas" are the culprit here so much as an industry that makes it financially and practically impossible to look outside your own genre.

What kind of emotional experience do you think your story offers readers?

Carlos Hernandez -- I picked up a term from a collection of medieval words and phrases: "merry-go-sorry." That phrase encapsulates my entire philosophy in three words: life is merry-go-sorry-go-merry-go-sorry-go-etc. Round and round we go, from moment to moment -- life is bewildering, awful at times but also awe-ful, a relentless dogpile of bathos and pathos. So what do we make of it? What can we say about an existence like that? If my story offers any answers, they are emotional ones, because I take the word "fiction" very seriously. I lie like hell all the way through.

ALAYA JOHNSON -- I'm not sure, really. I suppose I would hope a cathartic one, because issues of grief, guilt, redemption and eventual catharsis seem to run through most of my work. But because I have constructed it in such a way that the reader has to really work to find the story inside, it's entirely possible that readers could find the experience of reading it rather emotionally cold or clinical. It's hard to tell how people will react to things, especially when I as a writer have very deliberately eschewed most of the techniques in the writer's arsenal for heightening pathos (the equivalent, in my mind, of sweeping strings and roiling storm clouds in, say, a Steven Spielberg movie).

What was the biggest challenge in writing interfiction?

Carlos Hernandez -- Writing a relatable story that in its own small way tries to innovate. In short, writing a story that's good.

ALAYA JOHNSON -- One of the purposes of genres, and of taxonomies and categories in general, is that they provide an in-group/out-group structure. By virtue of exclusion, a genre creates a community, and a community creates a shared language. Many science fiction writers, for example, have had the experience of writing a story that is perfectly comprehensible to their in-group of SF readers and utterly enigmatic to those versed in other communities and literary languages. So the greatest challenge of truly interstitial writing, I'd argue, is that you must both master the various languages of the genres you want to straddle and make the foreign tropes comprehensible to all parties. Out of the labor of this communication, though, come particularly gratifying rewards.

Carlos Hernandez is a writer and English professor living in New York City. He is the co-author of Abecedarium.

Jeffrey Ford's The Physiognomy won the World Fantasy Award for Best Novel for 1997 and was a New York Times Notable Book of the Year. Ford has since finished the other two books of the trilogy, Memoranda (a 1999 New York Times Notable Book of the Year) and The Beyond (2001).

Alaya Dawn Johnson's novels include Racing the Dark and The Burning City, the first two installments of The Spirit Binders trilogy and the forthcoming Moonshine, a vampire saga set in the 1920s.

Brian Slattery's novels include Liberation: Being the Adventures of the Slick Six After the Collapse of the United States of America and Spaceman Blues: A Love Song.

Counting on Twitter: Harvard's Web Ecology Project (Part Two)

Last time, I shared with you some of the work being done by Harvard University's Web Ecology Project, specifically focusing on the use of Twitter in the aftermath of the Iran elections and around the death of Michael Jackson. Through qualitative and quantitative research, the team is seeking to develop a better understanding of the flow of ideas through the social networking world and of how different participants exert influence on Twitter. My respondent last time was Dharmishta Rood, with whom I worked when I was back at MIT. Today, I am showcasing the research being conducted by three other researchers on the Web Ecology team -- Erhardt Graeff, Tim Hwang, and Alex Leavitt. I asked them each to share some of their current research and explain why they think it can contribute to our understanding of the new media environment. For more on the Web Ecology Project, check out Alex Leavitt's recent post on the Convergence Culture Consortium Blog.

Erhardt Graeff: One of our hopes for Web Ecology is a fusion of quantitative and qualitative approaches to studying social media phenomena. Our goal of constructing a scientific framework for tackling quantifiable data online is only possible when we recognize the cultural contexts. In Web Ecology, we see the formalization of these contexts as web ecosystems such as LiveJournal, Facebook, and Twitter.

Inspired by the ethnographic work of a number of researchers, including danah boyd, Mimi Ito, and Keith Hampton on Netville, we are beginning to profile individual social media networks. We call the outputs of our research "Web Ecosystem Profiles". The goal of each profile is to characterize the cultural landscape of a web ecosystem. As you might expect, much of this is done through participant observation.

Of course, the boundaries of each ecosystem are negotiable, as in any study of a community. More importantly, a web ecosystem's state is in constant flux, with users joining and leaving, new features being introduced, and memes propagating through the network. Thus, Web Ecosystem Profiles must be dynamic documents. And to guide our work, we rely on a few of the central tenets of Web Ecology, first laid out in Reimagining Internet Studies:

• Interdependence: code and users are part of an inseparable aggregate web phenomenon;

• Boundedness: the web is constrained by various forces and configurations;

• Significance: content on the web retains inherent value.

Here is an abbreviated version of the outline we are currently using to build profiles. The full version is on our wiki (requires registration):

• Introductory Overview of the Ecosystem
  • Common language for discussing components of the ecosystem
  • "Typical" reasons that users register / access the ecosystem
  • Technical affordances / constraints of the ecosystem
  • Requirements of site usage
  • Landmarks of the ecosystem's evolution (e.g. eternal Septembers, jumping the shark)
  • Defined user cohorts
  • Ecosystem-specific lexicon

• Phenomenology of "Typical" Sessions in an Ecosystem
  • Describe the general experience of using the site
  • Key use cases

• Possibilities for Quantitative Analysis
  • Introduce available APIs
  • List of atomized site components / activity that could be quantified (e.g. tweets, likes)
  • Documentation of successful and unsuccessful approaches to this ecosystem

The last section is unique to the quantitative research Web Ecology hopes to undertake. On Twitter this is easy: the API is very open, the documentation is decent, and the forms of interaction are easily quantified. For Twitter, a Web Ecosystem Profile is particularly useful for formalizing the documentation of unconventional use cases (see the excellent examples in danah boyd's draft of "Tweet, Tweet, Retweet"). Charting all the different ways users retweet can enable a better quantitative study of retweeting behavior by ensuring that we 1) catch all of the various forms of retweets and 2) understand what the different forms might signify.
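To make the "catch all the forms" point concrete, here is a minimal sketch (my own illustration, not the Web Ecology team's actual tooling) of classifying a tweet by the retweet convention it uses. The three conventions and the function names are assumptions chosen for the example; a real study would catalog many more variants.

```python
import re

# A few retweet conventions that circulated in early Twitter usage.
# This pattern set is illustrative, not exhaustive.
RETWEET_PATTERNS = [
    (re.compile(r"\bRT\s+@(\w+)", re.IGNORECASE), "RT"),
    (re.compile(r"\bretweeting\s+@(\w+)", re.IGNORECASE), "retweeting"),
    (re.compile(r"\bvia\s+@(\w+)", re.IGNORECASE), "via"),
]

def classify_retweet(tweet_text):
    """Return (form, original_author) for the first retweet marker found,
    or None if the tweet does not look like a retweet."""
    for pattern, form in RETWEET_PATTERNS:
        match = pattern.search(tweet_text)
        if match:
            return form, match.group(1)
    return None

print(classify_retweet("RT @aleavitt: new post on the C3 blog"))  # ('RT', 'aleavitt')
print(classify_retweet("great read via @erhardt"))                # ('via', 'erhardt')
print(classify_retweet("no retweet marker here"))                 # None
```

Counting the output of a classifier like this across a large sample of tweets is one way the "atomized site components" of the profile outline could feed a quantitative analysis, while the profile's qualitative notes tell you what each form signifies.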

A more straightforward use of a Web Ecosystem Profile is when a social network has not been explored by many researchers. A few weeks ago, fellow Web Ecologist Seth Woodworth started to use the profile framework to document aspects of LibraryThing, which no one else in our community was using at the time. Did you know that the key demonym in the community is a "thingabrarian", or that one unconventional practice is the creation of fakester libraries for popular, dead authors?

Web Ecosystem Profiling is at a very early stage of preparation. But we believe the need for a peer-produceable way to continually document the contexts of social media phenomena is obvious and immediate. We hope a larger community of researchers will be willing to contribute and offer feedback.

Tim Hwang: The Era of Social Media has gifted us with two Big Ironies. First, there's the Big Irony of Business, where extensive practical experience with communities online hasn't translated into the emergence of a science (or even a cluster of useful, concrete, reliable methods) around building vibrant social spaces on the web. Second, there's the Big Irony of Academia, where massive amounts of data, talent, and research on the dynamics of social networks fail to inform the day-to-day practice of businesses (or, indeed, the popular discourse).

In both worlds, the irony is the same: we do, in some sense, have the key information right in front of us (either as practical experience or as reams of qualitative and quantitative research), but we notably lack the ability to convert it into specific, actionable knowledge.

Indeed, this has led us to a sorry state, where good people -- some seriously sharp, brilliant people -- can spend hours talking about the genuinely beautiful research on the social nature of the web. But when the key questions come down the pike ("So what can I do to foster a community?" "What factors are responsible for promoting the propagation of culture?"), most folks are reduced to wandering generalities and the mantra-like suggestion that the person in question should really consider starting a Twitter account. Where we should be sifting through the available data and offering specific ideas, we've largely got only vague philosophies and anecdotes. Depressingly, the Emperor has no clothes. At the point where we're sitting, he's not even really the Emperor, either.

And perhaps most scarily, I feel there's a kind of superstition starting to circle around the research, a suspicion that the whole idea of digging deep into data and getting scientific with our prescriptions is, in fact, largely misguided. Social media expert Chris Brogan recently wrote about the quantitative side of things:

I'm writing this from a conference full of researches [sic]. They are all talking passionately about numbers, and I get this. I understand that they're passionate about exacting a science out of the crazy data of human passion. And yet, part of me thinks that numbers often serve us as little life rafts. [...]

We cling to numbers. In business, we use numbers as our primary gauges. But in relationships, we don't. Right? Do you count who hosted the holiday party and do you measure just how delicious the meal was on a chart? (If you do, I take it you like sleeping on couches.)

And he's dead on. But about the wrong point. It's true: you are in fact a serious jerkface if you behave in the robotic way he's talking about. But we probably wouldn't, for example, blame a host for meticulously keeping track of what people liked and didn't like -- and using it to plan the menu for the next holiday party. This is a simple way of saying that rigorous exploration isn't bad when it improves our results in a real way. And so the responsibility for the flaw in Chris's voiced skepticism doesn't fall on him at all. I think it's a natural response to the failure of the research to actually step up to the plate and deliver some implementable knowledge beyond the generalities. If all of our experience and hard data can't come to anything practical, it's easy to believe that the approach might not be worth relying on.

So how do we finally step up to the plate? And, before we get to that: how did we get here?

Largely, I'm willing to argue that the Big Ironies have emerged because there's no good space where people can playtest, experiment, and rapidly iterate on a variety of strategies, particularly where influencing the social space online is concerned. There's no good place to measure success, or even compare various approaches against one another to assess their usefulness. There's no way to prove that your methods and data mining can actually produce repeated success. Without that kind of lab, it's tough to take insights from both the research and business world, and try them again and again. Without trying and trying again, we never get to know how information might actually be transformed into useful, applied knowledge.

One of the big projects of the web ecology community has been to see if there's a way of providing that exact environment. Specifically, we've been talking about the concept of competitive games, and the fact that they provide the ideal social structure that we're looking for. Games create repeatable scenarios, allowing us to identify and test a given situation over and over again. Competitive games require measurable goals, and a structured way of assessing success. Finally (and, perhaps best of all) games are good experimental zones, places to try out tactics and strategies on low stakes.

Add the involvement of real people and social structures to take it out of an abstracted lab scenario, and you've gotten to an experiment that we're starting to undertake, something we call social wargaming.

The general premise is simple: beginning with a "battlefield" population of users (who are unaware that a game is going on -- indeed, revealing the existence of the game is against the rules), teams compete to effect specific changes in their behavior. These changes range from something as simple as getting a social network to pass around a piece of content to something as complex as bridging the structural gaps between two unconnected clusters of users. We're starting out with single platforms, but the eventual idea is to level up to testing the ability of teams to create certain effects across various networks, and in the social ecosystem of the web as a whole.

The open, implicit challenge is equally simple, though perhaps provocative to the point of being considered trolling: if you're really so good at understanding what culture and community online are all about, if you're really so good at "engaging communities" and being a "trust agent," why not put your money where your mouth is and see if you can't straight up just do it?

The first iteration of this game, entitled "Triangles," builds on this premise. Teams are given a "terrain" of contested target users on Twitter who are connected in some way. Each team starts fresh with an "ego account" and competes with the other groups to create, in a short period of time, as many tightly linked triangles of connection between its account and two target users as it can. Over a series of games, we can also change up the terrain and rules to ask other questions -- what tactics work best when trying to build new connections in an already tightly interconnected social group? Can robots achieve the same results as humans in fostering certain types of behavior?

The rules are available in more depth here (Social_Wargaming_Triangles.txt), and we're actively looking for participants who want to play a role. The first round begins November 20th and will run through the first week of December. Definitely drop an e-mail to tim@webecologyproject.org if you'd like to be involved. With any luck, the outcome of this gaming will be something quite different from mere entertainment: experiments toward forging an applied science of cultural and community spaces online.

Alex Leavitt: A primary goal of the Web Ecology Project is to analyze how the relationship between social networking platforms and their users affects, and is affected by, the cultural practices of online communication and community building. To approach this goal, we have striven to establish a set of first principles for the Web on which to base our future research. Our analyses of influence on the Web usually start with these first principles. For example, the smallest units of communication might be a page view or a click. Using these measurements, how could we make declarative statements about how people interact in mediated spaces like Twitter (which structure communication based on how the programmers design the platform)?

However, designing first principles proved a bit difficult, and when I wrote "The Influentials" I realized that we would have to shape sets of "elementary particles" (like chemical atoms and molecules) for each system. Basically, because each platform controls the possible modes of communication, first principles for Facebook are inherently different from those for Google Reader, for example. For Twitter, the platform analyzed in "The Influentials," these elements begin with the ordinary tweet, out of which we see related particles like replies, retweets, and mentions.

For the elements on Twitter, I established an operational definition of influence (meaning that our analysis is ultimately separated from any theories of influence previously researched in academic circles). Tweets became actions on which replies, retweets, and mentions were enacted. Thus, we organized our arguments around influence as belonging to those messages that sustain a large number of responses.

The focus on response is key to our results. The Web Ecology Project has attempted to respond to extremely generalized analyses of social media phenomena, particularly by marshaling large amounts of quantitative evidence to support our claims. In "The Influentials," we wanted to criticize those analyses of influence that had focused primarily on follower counts. Follower counts are of course important; but if a user has 10,000 followers and none of them respond to the user, can we really claim that this user is influential? Since we couldn't ignore follower counts entirely, we included equations and calculated graphs that accounted for both responses and numbers of followers, to give due weight to users with smaller follower networks.
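Since the report's actual equations aren't reproduced here, the following is only an illustrative Python sketch of the kind of metric described: one that privileges responses while folding follower count in on a dampened (logarithmic) scale, so that small but highly responsive networks aren't drowned out by raw reach. The function name, the weighting scheme, and the log damping are all assumptions for illustration.

```python
import math

def influence_score(replies, retweets, mentions, followers):
    """Illustrative influence metric in the spirit of "The Influentials":
    responses (replies + retweets + mentions) drive the score, while
    follower count enters only through a response rate and a log term.
    The report's actual equations differ -- this is only a sketch."""
    responses = replies + retweets + mentions
    if followers == 0:
        return 0.0
    # Response rate rewards engagement relative to audience size;
    # the log term keeps sheer reach from dominating the score.
    return (responses / followers) * math.log10(followers + 1)

# A silent mega-account scores zero; a small, responsive one does not.
print(influence_score(0, 0, 0, 10000))   # 0.0
print(influence_score(20, 20, 10, 100) > 0)  # True
```

The point of the sketch is the shape of the argument, not the constants: any metric of this form answers the 10,000-silent-followers objection by construction.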

Probably the most interesting aspect of our initial analysis of influence on Twitter lay in our categorization of the cultural practices underlying these interactions between popular users on Twitter and their followers. We split the ten users into three groups: celebrities, news outlets, and social media analysts. For the most part, the trends show that the members of these groups act fairly similarly (with discrepancies, of course, usually based on the number of followers).

The under-appreciated piece of our research ended up being our visualizations. We generated a colorful graph that illustrates the density of tweets and responses for each user in our report. It's intriguing to analyze our statistics visually, because you can occasionally pick out exceptional instances of response explosions. Although our code could not parse out which responses corresponded to which original tweet, we can suppose that most of the wild clusters of responses that follow occasional tweets are immediate reactions that eventually ebb away.

To move beyond this initial, basic analysis of influence on Twitter, we would like to look more closely at the networks of followers behind these mega-users. Looking at hypothetical extremes hints at the problems we might foresee in future research: if a user has a follower network that responds at an ordinary rate, but each of those followers has an extremely active responding network (i.e., the original user's secondary follower network), then that certainly affects how we might assign ratings or levels of influence to specific users.
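One way to picture the secondary-follower-network problem is a two-hop rating that blends a user's own response rate with the average response rate across their followers' networks. This is purely a hypothetical sketch: the damping weight, the function name, and the data layout are assumptions, not anything from the report.

```python
def two_hop_response_rate(user, followers_of, response_rate, alpha=0.5):
    """Blend a user's own response rate with the average response rate
    of their followers (the 'secondary follower network').

    followers_of: dict mapping a user to a list of their followers.
    response_rate: dict mapping each user to an observed response rate.
    alpha: assumed weight on the user's own rate (illustrative only).
    """
    direct = response_rate[user]
    second = [response_rate[f] for f in followers_of.get(user, [])]
    secondary = sum(second) / len(second) if second else 0.0
    return alpha * direct + (1 - alpha) * secondary

# A user with an ordinary direct rate but highly responsive followers
# ends up rated above their direct rate alone.
followers_of = {"alice": ["bob", "carol"]}
response_rate = {"alice": 0.2, "bob": 0.8, "carol": 0.4}
print(two_hop_response_rate("alice", followers_of, response_rate))
```

Even this toy version shows why a single-hop metric under-rates users whose audiences amplify messages onward.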

Erhardt Graeff is a Lead Researcher and Developer for the Web Ecology Project, and also a social scientist and entrepreneur with an MPhil in Modern Society and Global Transformations from Cambridge University and a couple of bachelor's degrees from RIT. In addition to researching social media, he has studied rural internet use and social capital, digital divides, e-government, networked public spheres, and new media literacy. Beyond the Web Ecology Project, Erhardt is the Director of Technology and Strategy for BetterGrads, a startup aimed at preparing high school students for college life, and is a research assistant at the Berkman Center for Internet & Society at Harvard, studying OER and the political economy of the textbook industry.

Tim Hwang is the Director of the Web Ecology Project and an analyst with The Barbarian Group -- where he works on issues of group dynamics and web influence. He is interested in building a science around measuring the system-wide flows of content and patterns of community formation online. He is also the founder of ROFLCon, a series of conferences celebrating and examining internet culture and celebrity. He currently Twitters @timhwang, blogs at BrosephStalin, and is in the process of watching every homemade flamethrower video on YouTube.

Alex Leavitt is a Lead Researcher for the Web Ecology Project. His interests include geographical, linguistic, and transnational subcultures; the hybridization of popular culture and online humor; and the emergent cultural practices of (un)controlled online social networks. Alex also works as a research specialist with the Convergence Culture Consortium in the Comparative Media Studies department at MIT, and has previously worked with the Digital Natives Project at the Berkman Center for Internet & Society (Harvard Law School). In addition to his weekly articles on the Convergence Culture blog, Alex writes long-form about Japanese popular culture at The Department of Alchemy and short-form on Twitter (@alexleavitt).

Is Facebook a Gated Community?: An Interview With S. Craig Watkins (Part Two)

Today, I am sharing the second part of my interview with sociologist S. Craig Watkins about his recently released book The Young & The Digital. From the moment I read his manuscript, I knew that his chapter, "Digital Gates: How Race and Class Distinctions Are Shaping the Digital World" would be the one which generated a lot of the heat and the controversy here. Those of us who see the web as key to our vision of a more participatory culture have to be concerned with the obstacles which block many from full involvement. And those of us who celebrate the "virtual community" being achieved through digital media need to be especially concerned with the various forms of exclusion running through our online lives. Indeed, one could argue that for many, going digital involves a kind of "white flight" as they escape the "dangers" of their real world communities by seeking out other like-minded people in cyberspace. Watkins joins a growing number of writers who are asking in what ways our social networks online replicate -- for better and for worse -- our friendship networks offline, networks we know are shaped by continued segregation.

I was struck by a chart Watkins offers showing the language people use to describe and distinguish between Facebook and MySpace, language with long historical associations to our assumptions about race and class in the American context.

MySpace is described as "crowded, trashy, creepy, uneducated, immature, predators, crazy" while Facebook was praised as "selective, clean, trustworthy, educated, authentic, college, private." In other words, MySpace takes on values we associate with inner city slums, while Facebook is tied to the values one might associate with a gated community.

In this installment, I ask Watkins to reflect on these findings and how they might add another layer to our understanding of race in America; I also ask him to discuss the relationship between this new project on youth's digital lives and his earlier work on hip hop culture.

What challenges are educators facing as they try to teach the generation which has come of age in the era of web 2.0?

This is a fascinating question and, I believe, one of many that we are just beginning to reckon with as educators, researchers, and society. Part of my research included spending some time in the classroom and talking with teachers and school administrators.

What I soon discovered is that they are on the front lines of the move to digital. Teachers face a generation of students armed with more personal media than any previous generation. Most teachers will tell you that the trend of permitting students to bring mobile phones, iPods, and other devices to school is a big mistake. Just think: when I was a student, the idea of bringing a personal media device to school would have been out of the question. But the shift reveals how our values, behavior, and culture are changing in the digital age.

The main concern among teachers is the degree of distraction these devices encourage in the classroom. It turns out that parents insist that their children carry mobile phones: they make it easier to communicate and to coordinate family schedules that are growing more challenging.

In The Young and the Digital I deal with some of the learning and educational challenges/opportunities posed by digital media. There are two kinds of technologies in today's classroom-- technologies that pull students away from the classroom, and technologies that pull students into the classroom. I give some examples of both.

But I am also interested in the social and behavioral challenges educators face in regards to technology. These include issues like citizenship, community, and helping students and educators make smart decisions regarding their engagement with digital media.

Most schools are being forced to deal with student conflicts that occur online and away from school. More and more, administrators are having to contend with issues like cyberbullying or the circulation of photos that reveal some sort of misconduct. These kinds of issues raise questions about privacy and authority (i.e., when is a student's behavior away from school an administrator's concern?). There are no rule books or precedents for what is happening in the digital world and the online lives students build.

I was surprised to learn that many principals are struggling not only with the online behaviors of students but with those of teachers as well. A growing number of teachers, and practically all recent college grads entering the profession, maintain a personal profile. As you can imagine, this raises many questions about the conduct of teachers away from school. Some teachers "friend" their students in places like MySpace and Facebook, while others vehemently reject the idea. Like the rest of society, schools and the people who run them are learning what it means to "be digital."

Building on work by danah boyd and others, you argue that Facebook has operated not unlike a "gated community" and may directly contribute to racial and class segregation in the online world. How can scholarship on race in the physical world help us to better understand how race operates in the virtual world? What steps should be taken to combat segregation in the online world?

It is easy to get caught up in the wonders of what scholars have variously referred to as "being digital," "life behind the screen," or the "second self." But as the Web has become a more common experience, it has also become a more local experience. That is, we use the World Wide Web to communicate most frequently with our friends, work colleagues, and acquaintances--people we know, like, and trust. To use Putnam's language regarding social capital, we use the Web to "bond" more than to "bridge." This is certainly true with race.

When danah circulated her blog commentary about the class divisions between MySpace and Facebook, it struck me as a reasonable, even predictable, outcome, especially if you understand that what happens in our lives online is intimately connected to our lives offline. Some Web enthusiasts, however, were either surprised or annoyed by her claims.

But as your work and that of others shows, there is still a real "participation divide" that creates varying degrees of Internet engagement. Whether we are talking about virtual worlds, mobile technologies, or social network sites, race matters in the digital world. Most of the movers and shakers in the branding and marketing of the current-generation Web show little, if any, interest in the social divisions that still mark the digital world. Mentioning the social divisions that are part of the social Web is a kind of inconvenient truth. We learned a lot while studying young collegians' embrace of Facebook. In reality, most of us use Facebook to connect with people we know--we "friend" friends, not strangers, in our computer-mediated social networks. And who our friends are is usually influenced by race, class, education, and geography.

In examining the hundreds of surveys and one-on-one interviews we collected, my grad assistant and I noticed a strong preference for Facebook among young white collegians and, more generally, among students with a middle-class orientation. It was more than a casual preference; it was also an intense rejection of MySpace. Our research found an interesting "racialization" of MySpace and Facebook among young people.

I began reading some of the research on the rise of gated communities in America and found some interesting parallels between the language used by residents of physical-world gated communities and that of young white collegians who preferred Facebook (a kind of virtual gated community) over MySpace. Both use words like "safe," "clean," "private," and "neat" to describe their attachment to their communities. Both practice what cultural anthropologists call "gating," that is, the tendency to build exclusionary physical/virtual, social, and cultural walls.

I also turned to French sociologist Pierre Bourdieu's work. I've used his work before to think about the kinds of cultural capital that young people accumulate, especially in the places they create and inhabit, and how it works as a source of power, pleasure, and mobility. But in this case I was interested in what Bourdieu refers to as "distinctions," that is, matters of taste, aesthetics, and values that middle-class communities reproduce to maintain social and even physical separation between themselves and those they view as beneath their own social status and class position.

When we began our work it was common to see college students switch from MySpace to Facebook. Among other things, the switch was also a bid for a social status upgrade, a move up the digital ladder. Today, middle class students in middle and high school are moving straight to Facebook. Social class distinctions like everything else in the digital age are trickling down to younger and younger users.

I was also intrigued by Bill Bishop's "Big Sort" argument. In short, Bishop argues that starting around the 1970s Americans underwent a massive social experiment that changed one of the most basic features of everyday life--where and with whom we live. The change in geography, Bishop maintains, is really a sorting by lifestyle. Racial and class segregation have been a fact of American life since the early 20th century (see Douglas Massey and Nancy Denton's work on residential segregation). But Bishop argues that American neighborhoods are now being stratified along ideological and lifestyle lines--not simply "red" and "blue" states but even more carefully sorted and homogenous neighborhoods. There are some interesting parallels in the digital world.

I'm a trained sociologist, so I find it quite natural and instructive to look at wider sociological trends to understand what is happening in the online world. I simply cannot separate the two.

Finally, social network sites do not cause racial divisions or the desire for homogenous online communities. Insofar as what we do online is intimately connected to the lives we lead offline, the fact that a kind of digital sorting is happening is not terribly surprising. Still, it is striking that among a generation that played a key role in electing America's first Black president, race plays a crucial role in their use of social network sites and in who they bond with online.

Tell us about the group you call "Four Pack." What did they help you to understand about the social dimensions of gaming?

The four pack is a group of young gamers I got to know quite well while working on the book. I first met Derrick. I interviewed him about his use of social network sites. During our conversation it was clear that most of his media time is spent playing games. I asked Derrick to identify a handful of his peers to join a panel of gamers I wanted to put together. The idea was to get to know them and follow them for a period of time to learn more about their experiences with games. Several young men in Derrick's peer group responded to my inquiry and I eventually settled on four of them.

I affectionately began calling the group the "four-pack." I visited them in their residence hall and established a rapport with them that lasted about six months. The four-pack provided me with what amounts to a life history of their engagement with interactive media. Every two weeks I issued them questions via email to address in the media journals that they agreed to keep. One week the diaries might be devoted to games, for example, and the next week to television. The diaries were honest, rich in detail, and provided intimate access to a group of young men who embody the rising generation of gamers. Each of the diary entries was followed up with one-on-one conversations.

I learned a lot from the four-pack--their thoughts about addiction, virtual worlds, and the appeal of games. I witnessed up close what many game scholars and industry insiders refer to as "social gaming."

Gaming among the four-pack and their peers was mainly a social experience. Rarely, if ever, did they play games alone. Often games were a way to have fun and also spend time with friends. In their own unique way, each member of the four-pack talked a lot about games as both a social lubricant and a social glue. The former refers to how games can make it easier to strike up conversations with new acquaintances, while the latter is a reference to how games give established friends a fun way to grow closer to each other. Games, it turns out, are the common denominator in their strongest and most meaningful social ties.

Some of your earlier work dealt with hip hop culture. What similarities and differences do you see between the technological and social practices of the hip hop culture and that you've found in your work on digital youth culture?

I've spent all of my academic career studying young people's relationship to media industries and technologies. The work I'm doing on digital youth culture is greatly informed by my earlier work on hip hop culture.

As you know, there has been a substantial change in the way scholars examine the cultural practices and identities young people produce. Hip hop, like digital culture, is participatory and performative. Hip hop, like the social media practices of youth today, has always been about young people expressing themselves and building community, while also finding places of leisure, pleasure, and empowerment.

In my last book, Hip Hop Matters, I wrote a chapter titled "The Digital Underground." It was really an attempt to understand how the Web has become the new town square in hip hop culture--the place to find relevant and urgent dialogue about a host of issues facing young hip hoppers. To engage a community of young hip hop enthusiasts about a host of important social issues today you don't turn on corporate radio or read a corporate run magazine. You go online.

The innovative use of technology has been part of hip hop's story from the beginning. Everything from graffiti art to mix tapes has been produced that way, bearing a striking resemblance to the DIY culture of social media today.

My work has maintained a steady focus on understanding the world young people create and inhabit. It's clear that if you want to understand that world today, you have to dig deep into the digital practices, identities, and communities young people are building. Writing The Young and the Digital gave me an up-close look at this world. The book and the blog we will be building are an effort to share what we are learning.

S. Craig Watkins teaches in the departments of Radio-Television-Film and Sociology and the Center for African and African American Studies at the University of Texas at Austin.

His new book, The Young and the Digital: What the Migration to Social Network Sites, Games, and Anytime, Anywhere Media Means for Our Future (Beacon) explores young people's dynamic engagement with social media, online games, and mobile phones. Craig participated in the MacArthur Foundation Series on Youth, Digital Media and Learning. His work on this groundbreaking project focuses on race, learning, and the growing culture of gaming. He has been invited to be a Research Fellow at the Center for Advanced Study in the Behavioral Sciences (Stanford).

Currently, Craig is launching a new digital media research initiative that focuses on the use and evolution of social media platforms. For updates on these and other projects visit theyoungandthedigital.com.

Is Facebook a Gated Community?: An Interview With S. Craig Watkins (Part One)

Earlier this year, I was asked to write a blurb for S. Craig Watkins's book, The Young & The Digital: What the Migration To Social-Network Sites, Games, and Anytime, Anywhere Media Means For Our Future. The book was an eye-opener as Watkins brings a sociological perspective to the kinds of social lives young people are building for themselves through their deployment of a range of new technologies and emerging cultural practices. Here's what I ended up writing about the book:

Why does Facebook have the same appeal as gated communities? Is distraction more concerning than addiction? How do video games like World of Warcraft value friendship? Bracing yet reassuring, often surprising, and always substantive, Craig Watkins acts as an honest broker, testing the contradictory claims often made about young people's digital lives against sophisticated fieldwork.

I don't agree with everything the book says -- that's probably what "bracing" means here -- but it shook up some of my own preconceptions and has stayed with me since I first read it.

We are seeing an explosion of significant new books on young people's digital lives -- in part inspired by the MacArthur Foundation's Digital Media and Learning initiatives, in part by the pervasiveness of digital culture all around us. I am trying to feature as many of these books and resources as they come out through the blog.

Watkins's book ranks among the best I've read on this topic. The Young & The Digital stands out especially for its close attention to the perspective of teachers as they grapple with the ways new media change how young people learn, and to the perspective of young people who may not have the economic and social capital to participate fully in the digital and mobile realm inhabited by their more affluent and privileged counterparts.

Watkins does not simply celebrate the "democratizing" impact of new media; he also looks at it as a space of social exclusions and in doing so, he calls attention to those factors which make it harder for some to participate more fully in the new media landscape. That's why I have chosen to highlight this interview as part of my contribution to this year's One Web Day with its theme -- "One Web. For All." And that's why I chose to include this book on the syllabus for the graduate course on New Media Literacies I am teaching at USC this fall.

In this installment, he takes on one of the senior figures in the sociological study of media, Robert Putnam, describing the ways that online participation may be paving the way for greater civic engagement. But he also ponders whether the online world may be making us "too social" for our own good, again striking a balance between utopian and dystopian arguments about the impact of digital media on young people's lives.

The Young and the Digital complicates in some important ways the arguments which Robert Putnam makes in Bowling Alone about the impact of electronic media on our social lives. Why did your field work lead you to reappraise Putnam's arguments?

The fieldwork did force me to reconsider some of the more enduring arguments about media and, especially, the well-traveled "Bowling Alone" thesis by Putnam. From the very beginning of the Web as an everyday tool, researchers have openly speculated about its influence on our social lives. Does the growing amount of time we spend in front of a screen make us more or less social, more or less interested in our friends, neighbors, and the world around us? Putnam's most compelling evidence regarding this question is based on television. Among researchers who study TV as a leisure activity, the medium's greatest legacy is how it influences our connection, or lack thereof, to our neighbors, communities, and civic life. Putnam argues that TV watching comes at the expense of nearly every social activity outside the home, resulting in the erosion of social capital--a sense of neighborliness, mutual trust, and reciprocity that binds people and communities together. The big fear, of course, is that we will all retreat into our own media fortresses, forgoing any valuable social interaction with friends and acquaintances. While I understand the concern, the research evidence simply does not support it. This was certainly true in our research.

As we began talking with young people and combing through our survey results, it became clear that their engagement with technology is, first and foremost, a social activity. Conventional wisdom contends that time spent at home with TV is time spent away from friends and public life. But computer and mobile phone screens represent very different kinds of experiences than the ones traditionally offered by TV. Among the teens and young adults we talked to, time spent in front of a computer or mobile screen is rarely, if ever, considered time spent alone. Screen time, increasingly, is time to connect with friends and acquaintances.

It's true, connecting via a mobile or Facebook is a different way of bonding, but, as I argue in the book, these practices are expressions of intimacy and community. We tend to get caught up in how much time young people spend with their computers and mobile phones. But what I came to understand is that their true interest is not in the technology per se, but rather in the people and relationships the technology provides access to.

Finally, I believe that young people's move online is also forcing us to reconsider another argument of Putnam's regarding declining political participation. The final chapter of the book considers how young people's use of social and mobile media appears to be reversing some of the disturbing trends Putnam documents regarding a once decisive shift among Americans away from political participation--for instance, attending political events, signing petitions, or writing to an editor or politician. While establishing their support for President Obama, young people used Facebook, mobile phones, YouTube, and digital cameras to essentially redefine what electoral politics will look like in the future. Their use of digital media was social, communal, and in its own distinct way, political.

Throughout the book, you have a good deal to say about the ways digital media is reshaping young people's relations to traditional media (newspapers, television). What insight can you offer people working in the television industry about their prospects of attracting or holding the attention of younger Americans?

I'm glad you asked me about television. My interest in young people's engagement with the social Web is driven, in part, by a desire to understand the shift from television to screens that are more social, mobile, and personal. It's a historic shift, one that breaks with a cultural institution and experience more than fifty years old--television as the first and most dominant screen in our lives.

Our research indicates that among persons ages thirty and under, television is not the first or most preferred screen in their lives. They are just as likely to view their laptop or mobile phone as their "go to" screen. Young people still watch television but in ways that are quite distinct from previous generations--they watch it while media multitasking, on the go, and online. Moreover, kids are being socialized to engage TV in ways that are distinct from the generations that grew up in TV-centric households. These and other changes have forced a group of executives accustomed to the dominance of TV in the household to rethink their business and programming models.

The television industry is struggling mightily to avoid what has happened to the pop music and newspaper industries. The TV business is wrestling with the same question as most of the corporate media world: who will control content? It's a hard lesson to learn, but the rules of engagement really are changing.

It will be really interesting to see what network television looks like in about ten years. There is no doubt that it will look different, but it will largely be outside forces--the ways our viewing and media behaviors shift--that provoke change: everything from rethinking the prime-time schedule (NBC's decision to decrease scripted dramas and the impending Leno experiment) to the scaling back of the up-front presentations that once defined the industry's premium status among media buyers.

The biggest thing the industry has to realize is that it can no longer control content or our viewing habits as it did in the past. It took a while, but the networks began putting their shows online and making them available as downloads. Hulu--a network response to the rise of YouTube--has shown signs of early success for long-format online video. But there is still a debate within the industry over this question of control: should the network partners in Hulu make their content exclusive or, as some contend, make it available everywhere? I think it's clear that if network TV is to have a meaningful future, it will have to permit its audience not only to access content across multiple platforms but also to shape and influence that content.

You question the argument that digital media has had an anti-social impact on young people. Are there ways that these new media technologies and practices have made us "too social"?

I think so. Still, I realize that the idea of being "too social" is peculiar. Here is what I mean. The assertion that the Web and mobile phones are making us less social, caring, and involved with others is baffling when you consider the preponderance of evidence that actually compels a substantially different question: is today's "always-on" environment making us too social, too connected, and too involved in other people's lives?

In an "always on" world we are constantly communicating with each other via social network sites and mobile phones. It was interesting to learn that part of the initial appeal of Facebook among college students, for example, was the opportunity for constant status updates as well as the chance to gaze into the backstage world of friends and acquaintances. Young college students consistently made references to what they called "e-stalking"--that is, the degree to which their peers frequently use social-network sites to track people's lives, activities, and relationships. Twitter, and what Clive Thompson refers to as "ambient awareness," is another example of a technology that promotes a desire to be in constant connection with others.

In the digital age the idea of being out of touch or disconnected from family and friends is practically obsolete. No matter where we are--in class, at work, driving, or on vacation--the idea of being connected to our social networks is now a constant opportunity and, quite frankly, a constant challenge.

Rather than worrying about the likelihood of becoming anti-social, I wonder if the reverse outcome--being too social--is a more legitimate concern. Talk to teachers in high school and you will learn that students are constantly connecting via their mobile phones while sitting in the classroom. Talk to university professors and you will find a growing belief that students are constantly connecting with each other via platforms like Facebook while sitting in class. Again, it's the idea that we are using these emerging technologies in ways that are inventively social and, dare I say, excessively social.

S. Craig Watkins teaches in the departments of Radio-Television-Film and Sociology and the Center for African and African American Studies at the University of Texas at Austin.

His new book, The Young and the Digital: What the Migration to Social Network Sites, Games, and Anytime, Anywhere Media Means for Our Future (Beacon) explores young people's dynamic engagement with social media, online games, and mobile phones. Craig participated in the MacArthur Foundation Series on Youth, Digital Media and Learning. His work on this groundbreaking project focuses on race, learning, and the growing culture of gaming. He has been invited to be a Research Fellow at the Center for Advanced Study in the Behavioral Sciences (Stanford).

Currently, Craig is launching a new digital media research initiative that focuses on the use and evolution of social media platforms. For updates on these and other projects visit theyoungandthedigital.com.

Highlights from My Conversation With J. Michael Straczynski

Late last spring, I moderated a public lecture and interview with J. Michael Straczynski (JMS), the writer and producer known for his contributions to television (Babylon 5), comics (Thor, The Twelve), and film (Changeling). Straczynski was speaking as part of the Julius Schwartz Lecture Series, which MIT hosts in tribute to the long-time DC Comics editor who spent his lifetime supporting genre entertainment. Straczynski was, as always, engaging in addressing questions posed by me or by members of the MIT audience. The discussion ranged across his career, covering everything from his experiences interacting with fans online to the challenges of sustaining continuity across the full run of a complex science fiction series, and from his early work on animated series such as He-Man and Ghost Busters and what he learned from Rod Serling and Norman Corwin to his forthcoming work on Ninja Assassin and Lensman.

The Comparative Media Studies program recently posted videos of the full event online. They are broken into three parts: the first features Straczynski's opening remarks to the audience, which center on the importance of being willing to risk failure in order to achieve creative rewards; the second features my one-on-one interview with Straczynski; and the third features the question-and-answer period with the audience.

Altogether, the original program ran for two and a half hours, thanks to the persistence of the audience and the endurance of the speaker. The webcast version offers extensive highlights from the significantly longer exchange.

Today, I thought I would share some highlights from the exchange with you. In this first segment from the audience question-and-answer period, JMS speaks about how the obsessive attention to detail that allows him, as showrunner, to preserve continuity on Babylon 5 has been core to his personality since childhood, even though he has not always been rewarded for it.

Here, JMS offers his predictions about what serialized television drama will be like five years from now and it sounds very much like what many of us are calling transmedia entertainment -- a form which breaks down the barriers between platforms and taps into the desire of audiences to more actively participate in the life of the franchise.

Here, I asked him about the persistence of themes of religion across his writing for Babylon 5, Jeremiah, and Twilight Zone. He describes it in terms of playing fair with his characters and his audiences.

JMS speaks about the "breakthroughs" Babylon 5 made in its representations of alien cultures on American science fiction television.

JMS explores how the innovations of Babylon 5 reflected his own tastes and interests as a fan of British television SF series such as Doctor Who, Blake's 7, and The Prisoner.

These segments do not begin to scratch the surface. There's a lot more to learn from this gifted creative artist who has done substantive work across multiple media and genres.

The Struggle Over Local Media: An Interview With Eric Klinenberg (Part One)

Earlier this summer, I moderated a panel on "News, Nerds and Nabes: How Will Future Generations of Americans Learn About the Local?" as part of a conference which the MIT Center for Future Civic Media hosted for the Knight Foundation. My panelists were Alberto Ibargüen, president and CEO of the John S. and James L. Knight Foundation, and Eric Klinenberg, professor of sociology at New York University and author of Fighting for Air: The Battle to Control America's Media. Our topic of discussion was the crisis in American local media -- particularly the decline of local newspapers. In this exchange, I tried to take the panelists through core assumptions about the value of local media, the current threats it confronts, and possible scenarios through which citizens could play a more active role in reshaping the flow of information in their communities.

Following from conversations we had at the conference, Klinenberg agreed to be interviewed for this blog. His book, Fighting for Air, emerged prior to the increased public visibility which has surrounded these issues and so it may not be fully on the radar of many invested in rethinking the infrastructure for civic media. I'd gotten to know Eric through our mutual participation in a series of conversations hosted by the Aspen Institute on media policy and was delighted to have the chance to share his perspective with the readers of this blog. In the conversation that follows, we not only discuss issues surrounding local media but also talk a little bit about the cultural politics of media reform.

You published Fighting For Air almost two years ago. How would you evaluate the state of local media now as opposed to then?

Let's start counter-intuitively, with some good news: There's actually strong demand for local news and information. We all know that paying subscribers of print newspapers are an endangered species, that fewer of us watch local news on TV, and that, except for a few public stations, local news and music on the radio died in the 1990s. But local content is flourishing online. For instance, no matter where you're reading this, odds are that the overall readership for your local paper is higher than it has been in years. The problem is that fewer of us are willing to pay for journalism, and now, as a consequence, stories that you may want or need to know are beginning to disappear.

Some have faith that the supply of local reporters who do primary journalism will be replenished by new players in the emerging media eco-system. Today they can point to any number of innovative local news websites, from Crosscut to Voice of San Diego, which offer a glimpse of the world after newspapers. Others go further, arguing that the next generation of news outlets will be better than the one that is now dying off. After all, how many of us were satisfied with our local news options before the current media crisis?

I'm not persuaded. At the very least, it would be hard to argue that bloggers and citizen journalists have already replaced the beat reporters who, not long ago, were the best watchdogs we had at the Statehouse, City Hall, the school system, the local business scene, and the like. And what a time to lose them! The federal government is spending trillions of dollars on an economic bailout and stimulus package, and much of this money will go directly to states, municipalities, or the private sector. Does anyone trust them to police themselves, especially now that so few reporters are covering them?

I agree with David Simon (creator of The Wire), who recently told Congress that there's never been a better time to be a corrupt politician.

Right now, the focus is on the closing or threatened closing of a number of local newspapers around the country. Yet, Fighting for Air situates this decline in local newspapers in a larger context where the consolidation of media ownership has also impacted local radio and television. To what extent is the current concern about newspapers linked to that larger set of trends?

Directly. First of all, key elements of the journalism crisis pre-date the broader economic crisis. Take the loss of reporters. The layoffs started when big chains - Gannett is the classic example - began buying up papers throughout the country, driving out their smaller competitors (sometimes by violating antitrust law, as Fighting for Air reports), and then slashing their own editorial staffs to raise revenue. As monopolies, newspapers were fantastically lucrative, earning annual profit margins of 20, 30, or 40 percent, while the typical Fortune 500 company was getting a margin of 5 or 6 percent. The most entrepreneurial companies - Gannett, Tribune, and Knight Ridder, to name a few - went on feeding frenzies. They borrowed heavily to finance acquisitions of new papers, television stations, and all kinds of entertainment enterprises. They wanted to become giants, but to do so they had to load up with enormous debts.

As we now know, many of the properties they acquired turned out to be losers. Today newspapers have plenty of competitors for revenue. They've lost most of their classifieds. Their advertisers are cutting back, or posting ads online at a fraction of the price they used to pay to be in print. Their audience is refusing to subscribe if they can get content for free. The local TV stations they purchased are also in trouble. And the industry's technological fantasy, that they could merge print, TV, and Internet reporters into efficient and more profitable multimedia operations, just hasn't panned out.

The challenges of transforming newspaper companies for the digital age are formidable. The current economic climate is brutal, especially because the most reliable sources of newspaper ad dollars - car dealers, real estate developers, and department stores, to name a few - are all on life support. But what's driving newspaper companies over the edge is that they cannot just deal with these crises. They also have to service their crippling debts, and some just can't pull it off.

Look at what's happened to the great newspaper chains: Knight Ridder is out of business. Tribune is bankrupt. Gannett may well follow. Even the New York Times is teetering. When we autopsy these great corporations, the rise of the Internet or the recession may well look like the primary causes of death. Less visible, but equally lethal, is the self-inflicted damage done by their own executives. They weren't satisfied to run newspapers. They wanted to run conglomerates. And now we are all paying the price.

Throughout your book, you keep returning to the question of how local communities respond to disasters -- from storms to chemical leaks. Can you use that problem as an example to walk through some scenarios for how local communities may receive information in the future?

Disasters have always shaped U.S. media policy. Miscommunication on the airwaves after the Titanic went down helped to inspire the nation's first broadcast regulations, and (as the opening of Fighting for Air reports) a dramatic failure of communication after a toxic spill in Minot, North Dakota, triggered the most important development in recent media policy history: the emergence of millions of media activists, who collectively helped block a radical de-regulatory push from President Bush's appointees on the FCC.

Since the Cold War, the core public service responsibility of American broadcasters has involved issuing alerts during emergencies (who doesn't remember those radio announcements saying, "This is a test of the Emergency Broadcasting System. It is only a test"?), and then reporting on the aftermath. But that system has broken down, in part because of technological failures, and in part because digital voice tracking systems have replaced so many of the live human beings who once worked as radio reporters and DJs. Given our tempestuous climate, today all of us should know where we would turn for information if disaster strikes. But if the power goes out and your Internet service shuts down, what are you going to do?

In theory, mobile communications technologies are ideal for circulating emergency alerts (with, say, a reverse 911 program) and urgent news items. In practice, however, they haven't worked because our communications infrastructure is so shoddy. If you were in New Orleans during Katrina or New York City on 9/11, you were much better served by a battery-operated radio than by a cell phone. There are many lessons from these events, and one of them is that "securing the homeland" (as our federal officials like to say) is going to require a far greater investment in building an information infrastructure than we are currently making.

Eric Klinenberg is Professor of Sociology at New York University. His first book, Heat Wave: A Social Autopsy of Disaster in Chicago, won six scholarly and literary prizes (as well as a Favorite Book selection from the Chicago Tribune). A theatrical adaptation of Heat Wave premiered in Chicago in 2008, and a feature documentary based on the book is currently in production.

Klinenberg's second book, Fighting for Air: The Battle to Control America's Media, was called "politically passionate and intellectually serious" by the Columbia Journalism Review. Since its publication, he has testified before the Federal Communications Commission and briefed the U.S. Congress on his findings.

Klinenberg is currently working on two new projects. One, a study of the problem of urban security, examines the rise of disaster expertise, the range of policy responses to emerging concerns about urban risk and vulnerability, and the challenge of cultivating a culture of preparedness. The other project is a multi-year study of the extraordinary rise in living alone. He reported on parts of this research in a recent story for NPR's This American Life, and is now working on a book, Alone in America, which will be published by The Penguin Press.

In addition to his books and scholarly articles, Klinenberg runs the NYU Urban Studies seminar, and writes for popular publications such as The New York Times Magazine, Rolling Stone, The London Review of Books, The Nation, The Washington Post, Mother Jones, The Guardian, Le Monde Diplomatique, and Slate.

How "Dumbledore's Army" Is Transforming Our World: An Interview with the HP Alliance's Andrew Slack (Part Two)

So you're using a language of play, of fantasy, of humor to talk about political change? Much of the time, political leaders deploy a much more serious-minded, policy-wonky language. What do you think are the implications of changing the myths and metaphors we use to talk about political change?

I think it's so freaking important to break things down for people in a way that they can understand. We get into this wonky-talk. There are so many organizations doing amazing things, and they mobilize their membership really well - but it doesn't connect to young people. Young people, by and large, care about issues like genocide. They care about issues like poverty, discrimination, environment. They want to be engaged in these things, but the people who are going to be inviting them to engage, have to be thinking about "how do I authentically talk from my heart to this young person in a way that's authentic to their experience and to our shared experience?" One of the reasons why I was successful in beginning the Harry Potter Alliance is because I'm such a hardcore Harry Potter fan. Had I not been such a passionate Harry Potter fan, had I not been caring about this myth so much myself, I wouldn't have been able to translate the message as well.

And so it's important, I think, when talking with people to find out what you have in common, what you're both passionate about, and then to translate that into the real world in a way that makes sense. Activism should be fun. Activism is fun, but of course, the issues can get so heavy. We can get paralyzed by a sense of guilt, not wanting to even look at the problems because they seem so big. And if we do look at them, we often ask, "how can I go on with my life?" This is similar to people in Harry Potter saying, "I can't say Voldemort's name. I'm too scared to even say his name, so I say you-know-who." In our world we think, "I can't say AIDS. I can't say poverty. I can't say genocide because if I open my eyes, I'll never be able to look away, and it will ruin my life." And that's not a helpful attitude for anybody. We have to learn how to say the name Voldemort in stride, and how to say these words - genocide, etc. - in stride, and not get caught in this idea that we have to fix it all. We can be part of a larger community playing our part. And that experience can be empowering and fun.

We had a meeting a couple days ago - a conference call. It was for something called Stand Fast. We're working with this amazing organization called STAND, which refers to itself as the student arm of the anti-genocide movement, and they are building a constituency of students across the world who are standing up against genocide in Darfur and now against ethnic cleansing in Eastern Burma. They are funding a civilian protection program in Darfur, where $3.00 protects one woman from being raped for a whole week, and $5.00 protects a whole family in Eastern Burma by providing them with radios. And this is such an empowering concept because you can say to a young person, 'instead of going to Starbucks for a latte or going to a movie, on this particular date we're all going to give up some sort of luxury item together, and $10.00 will fund the protection of one woman in Darfur for a week and a whole family in Eastern Burma - just $10.' A young person can understand that, can grasp that, and can also understand that this is not just about charity - it's not just about your money. It's a political statement when 15 year olds are protecting the lives of people in Darfur and Eastern Burma because their governments have been unable to do it regardless of how many resources they have. That is a political statement, and so we talk about that. But here's how we did it - we got the leaders of the Harry Potter fan community together: the biggest fan websites - the Leaky Cauldron, Mugglenet - and the biggest wizard rock bands. We got them all together to make an announcement that we were going to have a live conference call that everyone could join. We had over 200 people come on the conference call on short notice to talk about this one day when we'd all be donating - December 3rd, as it happened. And people can still do this at theHPAlliance.org/civilianprotection.
But - and here's where part of the fantasy comes in - we didn't just call it a conference call. We called it a meeting of Dumbledore's Army. We announced that we were going to have a Dumbledore's Army meeting, that we were going into a Room of Requirement, where you're given a code to get in. You press pound, and you're in the Room of Requirement. We talked as though we were in the Room of Requirement, and just as Harry got up and taught people how to do this, we were going to talk to them about the issues. All the speakers were briefed on what to say and how to talk about the issue - but each did it from their own place and from what they're passionate about. And it was just incredible. The response we've had from the people on the call was unbelievable. People giving up smoking. People giving up coffee. People saying, "I'm taking half the money I would have spent on Christmas and giving it to this. And I'm going to tell all my family that the reason I'm not giving them as much this year is because I gave it to people who need it in Darfur and Burma, and I'm sure they'll be proud of me. And I feel so proud of myself right now."

It was an amazing experience, but it was done through fantasy. We didn't just say we're like Harry. We actually pretended that we are in a Room of Requirement. We are Dumbledore's Army, and we're doing it. And it was really empowering last year when J. K. Rowling said that this is truly an organization that is fighting for the same kinds of values that Dumbledore's Army fought for in the books, and to everyone involved in this organization, the world needs more people like you. And it was a real boost for our morale, and it was an incredible thing to get a message like that from one of our favorite authors.

You've already started down this path - so why don't you say a little more about how the fan community provides part of the infrastructure for something like the HP Alliance?

Yeah, it couldn't happen without the fan community. When we started, I was blogging about these ideas - about the parallels between discrimination in Harry Potter and discrimination based on race or sexuality in our world. Or about political prisoners in Harry Potter and political prisoners in our world. About ignoring Voldemort's return, and ignoring the genocide in Darfur in our world. So I was blogging about this, but no one was reading my blog. You know, that wasn't really taking off too fast. Then I met Paul and Joe deGeorge of the wizard rock band Harry and the Potters. These are two guys who started a band where they sang from the perspective of Harry Potter. They still do. They loved the idea of a Dumbledore's Army for the real world, and soon enough we began brainstorming ideas - and I took my blog posts, where I provided action alerts for how people could be like Harry and the members of Dumbledore's Army, and they reposted them on their Myspace page. Their Myspace at the time was going out to about 40,000 or 50,000 profiles. Now it's going out to about 90,000 Myspace profiles. Soon other musicians began to form wizard rock bands - bands based on other characters in the books. Draco and the Malfoys were the bad guy band. The Whomping Willows were based on a tree at Hogwarts. The Moaning Myrtles - there are so many of these bands, and they all began to repost, collectively, the messages I was writing. Soon, through these wizard rock bands, we were communicating with over 100,000 Myspace profiles, and then the biggest Harry Potter fan sites wanted to be a part of it as well, because this is a community that is just so incredibly enthusiastic and idealistic - it believes in the values in Harry Potter about love and social change and the values in Amnesty - and they began to post what we were doing.

And they put up our first podcast right before Deathly Hallows, the last book, came out. Thanks to their putting it on their podcast feed at the time, at the peak of Harry Potter's popularity, that podcast was downloaded over 110,000 times, and STAND, one of our partner organizations, saw a huge spike in involvement that month thanks to our efforts. They saw a 40% increase in high school chapter sign-ups and over a 50% increase in calls to their hotline, 1-800-GENOCIDE, compared to a regular two-week period in July. This year the wizard rock bands and Mugglenet posted a special project that we were doing with a group in the UK called Aegis Trust. Aegis Trust works on all sorts of genocide remembrance issues around the Holocaust and around Rwanda, but they had a special project where they were sending letters to the United Nations, asking the Security Council to do something about war criminals who were being given protection and impunity in Sudan, and they ended up sending 10,000 letters to the UN Security Council. Of those 10,000 letters, over three-quarters came from the Harry Potter Alliance. We weren't members of government. They were getting a lot of members of governments to write. We got young people. We brought Dumbledore's Army. The Harry Potter Alliance - we have about 3,500 people on our e-mail list. We have about 50 chapters. We have about 12,000 Myspace members and about 1,500 Facebook members, but we could not have done that without this larger network of wizard rock bands sending it out and of fan sites posting - here's what Dumbledore's Army is doing now. Here's what the Harry Potter Alliance is doing now. We're all part of this alliance.
Let's all step up to the plate, and even though we reach sometimes about 100,000 people, getting about 8,000 signatures, that's almost 1 in 10 of who we're reaching, and that's a lot as far as action goes because different people are engaged in different ways through our organization.

So that's just one example. In the last year, we've raised well over $15,000 from small donations to fund the protection of thousands of women in Darfur and villagers in Eastern Burma.

In the process we educate young people through podcast interviews with survivors of the 1994 Rwandan genocide and with policy experts, as well as through partnerships with groups like the Genocide Intervention Network and its student arm STAND, the ENOUGH Project, Amnesty International, Aegis Trust, and several other human rights organizations.

And now we are building these chapters, and we want them to exist in schools and after-school programs. And we want to help shape curriculum on how Social Studies and English are taught, if schools would be open to it.

At the same time, you've been able to build an alliance with some very traditional political organizations and governmental leaders. Could you say a little bit of how they've responded to the Harry Potter Alliance approach?

When I first started calling traditional organizations to let them know that I wanted to help them, I was very afraid that they were going to hang up when I told them the name of the organization is the Harry Potter Alliance. And if I said HP Alliance, they would think it was the Hewlett-Packard Alliance. In fact, one of our board members has been getting mail addressed to the Hewlett-Packard Alliance. We've never referred to ourselves as the Hewlett-Packard Alliance, but people see HP, and they think Hewlett-Packard. (laughter) And that's an alliance I don't want to be part of. So (laughter) when I tell the organizations at first who we are, there's this initial insecurity that I have about how they're going to react, and at first that insecurity proved to be warranted because they didn't know what to do with a group named after a fictional young adult book series - plus, we had no track record. Though despite some challenges here and there, I must say that I was actually impressed with how open-minded some people were. I think the best example of this is the co-founder of the ENOUGH Project, John Prendergast. John is a policy expert on issues of international crisis and truly is a celebrated activist. But John actively looks for outside-the-box ideas. When I met him in 2005 and told him about our new organization, my heart was pounding with nerves, and he looked at me very intensely and basically said, "Dude. Comic books turned me into an activist. The least I can do is mention this in the book I'm writing with Cheadle." And that's Don Cheadle, who starred in Hotel Rwanda. And this was crazy to me. And we are in that book, which was a New York Times best seller. It's called Not On Our Watch: The Mission To End Genocide in Darfur and Beyond, and it's an excellent book.

But now when I call up organizations to form coalitions and partnerships, I can tell them that we can get thousands of people to see what they're doing. This strategy is very important to us: connecting Harry Potter fans to NGOs that are doing impressive work. We see they need more people, and we provide them with the people. We tell them, 'Look, you know Harry Potter, and you know there's a lot of enthusiasm here. We can channel some of that enthusiasm to this noble work that you're doing by just using examples from the books and this incredible community of people - and we've been in Time magazine, and we've been in The Los Angeles Times.' So you know that sort of helps them take us more seriously now. Now they want the Harry Potter Alliance to be involved, and sometimes I have to kind of pinch myself that they're coming to us - and there have been a couple of examples of them paying us as consultants to help them with recruiting young people to become part of their movement. The best example of that has been with our efforts to get young people educated on the issue of media reform.

We've worked with an organization called Free Press, which can be found at freepress.net - Free Press leads a group called the Stop Big Media coalition. And we have a whole campaign where we compare things in Harry Potter that involve media consolidation to media consolidation in our world. Most people don't know much about media consolidation, but you begin to see it when you look at how minorities are not represented fairly in the media: ethnic and racial minorities make up about a third of the US population, and they own, I believe, less than 3% of commercial TV. Women and minorities make up about 66% of this country, and yet are on television news about 12% of the time. What we see on TV, what we are shown visually, what is defined as "normal" in our culture are white men. The problem here is that the Federal Communications Commission has stacked the deck in favor of a handful of conglomerates that own most media in any given city. And this wipes out independent local media. And we want the FCC to change that, because it affects our outlook on race, it affects our outlook on our own communities, it even affects how foreign news like the genocide in Darfur is covered. The big conglomerates have cut foreign news by around 80% since the 1980s and replaced it with celebrity gossip - which would explain why Britney Spears is covered more than a genocide that could be stopped if the political will were there.

This issue has gotten our membership really fired up, and we say what media reform activists always say: "Whatever your number one issue is, media reform should be your number two issue, because your issue can't be communicated if the media is not free." It's been really exciting - but yeah, so these traditional organizations, whether it's the Save Darfur Coalition and the ENOUGH Project, STAND and the Genocide Intervention Network and Aegis Trust - all organizations that work on genocide-related issues - or Free Press or the No on Proposition 8 campaign, which we worked on. We recently did something called Wizard Rock the Vote, where we registered close to 900 people - I think they were almost all new voters - at wizard rock concerts across the country and online, and that was in partnership with the organization Rock the Vote. They loved us, and it's a lot of fun. It's a lot of fun for them, too, because these organizations have staff members who are Harry Potter fans. And I personally have put out a couple of videos satirizing Wal-Mart, and because of this fan base, we were able to get two of the videos over 2 million views on YouTube. It just sped out of control, and I mean it's incredible. I call it cultural acupuncture: you can take something where there's a lot of energy and translate it to something else. A lot of psychic energy - psychological energy - is being placed on something, and you move it to make things healthier. It's a remarkable thing to see what we can do, and for teachers and youth workers, I think it's really important to think about what your students are interested in.

I think one of the biggest problems with our education system - I mean, I can't stand No Child Left Behind, not just because it hasn't gotten proper funding, but because I wasn't very good at standardized tests in school - and I think they are generally about regurgitating information. I call it Leave No Imagination Recognized. When engaging young people to become civically minded, find out what they care about. If you're working at an after-school program with inner-city youth, find something that's going to speak to inner-city youth. Are they interested in a specific kind of music? A lot of the kids that I've worked with from inner-city environments have been interested in hip-hop, so can you find a teacher who knows about hip-hop and get the kids to take part in a contest that's hip-hop oriented - one that involves research showing that the greatest hip-hop artists out there, not the kind you hear in clubs per se, have reflected what's been going on in their communities and how things can change? That's the real hip-hop. So you really work on that, and do some sort of hip-hop activism through organizations like the League of Young Voters, who oftentimes refer to themselves as the League of Pissed Off Voters - that gets young people engaged. Show them episodes of The Wire, the HBO series, and then talk about the issues of crime, poverty, and drugs that are depicted in that series. And then right after that discussion, begin working on a project together. My idea for The Wire is to show one episode that's an hour; then, the next hour, discuss the issues that are in that episode and how they reflect your own personal life; and in a third hour, start a project that addresses those issues.

So it should start with a piece of art that provokes the discussion, then have the discussion, and then after the discussion, don't leave it there - turn it into action. And that's one way to engage a specific population of young people, but that same method can be replicated for any group of young people, especially if you have access to video equipment. If you have access to video equipment, and the kids know how to write, you can show them how they can produce videos that will be seen by a lot of people, and how there's more to their world than just where they are. Now more than ever, we don't need to be paying lip service to young people about how they can change the world. They can do it today, they can do it right now. If they care about something, they can do it, and they will be better at coming up with a video than the teachers. Find writing teachers. Find acting teachers to help them refine their jokes - make their videos funny or emotionally powerful. Have them interview people in their communities on what they care about. Get that stuff up on YouTube - wherever a young person's voice can be heard by the world. Tom Friedman has a great quote that the only competition that now exists is the one between us and our own imaginations. And now it's purely a matter of getting young people access to these resources, and then getting them to learn how to most effectively make those ideas viral. All you've got to do is get them to care about something, and they'll take care of it from there.

We've talked about a number of new media platforms in all of this-- blogs, podcasts, social network sites, YouTube. How important is that infrastructure of new media to enabling the kind of work that you guys are doing?

Without new media, I don't know what we would be doing. I don't think we would exist. We would be like students at Hogwarts without wands. We would be a club at one or two high schools, which would be fine. It's great to be a club at a high school. But we would probably have a hard time being an organization with 50 really active clubs, which is what we have right now as far as chapters go, and a message that gets out to 100,000 young people in Japan and in places... just all over. We've got kids in Japan who are working on media reform issues in the United States. New media has given us the opportunity to show young people what we always tell them: that they have a voice, that their voice matters. The Harry Potter Alliance communicates with over 100,000 young people across the world. We've gotten into old media: Time magazine, the front cover of The Chicago Tribune's Business section, The Los Angeles Times, etc. None of this could've happened without new media platforms.

Andrew Slack is the founder and executive director of the Harry Potter Alliance where he works on innovative ways to mobilize tens of thousands of Harry Potter fans through a vibrant online community. Andrew has also co-written, acted in, and produced online videos that have been viewed more than 7 million times. He has taught theater workshops and served as a youth worker for children and adolescents throughout the US and Northern Ireland. A Phi Beta Kappa graduate of Brandeis University, Andrew is dedicated to learning and extrapolating how modern myth and new media can transform our lives both personally and collectively.

I am looking for other compelling stories of how fans are becoming activists. If your fandom is doing something to make the world a better place, drop me a note. I will try to feature other projects through my blog in the future.

How "Dumbledore's Army" Is Transforming Our World: An Interview with the HP Alliance's Andrew Slack (Part One)

Last weekend, Cynthia and I drove up to San Francisco, where I spoke about "Learning From and About Fandom" at Azkatraz, a Harry Potter fan convention. The keynote speaker at this year's event was Andrew Slack of the HP Alliance. Slack is a thoughtful young activist whose work explores the intersection between politics and popular culture. He's really helped to inspire some of the research I am going to be doing in the coming year about "fan activism" and how we can build a bridge between participatory culture and democratic participation. I interviewed Slack for the Journal of Media Literacy earlier this year, and I thought this would be a good opportunity to share that interview with my blog readers. Slack's work is gaining greater visibility at the moment because of the release of the new film, including a recent profile in Newsweek magazine (warning -- the piece is typically patronizing and ill-informed about things fannish, but that it exists at all speaks to the impact this group is starting to have in terms of rallying young people to support political change). At the con, Slack spoke about his "What Would Dumbledore Do" campaign, an effort to map out what the "Dumbledore Doctrine" might mean for our contemporary society. You can read more about it here.

The HP Alliance has adopted an unconventional approach to civic engagement -- mobilizing J.K. Rowling's best-selling Harry Potter fantasy novels as a platform for political transformation, linking traditional activist groups together with new-style social networks and with fan communities. Its youthful founder, Andrew Slack, wants to create a "Dumbledore's Army" for the real world, adopting fantastical and playful metaphors rather than the language of insider politics to capture the imagination and change the minds of young Americans. In the process, he is creating a new kind of media literacy education -- one which teaches us to reread and rewrite the contents of popular culture to reverse engineer our society. One can't argue with the success of this group, which has deployed podcasts and Facebook to capture the attention of more than 100,000 people, mobilizing them to contribute to the struggles against genocide in Darfur, the battles for workers' rights at Wal-Mart, or the campaign against Proposition 8 in California.

The Harry Potter novels taught a generation to read and to write (through fan fiction); Harry Potter now may be teaching that same generation how to change their society. The novels depict their youth protagonists questioning adult authority, fighting evil, and standing up for their rights. They offer inspirational messages about empowerment and transformation which can fuel meaningful civic action in our own world. For example, in July 2007, the group worked with the Leaky Cauldron, one of the most popular Harry Potter news sites, to organize house parties around the country focused on increasing awareness of the Sudanese genocide. Participants listened to and discussed a podcast which featured real-world political experts -- such as Joe Wilson, former U.S. ambassador; John Prendergast, senior advisor to the International Crisis Group; and Dot Maver, executive director of the Peace Alliance -- alongside performances by wizard rock groups such as Harry and the Potters, The Whomping Willows, Draco and the Malfoys, and the Parselmouths. The HP Alliance has created a new form of civic engagement which allows participants to reconcile their activist identities with the pleasurable fantasies that brought the fan community together in the first place.

In this interview, Slack spells out what he calls the "Dumbledore Doctrine," explores how J.K. Rowling infused the fantasy novels with what she had learned as an activist for Amnesty International, and describes how the books have become the springboard for his own campaign for social change. Along the way, he offers insights which may be helpful to other groups who want to build a bridge from participatory culture to participatory democracy.

Why don't we begin with the big picture? Can you just describe what the HP Alliance is, and what its core goals are?

The Harry Potter Alliance, or the HP Alliance, is an organization that uses online organizing to educate and mobilize Harry Potter fans to engage in issues around self-empowerment as well as social justice, using parallels from the books. With the help of a whole network of fan sites and Harry Potter-themed bands, we reach about 100,000 people across the world.

The main parallel we draw comes from Harry Potter and the Order of the Phoenix, where Harry starts an underground activist group called "Dumbledore's Army" to wake the Ministry of Magic up to the fact that Voldemort has returned. The HP Alliance strives to be a Dumbledore's Army for the real world, one that is trying to wake the world up to ending the genocide in Darfur.

Recently we have expanded our scope, discussing human rights atrocities in Eastern Burma, and we're going to be incorporating Congo into our vision soon. I'll say more about exactly what we have done regarding these issues in a moment, but the parallels don't stop with this notion of Dumbledore's Army waking the world up to injustice. The Harry Potter books hit on issues of racism toward people who are not so-called "pure-blooded" Wizards, just as our world continues to treat people unequally based on race. House elves are exploited the way that many employers treat their workers, both in sweatshops in developing nations and even in superstores like Wal-Mart. Indigenous groups like the Centaurs are not treated equally, just as Indigenous groups in our world are not treated equally. And just as many in our world feel the need to hide in the closet due to their sexual orientation, a character like Remus Lupin hides in the closet because of his identity as a werewolf, Rubeus Hagrid hides in the closet because of his identity as a half-giant, and Harry Potter is literally forced to live inside a closet because of his identity as a Wizard. With each of these parallels, we talk to young people about ways that we can all be like Harry, Hermione, Ron, and the other members of Dumbledore's Army and work for justice, for equality, and for environments where love and understanding are revered.

The average person we reach is somewhere between the ages of thirteen and twenty-five - very passionate, enthusiastic, and idealistic - but often has very few activist outlets that speak to them. And this is no coincidence. Unfortunately, so much of our culture directed at young people is about asking them to consume. It looks at them as dollar signs, as targets for advertising. But Harry Potter is a great example of a book that hasn't done that. Of course there's merchandising and all that kind of thing, but fundamentally the message of the book is so empowering for young people.

Young people are depicted in the books as often smarter, and more aware of what's happening in the world, than their elders, though there are also some great examples where very wise adults have mentored and supported young people as they have taken action in the world. These books represent a very empowering tool for young people, and young people have taken it into their own hands - creating websites and fan fiction and a whole genre of music called "wizard rock" around Harry Potter. And it's been extraordinary. So we are utilizing all of that energy and momentum to make a difference in the world through social activism. We are essentially asking young people the same question that Harry poses to his fellow members of Dumbledore's Army in the fifth movie: "Every great Wizard in history has started off as nothing more than we are now. If they can do it, why not us?" This is a question that we not only pose to our members; we show them how, right now, they can start working to be those "great Wizards" who can make a real difference in this world - whose imprint can have a value that is loving, meaningful, and nothing short of heroic. And the enthusiasm we've seen from young people is just astounding.

By translating some of the world's most pressing issues into the framework of Harry Potter, we make activism easier to grasp and less intimidating. Often we show our members fun and accessible ways that they can take action and express their passion to make the world better by working with one of our partner NGOs. Not to mention, our chapter members and participants in our forum section come up with their own ideas, which they collaborate on together - so while we often make decisions from the top down, we are also building a way for each member to direct the destiny of what they and the larger organization are working on.

J.K. Rowling used to work with Amnesty International. How do you think that background impacted the books?

Well, there are definite parallels between Amnesty's themes and the themes in Harry Potter. One of the main human rights issues that Amnesty works on is the release of political prisoners.

Harry's godfather, Sirius Black, was a political prisoner. His best friend James Potter and James' wife Lily were murdered and his godson Harry was orphaned. But on top of that trauma, he was accused of committing the murders. Now if he had had a trial, he could have made a case for why he was innocent and how the real killer was still on the loose. But that couldn't happen because the Ministry of Magic had suspended habeas corpus. This all happened at a time of great terror and in times of great terror, governments often lock people away without a fair trial. We need not look very far for that. It's happening right now in our own country. And not only are these prisoners, many of them innocent like Sirius, not only are they locked up without trial, they are subsequently tortured--another issue which Amnesty works hard to stop.

In Harry Potter, the Wizarding prison known as Azkaban is guarded by Dementors. Dementors suck all the happiness from you and leave you in a state of tortured, non-stop panic attack and depression. They literally feed off of the unhappiness in your soul until they suck your soul dry. This is the essence of torture, and this is what has been done to people in Guantanamo Bay and Abu Ghraib and Eastern European prisons that the CIA helped build. People are locked away without a fair trial and then tortured. This is all done under the rationalization that in times of terror, justice must be suspended in the name of freedom. But then the very freedom we profess to stand for gets suspended as well, in the name of preying on people's greatest fears rather than praying for our better angels. And this hurts the cause. A society that becomes a tyranny in order to fight for its freedom has destroyed the very purpose for which it is fighting. And in doing so, such a society gives strength to its opponent. We need not go very far in our research to understand that the torture that our country has committed in Abu Ghraib and Guantanamo Bay has not only been immoral, it has been dreadful on a public relations front. Images of tortured Muslims have become one of al Qaeda's most effective recruiting methods.

And this aspect of a government shooting itself in the foot while selling out its ideals happens in Harry Potter too. After Dumbledore's Army forces the Ministry of Magic to acknowledge Voldemort's return, the Ministry returns to the days when people are no longer given trials. And in order to look like they are making some headway, they arrest an innocent man named Stan Shunpike. They know the guy is innocent. They arrest him anyway, and he ends up being released by the Death Eaters and put under the Imperius Curse, thereby becoming one of the Death Eaters.

So these Amnesty themes - political prisoners getting the right to a fair trial, and an end to torture - run throughout the Harry Potter books, consistent with the values of Amnesty International.

But J.K. Rowling, in her personal work outside of the books, takes that a step further. This can be seen in her charitable work and advocacy on many fronts, including helping children who are caged in Eastern Europe. Beyond this incredible work, there are the words she speaks outside of the confines of the books, and these words help articulate the messages of Harry Potter.

Her commencement speech at Harvard in the spring of 2008 was unbelievable. One of the main themes of the speech was the power of imagination and how we must "imagine better." She said this doesn't necessarily mean imagining a magical world like she has done, but building the capacity to imagine oneself in another person's shoes, and in that speech she talks about her experience at Amnesty International as being formative for her imagination. She got to work with people who were so passionate about imagining themselves in other people's shoes. And she became one of those people - imagining herself in the shoes of political prisoners, in the shoes of people who have fought for democracy under tyranny. There's a horrific story she tells where she is helping somebody who had been in prison, and as she was guiding this person to the airport, she heard a blood-curdling scream. She said she had never heard a scream like it in her life, and it was from a political refugee who had just been informed that, because of his dissident activities in his own country, his mother had been killed. She said it was a scream that will always stay with her. And in talking to the students at Harvard, she was really very, very adamant that those in the United States, which is for now the only world superpower - those of us who have the privilege of education - have both an opportunity and a responsibility to imagine better, and to imagine ourselves in other people's shoes. Let me read her quote directly. She says, "If you choose to use your status and influence to raise your voice on behalf of those who have no voice; if you choose to identify not only with the powerful, but with the powerless; if you retain the ability to imagine yourself into the lives of those who do not have your advantages, then it will not only be your proud families who celebrate your existence, but thousands and millions of people whose reality you have helped transform for the better.
We do not need magic to change the world, we carry all the power we need inside ourselves already: we have the power to imagine better."

What can you tell us about your own relationship to these books? How was the idea of the HP Alliance born?

I already had a very strong interest in the power of a story to grab people and get them more engaged in living a healthier life and contributing in a way that is civically engaged and civic-minded. As a college student at Brandeis University, I got to explore my feelings around this while at a center for peace and reconciliation in Northern Ireland, while interviewing Civil Rights activists in person throughout the US, and while studying at an acting conservatory in London. It was when I graduated from college, however, that I found Harry Potter. I had heard of the books but had little interest in them.

Upon graduating, I was teaching at a creative theater camp, and I was amazed at the way these children discussed and debated Harry Potter - with so much passion. It was insane. I was intimidated to start reading the books; there were just so many of them. Four had been released at the time. The teachers were enthralled by them and urged me to read them.

I was still resistant. And then I started working at the Boys & Girls Club in Cambridge, and I was working with a completely different socio-economic group of kids - racially and ethnically diverse - yet they, too, were lovers of Harry Potter. One of my colleagues at the Boys & Girls Club, of a different racial, ethnic, and socio-economic background from me, was obsessed with the books. She would read them constantly, and I couldn't understand how they could be so great - and finally I asked her to hand me the first book, and she did - and I read that first chapter, and I just started laughing so hard.

The first sentence - "Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much." I was surprised. This is a subversive book that right away begins to indict what I eventually started to call a Muggle-minded attitude -- being obsessed with "normalcy," not being interested in imagination, not being able to see outside of one's self. So I was swept away, right away, and by the end of that first chapter, I turned to this young woman who had handed me the book and I said, 'I think this book just changed my life.' I raced through those first four books. Read them again and again, and I began making personal connections with them for myself. I think when you read a book about a hero, oftentimes you become the hero, and for me, I would see myself as Harry in specific situations - and for issues that I have dealt with in my life around anxiety, fighting Dementors became similar to that. There are a lot of loved ones I have who suffer from addiction, and their struggle with addiction seemed to mirror some characters' struggle to get out of the hold that Voldemort has on them when they follow him as Death Eaters. There's a very addictive quality, and watching what happened to one of the characters and his family around being a Death Eater is interesting, because you see the tragedy of what happens to anyone who has a family member who is an addict, as so many young people do. In the case of Voldemort's followers, it's a cult, but it's still got this very addictive element to it, and I'm sure if you go into areas where there's terrorism in the world, a lot of families - like the ones I met and worked with in Northern Ireland -- experienced that addictive quality. It might not be drug addiction, but having a family member who is in a paramilitary group is a very, very difficult thing to cope with.
Even families that sided with them intellectually couldn't deal with the idea of them being imprisoned and all of the horrible things they were doing.

So these books were speaking to such a broad range of very human experiences - including the wish to live a normal life despite adversity. The wish, in Harry's case, to play Quidditch and Exploding Snap and to have a crush on a girl like Cho Chang or Ginny Weasley - all the while having to contend with darker forces in the world that he is internally connected to. Well, I was just swept away by all of this. And the feeling of the story: Harry Potter brought me to this child-like state where everything was fun. I mean, the books are so fun. What's different about these books from a lot of other fantasy books is how hilarious they are. They're just full of jokes that go into the day-to-day existence of characters, and then all of a sudden we're back into that fantasy realm of suspense that you see in books like the wonderful series His Dark Materials, more commonly known as The Golden Compass books. Harry Potter has all of that, but it has humor to it - I spent years as a comedian, and I really connected to her sense of humor. I really connected to her sense of fantasy and imagination - how utterly playful the books are. So I was connecting to them from the point of view of how well written they were, how fun they were, and how much they spoke to me on a personal level in my own life. But then at the end of the fourth book, I was just amazed at what Dumbledore says to Cornelius Fudge, the Minister of Magic at the time. He says, in the wake of Voldemort's return, we've got to get rid of Dementors, form alliances with those in foreign lands, and end our attitudes of racism. He then gets up in front of the whole school and says that we must be able to say what we're scared of - which I think is essential for young people to do, to vocalize their fears and to name their fears - and that we must understand that Voldemort's greatest gift is spreading discord and enmity. And that's what we see in our world.

With terrorism, it's not just about killing and the number of people they kill. It's also about the fear that they instill in those who survive. And that's the same with Voldemort, and Dumbledore says, we can only combat this discord and enmity with an equally strong bond of friendship and trust. And this is what I call the Dumbledore Doctrine - that, as the band "Harry and the Potters" say, "the greatest weapon we have is love." This can actually translate into policy, and that's really important. And I began thinking, wow, the world needs Dumbledore. The world needs a Dumbledore, and then when I read that fifth book, where Harry starts an activist group named after Dumbledore - Dumbledore's Army - I thought, the world needs a Dumbledore's Army, and I began imagining myself going into the Room of Requirement and meeting with young people as if we were part of Dumbledore's Army - and each of us could be like Harry Potter - could see ourselves in the hero role, not where we're the chosen one to bring down all evil or anything like that, but where each of us plays a valuable part in changing this world, where we are the shapers rather than the spectators of history. I think it's amazing how we in this country with all of our resources have an opportunity to connect with people in our communities as well as people all over the world. And to do so in our relationships but also through volunteering in our communities and service as well as through civic engagement in the political process. That doesn't mean to engage in a partisan fashion, although people can feel free to do that, but the Harry Potter Alliance doesn't advocate for anything in a partisan way. However, we do want people to both volunteer at a local AIDS clinic and advocate for better treatment of AIDS victims in Africa.
We want our young people tutoring underprivileged kids and helping them read, getting them engaged in the Internet and learning those things, but then also challenging the rules of the game that are making it possible for kids to go without food. And challenging our politicians on both sides of the aisle, who need to do something about that.

I think a key part of Harry Potter's popularity is that it is an example of a myth that the world is so hungry for, not just that they are funny books or that they're entertainment or that they're suspenseful or that they help us escape. They do all those things, but these books open our minds and our hearts to benefiting humanity in a way that I think secretly we all know unconsciously needs to happen. And that there's something truly profound about the love that Dumbledore speaks about and the love that Harry has for his friends that ends up being the thing that defeats Voldemort. And we need that love now. Not in any flaky sense of the word, but in a way that comes from deep within us and that we can share from our hearts.

Boy and Girl Wonders: An Interview with Mary Borsellino (Part Two)

You describe a number of recent texts which have drawn implicitly and explicitly on the figure of Robin. I wanted to get you to comment on a few of these. I was surprised for example to see that Dexter had made such significant references to Robin. What do you think is going on there?

Heaven knows! The references to Robin in the Dexter books and TV series are one of the most interesting recent uses of the Robin figure, simply because they're so removed from our ordinary understanding of Robin as a pop figure. Out of all the fantasy figures a serial killer could potentially imagine himself as, why does he return again and again to Robin imagery? It may partly be because Dexter's vigilante training by his adoptive father is such a crucial element in who he is: without that education, he wouldn't be able to thrive in the world, just as Robin is defined by Batman's influence.

It may also relate to the fact that Dexter's origin story is a dark mirror to Robin's: both are orphaned as children and taken in by a crime fighter. Comics to this day experiment with 'what if' scenarios: what if baby Kal-El's capsule had crashed in Russia, things like that. The Dexter novels are almost a what-if of what could happen if Robin's childhood trauma created a sociopath rather than a child hell-bent on stopping bad guys.

What aspects of Robin did Eminem evoke in his "Without Me" music video?

Primarily the daredevil-trickster-troublemaker aspects; he's made a career out of being the village fool who's not scared of saying that the emperor has no clothes. Eminem most obviously borrows Robin's costume and some of the 60s TV show's set pieces -- walking up walls and things like that -- but on a deeper level, Eminem borrows Robin's eternal boyhood, and the freedom that youth brings with it. I think it's really interesting that three of the current musicians whom I cite as drawing most heavily on what Robin represents and offers -- Eminem, Pete Wentz, and Gerard Way -- are all in their thirties, and yet all three are still seen very much as the voice of a generation that's only just over half that age. Eminem's got a teenage daughter and yet he's not yet perceived as a 'grown up' himself. How does he manage that? I think the answer lies partially in the way he employs tropes like Robin in his persona. He's a boy who never grows up.

Given your analysis of the character, which writer do you think has offered us the richest, most nuanced depiction of Robin and why?

This is a tough one to answer, because the nuances of Robin come about because of the opportunity later writers have to build on what earlier writers laid down as foundations. So I could rattle off an answer and say that Devin Grayson's Nightwing/Huntress series was an excellent depiction of the way Robin's sexuality might develop when he reaches adulthood, and what qualities he ends up attracted to in a partner, or that Andersen Gabrych's grasp of what qualities Batman is drawn to in Robins, and why those are exactly the worst qualities for a Gotham vigilante to have, is the stuff of epic gothic tragedy -- but Grayson and Gabrych's especial genius in their work isn't simply telling great stories; it's taking the disparate pieces of such a disjointed history and melding them into a coherent, nuanced whole.

There have been, of course, many attempts to depict Robin outside his/her relationship to Batman -- as a member of the Teen Titans or as an adult figure in his own right. What impact have these efforts had on the public perception of this figure?

I'm not sure that Robin's able to remain Robin all that well once the relationship with Batman is pushed to the back. I love the whole Teen Titans concept, but it and 'Robin' as a role seem to inevitably become mutually exclusive: it was in Teen Titans that Dick Grayson quit being Robin and instead became Nightwing. The Robin of the Teen Titans cartoon became Nightwing, as well, in a storyline set in the future, and there's a strong narrative thread throughout the cartoon of Slade acting almost as a surrogate Batman for Robin to clash with.

Robin with Batman is the protege, the squire, the ward: the student, essentially. Robin with the Teen Titans is no older than Robin with Batman, but with the Teen Titans he's the leader, rather than the student. There's too much cognitive dissonance between the two roles, and so time and time again it breaks down: either Robin quits the Teen Titans, or quits being Robin. Both outcomes have happened numerous times in the comics.

Mary Borsellino is a freelance writer in Melbourne, Australia. She has published essays about subjects such as the shifting portrayals of Batman's childhood family, a feminist critique of the TV show Supernatural, and gender in Neil Gaiman's Sandman comics. She is currently working on a series of YA novels which will begin release later this year and which have been described as 'Twilight for punks'. Mary is the Assistant Editor of the journal Australian Philanthropy.

You can download her book, Boy and Girl Wonders: Robin in Cultural Context here.

Boy and Girl Wonders: An Interview with Mary Borsellino (Part One)

Robin didn't start with Robin. Robin won't end when Robin ends. In fact, it's arguable that Robin's already begun to move on from Robin. In less smartypants language, what I mean is that the ingredients which were brought together to create the character of "Robin," Batman's red-and-green-and-gold-wearing sidekick, were ingredients which already shared numerous common elements. And once Robin could no longer embody these elements, other pop culture arose to take over the character's place.

Or so go the opening paragraphs of Mary Borsellino's fascinating new work, Girl and Boy Wonders: Robin in Cultural Context. The self-published text, which can be downloaded here, explodes with new insights and information about Batman's oft-neglected and marginalized sidekick, the kinds of information that could only come from a dedicated aca-fan. I will be honest that despite being a life-long Batman fan, I had never given that much consideration to Robin's cultural origins, his contributions to the series, or his influence on our culture. Works like William Uricchio and Roberta Pearson's The Many Lives of the Batman or Will Brooker's Batman Unmasked have made significant contributions to our understanding of the mythology around the dark knight, but most of them give short shrift to his "old chum." Borsellino argues that Robin's marginalization, sometimes in response to homophobia, sometimes in response to a desire for a "more mature" caped crusader, is part of his message. The character has special appeal, she argues, for "those readers and viewers who are themselves marginalized."

I checked in with Borsellino recently, asking her to share some of her insights with my readers.

This project emerged in part from your own very active involvement in Project Girl Wonder, which responded to what you saw as DC's neglect of Stephanie Brown. Can you give us some background on this controversy? What were the issues involved? Why was this character so important to you? What was the outcome of the campaign?

Actually, Project Girl Wonder came out of this project. I was so immersed in the potential meanings of all the stuff going on with Robin in comics, and so tuned in to the rapid decline in relevance of DC's mandated interpretation of Robin. The idea of Stephanie Brown as Robin was so fresh and strange as a direction, but was handled so clumsily and with such obvious institutionalised sexism that it was pretty vile to witness, both as a cultural observer and as a fan who's also a feminist.

Essentially, for those not familiar with the character or with Robin's larger back story: when the second Robin, a boy named Jason, died, Batman created a memorial out of his costume in the Batcave. Stephanie was the fourth Robin, and her costume was different to those of the three boys who'd had it before her in that she sewed a red skirt for herself. Just a few months after her first issue as Robin was released, Stephanie was tortured with a power drill by a villain, and then died with Batman at her bedside.

The sexualised violence alone was pretty vomitous, but what made it so, so much worse for me was that Batman promptly forgot her. DC's Editor in Chief had the gall to respond to questions of how her death would affect future stories by saying that her loss would continue to impact the stories of the heroes -- how sick is that? Not only is the statement clearly untrue, since the comics were chugging along their merry way with no mention of her or her death, but it was also an example of the ingrained sexism of so much of our culture. Stephanie herself was a hero, and had been a hero for more than a decade's worth of comics, but the Editor's statement made it clear that he only thought of male characters as heroes, and the females as catalysts for those stories. It was a very clear example of the Women in Refrigerators trope, which has been a problem with superhero comics for far, far too long.

Long story short, I got together with a few like-minded comics fans and set out to petition DC Comics into giving Stephanie a memorial like Jason's -- to acknowledge that she was just as much a hero, and just as much Robin, as any of the boys. It made such a clear and striking image: a costume in a memorial case, just like Jason's now-iconic one, but this time with a little red skirt on it as well. We couldn't have asked for a better logo for our cause.

We were lucky enough to have some invaluable help, both outside comics and inside. Shannon Cochran wrote a wonderful, in-depth article about the situation for Bitch magazine; we were a Yahoo site of the day; the webcomic Shortpacked ran a sharply funny strip about it all; and several comics writers working for DC -- Geoff Johns and Grant Morrison, in particular -- dropped references to the absence/potential presence of a memorial case for Stephanie into comics.

In the end, DC glossed it all over by having a storyline where Stephanie shows up, miraculously alive this whole time, and having the current Robin say to Batman "oh! you always knew she was alive! no wonder you never made her a memorial case!". Despite the fact that stories in the interim had featured Stephanie's death, autopsy, burial, and appearances as a spirit in the afterlife. Nope, Batman knew she was alive the whole time! Good job with the damage control there, DC.

Still, a live heroine's better than a dead one any day, so I count the whole thing as a victory in the end.

Critics have written a fair amount about how Batman's persona was inspired by earlier popular heroes, including Sherlock Holmes and the Douglas Fairbanks version of Zorro. What popular figures helped to inform the initial conception of Robin?

Within comics, the most direct inspiration was Junior, who was Dick Tracy's young offsider. Robin was the first time that boy helper figure was put into a superhero costume, but Junior was playing the detective's assistant role years before, and screwing up in all the same ways Robin so often does, ending up as a hostage and things like that. More widely, you've gone halfway to answering your own question -- Sherlock Holmes had Watson there, to listen to his theories and help solve the mysteries. The sidekick role has been around a long time, and provided the template for Robin's role.

Culturally, the figure of the daredevil boy hero is an ancient one, dating back through epic literature of the middle ages to the statuary and myths of Greece and Rome. Robin just gave the archetype a new costume.

You suggest that the marginalization of Robin as a character has helped to make the sidekick a particularly potent point of reference for other groups who also feel marginalized. Explain.

The two examples I use in my book are queer fans and women, though I also know readers who've used this same framework for class and race. As a queer person, or a woman, or someone of a marginalised socio-economic background, or a non-Caucasian person, it's often necessary to perform a negotiated reading on a text before there's any way to identify with any character within it. Rather than being able to identify an obvious and overt avatar within the text, a viewer in such a position has to use cues and clues to find an equivalent through metaphor a lot of the time.

A recent example of this is Spock and Uhura in the new Star Trek movie. Uhura has always been vitally important as a role model to women of colour -- even Martin Luther King Jr thought so. And she still fulfils that role in the new movie. The narrative themes of racial discrimination and of the conflicts which dual cultural heritage can bring with it are in the movie as well, but they're not the story of Uhura, because Gene Roddenberry was committed to the idea of a future where the crew of a starship could be mixed-race without remark. The character who offers these is Spock: he's the one with all the 'outsider' cues in his makeup, which I think goes part of the way to understanding why the recent Star Trek movie has seen a massive re-emergence of Kirk/Spock slash on the fannish landscape: female fans and those seeking a queer reading are drawn to that sense of marginalisation, of the ongoing fight to be recognised as present and worthy.

I got off-topic a bit there, sorry -- my reason for bringing up Spock and Uhura was to demonstrate that 'otherness' as part of a character's construction isn't necessarily bound directly to traits such as race or gender. It can stand for them, but does so obliquely. And Robin, by being put down and rejected by wave after wave of commentators and creators, has come to embody anything that's been sidelined or disregarded, anything that's rejected in the relentless quest to make Batman as heteronormatively masculine and dour as possible. Just as those who fight against personal discrimination can find an avatar in Spock, those who struggle to re-establish their voice in dialogues where they've been silenced can find an avatar in the way Robin is pushed out of the way by official texts.

Many know of the ways that DC has struggled with the homophobia surrounding the relationship between Batman and Robin. How has this concern shaped the deployment of Robin over time? Are there any signs that in an era of legalized gay marriage, our culture may be less anxious about these issues?

We also live in an age of Prop 8, alas. I live in Australia, and both Australia and America recently switched from a longstanding conservative leadership to a potentially more progressive government -- but both Prime Minister Rudd and President Obama have gone on-record as saying that they believe marriage should be between a man and a woman. Progress hasn't yet progressed as far as I'd like to see it go, frankly.

And I think DC Comics is an absolute trainwreck mess at this point, to be even more frank. You only have to look at All Star Batman and Robin, by Frank Miller and Jim Lee, to see what a disaster the company's current concept of a flagship book is. The writing's incredibly sloppy, sexist, homophobic, and unengaging. "That is so queer" is used by Robin as a slur. Batman calls Robin "retarded" and declares himself "the goddamn Batman". It would be hilarious if it wasn't so awful.

It hasn't always been that bad, of course, but right now it appears to me that DC is more anxious than ever about potential gay readings. And then there's Christian Bale, who has stated outright that he'll go on strike if anybody tries to incorporate Robin into the movie franchise. His Batman is so joyless that it's no wonder everybody went starry-eyed for the Joker -- the guy may be a psychopath, but at least he seems to know that running around Gotham City in a stupid outfit is meant to be fun.

You argue that Robin is in many ways a "transgender figure." Explain.

Robin crosses all sorts of imposed gender boundaries, both literal and figurative. Carrie Kelley, for example, the young girl who becomes Robin in Frank Miller's The Dark Knight Returns, is referred to by a news broadcaster as 'the Boy Wonder'; she looks completely androgynous in-costume, and so is assumed to be a boy. Dick Grayson and Tim Drake both assume female identities to go undercover in numerous stories -- Dick even played Bruce's wife on one occasion back in the forties -- and Stephanie Brown's superhero identity before she became a Robin, the Spoiler, is thought to be a boy even by her own father.

Those are just the literal examples of gender transgression. There're also a lot of background cultural cues coming into play, in the way the Robin costume looks, the way different backstories for the Robins are structured, and how sidekicks function in adventure narratives -- all these elements work against the notion of pinning Robin down as definitively male or female as a character; the only classification which really fits is that of being constantly in-motion between options and unclassifiable.

Mary Borsellino is a freelance writer in Melbourne, Australia. She has published essays about subjects such as the shifting portrayals of Batman's childhood family, a feminist critique of the TV show Supernatural, and gender in Neil Gaiman's Sandman comics. She is currently working on a series of YA novels which will begin release later this year and which have been described as 'Twilight for punks'. Mary is the Assistant Editor of the journal Australian Philanthropy.

Risks, Rights, and Responsibilities in the Digital Age: An Interview with Sonia Livingstone (Part Two)

A real strength of your new book, Children and the Internet: Great Expectations and Challenging Realities, is that it combines ethnographic and statistical, qualitative and quantitative approaches. What does each add to our understanding of the issues? Why are they so seldom brought together in the same analysis?

I'm glad you think this is a strength, as it's demanding to do, which may be why many don't do it. The simple answer is that I am committed to the view that qualitative work helps us understand a phenomenon from the perspective of those engaged in it, while quantitative work helps us understand how common, rare or distributed a phenomenon is.

Personally, I was fortunate to have been trained in both approaches, starting out with a rigorous quantitative training before launching into a mixed methods PhD as a contribution to a highly qualitative field of audience research and cultural studies. While I don't argue that all researchers must do everything, I do hope that the insights of both qualitative and quantitative research can be recognised by all; as a field, it seems to me vital to bring these approaches together, even if across rather than within projects.

You begin the book by noting the very different models of childhood which have emerged from psychological and sociological research. How can we reconcile these two paradigms to develop a better perspective on the relationship of youth to their surrounding society?

I hope that the book takes us further in integrating psychological and sociological approaches, for I try to show how they can be complementary. Particularly, I rebut the somewhat stereotyped view that psychologists only consider individuals, and only consider children in terms of 'ages and stages', by pointing to a growing trend to follow Vygotsky's social and materialist psychology rather than the Piagetian approach, for this has much in common with today's thinking about the social nature of technology.

However, this is something I'll continue to think about. It seems important to me, for instance, that few who study children and the internet really understand processes of age and development, tending still to treat all 'children' as equivalent, more comfortable in distinguishing ways that society approaches children of different ages than in distinguishing different approaches, understandings or abilities among children themselves.

One tension which seems to be emerging in the field of youth and digital learning is between a focus on spectacular case studies which show the potentials of online learning and more mundane examples which show typical patterns of use. Where do you fall?

Like many, I have been inspired and excited by the spectacular case studies. Yet when I interviewed children, and in my survey, I was far more struck by how many use the internet in a far more mundane manner, hugely underusing its potential, and often unexcited by what it could do. It was this that led me to urge that we see children's literacy in the context of technological affordances and legibilities. But it also shows me the value of combining and contrasting insights from qualitative and quantitative work. The spectacular cases, of course, point out what could be the future for many children. The mundane realities, however, force the question - whose fault is it that many children don't use the internet in ways that we, or they, consider very exciting or demanding? It also forces the question, what can be done - something I attend to throughout the book, as I'm keen that we don't fall back into a disappointment that blames children themselves.

As you note, there are "competing models" for thinking about what privacy means in this new information environment. How are young people sorting through these different models and making choices about their own disclosures of information?

There's been a fair amount of adult dismay at how young people disclose personal, even intimate information online. In the book, I suggest there are several reasons for this. First, adolescence is a time of experimentation with identity and relationships, and not only is the internet admirably well suited to this but the offline environment is increasingly restrictive, with supervising teachers and worried parents constantly looking over their shoulders.

Second, some of this disclosure is inadvertent - despite their pleasure in social networking, for instance, I found teenagers to struggle with the intricacies of privacy settings, partly because they are fearful of getting it wrong and partly because the settings are clumsily designed and ill-explained, with categories (e.g. top friends, everyone) that don't match the subtlety of youthful friendship categories.

Third, adults are dismayed because they don't share the same sensibilities as young people. I haven't interviewed anyone who doesn't care who knows what about them, but I've interviewed many who think no-one will be interested and so they worry less about what they post, or who take care over what parents or friends can see but are not interested in the responses of perfect strangers.

In other words, young people are operating with some slightly different conceptions of privacy, but certainly they want control over who knows what about them; it's just that they don't wish to hide everything, they can't always figure out how to reveal what to whom, and anyway they wish to experiment and take a few risks.

You reviewed the literature on youth and civic engagement. What did you find? What do you see as the major factors blocking young people from getting more involved in the adult world of politics?

I suggest here that some initiatives are motivated by the challenge of stimulating the alienated, while others assume young people to be already articulate and motivated but lacking structured opportunities to participate. Some aim to enable youth to realise their present rights while others focus instead on preparing them for their future responsibilities.

These diverse motives may result in some confusion in mode of address, target group and, especially, form of participation being encouraged. Children I interview often misinterpret the invitation to engage being held out to them (online and offline) - they can be suspicious of who is inviting them to engage, quickly disappointed that if they do engage, there's often little response or recognition, and they can be concerned that to engage politically may change their image among their peers, for politics is often seen as 'boring' not 'cool'.

In my survey, I found lots of instances where children and young people take the first step - visiting a civic website, signing a petition, showing an interest - but often these lead nowhere, and that seems to be because of the response from adult society. Hence, contrary to the popular discourses that blame young people for their apathy, lack of motivation or interest, I suggest that young people learn early that they are not listened to. Hoping that the internet can enable young people to 'have their say' thus misses the point, for they are not themselves listened to. This is a failure both of effective communication between young people and those who aim to engage them, and a failure of civic or political structures - of the social structures that sustain relations between established power and the polity.

Sonia Livingstone is Professor in the Department of Media and Communications at the London School of Economics and Political Science. She is author or editor of fourteen books and many academic articles and chapters on media audiences, children and the internet, domestic contexts of media use and media literacy. Recent books include Audiences and Publics (2005), The Handbook of New Media (edited, with Leah Lievrouw, Sage, 2006), Media Consumption and Public Engagement (with Nick Couldry and Tim Markham, Palgrave, 2007) and The International Handbook of Children, Media and Culture (edited, with Kirsten Drotner, Sage, 2008). She was President of the International Communication Association 2007-8.

If you've enjoyed this interview, you can hear Sonia Livingstone live and in person this summer at the 2009 Conference of the National Association for Media Literacy Education (NAMLE), to be held August 1-4 in Detroit, MI. Her keynote address for this biennial conference -- the nation's largest, oldest and most prestigious gathering of media literacy educators -- is scheduled for Monday, August 3 at 4:00 pm in the Book Cadillac Hotel in downtown Detroit.

The conference -- four days of non-stop professional development on topics such as teaching critical thinking, gaming, media production, literacy, social networking and more! -- will feature more than sixty events, including keynotes, workshops, screenings, special interest caucuses and roundtable discussions. Among the special events are the launch of the new online Journal of Media Literacy Education, the Modern Media Makers (M3) production camp for high school students, and a celebration of the 50th anniversary of Detroit's famous "Motown Sound."

The conference theme, "Bridging Literacies: Critical Connections in a Digital World" speaks to the educational challenges facing teachers, schools and administrators in helping young people prepare for living all their lives in a 21st century culture. Complete details and online registration are available here.

Risks, Rights, and Responsibilities in the Digital Age: An Interview with Sonia Livingstone (Part One)

The first time I saw Sonia Livingstone speak about her research on the online lives of British teens, we were both part of the program of a conference organized by David Buckingham at the University of London. I was impressed enough by her sober, balanced, no-nonsense approach that I immediately wrote a column for Technology Review about her initiative. Here's part of what I had to say:

A highlight of the conference was London School of Economics professor Sonia Livingstone's announcement of the preliminary findings of a major research initiative called UK Children Go Online. This project involved both quantitative and qualitative studies on the place of new media in the lives of some 1,500 British children (ages 9 to 19) and their parents. The study's goal was to provide data that policymakers and parents could draw on to make decisions about the benefits and risks of expanding youth access to new media. Remember that phrase -- benefits and risks.

According to the study, children were neither as powerful nor as powerless as the two competing myths might suggest. As the Myth of the Digital Generation suggests, children and youth were using the Internet effectively as a resource for doing homework, connecting with friends, and seeking out news and entertainment. At the same time, as the Myth of the Columbine Generation might imply, the adults in these kids' lives tended to underestimate the problems their children encountered online, including the percentage who had unwanted access to pornography, had received harassing messages, or had given out personal information....

As the Livingstone report notes in its conclusion: "Some may read this report and consider the glass half full, finding more education and participation and less pornographic or chat room risk than they had feared. Others may read this report and consider the glass half empty, finding fewer benefits and greater incidence of dangers than they would have hoped for." Unfortunately, many more people will encounter media coverage of the research than will read it directly, and its nuanced findings are almost certainly going to be warped beyond recognition.

The last sentence referred to the ways that the British media had reduced her complicated findings to a few data points about how young people might be accessing pornography online behind their parents' backs.

This week, Sonia Livingstone's latest book, Children and the Internet: Great Expectations and Challenging Realities, is being released by Polity. As with the earlier study, it combines quantitative and qualitative perspectives to give us a compelling picture of how the internet is impacting childhood and family life in the United Kingdom. It will be of immediate relevance for all of us doing work on new media literacies and digital learning, and, beyond that, for all of you who are trying to make sense of the challenges and contradictions of parenting in the digital age. As always, what I admire most about Livingstone is her deft balance: she finds a way to speak to both half-full and half-empty types and help them to more fully appreciate the other's perspective.

Given the ways I observed her ideas getting warped by the British media (read the rest of the Technology Review column for the full story), I wanted to do what I could to make sure her ideas reached a broader public in a more direct fashion. (Not that she needs my help, given her own skills as a public intellectual.) She was kind enough to grant me this interview, during which she talks through some of the core ideas from the book.

In the broadest sense, your book urges parents/educators/adult authorities to help young people to maximize the potentials and avoid the risks involved in moving into the online world. What do you see as the primary benefits and risks here?

My book argues that young people's internet literacy does not yet match the headline image of the intrepid pioneer, but this is not because young people lack imagination or initiative but rather because the institutions that manage their internet access and use are constraining or unsupportive - anxious parents, uncertain teachers, busy politicians, profit-oriented content providers. I've sought to show how young people's enthusiasm, energies and interests are a great starting point for them to maximize the potential the internet could afford them, but they can't do it on their own, for the internet is a resource largely of our - adult - making. And it's full of false promises: it invites learning but is still more skill-and-drill than self-paced or alternative in its approach; it invites civic participation, but political groups still communicate one-way more than two-way, treating the internet more as a broadcast than an interactive medium; and adults celebrate young people's engagement with online information and communication at the same time as seeking to restrict them, worrying about addiction, distraction, and loss of concentration, not to mention the many fears about pornography, race hate and inappropriate sexual contact.

Indeed, in recent years, popular online activities have one by one become fraught with difficulties for young people - chat rooms and social networking sites are closed down because of the risk of paedophiles, music downloading has resulted in legal actions for copyright infringement, educational institutions are increasingly instituting plagiarism procedures, and so forth. So, the internet is not quite as welcoming a place for young people as rhetoric would have one believe. Maybe this can yet be changed!

Risk seems to be a particularly important word for you. How would you define it and what role does the discussion of risk play in contemporary social theory?

I've been intrigued by the argument from Ulrich Beck, Anthony Giddens and others that late modernity can be characterised as 'the risk society' - meaning that we in wealthy western democracies no longer live dominated by natural hazards, or not only by those. But we also live with risks of our own making, risks that we knowingly create and of which we are reflexively aware. Many of the anxieties held about children online exactly fit this concept.

My book tries to show how society has created an internet that knowingly creates new risks for children, both by exacerbating familiar problems because of its speed, connectivity and anonymity (e.g. bullying) and generating new ones (e.g. rendering peer sharing of music illegal). These are precisely risks that reflect our contemporary social anxieties about children's growing independence (in terms of identity, sexuality, consumption) in contemporary society.

As you note, some want to avoid discussion of "risk" because it may help fuel the climate of "moral panic" that surrounds the adoption of new media into homes and schools. Why do you think it is important for those of us who are more sympathetic to youth's online lives to address risks?

I have worried about this a lot, for it is evident to me that, to avoid moral panics (a valid enterprise), many researchers stay right away from any discussion or research on how the internet is associated not only with interesting opportunities but also with a range of risks, from more explicit or violent pornography than was readily available before, to hostile communication on a wider scale than before, and to intimate exchanges that can go wrong or exploit naïve youth within private spaces invisible to parents. I think it's vital that research seeks a balanced picture, examining both the opportunities and the risks, therefore, and I argue that to do this, it's important to understand children's perspectives, to see the risks in their terms and according to their priorities.

Even more difficult, and perhaps unfashionable, I also think that we should question some of children's judgments - they may laugh off exposure to images that may harm them long-term, for example, or they may not realise how the competition to gain numerous online friends makes others feel excluded or hurt.

Last, and I do like to be led in part by the evidence, I have been very struck by the finding that experiences of opportunities and risks are positively associated. Initially, I had thought that when children got engaged in learning or creativity or networking online, they would be more skilled and so know how to avoid the various risks online. But my research made clear that quite the opposite occurs - the more you gain in digital literacy, the more you benefit and the more difficult situations you may come up against.

As I observed before, partly this is about the design of the online environment - to join Facebook, you must disclose personal information, and once you've done that you may receive hostile as well as valuable contacts; to seek out useful health advice, you must search for key words that may result in misleading or manipulative information. And so on. This is why I'm trying to call attention to how young people's literacy must be understood in the context of what I'm calling the legibility of the interface.

You argue that we should be more attentive to the affordances of new media than its impacts. How are you distinguishing between these two approaches?

Many of us have argued for some time now that the concept of 'impacts' seems to treat the internet (or any technology) as if it came from outer space, uninfluenced by human (or social and political) understandings. Of course it doesn't. So, the concept of affordances usefully recognises that the online environment has been conceived, designed and marketed with certain uses and users in mind, and with certain benefits (influence, profits, whatever) going to the producer.

Affordances also recognises that interfaces or technologies don't determine consequences 100%, though they may be influential, strongly guiding or framing or preferring one use or one interpretation over another. That's not to say that I'd rule out all questions of consequences, more that we need to find more subtle ways of asking the questions here. Problematically too, there is still very little research that looks long-term at changes associated with the widespread use of the internet, making it surprisingly hard to say whether, for example, my children's childhood is really so different from mine, and why.

Sonia Livingstone is Professor in the Department of Media and Communications at the London School of Economics and Political Science. She is author or editor of fourteen books and many academic articles and chapters on media audiences, children and the internet, domestic contexts of media use and media literacy. Recent books include Audiences and Publics (2005), The Handbook of New Media (edited, with Leah Lievrouw, Sage, 2006), Media Consumption and Public Engagement (with Nick Couldry and Tim Markham, Palgrave, 2007) and The International Handbook of Children, Media and Culture (edited, with Kirsten Drotner, Sage, 2008). She was President of the International Communication Association 2007-8.

Authoring and Exploring Vast Narratives: An Interview with Pat Harrigan and Noah Wardrip-Fruin (Part Three)

Are the "vast narratives" created under commercial conditions different from some of the avant garde experiments or eccentric art projects (Henry Darger) also discussed in the book? In other words, do artists think about such world building differently removed from the marketplace?

Artistic considerations can be opaque at the best of times, and that's especially true with someone like Darger. But it's probably safe to say that commercial considerations played no part in his mind. His work was obviously a very private, very internal process. As far as we know, no one but he even knew it existed until after he died. But it's impossible not to speculate, isn't it?--why someone would spend their life creating something like In the Realms of the Unreal. He's almost like a Borges character.

But getting back to the commercial considerations: Walter Jon Williams addresses this directly in his Third Person chapter, and goes into some detail about the commercial considerations of shared-world novels and novel franchises, and how they inform his artistic choices in different ways than his single-author series.

Monte Cook and Robin Laws also discuss this in regards to the tabletop RPG industry, and here we get into very interesting areas of artistic choice. Because what a tabletop RPG writer is doing is creating a kind of machine that other people can use to create stories. Speculatively, someone could write an entire RPG system from scratch, for their individual use, but they'd still be playing the system with other people. The primary consideration in any RPG design is: Does it work? In other words, does it create the kind of stories I want it to, in the way I want it to? And because the tabletop RPG hobby is an inherently social one, this question is very, very close to: Will other people want to play it?

Laws' essay touches pretty directly on the commercial considerations that go into publishers' decisions to go with one property or another, or create their own. And Cook's essay focuses on the sequence of choices a gamemaster has to make in order to enact a particular rules system for the players. What we still don't have much of, outside some of the other 2P and 3P essays (Hite, Hindmarch, Glancy, Stafford), are really nitty-gritty analyses of why designers have created particular rules systems. Why does Call of Cthulhu have a "Sanity" mechanism? Well, that's an easy one, but why, for instance, does Dogs in the Vineyard have a dice pool system, with which players "bet," "raise" and "call" against the gamemaster? Why does The Mountain Witch have a "Trust" mechanism? For every example like that, some designer or team of designers balanced genre appropriateness, individual preference, commercial potential, player familiarity, ease, elegance, playability, and on and on.

For comics, as much as we love them, there are serious narrative handicaps to anyone working within one of the established commercial universes. In particular, it's rare that anything ever truly ends in any real sense. Storylines wrap up, series get cancelled, characters die--but the universe spins on. It happens in this way because DC and Marvel can still make money from it. It takes a huge apparatus of creators, editors, printers, distributors, retailers, consumers, etc., to keep these universes functioning.

You see something analogous in MMOs, although in that case it's weighted much more heavily on the creative and consumer ends, with fewer middle steps. But in both MMOs and comics, there's an unslakeable thirst for new content. You can't just stop producing, or the whole thing dries up and blows away. The advantages MMOs have over comics in this regard are: 1) They are much, much more profitable, and 2) Consumers create a large part of the new content themselves, in the form of their characters, inter-character interactions, and user-created emergent storylines. Anyway, all of this exists in the marketplace, not the ivory tower; the final judgment is the commercial one.

Of course, the art world is also a marketplace--and even the competition for faculty positions (which support many of the more interdisciplinary and experimentally-oriented digital media artists) exerts what might be seen as a market-like pressure. But the pressures aren't the same as those for commercially-oriented vast narratives.

Comics and science fiction fans have long stressed continuity as a central organizing principle in vast story worlds. Yet, you close your introduction with the suggestion that continuity is only one of a range of factors structuring our experience of such stories. Can you describe some others?

"Continuity" is a byproduct of telling a bunch of stories within the same setting. If someone writes a stand-alone novel, she doesn't have to worry about it, except in the simplest sense of making sure that a character who dies on page 50 isn't alive again on page 200. It's only when an author writes a series of novels, or comics, or something else, or other people start writing in that world, or it otherwise grows longer and more complex, that continuity becomes an issue. On the most basic level, it's a sort of contract between author and reader, showing that you care enough to keep the details straight (and aren't engaged in a metafictional exercise or parallel-worlds plot). Too much sloppiness in this area breaks the trust and announces the story's fictionality too directly.

That said, in certain genres, like big comics universes, maintaining continuity is hilariously difficult, bordering on impossible. Grant Morrison is probably right when he says that continuity is mostly a distraction in big comics universes, and will be as long as characters are not allowed to age and die away. No one is going to kill off Batman permanently, no matter what happened in Final Crisis 6, just as Barry Allen, Hal Jordan, Oliver Queen, Superman and the others all came back from the dead.

This speaks to a wider problem in comics continuity--without any real endings, and with no meaningful change that can't be revised or done away with at any time, the DC and Marvel universes lack consequences. Any individual storyline might be good or bad, but because they all exist within this ceaseless flow of stories, any narrative power is slowly worn away. One of Pat's favorite DC storylines is Paul Levitz and Keith Giffen's 1984 "Legion of Supervillains" storyline, in which Karate Kid is killed. Now we see that Karate Kid is back in Countdown to Infinite Crisis. What does this do to our appreciation of the original story? Nothing has changed about the text, but now it's been robbed of permanent consequence, and Pat's pleasure in it is diminished. Maybe that's a shallow way of appreciating narrative, but few comics readers will deny that it's a significant part of their enjoyment. And not just comics: the same thing happens in all forms of storytelling. We don't know of any literary critic who appreciates the narrative twist with Mr. Boffin near the end of Our Mutual Friend. You feel cheated; it's arbitrary and it undermines everything that's gone before, and robs the story of what James Wood calls "final seriousness."

This is what made The Dark Knight Returns so powerful, when it was first published. By providing an ending to Batman's story, it cast its shadow both forward and backward over Batman's entire publication history. Suddenly it became possible to read a Batman story in light of where the character was ultimately going. Alan Moore tried to do the same sort of thing--provide a possible ending--for the entire DC universe in his unproduced Twilight of the Superheroes miniseries, a missed opportunity if there ever was one.

Even Agatha Christie recognized this, though her series novels are almost completely continuity-free, with Hercule Poirot and Miss Marple staying essentially static throughout her innumerable novels. But she still wrote Curtain (and kept it in a bank vault for over 30 years, until a few months before her death) to provide an end to Poirot.

Maybe the best approach to comics is to view them, as Grant Morrison seems to, as existing in a sort of permanent mythological or legendary space, in which the importance lies in the relationships between the characters and the ritual reenactment of certain actions, and not in the movement of these characters through time. We're okay with Homer, Aeschylus, and Euripides all giving us versions of the story of the House of Atreus, and we appreciate them on their own merits, as literary instantiations of the same story. We don't spend much time trying to reconcile the discontinuities.

Greg Stafford's 3P chapter discusses the process of distilling multiple sources of the Arthurian stories into a coherent, playable RPG campaign. This was a heroic undertaking, but it was possible because 1) Stafford had final authority to accept, reject, or reconcile discontinuous story elements, and 2) he was not working with a constantly-expanding data set, such as the DC Universe. The question is not so much "Could you coherently reconcile all of DC's continuity?" as, "Why would you bother?" Without meaningful consequence, it's better to view the whole universe as existing in a sort of timeless fugue state, with only transitory consequences.

Incidentally, Doctor Who exhibits a different strange mixture of semi-continuity, with irreconcilable story elements (e.g., the multiple histories of the Daleks) combined with actual, permanent consequences (e.g., the Doctor's regenerations). A lot could be said about this, and what it means for narrative reception, and there's certainly a lot of that discussion in Third Person, but we've gone on a bit long here already.

The issue of the "ending" is a recurring issue in the book, with several essays promising us "my story never ends" or "world without end," while others point to the challenges of sustaining creative integrity given the unpredictable duration of television narratives. Does the idea of a "vast narrative" automatically raise questions about endings and other textual borders?

Perhaps not automatically, given that we're treating as "vast" projects that are both ambitious in scope and yet planned for a particular, bounded shape from early on. But it's a very common move for vast narrative projects to make, and it's probably an inherent part of those that are conceived as productive systems. Why turn the system off? Similarly, those that are connected closely to events in the world beyond their control, or which have important audience contributions, have something in their dynamics that resists not only the hard border (those are intentionally designed away) but also the ending. That's why we've seen audiences attempt to continue projects that the authors bring to an end. But, of course, that's just a current twist on an old phenomenon, one you've also seen in your work on fan cultures.

That said, and though it may betray a little stuffiness, Pat does prefer narratives that seem to have a traditional shape to them, with meaningful endings that pay off everything that's gone before. And Noah thinks this is essential to a certain kind of project, even if some of his favorite fictions (from Mrs. Dalloway to Psychonauts) succeed on different terms. Commonly, comics and television structures work heavily against traditional narrative closure, but for commercial reasons, not even interesting modernist, postmodern, or currently-experimental ones. Which is why it's so exciting to come across something like The Wire, which is a coherent literary work realized in the televisual medium, which until recently Pat at least didn't think possible.

What demands do "vast narratives" place on the people who read them? Is a significant portion of the reading public ready to confront those challenges?

At this point, the question might actually be whether the expanding end of the reading public is willing to take on something that isn't as vast as, say, the Harry Potter or Twilight books. Perhaps it's just our skewed viewpoint, but it seems like large fictional projects, which either start with novels or have them as part of a cross-media environment, are a key way the reading public is growing. This reminds Noah of how his experience of being in the university is changing, now that even graduate students often can't remember a time before the Web very clearly and most students think that games are "obviously" as important a media form as, say, television. Vast possibilities and large interaction spaces now seem a kind of media norm.

That said, the pleasures of our youths--e.g., reading Marvel and DC comics and playing Call of Cthulhu and Champions (not the forthcoming online version)--were pleasures that grew with extended engagement, with developing understanding and elaboration of fictional universes and their characters. Those could be thought of as "demands," but we didn't feel that way about them, and we don't have the sense that people today reading a long series of novels or playing a computer RPG for 50+ hours (without even being completionist) feel that way either.

T-t-t-that's all, folks!

Authoring and Exploring Vast Narratives: An Interview with Pat Harrigan and Noah Wardrip-Fruin (Part Two)

A reader asked me whether the book included a discussion of soap opera, which would seem to meet many criteria of vast narrative, but doesn't fall as squarely in the geek tradition as science fiction series like Doctor Who or superhero comics like Watchmen. Pat does include a brief note about his own experience watching soaps with his grandmother. What do you see as the relationship between "vast narratives" and the serial tradition more generally?

Soap opera is definitely a missed opportunity for us. We had intended to have at least one essay on the subject, but it fell by the wayside as our contributors came aboard and our word count ballooned. We had also intended to have more essays on more purely literary topics; as it stands, Bill McDonald's essay on Thomas Mann seems a little lonely in the middle of all that television. We had wanted at least an essay on Faulkner, probably one on Dickens, and some others. But it's exactly there that Third Person would have started to tip over into more traditional areas of literary history, theory, and narratology. We think one of the strengths of the series is the unexpected juxtaposition of very different fields and genres. So in the end, we opted more for the digital.

The serial tradition seems to us to be a huge and maybe indispensable part of most "vast narratives." Comic books and television especially follow very naturally from the serial tradition exemplified by Dickens. In all cases, the story unfolds in the public eye, as it were: David Copperfield appeared in monthly installments, as do most modern comic books; TV serials are generally weekly. In all cases there's ample opportunity for the public to respond to plot developments and offer feedback.

In David Copperfield, for instance, you have the strange character Miss Mowcher, who appears first as a rather sinister and repulsive figure, but when she reappears is pixie-ish, friendly, and plays a role in helping David. What had happened in the meantime is that the real-world analogue of Miss Mowcher (Catherine Dickens's foot doctor) had recognized herself in the installment and threatened to sue. And as we understand it, the characters of Ben on Lost and Helo on the new Battlestar Galactica were both intended to be short-term minor characters, but proved so popular with viewers that they were promoted to central recurring positions.

There are plenty of artistic problems that arise from serialized storytelling, one of the most serious of which is the potential for unbalancing the narrative. Writing an unserialized novel allows you to edit, revise and generally overhaul the story before the public sees it. To serialize a story forces you to go with your thoughts of the moment, which may change before you finish the story, whether because of new artistic ideas of your own or because of outside forces (TV cast changes, editorial shifts in direction, Miss Mowchers, etc.). The Wire is one of the strongest televised serials ever aired--arguably it's simply the best--and that show was blessed with a strong writing staff with long-term narrative plans, substantial freedom from editorial direction, and as far as we're aware, very few unplanned cast changes. David Simon and the other creators like to talk about Dickens in reference to the show, but The Wire is in fact much more narratively balanced and formal in structure than most of Dickens's novels.

At the same time, a lot of exciting art happens in exactly the improvisational space that seriality provides. The writing staff on David Milch's Deadwood seems to have, on a daily basis and under Milch's direction, group-improvised nearly all of the Deadwood scripts. The end result is a constantly surprising story that still somehow appears as a tightly-structured drama, even down to following, more often than not, the Aristotelian unities of time and place. (And we'd be remiss if we didn't mention that Sean O'Sullivan does great work discussing seriality both in his Third Person essay, and in his essay in David Lavery's collection Reading Deadwood.)

First Person experimented with placing a significant number of its essays online and encouraging greater dialogue between the contributing authors. What did you learn from that experiment?

One thing we learned is that putting a book's contents online, which previously had mostly been done with monographs, could also work with edited collections. MIT Press was happy enough with the results that we followed this practice with Second Person and will do it again with Third Person. We'd like to see this practice expand in the world of academic publishing, since we now have some evidence that it doesn't make the economic model collapse (it's other things that are doing that, unfortunately, to some areas of academic publishing).

Another thing we learned is that, while blogs were already rising in prominence by the time we started working with Electronic Book Review on this portion of the project, the kind of conversation encouraged by something like EBR isn't obviated by the blogosphere. In general, blog conversation is pretty short-term. People tend to comment on the most recent post, or one that's still on the front page, and this is only in part because blog authors often turn off commenting for older posts, as an anti-spam measure. EBR, on the other hand, solicits and actively edits its "riposte" contributions (returning them to authors for expansion and revision, for example) and ends up fostering a kind of conversation that still moves more quickly than the letters section of a print journal, but with some greater deliberation and extension in time than generally happens on blogs. These different forms of online academic conversation end up complementing each other nicely.

As you note, comics have had a long history of managing complex narrative worlds. What lessons might comics have to offer the new digital entertainment media?

Digital media has already absorbed a lot of helpful lessons. In Third Person this can be seen in Matt Miller's chapter on City of Heroes and City of Villains, which goes into depth on how Cryptic translated comics tropes into workable MMO content.

The place to speculate might actually be the reverse of the question: what comics could take from contemporary digital media. We don't have any idea what a Comics Industry 2.0 would look like, but we suppose it's possible that DC and Marvel could take some of the pressure off themselves by integrating user-generated content of some sort; overseeing, funding and formalizing fan web sites, or who knows.

Every so often the industry does try something like this: back when we were growing up, there was a comic series called Dial "H" for Hero, in which a couple of kids had some sort of magic amulets that would turn them into different random superheroes when activated. The twist was that all of the names, costumes and powers of the heroes were reader-generated. Readers would send in letters with drawings and descriptions of superheroes they'd invented, and then those heroes would be integrated, with the appropriate credit, into later issues. This sounds extremely childish, and it was. There were no opportunities for readers to affect anything except the most replaceable elements of the story. (Although we do give DC credit for making it a boy-girl team, so that one of each pair of superheroes created would be female. Trying to build female readership is an ongoing problem for the big companies.) Later in the '80s, DC did give readers the opportunity to alter the narrative, when they ran the "A Death in the Family" storyline in Batman. In this case, the Joker attacks, beats and blows up Jason Todd, the unlikeable second Robin, and DC established a 1-900 number which readers could call to vote on whether Todd lived or died. Well, they voted for him to die, and so he did, but the whole thing is regarded, rightly, as pretty distasteful, and they never bothered with anything like it again.

So the impulse toward interactivity exists in the industry, though it's never really gone anywhere. We suspect that some type of formalized interactivity will be a part of the comics industry going forward. What it will look like, we don't know.

More to Come

Authoring and Exploring Vast Narratives: An Interview with Pat Harrigan and Noah Wardrip-Fruin (Part One)

One of the first classes I will teach through my new position at USC will be Transmedia Storytelling and Entertainment. I've already started lining up an amazing slate of guest speakers and have put together a tentative syllabus for the class. The primary textbook will be Third Person: Authoring and Exploring Vast Narratives, which was edited by Pat Harrigan and Noah Wardrip-Fruin. Many of you who have been working with games studies classes may already know the first two volumes in the MIT Press series which Harrigan and Wardrip-Fruin have edited. I've been lucky enough to be included in two of the three books in the series: my essay "Game Design as Narrative Architecture" was included in First Person and my student, Sam Ford, interviewed me about continuity and multiplicity in contemporary superhero comics for Third Person. So, I am certainly biased, but I have found this series to be consistently outstanding.

A real strength is its inclusiveness. By that I mean, both that the editors reach out far and wide to bring together an eclectic mix of contributors, including journalists, academics, and creative artists working across a range of media, and I also mean that they have a much broader span of topics and perspectives represented than in any other games studies collection I know. They clearly understand contemporary games as contributing something important to a much broader set of changes in the ways our culture creates entertainment and tells stories.

For my money, Third Person is the richest of the three books to date and a very valuable contribution to the growing body of critical perspectives we have on what I call "Transmedia Entertainment", Christy Dena calls "Cross-Platform Entertainment", Frank Rose calls "Deep Media," and they call "vast narratives." Each of us is referring to a different part of the elephant but we are all pointing to an inter-related set of trends which are profoundly impacting how stories get told and circulated in the contemporary media landscape. I found myself reading through this collection in huge gulps, scarcely coming up for air, excited to be able to incorporate some of these materials into my class, and certain they will be informing my own future writing in this space.

And I immediately reached out to Pat and Noah about being interviewed for this blog. In the exchange that follows, the two editors speak in a single voice, much as they do in the introduction to the books, but they also signal some of their own differing backgrounds and interests around this topic. The interview is intended to place the new book in the context of the series as a whole, as well as to foreground some of the key discoveries that emerge through their creative and imaginative juxtapositions of different examples of "vast narratives."

Can you explain the relationship between the three books in the series? How has your conception of digital storytelling shifted over the series?

First Person was originally conceived as an attempt to reflect and influence the direction of the field, at a particular moment, while also trying to do some work toward broadening interdisciplinary conversation (in the vein of Noah and Nick Montfort's historically-focused New Media Reader). As such, most of the essays grew out of papers and panel discussions from conferences, especially Digital Arts and Culture and SIGGRAPH. This is also why we used the multi-threaded structure--in order to preserve some of the back-and-forth of ideas characteristic of any emerging field. Unfortunately the book didn't come out as quickly as we hoped, and we were a little worried that it would become more of a history. But it turned out that many of the issues the field was concerned with at the time (e.g., the ludology/narratology stuff) remained, and still remain, things that people entering the field have to think through--so readers still find the book useful today.

That said, we learned an important lesson about the potential for delay, and about thinking of the long-term relevance of a project, so for Second Person we very consciously tried to commission a book that we didn't conceive of as trying to influence the conversation of a particular moment. Pat was working at Fantasy Flight Games when 1P was released, and had been thinking a lot about the relationship of stories to games, especially board games and tabletop RPGs. We both thought it would be an interesting area to explore, especially considering that there wasn't much out there, to our knowledge, that covered similar ground. So the idea was to explicitly draw connections between hobby games, digital media, and other similar performance structures (like improvisational theater) and meaning-making systems (like artificial intelligence research). It was much less "of the moment" than 1P and to our minds, that's when the series really started to take its shape.

Third Person wound up being something of a hybrid of the first two books. Like 2P, it addresses some underserved areas of game design and experience--such as Matt Kirschenbaum's essay on tabletop wargames--but again we're trying a bit to change the terms of the discussion, arguing for a broader conception of our topics. While 2P may have been one of the first books to integrate real discussion of tabletop and live performance games with computer games, its concept is one that goes down easily with most people in the field (we even got reviewed in Game Developer magazine). 3P is a bit of a challenge to digitally-oriented people who think about their field as "new"--or exclusively concerned with issues related to computational systems--because we believe people making digital work have something to learn from people doing television, comic books, novels and the other forms discussed in the book. And we also believe there's something to be learned in the opposite direction as well, and from continuing to connect projects from "high art" and commercial sources. We're very curious to see what the reception turns out to be for this volume, which we view as completing a kind of trilogy.

One striking feature of this series has been the intermingling of perspectives from creative artists and scholars. What do you think each brings to our understanding of these topics? Why do you think it is important to create a dialogue between theory and practice?

Broadly speaking, our scholarly essays often provide a big-picture view of a subject, providing context and analysis, and our artists' essays provide a more detail-oriented, granular view, usually of just a single work or small number of works. Inevitably these distinctions become pretty blurry; for example, we intended John Tynes's 2P essay to be strictly about the Delta Green design process, but he wound up providing a wide-ranging, highly analytical piece about game design philosophy--which is wonderful! Later, in 3P, we gave Delta Green co-creator Adam Scott Glancy the same mandate, and got something of the same result, with a history of the Delta Green property mixed in with wider ideas of narrative strategy.

This is one of the benefits of getting all these contributors side by side in the same series of books; you can see ideas from one person reflected in very different contexts, or, in the case of Delta Green, how the somewhat different design philosophies of two of the three Delta Green creators combined to create the property. This is then situated in the larger context created by the contributions of other creators and scholars, working in a variety of forms related to our themes, resulting in something far richer than one author could deliver.

Incidentally, one notable thing we've found about hobby games designers is that they're very willing to talk about what goes into their design process, but they're seldom asked! That's a result of the anemic academic attention paid to the field. For literary critics, a novelist's or poet's design process, philosophy, and narrative strategies are all legitimate areas of study (even if "author studies" is now rather out of fashion). Even video game designers are getting some respect these days. But the hobby games industry is too small, it seems, to have merited much attention. This despite the fact that many current video game designers started in the hobby games field: Tynes, Greg Costikyan, Ken Rolston, Eric Goldberg, etc.

While a central focus of the books has been on digital media, especially games, you have always sought to define the topics broadly enough to be able to include work on other kinds of media. In the case of Third Person, these include science fiction novels, comic books, and television series. What do we learn by reading the digital in relation to these other storytelling traditions?

When we talk about "digital media" or "computational media," we're talking about something that is both media and part of a computational system (usually software). As we see it, the lessons digital projects can learn from non-digital projects are both in their aspects that are akin to traditional media (for example, how they handle stories and universes constructed by multiple authors) and in their systems (how they function--and how these operations shape audience experience). The articulation between the two, of course, is key.

We're certainly not the first people to note this. For example, it's been suggested (Noah remembers hearing it first from Australian media scholar Adrian Miles) that digital media creators often fret about a problem well known to soap opera authors: What to do with an audience who may miss unpredictable parts of the experience? Obviously the problem isn't exactly the same, because one case is organized around time (audiences may miss episodes or portions of episodes) and the other is organized by more varied interaction (e.g., selective navigation around a larger space). But there is a common authorial move that can be made in both instances: Finding ways to present any major narrative information in different ways in multiple contexts, so that the result isn't boring for those who see things encyclopedically and doesn't make those with less complete experiences feel they've lost the thread.

Of course, what the above formulation leaves out is that this problem doesn't have to be solved purely on the media authoring side, and perhaps isn't best solved there. Another approach is to design the computational system to ensure that the necessary narrative experiences are had, as appropriate for the path taken by any particular audience. This requires thinking through the authorial problem ("How do we present this in many different contexts?"). But ideally it also involves moving that authoring problem to the system level ("How can we design a component of this system that will appropriately deliver this narrative information in many different contexts, rather than having to write each permutation by hand?"). And, if successful, you don't have to solve the difficult authoring problem of keeping your audience from being bored because they're getting variations on the same narrative information over and over. Then you can use the attention they're giving you to present something more.

Obviously, this isn't easy to do. Computationally-driven forms of vast narrative are still rapidly evolving (at least on the research end of things). But the basic issues are ones that non-digital media have addressed in a rich variety of ways. Even the question of what kinds of experiences one might create in this "vast" space is one that we need to think about broadly--it's a mistake to think we already know the answer--and looking at non-digital work broadly is a part of that.

You write, "Today we are in the process of discovering what narrative potentials are opened by computation's vastness." Is that what gives urgency to this focus on "authoring and exploring vast narratives"?

Personally, that's an important part of our interest. But it's certainly not the only source of urgency. As the variety of chapters in the book chronicles, in part, we're currently seeing exciting creativity in many forms of vast narrative. One might argue that something enabled by computers--digital distribution--is part of the reason for this (e.g., television audiences and producers are perhaps more willing to invest in vast narrative projects when "missing an episode" is less of a concern). But we think of this as distinct from things enabled by computation (permutation, interaction, etc.), especially because some systems (such as tabletop games) carry out their computation through human effort, rather than electronically.

How are you defining "vast narratives"? What relationship do you see between this concept and what others are calling "transmedia storytelling," "deep media," or "crossplatform entertainment"?

Definition isn't a major focus of our project, but there are certain elements of vast narrative that especially attract our attention.

First, we're interested in what we call "narrative extent," which we think of as works that exceed the normal narrative patterns for works of a particular sort. So, for example, The Wire doesn't have that many episodes as police procedurals go (CSI has many more), but it attains unusual narrative extent by making the season--or arguably the entire run of five seasons--rather than the episode, the meaningful boundary.

Second, we're interested in the many vast narrative projects that confront issues of world and character continuity. Often this connects to practices of collaborative authorship--including those in which the authors work in a manner separated in time and space, and in many cases with unequal power (e.g., licensor and licensee).

Third, and connected to the previous, we're interested in large cross-media narrative projects, especially those in which one media form is not privileged over the others. So, for example, the universe of Doctor Who is canonically expanded by television, of course, but also by novels and audio plays. On the other end of the spectrum, Richard Grossman's Breeze Avenue project includes a 3-million-word, 4,000-volume novel, as well as forms as different as a website and a performance with an instrument constructed from 13 automobiles--all conceived as one project.

Fourth, the types of computational possibilities we've discussed a bit already, which are present not only in games (we have essays from prominent designers and interpreters of both computer and tabletop games) but also in electronic literature projects and the simulated spaces of virtual reality and virtual worlds.

Fifth, multiplayer/audience interaction is a way of expanding narrative experiences to vast dimensions that we've included in all three books--including alternate-reality, massively-multiplayer, and tabletop role-playing games. Here the possibilities for collaborative construction and performance are connected to those enabled by computational systems (game structures are fundamentally computational) but exceed them in a variety of ways.

Given all of this, it's probably fair to say that our interests are a superset of some of the other concepts you mention. For example, your writing on transmedia storytelling certainly informs our thinking about vast narrative--but something like a tabletop RPG campaign is "vast" for us without being "transmedia" for you.

Patrick Harrigan is a Minneapolis-based writer and editor. He has worked on new media projects with Improv Technologies, Weatherwood Company, and Wrecking Ball Productions, and as Marketing Director and Creative Developer for Fantasy Flight Games. He is the co-editor of The Art of H. P. Lovecraft's Cthulhu Mythos (2006, with Brian Wood), and the MIT Press volumes Third Person: Authoring and Exploring Vast Narratives (2009), Second Person: Role-Playing and Story in Games and Playable Media (2007), and First Person: New Media as Story, Performance and Game (2004), all with Noah Wardrip-Fruin. He has also written a novel, Lost Clusters (2005).

Noah Wardrip-Fruin works as a digital media creator, critic, and technology researcher with a particular interest in fiction and playability. His projects have been presented by conferences, galleries, arts festivals, and the Whitney and Guggenheim museums. He is author of the forthcoming Expressive Processing: Digital Fictions, Computer Games, and Software Studies (2009) and has edited four books, including Second Person: Role-Playing and Story in Games and Playable Media (2007), with Pat Harrigan, and The New Media Reader (2003), with Nick Montfort. He is currently an Assistant Professor with the Expressive Intelligence Studio in the Department of Computer Science at the University of California, Santa Cruz.

A New "Platform" for Games Research?: An Interview with Ian Bogost and Nick Montfort (Part Two)

Henry: Does Platform Studies necessarily limit the field to writers who can combine technological and cultural expertise, a rare mix given the long-standing separation between C.P. Snow's "Two Cultures"? Or should we imagine future books as emerging through collaborations between writers with different kinds of expertise?

Nick: We definitely will encourage collaborations of this sort, and we know that collaborators will need all the encouragement they can get. It's unusual and difficult for humanists to collaborate. When the technical and cultural analysis that you need to do is demanding, though, as it is in a platform study, it's great to have a partner working with you.

Personally, I prefer for my literary and research collaborations to be with similar "cross-cultural" people, such as Ian; I don't go looking for a collaborator to balance me by knowing about all of the technical matters or all of the cultural and humanistic ones. It is possible for collaborators on one side to cross the divide and find others, though. Single-authored books are fine as well, and it's okay with me if the single author leans toward one "culture" or the other, or even if the author isn't an academic.

Ian: I also think that this two culture problem is resolving itself to some extent. When I look at my students, I see a very different cohort than were my colleagues in graduate school. I see a fluency in matters of technology and culture that defies the expectations of individual fields. So in some ways, I see the Platform Studies series as an opportunity for this next generation of scholars as much as it is for the current one, perhaps even more so.

When you think about it, popular culture in general is also getting over the two culture problem. There are millions of people out there who know something about programming computers. As I've watched the press and the public react to Racing the Beam, it's clear to me that discussions of hardware design and game programming are actually quite welcome among a general readership.

Henry: What relationship do you see between "platform studies" and the "science, technology and society" field?

Nick: A productive one. We're very much hoping that people in STS will be interested in doing platform studies and in writing books in the series. Books in the series could, of course, make important contributions in STS as well as in digital media.

Ian: Indeed, STS already tends strongly toward the study of how science and technology underlie things. Platform studies has something in common with STS in this regard. But STS tends to focus on science's impact on politics and human culture rather than human creativity. This latter area has typically been the domain of the humanities and liberal arts. One way to understand platform studies is as a kind of membrane between computing, STS, and the humanities. We think there's plenty of productive work to be done when these fields come together.

Henry: Why did you decide to focus on the Atari Video Computer System as the central case study for this book?

Ian: We love the Atari VCS. It's a platform we remember playing games on and still do. In fact, the very idea for platform studies came out of conversations Nick and I had about the Atari. We found ourselves realizing that a programmer's negotiation between platform and creativity takes place in every kind of creative computing application.

Nick: Another factor was historical. While contributing to the cultural understanding of video games a great deal, game studies hasn't looked to its roots enough. A console as influential as the Atari VCS deserved scholarly and popular attention beyond mere retro nostalgia. We wanted to bring that sort of analysis to bear.

Ian: Finally, I've been using the Atari VCS for several years now in my classes, both as an example and as an exercise. I have my Introduction to Computational Media class program small games on the system as an exercise in constraint. I also taught a graduate seminar entirely devoted to the system. Moreover, I often make new games for the system, some of which I'll be releasing this spring. So overall, the Atari VCS is a system that has been and remains at the forefront of both of our creative and critical interests.

In fact, I've continued to do platform studies research on the Atari VCS beyond the book. A group of computer science capstone students under my direction just completed a wonderful update to the "Stella" Atari VCS emulator, adding effects to simulate the CRT television. These include color bleed, screen texture, afterimage -- all matters we discuss in the book. I have a webpage describing the project at http://www.bogost.com/games/a_television_simulator.shtml.

Henry: You focus the book around case studies of a number of specific Atari titles from Adventure and Pac-Man to Star Wars: The Empire Strikes Back. Can you say more about how these examples allowed you to map out the cultural impact and technical capacities of the Atari system?

Nick: The specific examples gave us the opportunity to do what you can do with close readings: drill down into particular elements and see how they relate to a game, a platform and a culture. But we wouldn't have found the same insights if we had just picked a game, or six games from different platforms, and got to work. We used these games to see how programmers' understanding of the platform developed and how the situation of computer gaming changed, how people challenged and expanded the 1977 idea of gaming that was frozen into the Atari VCS when they put this wonderful machine together.

Ian: We also chose to focus on a specific period, the early years of the Atari VCS, so to speak, from 1977 to 1983. These games in particular allowed us to characterize that period, as programmers moved from their original understanding of this system -- one based on porting a few popular coin-op games -- to totally different and surprising ways of making games on it.

Henry: Platform Studies seems to align closely with other formalist approaches to games. Can it also be linked to cultural interpretation?

Nick: Formalist? Really? We were indeed very concerned with form and function in Racing the Beam, so I won't shun the label, but we tried to be equally attentive to the material situation of the Atari VCS and the cartridges and arcade games we discussed. For instance, we included an image of the Shark Jaws cabinet art so that the reader could look at the typography and decide whether Atari was attempting to refer to Spielberg's movie. We discuss the ramifications of using a cheaper cartridge interface in the VCS design, one that was missing a wire.

Ian: We should also remember the technical creativity that went into designing a system like the Atari VCS, or into programming games for it. The design of the graphics chip, for example, was motivated by a particular understanding of what it meant to play a game: two human players, side by side, each controlling a character on one side of the screen or another.

By the time David Crane created Pitfall! many years later, those understandings had changed. Pitfall! is a one-player game with a twenty minute clock. But it's also a wonderful mash-up of cultural influences: Tarzan, Indiana Jones, Heckle and Jeckle.

Nick: I'll admit that ours is a detailed analysis that focused on specifics (formal, material, technical) rather than being based around broad cultural questions: it's bottom-up rather than top-down. We're still trying to connect the specifics of the Atari VCS (and other platforms) to culture, though. The project is not only linked with, but part of, cultural interpretation.

Ian: I'd go even further; there's nothing particularly formalist about a platform studies approach, if formalism means a preference of material and structure over cultural reception and meaning. If anything, I think our approach offers a fusion of many influences, rather than an obstinate grip on a single one.

Henry: There is still a retro-gaming community which is deeply invested in some of these games. Why do you think these early titles still command such affection and nostalgia?

Ian: Some of the appeal is related to fond memories and retro-nostalgia, certainly. Millions of people had Ataris and enjoyed playing them. Just as the Apple ][ or the Commodore 64 may have introduced someone to computing, so the Atari VCS might have introduced him or her to videogaming. So part of the appeal of returning to these games is one of returning to the roots of a pleasurable pastime.

Nick: That said, we resist appeals to nostalgia in the book and our discussions about it, not because nostalgia and retro aesthetics are bad, but because it would be a shame if people thought you could only look back at video games to be nostalgic. There are reasons for retro-gaming that go beyond nostalgia, too. It's driven, in part, by the appeal of elegance, by a desire to explore the contours of computing history with an awareness of what games are like now, and by the ability of systems like the Atari VCS to just be beautiful and produce really aesthetically powerful images and compelling gameplay.

Ian: It's also worth noting that there is a thriving community interested in new Atari games, many of whom congregate on the forums at AtariAge.com. For these fans and hobbyist creators, the Atari is a living platform, one that still has secrets left to reveal. So the machine can offer interest beyond retro-gaming as well.

Henry: What factors contributed to the decline of the Atari empire? How did that decline impact the future of the games industry and of game technology?

Nick: I think it takes a whole book on the complex corporate history of Atari to even start answering this question. Our book is focused on the platform rather than the company. Scott Cohen's Zap!: The Rise and Fall of Atari is a book about the company, and my feeling is that even that one doesn't really answer that question entirely. We're hoping that there will be more books on Atari overall before too long.

Ian: There are some reasons for Atari's decline that are connected specifically to the Atari VCS platform, though. It turned out to be incredibly flexible and productive, to support more types of game experience than its creators ever could have imagined. No doubt, Atari never imagined that third-party companies such as Activision would come along and make literally hundreds of games for the system by 1983, cutting in on their business model right at the most profitable point. But the system was flexible enough for that to happen, too.

Nick: That's why Nintendo did everything they could, by license and through technical means, to lock down the NES and to prevent this sort of thing from happening with it. The industry has been like that ever since.

Ian: As we point out in the book, this was a bittersweet solution. Nintendo cauterized the wound of retailer reticence, but it also introduced a walled garden. Nintendo (and later Sony and Microsoft) would get to decide what types of games were "valid" for distribution. Before 1983, the variety of games on the market was astounding. So, on the one hand, we're still trying to recover from the setback that was first-party licensing. But on the other hand, we might not have a games industry if it wasn't for Nintendo's adoption of that strategy.

Henry: Can you give us a sense of the future of the Platform Studies project? What other writers and topics can we expect to see? Are you still looking for contributors?

Nick: Yes, we're definitely looking for contributors, although we're pleased with the response we've had so far. We expect a variety of platforms to be covered -- not only game systems, but famous early individual computers, home computers from the 1980s, and software platforms such as Java. Some families of platforms will be discussed in books, for instance, arcade system boards. And although every book will focus on the platform level, we anticipate a wide variety of different methods and approaches to platforms. While getting into the specifics of a platform and how it works, people may use many different methodologies: sociological, psychoanalytic, ethnographic, or economic, for example.

Ian: In terms of specific projects, we have a number of proposals in various stages of completeness and review. It's probably a bit early to talk about them specifically, but I can say that all of the types of platforms Nick just mentioned are represented.

There are a few different types of book series; some offer another venue for work that is already being done, while others invite and maybe even encourage a new type of work to be done. I suspect that Platform Studies is of the latter sort, and we're gratified to see authors thinking of new projects they didn't even realize they wanted to pursue.

Henry: You both teach games studies within humanities studies in major technical institutions. How do the contexts in which you are working impact the approach you are taking here?

Ian: Certainly both Georgia Tech and MIT make positive assumptions about the importance of matters technical. Humanities and social science scholarship at our institutions thus often take up science and technology without having to justify the idea that such topics are valid objects of study.

Nick: I have to agree -- it's very nice that I don't have to go around MIT explaining why it's legitimate to study a computing system or that video games and digital creativity are an important part of culture.

Ian: Additionally, at Georgia Tech we have strong relationships between the college of liberal arts, the college of engineering, and the college of computing. I have many colleagues in these fields with whom I speak regularly. I have cross-listed my courses in their departments. We even have an undergraduate degree that is co-administered by liberal arts and computing. So there's already an ecosystem that cultures the technical pursuit of the humanities, and vice versa.

I also think technical institutes tend to favor intellectual experimentation in general. We often hear cliches about the "entrepreneurial" environment at technical institutes, a reference to their tendency to encourage the commercial realization of research. But that spirit also extends to the world of ideas, and scholars at a place like Georgia Tech are perhaps less likely to be criticized, ostracized, or denied tenure for pursuing unusual if forward-thinking research.

Dr. Ian Bogost is a videogame designer, critic, and researcher. He is Associate Professor at the Georgia Institute of Technology and Founding Partner at Persuasive Games LLC. His research and writing considers videogames as an expressive medium, and his creative practice focuses on games about social and political issues. Bogost is author of Unit Operations: An Approach to Videogame Criticism (MIT Press 2006), of Persuasive Games: The Expressive Power of Videogames (MIT Press 2007), and co-author (with Nick Montfort) of Racing the Beam: The Atari Video Computer System (MIT Press 2009). Bogost's videogames about social and political issues cover topics as varied as airport security, disaffected workers, the petroleum industry, suburban errands, and tort reform. His games have been played by millions of people and exhibited internationally.

Nick Montfort is assistant professor of digital media at the Massachusetts Institute of Technology. Montfort has collaborated on the blog Grand Text Auto, the sticker novel Implementation, and 2002: A Palindrome Story. He writes poems, text generators, and interactive fiction such as Book and Volume and Ad Verbum. Most recently, he and Ian Bogost wrote Racing the Beam: The Atari Video Computer System (MIT Press, 2009). Montfort also wrote Twisty Little Passages: An Approach to Interactive Fiction (MIT Press, 2003) and co-edited The Electronic Literature Collection Volume 1 (ELO, 2006) and The New Media Reader (MIT Press, 2003).

A New "Platform" for Games Research?: An Interview with Ian Bogost and Nick Montfort (Part One)

Any time two of the leading video and computer game scholars -- Ian Bogost (Georgia Tech) and Nick Montfort (MIT) -- join forces to write a book, that's a significant event in my book. When the two of them lay down what amounts to a new paradigm for game studies as a field -- what they are calling "Platform Studies" -- and apply it systematically -- in this case, to the Atari system -- this is something which demands close attention from anyone interested in digital media. So, let me urge you to check out Racing the Beam: The Atari Video Computer System, released earlier this spring by MIT Press. In the interview that follows you will get a good sense of what the fuss is all about as the dynamic duo lay out their ideas for the future of games studies, essentially further raising the ante for anyone who wants to do serious work in the field. As someone who would fall far short of their ambitious bar for the ideal games scholar, I read this discussion with profoundly mixed feelings. I can't argue with their core claim that the field will benefit from the arrival of a generation of games scholars who know the underlying technologies -- the game systems -- as well as they know the games. I certainly believe that the opening up of a new paradigm in games studies will only benefit those of us who work with a range of other related methodologies. If I worry, it is because games studies as a field has moved forward through a series of all-or-nothing propositions: either you do this or you aren't really doing game studies. And my own sense is that fields of research grow best when they are expansive, sucking in everything in their path, and sorting out the pieces later.

That said, I have no reservations about what the authors accomplish in this rigorous, engaging, and ground-breaking book. However you think of games studies as an area of research, there will be things in this book which will provoke you, and where Bogost and Montfort are concerned, I wouldn't have it any other way.

Henry: Racing the Beam represents the launch of a new publishing series based on what you are calling "Platform Studies." What is platform studies and why do you think it is an important new direction for games research?

Nick: Platform studies is an invitation to look at the lowest level of digital media -- the computing systems on which many sorts of programs run, including games. And specifically, it's an invitation to consider how those computing systems function in technical detail, how they constrain and enable creative production, and how they relate to culture.

Ian: It's important to note that platform studies isn't a particular approach; you can be more formalist or materialist, more anthropological or more of a computer scientist, in terms of how you consider a platform. No matter the case, you'll still be doing platform studies, as long as you consider the platform deeply. And, while platform studies is of great relevance to the study of video games, these studies can also be used to better understand digital art, electronic literature, and other sorts of computational cultural production that happens on the computer.

Nick: In games research in particular, the platform seems to have a much lower profile as we approach 2010 than it did in the late 1970s and 1980s. Games are developed for both PC and Xbox 360 fairly easily, and few scholars even bother to specify which version of such a game they're writing about, despite differences in interface, in how these games are burdened with DRM, and in the contexts of play (to name just a few factors). At the same time, there are these recent platforms that feature unusual interfaces and limited computational power, relative to the big iron consoles: Nintendo's Wii and DS and Apple's iPhone.

Ian: And let's not forget that games are being made in Flash and for other mobile phones. Now, developers are very acutely aware of what these platforms can do and of how important it is to consider the platform level. But their implicit understanding doesn't always make it into wider discussions, and that understanding doesn't always connect to cultural concerns and to the history of gaming and digital media.

Nick: So, we think that by looking thoroughly at platforms, we will, first, understand more about game consoles and other game platforms, and will be able to both make better use of the ones we have (by creating games that work well with platforms) and also develop better ones. Beyond that, we should be able to work toward a better understanding of the creative process and the contexts of creativity in gaming and digital media.

Henry: What do you think has been lost in game studies as a result of a lack of attention to the core underlying technologies behind different game systems?

Nick: For one thing, there are particular things about how games function, about the interfaces they present, and about how they appear visually and how they sound which make no sense (or which can be attributed to causes that aren't really plausible) unless you make the connection to platform. You can see these in every chapter of Racing the Beam and probably in every interesting Atari VCS game.

Ian: And more simply put, video games are computational media. They are played on computers, often very weird computers designed only to play video games. Isn't it reasonable to think that observing something about these computers, and the relationship between each of them and the games that they hosted, would lead to insights into the structure, meaning, or cultural significance of such works?

Here's an example from the book: the graphical adventure genre, represented by games like The Legend of Zelda, emerged from Warren Robinett's attempts to translate the text-based adventure game Colossal Cave onto the Atari VCS. The machine couldn't display text, of course, so Robinett chose to condense the many actions one can express with language into a few verbs that could be represented by movement and collision detection. The result laid the groundwork for a popular genre of games, and it was inspired largely by the way one person negotiated the native abilities of two very different computers.

Nick: More generally, the platform is a frozen concept of what gaming should be like: Should it come in a fake wood-grain box that looks like a stereo cabinet and fits in the living room along stereo components? Should it have two different pairs of controllers and difficulty switches so that younger and older siblings can play together with a handicap? Only if we look at the platform can we understand these concepts, and then go on to understand how the course of game development and specific games negotiate with the platform's concept.

Henry: Early on, there were debates about whether one needed to be a "gamer" to be able to contribute to games studies. Are we now facing a debate about whether you can study games if you can't read code or understand the technical schematics of a game system?

Nick: All sorts of people using all sorts of methods can make and have made contributions to game studies, and that includes non-ethnographers, non-lawyers, non-narratologists, and those without film studies backgrounds as well as people who can't read code or understand schematics. Games are a tremendous phenomenon, and it would be impossible for someone to have every skill and bit of background relevant to studying them. We're lucky that many different sorts of people are looking at games from so many perspectives.

That said, whether one identifies as a "gamer" is a rather different sort of issue than whether one understands how computational systems work. If your concern is for people's experience of the game -- how they play it, what meaning they assign to it, and how the experience relates to other game experiences -- then the methods that are most important to you will be the ones related to understanding players or interpreting the game yourself. But if you care about how games are made or how they work, it makes a lot of sense to know how to program (and how to understand programs) and to have learned at least the bare outlines of computer architecture.

Ian: Even if you want to thoroughly study something non-interactive, like cutscenes, won't you have to understand both codecs and the specifics of 3D graphics (ray tracing, texture mapping, etc.) to understand why certain choices were made in creating a cutscene? How can you really understand Geometry Wars without getting into the fact that vector graphics display hardware used to exist, and that the game is an attempt to recreate the appearance of those graphics on today's flat-panel raster displays? How could you begin to talk about the difference between two radically different and culturally relevant chess programs, Video Chess for the Atari VCS (which fit in 4K) and the world-dominating Deep Blue, without considering their underlying technical differences -- and going beyond noticing that one is enormously powerful and the other minimal?

Nick: I certainly don't want to ban anyone from the field for not knowing about computing systems, but I also think it would be a disservice to give out game studies or digital media degrees at this point and not have this sort of essential technical background be part of the curriculum.

Dr. Ian Bogost is a videogame designer, critic, and researcher. He is Associate Professor at the Georgia Institute of Technology and Founding Partner at Persuasive Games LLC. His research and writing considers videogames as an expressive medium, and his creative practice focuses on games about social and political issues. Bogost is author of Unit Operations: An Approach to Videogame Criticism (MIT Press 2006), of Persuasive Games: The Expressive Power of Videogames (MIT Press 2007), and co-author (with Nick Montfort) of Racing the Beam: The Atari Video Computer System (MIT Press 2009). Bogost's videogames about social and political issues cover topics as varied as airport security, disaffected workers, the petroleum industry, suburban errands, and tort reform. His games have been played by millions of people and exhibited internationally.

Nick Montfort is assistant professor of digital media at the Massachusetts Institute of Technology. Montfort has collaborated on the blog Grand Text Auto, the sticker novel Implementation, and 2002: A Palindrome Story. He writes poems, text generators, and interactive fiction such as Book and Volume and Ad Verbum. Most recently, he and Ian Bogost wrote Racing the Beam: The Atari Video Computer System (MIT Press, 2009). Montfort also wrote Twisty Little Passages: An Approach to Interactive Fiction (MIT Press, 2003) and co-edited The Electronic Literature Collection Volume 1 (ELO, 2006) and The New Media Reader (MIT Press, 2003).

Ghouls Just Want to Have Fun: Doug Gordon on the Zombeatles (Part Two)

Is there a connection to be drawn between the return of the Zombeatles and the publishing success of books like Pride and Prejudice and Zombies? Can we expect other "classics" to go Zombie when they are no longer a living part of our culture?

There most certainly is a connection to be drawn between the return of the Zombeatles and the publishing success of books like Pride and Prejudice and Zombies. This is further evidence that the zombies are taking over. Zombies started by eating the stupid people first since they were the easiest to catch. As the stupid human food supply dwindled, zombies were forced to use more brainpower to hunt down the smart people. This "Smart People Diet" allowed the living dead to evolve in a Darwinian manner. Call it "natural selection," though perhaps "unnatural selection" would be more appropriate. Whatever you call it, it's clear that zombies are on the verge of taking over and establishing their own zombie-centric society, complete with their own zombified version of arts, entertainment and popular culture (of which The Zombeatles and books like Pride and Prejudice and Zombies are an integral part).

Yes, as zombies continue to take over, we can expect more classics to go zombie when they are no longer a living part of our culture. For example, it won't be too long before we see such zombified classics as John Steinbeck's Of Mice And Men And Zombies; Arthur Miller's Undeath of A Salesman; Norman Mailer's The Naked and the Undead; and Samuel Beckett's Waiting for Godot (Maybe He's Been Waylaid by Zombies?).

And, of course, it goes without saying that Angus MacAbre ("Scotland's Funniest Zombie Comedian") is going to try to get his piece of the pie with his Monster Mashups for Zombies. This is the latest addition to Angus' phenomenally successful line of "For Zombies" instructional books. The series launched in the early '90s with three titles - Dummies For Zombies, Geniuses for Zombies, and Idiot Savants for Zombies. These books covered the best ways to eat dummies, geniuses and idiot savants and offered a wealth of information about the nutritional content of their respective brains.

Now Angus is extending the "For Zombies" brand with Monster Mashups for Zombies. It's the perfect study aid for the zombie student. Monster Mashups for Zombies is "CliffsNotes" meets the "For Dummies" series, with a modern mashup twist, because Angus has condensed not one, but two, classic works of literature into one flimsy book.

The debut title is On the Road to The Road, a literary mashup of Jack Kerouac's slacker bible, On The Road, and Cormac McCarthy's best-selling, critically-acclaimed The Road. It's the story of two young hipsters who hit the open road in search of kicks, only to be confronted by the post-apocalyptic downer of a father and son on a journey, while trying to avoid cannibals and zombies. Okay, Angus has taken a bit of artistic license by including zombies but, the way he sees it, it's not that much of a stretch ("Zombies are just basically cannibals with really bad skin"). Angus maintains that his Monster Mashups for Zombies titles will offer an easy and entertaining form of "one-hour smartenizing" that will have students away from the books and back getting blotto with their slackass friends in no time.

The zombie apocalypse will even infect public radio. Before you know it, the airwaves will be filled with the intellectually nutritious sounds of NZR, National Zombie Radio. Popular NZR programs will include: A Scary Home Companion with Garrison Karloff; This American Unlife with Ira Gass; and the wacky news quiz program, Wait, Wait...Don't Eat My Brain. And, of course, there'll be no avoiding the undeadpan, autobiographical humor of zombie humorist David Zedaris, author of such droll best-sellers as Me Form Coherent Sentence Later This Afternoon and When You Are Engulfed In Zombies.

Other reporters have learned that the Zombeatles want to develop a transmedia franchise. Can you share some of your plans for future extensions of the Zombeatles?

I certainly can. The Zombeatles will be part of an exciting entertainment extravaganza called "Zombiepalooza." This postmodern vaudeville show will feature the Fab Gore performing their hits live and undead. It will also feature a screening of the film, The Zombeatles: All You Need Is Brains, and the undeadpan comedy stylings of Angus MacAbre ("Scotland's Funniest Zombie Comedian"), the host of All You Need Is Brains. People will be encouraged to come dressed as zombies and there will be interactive zombie prom and zombie fashion show elements. We've got a "Zombiepalooza" scheduled for Shank Hall in Milwaukee on Friday, July 10th. We're also working on taking the "Zombiepalooza" to Chicago and other locations to be announced.

We've also got plans for a Broadway musical revue called Zombeatlemania ("Not the Zombeatles, But An Incredible Simulation"); Ice Station Zombie: The Zombeatles On Ice; and a zombie-oriented children's TV/web series called Angus MacAbre's House of Angst (It's Dawn of the Dead meets Pee-wee's Playhouse.)

Angus MacAbre is planning on teaming up with Morgan Super Size Me Spurlock to produce a documentary in which Angus will spend an entire month eating nobody but McDonald's employees and customers. The working title is Would You Like Thighs With That?

There are also plans for books (The Consumer's Guide to the Zomniverse by Angus MacAbre and Angus MacAbre's Zomnibus, among them), comic books and such video games as Rock Band: The Zombeatles and Angus MacAbre's Radioactive Haggis.

We're also planning to tap into the lucrative (and tender) youth market with a TV series called Alaska Nebraska. This show will focus on the wacky misadventures of an average zombie teen girl who lives a double life. By day, she's a mild-mannered student but by night, she's a famous zombie pop singer named Alaska Nebraska. We figure this can't miss.

I hear you are contemplating a Zombie-owned and operated amusement park. Wouldn't this just become a tourist trap?

No, the tourist trap is just a very small part of "Angus MacAbre's MacAbreville." MacAbre describes MacAbreville as "a dark version of Disneyland, but without the cloying corporate namby-pambiness." MacAbre says that in this age of heightened anxiety and extreme sports, the public needs an extreme theme park, or an "ex-theme park" for short.

MacAbreville features several intriguing "lands," such as "Hitchcockland" ("The suspensefulest place on Earth"). As the name indicates, "Hitchcockland" features attractions and restaurants based on the films of Alfred Hitchcock. Visitors will line up for hours to experience "The Vertigo Bell Tower of Terror" and "The Birds: Voyage Across Bodega Bay."

Another MacAbreville land is "Tarantinotown," where a bloody, non-linear time is guaranteed for all. Based on the cinematic oeuvre of Quentin Tarantino, Tarantinotown will feature such popular eating spots as the Hawaiian fast-food joint "Big Kahuna Burger" and the 1950's-themed "Jack Rabbit Slim's."

One of the most popular MacAbreville attractions is the rollicking intellectual thrill ride, "Baristas of the Caribbean." What if Starbucks Chairman and Chief Executive Officer Howard Schultz opened several Starbucks coffeehouses on the Caribbean island of Haiti, where legend has it that living people can be turned into zombies through two special powders entering into the bloodstream, usually through a wound? And what if the malevolent Starbucks Haiti District Manager, Tor McAllister, turned his baristas into zombies so that they'd be willing to work extra-long shifts for extra-less money? And what if these zombie baristas started eating their customers? Well, then you'd have one of MacAbreville's most popular attractions, "Baristas of the Caribbean." Enjoy this satirical, splash-filled boat ride! Laugh at the Animatronic-Audio Zombie Baristas as they chow down on their Animatronic-Audio customers ("That pompous businessman yelling into his cell phone really got his just desserts, didn't he, Jessica?" "Actually, Gary, he just ended up as that barista's dessert!"). All this murderous mirth and mayhem takes place to the jaunty strains of the attraction's catchy worldbeat theme song - "Tall, Grande, Venti (A Barista's Life for Me)." In a clever albeit inevitable cross-promotional move, MacAbre has ensured that the Baristas of the Caribbean CD soundtrack can be purchased at your neighborhood Starbucks.

What relationship exists between fans of Zombie music and the "Deadheads"?

As far as I can tell, there's no relationship between fans of zombie music and "Deadheads" (Grateful Dead fans). However, "Undeadheads" (fans of the legendary zombie jam band, The Ungrateful Undead) are a huge part of the zombie music scene. Many "Undeadheads" will travel to as many Ungrateful Undead shows as possible in as many different locations as possible (even such far-flung locales as Miskatonic University in Arkham, Massachusetts and Transylvania). Many Undeadheads display a fanatical allegiance to the Undead; some go so far as to conduct entire conversations by quoting from such classic Undead songs as "Dire Werewolf" and "A Touch of Grey Matter."

Theodor Adorno and others from the Frankfurt School warned us decades ago that the repetition of basic formulas in popular music would numb the audience, making them brainless followers of the culture industries. Is this how Zombie music was born? Or might we see Zombie music as simply the latest in a series of resistant subcultural communities who have asserted their own identities only to be coopted by major labels?

As you might imagine, Henry, there's a lot of debate about how zombie music was born.

Some do indeed subscribe to Theodor W. Adorno and the Frankfurt School's theory that the repetition of basic formulas in popular music would numb the audience, making them brainless followers of the culture industries. But there are also those zombie critics such as Greil Carcass and the so-called Frankenberry School who believe that zombie music is the latest in a series of resistant subcultural communities that have asserted their own identities, only to be co-opted (or "cannibalized," so to speak) by major labels. Now with the digital revolution in music distribution, the major zombie record labels have lost a lot of their influence and their ability to cannibalize has been dramatically compromised. Zombie musicians are now cannibalizing each other and, in a few extreme cases, themselves.

I was fascinated to learn that Zombies not only have developed their own popular culture but also their own cultural critics. Is there a possibility that we will see undead theorists one of these days and if so, what can you tell us about their thinking about contemporary music?

Yes, I think we're already seeing the emergence of undead cultural critics with the work of Greil Carcass. Carcass has established himself as the thinking zombie's undead cultural critic by placing undead contemporary music in a much broader cultural context, a context that includes film, literature and politics. I'm thinking especially of such seminal works as Mystery Brain in which Carcass draws parallels between zombie rock and the cultural archetypes to be found in such classic zombie literary works as Moby-Dick versus the Zombies and Bartleby, the Scrivener meets Ginger Nut, the Office Zombie.

You've shared with us something of Zombie music and comedy through the film. I was left wondering about other forms of popular culture among Zombie-Americans. Do Zombies like horror films and if so, what gives them a fright? What kinds of reality television are being produced for zombie consumption?

Horror films are not as popular among zombies as you might expect them to be. Much like people, zombies consume movies primarily as a form of escapism, so horror films are a little too realistic and slice-of-life for them. Having said that, zombies are terrified of the big-screen adaptations of Richard Matheson's classic novel, I Am Legend - The Last Man On Earth (1964) and I Am Legend (2007). The idea of such a small human food supply strikes fear in the very hearts of the undead. Such small-cast Ingmar Bergman films as Scenes from A Marriage also scare zombies for the very same reasons.

There's all kinds of reality television being produced for zombie consumption, including Monster Chef (a horrifying version of Iron Chef featuring such ghoulish gourmets as "Zombie Chef," "Vampire Chef," "Werewolf Chef" and "Invisible Chef"); America's Got Zombies; So You Think You Can Shamble; Extreme Makeover: Haunted House Edition; and Is Your Brain Bigger Than A 5th Grader's?

If any of my readers would like to contribute body parts to support the band, where would they send them?

They can send them to me via dougmgordon@gmail.com or directly to The Fab Gore at beeftone@gmail.com. Thank you for your time, consideration and interest, Henry. And thank you for the very intelligent and very perceptive questions.

Doug Gordon is a producer for Wisconsin Public Radio's/Public Radio International's Peabody Award-winning program, "To The Best Of Our Knowledge." Originally from Canada, Gordon has a Bachelor of Fine Arts degree (Major: Creative Writing) and a Creative Communications diploma (Major: Journalism). When not trying to make public radio more entertaining, he can be found working on various creative, artsy multimedia projects.

Ghouls Just Want To Have Fun: Doug Gordon on The Zombeatles (Part One)

Unless you've been living under a rock for the past few years, you will have noticed that zombies are taking over the entertainment industry. Case in point, the Zombeatles. You can get a taste of their music in this highly popular YouTube video, "A Hard Day's Night of the Living Dead." Some readers may find the band hard on their eyes and ears, but others will quickly fall under their spell.

The Zombeatles first caused a stir in Madison, Wisconsin, where I did my graduate work, so I've been hearing alerts about their appearances for some time, and figured it was time to do a shout out to them here. At first, I was horrified by the prospect of Zombies performing on State Street, but then I realized that this perspective was small-minded of me. Cultural Studies scholars have long been committed to lending their voices to those who are voiceless in our society and to helping our readers to understand phenomena which may disturb or disrupt the operations of the dominant system. Clearly, learning to appreciate Zombie music (and tracing its roots back to the cultural experiences of Zombie-Americans) requires us to think outside the box. It has required much less flexibility on the part of the media industries who have proven all too eager to cater to the tastes of any significant consumer niche and who are constantly trying to dig up new talent to circulate through the global media marketplace.

A new documentary, All You Need Is Brains, recounts the story of the rise of the Zombeatles in all of its gory details, sharing some of their hit songs, such as "I Want to Eat Your Hand," "Hey, Food," and "P.S. I Love Eating You." I had a chance to watch the film over the weekend and while it churned my stomach and made my blood curdle, it also opened my head to some new experiences I wouldn't have had otherwise. This may make me sound like a spineless intellectual but this film helped me to wrap my brain around the Zombeatles. Here's a preview of the documentary which is circulating on the web.

You can order your very own copy here. And if this music makes your heart skip a beat or two, you can also order their new album, Meat the Zombeatles. Neither is going to cost you an arm and a leg and it's safe to say that you won't ever hear anything like their music again.

Doug Gordon, a Wisconsin Public Radio producer, has emerged as the mouth of the Zombeatles and he agreed to share with us what's on his mind. He certainly provided me with a lot of information to sink my teeth into. So let's give him a hand for helping out here.

Can you give us a little background on the Zombeatles and how they impacted contemporary popular music?

Jaw Nlennon and Pall IcKartney met as art students in Pool of Liver, England back in 1957. They were bitten by some skiffle zombies. The skiffle zombies transmitted the "solanum" virus that creates zombies (as discussed in Max Brooks' book, The Zombie Survival Guide: Complete Protection from the Living Dead) to Nlennon and IcKartney. These skiffle zombies were also suffering from "rockin' pneumonia and the boogie-woogie flu" (or, as it's more colloquially known, "a bad case of loving you"); this infectious disease was also passed on to Nlennon and IcKartney when the skiffle zombies bit them.

The combination of these two diseases transformed Nlennon and IcKartney into music-loving zombies. They soon developed a voracious appetite for human brains and for writing and performing original songs about their voracious appetite for human brains. The old adage, "Write what you know," was clearly not lost on these unlively lads. They formed a zombie skiffle group called The Gory Men. Guitarist Gorge Harryson joined the combo a short time later. The band realized that they would probably be able to rock out a bit more if they had a drummer so they tried to recruit Eat Breast to pound the skins for them. Breast was reluctant to join The Gory Men because of their name, as he felt it was a little too "on the nose." So the band changed their name to The Zombeatles and Breast took his place behind the drum kit.

However, The Fab Gore's producer, Gorge Mortem, had reservations about Breast. Mortem thought that Breast couldn't keep up with the other Zombeatles; he couldn't eat enough brains. So Breast was dismissed and replaced by Dingo Scarr, the recently-deceased drummer for the popular zombie rock combo, Rory Sturm und Drang and the Curried Brains.

Angus MacAbre ("Scotland's Funniest Zombie Comedian") and legendary undead rock critic Fester Fangs (of Rolling Tombstone Magazine) first encountered the Zombeatles at The Cadavern Club in Pool of Liver. Fangs was instrumental in bringing The Fab Gore to public attention and MacAbre was instrumental in bringing the public's brains to The Fab Gore.

The Zombeatles' impact on popular music was immense and immeasurable. As Fester Fangs wrote: "The Fab Gore brought a certain frenetic frisson to rock and roll. Their songs about eating brains really dug deep into the heart of the public's collective brain (if you'll excuse the mixed metaphors). With such classic songs as "I Wanna Eat Your Hand" and "Ate Brains A Week," The Zombeatles performed a kind of figurative electroconvulsive therapy on both popular music and popular culture, which left the rest of the music industry looking brain-dead (pun pretty much unavoidable)." (from "Eat 'Em Raw: The Cannibalization of The Zombeatles," as reprinted in Psychopathic Reactions and Cerebral Cortex Guano: The Work of A Legendary Undead Rock Critic, edited by Greil Carcass).

Zombie music has long been an underground phenomenon. Why do you think it is surfacing now?

I think it's surfacing now because the "underground" can only stay under ground so long before the mass media and popular culture "dig it up" (so to speak) and it becomes part of the mainstream. I'm not saying that zombie music is part of the mainstream yet but I think it's well on its way. Take, for example, Angus MacAbre's blatant attempt to cash in on the success of the popular American indie rock band Vampire Weekend by forming his own band called Zombie Workweek. This is the kind of derivative cannibalization that the music industry is famous for.

Zombie music is just riding the zombie zeitgeist. As June Pulliam so eloquently put it in her essay, "The Zombie," which appears in Greenwood Press' Icons of Horror and the Supernatural: An Encyclopedia of Our Worst Nightmares: "The zombie itself is a malleable symbol - representing everything from the horrors of slavery, white xenophobia, Cold War angst, the fear of death, and even apprehensions about consumer culture - and has become an icon of horror perhaps because it is quite literally a memento mori, reminding us that our belief that we can completely control our destiny, and perhaps through the right medical technology, even cheat death, is mere hubris."

Are the Zombeatles simply a revival band or do they bring their own fresh material?

The Zombeatles are a revival band only in the most literal sense of the word "revival" -- that is to say that The Fab Gore breathed new life into popular music as only the living dead are capable of doing. The Zombeatles gave pop music a metaphorical Heimlich Maneuver; they transmogrified the bloated, maggot-ridden corpse that rock and roll had become, replacing the figurative rigor mortis that had set in with a revolutionary, new, riboflavin-enhanced approach to rockin' and rollin'.

The Zombeatles influenced countless acts. Can you imagine The Zommonkees recording their 1966 debut single, "Last Brain in Clarksville," without The Fab Gore paving the way with such classics as "Ate Brains A Week"? Not bloody likely. And who can deny the Fab Gore's influence on The Zomzombies' big hits "Thyme Is The Seasoning," "Smell Her Slow" and "She's Not Rare"? The Zombeatles even inspired a fictional parody band called The Zomrutles.

Your press materials suggest that the Zombeatles "went viral" after they were showcased by Rob Zombie as part of a Halloween promotion on YouTube. What happened next? How many people were infected? Could this viral spread have been prevented through sanitary measures?

It's like that old TV commercial for shampoo... "And they'll tell two friends. And they'll tell two friends. And so on. And so on." Friends kept telling friends about the Rob Zombie-endorsed Zombeatles' music video, "A Hard Day's Night of the Living Dead." These friends told other friends. As of right now, 1,121,999 people (give or take a few) have been infected. This number is based on the fact that there have been 1,121,999 viewings of the video on YouTube. Of course, some of these viewings could have actually been re-viewings by the same person(s). And there's no telling how many people that would apply to. I'm confident, though, that YouTube founders Steve Chen, Chad Hurley and Jawed Karim are working on the cutting-edge technology that will allow us to determine this in the very near future. When I hear back from them, I'll definitely get back to you, Henry.

As for the question of whether or not this viral spread could have been prevented through sanitary measures, I really can't say for sure. All I know is that it's important to wash your hands immediately before and immediately after using YouTube.

Doug Gordon is a producer for Wisconsin Public Radio's/Public Radio International's Peabody Award-winning program, "To The Best Of Our Knowledge." Originally from Canada, Gordon has a Bachelor of Fine Arts degree (Major: Creative Writing) and a Creative Communications diploma (Major: Journalism). When not trying to make public radio more entertaining, he can be found working on various creative, artsy multimedia projects.