Whether the book has a future continues to elicit much discussion these days, and the debates remain polarized. My impression is that those who claim the book has no future are, if not the loudest, at least the more numerous. I disagree that the book is doomed to disappear.
A few years ago I organized a conference on The Future Perfect of the Book. The play on words in the title was meant to suggest that the book had a future, but I wanted us to think about what the book will have been when, at some point in the future, we look back at these debates. I don’t naively believe that the future of the book will be perfect, but I do believe that books are quite robust as a technology and that their demise will come very slowly indeed.
The debate about the future of the book may be polarized, but the discussion is not exactly taking place on an equal footing. The discussion often centres on one type of book or one type of reading, which is then taken to be general. Books and reading experiences differ, however. It is not too difficult to see why printed reference works – dictionaries and encyclopaedias, technical and other manuals, directories and gazetteers, catalogues and indices, etc. – are a thing of the past. It is a different matter, however, with the type of book that tends to invite linear reading – novels and essays, for example, or academic monographs, coffee table books, newspapers, magazines and journals. These will not be so easily replaced. There is an important caveat to be made here, one which was noted by early defenders of hypertext in the 1990s: linear reading actually represents only a small portion of all human reading activity. When the promulgators of the digital book refer to the constraints of the traditional printed book, therefore, they are confusing categories: true, the digital medium removes these constraints, but the benefits of digital versatility show most clearly when we are reading for information, not for pleasure.
We should remember, though, that the printed book was already an ingenious device in the way it facilitated access to information. In this it differs from the scroll. The codex, unlike the scroll, allows you to access any part of the text in the same time and space. Features such as tables of contents, running headers, page numbers and indices help you get to the information you want fairly quickly. Add to these portability and durability, and you have many of the reasons why I think the book is a robust piece of technology.
When it comes to linear reading, I don’t believe e-Books yet match the full functionality of the printed book – which I take to be one reason why the novel or the book of poetry will not disappear that quickly, if at all. No doubt, e-Books offer advantages: you don’t have to lug around “all that pulp”; you save on bookshelves; you can transfer your e-Books between devices, and so on. But there are losses as well, losses which are perhaps less noticeable – or insufficiently understood – and therefore overlooked, even by the luddites. Books aren’t texts. They are printed texts. Aspects such as typeface and mise-en-page, book covers, title pages and running headers are not merely decorative features. They have a certain functionality. They facilitate reading. They shape the reading experience. Contrary to the saying that we shouldn’t judge a book by its cover, we do so constantly. (One quick look at a book and you immediately know whether what you hold in your hands is a work of literature, a trashy novel, a literary classic, a textbook, a monograph, a popular science book, and so on.) The typeface may be lost on all but the geekiest of readers, and yet the typeface determines legibility, which in turn influences the reading experience, the way you remember the book, perhaps even your comprehension.
Early e-Books and e-texts ignored these aspects completely. And even though the Kindle has put effort into improving the way it presents its text, the device nonetheless creates a uniformity whose effects are difficult to foresee. What happens, I wonder, when your secret indulgences in Helen Fielding or Mills and Boon look the same as your Dostoyevsky or Margaret Atwood?
I have returned, in other words, to the issue of reading experiences. Of course, the type of experience that I cherish and associate with physical books may not be relevant or important to other readers. My penchant for type, and for the touch and smell and shape of books, will not stop larger forces from taking their course. However, I want to reiterate that even before the emergence of e-Books we already had different books for different readers. The type of reader who reads a book once and then takes it to the charity shop or second-hand bookstore will be well served by e-Readers. Others may not.
There are, however, other aspects that may at the very least slow down the course of this digital evolution. First, book ownership is ingrained in our culture – and has been at least since the advent of universal literacy in the nineteenth century, regardless of how small one’s book collection may have been. This partly explains why e-Books are such a recent phenomenon. (The first Kindle hit the market only in November 2007.) Second, we have yet to see what the market will bear. How much are people willing to pay for something they cannot hold? For something they may not even own? The latest Margaret Atwood retails at the moment for £5.03. A substantial biography ranges between £8 and £10. Academic books can be up to £40. The point that consumers may balk at rising e-Book prices or subscription bundles has not yet been made frequently, but it is no doubt a factor that will determine how perfect the future of the e-Book is.
The Wordsworth Trust’s 60,000 manuscripts and documents have all been digitized and made available in high-quality images through Romanticism: Life, Literature and Landscape, a collection that is part of Adam Matthew Digital’s growing portfolio of digital archives. The realization of this project was no mean feat, for it makes the entirety of the Wordsworth Trust’s holdings in Grasmere available for study remotely. These holdings comprise the largest collection in the world of William Wordsworth’s poetic manuscripts, working drafts and notebooks, but also reproductions of various printed materials (page proofs and annotated editions) and other archival documents, such as scrapbooks, financial papers, travel journals and vast amounts of correspondence. The Trust owns letters to and from William, Dorothy and Mary Wordsworth, Wordsworth’s brothers John and Christopher, Sara Hutchinson, and many of Wordsworth’s friends, associates and fellow poets, the most important of whom are S.T. Coleridge, Robert Southey, Charles and Mary Lamb and Robert Haydon. In addition to the Wordsworth manuscripts, the Trust also holds Dorothy Wordsworth’s famous Journals, manuscript materials by Coleridge and Thomas de Quincey, a small number of paintings and drawings by artists connected with the Lake District or who visited there (such as Thomas Gainsborough, J.M.W. Turner, Joseph Wilkinson and John Constable), and various objects from the Wordsworth household.
All items in the AMD digital collection are fully searchable, zoomable and easily navigable, by document and by collection. The latter – the Dove Cottage Manuscripts (containing Wordsworth’s manuscripts), the Wordsworth Library Letters (containing the main part of the poet’s correspondence and that of his friends and family) and the Wordsworth Library Manuscripts Alphabetical Sequence (consisting of two parts, some 2,000 letters mostly to Wordsworth and S.T. Coleridge and a large collection of miscellaneous papers relating to Wordsworth’s life and circle) – are described in detail by Jeff Cowton, the Trust’s Curator. There is also a detailed Introduction, a timeline (which links to key documents), biographical sketches, interactive maps of Westmoreland and the Lake District (Image 1), modern photographs of the region, and a selection of scholarly essays, some introductory, some of a more specialist nature, by Jared Curtis, Stephen Gill and Judith W. Page. Of note is Curtis’s history of the Cornell Wordsworth Series, a collection of scholarly editions (with facsimile and transcription) of the main manuscripts of Wordsworth’s major works, which the user of Romanticism: Life, Literature and Landscape may want to use in conjunction with the site.
A major benefit of the site is the metadata, which offers general collection-level catalogue information (Image 2), short descriptions that accompany each document (Images 4 and 5), and detailed item-level descriptions (Image 3), all of which complement the Trust’s own online catalogue and, again, the volumes of the Cornell Wordsworth. Additional features are the ability to export catalogue records to EndNote and Refworks, to add images to a personal lightbox, and even to download the entire document.
Romanticism focuses on exploring the documents, rather than the texts of the poems, which makes it somewhat more challenging for anyone not deeply familiar with Wordsworth’s manuscripts. Still, the site makes plenty of allowances for the non-expert. It is possible, for example, to search for particular poems to get to the relevant drafts. The advantage of a document-based approach over a text-based approach (such as that of the Cornell Wordsworth, which focuses more narrowly on the composition history and textual development of individual poems) is that it allows one to see the creative process more directly and in its original context. It offers the user a better sense of how Wordsworth used his manuscripts and how composition worked, revealing its often convoluted, meandering progression, as was the case with The Prelude, for instance, whose narrative grew almost searchingly out of a series of inchoate verse fragments.
Unfortunately, Romanticism also has some clear shortcomings, particularly in the representation of the documents, where there are some oversights and project management issues. The primary aim of Adam Matthew Digital, which in a previous incarnation was a publisher of microfilm collections, is to increase access to archives. While over the years AMD has placed more emphasis on contextual interpretation, the inclusion of adequate metadata, and usability, the functionality of their products is not exactly innovative. Their interest lies in mass digitization, not in creating tools to facilitate the process of analysis and understanding. Not surprisingly, then, I found a few problems with the design of Romanticism when I gave the resource a trial run. (I tested the site about a year ago, so some of these glitches may have been fixed in the meantime.)
For instance, when you are viewing a particular folio or opening, you can navigate forwards or backwards in the document either one image at a time or by selecting another image from the pull-down menu. The navigation, however, is by image, not by folio or opening; no folio numbers are given at all. In itself this would not be a problem were it not that, strangely, there seems to be no consistent policy as to when the system shows a single folio and when an opening: sometimes you get one and sometimes the other. (I would hazard a guess that the decision whether to show a single page or two pages was made on the spot when the items were being photographed.) Because of this, there is no correlation between image number and folio number, which can cause the researcher considerable difficulty, since one has to do all the counting oneself, and there is no guarantee that a blank page – or even a page with writing on it – has not been accidentally missed. The situation is further complicated when dealing with loose leaves, where it is not always possible to tell whether certain double leaves are two leaves photographed together or whether they actually form a bifolium.
Also frustrating is the lack of certain information. Below each image the user can see a short description and date of the document, but not the shelfmark. In the right-hand corner of the screen a plus sign can be clicked to open a pop-up window with brief information about the contents. But the information provided is not always consistent: sometimes it says something about the image itself, sometimes about the whole document (see Images 4 and 5). The text in this box is hyperlinked to other images, but again, since there are no shelfmarks, it is not always apparent to what image or document you are taken.
Matters get really complicated when absolutely essential information is withheld; this is where the document-based approach begins to break down in a serious way. This is the case, for example, with DC MS 20, a notebook of Dorothy’s containing some of her verse and prose and her journal from May to December 1800, where an image of the back cover of the notebook is suddenly followed by a number of additional images. These begin with what looks like a new opening: an image of a pasted flyleaf, shown as a left-hand verso with various notations, followed by some stubs of torn-out pages and then by other pages with writing on them. The unusual sequence is easily explained: Dorothy flipped her notebook upside down and used the back as a new beginning. Anyone who has worked with the Wordsworths’ originals is familiar with the way they used their notebooks. But no explanation to this effect is provided. It would have been better to show the pages in their original position and add a rotation function. And, as with the lack of correlation between images and folios, there is no telling whether any folios went unreproduced.
Faults like these may seem minor, but they are serious, and they puzzle the user. Still, they do not ultimately detract from the fact that Romanticism: Life, Literature and Landscape is an enormously useful resource which has the ability to give a new boost to Wordsworth studies and to the study of the Romantic period in general. It cannot be denied that the significance of this new collection lies in the sheer volume of material made available for study and that, with a fair degree of success, it helps give insight into the life and working methods of one of Britain’s major poets and his circle.
This entry sets out in more detail some ideas I hinted at in my previous blog post. Digital technology, as I said, is of course already used frequently, but it is used in the main to beam digital surrogates of manuscripts to the world. The emphasis here is on showing, and a little on telling. Computers, however, are not just publishing tools; as instruments that can compute, they can be put to better use in facilitating actual analytical processes.
The challenge therefore is to muster that computational power and bring it to bear on the interpretation and representation of manuscripts.
What I have in mind is a Science Museum for manuscripts. Museums devoted to scientific knowledge and engineering across the world are very good at creating multimedia displays with buttons and models that help the visitor understand how a law of nature or a technological innovation actually works. Couldn’t we create an interactive tool that shows how manuscripts work? Manuscripts, after all, are not just objects; they are also events, to borrow a term from Wordsworth scholar Jeffrey C. Robinson.
Web 2.0, social media and various modelling tools (such as those used to reconstruct archaeological sites and buildings) may prove inspirational. Just imagine that a manuscript had a Facebook page. Its “status updates” would be a daily-changing caption; under “about” you would find basic background information; the “timeline” would detail the composition history; “friends” would be the visitors leaving comments; and the photos, of course, would be the digital facsimiles. The notion might seem somewhat flippant, but the underlying concept is, I think, extremely valuable, for what Web 2.0 enables is the flexible and easy collection, creation, processing and repurposing of contextualized (or contextualizable) information. The QRator project developed by UCLDH is a good and simple example of how social media can be used to involve visitors in the interpretative process of the museum.
The project that Elena Pierazzo and I are thinking about aims to provide users with a suite of tools for analysis, interpretation and learning, and to provide curators and archivists with a customizable interface that allows them to store and represent their holdings in a manner that can be easily and cost-effectively integrated into existing workflows.
The technological challenges of realizing such a suite may for the moment still be substantial. Out-of-the-box tools that require little or no advanced programming skill are still difficult to reconcile with the kind of flexibility and interoperability associated with Web 2.0. So for the time being we will limit ourselves to some proof-of-concept models to begin testing the boundaries of what the technology can do.
Elena has already built a demo for the manuscripts of Marcel Proust that dynamically illustrates the sequence of the text, either in the order in which Proust wrote it or in the order in which it appears in the final published version. (See the post on “Genetic Encoding at Work” on her blog, or go directly to the demo.) What the tool does, in other words, is create an augmented reality demonstrating the moment of inscription. In this instance, however, the tool only shows the result of a scholarly intervention: the sequence has been established for the user. For many other works that were written in “parcels” (the earliest instantiation of Wordsworth’s The Prelude, for example, or Ted Hughes’s Birthday Letters), that sequence is conjectural at best. What if it were possible for the user to test alternative sequences? One can imagine a desktop environment with a set of digital facsimiles and drag-and-drop interaction, supported by ready-made transcriptions, in which the user can try out different sequences, comment on the process and share the result with others.
Screenshot from “Around a sequence and some notes of Notebook 46: encoding issues about Proust’s drafts”, by Elena Pierazzo (King’s College London) and Julie Andre (ITEM, Sciences Po Paris)
This might only be the start. For the idea of augmented reality is not limited to enhancing display and interaction; it also enables contextualization through the use of interlinked data. The stories that can be told about a manuscript are varied, from the sequence of composition to its place within literary and cultural history. The earliest drafts of Wordsworth’s Prelude, for example, are contained in a cheap quarto notebook housed at Dove Cottage, Grasmere (DC MS 19), which William and Dorothy purchased at the cost of 1s in preparation for their journey to Goslar in 1798. The drafts appear at the back of the notebook, which also contains (among other things) notes on German grammar, an account of their travels, an essay on Klopstock and entries from Dorothy’s Journal added at a later date. The notebook is thus very much a living record of the lives and preoccupations of William and Dorothy at this time. To help tell its story, the document can be amplified with linked information: for example, the Wikipedia entry on Friedrich Gottlieb Klopstock, a map of Germany showing the Wordsworths’ itinerary, an image of eighteenth-century Goslar, a scholarly article with a codicological analysis of the notebook, and so on.
Again, this type of linked information does not have to be provided by the system; it could be up to the users to curate this content. The digital resource, therefore, could function as a social space where users can post comments, remarks and observations, as in household-name social media such as Twitter or Google+, and can bring information together with systems inspired by Scoop.it or other content-curation tools. The interface, ultimately, would give the user access to and control over the pre-loaded digital content to explore, arrange, interpret and share the collection of manuscripts with other users and visitors to the site.
Although interactivity has for some time now been a clear aim of digital textual scholarship, existing resources still achieve only a rather low level of it. While there are clear technological constraints (flexibility in the tools still comes at a cost: the need for programming time and expertise), some projects have already taken the route of augmented reality, such as the SCARLET project at the John Rylands Library, University of Manchester. Still, more research is needed in this area.
The usefulness of computers to humanities research is now something of a given. The uses to which computers are put are diverse. But when it comes to literature those uses converge on two activities: (1) employing the computer’s capacity to handle large amounts of data (particularly relevant in the production of scholarly and critical editions of text traditions with a large corpus); and (2) increasing access to texts and archives. Where literary manuscripts are concerned, the possibilities of the digital medium seem enormous. Some very good projects have indeed been produced, from the scholarly (e.g., the Jane Austen Fiction Manuscripts) to the more generally informative (the website that accompanied the Shelley’s Ghost exhibition at the Bodleian, which contains a wealth of digital images). But there aren’t nearly enough of them — why, for example, in this bicentenary year was there no Manuscripts of Charles Dickens project? A more crucial question for me, however, is: what do these digital projects do for the manuscripts? What purpose do they serve besides facilitating access?
In recent months, I have given two papers reflecting on this particular question: one on “How to Work with Modern Manuscripts in a Digital Environment — Some Desiderata”, presented at the Digital Resources for Palaeography Symposium, King’s College London, last September; the other on “Unlocking the Literary Heritage: Digital Tools for Analyzing and Interpreting Manuscripts”, presented just a few weeks ago at a conference organized by The Wordsworth Trust, Grasmere, on Words on the Page, and the Meanings Beyond: The Innovative Interpretation of Manuscripts (for which I received generous and gracious input from Elena Pierazzo of the Department of Digital Humanities at King’s). The driving force behind these papers was to consider precisely the potential of the computer in helping us to come to terms with manuscripts in all their aspects — not only their textual form, but also their physical form (paper and ink, studied by palaeographers and codicologists), their documentary form (or “support”, usually only the remit of the bibliographer or cataloguer), their historical form (the use or function the manuscript had for the author) and their cultural form (those seemingly very individualistic habits of a particular writer that actually reveal the shared customs of a particular period).
To begin to envision the potential that computers can offer, we would do well to take stock of what they already do. Without wanting in any way to sound disparaging, most digital manuscript projects seem quite limited. Most of them are concerned with putting digital facsimiles online, usually accompanied by transcriptions, introductory materials and bibliographical metadata. (Furthermore, in many instances the focus lies primarily on the writing contained in those manuscripts, which renders an important but only partial view of the documents in question.) A similar point can be made about the manuscript in the context of the museum. While the document has been liberated from the glass case, it is newly ensnared in the confines of the computer screen. Digital technology, in other words, is used in a way that looks backward to the old technologies of publication and display rather than forward to creating new and enhanced ways of experiencing and understanding the manuscript.
The challenges are obviously real. Philip Larkin wrote about the power and fascination that emanate from these written documents from the past. To replicate that power in the digital medium requires not only the technical expertise to create innovative tools, but also the capability to conceptualize the kinds of analytical and interpretive engagements that enhance the way in which we experience manuscripts. The Grasmere conference — like the one the Wordsworth Trust hosted a year ago — had these challenges specifically in mind, albeit not only in the digital environment. My ambition over the coming months is to think further, together with my colleague Elena, about the conceptual models that are needed to make this digital experience real.
The fact remains that specific texts, existing in specific books, have a specific history which begins with their inception and creation, continues with their printing, publication and dissemination, and ends with their reception. The point is that these specific texts, even when they are faulty, have a life of their own. Rather than simply removing errors, we may also want to ask, as Peter Shillingsburg advises, why readers’ responses to one text are — or are not — different from their responses to another text of the same work (2006, 77). If the virtue of all bibliographical studies is, to quote again from McKenzie’s theoretical work, “to show the human presence in any recorded text” (1999, 29), then we must do this indeed for any text, not just those closest to the top of the stemma. Critical editions should incorporate the “making of” of the literary work more comprehensively than they do at present — and also, at the metahistorical level, their own making. In the digital arena, several scholars have already put forward new ideas for this type of edition. These include Ray Siemens’s social edition, which envisages a new model of researcher engagement that involves the user community in the construction of the digital edition, replacing the old model in which the final word rests exclusively with a small editorial team; Siemens and his colleagues see the edition instead as a process and the editor as a facilitator (Siemens et al., forthcoming). Edward Vanhoutte has repeatedly argued for diversifying the functionality of the digital edition to make it suit different audiences (e.g., Vanhoutte 2010). That this is not simply a return to textual pluriformity is made clear by Elena Pierazzo (2011), who distinguishes between the display text and the embedded source files: variant editions exist in potentia within the TEI encoding and can be activated at will.
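Pierazzo’s distinction can be made concrete with a small, hypothetical TEI fragment (the apparatus, reading and witness mechanisms are standard TEI parallel segmentation; the line of verse and the witness identifiers are invented for illustration). Both readings sit side by side in the one source file, and a stylesheet generates different display texts by choosing which witness to activate:

```xml
<!-- Hypothetical example: two witnesses encoded in a single source line -->
<l n="1">The evening
  <app>
    <rdg wit="#draft">darkens</rdg>       <!-- reading in the manuscript draft -->
    <rdg wit="#print1815">deepens</rdg>   <!-- reading in the printed edition -->
  </app>
  over the lake.</l>
```

Selecting every reading tagged #draft yields one variant edition; selecting #print1815 yields another. Neither is privileged in the encoding itself, which is why the variant editions can be said to exist in potentia.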
What the critical edition in the time of the history of the book will look like is not a question I can answer specifically, but that it will build on the new directions in digital editing is certain. What is already becoming apparent is that the creation of new digital tools is also bringing about changes in editorial practice: in five or ten years we will be editing differently.
This brings me, by way of a concluding point, back to the granularity of the edition and to McKenzie’s sociology of the text. For McKenzie, textual scholarship was often not concerned enough with the “material concerns of historical bibliography” considered from the “economic and social dimensions of production and Readership” (McKenzie 2002, 200). For generations, critical editors have performed a vital role and more often than not have performed it well. But what they have done is produce editions – produce texts – that are new and different from the old. By the very nature of what editors do, they push the old texts to a greater or lesser degree to the side. What they don’t reckon with, however, is the pastness of these texts, which arises, as McKenzie would say, because they do not treat their texts as belonging to books. Books look the way they do because of the involvement of other agencies such as typesetters and designers. The book is an expressive form, and we need to understand that expression, whether or not it came about with the complicity of the author. It’s time we learned how to read their language (McKenzie 2002, 207).
To my knowledge only one print critical edition so far resembles a social edition, and that is the edition of Yeats’s Mythologies prepared by Warwick Gould and Deirdre Toomey. This edition of Yeats’s early prose stories tries to do justice not only to Yeats’s final intention, but also to the intentions of collaborators. Moreover, Mythologies is not just a collection of texts, but a book project that existed in ever-changing forms and emanations; it grew over time as stories were collected and then re-collected in separate volumes, sometimes under different titles. The text that Gould and Toomey present – though they include some genetic material as well – is that of the Edition de Luxe that was in preparation but never realized, for that edition was to be, in Yeats’s mind, the expression of his permanent self. To top it all, the layout of the edition, published by Macmillan, pays homage to the volume’s original layout by replicating Yeats’s favourite typeface, Caslon Old Style, in the text and running headers, and by imitating the original title page (Figures 3 and 4).
Figure 4: W.B. Yeats, Mythologies, edited by Warwick Gould and Deirdre Toomey, Palgrave Macmillan (2005)
This example shows that the material aspects of texts and books need not of necessity be suppressed in a scholarly edition. Even though the Macmillan edition of Mythologies is a contemporary book, its design references the time and place of the work’s original production. An edition like this mediates its text differently from the ordinary scholarly edition. It does not purport to exist outside of its own interface.
The digital environment, rather than diminishing the granularity of the text, increases it. For those areas of textual transmission in which the physical form of the book is as important as the text which it contains, this statement is becoming self-evident. Digital editions of medieval manuscripts or of modernist magazines cannot really avoid the forms of their original design. One can only hope that scholarly editions of texts and books that have a less spectacular design will nonetheless follow suit in rendering some of their original historical forms.
McKenzie, D.F. 1999. Bibliography and the Sociology of Texts. 2nd ed. Cambridge: Cambridge University Press.
———. 2002. Making Meaning: ‘Printers of the Mind’ and Other Essays. Eds. Peter D. McDonald and Michael F. Suarez. Amherst and Boston: University of Massachusetts Press.
Pierazzo, Elena. 2011. “A rationale of digital documentary editions”. Literary and Linguistic Computing 26: 463–477 [accessed 14 Jan 2012].
Shillingsburg, Peter L. 2006. From Gutenberg to Google: Electronic Representations of Literary Texts. Cambridge: Cambridge University Press.
Siemens, Raymond, et al. Forthcoming. “Toward Modeling the Social Edition: An Approach to Understanding the Electronic Scholarly Edition in the Context of New and Emerging Social Media”. Literary and Linguistic Computing.
Vanhoutte, Edward. 2010. “Defining Electronic Editions: A Historical and Functional Perspective”. In Willard McCarty (ed.), Text and Genre in Reconstruction: Effects of Digitalization on Ideas, Behaviours, Products and Institutions. Cambridge: Open Book Publishers, pp. 119–144.
The argument that books as carriers of texts are important for textual scholarship is easy enough to make. But how should it change our practice? Is the aim merely to produce scholarly editions with additional content, as Bodo Plachta (2007) has suggested? The case he builds uses examples of “politically charged” paratexts to demonstrate the importance of book forms to the study of the production, dissemination (to stay under the radar of censorship, for example) and reception of printed texts (2006, 96, 99). His recommendation to include facsimile materials in the (digital) edition is “not to replace textual criticism, but rather to add to it, so that the edition may also serve as an archive” while “offer[ing] urgently needed starting points for the use of editorial products in literary studies” (2006, 103). Of course, the practical implications of bringing book history into the edition are considerable, even in the electronic environment, however much better suited it may be to completist editions.
Textual editing in the time of the history of the book should, however, not simply be a matter of producing editions that have more content, but rather editions that do more things. Such a reconceptualization of scholarly editions should go beyond the binary created by the digital era: digital critical editions on the one hand and digital archives on the other. Shillingsburg has stated repeatedly that the task of the editor is to edit, thereby fulfilling an important responsibility towards the reader. Editors “whose work stops at archiving”, however, “perform valuable work, but they offer no more than starting places” (1997, 224). While I am certainly in agreement, one might object that Shillingsburg takes a rather narrow view when he sees the digital archive only as a toolkit for making editions and not as a repository of the process of textual transmission; he certainly does not see the critical apparatus that way, which rather than “a dumping ground for superseded textual forms” is “a guide to the progression of composition and production processes creating a succession of versions” (1997, 212). Digital archives are no doubt better adapted to providing this guidance than the printed edition, whose apparatus – mockingly dubbed a “Variantenfriedhof” [cemetery of variants] (Gabler 2008, 14) – as an output of research is quite difficult to repurpose. Regardless of the painstaking accuracy with which the apparatus is put together, to reconstruct particular states from the welter of detail is all too cumbersome, if not impossible. Digital editions, by contrast, being both edition and archive, offer the potential for digging more deeply into the textual data.
Where once it was enough for editions to present an accurate text, provide a rationale for their emendations and record the textual history and transmission, digital scholarly editions have begun to move away from such singular editorial goals. If we now agree that editions have only a limited life cycle before new theoretical perspectives and new research questions prompt us to remake our editions, then we must also acknowledge that not all editorial aims can be served by one edition alone. Even though we currently accept that having rival editions is good (since no edition can claim to be “definitive”), we still tend to see them as rivals for truth rather than as editions occupying different shares of the scholarly and readerly market.
Textual pluralism is good – as long as it adds to our understanding of the nature of text, work and book; it cannot simply be good for its own sake, or be made to fit a liberal humanist agenda. The point is that textual pluralism only becomes really insightful when we recognize the differences in purpose that editions serve and that different editions may have different users.
In some cases, controversies over editorial practice can bring out such differences. For W.B. Yeats’s poems we have two rival critical editions: the first by Richard Finneran, originally published in 1983, revised in 1989 and available in the Collected Works issued by Scribner’s in New York; the second by A. Norman Jeffares, published by Macmillan (but available only in the UK) (Yeats 1989 and 1997). Both editions apply the principle of final authorial intention, and as a result both agree on most readings. The contentious word, however, is not “final” or “intention”, but “authorial”. For Finneran, Yeats is the author, and his edition accepts only the final readings that Yeats authorized during his lifetime. Warwick Gould, who was Jeffares’s collaborator, rejects this view (Gould 1989). Yeats was of course the author of his poems for Gould too, but not, so to speak, the sole author. Using letters and other archival evidence, Gould demonstrated that Yeats delegated certain “final” decisions about his texts to other people, in particular to his editor at Macmillan, Thomas Mark, and his wife, George Yeats. After Yeats’s death, Mark worked closely with Yeats’s widow, who implemented revisions that Yeats had indicated he wanted but had never carried out (Gould 1994, 110–11). Gould, in other words, sees Yeats as a “social” author who was at the heart of a small network of people who all had some authorial input.
Apart from a handful of variant readings, this different conception of authorship has had a significant impact on the order of the poems in Yeats’s canon. When in 1933 Macmillan issued the Collected Poems containing all of Yeats’s work to date in two volumes, Thomas Mark suggested departing from the normal chronological order (an order which had already been established for the Edition de Luxe) in favour of a division between “Lyrical Poems” in volume one and “Narrative and Dramatic Poems” in volume two. Mark proposed this arrangement primarily for commercial reasons. To arrange the poems chronologically would have meant opening with a long narrative poem called The Wanderings of Oisin, and Mark felt that this might put off potential buyers; placing the narrative poems in the second volume meant that volume one could open with the better-known lyrics from Crossways (1889) and The Rose (1893), including the immensely popular “The Lake Isle of Innisfree”. Yeats warmly welcomed Mark’s suggestion (Gould 1989, 714–15). Since no other collected edition appeared during Yeats’s lifetime, the order of the Collected Poems became de facto the poet’s final intention and is thus followed by Finneran. However, this is not the full story. As Gould argues, Collected Poems represents Yeats’s canon for a particular time and audience: it served its purpose wonderfully as a trade edition, bridging old and new audiences while aiming for some form of completion; but it deviated from Yeats’s own vision of his canon, which did not marginalize the longer poems. During the 1930s, he was making preparations for no fewer than two de luxe editions of his collected work, one to be published by Macmillan in the UK and Ireland, the other by Scribner’s in the United States. At no point was the new arrangement of Collected Poems, separating lyrical from narrative poems, considered for these editions (see, e.g., Gould 1989, 725). Unfortunately, neither edition ever appeared.
Although they were never published, these editions exist in numerous runs of page proofs and draft contents, and therefore still supersede the 1933 Collected Poems.
What this example illustrates is not simply that different rationales lead to different editions, but that editors deal with more than just text. In Yeats’s case, his texts cannot be seen separately from how he conceived (and frequently reconceived) his œuvre; as such, they are embedded in the social condition of their coming into being. (Nor can they be separated from the book as physical object, insofar as for Yeats design and layout form an integral part of the symbolic structure of the poetry, though this dimension has not yet been realized in any existing edition.) While Finneran’s edition is defensible in its application of accepted editorial principles, it is narrowly text-centred and does not rehearse the full history of the textual transmission. Jeffares’s edition offers a much wider purview as well as a more complete textual history that reflects the sociological condition. Moreover, in arguing against Finneran, the edition includes a sense of its own purpose – or rather, it begins to do so. In other words, it is no longer context-free in the way that Finneran and other editors took their editions to be universal. Different rationales lead to different editions that may lead to different uses and fulfil different research needs.
Critical editions are generally not self-conscious about their aims. Eclecticism in the Anglo-American world still all too often leads to the production of editions allegedly suited to the needs of all readers. It reinforces the idea that the text can exist outside of time and place, that the edition itself is transparent and value-free, and that it has, so to speak, no interface. Consequently, literary scholars can continue treating textual editing as an activity preparatory to the real work of criticism. The academic book market is complicit in this. Not only are popular series like the Norton Critical Editions and Oxford World’s Classics uncritical when they boast about offering the most “authoritative texts” (a nomenclature that is ambivalent at best, because most are simply reprints of what is considered the “best” text), most academic presses that publish scholarly editions also issue what one could call “light” versions that strip away the edition’s paraphernalia, i.e., textual introduction and critical apparatus, on the grounds that students of literature have no interest in them. A self-fulfilling prophecy if ever there was one.
Insofar as editors accept that no edition can be definitive, they acknowledge that the history of the textual transmission is reflected in the editorial choices they make; but the possibility that these editorial choices reflect back on the nature of the textual transmission is not understood. Even McKenzie, in his posthumous edition of the works of William Congreve, did not, it seems, fully adopt his own conceptualization of the sociology of the text in his editorial practice. Opting for the 1710 Folio edition of Congreve’s Works as his copy-text instead of the first Quarto editions of the individual plays, McKenzie produced, by his own admission, an eclectic text based on final authorial intention. His choice in favour of an authorially revised text over the historical context of its original production, though, was not simply led by orthodoxy. In fact, the choice of copy-text was far from obvious, given that the Works was not without problems. Congreve in some cases relied on corrupted reprints for his own base text; he seems to have revised his texts in a rush (2011, xxxiii); and not all his revisions can be considered improvements, particularly because, following increased pressure from Queen Anne’s Court to quell licentiousness, Congreve submitted himself to self-censorship (2002, 224–25; 2011, xxi–xxii). What motivated McKenzie, however, was the fact that the Works was consciously fabricated as an œuvre. While its “historical form and concept” are just as valid as the “textual structures” of the original Quarto design, Congreve gave his plays a whole new intent. Congreve did more than just revise his texts: he regenerated their “textual structures” as he lifted them, as McKenzie puts it somewhat lyrically, “from the soil of [their] first growth” and replanted them “in new relationships”.
The difference from the forms of the individual plays is that their new “display is more likely to favour the design of the whole than the diverse forms of the earliest state of each item” (2011, xviii). The neo-classical design that Congreve gave to his plays was not only an innovation for the time, the result of close collaboration between Congreve, the bookseller and publisher Jacob Tonson, and Tonson’s printer John Watts, but it also approximates the “distinct unitary form” which Congreve used for scene divisions and stage groupings in some of his extant manuscripts (2011, xxiii; see also McKenzie 2002, 123–24). McKenzie, therefore, believed that for Congreve the Folio edition meant a typographical translation of the play text to the book, creating a “hand-held theatre” (2002, 201).
In an unexpected twist, however, McKenzie argues that the eclectic edition itself constitutes a “sensitive response to social [and historical] context” that “serve[s] the play to the fullest” (2002, 226). Explicitly positioning himself against Zeller, he states: “Conflation is inevitable. But it is also critically and historically responsible only in so far as the causes of the variant readings have been explained, in this case by that peculiar complex of attitudes – personal, social and trade – which obtained for Congreve [. . .] in the first decade of the 18th century” (2002, 225). One can quibble with McKenzie’s insistence that the eclectic text does justice to, rather than violates, the historicity of the text. Zeller and McKenzie are at odds here in that McKenzie, somewhat unexpectedly, defends the historicity of the variants in the text, whereas Zeller argues for the historicity or “Befund” [“record”] of the authorized versions of the text (see Zeller 1995).
Gould, Warwick. 1989. “The Definitive Edition: A History of the Final Arrangements of Yeats’s Work”. In A. Norman Jeffares (ed.), Yeats’s Poems. London and Basingstoke: Macmillan Papermac, 706–749.
——. 1994. “W.B. Yeats and the Resurrection of the Author”. The Library, 6th ser., 16, 101–134.
McKenzie, D.F. 2002. Making Meaning: ‘Printers of the Mind’ and Other Essays. Eds. Peter D. MacDonald and Michael Suarez. Amherst, Boston: University of Massachusetts Press.
——. 2011. “Textual Introduction”. In D. F. McKenzie and C. Y. Ferdinand (eds.), The Works of William Congreve. Vol. 1. Oxford: Oxford University Press, xvii–xxxiii.
Plachta, Bodo. 2007. “More Than Mise-en-Page: Book Design and German Editing”. Variants: The Journal of the European Society for Textual Scholarship 6, 85–105.
Shillingsburg, Peter L. 1997. Resisting Texts: Authority and Submission in Constructions of Meaning. Ann Arbor, Mich.: University of Michigan Press.
Yeats, W. B. 1989. Yeats’s Poems. Ed. A. Norman Jeffares. London and Basingstoke: Macmillan Papermac.
——. 1997. The Poems. Ed. Richard J. Finneran. 2nd ed. New York: Scribner.
Zeller, Hans. 1995. “Record and Interpretation: Analysis and Documentation as Goal and Method of Editing”. In Hans Walter Gabler et al. (eds.), Contemporary German Editorial Theory. Ann Arbor: University of Michigan Press, 17–58.
D.F. McKenzie, more than anyone else in the field, has made the leap from text to book, advocating that we should amplify “our sensitivity to the printed book as physical form in order to refine our notions of the historicity of printed texts and our function in editing them” (1984, 334). A discussion of the nature of text and book cannot take place without reference to McKenzie’s seminal idea that “forms effect meaning”, a subtle but profound adjustment of the old structuralist idea that form and content are inseparable.
The depth of McKenzie’s statement is adumbrated when we consider its misprint in the Routledge Book History Reader: “forms affect meaning”, with an “a” rather than an “e” (Finkelstein and McCleery 2006, 37). In English the words “effect” and “affect” are often confused. Obviously close to each other in sense and orthography, the first means to bring about something; the second means to have an influence on something. (To give an example: “pollution affects the health of the nation”, but “pollution effects the destruction of the ozone layer”.) When McKenzie states, therefore, that “forms effect meaning” (1999, 13), he does not say that bibliographical forms have meaning, but that they bring about meaning.
Certainly forms can also affect meaning, as Jerome McGann has argued in respect of his bibliographical codes. However, apart from the cases that McGann and his followers have cited to defend the notion, I have doubts as to the magnitude of the bibliographical codes and their ability to bring about “shifts and changes” in meaning (1991, 59). What may be obvious for Byron’s “Fare Thee Well” may not be so obvious with other works. The circumstances in which Byron’s poem was published – privately printed, twice pirated, and finally canonized in the 1816 Poems – make clear that not just the “look” of the publications, but also their function and audience were different. The conditions in which other works that have appeared in different editions were published may not have created such marked differences. Take the case of James Joyce’s Ulysses, for example, which from 1918 onwards was serialized in two small magazines, The Egoist in England and The Little Review on the other side of the Atlantic, before being published in 1922 by Sylvia Beach’s Shakespeare & Co. The similarities between these two publications are probably greater than their differences. Despite looking very different, the two magazines were both aimed at the individualist, discerning and conscientiously modern reader. Both were in part orchestrated by the P. T. Barnum of literary modernism, Ezra Pound. As such the magazines each occupied a niche in the literary market in England and the United States, and they shared a do-it-yourself attitude towards publishing that was later also embodied by Beach when she offered, despite not being in the publishing business herself, to publish Ulysses to help Joyce. In other words, McGann’s argument, appealing though it is, slightly overdetermines the power of form.
I would argue instead that different bibliographical codes may result in differentiation in meaning, but it should be a matter for reception history to ascertain whether these differences also registered in the minds of readers. Evidence in the case of Byron’s “Fare Thee Well” certainly suggests that the different versions were received differently.
I find McKenzie’s point, therefore, more discerning. His argument is less about hermeneutics than about facilitating the transmission of texts and their meaning, and about understanding the signs of that transmission as they are apparent in the physical books themselves as well as from the “conceptions of the book” held by printers and book designers – conceptions of how readers are expected to interpret these signs – which can be found in the archives of the printing presses (McKenzie 2002, 207). Running headers, type size and other paratextual features of the book do not have meaning; to appropriate a term from Roland Barthes, they are a zero degree of printing, unburdened by the need to communicate themselves. Nevertheless, they help the communication and the production of meaning to come about. The aims of book design and typography thus are “aesthetic sensibility, technically informed, serving the communication of meaning, the creation of the distinct experience of reading the work” (McKenzie 2002, 214). But where functionality combined with aesthetics to facilitate the reading experience constitutes the attributes of the “book’s total form”, as McKenzie put it in “Typography and Meaning”, the question for the meaning of the book is what “expressive resources” were “available to an author through his printer” (2002, 215, 216).
The Kindle’s low granularity, in other words, should be seen as part of the device’s functionality. I suspect, on the one hand, that the company invested more in designing a gadget that was highly portable and offered high readability (the non-backlit screen is without doubt a great technological achievement) to the detriment of text format. On the other hand, the low granularity of its text fits with the ideology of unconstrained reading that the device supports, prioritizing an iterative text flow over look, which is why the software supports HTML and CSS. This means that content creators have the ability to adjust the layout of their e-Books, provided they do not fall foul of the end users’ desire for unrestrictive formats. But not all documents can be rendered in HTML. The “expressive resources” of HTML are mostly inadequate to represent the finer aspects of typography (e.g., ligatures, tables).
Books, by contrast, are more versatile and have a high level of granularity. The “typographic vocabulary” (McKenzie 2002, 218) used in James Joyce’s Ulysses, for example, can be easily summed up. Neither similar in outlook to a French book, despite its paper wrappers, nor to an English one – and especially not an Irish one – Ulysses owes its uniqueness to its boldness, both as a physical object and as a literary work. The only area in which Joyce was involved was the now familiar pale blue colour of the cover and the white lettering of the title. Joyce ordered repeated samples before he was satisfied that the tone of blue was right, alluding to the white on blue of the Greek flag. If this allusion was possibly too subtle, readers certainly would not miss the dominating, hefty bulk of the book, which in the large-sized state printed on vergé d’Arches handmade paper measured c. 25 x 19.8 cm and at 732 pages was about 4.6 cm thick. It was also a very fragile item, as its weight easily caused the inner hinges to crack. On the inside the mise-en-page was rather odd too, which underscores the sense that this is a book that is not easy to digest. The classic Elzevir typeface certainly adds gravitas, though it was probably also chosen for its compactness. The layout of the text, however, does not conform to the so-called Van de Graaf canon, which dictates that the text area be proportional to the page size; the text height in Ulysses is effectively too squat. The physical obstacles in reading it are matched by the innate difficulty of the novel itself, and Joyce’s critics found an easy target for derision in the book’s unusual look. George Slocombe in his Paris column in the Daily Herald of 17 March 1922 exclaimed: “And here it is at last, as large as a telephone directory or a family Bible, and with many of the literary and social characteristics of each” (quoted in Deming 1997, 217). The format, though, was not intentional (as McKenzie points out, decisions about format were generally motivated by business practices [2002, 220–21]), but resulted from the challenge posed by Joyce’s lengthy work. For the textual editor, the challenge then lies in what to do with the bibliographical information, particularly in cases where typography itself has gone wrong. The layout of T. S. Eliot’s The Waste Land is a case in point. Eliot’s poem was published no fewer than four times as an individual poem: in October 1922 it appeared in the first issue of Eliot’s new magazine The Criterion, followed a month later by publication in the American magazine The Dial; on 15 December Boni and Liveright in New York issued the first book publication in a print run of 1,000 copies; Virginia and Leonard Woolf’s Hogarth Press finished their hand-set edition of 460 copies almost a year later, on 13 September 1923.
Most editors, with Eliot’s own approval, consider the text of the American edition to be the most reliable, and it has been used as copy-text in Lawrence Rainey’s critical edition (Eliot 2006; see 46–48 for details). The stemma of The Waste Land is not at all transparent, but what is almost certain is that The Criterion text and the Boni text were set around the same time from two different typescripts. The printer of The Criterion, however, had made a number of “undesired alterations”, which Eliot reversed, not without considerable effort (2009, 752). It was as if a prophecy had come true, for two months earlier, when Eliot sent the poem to New York for Boni and Liveright, he had already expressed concerns: “I only hope the printers are not allowed to bitch the punctuation and the spacing, as that is very important for the sense” (2009, 707). He was very satisfied, though, with the proofs when they arrived, an achievement that no doubt fuelled his lingering dissatisfaction with The Criterion, even though he had largely set the poem right before the magazine appeared.
Figure 1: “What is that noise”, The Criterion
Figure 2: “What is that noise”, Boni and Liveright
The textual authority of the Boni and Liveright edition stands in marked contrast to its layout. Feeling that a poem of a mere 430 lines was not enough to fill a book, Liveright asked whether Eliot could not add a few extra poems. Eliot’s refusal to do so eventually led to the book being padded out with, on the one hand, the famous notes, specially written for the edition, and, on the other, the choice of format. The Boni and Liveright edition became a smallish octavo volume printed in a large type with ample leading, with poem and notes bulked out to comprise 64 pages. As a result, the spacing and line division were almost completely lost, leaving the poem with an air of being even more disjointed than it actually is. The Criterion text by comparison consisted of only 15 pages in a large size (height c. 23 cm); Eliot’s poetry, not interrupted by unusual and frequent line turns, really comes into its own in this more spacious environment and has an orderly, almost classic feel to it. Furthermore, the spacing follows more closely the spacing in the typescript now at King’s College, Cambridge. Even with one particularly difficult passage, where the text literally cascades down the page, a typographical feature meant to increase the anxiety conveyed in the passage, The Criterion manages to replicate the intended shape even though it does not get it exactly right (Figure 1). None of the original editions, nor for that matter any of the Faber editions of Collected and Selected Poems, gets it right either, but in the Boni and Liveright edition the feature is lost entirely: the blocked text on the verso is almost as jumbled as the cascading text on the recto (Figure 2). Rainey further claims that the Boni and Liveright edition was used as copy-text for the Hogarth Press edition because Eliot considered it the best text (Eliot 2006, 47). This is, however, not the case.
The famous Notes apart, which had appeared only in the Boni edition, a small number of substantive variants make it obvious that the Woolfs did not set their text from the American edition, but presumably from a typescript, of which they may or may not have had a copy. Moreover, the layout’s marked dissimilarity to the Boni edition makes it quite clear that they did not use the American edition: from it, it would have been impossible to reconstitute the layout of the poem in accordance with Eliot’s intention. Eliot thought the “[s]pacing and paging are beautifully planned to make it the right length, far better than the American edition” (2009, 2:202). This not only vindicates The Criterion setting; it also goes to show that the book cannot simply be put aside in scholarly editing.
Bibliography: Deming, Robert H. 1997. James Joyce: The Critical Heritage. Vol. 1: 1907-27. London: Routledge.
Eliot, T. S. 2006. The Annotated Waste Land with Eliot’s Contemporary Prose. Ed. Lawrence Rainey. 2nd ed. New Haven and London: Yale University Press.
——. 2009. The Letters of T. S. Eliot. Eds. Valerie Eliot and Hugh Haughton. London: Faber and Faber.
Finkelstein, David and Alistair McCleery (eds.). 2006. The Book History Reader. 2nd ed. London: Routledge.
McGann, J. J. 1991. The Textual Condition. Princeton: Princeton University Press.
McKenzie, D. F. 1984. “The Sociology of a Text: Orality, Literacy and Print in Early New Zealand”. The Library, 6th ser., 6, 333–65.
——. 1999. Bibliography and the Sociology of Texts. 2nd ed. Cambridge: Cambridge University Press.
——. 2002. Making Meaning: ‘Printers of the Mind’ and Other Essays. Eds. Peter D. MacDonald and Michael Suarez. Amherst, Boston: University of Massachusetts Press.
[The draft text below is the third part of larger essay on Textual Scholarship in the Time of the History of the Book, which in its revised version will appear in the forthcoming volume of Variants: the Journal of the European Society for Textual Scholarship. For this section I am indebted to @ETraherne (Elaine Traherne), @nickmimic (Nicholas Morris) and @praymurray (Padmini Ray Murray) for an informative conversation on Twitter on the im/materiality of text, all of whom have suggested further areas where matters get complicated, such as with oral text, the functionality of text and the difference between common use and scholarly terminology.]
The common view is that text in its physical presence is the means through which we access the work, from which it is somehow separate. But what is text actually made of? Why is it that we cannot hold text in our hands? What, in other words, is the material that text is made of?
Commentators take the term “material text” to be a tautology (Chaudhury 2010, 2). And the self-evident material nature of text is nowhere more apparent than in arguments over whether digital texts are material or not. Matthew Kirschenbaum rejects what he calls the “tactile fallacy”, the supposition that digital texts are not material because “you cannot reach out and touch them”, and argues instead that physicality in the digital environment is as real as in the printed environment; just because digital text cannot be touched does not mean that its “computational variables” do not contain any bibliographic codes (Kirschenbaum 2002, 43). Yet one can quite easily turn this argument upside down and point out that it is not the text that is tactile, but the paper, the binding of the book or the indentation left by metal type. “Material text” may therefore well be an oxymoron. Thus Shillingsburg: “[The text] is something that, although it exists in physical forms, is in some sense capable of existing in more than one form, and is, therefore, not itself physical but must be conceptual or symbolic” (2006, 14).
The attributes codified by the bibliographical codes, furthermore, are not textual; they codify the paratext and other entities that belong to the book. When I wrote in the previous paragraph, by way of paraphrase, that Kirschenbaum considers digital text to be “physically real”, I extrapolated “physical” to be implied in what he said. Kirschenbaum does not actually use the adjective “physical” at all – which leads me immediately to qualify the immateriality of text. On the one hand, things need not have physical form in order to be real. Certainly, from a phenomenological as well as a literary-critical point of view, readers experience texts without being aware of any material attributes. On the other hand, while texts cannot come into being without intentionality, they are more than signs consisting of a lexical or alphabetic, numeric or alpha-numeric codification. (A return to the abstract notions of text employed by literary theory is not a way forward.) If nothing else, texts are visual. They come about from the application of ink to the page (or pixels on the screen, or the marks made by a stylus or other sharp object), which allows the writer’s message to be read. The point I am making is that texts are packaged in ways that appear transparent to many readers, but that prove not to be transparent as soon as you strip away the layers to a lower granularity. Imagine a black-and-white newspaper photograph of the Mona Lisa next to the original painting and you realize that there is a comparable difference in granularity between, for example, Hamlet on the Kindle and in the First Folio. To talk about the visuality of print, however – to limit ourselves to the book for now – suggests again that the link with a support is necessary.
Peter Shillingsburg helpfully reminds us that, unlike with books or manuscripts, there is no universal agreement on what text is: “By texts, for example, some scholars mean physical objects, some mean a series of signs or symbols (the lexical text), and some mean conceptualizations only” (2006, 12). In Resisting Texts, Shillingsburg differentiates, no doubt refining the work of Jerome McGann, between the “material text” and the “reception text”. Where the latter is not quite the “work” (for which he reserves the term “conceptual text”) but the abstract mental construction the reader creates in the act of reading, the former is the “union of linguistic text and document: a sign sequence held in a medium of display” (1997, 51–52, 81–82, 101). German editors in this respect talk about the “Textgestalt” [literally, text form] and the “Textträger” [textual carrier]. Text and document are thus different yet wholly interdependent. The text, in other words, is mediated through the book, which functions as its interface and from which it is inseparable. Like a tattoo it sits, as it were, underneath the skin.
The current e-Reader revolution makes this clear, I think, insofar as we accept that the term e-Book is in fact a misnomer. What the Kindle and other devices are is not a book or a codex, but bits of plastic casing with various intelligently put together electronics inside that offer content for reading. The content delivered on the screen, however, is not a book either, for it has few of the bibliographical features that we normally associate with a book, not least the presence of paratexts, typographical design and layout. While I don’t want to claim that the Kindle offers pure text – for that is in fact impossible – its mise-en-page certainly represents the lowest common denominator of page design in a book.
That the Kindle is more text than book is also demonstrated by another feature of text: its capacity to reflow. The text on reading devices and computer screens is not intended to be static, but is open to reformatting. I can look at any text in my web browser and force the layout to suit my needs, either by increasing or decreasing the font size or by overruling stylesheets with the Readability add-on in Firefox, which de-clutters the web page and renders it in a format that is easier on the eye. I can also copy and paste the text into Word, clip it to Evernote or import it into an e-Reader, all of which cause the text to reflow without the risk of human interference that resulted in textual error with previous technologies such as printing or copying by hand. Even in those eras the iterative function of text existed. Texts, for Shillingsburg, can be “reincarnated” — something in which the scholarly editor plays an important role. A copy of a text, errors notwithstanding, does not create a new text, but a copy of a manuscript creates a new document, “there being two material objects each occupying a different space, though each purports to bear the same text” (2006, 13, 14). Books, however, as pure material objects have no such iterability, and therefore they cannot be edited. Hence, it remains a challenge to define what role they have in textual editing.
I do not want to push the argument about the im/materiality of texts any further, except to say that scholars at times use “material text” when they clearly mean “book”. McKenzie, for instance, uses “text” as a shorthand for any kind of carrier of verbal and non-verbal communication (1999, 13). In common English usage, the words “text” and “work” are of course practically interchangeable, but without a doubt the pervasive use of “text” in literary scholarship, which has increased dramatically since the 1980s, has left its mark too. Understanding the granularity of books in terms of meaning and functionality is no doubt where book history and analytical and descriptive bibliography depart from each other.
Chaudhuri, Sukanta. 2010. The Metaphysics of Text. Cambridge: Cambridge University Press.
Kirschenbaum, Matthew G. 2002. “Editing the Interface: Textual Studies and First Generation Electronic Objects”. Text 14, 15–51.
McKenzie, D.F. 1999. Bibliography and the Sociology of Texts. 2nd ed. Cambridge: Cambridge University Press.
Shillingsburg, Peter L. 1997. Resisting Texts: Authority and Submission in Constructions of Meaning. Ann Arbor, Mich.: University of Michigan Press.
——. 2006. From Gutenberg to Google: Electronic Representations of Literary Texts. Cambridge: Cambridge University Press.
Despite the sociological turn, the question of how one can edit socially remains problematic. The traditional function of editing — to come close to what the author originally wrote — seems largely incompatible with the types of evidence that McGann and McKenzie considered. The impact of their work has nonetheless signified a broadening of the perspective. As textual scholarship has become more interdisciplinary, it seems to have moved beyond the core business of correcting texts according to rigorous scholarly principles. The history of the Society for Textual Scholarship in the United States reflects this change. Since its inception in 1979, the Society has brought the work of a new generation of editors and scholars to the fore. Zooming in on the theoretical and interdisciplinary aspects of scholarly editing, the Society aimed to be different from the older bibliographical societies. The new theoretical perspective also meant that the Society moved closer to what was happening in English departments; there was less interest in analytical and descriptive bibliography and more interest in questioning what “text” was and in demonstrating how textual criticism was in fact a form of literary criticism. At the same time, the notion of final authorial intention was slowly being displaced.
One could say that the outcome of this evolution was that textual scholarship became something of a broad church, with editing remaining a central concern but the study of textual phenomena within a wide array of contexts its ultimate aim. Dirk Van Hulle has defended “textual awareness” as being as important as the editing of texts. From his primary interest in the “textual process” he argues for the far-reaching instability of the textual product, underscoring the task of studying the “palpable evidence of the textual past’s ‘permeable presence’” (Van Hulle 2004, 11), a task relevant not only to geneticists but to all textual scholars.
An outcome of these changed attitudes, Van Hulle’s stance is simultaneously anti-positivist and empirical. It expresses the need to engage with the textual history, lest the act of interpretation remain a naive or merely creative enterprise, while not assuming that the totality of that textual history is simply and transparently available to the scholar. His position also has implications for the editorial project when he reminds us of a 1924 article by Reinhold Backmann which insisted that the critical apparatus should reconstruct the textual history by representing all variants. The apparatus, in other words, occupies a central place over against “the privileged status of the edited text” (Van Hulle 2004, 15).
These impulses have given rise to another development: the advent of the “material text” and the emergence of associated terms such as “material philology” and “textual culture”. This development may well be one that occurred in the wider field of literary studies, as it reacted against the abstract notions of text used in most forms of literary criticism and theory. (The notion that text is material itself relies on the notion that language is material, something that can be seen and heard. The idea has its origins in Saussurian structuralist linguistics, whose binomial concept of language divides the sign into signifier [form] and signified [meaning]. As Daniel Chandler remarks, however, the signifier for de Saussure was a sound-image rather than a form; the signifier was not a physical but a psychological entity. Later theorists reclaimed its materiality [Chandler 2007, 16, 51-52].) Texts, rather, are not disembodied entities that consist of meaning alone. They exist in specific socio-economic contexts and possess physical attributes that are the result of various agencies that include the author, but also the publisher, typesetter, editor, marketing director, censor and so on.
Thus emerged a variety of approaches and research agendas. They were almost always interdisciplinary, some studying “a huge range of textual phenomena and traversing disciplinary boundaries” (“Welcome to the Cambridge Centre for Material Texts”) that stay well within the reach of archival sources, others endeavouring to frame the materiality of the text in new theoretical understandings at the interstices of history, culture and literature while leaving behind some of the perceived normative functions of the more traditional scholarly approaches (Material Texts Network). Even the Society for Textual Scholarship followed suit when in 2004-5 it included all inquiries into the nature of “textuality” in its remit and changed the name of its journal from Text to Textual Cultures (Storey 2006, 3-4). The parallel with book history is evident, especially at a time when “book historians are increasingly framing their work in terms of ‘mediation’, shifting the emphasis from recovering exact meanings in text to understanding the place of texts within contemporary society” (Finkelstein and McCleery 2005, 27).
Does this evolution imply that textual editing is in fact on the wane? Definitely not. But as a practice textual editing has been undergoing changes. The idea of a “definitive” edition now seems long behind us (at least in certain quarters). For a while it was replaced with an enthusiasm for the everyone-his-own-editor movement inspired by the power of hypertext. But that wave too has now happily passed. The scholarly digital edition, far from having made editing obsolete, is moving us forward in exciting new directions, while respectfully keeping an eye on the traditions it has inherited from printed scholarly editions. Indeed, digital editions require scholars to think again about what is at stake in textual editing (Galey 2010, 100-101). Not only do new technological possibilities that lead to new tools and new ways of understanding the text play a part in this, but so does book history.
Rethinking the relationship of form to content in the digital humanities — a relationship that was almost completely bypassed in theories about hypertext — Galey considers the interface as having a “granularity” that places itself between “material form” and “idealized content” (Galey 2010, 93-94; see also Kirschenbaum 2002, 20-27). Galey’s remarks about the design of digital tools apply equally, however, to printed books, whose granularity is what separates them from “plain” text. Just as with the digital medium, the “interface” of the book — its design, layout and typography — is aimed at enabling communication between writer and reader in an effective and aesthetically pleasing manner. For the most part, scholarly editing has ignored this granularity. The textual idealism that underpins the Anglo-American tradition in particular, in which the scholarly edition is meant to represent (or approximate) the ideal form of the work, pushes the very idea of the book to one side, even if most editors now agree that conceptually a critical edition is a form of interpretation mediating between the author’s intentions and the documentary evidence. The scholarly edition eradicates the original bibliographical codes and updates them for a contemporary readership.
On the one hand, I must acknowledge a paradox in that critical editing is an intervention in the original text, which in its original state was not considered accurate. To recreate the text as the author intended it seems impossible without totally altering the material nature of the text. On the other hand, if we want scholarly editions to be grounded in the history of textual transmission, we cannot avoid the question of the historicity of the text. The theories of McGann and McKenzie have as yet not resulted in new “social” editions. In order to imagine such editions we need perhaps to temper some of the old idealism about text and think more strictly — as German textual editors do — along historical-critical lines. Editing in the time of the history of the book requires us to acknowledge at the very least that texts do not exist on their own. What is text without the thing that supports it?
Chandler, Daniel. 2007. Semiotics: The Basics. 2nd ed. Oxford, New York: Routledge.
Finkelstein, David and Alistair McCleery. 2005. An Introduction to Book History. New York and London: Routledge.
Galey, Alan. 2010. “The Human Presence in Digital Artefacts”. In Willard McCarty (ed.) Text and Genre in Reconstruction: Effects of Digitalization on Ideas, Behaviours, Products and Institutions. Cambridge: Open Book Publishers, 93–117.
Kirschenbaum, Matthew G. 2002. “Editing the Interface: Textual Studies and First Generation Electronic Objects”. Text 14, 15–51.
Storey, H. Wayne. 2006. “Dirty Manuscripts and Textual Cultures: Introduction to Textual Cultures 1.1”. Textual Cultures 1(1), 1–4.
Van Hulle, Dirk. 2004. Textual Awareness: A Genetic Study of Late Manuscripts by Joyce, Proust, and Mann. Ann Arbor, Mich.: University of Michigan Press.
“Welcome to the Cambridge Centre for Material Texts”. In Centre for Material Texts: A New Forum for the Study of the Word in the World. Cambridge: Cambridge University, 2009. http://www.english.cam.ac.uk/cmt/. Accessed 12 March 2012.
A fundamental issue that is frequently discussed in textual scholarship is the relationship between “text” and “work”. Since the emergence of the History of the Book, a third term must be taken into consideration too: the “book”. As a field of inquiry that is by its own admission incredibly diverse, the History of the Book encompasses multi-disciplinary and multi-cultural approaches to the study of the book and of the production and dissemination of all written and recorded knowledge. According to David Finkelstein and Alistair McCleery, book history aims “to study all aspects of the creation of books”, whether as “physical artefacts” or objects with “unique cultural symbols” (2005, 5), an undertaking that has its firm roots in the traditional bibliographical disciplines such as descriptive and analytical bibliography and textual criticism. To these forms of study, book history added a new layer of social and socio-economic history, a development that began with the paradigm-shifting work of D.F. McKenzie in his ground-breaking essay “The Printers of the Mind” (1969) and culminated in his Panizzi lectures collected in Bibliography and the Sociology of Texts (1999). In that early essay McKenzie challenged traditional bibliographical analysis and significantly added to our understanding of printing-house practices in seventeenth- and eighteenth-century England by using quantifiable evidence from printing catalogues, account ledgers and business correspondence. His emphasis on the conditions of book production, moreover, indicated a breaking away from the study of the individual book or text to books plural.
With that, not just the production of literature but the production of all books and their dissemination fall within the purview of book history. As a commodity, books become the subject of investigations into the economic support structures that existed to bring this commodity to the reader – from book designs to marketing tools, from pricing mechanisms to the means of transportation. But books are also cultural products that help with the exchange of knowledge and ideas across time and space, and thus the production and reception of books play an essential role in shaping (as well as being shaped by) historical mentalités (Willison 2006, 2-3).
As I have written in my introduction to Textual Scholarship and the Material Book, textual scholarship and book history share the book as an object of study, but look at it from different directions (Van Mierlo 2007, 4-5). In that piece I made an early attempt at gauging how the two fields relate to each other. In this essay, I want to continue that inquiry. The question as to where textual scholarship – and particularly textual editing – fits into the history of the book is one I reserve for a later date. That relationship is in any case the more difficult one, given the predominance of the literary in textual scholarship. For now I want to reflect further on the presence of book history in textual scholarship. I want to look at the directions textual editing has taken since the emergence of book history and the sociological turn in Anglo-American textual scholarship introduced by D. F. McKenzie and Jerome McGann, and at what potential there is for further cross-fertilization. To do so is to revisit some of the underlying assumptions at work in textual editing and to ask again what editions purport to do and who they are for. The argument that underlies these questions centres upon the higher granularity that the book offers. This granularity, I contend, is an aspect of bibliographical investigation that editors have not yet quite reckoned with in their editorial work.
Finkelstein, David and Alistair McCleery. 2005. An Introduction to Book History. New York and London: Routledge.
McKenzie D. F. 1999. Bibliography and the Sociology of Texts. 2nd ed. Cambridge: Cambridge University Press.
——. 2002. Making Meaning: “Printers of the Mind” and Other Essays. Eds. Peter D. McDonald and Michael Suarez. Amherst, Boston: University of Massachusetts Press.
Van Mierlo, Wim. 2007. “Introduction”. Textual Scholarship and the Material Book. A special issue of Variants: The Journal of the European Society for Textual Scholarship 6, 1-12.
Recently someone whom I hadn’t seen in some time asked me whether I was still working on genetic criticism. I said that I was — in fact, I was about to give a paper on the typescripts of The Waste Land at the conference we were both attending. Yet genetic criticism is a term I generally do not use to describe my work on modern literary manuscripts. The name of this blog, for one, indicates that I see myself as doing something different. Generally I don’t object to genetic criticism — neither as a designation for the study of a writer’s creative process, nor as a field of investigation — so my choice to call what I do by another name was not motivated by any polemic. Nonetheless, the result is that I’m possibly the only one using the term modern manuscript studies.
So why? I felt the need to put the study of modern literary manuscripts on a slightly different footing from that of my compères at the Institut des textes et manuscrits in Paris, where the traditions and methodologies of critique génétique were founded. Insofar as the early practitioners at ITEM built on, but also set themselves apart from, notions of “écriture” as propagated by poststructuralism, they turned to the study of writing in its material manifestations as it can be observed in authors’ manuscripts. The methods they developed focussed on the procedural and temporal aspects of writing, and they continued in the first instance to be interested in the text of the manuscripts.
A manuscript contains of course much more than text or writing. It also consists of paper and ink; it has a specific size, weight (or quality) and colour; the writing that appears on it has a particular form, very often determined by the physical dimensions and qualities of the paper as well as the circumstances in which the writing took place. Furthermore, the manuscript has a very particular function, which might be different from that of other manuscripts — think, for example, of the cheap school copybooks which many writers of the twentieth century appear to have used for rough drafting: this type of object almost expects a mode of writing that is quite distinct from, say, the preparation of fair copy on regular-sized A4 paper. Apart from a function, manuscripts also have a purpose — creating a text is different from preparing a text for publication — each of which comes at a different moment in what one can call the “biography of the work”.
In other words, each manuscript consists of different entities or components, of which the text is only one, and each of these components needs to be understood in its own right as well as in relation to all the others. Moreover, the manuscript and all its components also relate to a broader context: the habits and idiosyncrasies of the writer, the writer’s life and work, the writer’s time and culture, and so on. As objects, manuscripts are embedded in their time and place: they answer to a broader set of habits and customs that, just as in the mediaeval period, fit in with the habits and traditions of the culture, and this despite the idiosyncrasies of every writer.
The ITEM scholars were never blind to the practical and philological exigencies of studying manuscripts. Deciphering manuscripts and preparing the avant-texte required skills and techniques to analyze the physical attributes of the manuscripts. But all too often this kind of work was, on the one hand, devalued as preparatory; on the other hand, that preparatory work was only half formalized in proper methodological principles and in any case did not go far enough regarding some of the physical detail of the manuscripts. Palaeography and codicology are well-established fields in the study of mediaeval manuscripts, but similar kinds of investigation are virtually non-existent for the modern period.
Some change, to be fair, is now taking place in this respect, and some work is being done, for instance, on codicology. Even so, one of the drawbacks of the genetic focus is that the purview has always been on manuscripts that reveal the traces of creation. Any manuscript, in other words, that does not shed light on this process is declared of no interest.
Hence, my perspective is a broader one. I do not believe that the study of manuscripts is relevant only to understanding the creative process. Manuscripts can be of interest in their own right — and their entire history (including their afterlife as they leave the author’s hands and move into the hands of the publishers and possibly later the hands of the collectors) deserves to be studied.
One final polemic point, though — I resist using the term genetic criticism because there are those who see it as a form of literary criticism. It is true that the study of composition can shed light on the meaning and understanding of the final work, but to see the study of manuscripts as simply another heuristic method seems greatly reductive and runs counter to the larger historical enterprise that is the history of the book.