Susan Schreibman’s article, ‘Digital Scholarly Editing’, presents the evolution of text encoding systems as an open, dynamic and crucial element of digitization projects. Early efforts produced digital facsimiles, but as editors explored the medium, new problems, decisions and opportunities arose, creating the need for a standardized, transferable system that did not merely reproduce primary materials but allowed for customized study and analysis, chiefly within thematic research collections.
Thematic research collections such as Blake’s manuscripts and Emily Dickinson’s poetry show how the application of TEI can enhance digital editions for scholarly exploration, and how a tool such as the Versioning Machine can compare existing versions of a text for analysis.
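To make the comparison of versions concrete, here is a minimal sketch of how variant readings are typically recorded in TEI’s critical-apparatus markup (`<app>`/`<rdg>`), the kind of encoding the Versioning Machine displays. The fragment is simplified and invented for illustration: the witness sigla `#v1` and `#v2` are hypothetical, the variant reading is made up, and a real TEI file would carry the TEI namespace and a full header.

```python
import xml.etree.ElementTree as ET

# Simplified TEI-style apparatus: two witnesses record different
# readings of the same word in a Dickinson line. (Sigla and the
# variant are illustrative, not from an actual edition.)
tei = """
<l n="1">I taste a liquor never
  <app>
    <rdg wit="#v1">brewed</rdg>
    <rdg wit="#v2">made</rdg>
  </app>
</l>
"""

root = ET.fromstring(tei)

# Walk the apparatus entries and list each witness's reading,
# which is essentially what a versioning display aligns side by side.
for app in root.iter("app"):
    for rdg in app.findall("rdg"):
        print(rdg.get("wit"), rdg.text)
```

The point of the markup is that the variants live in the encoded text itself, not in a separate apparatus volume, so any tool that understands `<app>`/`<rdg>` can regenerate a parallel view of the versions.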
While McKenzie defines text as “verbal, visual, oral, and numeric data, in the form of maps, prints, and music, of archives of recorded sound, of films, videos, and any computer-stored information”, most of the works mentioned in the article are literary. How does one apply TEI or other encoding frameworks to non-literary works while also resolving the issue of encoding being “subjective, theoretical and interpretative”? At present such frameworks may apply only to the metadata associated with these non-literary texts, which effectively separates the content from the display.

As these objects, born-digital or otherwise, migrate to different platforms or are remediated, refashioned and repurposed, an attached set of searchable data is necessary to ensure provenance, copyright and the like. Editors must stay aware of changes in accepted standards and systems to ensure the durability of texts. Crowdsourcing and mass digitisation projects can take the process of encoding out of the scholarly field and into a wider participatory engagement with the material.