For humanities scholars throughout the ages, a pen, ink and some paper might suffice as the tools with which they practiced their craft, much like a builder with bricks or timber, saw and hammer. Although technological advances have improved the range and capabilities of the available tools (computers and the internet, diggers and cranes) so that bigger and better things can be built, the craft remains the same. Digital Humanities is not just about using a computer to type or annotate, or using the internet to search for an article, though these are of course very useful capabilities. It is about choosing the right tools, and knowing why you are choosing to represent something in a particular way, as well as exploring how it will be perceived and what you are leaving out. It is about experimentation and collaboration using technology for inquiry, visualisation, representation and research.
This module took a practical, hands-on approach to introducing some of these tools.
I set up a WordPress blog on Reclaim Hosting to host my research area in an open, accessible format and for collaborative purposes. While I initially had some reservations about blogging publicly, as it was not something I was familiar with, I have learned to be less intimidated by it and welcome the prospect of open, peer-reviewed scholarship. It is, and will be, an excellent record of the ongoing process of research during the MA. As I become more familiar with managing the blog, I intend to apply some custom CSS to modify its presentation and appearance.
Scalar is a ‘free, open source authoring and publishing platform’ and a ‘semantic web authoring tool’ (https://scalar.usc.edu/scalar/). It is another type of blog, designed for born-digital content. What I liked about it was the ease with which I could add media content from its many archive links. I can envision it being used as a stable, multimedia, collaborative platform, and I will explore its possible use in my research into the experiences of migrants as they begin a new life here (such as ‘Syrian Voices From Ireland’ on Facebook). I have created a book with some pages and media content, but have not yet explored Scalar’s full capabilities.
Omeka is a ‘free, open source content management system for online digital collections’ (Wikipedia). Anyone can use it: researchers, museum curators, students, enthusiasts, archivists, or anyone with a collection of photos or memorabilia they would like to archive and display. I created an exhibit, a collection and some items quite easily, using some of the Dublin Core standard metadata, making them searchable and usable by others. It meets the standard of Fred Gibbs’s rubric of ‘transparency, reusability, data and design’ (Critical Discourse in Digital Humanities).
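To illustrate what such an item record looks like, here is a minimal sketch in Python, assuming invented values: the field names follow the Dublin Core element set, but the item itself and the `matches` helper are hypothetical, not Omeka’s actual API.

```python
# A hypothetical Omeka-style item record using a few Dublin Core fields.
# The field names follow the Dublin Core element set; the values are
# invented for illustration.
item = {
    "dc:title": "Letter from Cork, 1952",
    "dc:creator": "Unknown",
    "dc:date": "1952-03-14",
    "dc:subject": ["migration", "correspondence"],
    "dc:description": "A handwritten letter describing a family's departure.",
}

def matches(record, keyword):
    """Naive full-text search across all metadata values."""
    keyword = keyword.lower()
    for value in record.values():
        values = value if isinstance(value, list) else [value]
        if any(keyword in str(v).lower() for v in values):
            return True
    return False

print(matches(item, "migration"))  # True
```

Recording metadata in standard fields like these is what makes a collection searchable and reusable by others.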
I have rendered some simple text about my research areas in both XML and HTML (visible in module DH6012). I downloaded Notepad++, which I found very useful as it ‘hints’ where there are mistakes or unfinished code, and the W3Schools tutorials were invaluable. HTML can be used to build a website from scratch, giving an author unprecedented control over their output, and I hope to become more familiar with it, thus mitigating my fear of coding and allowing me to get ‘under the hood’ of my own websites. XML is particularly interesting in being both human- and machine-readable, and I have explored this further in another section.

While I did not personally attempt TEI, it is a very useful tool for encoding texts at a granular level. It makes it possible to search texts (often handwritten) for single keywords, annotations, revisions, versions and marginalia, often bringing the history and provenance of a historical text alive in a way that scanned PDFs cannot. The fact that there are many open projects in which people can engage with TEI themselves (and are doing so in vast numbers) makes a once-daunting task seem possible, and desirable.

Another language I am interested in learning is SPARQL, a query language for RDF databases. In my research, I would like to use it to explore census data from the Central Statistics Office.
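A small sketch of the kind of granular encoding TEI enables, parsed with Python’s standard library: the fragment is invented and simplified (real TEI documents use namespaces and a fuller document structure), but the `add` and `note` elements follow TEI conventions for editorial additions and marginalia.

```python
import xml.etree.ElementTree as ET

# An invented TEI-style fragment: a sentence with an editorial addition
# and a marginal note -- the kinds of granular features TEI can capture.
tei = """<p>
  The ship departed <add place="above">from Cobh</add> in April,
  <note type="marginal">date uncertain</note>
  bound for New York.
</p>"""

root = ET.fromstring(tei)

# Pull out every addition and marginal note, making them searchable.
additions = [el.text for el in root.iter("add")]
notes = [el.text for el in root.iter("note")]

print(additions)  # ['from Cobh']
print(notes)      # ['date uncertain']
```

Because the annotations are explicit elements rather than flat text, a researcher can query for exactly the revisions or marginalia they care about, which a scanned PDF cannot offer.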
Voyant, while not a traditional statistical tool per se, is a useful textual-analysis tool that lets you represent a text visually as a word cloud and explore the frequency of words within it, exposing the key ideas and themes of the corpus. It is easy to use and visually pleasing. I can see it being used to analyse word frequency in a text, or in multiple texts from different time periods or geographical locations, to detect possible shifts in terminology reflecting the ethos of a particular time or place. One might also use it to compare and contrast the key ideas in the contemporary websites, forums and online discussions of two different political parties or affiliates. I have entered the text of this analysis as an example. The tool also allows you to see trends and relative frequencies.
Neatline and Carto are two mapping tools that I have not engaged with yet but intend to explore. Neatline is available as a plugin for Omeka and allows for an interaction between maps and timelines to enrich the narrative. I could see using it to illustrate the routes of individuals who have migrated here, as part of my research project, possibly interspersed with photographs or diary entries to document and highlight their journeys. Carto can be used to create interactive, thematic and multi-layered online map visualisations from geospatial data, guided by its built-in tutorials. I could also envision using this in my research, perhaps to highlight the discrepancies between the need for and the provision of resources to growing migrant communities.
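Tools like Carto commonly ingest geospatial data as GeoJSON. As a sketch of what a migration route might look like in that format, here is a hypothetical example in Python; the place names and coordinates (longitude, latitude, roughly tracing Damascus to Athens to Cork) are illustrative only.

```python
import json

# A hypothetical migration route as a GeoJSON Feature. GeoJSON uses
# [longitude, latitude] coordinate order; these points are illustrative.
route = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [[36.3, 33.5], [23.7, 38.0], [-8.5, 51.9]],
    },
    "properties": {"name": "Route of one family", "year": 2016},
}

geojson = json.dumps(route)
print(json.loads(geojson)["geometry"]["type"])  # LineString
```

The `properties` object is where photographs, dates or diary-entry references could be attached, so that each point or segment on the map carries its own documentation.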
While there may be a steep learning curve associated with mastering some of the above digital tools, they are also exciting in their possibilities and applications, and I am enthusiastic to learn more. Digital tools should be seen as an extension of the Humanities’ modes of inquiry, not as separate from them.