Lecture: “The CMV+P document model”

Digitized and born-digital documents are being changed all the time. How can we discuss changes like “rephrasing a sentence” when all the computer sees is a different string of bits? In the talk “The CMV+P document model” Dr. Gioele Barabucci (CCeH) will introduce a novel document model that allows humans and computers to compare different documents and different versions of the same document at multiple interpretative levels.

When and where

The talk will be held on the 24th of October, 11:00 in the Seminar room of the CCeH (Universitätsstraße 22).

Abstract

The CMV+P model is a layered document model that can describe any electronic document. Each document is seen as a stack of layered abstraction levels, each characterized by three components: Content, Model and Variant. This layered structure allows for the co-existence of many separate and concurrent document formats. Such a structure is needed to refer with precision to parts of a changing document, as well as to identify at which of these layers a modification has been made (did I modify some bits, or did I split a paragraph of text?).
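To make the layering concrete, here is a minimal sketch of how such a stack of levels could be represented in code; the class and field names are illustrative only and are not part of the CMV+P specification:

```python
# Illustrative sketch only: the names below are invented, not the official
# CMV+P vocabulary.
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Level:
    content: Any   # the data at this abstraction level (bytes, characters, XML tree, ...)
    model: str     # how the content is to be interpreted (e.g. "byte stream", "XML")
    variant: str   # a specific variant of that model (e.g. "UTF-8", "XML 1.0")

@dataclass
class Document:
    levels: List[Level]  # from the lowest level (bits/bytes) to the most abstract

# The same file, described as a stack of interpretative levels:
doc = Document(levels=[
    Level(content=b"<p>Hello</p>", model="byte stream", variant="raw"),
    Level(content="<p>Hello</p>", model="character stream", variant="UTF-8"),
    Level(content={"p": "Hello"}, model="XML", variant="XML 1.0"),  # toy stand-in for a tree
])

# A change can now be located at a specific level: flipping a bit is a
# modification of doc.levels[0]; splitting a paragraph happens at doc.levels[2].
```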

“A new digital edition of St Patrick’s writings: canonical text vs manuscript transcription”, Presentation by Dr. Roman Bleier (MSC/DiXiT)

25th August 2016, 16:00h
CCeH, Seminar Room (Universitätsstraße 22)

Everybody welcome!

Abstract:

Arguably the most important sources for fifth-century Irish history are two contemporary epistles written by St Patrick. These epistles survive in seven medieval manuscript witnesses held in repositories in Ireland, England and continental Europe. During my PhD at Trinity College Dublin I created TEI transcriptions of the manuscripts following a documentary editing approach.
Currently, I am working as a DiXiT Experienced Researcher at the Centre for Information Modelling - Austrian Centre for Digital Humanities at the University of Graz on the topic “Canonical reference & sustainability of digital editions”. As part of my DiXiT research, I am developing a new digital edition of St Patrick’s epistles based on the above-mentioned TEI transcriptions. This new edition will serve as a case study for my research on canonical referencing and, at the same time, it will provide a new digital resource for the study of the versions of St Patrick’s texts from a documentary perspective.
This work-in-progress paper will briefly introduce the primary sources and the rationale for the development of the TEI transcriptions, and discuss my current thoughts about possible directions for the new edition of St Patrick’s epistles.

Short biographical note:

Roman Bleier studied history and religious studies at the University of Graz. He completed a PhD in Digital Arts and Humanities (DAH) at Trinity College Dublin and now works as a DiXiT Marie Curie fellow at the Centre for Information Modelling - Austrian Centre for Digital Humanities, University of Graz.

“Letters 1916: Building and understanding a large corpus of correspondence”, Presentation by Richard Hadden (DiXiT Fellow)

When and where

The talk will be held on July 5th 2016, 16:00 in the Seminar room of the CCeH (Universitätsstraße 22).

Abstract:

This presentation will look firstly at the work behind building the Letters of 1916 corpus, a collection of correspondence from 1915 and 1916, covering the period leading up to, and the aftermath of, the Easter Rising. As Ireland’s first “public digital humanities project”, the corpus is strongly reliant on crowd-sourced methodologies for sourcing, digitising and transcribing letters. This first section will discuss the tools and workflow employed, including automated processes for converting crowd-transcribed and marked-up text into full TEI documents.
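As a rough illustration of that last step (not the actual Letters 1916 pipeline; the input convention, tag handling and helper name are assumptions made for the example), a transcription with blank-line paragraph breaks can be wrapped into a minimal TEI-like document in a few lines of Python:

```python
# Hypothetical sketch: wrap a crowd-transcribed letter, with paragraphs
# separated by blank lines, into a minimal TEI-style document.
# A real TEI header needs further elements (publicationStmt, sourceDesc, ...).
import xml.etree.ElementTree as ET

def to_tei(transcription: str, title: str) -> str:
    tei = ET.Element("TEI", xmlns="http://www.tei-c.org/ns/1.0")
    header = ET.SubElement(tei, "teiHeader")
    file_desc = ET.SubElement(header, "fileDesc")
    title_stmt = ET.SubElement(file_desc, "titleStmt")
    ET.SubElement(title_stmt, "title").text = title

    text = ET.SubElement(tei, "text")
    body = ET.SubElement(text, "body")
    for para in transcription.split("\n\n"):   # blank line = new paragraph
        if para.strip():
            ET.SubElement(body, "p").text = para.strip()
    return ET.tostring(tei, encoding="unicode")

print(to_tei("Dear Mary,\n\nAll is well here.", "Letter, April 1916"))
```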

The next section will showcase a number of letters from the corpus, highlighting above all how the political situation in Ireland affected correspondence itself: the Rising is, after all, a revolution that began in a post office.

The final section looks more deeply at my own research into the corpus, asking what questions can be legitimately asked of such a large and disparate corpus over and above its obvious utility as a collection of individual texts. It will explore the many problems to be considered in the application of digital humanities techniques such as topic modelling to a corpus of this nature.
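As a hedged sketch of what such an application could look like in practice (toy documents and scikit-learn are chosen purely for illustration; the project's actual data and tooling are not shown here):

```python
# Illustrative topic-modelling sketch on a handful of toy "letters";
# the real corpus, cleaning and parameter choices would differ substantially.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

letters = [
    "food prices and rationing in dublin this spring",
    "the post office is closed and no letters get through",
    "conscription and the war in france weigh on everyone",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(letters)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the most heavily weighted words per topic.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```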

“The web as a platform”, Guest lecture by Andrea Marchesini (Mozilla Foundation)

When and where

The talk will be held on May 11th 2016, 17:00-18:30 in the Seminar room of the CCeH (Universitätsstraße 22).

Abstract

The web is evolving rapidly and continuously. There is often a huge gap between what browsers offer and what web developers know, in areas such as multi-threaded programming, network interception, offline management, 3D APIs, inter-context communication, and hardware and networking APIs. I will focus on two main issues: firstly, why we should regard any web page as being on the same level as an application; secondly, how browsers can offer functionality that reaches the complexity and performance of native apps, for instance JIT compilation, WebGL and multi-threaded computation.
I would like to have an open, wide-ranging discussion with the DH community about HTML5 and web technologies in general, as well as a deeper conversation about new standards. It would be interesting to hear more about your projects and your interests in order to create new networks and exchange ideas.

Andrea Marchesini currently works on the Mozilla platform, hacking on DOM, WebAPIs, Workers, and privacy and security components.

Lecture: “The Web stack in 2016: From the original static web sites to the current bleeding-edge web technologies”

How has the Web changed since its inception? In which direction(s) is it evolving at the moment? In the talk “The Web stack in 2016: From the original static web sites to the current bleeding-edge web technologies” Dr. Gioele Barabucci (CCeH) will discuss how the Web has progressed since its early days at CERN.

When and where

The talk will be held on May 10th 2016, 14:00-16:00 in the Seminar room of the CCeH (Universitätsstraße 22).

Abstract

The talk will touch on many technical details but also on more general aspects of the development of the Web. It will show how the Web worked originally (a simple server sending static files and a simple browser displaying static HTML pages) and how it works now (with layers of middleware, with much of the computation moved into the clients via JS or NaCl, with clients that are small devices with 10 cm screens, and with pages generated on the fly via PHP or XQuery). How did we get here? How will things evolve from this point? Does the development of the Web conflict with the desire to preserve all the data and knowledge that flows through it?
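For comparison, the “original” model of a simple server sending static files fits into a few lines of Python's standard library; this is a sketch for illustration, not an example from the talk:

```python
# The "original web" in miniature: a server that only hands out static files
# from the current directory, with no server-side computation at all.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    # Any static index.html placed next to this script is served as-is.
    HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()
```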

Workshop: project data management with git

For every humanist who has always wanted to know what this “git” thing is that everybody seems to be talking about these days:

On April 26th at 14:00 we will have a workshop on the management of humanities research data with Git and GitHub. Everybody interested is very welcome to attend.

Topics:

– Git basics: version management
– Commits / merges: best practices
– Collaborative editing of (XML) research data using GitHub
– Using GitHub for project management: readme, issues, wiki

DH colloquium “Aktuelle Forschungsthemen” (Current Research Topics), summer semester 2016

In the summer semester of 2016 the CCeH, together with the Sprachliche Informationsverarbeitung (Linguistic Information Processing), continues the colloquium on current research topics in the digital humanities. The colloquium gives an overview of current questions in the field of the digital humanities and discusses the wide variety of theoretical and practical approaches, using ongoing research projects as examples. Within the broad spectrum of the digital humanities it includes, alongside topics from information processing and media informatics, projects from other text- and object-oriented disciplines. In each session one topic or project is presented and discussed with the participants.

Blog

The course is accompanied by the blog “Digital Humanities Cologne”: dhc.hypotheses.org. Summaries of the individual sessions are published there. On the one hand, this provides continuous documentation of the course; on the other hand, it gives the students a platform on which they can practise turning course content into blog posts.

Organisation

Franz Fischer, Jürgen Hermes, Claes Neuefeind, Patrick Sahle

Time and place

Summer semester 2016; Thursdays, 16:00-17:30; room S 14 (seminar building)

Eligibility and participation

The colloquium can be attended as part of the Informationsverarbeitung and Medieninformatik degree programmes, but it is also open to all students of the Faculty of Arts and Humanities as part of the Studium Generale. (Please direct any questions to Jürgen Hermes.) The sessions are open to everyone interested!

Programme

Date / Topic
14.04. Patrick Sahle (CCeH/UzK): Digital Humanities: Wer wie was – wieso weshalb warum
21.04. Moritz Hoffmann: Digital Past
28.04. Elena Parina (Philipps-Universität Marburg): Middle Welsh religious texts from the Book of the Anchorite of Llanddewibrefi – in search of a digital representation for fluid translations of fluid texts (cf. online.uni-marburg.de/welshtranslations)
05.05. Public holiday
12.05. Tessa Gengnagel (CCeH/UzK): Superstrukturen
19.05. Whitsun break
26.05. Public holiday
02.06. Franz Fischer (CCeH/UzK): Digitale Editionen – Das Neuste vom Neusten
09.06. Frederik Elwert (Ruhr-Universität Bochum): Netzwerkanalyse als Methode der Digital Humanities. Beispiele aus der Forschungspraxis
16.06. Lisa Dieckmann (prometheus/UzK) with Jürgen Hermes & Claes Neuefeind (Spinfo/UzK): Bild, Beschreibung, (Meta)Text
23.06. Frank Fischer (Higher School of Economics, Moscow): Distant Reading Showcase – Datennarrative in den Geisteswissenschaften
30.06. Claes Neuefeind (Spinfo/UzK): Digitale Lexikographie
07.07. Øyvind Eide (HKI/UzK): Digital Humanities – Ein Ausblick
14.07. Summary

Further information

For the session on 28 April

Elena Parina – Middle Welsh religious texts from the Book of the Anchorite of Llanddewibrefi – in search of a digital representation for fluid translations of fluid texts

Abstract: The project “Übersetzungen als Sprachkontaktphänomene – Untersuchungen zu lexikalischen, grammatischen und stilistischen Interferenzen in mittelkymrischen religiösen Texten” (funded by the Fritz Thyssen Foundation) focuses mainly on analysing the translators’ work in terms of a traditional linguistic analysis of the texts’ lexis, syntax and style. The main data are 11 texts from a Welsh manuscript dated 1346, potentially with readings from other manuscripts, aligned with the Latin texts (the versions closest in form to the Welsh). Welsh versions of one and the same translation show a high degree of textual fluidity, and the Latin sources are highly fluid too; even for a very well studied text, the Visio Sancti Pauli, it is difficult or rather impossible to find a direct source, since different manuscripts resemble the Welsh text in different features. An online “parallel edition” of these texts in the full complexity of their transmission could promote a better understanding of the translators’ and scribes’ work. In the paper I will present the available data, hoping to get advice on the creation of such a resource.

For the session on 9 June

Frederik Elwert (Ruhr-Universität Bochum): Netzwerkanalyse als Methode der Digital Humanities. Beispiele aus der Forschungspraxis
Literature:

  • Moretti, Franco. “Network Theory, Plot Analysis”. New Left Review II/68 (2011): 80–102. Print.
  • Lietz, Haiko. “Mit neuen Methoden zu neuen Aussagen: Semantische Netzwerkanalyse am Beispiel der Europäischen Verfassung”. 2007. Web. 29 Aug. 2011. http://www.haikolietz.de/docs/verfassung.pdf.

For the session on 23 June

Frank Fischer (Higher School of Economics, Moscow): Distant Reading Showcase – Datennarrative in den Geisteswissenschaften

– Network analysis of literary texts
– New Distant Reading
– Data visualisation with Edward Tufte
– Designing data-driven posters

Slides:
Distant Reading Showcase – Datennarrative in den Geisteswissenschaften

DHd poster:
“Distant-Reading Showcase”

Blogs:
weltliteratur.net – A Black Market for the Digital Humanities
Der Umblätterer – in der Halbwelt des Feuilletons

“Netzwerktheorie und Geschichtsschreibung”, Guest lecture by Dr. Matteo Valleriani, 3 May 2016

“Netzwerktheorie und Geschichtsschreibung: Die Analyse der Kommentartradition der Sphaera von Sacrobosco in der Frühneuzeit”

Guest lecture by Dr. Matteo Valleriani (Max Planck Institute for the History of Science, Berlin)

Date: 3 May 2016
Time: 15:00-17:00
Location: Seminar room of the CCeH (Universitätsstraße 22, top floor)

Abstract

In the early modern period, around 300 commentaries on the Tractatus de sphaera by Johannes de Sacrobosco were edited and published. While the original text was printed unchanged, additional scientific topics were added to the printed editions, either as direct commentary on the original text or as supplementary treatises. Because of the wide diffusion of these treatises and their use in most European universities until the second half of the 17th century, the history of this commentary tradition mirrors the history of the emergence and establishment of a shared European scientific identity.
The commentaries changed insofar as new aspects of knowledge and innovations were added in individual treatises and then spread further in subsequent editions.
The reconstruction of this historical process is carried out by applying network theory and social network analysis.
Using this case study, the talk explains the methodological toolkit and discusses the heuristic potential of applying network theory to historiography.
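As a hedged sketch of the kind of network-based reconstruction described above (the edition labels and shared-treatise data are invented for illustration and do not come from the project), one could model editions as nodes linked by the additional treatises they share:

```python
# Hypothetical sketch: editions of the "Sphaera" linked by shared added treatises.
# The edition labels and treatise sets below are invented for illustration.
import networkx as nx

editions = {
    "Venice 1488": {"sphaera", "comm_a"},
    "Paris 1511": {"sphaera", "comm_a", "tract_b"},
    "Wittenberg 1538": {"sphaera", "tract_b", "tract_c"},
    "Antwerp 1561": {"sphaera", "comm_a", "tract_c"},
}

G = nx.Graph()
G.add_nodes_from(editions)

# Connect two editions when they share material beyond the base text,
# weighting the edge by the number of shared additions.
for e1 in editions:
    for e2 in editions:
        if e1 < e2:
            shared = (editions[e1] & editions[e2]) - {"sphaera"}
            if shared:
                G.add_edge(e1, e2, weight=len(shared), shared=sorted(shared))

# Simple centrality measures hint at which editions acted as "hubs"
# in the transmission of added material.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G, weight="weight"))
```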

“About Data Science”, Guest lecture by Prof. Dr. Christoph Schommer, 28 January 2016

The Sprachliche Informationsverarbeitung and the Historisch-Kulturwissenschaftliche Informationsverarbeitung at the University of Cologne invite you to a guest lecture by Prof. Dr. Christoph Schommer of the University of Luxembourg:

“About Data Science”
Thursday, 28 January 2016, 10:00
HS 80

Abstract:

The Big Data hype has engulfed the digital world as well as our society. Technical innovations, scientific achievements, and an insistent data-centric thinking have put data front and centre more than ever before. This has consequences, for example a deeper understanding of one’s own data, but also the necessity to clearly differentiate between correlation and causality and between what should be public and what should be private. Today, dealing with data has become a fundamental concern. In this regard, the lecture is split into two parts: first, I would like to motivate the field of Data Science. Second, I will present some of the projects that are currently under way in my research group.

Christoph Schommer has been Professor of Computer Science and Communication at the University of Luxembourg since 2009. His research focuses on data mining, text mining, machine learning and intelligent databases.

“Combining Media Science and Computer Science”, Guest lecture by Prof. Dr. Frank Leymann / Johanna Barzen, 21 March 2016

Prof. Dr. Frank Leymann & Johanna Barzen M.A.

“Combining Media Science and Computer Science: What can we learn from each other?”

Time/Location:
21 March 2016, 16:00, meeting room of the CCeH, Universitätsstraße 22, top floor, right.

Abstract:
Taking a closer look at the natural sciences and engineering, the use of concepts, methods and technologies from computer science is at an advanced stage there. In comparison, the use of computer science techniques and methods in the humanities is still rather marginal. This is what the Digital Humanities want to change. In this talk we provide a brief overview of the paradigm of eScience and the scientific method. Influenced by this, we outline our method for deriving costume languages in movies based on the concepts of formal languages, ontologies and pattern languages. These concepts are used quite frequently in computer science but have not yet been seriously applied to answering questions from media science. By generalizing the approach for costumes to other domains in the humanities, we want to outline how these ideas can be of advantage for the humanities.