Job Openings in the Research Project SustainLife

In preparation for and execution of the research project “SustainLife – Erhalt lebender, digitaler Systeme für die Geisteswissenschaften” (“Preserving living digital systems for the humanities”), led by Prof. Dr. Brigitte Mathiak, the Data Center for the Humanities (DCH) is seeking a research associate and a student research assistant (WHB), starting on 1 January 2018.

The application deadline for both positions is 15 November 2017.

Further information:

Research associate (wissenschaftliche/r Mitarbeiter/in)

Student research assistant (wissenschaftliche Hilfskraft, WHB)

Project description: “SustainLife – Erhalt lebender, digitaler Systeme für die Geisteswissenschaften”

Manfred Thaller: From History to Applied Science in the Humanities, HSR Supplement 29 (2017)

“In the course of the methodological opening of the until then very traditional historical disciplines in the 1970s, the use of computing in historical research projects appeared as an innovative, interdisciplinary orientation. Initially the focus, driven particularly strongly by the Cologne QUANTUM group, was almost exclusively on quantitative analyses. Soon afterwards, however, the field broadened as researchers tried to employ the whole range of possibilities offered by information technology. Manfred Thaller was a decisive actor in this development. For more than 20 years he worked at the Max Planck Institute for History in Göttingen on gearing IT tools and methods directly to historical research. Eventually Thaller was appointed to the first chair of humanities computing that was explicitly not linguistically oriented: until his retirement in 2016 he researched and taught at the Institute of „Historisch-Kulturwissenschaftliche Informationsverarbeitung“ at the University of Cologne.

This HSR supplement opens with an autobiographical essay in which Manfred Thaller describes the development of this interdisciplinary field from “History & Computing” to the “Digital Humanities”. The conclusion of his recollections is ambivalent: behind a shiny façade of often superficial projects and short-term goals, the enormous epistemic potential of a serious application of computer science to history frequently went unrecognized. The 13 contributions reprinted here, spanning more than 30 years, describe the manifold challenges faced (then and now) in serious interdisciplinary collaboration between computer science and the humanities. Besides general methodological considerations, the contributions focus on the specifics of text and time in historical sources. The goal: to weave the many threads into a generally valid model for the representation of historical information in information technology.”

Autobiographical Notes
  • Manfred Thaller: Between the Chairs. An Interdisciplinary Career. [Abstract]
  • Manfred Thaller: Automation on Parnassus. CLIO – A Databank Oriented System for Historians [1980]. [Abstract]
  • Manfred Thaller: Ungefähre Exaktheit. Theoretische Grundlagen und praktische Möglichkeiten einer Formulierung historischer Quellen als Produkte ‚unscharfer‘ Systeme [1984]. [Abstract]
  • Manfred Thaller: Vorüberlegungen für einen internationalen Workshop über die Schaffung, Verbindung und Nutzung großer interdisziplinärer Quellenbanken in den historischen Wissenschaften [1986]. [Abstract]
  • Manfred Thaller: Entzauberungen: Die Entwicklung einer fachspezifischen historischen Datenverarbeitung in der Bundesrepublik [1990]. [Abstract]
  • Manfred Thaller: The Need for a Theory of Historical Computing [1991]. [Abstract]
  • Manfred Thaller: The Need for Standards: Data Modelling and Exchange [1991]. [Abstract]
  • Manfred Thaller: Von der Mißverständlichkeit des Selbstverständlichen. Beobachtungen zur Diskussion über die Nützlichkeit formaler Verfahren in der Geschichtswissenschaft [1992]. [Abstract]
  • Manfred Thaller: The Archive on Top of Your Desk. An Introduction to Self-Documenting Image Files [1993]. [Abstract]
  • Manfred Thaller: Historical Information Science: Is there such a Thing? New Comments on an old Idea [1993]. [Abstract]
  • Manfred Thaller: Source Oriented Data Processing and Quantification: Distrustful Brothers [1995]. [Abstract]
  • Manfred Thaller: From the Digitized to the Digital Library [2001]. [Abstract]
  • Manfred Thaller: Reproduktion, Erschließung, Edition, Interpretation: Ihre Beziehungen in einer digitalen Welt [2005]. [Abstract]
  • Manfred Thaller: The Cologne Information Model: Representing Information Persistently [2009]. [Abstract]

Presentation: Mohammad Aljayyousi on the project “iNovel”, 15 Nov 2017

Assistant Professor Mohammad Aljayyousi, Philadelphia University Amman, Department of English Language and Literature, is a visiting scholar at the CCeH from October 2017 to March 2018. His stay is funded by the DAAD. During his time at the University of Cologne, Dr. Aljayyousi will contribute to the CCeH’s work in the field of literary studies and broaden his own skills in Digital Humanities. In particular, he will work on his own research project “iNovel”. Dr. Aljayyousi will present and openly discuss his research at a public lecture on …

An Interactive, Innovative and Inter-medial Approach to Literature
Wednesday, November 15th, 15:00
CCeH – Meeting Room (Universitätsstr. 22)

All are welcome …


iCriticism: An Interactive, Innovative and Inter-medial Approach to Literature.

The presentation will introduce a new approach to the study of literature in the digital age, tentatively called iCriticism. Broadly speaking, iCriticism is a response to the fact that reading now takes place in an ecosystem of devices, both print and digital, and it starts from the belief that the computer is a unique invention adaptable to a wide variety of uses. Within literary studies, the computer can be used in a humanistic way that best serves the purposes of the field. Some main principles of the approach, to be elaborated on in the presentation, include the following:

  1. Texts are multi-dimensional and heterogeneous and the relation among their various dimensions, codes of significance, or levels is not heuristic.
  2. The algorithmic, dynamic nature of traditional texts.
  3. Rejection of formal logic and the CRUM (Computational-Representational Understanding of Mind) paradigm as the only option.
  4. Material conditions, including textuality, are created in the space between physical and non-physical (human) factors.
  5. Digitizing texts is a process of translation / rewriting that can result in pedagogical tools.
  6. Computer technology can introduce fun and increase student engagement through attention to experiential aspects and to the multiple roles the student can play: user-player-learner-reader-writer.

XML Pipelines and XProc 3.0: Report of the WG Meeting in Aachen

Last week (14th and 15th of September 2017) a meeting of the XProc 3.0 working group took place in Aachen, organized by Achim Berndzen of xml-project and Gerrit Imsieke of le-tex and hosted by LOGOI.

The meeting was extremely successful: consensus was reached on many topics and important roadblocks were overcome. I will tell you what the WG accomplished in a moment. Before that, allow me to introduce XProc and XML pipelines and explain why they are useful. (If you already know all this, skip directly to the XProc 3 section, that’s OK. :))

XML pipelines? What are you talking about?

Pipeline, by JuraHeep, CC0

Everybody who has worked with XML knows that real-world applications are always born as simple transformations (“I’ll just convert this XML to HTML with XSLT”) but quickly develop into a big tangled web of unreadable code as soon as you have to deal with the inevitable…

  • small mistakes in the input (“wait, why is there a <p> inside a <em>?”),
  • flaws in the receiving applications (“let’s have a separate output for Internet Explorer 6, so that the poor students access this from the library”) or
  • requests from the project collaborators (“could you make a summary version with only one sentence per chapter?”).

Addressing all these needs can be done, but doing it by adding fixes on top of fixes on the original core transformation is a nightmare in terms of maintenance and readability.

Small steps and scripts

A better way to solve all these issues is splitting monolithic transformations into smaller pieces, or steps. (More about our experience at the CCeH in splitting complicated transformations into focused steps in a future article.)

Now that you have all these steps, how do you transform the input into the output in practice?

Are you going to run each step manually, clicking around in your XML editor? I hope not.

A much better way to run this split transformation is to create a shell script that takes the input file, applies the first step (fix the small mistakes), then the second (transform into HTML) and then, if requested, the third (uglify HTML to make it IE6 compatible).
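As a concrete illustration, here is a minimal sketch of such a script (the file names and the xsltproc invocation are assumptions; to keep the sketch self-contained, the runner only prints each command instead of executing it):

```shell
#!/usr/bin/env bash
# Dry-run sketch of a three-step XSLT pipeline as a shell script.
# A real script would execute the commands instead of printing them.
set -eu

input="${1:-input.xml}"   # source document
ie6="${2:-no}"            # pass "yes" to add the IE6 uglification step

apply_step () {
    # $1 = stylesheet, $2 = input file, $3 = output file
    echo "xsltproc --output $3 $1 $2"
}

apply_step fix-mistakes.xsl        "$input"   step1.xml
apply_step convert-doc.xsl         step1.xml  step2.html
if [ "$ie6" = "yes" ]; then
    apply_step make-ie6-compatible.xsl step2.html final.html
fi
```

Even this tidy version hardcodes one particular XSLT processor and serializes the document to disk between every step.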

Such a script would work just fine but it has many problems:

  • You either hardcode how to invoke the XSLT processor or you have to write an abstraction layer that can call other XSLT processors.
  • It requires a working Bash shell environment (not that easy to get on Windows).
  • It does not provide any kind of validation of the intermediate results.
  • It requires a deserialization/serialization cycle for each step.
  • It gets very complex very quickly as soon as additional steps, conditional steps and loops are added.
  • It works only on a single document.

We could address all these problems ourselves by writing a better script. Or we could avoid reinventing the wheel, make use of XProc, and write a declarative XML pipeline.

Enter XML pipelines and XProc

XProc is a language for writing declarative XML pipelines.

An XML pipeline is a series of steps through which XML documents flow, just as in the shell script in the previous example. However, in contrast to a shell script, XProc pipelines are:

  • Declarative: you state what you want and the XProc interpreter chooses the right tools. (A PDF transformation? Let’s use Apache FOP. An XSLT Transformation? Let’s use libxslt. Oh, are we running inside oXygen? Let’s use the internal Saxon-EE engine then.)
  • Portable: pipelines run wherever there is an XProc interpreter: Linux, Windows, macOS, you name it.
  • Specialized for XML: documents are not deserialized and serialized in each step.
  • Multi-document: a pipeline can have more than one input and produce more than one output.
  • Extensible: pipelines grow easily into intricate ones with loops and parallel branches.

An example pipeline looks like the following:

<p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">

    <p:xslt>
        <p:input port="stylesheet">
            <p:document href="fix-mistakes.xsl"/>
        </p:input>
    </p:xslt>

    <p:xslt>
        <p:input port="stylesheet">
            <p:document href="convert-doc.xsl"/>
        </p:input>
    </p:xslt>

    <p:xslt use-when="p:system-property('ie6-compatible') = 'true'">
        <p:input port="stylesheet">
            <p:document href="make-ie6-compatible.xsl"/>
        </p:input>
    </p:xslt>

</p:pipeline>

XProc 3.0

XProc 3.0 is the upcoming version of XProc. The original XProc 1.0 specification was published in 2010 by the W3C, and since then users and implementers have found small problems, inconsistencies, and ergonomic issues that make writing XProc pipelines harder than it should be.

The focus of XProc 3.0 is simplifying the language, making implementations behave more sensibly by default, and making it possible to process non-XML documents (think LaTeX or graphics files).

During last week’s working group meeting in Aachen, plenty of progress was made in this direction, with consensus reached on many key issues. I will summarize the main outcomes; the minutes are available at

Simplified and streamlined language

  • The current, unnecessary distinction between a pipeline and a step will be removed. (It turns out that the present definition of a pipeline is so strict that nobody actually uses it.)
  • The definition of a port and the use of a port will use different names. (This often confused beginners.)
  • Non-XML documents will become first-class citizens in XProc 3.0, treated exactly like XML documents.
  • The well-known try/catch/finally construct will be introduced.
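As a rough sketch, the announced try/catch construct might look like the following in a pipeline (the element names are assumptions based on the familiar construct from other languages; the final XProc 3.0 syntax was still being worked out at the time of the meeting):

```
<!-- Attempt an XSLT transformation; if it fails,
     fall back to passing the document through unchanged. -->
<p:try>
    <p:xslt>
        <p:input port="stylesheet">
            <p:document href="convert-doc.xsl"/>
        </p:input>
    </p:xslt>
    <p:catch>
        <p:identity/>
    </p:catch>
</p:try>
```

A finally branch would host cleanup steps that run whether or not the attempted step succeeds.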

Run-time evaluation and extension

  • A new eval step will be introduced to run dynamically created pipelines.
  • User functions written in XPath, XSLT and XQuery will be usable everywhere XPath can be used.

Diagnostics and debugging

  • Steps will be able to emit diagnostic side-information, forwarded to stderr by default.
  • Implementations will provide a way to dump all the temporary documents produced by the intermediate steps.
  • A separate specification will standardize error reporting (so that XML editors like oXygen will be able to highlight where a problem occurred).

Plenty of interesting stuff, isn’t it? If you are interested in the development of XProc 3.0 or XProc in general, please participate in the discussions held on the XProc mailing list, join the W3C Community Group, and suggest improvements on the XProc development website.

See you at the next XProc meeting!

Cologne Autumn School and Expert Workshop: „Encoding Inscriptions, Papyri, Coins & Seals“

From 9 to 13 October 2017 the University of Cologne is hosting an EpiDoc autumn school in combination with an expert workshop on digital sigillography. During the first three days, the autumn school will introduce participants to EpiDoc, the encoding standard for epigraphic texts and materials. Wednesday afternoon is dedicated to presentations on advanced imaging technologies in the fields of epigraphy, papyrology and sigillography. On Thursday and Friday an expert workshop will focus on digital formats and standards for the description and publication of seals and similar materials.

Time: 9-13 October 2017
Place: Universität zu Köln
Language: English
Deadline for registration: 24 September 2017
Registration contact:
School participants: max. 25


Monday, 9 October 2017

EpiDoc Autumn School, Day 1
(Thomas Institut, Seminar room, Universitätsstraße 22, ground floor; see on map)

14:00 Welcome, Introduction to XML, TEI and EpiDoc (Background; Markup; Semantic tagging; XML Rules)

15:30 break

16:00 EpiDoc Guidelines and Quick reference docs, Download package; Oxygen demonstration and hands-on exercises; History and description fields (Description; Places; Dates)

17:30 ends

Tuesday, 10 October 2017

EpiDoc Autumn School, Day 2
(Thomas Institut, Seminar room, Universitätsstraße 22; see on map)

09:00 Text transcription and Leiden (Lacunae; Abbreviations)

10:30 break

11:00 Further Leiden practice (Structure of Text; Certainty & precision; Apparatus Criticus)

12:30 lunch

14:00 Transforming EpiDoc to HTML; Customizing Example stylesheet transformations (Parameters)

15:30 break

16:00 Entities, indexing and vocabularies (Token tagging; London rules; Authority Lists; EAGLE-Europeana vocabularies; Pleiades; LGPN; – time permitting)

17:30 ends


Wednesday, 11 October 2017

EpiDoc Autumn School, Day 3
(Thomas Institut, Seminar room, Universitätsstraße 22; see on map)

09:00 More Leiden practice, marking up all elements of an edition

10:30 break

11:00 EpiDoc Community and resources (; markup; workshop blog; wiki page)

12:30 lunch

Afternoon presentations on advanced imaging technologies
(location: Wienand Haus – Morphomata, Weyertal 59; see on map)

15:00 Hubert Mara (Heidelberg): Visual Computing for Analysis of Sealings, Script and Fingerprints in 3D (slides)

15:30 Branko van Oppen (Amsterdam): SigNet – A Network of Hellenistic Sealings & Archives (slides)

16:00 break

16:30 Stephan Makowski (Cologne): Seal Digitisation with Reflectance Transformation Imaging (RTI) (slides)

17:00 Tiziana Mancinelli (Cologne): RTI and Ancient Magic Curses

Brauhaus (Restauration Pütz; see on map)

Thursday, 12 October 2017

Seals expert workshop, part I: Encoding Seals
(location: Wienand Haus – Morphomata, Weyertal 59; see on map)

9:00-12:30 Introduction & Overview
– Seal digitization projects: state of affairs
– Adjacent projects and encoding standards (TEI, NUML, CEI)
– Vocabularies and terminology

Alessio Sopracasa (Paris): SigiDoc
Georg Vogeler (Graz): Seals as Objects – Seals as Part of Charters (slides)

14:00-17:30 Towards an encoding standard in digital sigillography:
– Metadata
– Physical description
– Iconography
– Transcription

Public lecture
(Neues Seminargebäude / Seminar room S13 / 1st floor; see on map)

18:00-19:30 Charlotte Roueché (London): Back to Socrates: Publication as Dialogue (slides)

Abstract: “The traditional model of scholarship has been an exchange of ideas built up over time, using print; but in the second half of the twentieth century this became steadily more difficult, as the volume of academic publications increased, and the cost of printing rose. Cologne was the home of new approaches, particularly in epigraphy and papyrology – the Inschriften griechischer Städte aus Kleinasien series has transformed our understanding of the epigraphy of Asia Minor; and ZPE has stimulated new levels of conversation. In the home city of such innovation, I would like to ask what the 21st century might look like.”

Friday, 13 October 2017

Seals expert workshop, part II: Presenting Seals
(location: Wienand Haus – Morphomata, Weyertal 59; see on map)

9:00-12:30 Topics to be discussed:
– Interfaces
– Presentation systems
– Portals

Conclusions, Plans & Perspectives

Teachers & Organizers:
– Gabriel Bodard (London)
– James Cowey (Heidelberg)
– Martina Filosa (Cologne)
– Franz Fischer (Cologne)
– Antonio Rojas Castro (Cologne)
– Patrick Sahle (Cologne)
– Claudia Sode (Cologne)
– Simona Stoyanova (London)

– Institut für Altertumskunde, Abteilung Byzantinistik und Neugriechische Philologie
– Nordrhein-Westfälische Akademie der Wissenschaften und der Künste, Arbeitsstelle für Papyrologie, Epigraphik und Numismatik am Institut für Altertumskunde
– Historisches Institut, Abteilung Alte Geschichte
– Cologne Center for eHumanities (CCeH)

Questioning models: Intersectionality in Digital Humanities, Symposium, 8-10 Nov 2017

Questioning models: Intersectionality in Digital Humanities.
Digital Editing, Literature and Gender Studies

Call for Papers

The Cologne Centre for eHumanities (CCeH) is organising a three-day symposium from 8 to 10 November at the University of Cologne. The event aims to explore intersectional approaches to textual scholarship and to Digital Humanities theories, practices, and tools. A session will be dedicated to Italian and German women writers of the Renaissance. This specific case study is part of a project funded by NetEx (Network and Exchange funding programme, University of Cologne).

We welcome proposals in any area of scholarship that pay specific attention to intersectionality and that employ digital and collaborative approaches to the study or editing of marginalised subjectivities and their digital modelling and representation. We encourage the submission of presentations of projects at an advanced stage that investigate how digital technologies can re/produce, enable or restrict the construction of identities (e.g. in racialised and gendered terms).

Researchers of all levels, including students and professional practitioners, are welcome. We expect a diverse audience of textual scholars, historians, information scientists, social scientists, digital humanists, graduate students and interested members of the public. The communication language of the symposium will be English, but we are accepting proposals and papers in English, Italian and German.

Type of presentations:

– Short paper (20 minutes)
– Lightning talk (10 minutes)
– Posters

To submit a paper, please email an abstract of up to 300 words as an attachment by 31 August 2017.


Confirmed plenary speakers:

Barbara Bordalejo (KU Leuven)
Øyvind Eide (Universität zu Köln)
Vera Faßhauer (Goethe-Universität Frankfurt am Main)
Domitilla Olivieri (Utrecht University)
Elena Pierazzo (Université Grenoble)
Serena Sapegno (University of Rome La Sapienza)

Topics include but are not restricted to:

– Critical race, feminism, gender, queer, and disability studies in Digital Humanities
– Women writers during the Renaissance and women’s writing
– Digitization, editing, and curation of primary texts and the writing process by women and marginalized identities
– Building and analysing corpora of texts produced by or about marginalised identities
– Traditional authorship, subversive subjectivities, and challenging canonical models of scholarship
– The role of social media and new media in constructing racialised and gendered identities
– Collaborative digital research, infrastructures, methods and tools
– Representations of identities, transmedia storytelling and digital media
– Digital archives in relation to black and LGBT histories
– The challenges and implications of developing digital literary archives and online repositories of diaspora communities and marginalised identities
– Context of production: diversity in academia, publishing, library, information science, or programming
– Dissemination, accessibility, sustainability, and the challenges faced by digital projects

Important dates:

Deadline for submissions: 31st August 2017
Notification of acceptance: 15th September 2017
Symposium: Cologne (Germany), 8th-10th November 2017

The Rigveda Will Soon Become Vedaweb

The Rigveda, composed in Old Indic Sanskrit, constitutes one of the central corpora, if not the most important one, of Indo-European and historical-comparative linguistics. The corpus is distinguished both by its great time depth (its oldest layers date from the late second millennium BC) and by its large size.

The Vedaweb project aims to develop a virtual research environment in which the Rigveda is made fully accessible in digital, linguistically pre-annotated form and is made available to the research community for further processing and analysis. A special feature is the linking of every single word element to the lemmata of the digital Sanskrit dictionaries hosted in Cologne (old / new installation), which supports a qualitative leap in linguistic and cultural research.

The complete morphological annotation of the Rigveda is provided by Prof. Dr. Paul Widmer and Dr. Salvatore Scarlata of the University of Zurich.

The project is accompanied by efforts to bring together digital language resources on South Asia at the University of Cologne. Under the name C-SALT (Cologne South Asian Languages and Texts Portal), lexical resources (such as the Critical Pāli Dictionary) as well as texts and corpora are to be assembled, made more accessible, and, where appropriate, linked to Vedaweb.

Several institutions of the Faculty of Arts and Humanities of the University of Cologne are involved in the project: General Linguistics and Historical-Comparative Linguistics are responsible for data preparation. The CCeH is in charge of the TEI modelling of the corpus and related resources. The web application is being developed by the Department of Linguistic Information Processing (Sprachliche Informationsverarbeitung). Finally, the archiving and sustainability of the data are ensured by the Data Center for the Humanities (DCH).

Vedaweb Project Team

(institutions are sorted alphabetically)

Allgemeine Sprachwissenschaft, Institut für Linguistik

Dr. Uta Reinöl

Felix Rau

Jakob Halfmann

Cologne Center for eHumanities

Prof. Dr. Patrick Sahle

Francisco Mondaca

Data Center for the Humanities

Jonathan Blumtritt

Historisch-Vergleichende Sprachwissenschaft, Institut für Linguistik

PD Dr. Daniel Kölligan

Natalie Korobzow 

Sprachliche Informationsverarbeitung, Institut für Linguistik

Prof. Dr. Jürgen Rolshoven

Claes Neuefeind

Börge Kiss


July 2017 – July 2020

Preliminary project page at the Institut für Linguistik

The Vedaweb project is funded by the

‘Digital Editing and Medieval Manuscripts’ – Workshop series in collaboration with Ca’ Foscari, University of Venice

“Digital Editing and Medieval Manuscripts” is a series of three workshops organised by Ca’ Foscari University of Venice together with the Cologne Center for eHumanities within the seminar programme “Lingue, saperi e conflitti nell’Italia medievale” of the ERC StG project BIFLOW. It aims to explore the role of digital technologies in the field of medieval studies and to provide insights into current methodologies and digital tools in scholarly editing. The three workshops will introduce participants to current approaches to editing medieval manuscripts in a digital framework, covering palaeography, codicology, and the practices and theories of digital editing, including critical apparatus, multilingualism and text-image linking.

Announcement on

I: Digital Manuscripts

3 July 2017, 9:30 – 18:00, Università Ca’ Foscari Venezia, Dipartimento di Studi Umanistici, Palazzo Malcanton Marcorà, Sala Consiglio (“Sala Grande”)

9.30 – 10.00 Welcome: Antonio Montefusco, Università Ca’ Foscari Venezia

10.00 – 10.30 Franz Fischer, Cologne Center for eHumanities, Universität zu Köln

“Editing Medieval Texts: Theories, Practices, and Challenges in the Digital Age”

10.30 – 11.00 Paolo Monella, Università di Palermo
“Multi-layer textual representation of pre-modern primary sources”

11.00-11.30 Coffee break

11.30 – 12.00 Marjorie Burghart, Centre National de la Recherche Scientifique (CNRS)
“Tools and Software for Editing Medieval Texts”

12.00 – 12.30 Discussion

12.30 – 14.00 Lunch Break

14.00 – 18.00 Workshop

Alberto Campagnolo, CLIR/Library of Congress (video conference)
“Towards a digitization of the materiality of documents”

Tiziana Mancinelli, CCeH – Università Ca’ Foscari Venezia

Scientific committee: Antonio Montefusco, Tiziana Mancinelli

Organizing committee: Sara Bischetti, Maria Conte, with the kind collaboration of Stefano Pezzé and Giulia Zava.

Workshop series organized within the framework of the BIFLOW project

Bilingualism in Florentine and Tuscan Works (ca. 1260 – ca. 1416)

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 637533).

‘Natural Language Understanding and Artificial Intelligence’: Guest Lecture by Dr. Kruchinina, 10 July 2017

The Department of Linguistic Information Processing (Sprachliche Informationsverarbeitung) invites you to a guest lecture by Dr. Ekaterina Kruchinina (Nuance) entitled “Nuance NLU and AI Research. Driving Innovations”. The lecture will take place next Monday, 10 July 2017, in the PC pool (room 72) in the Philosophicum.


Speech Recognition, Biometrics, Text-To-Speech, Natural Language Understanding and AI are key research areas to redefine the relationship between people and technology. Nuance’s Research team is working on all of these in order to develop a more human conversation with technology. This talk will highlight a few current research topics and trends in the company. Automotive solutions in cars on the road today and others that will come out in the next few years will be used to illustrate how achievements in Natural Language Understanding and AI help to create the next generation of digital assistants.

Speaker: Dr. Ekaterina Kruchinina, NLP Research Manager at Nuance Communications

Ekaterina Kruchinina is a Research Manager in the Natural Language Understanding department at Nuance. Her principal research responsibilities are in NLU, machine learning, corpus annotation and the evaluation of NLU systems. She joined Nuance as a Senior Research Scientist in 2012. Before that, she worked as a research associate at the JulieLab at the Friedrich Schiller University of Jena. She received her PhD in 2012 with a dissertation titled “Event Extraction from Biomedical Texts Using Trimmed Dependency Graphs”, supervised by Prof. Dr. Udo Hahn (Friedrich Schiller University Jena) and Prof. Ted Briscoe (University of Cambridge). Ekaterina developed the relation extraction system JReX, which was ranked second in the BioNLP 2009 Shared Task on Event Extraction (at NAACL-HLT 2009). She has 12 years of academic and industry experience and has led the development and application of cutting-edge NLP technology for intelligent human-machine interfaces. Ekaterina speaks Russian, German, English and French. Her work has been featured in the article „Die Computerversteherin. Ein Job an der Schnittstelle von Mensch und Maschine“, c’t 04/2017.

CCeH General Meeting, 14 July 2017

This year’s CCeH general meeting will take place on Friday, 14 July 2017, 15:30–17:30, in the meeting room of the Dean’s Office in the Philosophikum. All members, as well as anyone who would like to become one because they are interested in digital humanities research and teaching at the University of Cologne, are cordially invited.