A Call for Papers that may be of interest for our projects

DH-CASE II (Collaborative Annotations in Shared Environments: metadata, tools and techniques in the Digital Humanities) will be held in conjunction with the DocEng 2014 conference.

I copy here a message sent by Patrick Schmitz:

We invite submissions for DH-CASE II: Collaborative Annotations in Shared Environments: metadata, tools and techniques in the Digital Humanities, to be held in conjunction with the ACM Document Engineering 2014 conference.

Digital Humanities is rapidly becoming a central part of humanities research, drawing upon tools and approaches from Computer Science, Information Organization, and Document Engineering to address the challenges of analyzing and annotating the growing number and range of corpora that support humanist scholarship.
== Focus of workshop
From cuneiform tablets, ancient scrolls, and papyri, to contemporary letters, books, and manuscripts, corpora of interest to humanities scholars span the world’s cultures and historic range. More and more documents are being transliterated, digitized, and made available for study with digital tools. Scholarship ranges from translation to interpretation, from syntactic analysis to multi-corpus synthesis of patterns and ideas. Underlying much of humanities scholarship is the activity of annotation. Annotation of the “aboutness” of documents and entities ranges from linguistic markup, to structural and semantic relations, to subjective commentary; annotation of “activity” around documents and entities includes scholarly workflows, analytic processes, and patterns of influence among a community of scholars. Sharable annotations and collaborative environments support scholarly discourse, facilitating traditional practices and enabling new ones.

The focus of this workshop is on the tools and environments that support annotation, broadly defined, including modeling, authoring, analysis, publication and sharing. We will explore shared challenges and differing approaches, seeking to identify emerging best practices, as well as those approaches that may have potential for wider application or influence.
== Call
We invite contributions related to the intersection of theory, design, and implementation, emphasizing a “big-picture” view of architectural, modeling and integration approaches in digital humanities. We encourage submissions that discuss data and tool reuse, and that explore at what level the products of a digital humanities project are most successfully reused (complete systems? APIs? plugins/modules? data models?). Submissions discussing an individual project should focus on these larger questions, rather than primarily reporting on the project’s activities. This workshop is a forum in which to consider the connections and influences between DH annotation tools and environments and the tools and models used in other domains, which may provide new approaches to the challenges we face. It is also a locus for the discussion of emerging standards and practices such as OAC (Open Annotation Collaboration) and Linked Open Data in Libraries, Archives, and Museums (LODLAM).
== Submission procedures
Papers should be submitted at www.easychair.org/conferences/?conf=dhcase2014. An abstract of up to 400 words must be submitted by June 1st, and the deadline for full papers (6 to 8 pages) is June 8, 2014. Submissions will be reviewed by the program committee and selected external reviewers. Papers must follow the ACM SIG Proceedings format.
The authors of up to three papers of exceptional quality and impact will be invited to submit an extended abstract (2-4 pages) for inclusion in the DocEng 2014 conference proceedings.
== Key dates
June 1    Abstracts due (400 words max)
June 8    Full workshop papers due
June 30   Notification of acceptance to workshop. Up to 3 papers may be invited
           to submit extended abstracts
Sept. 16  Workshop
We look forward to seeing you in Ft. Collins!
Workshop Organizers: Patrick Schmitz, Laurie Pearce, Quinn Dombrowski

Distant and Close Reading

I must confess that I am still a devoted adherent of “close” reading. Nevertheless, I am powerfully impressed by the results of “distant” reading. So I will not engage here in the mock battle between the two opposing parties, but rather suggest another possible way of creating “synergistically recursive interactions” between distant and close reading (Katherine Hayles, How We Think, 2012, p. 31). Aerial photography allows us to see unnoticed archaeological sites, but then we have to dig…

The reason for this post can be stated at once: I recently wrote a letter to Ernesto Priani about the Pico Project, and when I congratulated Massimo Riva on the opening of this blog, he invited me, in reply, to publish my letter. So let me try to contextualise it. Ernesto and I had an exchange about annotation in the discussion that was started to prepare the new project proposal: I was insisting on the use of linked data to bring to the fore intra-textual relations among terms, whereas Ernesto was stressing the need to point out inter-textual relations to other works, either sources or later works influenced by Pico. We agreed that the two concerns could indeed be reconciled, and I have lately been reflecting on a practical approach, which I presented to Ernesto.

Before translating my letter, a few explanatory remarks are in order. Pico’s 900 Theses, or Conclusiones CM, are a collection of statements by past philosophers of all schools, together with a collection of statements of his own, all aimed at confirming their possible overall concordance. Identifying the exact sources of the statements that Pico reports is a daunting philological task. But in the case of Thomas Aquinas there might be a chance… so let me (more or less) translate what I wrote:

« In some cases—I am thinking of Thomas Aquinas—we have the entire corpus at our disposal online. A sort of “non-consumptive reading” or “topic modelling” may be of help, I believe, in finding within Aquinas’ corpus the passages referred to by each of the theses attributed to him by Pico in his Conclusiones.

« Finding the exact references through this form of “distant reading” may be very helpful indeed, for here lies a chance to satisfy both our requirements and to reconcile our two distinct points of view.  Singling out source references in this way can offer us a heuristic basis for a subsequent analysis and “close reading” of Pico’s text, aimed at a critical interpretation of his thought.

« In “topic modelling,” a “topic” is defined in the following way: “A ‘topic’ consists of a cluster of words that frequently occur together” (cf. MALLET). Starting from Pico’s theses and their sources, it then seems possible to identify distinct “topics” that may be used as an interpretative device for analysing Pico’s works, possibly enabling us to develop, in a bottom-up way, fragments of controlled vocabularies or ontologies that can serve as a basis for a systematic annotation of his texts and the production of linked open data for their semantic enrichment.
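As a concrete illustration of this step, here is a minimal sketch of topic modelling in Python, using the gensim library rather than MALLET itself; the three passages standing in for the Aquinas corpus, and all names in the code, are illustrative assumptions, not part of the project.

```python
# A minimal sketch (not the project's actual pipeline): fit a small LDA
# topic model with gensim, where each "topic" is a cluster of words that
# frequently occur together. The passages are toy stand-ins for the
# Aquinas corpus.
from gensim import corpora, models

passages = [
    "the soul is the substantial form of the body",
    "the intellect knows universals abstracted from particulars",
    "prime matter receives form and the soul perfects the body",
]

# Naive tokenization; a real experiment on Latin texts would lemmatize.
texts = [p.split() for p in passages]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(bow, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)

# Print the top words of each inferred topic.
for topic_id, words in lda.show_topics(num_topics=2, num_words=5,
                                       formatted=False):
    print(topic_id, [w for w, _ in words])
```

A topic matched in this way could then be attached to a thesis as an annotation and serialized as linked open data, which is where the two concerns discussed above meet.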

« This, in short, is the idea I am proposing. As a tool for singling out source references, instead of MALLET (see above) I would prefer the “word2vec” approach, which represents words as vectors on the basis of their co-occurrence. A very interesting feature of this method is that by adding or removing a term in a cluster of words that we may choose to define as a “topic,” the set of passages retrieved changes radically. This aspect seems to me particularly suggestive, because it brings to mind the notion of ‘language games’ introduced by Wittgenstein, according to which the meaning of a term is defined by the set of its relations to all the other terms in a given game. And I think that this particular aspect of the “word2vec” approach can yield, along with significant results, very important theoretical insights as well ».
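To suggest what this might look like in practice, the following hedged sketch uses gensim’s Word2Vec: a chosen cluster of words is averaged into a “topic” vector, and passages are ranked by similarity to it, so that adding or removing a term re-ranks the passages. The corpus, terms, and helper functions are all illustrative assumptions.

```python
# Illustrative sketch of the word2vec idea: represent words as vectors,
# average a chosen cluster of words into a "topic" vector, and rank
# passages by cosine similarity to it. Corpus and terms are toy
# assumptions standing in for the Aquinas texts.
import numpy as np
from gensim.models import Word2Vec

passages = [
    "the soul is the form of the body".split(),
    "the intellect abstracts universals from particulars".split(),
    "matter receives form and the soul perfects the body".split(),
]

model = Word2Vec(passages, vector_size=50, min_count=1, window=3,
                 seed=0, epochs=50)

def topic_vector(terms):
    """Average the vectors of a cluster of words into one topic vector."""
    return np.mean([model.wv[t] for t in terms], axis=0)

def rank_passages(terms):
    """Rank passages by cosine similarity to the cluster's topic vector."""
    tv = topic_vector(terms)
    sims = []
    for p in passages:
        pv = np.mean([model.wv[w] for w in p], axis=0)
        cos = np.dot(tv, pv) / (np.linalg.norm(tv) * np.linalg.norm(pv))
        sims.append((cos, " ".join(p)))
    return sorted(sims, reverse=True)

# Adding or removing a term in the cluster can reorder the ranking.
print(rank_passages(["soul", "form"]))
print(rank_passages(["soul", "form", "intellect"]))
```

On a corpus this small the rankings are of course meaningless; the point is only the mechanism by which the composition of the word cluster determines which passages come to the fore.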

Dino Buzzetti


Welcome/Benvenuti (Massimo Riva, Director, Virtual Humanities Lab @ Brown University)

This blog is a platform for discussing topics in the Digital Humanities. It focuses on the implementation of an experimental framework for close collaboration among a worldwide network of scholars who contribute to the Virtual Humanities Lab at Brown University and are currently at work on the creation of significant digital resources for the study of various facets of humanist culture.

In the age of data mining, “distant reading,” and cultural analytics, we increasingly rely upon automated, algorithm-based procedures to parse the exponentially growing body of digitized textual and visual resources. Yet, within this deeply networked and massively interactive environment, it is crucial to preserve the “expert logic” of primary and secondary sources, expert opinions, textual stability, citations, and so on, which forms the heritage and legacy of humanities scholarship. Scholarly collaboration cannot be limited to the development of tools, or the application of tools developed by others, but must envision “a disciplined set of practices that problematizes methodology, tools and interpretation at the same time” (Stefan Sinclair, Introduction: Correcting Methods).

We want to develop “strategies for Scholarsourcing” (D’Iorio-Barbera), as opposed to crowdsourcing, because we believe that comprehensive research protocols for open collaborative work would advance the agenda of networked communities of practice similar to the one envisioned here.