digitalhumanities – Parerga und Paralipomena
http://www.michelepasin.org/blog
At the core of all well-founded belief lies belief that is unfounded – Wittgenstein

Another experiment with Wittgenstein’s Tractatus
Mon, 21 Sep 2015

Spent some time hacking over the weekend. And here’s the result: a minimalist interactive version of Wittgenstein’s Tractatus.

[Screenshot of the interactive Tractatus]

The Tractatus Logico-Philosophicus is a text I’ve worked with before.

This time I was intrigued by the simple yet super cool typed.js JavaScript library, which simulates animated typing.
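Getting the effect running takes just a couple of lines. Here’s a minimal sketch of the kind of call involved (I’m using the library’s jQuery-plugin flavour; the element id and the option values are illustrative, not taken from my actual code):

    // Type out the first proposition of the Tractatus into <span id="tractatus">.
    // Assumes jQuery and typed.js are loaded; the id and options are illustrative.
    $('#tractatus').typed({
      strings: ['1. The world is all that is the case.'],
      typeSpeed: 40,   // delay between characters, in milliseconds
      showCursor: true // blinking cursor, as in a terminal
    });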


After testing it out a bit I realised that this approach lets you focus on the text more attentively than when it is all displayed at once.

Since the words appear one at a time, it feels more like a verbal dialogue than reading. As a consequence, the way the meaning of the text is perceived changes too.

Slower, deeper. Almost like meditating. Try it out here.

Credits

  • the typed.js JavaScript library
  • the Tractatus Logico-Philosophicus by Wittgenstein

The role of Digital Humanities in a natural disaster
Thu, 24 May 2012

As part of the New Directions in the Digital Humanities series, this week we had a very inspiring presentation from Dr Paul Millar, Associate Professor and Head of the Department of English, Cinema and Digital Humanities at the University of Canterbury (NZ). The talk focused on the CEISMIC project, with which Millar and his team intended to ‘crowdsource’ a digital resource to preserve the record of the earthquakes’ impacts, document the long-term process of recovery, and discover virtual solutions to issues of profound heritage loss. (P.s.: this entry was cross-posted on the DhWip blog.)


    In the months since a 7.1 magnitude earthquake hit New Zealand’s Canterbury province in September 2010, the region has experienced over ten thousand aftershocks, 430 of them above magnitude 4.0. The most devastating aftershock, a 6.2 earthquake under the centre of Christchurch on 22 February 2011, had one of the highest peak ground acceleration rates ever recorded. This event claimed 185 lives, damaged 80% of the central city beyond repair, and forced the abandonment of 6,000 homes. It was the third costliest insurance event in history.

    As part of the project, a number of inspiring community-oriented digital resources have been made available, including:

  • Quakestories (http://www.quakestories.govt.nz/): allows anyone to share stories and photos of the Canterbury earthquakes, e.g. “Shelves were crashing to the ground and books spewing everywhere. Everyone was bent over and…”
  • Quakestudies (https://quakestudies.canterbury.ac.nz/): a digital archive built to store all types of content related to the Canterbury earthquakes. It has been developed with companies, government organisations, websites and individuals, to help them preserve their content. The resource will be made available in the coming weeks.
  • Whenmyhomeshook (http://whenmyhomeshook.co.nz/): a website dedicated to helping Canterbury school children overcome the recent earthquakes by providing a platform where they can openly share their personal earthquake stories.
    In particular, Quakestudies is going to become a massive federated archive, containing content sourced from the research community and the peak agencies involved with the earthquakes. All of this information will be “looked after in perpetuity and be available to approved researchers either now or in future years”. As it is being indexed using a number of approaches (including semantic web technologies, says Millar), it will open up a number of exploratory pathways into these materials – many of which it is not yet possible to foresee.

    This is certainly an inspiring example of digital technologies being employed to support a large number of people; in particular, it is remarkable how the entire initiative was promoted and coordinated by a team of dedicated people at the University of Canterbury, which has managed to become a key reference point for the community at such a difficult time.

    From DDH, we certainly want to send our best wishes to the project, and we’re looking forward to using Quakestudies!

    Related resources

  • The September 11 Digital Archive: http://911digitalarchive.org/

Humanities Computing and web2.0
Mon, 04 Feb 2008

Last week I spent two interesting days in London, at the Epistemic Networks and GRID Web 2.0 for Arts and Humanities workshop. I went there representing PhiloSURFical and Cohere, but unfortunately, due to technical reasons, the fliers I had prepared to let this community know about our work were not handed out on time. The printers at Imperial College didn’t want to work for me :-). Too bad. I managed anyway to spread the word a bit, and the reactions have been very, very positive.

    But let’s talk about the workshop. I found out about quite a few humanities-related resource providers I was not aware of, which is pretty cool. I’m going to try accessing them using the APIs (where provided), and see whether some interesting cross-site services can be built on top of that. As usual, I found the long, never-ending talks about super-solid architectures that do marvellous things quite boring, as well as the ones about the state of the art of such-and-such technologies, bla bla bla. You can hear enough of them at the semantic web conferences, so I am not going to report on any of those. Much more interesting were the presentations about existing systems that do stuff for us (in this case ‘us’ should be the humanities scholars, I suppose). And the good news is that most of the talks presented systems like that!

    So here we go (in order of appearance, and based on what I remember – sorry if I’m leaving something out):

  • Paul Watry (Univ. of Liverpool): Named Entity and Identity Services for the National Archives: the Multivalent software and its descendant, the FabFour browser. The first one, to my understanding, is an extensive suite of tools for humanities computing, with a very long list of features organized into four sections: Browser, Tools, Developer and Research.
  • Browser
    – Natively view HTML, PDF, TeX DVI, man pages, and other document formats
    – Annotate in situ on all formats, robustly anchored
    – Notemarks
    – Lenses (show OCR, decipher, magnify)
  • Tools
    – PDF: impose, compress, uncompress, info, encrypt/decrypt, split and merge, validate
    – HTML: robust hyperlink signing
    – All document formats: extract text, structure, and links; full-text search with Lucene
  • Developer
    – Deep and pervasive browser extension via behaviors
    – Parse all formats, extract content and style from the DOM, format and extract layout geometry
    – Render all formats and embed in Swing
    – PDF library: read / modify / write (supports PDF 1.5)
  • Research
    – Digital preservation
    – A platform for new ideas
    – Robust hyperlinks and robust locations
    – PDF compression: “Two Diet Plans for Fat PDF” and the Compact PDF Specification

    It’s quite impressive. I haven’t tried it out, but I downloaded the FabFour browser instead, which is built on top of Multivalent. It’s a Java application for browsing and annotation:

    * Can open HTML, PDF, DVI, SVG, JPEG and other formats natively, without helper applications
    * Allows shared, distributed annotations using open standards (SRW, SOAP, …)
    * Uses the XML digital signature standard to guarantee provenance of the annotations
    * Annotations are stored separately from the original file, so the original file remains untouched
    * Annotations are attached to the documents using different identifiers, so they are location and file format independent (you can annotate an ODF file with Fab4, email the file converted to PDF, and the receiver will be able to see the same annotations as in the original ODF file)
    * Public notes are indexed in a Cheshire database, so they can be searched and documents can be retrieved through their annotations
    * Supports copy editing, style editing and lenses to enrich the document view

    Really nice. It worked smoothly on my Mac, and it looks powerful and easy to use. I think it’s a tool I’ll try to integrate into my daily workflow, for studying & annotating what I find on the web!
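    By the way, the stand-off annotation model above is worth dwelling on. Just as a rough sketch of the idea (my own illustration, not Fab4’s actual storage format, which I haven’t inspected), an annotation kept apart from the document could be as simple as:

    // Sketch of a stand-off annotation record, stored separately from the
    // annotated document. My own illustration, not Fab4's actual format.
    const annotation = {
      target: {
        documentId: 'urn:example:some-paper', // hypothetical stable identifier
        anchor: { quote: 'all that is the case', occurrence: 1 } // text anchor, not byte offsets
      },
      body: 'Compare with proposition 1.1.',
      author: 'mikele',
      created: '2008-02-04'
    };
    // Nothing is written into the file itself, so the original stays untouched
    // and the note can be re-anchored in any rendering of the same text.
    console.log(JSON.stringify(annotation, null, 2));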

     

  • Marc Wilhelm Küster, TextGrid. It’s an Eclipse-based platform for textual research in the humanities. I tried to look for stuff to download online but couldn’t find anything, so here’s their blurb:
  • Integrated tools that satisfy the specific requirements of text sciences could transform the way scholars process, analyse, annotate, edit and publish text data. Working towards this vision, TextGrid aims at building a virtual workbench based on e-Science methods.
    The installation of a grid-enabled architecture is obvious for two reasons. On the one hand, past and current initiatives for digitising and accessioning texts already accrued a considerable data volume, which exceeds multiple terabytes. Grids are capable of handling these data volumes. Also the dispersal of the community as well as the scattering of resources and tools call for establishing a Community Grid. This establishes a platform for connecting the experts and integrating the initiatives worldwide. The TextGrid community is equipped with a set of powerful software tools based on existing solutions and embracing the grid paradigm.

    It seemed like a comprehensive suite of tools for distributed text analysis – the only downside, from my non-German perspective, being that it’s all tailored to the German language for now.

     

  • Greg Crane (Department of the Classics, Tufts University) – Perseus project
  • Perseus is an evolving digital library, engineering interactions through time, space, and language. Our primary goal is to bring a wide range of source materials to as large an audience as possible. We anticipate that greater accessibility to the sources for the study of the humanities will strengthen the quality of questions, lead to new avenues of research, and connect more people through the connection of ideas.

    It’s a gigantic collection of literary texts, something you’d really want to check out. Plus, it also provides a few ways to analyze all this material using visualizations and statistical tools. I really liked this. We are using computers, right? So we should take advantage of their number-crunching capabilities!
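    Just to make the point concrete – a toy example of mine, nothing to do with Perseus’s actual tooling – even a few lines of JavaScript are enough to get basic word-frequency statistics out of a text:

    // Toy word-frequency count: the simplest kind of number crunching
    // you might run over a literary text.
    function wordFrequencies(text) {
      const counts = {};
      const words = text.toLowerCase()
        .replace(/[^a-z\s]/g, ' ') // strip punctuation and digits
        .split(/\s+/)
        .filter(Boolean);
      for (const w of words) counts[w] = (counts[w] || 0) + 1;
      // return [word, count] pairs sorted by descending frequency
      return Object.entries(counts).sort((a, b) => b[1] - a[1]);
    }

    console.log(wordFrequencies('the anger of Achilles son of Peleus').slice(0, 3));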

     

  • David Shotton, ClarosWEB project. These guys are based in Oxford and, although they haven’t produced any software yet, they impressed me with their clear intention of contributing to a ‘data web’ (= SPARQL endpoints, RDF-ed data, etc.). What the project is about:
  • CLAROS – CLassical Art Research Online Services – developed from discussions between European university research centres held in Oxford in 2000, but the concept dates back to the early 1990s, when the Beazley Archive participated in the EU R&D project RAMA (Remote Access to Museum Archives). The development of web technology has made it possible for RAMA’s aspirations to be realised.
    CLAROS aims to use Web 2.0 and image recognition technologies to bring classical art to anyone, any time, anywhere thanks to collaboration with the University of Oxford’s OeRC and the Departments of Engineering Science and Zoology.

    In particular, the data-web branch of the project is called FlyWeb; more information about it can be found here.
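    To give an idea of what a ‘data web’ buys you, here’s a hedged sketch (the endpoint URL and the vocabulary are invented, since CLAROS hasn’t published anything yet): once the data is exposed through a SPARQL endpoint, any script can query it over plain HTTP.

    // Querying a (hypothetical) SPARQL endpoint over HTTP.
    // The endpoint URL and vocabulary are invented for illustration.
    const endpoint = 'http://example.org/claros/sparql';
    const query = `
      SELECT ?artefact ?label WHERE {
        ?artefact <http://www.w3.org/2000/01/rdf-schema#label> ?label .
      } LIMIT 10`;

    fetch(endpoint + '?query=' + encodeURIComponent(query), {
      headers: { Accept: 'application/sparql-results+json' } // standard SPARQL results format
    })
      .then(res => res.json())
      .then(json => {
        for (const b of json.results.bindings) {
          console.log(b.artefact.value, '–', b.label.value);
        }
      });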

     

  • Kalina Bontcheva: AKT and GATE GRID-WEB services. Kalina gave an overview of GATE, which is a well-known tool in the NLP community. The interesting thing is that they’re now working on a web (web2?) version of it, called SAFE.

  • Marco Passarotti, Index Thomisticus Treebank. I don’t remember much of the Latin language, but this seemed like a pretty serious piece of software for analyzing and annotating Latin works:
  • Lessico Tomistico Biculturale aims to develop the IT (Index Thomisticus) into a lexicon, whose lexical entries are all the IT lemmas. Each entry is a report about the morphological, syntactic and semantic uses and values of the lemmas in the IT.

    The software is Java-based. The thing I really liked is that they’ve also done a web-friendly implementation of it: LemLat.

     

  • Jürgen Renn, The Epistemic Web, Max Planck Berlin. This talk mentioned another big source of humanities resources, the ECHO project:
  • more than 330 authors represented in ECHO collections
  • 70 seed collections in several disciplines and thematic fields, in particular history of science
  • more than 206,600 documents
  • more than 265,000 high-resolution images of historical and cultural source documents and artefacts
  • more than 240 film sequences of scientific source material
  • more than 57,500 full-text page transcriptions in several languages

    As far as the tools are concerned, that’s all. Two other talks I liked are the ones by Martin Doerr (about an interesting first-order-logic framework for managing co-reference on the semantic web) and by Annamaria Carusi (on the relationship between technology and human practice – among other things, about how and when the digital instruments we use are changing the way we do things in humanities research too).

    Conclusion:

    Interesting experience, cool software, but one major remark… where is the web2 in all of this? The web should be the platform, right? And how about the other principles (user collaboration, content syndication, lightweight business models, ease of use, Ajax, folksonomies, mashups, etc.)?

    Interestingly enough, these aspects were not mentioned much in the presentations. I think it’s a clear sign that humanities & computing is a little bit behind in this respect. We should start thinking about how to get more results from all those wasted ‘cycles of human computation’, or better, ‘cycles of humanists’ ruminations’, by finding cheap and effective ways to get more people into the loop, letting them have fun on the web and at the same time contribute to it.
    Cohere is trying to go in this direction; in the next months we’ll surely find out how many humanists find it useful!

     
