I’m in Edinburgh, Scotland, attending the Object Artefact Script workshop organized by the National e-Science Centre. My attendance here is mostly for learning purposes, as I am new to this discipline. The program runs from today, Thursday, October 8th, until Friday, October 9th, 2009.

Edinburgh is a beautiful city. Last night I had a chance to walk through Grassmarket, and up the hill to see the Castle.

Introduction

Gabriel Bodard introduced the workshop by talking a little about its main concept. He argued that there is a need for a space in which to analyse texts together with the objects on which they are written, in order to gain a more complete understanding of both. This workshop is meant to create that space.

Written Things, Human Agents, Inhabited Worlds: Revealing processes of mutual constitution through digital technologies

Kathryn Piquette [abstract]

Too much emphasis is put on the text as an independent object of study, separating it from the object that carries it. Kathryn has developed some methods to study and capture the text as well as the material. She argues that meaning is only created in a particular context. She has found ATLAS.ti to be a useful tool to aid her research: it combines multimedia (pictures, video, text) with functions to annotate, select and organize. The study of materials is important because it shows us the object as changing, reused, engaged with, etc. James Gibson’s account of the properties of materials and the environment (medium, substances, and surfaces) already makes an argument for the study of the object in context. She is interested in seeing how digital tools can help capture this context. An example is the Digital Karnak project from UCLA, a digital representation of the temple of Karnak offering videos and a 3D reconstruction that shows patterns of construction, modification and destruction. Some other issues might arise, like misleading visualizations.

QP

Projects like these have a shelf life. We need to make sure we document the sampling and data gathering. In the future, researchers using improved methods and technologies might take advantage of our experience.

Digitization already applies a lens of interpretation, and we need to come to terms with this. We cannot present a digital representation as pure data.

A puzzling letter form, or how ways-of-seeing and ways-of-looking differ

Ségolène Tarte [abstract]

Ségolène works on the e-Science and Ancient Documents (eSAD) project at Oxford University. The aim of the ESAD project is to develop a web-based interpretation support system. It is very important that the system is not conceived as a replacement for the expert; the project aims to understand how the expert thinks in order to provide support and help her do her job better. An expert reader takes into account a set of levels of reading in order to build a meaningful document, and the tools can only support some of these levels:

  • Digitization: physical attributes of the document, and features of a character.
  • Image processing: features of a character, and possible character.
  • Knowledge bases: possible character, possible sequence of characters, possible word or morphemic unit, grammar, meaning or sense of a word, and meaning or sense of a phrase or group of words.

Digitization takes advantage of the physicality of the document. For example, multiple pictures taken with varying lighting angles can help capture the incisions on clay tablets. Image processing algorithms work much better in black and white. Subjectivity is a challenge in several steps of the process: identifying characters, for example, is subject to visual illusions, preconceptions, and the expectations of the researcher. Experts have different strategies for interpreting the text: kinaesthetic/palaeographical (drawing, embodiment, visual feedback) and cruciverbalistic/philological (puzzle solving, script as a collection of symbols, intuition). She creates a map of the process of reading and interpreting a tablet, taking into account skills, expectations, and levels of reading.

Panel discussion

How do we assess what makes good evidence?

The technology must be an enabler, and not a constraint.

What is a relevant amount of information to provide?

Simple assumptions help us understand and analyse, but they affect how future work is done. How much documentation or raw data can you publish in order to allow further analysis, alternative work, etc.? What was useful to the project while it was being made is probably enough.

How well do editions of a text reflect the active resolution of the above-mentioned tensions?

(Ségolène’s talk)

What is the process of favouring one resolution over another? Our aim should be to aid that resolution, to provide the tools with which to make an interpretation. We cannot aim to make an exact transcription; we want to get as close as possible to what the text was intended to be. Scholars present an illusion of a level of expertise that is not subject to interpretation. One strategy could be to show that there can be several ways to arrive at an understanding, and to show competing hypotheses.

Is it possible to help the researcher? – that should be the important question. We claim that we can, and hypothesis generation is one way to do this.

Is formalism contributing to the loss of detail in the capture of objects?

Documentation does not help here: the fact that we know how we got to a conclusion does not mean the process can be reversed.

Can we reconstruct how previous decisions were taken, and how previous interpretations were formed?

Can this interpretation support system (ISS) be useful to other disciplines?

The ISS draws inspiration from medical support systems. There is an important difference, though: in this case we are building a narrative about an object.

It could be used to interpret metaphorical representations in paintings.

King’s College London Visualization Lab’s concept of paradata.

Further Reading: Formality Considered Harmful

Object, text and image: concepts and technology

Dot Porter [abstract]

Dot talks about the relationship between the text and the materiality of objects. She uses Beowulf as an example: there is only one manuscript of the poem, and every version is derived from this original manuscript. Digital means would allow us to make it available to researchers without putting the manuscript in danger. Her main question is how we use these digital technologies in an editorial process. What kind of information about the digital process do we use, keep, etc.? What kind of information do we make available in the final edition?

There are many types of digital images that can be incorporated into such projects: images from technologies that help us recover information, 3D models of the object, and high-resolution photos.

Manuscript descriptions, transcriptions, and facsimiles are different ways to represent a document. TEI can help combine these representations into one rich document. But there is a disconnect between the TEI structure and the technology that is available to use that information: the software is limited in how it takes advantage of the encoding.
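As a rough illustration of what TEI makes possible (my own sketch, not an encoding from the talk), a single TEI P5 document can hold a manuscript description, a facsimile zone, and a transcription line that points at that zone; all identifiers, coordinates and file names below are illustrative.

```python
# A minimal sketch (not from the talk): one TEI P5 document combining a
# manuscript description, a facsimile zone, and a transcription line.
# Identifiers, coordinates and file names are invented for illustration.
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
XML_NS = "http://www.w3.org/XML/1998/namespace"
ET.register_namespace("", TEI_NS)

def el(parent, tag, text=None, **attrs):
    """Append a namespaced TEI child element."""
    e = ET.SubElement(parent, f"{{{TEI_NS}}}{tag}", attrs)
    if text:
        e.text = text
    return e

tei = ET.Element(f"{{{TEI_NS}}}TEI")

# 1. Manuscript description lives in the header.
file_desc = el(el(tei, "teiHeader"), "fileDesc")
el(el(file_desc, "titleStmt"), "title", "Beowulf (sample encoding)")
ms_desc = el(el(file_desc, "sourceDesc"), "msDesc")
el(el(ms_desc, "msIdentifier"), "idno", "Cotton MS Vitellius A XV")

# 2. Facsimile: an image of the page plus a zone marking one written line.
surface = el(el(tei, "facsimile"), "surface")
el(surface, "graphic", url="folio-129r.jpg")
zone = el(surface, "zone", ulx="120", uly="80", lrx="900", lry="140")
zone.set(f"{{{XML_NS}}}id", "z1")

# 3. Transcription: a line that points back at the zone through @facs.
body = el(el(tei, "text"), "body")
el(el(body, "div"), "l", "Hwæt! We Gardena in geardagum", facs="#z1")

print(ET.tostring(tei, encoding="unicode"))
```

The disconnect Dot describes is that, even with all of these layers encoded together, most delivery software only makes use of one of them.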

We need to avoid trying to create exact representations of the text; they will never be the original (uncanny valley). We need to create applications that take advantage of the metadata and present the information in a useful way. The model should clarify the object rather than mimic it.

Further reading: Uncanny Valley

QP

How far do you have to go through the uncanny valley until it becomes OK again? Imitation is misleading because it does not add real properties, only algorithmically defined flaws that add to the illusion.

There is a certain experience of the materiality of the book, or the scroll. Should digital visualizations try to recreate that experience?

AG: Digital tools have recreated the original documents, cleaned up the text, brightened contrast, but they are a fundamentally different experience from the embodied object. Any attempt at recreating that experience would fail. Also, we could claim that any recreation of that experience is based on a personal account of it, namely the editor’s. Most likely it would be what the editor thinks the experience of that object was for somebody who read it in the period when it was created. There is no definitive experience of the object to mimic.

img2xml: Linking Image and Text with SVG

Hugh Cayless [abstract]

Hugh presents img2xml, a tool to link text with images via the Scalable Vector Graphics (SVG) format. SVG is a World Wide Web Consortium (W3C) specification for vector graphics on the web. Vector graphics are infinitely zoomable; they are not meant to be a perfect representation of the image’s elements, but they are useful in several ways. They are also very flexible, and therefore very useful for visualization purposes.

An SVG trace consists of shapes and regions which together map to a transcription. These structures could be linked to the text by using RDF. There are some advantages to this technique: we can capture how elements of the text relate to each other, and we can modify individual elements. There are some disadvantages as well: it is two-dimensional, it reduces the image to a 1-bit colour space, browser support is lacking, and it lacks semantics.
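As a rough sketch of the general idea (my own guess, not img2xml’s actual vocabulary or output), an RDF triple could tie a traced SVG shape to the transcription line it represents; the URIs and the “represents” predicate below are invented.

```python
# Sketch only: an RDF triple linking a traced SVG shape to a transcription
# line. The URIs and the "represents" predicate are invented; img2xml's
# real vocabulary may differ.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/terms/")                    # hypothetical vocabulary
svg_shape = URIRef("http://example.org/ms/page1.svg#line-3")   # a traced <path>/<g> id
tei_line = URIRef("http://example.org/ms/text.xml#l3")         # the matching TEI line

g = Graph()
g.bind("ex", EX)
g.add((svg_shape, EX.represents, tei_line))

print(g.serialize(format="turtle"))
```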

Project Objectives:

  • Working prototype by the end of 2009
  • Research other texts (papyri, Archimedes palimpsest)
  • Incorporate transcription tools (SoSOL)

QP

Is it useful? YES!

It uses Potrace to generate the vector graphics.

It does not create a perfect representation of the text, and it is meant to be used together with the image.
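For the curious, the tracing step is roughly the following (a minimal sketch assuming Potrace and Pillow are installed; the file names are made up):

```python
# Sketch of the tracing step: binarize a page image with Pillow, then let
# Potrace turn the dark regions into SVG paths. File names are invented.
import subprocess
from PIL import Image

Image.open("page.png").convert("1").save("page.pbm")   # 1-bit bitmap, the input Potrace expects
subprocess.run(["potrace", "-s", "-o", "page.svg", "page.pbm"], check=True)
```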

TILE: The Text Image Linking Environment and Image-based Editing

Dot Porter

The Text Image Linking Environment (TILE) is a project that aims to semi-automate the linking of image regions to text in XML encodings. It will probably use Hugh’s open-source img2xml code to define the regions. They have funding to develop the tool and to have some actual projects test it and give feedback. The Mapas Project, the Homer Multitext Project, the Chymistry of Isaac Newton, and the Algernon Charles Swinburne Project have shown interest.

It needs to be interoperable and TEI compliant, modular and extensible, and community-focused in development and testing.

QP

TILE is being built on top of the AJAX XML Encoder (AXE).

END OF DAY 1

Images as Evidence: Challenges for Imaging Ancient Objects

Ryan Baumann [abstract]

Ryan talks about establishing images as evidence of objects when advanced imaging techniques are used. The images will not look like the object as it would be seen in natural light, not least because the object is often damaged. It is important to document the process of image acquisition and modification. Images can be obtained with several methods or at different points in time. Image/image linking shows the relation of images, or parts of images, to a reference image. The use of reference frames, however, can create some constraints. Metadata links can be used to create links between images without altering the reference data. Understanding the imaging techniques is important for understanding the images.

Showing relationships between complex object representations is new and not very well developed. Through metadata we can transparently show direct comparisons between representations, thereby improving understanding of the object.

QP

This is cool from the technical perspective; how do we make it useful for scholars/editors?

We want to avoid losing data in order to conform to predefined formats of reference images (i.e. resizing, cropping, etc.). Image/image linking uses metadata to show the relationship between new data and reference data.
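As an illustration of what such a metadata link might record (my own sketch, not the project’s actual schema), something like the following could tie a derived image to its reference image without touching either file; all field names and values are invented.

```python
# Illustrative only: one possible shape for an image/image metadata link.
# Field names and values are invented; the project's real schema may differ.
import json

link = {
    "derived_image": "tablet42_raking_045.tif",      # e.g. a raking-light capture
    "reference_image": "tablet42_reference.tif",     # untouched reference frame
    "alignment": {                                   # how to map one image onto the other
        "type": "affine",
        "matrix": [[1.0, 0.0, 12.5],
                   [0.0, 1.0, -3.0]],
    },
    "capture": {"technique": "raking light", "light_azimuth_deg": 45},
}

print(json.dumps(link, indent=2))
```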

Further Reading: Rainbow Color Maps (Still) Considered Harmful

Augmenting Epigraphy

Leif Isaksen [abstract]

Leif shows pictures from his various trips around Europe, where he finds many inscriptions that are not really documented. He would like to have these inscriptions studied in some way. He offers an idea for a solution using mobile computing devices like phones, the web, and computing services. The larger community could take advantage of services that are already available: OCR, digital corpora, GPS… The service could take advantage of web-community mechanisms: information is collaboratively created and shared. As the community increases in size, more information is available, and the information that is available is, presumably, corrected. It provides some sort of motivation system for tourists, or the general public, as they get some instant information when they are curious. It is interesting for scholars as well: collections can be created, and large amounts of information can be gathered. Additional benefits could be automatic 3D modelling, texture mapping, and providing a model for other projects. Challenges: current corpora’s accessibility is limited (API search?), image correction, text identification, and bootstrapping.
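To make the shape of the idea concrete, here is a very rough pipeline sketch (entirely my own; every helper below is a hypothetical placeholder, not an existing service):

```python
# Hypothetical pipeline for the augmented-epigraphy idea. None of these
# helpers exist; they only name the stages described in the talk.
from dataclasses import dataclass

@dataclass
class Sighting:
    photo_path: str
    lat: float
    lon: float

def identify_inscription(sighting: Sighting) -> dict:
    image = correct_image(sighting.photo_path)                   # perspective/lighting correction
    text = recognise_text(image)                                 # OCR tuned for inscriptions
    matches = search_corpora(text, sighting.lat, sighting.lon)   # query digital corpora near the findspot
    return {"reading": text, "candidates": matches}

# Placeholder stubs so the sketch runs; real implementations would call
# imaging, OCR and corpus-search services.
def correct_image(path): return path
def recognise_text(image): return "(unread)"
def search_corpora(text, lat, lon): return []

print(identify_inscription(Sighting("wall.jpg", 55.948, -3.199)))
```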

QP

Feasible? Effort? Funding? If the focus is changed to general recognition of what object a person is looking at, or where the object is, it seems like a possible project. Flickr could be mined for images: it already has a large dataset, and location information for a large subset of it. GPS is already quite a ubiquitous technology in phones, or will be soon.

There needs to be some expert input into the service, an authority who can confirm what an inscription says.

Machine learning is probably the appropriate technique given the variability of the images.

Can we create something with which the community can start generating transcriptions, and add the additional services afterwards?

Serious question: is it valuable for the research community, or is it just a cool thing to do?

It is establishing a database of inscriptions.

Further Reading: Dictionary of Words in the Wild

Panel Discussion

Leif’s Augmented Epigraphy

Issues surrounding hosting images. Would images endanger the monument by stating where it is? Are there going to be so many of those issues that the project would be better off using an image hosting service, like Flickr, or an existing resource database like the British Museum’s? Museums that do not have the budget to 3D scan all their objects might like to have those community-generated resources available to them.

The methods researchers use to record and preserve inscriptions can be damaging to the object, and their work is not necessarily available to the general public. This would be a good way to put non-destructive methods to use in making some data available. It would also create a database of these objects and their locations, which could prove beneficial for the purposes of preserving them.

General Discussion

Divides between disciplines are relatively new, but the divisions between them are sometimes strong. Additionally, adding the ‘digital’ label might create further divisions. Why is this?

Further Reading: Writing as Material Practice

Discussion and collaboration networking

Greek Curse Tablets

Gabriel Bodard talks about an idea for a side project. There is a jigsaw puzzle of several hundred Greek curse tablets at the British Museum. The tablets are translucent and very hard to read. There is also another collection in France (the Louvre, maybe?). About eight hundred people are named in those tablets, which talk about the political affairs of a small town in Cyprus.

The writing is carved on the tablet in very thin strokes, and the material of the tablets is semi-transparent, which renders most methods of digitization almost useless.

Who are the appropriate people to work on this project? What do we do with this text if we do manage to digitize it?

Can we find out from the text if the damage is deliberate?

Other Topics

How do we create a model for linking metadata to 3D representations?

Using the ISS for creating narratives about cultural objects, not only texts.

END OF DAY 2