QUANTITATIVE APPROACH TO ANCIENT NEAR EASTERN GLYPTIC

by Sergio Camiz, Alessandro Di Ludovico, Elena Rova

Quantitative data analyses in research on Western Asiatic art history have mainly focused on the iconography of cylinder seals, most probably because of the large number of available specimens, their physical shape, their dense chronological and geographical distribution, and, above all, the repetitive patterns of most of the engraved images and their recurrent modes of use. All these factors provide an ideal context for quantitative analyses: on the one hand they facilitate the encoding of the seals’ iconographies and physical features, and on the other they allow large-scale comparisons and research on the multiple components (functional, iconographic, symbolic, etc.) of the ancient Near Eastern glyptic phenomenon.

There is no doubt that the earliest landmark in the field of quantitative approaches to ancient Western Asiatic art history – especially concerning glyptic – is to be found in the works of the French school of analyse logiciste. Within that frame, and mainly at the initiative and under the direction of Jean-Claude Gardin, historically fundamental contributions were produced from the 1950s onwards.

A true milestone in the digital encoding and filing of glyptic artifacts was the Répertoire analytique des cylindres orientaux, a huge open catalogue provided with a highly formalized encoding and an automated system for retrieving the records. The importance of the Répertoire lies not only in its ability to concretely demonstrate the potential of a theory and a methodology, but also in the fact that it was an open (hence expandable) catalogue collecting information and images related to all cylinder seals, whose publication had previously been scattered and inhomogeneous. Although the authors themselves were aware that it could hardly be kept alive in the following years, it was designed to be a dynamic global source of information on ancient Near Eastern seals. The encoding of such a diversified and heterogeneous corpus of materials was (and would still be today) a grueling challenge, which was taken on by a team from the CNRS with strategies based on linguistics, in line with the tradition of the Logicist school.

During the 1970s further experiments in the encoding and digital processing of glyptic iconography were proposed and tested in the USA by Marilyn Kelly-Buccellati and her collaborators. In this case a binary (presence/absence) encoding system was adopted in order to describe each representation carved on a seal as a sum of basic features. The outcome tended to be, in fact, a rigid catalogue of all retrievals that could be obtained through the automated method, that is, an assemblage of possible combinations of elements which already embedded the similarity relations among the records. This project, which focused only on the glyptic production of the Old Babylonian period, had no actual follow-up: it was presented to the scientific community, but its results were never made available.
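In modern terms, such a binary encoding might be sketched as follows; the feature names, seal identifiers, and query below are invented for illustration and do not reproduce the project’s actual code list.

```python
# A minimal sketch of a binary (presence/absence) encoding of seal imagery.
# Feature names and seal IDs are illustrative assumptions, not the original codes.
import pandas as pd

features = ["seated_deity", "standing_worshipper", "crescent",
            "inscription_panel", "animal_combat"]

# Each row describes one seal as a sum of basic features (1 = present).
seals = pd.DataFrame(
    [[1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 0, 0, 0, 1]],
    index=["seal_001", "seal_002", "seal_003"],
    columns=features,
)

# Retrieval then reduces to Boolean queries over the table, e.g. all seals
# showing a seated deity together with a crescent:
hits = seals[(seals["seated_deity"] == 1) & (seals["crescent"] == 1)]
print(hits.index.tolist())  # ['seal_001']
```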

Completely different, both in its perspectives and in the scientific questions it originally addressed, was the approach developed between the end of the 1980s and the early 2000s by Elena Rova and Sergio Camiz. The visual language of early historic cylinder seals of the late IV millennium BCE was encoded with the aim of studying it by means of multidimensional analysis techniques (presence/absence and textual Correspondence Analysis, and Hierarchical Classification). This led to the adoption of an encoding system similar to the logicist one, used, however, to explore and investigate the data set through data analysis techniques rather than to catalogue and organize the material.

Both the encoding system and the investigation methods were flexible and versatile enough to support different types of study on that corpus. They were specifically tailored to it, without aiming at a “universal coding” to be used on corpora from different periods and areas. The methods were mainly applied to detect compositional sequences of iconographic elements, outline thematic classes, make geographical and diachronic comparisons, and test existing hypotheses on the connection between the imagery of cylinder seals and their use as administrative instruments. Different experiments were carried out, with promising results, on binary and symbolic encodings of the corpus.
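The core of such a pipeline can be sketched on an invented presence/absence table: a Correspondence Analysis computed through the singular value decomposition of the standardized residuals, followed by a hierarchical classification of the seals in the resulting space. The data and the Ward criterion below are illustrative choices, not the project’s actual parameters.

```python
# Illustrative sketch: Correspondence Analysis of a presence/absence table,
# followed by hierarchical clustering of the row (seal) coordinates.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

N = np.array([[1, 1, 1, 0],     # rows: seals, columns: iconographic elements
              [1, 1, 0, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Correspondence Analysis via SVD of the standardized residuals.
P = N / N.sum()                          # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * sv) / np.sqrt(r)[:, None]   # principal coordinates of the seals

# Hierarchical classification of the seals in the CA space (Ward criterion).
Z = linkage(row_coords[:, :2], method="ward")
print(fcluster(Z, t=2, criterion="maxclust"))  # cluster label per seal
```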

A similar approach was part of the methodology of a research project begun in the early 2000s as a mixed qualitative and quantitative investigation into the development of presentation scenes in third millennium BCE Mesopotamian glyptic. Drawing on a linguistic and an analytical perspective, Alessandro Di Ludovico devised a system to encode in symbolic form the presentation scenes carved on seals crafted in different historical periods. Besides focusing on iconographic elements, special attention was paid to the compositional structure of the cylinder seals and to the constraints imposed by the cylindrical shape of the support. The encoding was then converted into a presence/absence table, with the aim of applying algorithms belonging to the family of Artificial Neural Networks (ANNs). These experiments included a number of tests on the quality of the encoding, and even a simulation of an automated reconstruction of profiles and classes of scenes through machine learning.
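The general idea of such an experiment can be sketched with a small generic feed-forward network trained on presence/absence vectors; the network type, the toy data, and the class labels below are assumptions for illustration, since the text does not specify the architecture used in the original tests.

```python
# A generic sketch of the ANN step: learning scene classes from
# presence/absence vectors. All data here are randomly generated.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 12)).astype(float)  # 60 seals, 12 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # toy "scene class" labels

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:50], y[:50])           # train on part of the corpus ...
print(clf.score(X[50:], y[50:]))  # ... and test class recovery on the rest
```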

A further step in the investigation consisted in applying the methods used by Rova and Camiz, so as to compare their outcomes with those produced by the ANNs. The encoding was converted into a highly formalized textual description, which complied with the original logic of the first experiments; then, logical relations among iconographic elements (represented by single textual forms) and chains of forms (segments) were singled out and projected onto the graphical outputs of the analyses. The first results helped the project to develop further, fostering new, still ongoing research steps, which aim both to refine or re-interpret the data set and to test experimental data analysis methods.
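The following sketch illustrates the general idea of a formalized textual description and of the extraction of recurrent chains of forms; the vocabulary and the example scenes are invented for illustration.

```python
# Minimal sketch: scenes as ordered sequences of textual forms, and the
# extraction of recurrent chains of forms (segments) across a corpus.
from collections import Counter

scenes = [
    ["worshipper", "interceding_goddess", "seated_king", "inscription"],
    ["worshipper", "interceding_goddess", "seated_deity"],
    ["worshipper", "seated_king", "inscription"],
]

def segments(seq, n):
    """All contiguous chains of n forms in a scene description."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

# Count recurrent two-form chains; frequent segments can then be projected
# onto the graphical outputs of a textual analysis.
counts = Counter(s for scene in scenes for s in segments(scene, 2))
print(counts.most_common(2))
```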

 

VMAC – 2017