
Friday, 20 December 2019

Archaeology, Alexa and NLP

Hello everyone,
this post regards some tests we have been doing in recent weeks about the application of NLP (Natural Language Processing) to archaeology. This research is conducted by our friend Andres Reyes (Arc-Team), an expert in this field.
Among the many possibilities of NLP in CH (Cultural Heritage), we decided to start with something particular and probably not so easy, but very useful for everyday work: a project manager for archaeology. The video below shows a preview of the system (how it retrieves an old project).


 

To understand what I mean, I have to explain briefly why this tool would be a great help in our field. In Professional Archaeology (or, if you prefer, Commercial Archaeology) projects can be divided into 4 main categories: excavations (probably 70% of the work), surveys (and explorations in general), Cultural Heritage Enhancement (Valorization) and studies (mainly research on specific archaeological and historical topics). From a logistical point of view, the most critical projects are the ones related to excavations and surveys, especially if performed in extreme conditions (Glacial Archaeology, High Mountain Archaeology, Underwater Archaeology, Speleoarchaeology, etc...), since in most cases the office (and all its comforts) is far away. Even if assisted by the strong computerization of the last 15 - 20 years, field operations can end up with errors, especially if many people work simultaneously on the same project in different areas (for instance, a common mistake is giving the same code to different layers or artefacts).
A way to try to avoid errors is to use DBMSs (DataBase Management Systems) and GIS directly in the field, but this solution has some weaknesses, mainly related to the devices on which this software runs and to the necessity of finding a comfortable location (even if temporary) to insert the data. Thanks to the wider and wider coverage of the internet and to the new generation of smartphones, it is now simpler and faster to insert data into a main server through a DBMS with a well designed interface (for GIS it is still better to work with a rugged laptop). Nevertheless these operations are still time consuming and keep the archaeologist busy for a while, with all the difficulties coming from the use of a small touch-screen (gloves, dirty hands, rain, etc...). For this reason a Project Manager based on vocal commands could improve the work in the field, avoiding the main errors deriving from some of the most common stress factors (short time-tables, weather conditions, several people working simultaneously, etc...).
Despite our decision to work with FLOSS, for this first experiment with NLP we decided to start with Amazon's Alexa virtual assistant, for several reasons: the great effort of Amazon in developing the system, its wide diffusion among users and the good support for Italian (the language of our first prototype). Nevertheless, as soon as we have a first prototype, we plan to test and develop open source solutions as well, like Mycroft. BTW, all our code will be released ASAP, under open source licenses, in this public repository on GitLab.
Currently our prototype is at a very early stage, but we have already modified it a couple of times, with significant changes in our strategy. For instance, in order to keep everything simple, at the beginning we relied on shared Google Docs spreadsheets. This solution was more than enough to manage the lists of codes related to US (Unità Stratigrafiche, EN: Stratigraphical Units), artefacts, samplings and documentation (in 3D and 2D), with also the possibility to keep the budget and the working hours under control. Soon we changed this strategy in favour of a better-performing DBMS, based on the FLOSS PostgreSQL. Currently we are developing more options, like the possibility to ask the Project Manager in which project we worked during a specific month.
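To give a concrete idea of the kind of check such a Project Manager could perform, here is a minimal sketch of the duplicate-code control mentioned above (refusing the same US code twice within one project). It is purely illustrative: the table and function names are invented, and Python's sqlite3 stands in for the PostgreSQL backend of the actual prototype so that the example is self-contained.

```python
# Hypothetical sketch (not the actual prototype code): a voice-driven project
# manager could run a check like this before registering a new Stratigraphical
# Unit (US), to avoid assigning the same code twice in the same project.
# sqlite3 is used here only for self-containment; the prototype uses PostgreSQL.
import sqlite3

def register_us(conn, project, us_code):
    """Insert a new US code, refusing duplicates within the same project."""
    cur = conn.execute(
        "SELECT 1 FROM us WHERE project = ? AND code = ?", (project, us_code)
    )
    if cur.fetchone():
        return f"Warning: US {us_code} already exists in project {project}"
    conn.execute("INSERT INTO us (project, code) VALUES (?, ?)", (project, us_code))
    conn.commit()
    return f"US {us_code} registered in project {project}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE us (project TEXT, code TEXT)")
print(register_us(conn, "LakeSurvey2019", "101"))
print(register_us(conn, "LakeSurvey2019", "101"))  # second attempt is rejected
```

In a vocal-command setting, the returned string would simply be spoken back to the archaeologist, who never has to touch the screen.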
I hope this post will be useful. If you want to collaborate on the project, please contact us. Have a nice day!

Monday, 11 June 2018

Imago Animi editathon

Hi all,
sorry for the long silence, but working in archaeology and cultural heritage is not a simple task and this year we had very little time for our open projects and, consequently, for ATOR.
This new post is to report that in 2018 we went on with the open source research about "Facce. I molti volti della storia umana" ("Faces. The many aspects of human history"). The new step consisted in the opening of a new derived exhibition, called "Imago Animi. Volti dal passato" ("Imago Animi. Faces from the Past"), hosted in the Councilor's Palace (Palazzo Assessorile) of Cles (Trentino - Italy).
One of the goals of this new event is to continue the scientific dissemination of the open source material produced so far on the topic of the human face, from an anthropological, archaeological and artistic point of view. To speed up this work, we are trying to organize a wiki editathon focused on the exhibition "Imago Animi", in order to enrich many of the Wikipedia pages connected to these topics and to upload old and new facial reconstructions done during the preparation of the event.
As an example, I post here the reconstructive portrait of Bernardo Clesio, the Italian cardinal, born in Cles, who was the main contributor to the famous Council of Trent. His facial reconstruction is one of the new works done by @Cicero Moraes (Arc-Team expert in 3D forensic facial reconstruction) for the new exhibition. These images are also particular because they were not done through the forensic facial reconstruction protocol we developed over the last years here on ATOR, but rather with a new iconographic technique, based on an art-historical study of the known portraits of Bernardo Clesio (performed by Marcello Nebl), validated by a comparison to select the common facial features (performed by @Luca Bezzi), in order to achieve a philologically reconstructed 3D portrait (modeled by @Cicero Moraes). This workflow was necessary because it was not possible to organize a forensic study of the remains of the cardinal (due to the strict time table of the preparation of "Imago Animi").
Here below are the two iconographic reconstructive portraits performed with the technique described above.

The iconographic reconstructive 3D portrait of Bernardo Clesio

The iconographic reconstructive 3D portrait of Bernardo Clesio (profile)

Like always, the material uploaded in ATOR is licensed under a Creative Commons Attribution 4.0 International License.
I hope this post was useful and that the editathon will give a small contribution to the Wikipedia project.
Have a nice day!

Monday, 24 April 2017

ArcheOS Hypatia Virtual Globe: Cesium

Hi all,
I am starting here a series of short posts to show some of the features of the main software selected for ArcheOS Hypatia, trying to explain the reasons for these choices. The first category I'll deal with is that of Virtual Globes. Among the many available FLOSS options, one of the applications which best meets the needs of archaeology is certainly Cesium. This short video shows its capability to import complex georeferenced 3D models, which is a very important possibility for archaeologists. In this example I imported into Cesium the 3D model (done with Structure from Motion) of a small boat which lies on the bottom of an alpine lake (more info in this post).


Soon I'll post other short videos to show other features of Cesium. Have a nice evening!

Thursday, 17 November 2016

Torre dei Sicconi - Chapter 9 - Rebirth

After surveying, digging, historical research and virtual reconstruction, here is the final result:

Watch, in the last chapter of Arc-Team's "Torre dei Sicconi" series, our idea of how the castle looked in the Middle Ages.

Enjoy!

Torre dei Sicconi - Chapter 9 - Rebirth


Wednesday, 6 July 2016

Arc-Team: Open your Mind and share your Knowledge

Arc-Team Archaeology was founded in 2004 as a way to open Archaeology to people through a free & open approach.

Since the first day we have shared our experiences with this type of research with our friends and colleagues.

We are still searching for new horizons and there is no better way than being able to open our mind and share our knowledge.

Let's go on collaborating and sharing our results, our techniques and our experiences!


Sunday, 6 March 2016

Intervallo n° 3

As you probably noticed, it has been a long time since we wrote the last post, so it is the perfect time for a new "Intervallo", just to say that we are still alive and that we will soon write something new.
I hope you will enjoy this new useless short videoclip :). If you missed the other two (and you have 70 seconds to waste), here are the links: 1, 2.



Have a nice day!

Wednesday, 19 August 2015

ArcheoFOSS I, proceedings of the workshop now available as Open Access

Hi all,
this quick post is to announce that the proceedings of the first workshop "Open Source, Free Software e Open Format nei processi di ricerca archeologici" (EN: "Open Source, Free Software and Open Formats in archaeological research processes"), which in later editions would become known as ArcheoFOSS, are finally available as Open Access. The event took place in Grosseto in May 2006.
Since Open Access in archaeology has always been one of the main topics of this workshop, some days ago we started a discussion on the official mailing list to try to free some of the proceedings which are currently available only as printed publications. The first result has been the release of the articles collected in the first edition, thanks to the kindness of Giancarlo Macchi Janica. Currently we are working on the other two workshops which are not yet available: ArcheoFOSS V (held in Foggia in 2010) and ArcheoFOSS VI (held in Naples in 2011).
The image below shows the front cover of the digital publication of the proceedings of the first edition, while here you can read the official announcement about the Open Access publication (pdf here).

Front cover of proceedings of the first workshop "Open Source, Free Software e Open Format nei processi di ricerca archeologici"
A special thanks also to +Stefano Costa for uploading everything on ArcheoFOSS website.

PS

In the proceedings you can also find some articles written by Arc-Team members, regarding:
1. One of the first releases of ArcheOS (v.1.6): here on Academia and here on ResearchGate (by +Alessandro Bezzi, +Luca Bezzi, +Denis Francisci, +Rupert Gietl)
2. The use of +GRASS GIS in archaeology: Academia / ResearchGate (by Michael Burton, +Alessandro Bezzi, +Luca Bezzi, +Denis Francisci, +Rupert Gietl, +Markus Neteler)
3. The use of FLOSS in an archaeological case study: Academia / ResearchGate (by +Luca Bezzi, Stefano Boaro, Giovanni Leonardi, +damiano lotto)

Thursday, 2 July 2015

The archaeometric excavation

Last year, on November 28, Arc-Team joined the conference "Lo scavo archeometrico: scienza e tecnologia applicate allo scavo archeologico" (EN: "The archaeometric excavation: science and technology applied to the archaeological excavation"), which was held in Rovereto (Italy) at the Museo Civico.
During the meeting we gave a presentation titled "Professional archaeology. Innovations and best practice with free technology. Toward an Open Research." Today I uploaded the slides to our server, so that we can share this work (as always under a Creative Commons Attribution - CC BY license).
As usual the presentation has been done with impress.js through the Graphical User Interface Strut (both GPL licensed) and it is optimized for Firefox or Iceweasel (better visualized here).




Here is a little explanation of the individual slides:

SLIDE 1
A fast presentation regarding Arc-Team.
SLIDE 2
An animation representing the importance of geocoding in archaeology (from space to site).

SLIDE 3
Differential GPS and Total Station: the main tools needed by archaeologists on the field (to georeference every single element of the archaeological record).

SLIDE 4
Some examples of geocoding in archaeology: everyday work, project in extreme conditions and missions abroad...

SLIDE 5
... surveys and excavations

SLIDE 6
In survey projects the geocoding tolerance for archaeology is higher, so we are testing alternative solutions to build a low-cost and open source GPS with centimetric accuracy, using the software RTKLIB (or its Android port).

SLIDE 7
All the recorded data (in 2D and 3D) can be imported into an open source GIS.

SLIDE 8
For aerial archaeology, since 2008 we have been working with open source DIY UAVs, like the UAVP or the KKcopter (in the slide).

SLIDE 9
Our latest UAV prototype and an example of a 3D pointcloud from aerial pictures.

SLIDE 10
Since 2014 we have been testing DIY cameras (using Public Lab filters) for NDVI and NGB pictures in archaeological remote sensing.

SLIDE 11
By simply removing the IR filter, a normal camera can be used for endoscopic prospections in low light conditions.

SLIDE 12
In the field of geophysical prospections we use a DIY machine for Electrical Resistivity Imaging. The data can be visualized in a GIS (e.g. GRASS GIS in the slide), using the east and north coordinates and the resistivity values.

SLIDE 13
Some geoarchaeological analyses can be performed directly in the field, like the sedimentation test (using the soil triangle) for the texture or the lithologic recognition for the skeleton.

SLIDE 14
Also some basic analytical chemistry can help during the excavation (giving indications on the ancient use of the soil), to verify the presence/absence of phosphates or of organic remains.

SLIDE 15
Other preliminary laboratory analyses (flotation and sieving) can prepare the samples for further investigation. Also in this case we use a DIY machine.

SLIDE 16
Colorimetry can be performed in many ways. Currently we are testing different options, like the open source spectrometer of Public Lab.

SLIDE 17
For some laboratory geoarchaeological analyses (e.g. microscopic morphology) we use normal optical microscopes, while for more advanced studies we outsource the service (e.g. SEM or energy dispersive x-ray spectroscopy).

SLIDE 18
Currently we are testing the potential of the FLOSS MorphoJ to speed up the recognition of carpological remains.

SLIDE 19
To document archaeozoological remains in the field, we use the standard digital documentation techniques (in 2D and 3D), with FLOSS (e.g. bidimensional photomapping with the Aramus method or 3D recording through SfM and MVSR).

SLIDE 20
In the field of evolutionary anthropology we developed a new technique (anatomical deformation) thanks to the FLOSS Blender.

SLIDE 21
The same software (Blender) is used in the process of archaeological forensic facial reconstruction

SLIDE 22
Open source GIS (e.g. GRASS) are the main software we use to process and manage the recorded data

SLIDE 23
Thanks to open source UAVs and Blender we experimented with new ways to disclose archaeological data in a four-dimensional way (x, y, z, t).



A more detailed explanation of the entire presentation will come soon with the related article. For the topics which were already discussed on ATOR, I suggest reading the related posts (see the webography below). For the latest experiments (e.g. near infrared, NDVI and NGB; Electrical Resistivity Imaging; sedimentation test; lithologic recognition in the field; flotation and sieving; colorimetry; microscopic morphology; MorphoJ), we will try to write something as soon as possible.

Bibliography

Lo scavo archeologico professionale, innovazioni e best practice mediante metodologie aperte e Open research (here on Research Gate and here in Academia)

Webography (from ATOR):

3D and 4D GIS

SfM and MVSR

Aerial 3D documentation

Archaeological endoscopy

Geoarchaeology

Archaeobotany

Evolutionary anthropology
Anatomical Deformation Technique (ADT): validation; ADT Paranthropus boisei; ADT Homo rodhesiensis;

Archaeoanthropology
Archaeological Forensic Facial Reconstruction (AFFR); Digital AFFR: technique validation; AFFR: state of the arts; AFFR: poster;

Archaeological dissemination
Caldonazzo Castle 4D (case of study);

Saturday, 13 June 2015

Intervallo n° 2

As you probably noticed, it has been a long time since we wrote something new on ATOR, and the reason is simple: summer is the most productive season for archaeologists, so most of us are working in the field and we have no time for new posts.
Luckily we are engaged in interesting new projects and this will give us the opportunity to experiment with new solutions and test new techniques, so we will soon have new material to share through ATOR.
In the meantime, like I did in 2013 with this post, I leave you with a short "Intervallo", just to say that we are still active and that we will come back soon with new posts and articles.
Have a nice day!


Friday, 24 April 2015

Doing quantitative archaeology with open source software

This short post is written for archaeologists who frequently perform common data analysis and visualisation tasks in Excel, SPSS or similar commercial packages. It was motivated by my recent observations at the Society of American Archaeology meeting in San Francisco - the largest annual meeting of archaeologists in the world - where I noticed that the great majority of archaeologists use Excel and SPSS. I wrote this post to describe why those packages might not be the best choices, and explain what one good alternative might be. There’s nothing specifically about archaeology in here, so this post will likely be relevant to researchers in the social sciences in general. It’s also cross-posted on the Software Sustainability Institute blog.

Prevailing tools for data analysis and visualization in archaeology have severe limitations

For many archaeologists, the standard tools for any kind of quantitative analysis include Microsoft Excel, SPSS and, for more exotic methods, PAST. While these programs are widely used, they have a few limitations that are obvious to anyone who has worked with them for a long time, and that raise the question of what alternatives are available. Here are three key limitations:
  • File formats: each program has its own proprietary format, and while there is some interoperability between them, we cannot open their files in any program that we wish. And because these formats are controlled by companies rather than a community of researchers, we have no guarantee that the Excel or SPSS file format of today will be readable by any software 10 or 20 years from now. 
  • Click-trails: the main interaction with these programs is by using the mouse to point and click on menus, windows, buttons and so on. These mouse actions are ephemeral and unrecorded, so many of the choices made during a quantitative analysis in Excel are undocumented. When researchers want to retrace the steps of their workflow days, months or years after the original effort, they are dependent on their memory or some external record of the choices made in the analysis. This can make it very difficult for another person to understand how an analysis was conducted, because many of the details are not recorded. 
  • Black boxes: the algorithms these programs use for generating results are not available for convenient inspection by the researcher. The programs are a classic black box, where data and settings go in, and a result comes out, as if by magic. For moderately complicated computations, this can make it difficult for the researcher to interpret their results, since they do not have access to all of the details of the computation. This black box design also limits the extent to which the researcher can customise or extend built-in methods for new applications.
How to overcome these limitations?

For a long time archaeologists had few options to deal with these problems because there were few alternative programs. The general alternative to using a point-and-click program is writing scripts to program algorithms for statistical analysis and visualisations. Writing scripts means that the data analysis workflow is documented and preserved, so it can be revisited in the future and distributed to others for them to inspect, reuse or extend. For many years this was only possible using ubiquitous but low-level computer languages such as C or Fortran (or exotic higher level languages such as S), which required a substantial investment of time and effort, and a robust knowledge of computer science. In recent years, however, there has been a convergence of developments that have dramatically increased the ease of using a high level programming language, specifically R, to write scripts to do statistical analysis and visualisations. As an open source programming language with special strengths in statistical analysis and visualisations, R has the potential to be a solution to the three problems of using software such as Excel and SPSS. Open source means that all of the code and algorithms that make the program operate are available for inspection and reuse, so that there is nothing hidden from the user about how the program operates (and the user is free to alter their copy of the program in any way they like, for example, to increase computation speed).

Three reasons why R has become easier to use

Although R was first released in 1993, it has only been in the last five years or so that it has really become accessible and a viable option for archaeologists. Until recently, only researchers steeped in computer science and fluent in other programming languages could make effective use of R. Now the barriers to getting started with R are very low, and archaeologists without any background with computers and programming can quickly get to a point where they can do useful work with R. There are three factors that are relevant to the recent increase in the usability of R, and that any new user should take advantage of:
  • the release of an Integrated Development Environment, RStudio, especially for R
  • the shift toward more user-friendly idioms of the language resulting from the prolific contributions of Hadley Wickham, and 
  • the massive growth of an active online community of users and developers from all disciplines.
1. RStudio

For the beginner user of R, the free and open source program RStudio is by far the easiest way to quickly get to the point of doing useful work. First released in 2011, it has numerous conveniences that simplify writing and running code, and handling the output. Before RStudio, an R user had little more than a blinking command line prompt to work with, and might struggle for some time to identify efficient methods for getting data in, running code (especially if more than a few lines) and then getting data and plots out for use in reports, etc. With RStudio, the barriers to doing these things are lowered substantially. The biggest help is having a text editor right next to the R console. The text editor is like a plain text editor (such as Notepad on Windows), but has many features to help with writing code. For example, it is code-aware and automatically colours the text to make it a lot easier to read (functions are one colour, objects another, etc.). The code editor has a comprehensive auto-complete feature that shows suggested options as you type, and gives in-context access to the help documentation. This makes spelling mistakes rare when writing code, which is very helpful. There is a plot pane for viewing visualisations and buttons for saving them in various formats, and a workspace pane for inspecting data objects that you've created. These kinds of features lower the cognitive burden of working with a programming language, and make it easier to be productive with a limited knowledge of the language.

2. The Hadleyverse

A second recent development that makes it easier for a new user to be productive using R is a set of contributed packages affectionately known in the R user community as the Hadleyverse. User contributed packages are add-on modules that extend the functionality of base R. Base R is what you get when you download R from r-project.org, and while it is a complete programming language, the 6000-odd user contributed packages provide ready-made functions for a vast range of data analysis and visualization tasks. Because the large number of packages can make discovering relevant ones challenging, they have been organised into 'task views' that list packages relevant to specific areas of analysis. There is a task view for archaeology, providing an annotated list of R packages useful for archaeological research. Among these user-contributed packages are a set by Hadley Wickham (Chief Scientist at RStudio and adjunct Professor at Rice University) and his collaborators that make plotting better, simplify common data analysis activities, speed up importing data into R (including from Excel and SPSS files), and improve many other common tasks. The overall result is that for many people, programming in R is shifting from the base R idioms to a new set of idioms enabled by Wickham's packages. This is an advantage for the new user of R because writing code with Wickham's packages results in code that is easier for people to read, as well as being highly efficient to compute. This is because it simplifies many common tasks (so the user doesn't have to specify exotic options if they don't want to), uses common English verbs ('filter', 'arrange', etc.), and uses pipes. Pipes mean that functions are written one after the other, following the order they would appear in when you explain the code to another person in conversation. 
This is different from the base R idiom, which doesn't have pipes and instead has functions nested inside each other, requiring them to be read from the center (or inside of the nest) to the left (outside of the nest), and use temporary objects, which is a counter-intuitive flow for most people new to programming.
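The contrast between the two idioms can be illustrated with a rough analogy in Python rather than R (Python has no native pipe operator, so the left-to-right flow is approximated here with named intermediate steps; the data and functions are invented for illustration):

```python
# Illustrative analogy (in Python, not R) of nested vs. step-by-step style.
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Nested idiom: to follow it you must start from the innermost call
# (filter) and read outward, against the order of execution.
result_nested = sorted(set(filter(lambda x: x > 2, data)))

# Step-by-step idiom: each operation appears in the order you would
# explain it aloud, as a pipe does in R.
kept = filter(lambda x: x > 2, data)   # keep values above 2
unique = set(kept)                     # drop duplicates
result_piped = sorted(unique)          # sort ascending

assert result_nested == result_piped   # same computation, different readability
print(result_piped)                    # [3, 4, 5, 6, 9]
```

The point is not the arithmetic but the reading order: the second version can be followed top to bottom without mentally unwinding a nest of parentheses.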

3. Big open online communities of users

A third major factor in the improved accessibility of R to new users is the growth of an active online community of R users. There has long been an email list for R users, but more recently, user communities have formed around websites such as Stackoverflow. Stackoverflow is a free question-and-answer website for programmers using any language. The unique concept is that it gamifies the process of asking and answering questions, so that if you ask a good question (i.e. well-described, including a small self-contained example of the code that is causing the problem), other users can reward your effort by upvoting your question. High quality questions can attract very quick answers, because of the size of the community active on the site. Similarly, if you post a high-quality answer to someone else's question, other users can recognise this by upvoting your answer. These voting processes make the site very useful even for the casual R user searching for answers (and who may not care for voting), because they can identify the high-quality answers by the number of votes they've received. It's often the case that if you copy and paste an error message from the R console into the google search box, the first few results will be Q&A pages on Stackoverflow. This is a very different experience compared to using the r-help email list, where help can come slowly, if at all, and searching the email list, where it's not always clear which is the best solution. Another useful output from the online community of R users is the blogs that document how to conduct various analyses or produce visualizations (some 500 blogs are aggregated at http://www.r-bloggers.com/). The key advantage of Stackoverflow and blogs, aside from their free availability, is that they very frequently include enough code for the casual user to reproduce the described results. 
They are like a method exchange, where you can collect a method in the form of someone else's code, and adapt it to suit your own research workflow.

There's no obvious single explanation for the growth of this online community of R users. Contributing factors might include a shift from SAS (a commercial product with licensing fees) to R as the software used to teach students in many academic departments, following the Global Financial Crisis of 2008 that forced budget reductions at many universities. This led to a greater proportion of recent generations of graduates being R users. The flexibility of R as a data analysis tool, combined with the rise of data science as an attractive career path and the demand for data mining skills in the private sector, may also have contributed to the convergence of people who are active online and are also R users, since so many of the user contributed packages are focused on statistical analyses.

So What?

The prevailing programs used for statistical analyses in archaeology have severe limitations resulting from their corporate origins (proprietary file formats, uninspectable algorithms) and mouse-driven interfaces (impeding reproducibility). The generic solution is an open source programming language with tools for handling diverse file types and a wide range of statistical and visualization functions. In recent years R has become a very prominent and widely used language that fulfills these criteria. Here I have briefly described three recent developments that have made R highly accessible to the new user, in the hope that archaeologists who are not yet using it might adopt it as a more flexible and useful program for data analysis and visualization than their current tools. Of course it is quite likely that the popularity of R will rise and fall like many other programming languages, and ten years from now the fashionable choice may be Julia or something that hasn't even been invented yet. However, the general principle that a scripted analysis using an open source language is better for archaeologists, and science generally, will remain true regardless of the details of the specific language.

Sunday, 27 July 2014

July 27, 2014: just another birthday for ATOR

... and here we are: the third birthday of ATOR! 
To respect the "tradition" (1st anniversary, 2nd anniversary), today I'll publish some data about our "open research" blog.
This year, unfortunately, there are no new entries among the active authors (or AuThORs, as someone already calls us), but the number of posts has (obviously) increased from 160 to 215, leading to 767 comments from the community. Currently (22:31 in Italy) we have had 253116 page views (41708 since the last reset of the revolvermaps counter... this time it was my fault). Our members are, up to now, 85, which means we have 24 new regular readers.


In my opinion, our little experiment in "sharing tests, problems and results" of our research is working, due to some events that have occurred over the past three years:

  1. through ATOR, Arc-Team's research in archaeology has increased in terms of development and results
  2. ATOR's posts were useful also for other disciplines and sciences (soon more news about this topic)
  3. older projects (e.g. ArcheOS) took advantage of ATOR's visibility
  4. new projects (e.g. the Taung project) and derived subprojects (e.g. the exhibition "Facce") started also thanks to ATOR
  5. we have improved our English :)

As I wrote last year: "This short post is intended as a thanks for all the people composing the community of ATOR, readers and authors as well", but this time I want to mention the authors (in alphabetical order), without whom our blog could not exist:
Bernhard Fischer 

Thank you all!

Monday, 12 May 2014

WebRTIViewer

Hi all,
I am writing this post to complete the one +Rupert Gietl wrote regarding Large Scale Reflectance Transformation Imaging. As you read in that article, Rupert, using +GRASS GIS, virtually re-built the light conditions necessary to process an RTI image of an entire archaeological area. 
This is just one of the tests we are carrying out with RTI techniques, since we are trying to evaluate this methodology from different perspectives. Obviously, during our experiments, we came across interesting research carried out by other institutions. 
This post regards one of the projects we found on our way (I will write soon about other related works) and, more precisely, a piece of software to share RTI images over the internet: WebRTIViewer. The source code of the application, an HTML5-WebGL viewer, is released under the terms of the General Public License 3 (GPL 3) on the website of its author: +Gianpaolo Palma.
Here is an example of its application, using Rupert's data of the archaeological site (better visualized here). To see it, just turno on the light and, holding the left button, move your mouse around.





The software comes with two binary tools (one for Windows 32 bit and the other for Windows 64 bit), which are necessary to prepare the RTI images for the viewer. For this reason I wrote to Gianpaolo Palma to ask whether it would be possible to include WebRTIViewer and the other applications in ArcheOS (for this we would need access to the source code of the binary tool, called webGLRTIMaker), and he kindly answered that he likes the idea and would agree, but before releasing the code of webGLRTIMaker under an open license he will ask the opinion of his lab colleagues (the Visual Computing Lab). This institute, part of the Italian CNR-ISTI, is the same one that develops other nice Free/Libre and Open Source (FLOSS) software useful in archaeology, such as MeshLab, which often appears in our posts, or 3DHOP (a post about it coming soon). Hopefully, if everything goes well, we will have another nice tool to add to the ArcheOS software selection, helping Cultural Heritage professionals share data through RTI technologies.

Here below you can see WebRTIViewer in action again (better visualized here), this time with data coming from the archaeological excavation of Khovle Gora (in Georgia), where we work for the University of Innsbruck (Austria) and provide technical support for the fieldwork directed by Dr. Walter Kuntner of the Institut für Alte Geschichte und Altorientalistik.



Thursday, 27 February 2014

Digital Archaeology at Lund University

This year, as every year since 2011, +Alessandro Bezzi and I taught some lessons during the course "Archaeology and Ancient History: Digital Archaeology, GIS in Archaeology" at Lund University, held by +nicolò dell'unto. We used the opportunity to update the presentation with which we always start the first lecture. Here below you can see its latest version, made with impress.js (just click on the first slide and use the spacebar to navigate).



For a better view, click here

The main topic is digital archaeology (or "computational archaeology", as it is also known in Italy).
We begin by defining five main operations common to any archaeological project: data acquisition, processing, management, analysis and sharing. The first three steps refer to the documentation workflow, while the last three are related to the actual research process (data management, of course, belongs to both phases).
We then analyze each step, starting with data acquisition, which is mainly based on hardware devices. During this operation two elements are normally registered, points and pictures, in order to virtually recover what the archaeological excavation is destroying. With points and pictures it is possible to document objects (artifacts and ecofacts) and actions (basically the archaeological samplings), and their elaboration or, in some cases, combination allows researchers to record lines, polygons, 3D surfaces and real volumes, registering also the most complex elements of the archaeological record (layers, structures, etc...).
Unlike data acquisition, data processing is mainly based on software. Nowadays it can be divided into two orders of operations: standard procedures (raw coordinate elaboration, 2D photomapping, 2D vector archaeological drawing) and advanced techniques (3D restitution, volume calculation and 3D modeling). The very first and most basic step needed to visualize recorded data is to turn the raw coordinates, registered with a total station or an RTK GPS, into a GIS-readable format (e.g. CSV or WKT). By combining points and pictures it is also possible to create georeferenced photomosaics, using photomapping techniques (e.g. the metodo Aramus, the Khovle method or the newest Corte Inferiore method). Once a complete georeferenced photomosaic is obtained, a GIS can be used to draw over the raster level, using one or more vector layers, and to connect them with a database. Advanced documentation techniques are more directly related to 3D and can be based on different methodologies to extract morphological, topological and metric information from one or more pictures (e.g. SvR, SfM, IBM, 3D photogrammetry, etc...). With this information it is possible to calculate the real volumes of the elements of the archaeological record and to use this data to reconstruct depositional and post-depositional processes, resorting, when necessary, to 3D modeling. Normally, during the different workflows involved in data processing, many kinds of information are elaborated with raster, vector and voxel graphics in 2 (x,y), 3 (x,y,z) or 4 (x,y,z,t) dimensions. The final aim is to set up a system able to handle such a variety of data, and this system is the GIS.
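To make the first of these standard procedures concrete, here is a minimal sketch in Python of how raw coordinate rows could be turned into a GIS-readable WKT format; the column names and coordinate values are invented for illustration, not taken from any real survey:

```python
import csv
import io

def points_to_wkt(csv_text):
    """Convert raw 'id,x,y,z' coordinate rows (as exported, for example,
    from a total station) into WKT POINT Z strings, keyed by point id."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["id"]: f'POINT Z ({row["x"]} {row["y"]} {row["z"]})'
            for row in reader}

# Hypothetical survey points (projected coordinates in metres).
raw = """id,x,y,z
p1,656342.10,5143210.55,312.40
p2,656343.85,5143211.02,312.35"""

wkt = points_to_wkt(raw)
print(wkt["p1"])  # POINT Z (656342.10 5143210.55 312.40)
```

The resulting WKT strings can be loaded directly into most GIS packages or spatial databases as point geometries.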
In fact GIS software, combined with DBMSs, is extremely useful during the data management phase, precisely because of its capacity to handle different kinds of information (as many as the disciplines and sciences that assist archaeology in its task). The use of such instruments helps optimize the research, especially in comparison with traditional techniques, not only during data management but also during the more delicate stage of data analysis (when most of the cognitive processes are involved).
Among other things, in this fourth step the importance of using open source software and tools becomes more evident, as they allow continuous control over every single process of a study that can lead to the elaboration of new theories. Of course, not all analyses are equally sensitive in this respect: for the simplest research questions (anastylosis, building techniques, basic geomorphology, etc...) knowing the source code of the applications is not strictly mandatory, also because in these cases the main examinations are performed directly by humans. On the other hand, for more complex studies (landscape archaeology and Cost Surface Analysis, statistics, advanced geomorphology, etc...), it is very important to have complete access to the formulas and algorithms used by the software, in order to keep human control and not delegate entirely to the computer, among difficult quantitative calculations, the more delicate qualitative investigations (in which the human operator is still essential). In this way it is possible to correctly study all the different information collected during the archaeological research, while also allowing for future integrations (GIS is an open system from a temporal point of view). The final goal of data analysis is to share results with the community (scientific and otherwise), which is the best way to improve the archaeological discipline itself, especially by exploiting the potential of the internet.
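To illustrate why inspectable algorithms matter for something like Cost Surface Analysis, here is a deliberately simplified, self-contained sketch of an accumulated-cost computation: a Dijkstra traversal over a small friction grid, similar in spirit (but not in detail) to tools such as GRASS's r.cost. The grid values and movement model are invented for the example:

```python
import heapq

def accumulated_cost(grid, start):
    """Accumulated cost surface over a 2D friction grid (4-connected moves;
    the cost of a move is the friction value of the cell being entered)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[start[0]][start[1]] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

# A tiny friction grid: the row of 9s acts as a barrier (e.g. a steep slope).
grid = [[1, 1, 1],
        [9, 9, 1],
        [1, 1, 1]]
cost = accumulated_cost(grid, (0, 0))
print(cost[2][0])  # 6.0 -> the cheap route detours around the barrier
```

Because every line of the algorithm is visible, a researcher can verify exactly how the "least cost" is defined, which is the kind of control the paragraph above argues for.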
This leads us to the final step of an archaeological project (data sharing), which can follow different channels, such as traditional publication, e-publication (e.g. webgis), exhibitions, etc... The most important thing, at least for scientific disclosure, is to grant public access to all the information used for the study (not only the filtered data, but also the raw data), in order to propose new hypotheses and, at the same time, give all the elements necessary to verify them (no dogma, no authority principle).
To summarize the meaning of this contribution: considering archaeology as both a science (empirical approach) and a humanity (speculative approach), we can see how computational archaeology helps to improve the scientific (empirical) approach, which is often underestimated, granting more objective data acquisition and processing than traditional techniques, especially during the critical phase of the archaeological excavation. In fact, unlike scientific experiments, the archaeological excavation is unrepeatable, being the most destructive approach of the discipline (and, at the same time, the most important).

PS

All the screenshots were taken in ArcheOS. Some of them relate to quite old projects; we will slowly replace them with more up-to-date images...

Monday, 25 February 2013

Cloud distance tool.

I was working on different SfM/IBM models of a grave we excavated in 2010. We have the documentation of four different levels (see picture below). It was a complex archaeological context, with two skeletons buried at different times (a double burial), both partially destroyed by the construction of the Renaissance apse. Moreover, the tomb was built on the side of a prehistoric house.



I tried to rectify the point clouds inside CloudCompare v. 2.4 (normally I use GRASS with the ply importer addon, or MeshLab) and I discovered this fantastic tool: compute cloud/cloud distance. It can calculate the distance between two different overlapping meshes, similarly to the GRASS command "r.mapcalc". As you can see in the pictures below, the distance analysis between the first and the last documentation can represent the quantity of removed ground. It could also be really useful for the analysis of damage in buildings.
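For readers curious about what such a cloud/cloud distance actually computes, here is a minimal sketch: for every point of the compared cloud, the distance to its nearest neighbour in the reference cloud. This is a brute-force toy version with invented coordinates; real tools like CloudCompare use spatial indexes (octrees) and can also measure against the underlying surface rather than raw points:

```python
import numpy as np

def cloud_to_cloud_distance(compared, reference):
    """For each point of `compared`, the Euclidean distance to its nearest
    neighbour in `reference` (brute force, fine only for small clouds)."""
    diff = compared[:, None, :] - reference[None, :, :]   # shape (N, M, 3)
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)   # shape (N,)

# Two tiny synthetic "excavation levels": the second is 0.2 m lower.
level_1 = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
level_4 = np.array([[0.0, 0.0, 0.8], [1.0, 0.0, 0.8]])
print(cloud_to_cloud_distance(level_1, level_4))  # approximately [0.2, 0.2]
```

Summing or mapping these per-point distances over the documented area is what allows an estimate of, for example, the volume of removed ground between two documentation levels.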

first point cloud

fourth point cloud

cloud/cloud distance

cloud/cloud distance over the fourth point cloud


Saturday, 23 February 2013

The first photomosaic for architectural purposes?

These days I am teaching various techniques for documenting cultural heritage in 2D with FLOSS (Free/Libre and Open Source Software) at the UNESCO master Open Techne. In particular I am speaking about photomapping and 2D photogrammetry technologies (to record horizontal and vertical surfaces).
Teaching students is always an interesting and instructive experience and, in most cases, a mutual exchange of information, more similar to a dialogue than to a monologue. I have often learned a lot on these occasions, and sometimes I have the opportunity to investigate particular topics further, or to change my point of view on them, thanks to the discussion with other people.
Something like this happened today: we were discussing the right way to take pictures for an architectural photomosaic (fortunately the students include not only archaeologists, but also architects, engineers, computer technicians, etc...), so I thought a good example was the "photographic paint" made in 1873 by Giacomo Rossetti, which I happened to see some years ago while visiting the Musei Civici di Arte e Storia di Brescia (IT). You can see this "masterpiece" in the image below:

Photographic paint of S. Maria dei Miracoli in Brescia (G. Rossetti)

If I remember correctly what I read about this photographic paint, G. Rossetti built a wooden stage in order to collect the different photos that compose the photomosaic without excessive distortion. Of course nowadays there are simpler ways to take good pictures (just read Alessandro's last post about the UAV drones we built), but the students' question was:

is this the first example of a photomosaic for architectural purposes?

To be honest, I was not able to answer the question. I just know that G. Rossetti presented his work at the exposition in Vienna in 1873 (where he won the medal of merit), but he had started similar projects earlier (around 1862). It seems that Rossetti's experiments were more appreciated abroad than at home (there is not even an Italian page on Wikipedia), so I think better information can be found in foreign sources.

If any readers know of similar works by other photographers/artists (or by G. Rossetti himself), please report them on this blog, so that next time I may be able to give a better answer to the students' question on this topic :).

Friday, 1 February 2013

It is Carnival!

Once we were young and stupid, now we are no longer young
(quote attributed to Mick Jagger)

OK, I am stupid, but the Taung Child's face was the only 3D data I had on my computer at the moment, so I gave a try to a software package we would like to add to ArcheOS.
We are working on the implementation of some new functionalities for the next release (Theodoric), especially regarding a good 3D engine and some augmented reality applications. I think it was Alessandro who, surfing the net, found the right open source software (openspace3d) and, with the help of ORNis (aka Romain Janvier), we hope to port it to GNU/Linux as soon as possible.
So here is the result of the first test:


Do not worry about the slow reaction of the software; it is mainly caused by the online screen recorder I was using to capture the video (it was based on Java and slowed down the applications running on my computer a little...). As usual, if you want to help us (also with software evaluation), just join the ArcheOS channel on IRC.
Stay tuned :).

Thursday, 1 November 2012

Kinect 3D limits: documentation of small objects

As Moreno Tiziani wrote in his post, last Monday (October 22) I was in Padua to start the "Taung Project". The first step of this research was indeed the 3D documentation of the cast of the Taung Child, preserved in the Museum of Anthropology of Padua University.
To digitally register our subject we chose SfM/IBM techniques (using ArcheOS and PPT), because, as I reported in this post, the methodology is accurate enough to document small objects. Nevertheless I also brought our hacked Kinect to Padua, to show Moreno how this system works in 3D recording operations.

Red circle: Kinect. Green circle: Taung Child's cast. Blue circle: RGBDemo compiling on ArcheOS

As we expected, the cast was too small to be documented with the Kinect. The reason is clear: when the Kinect is too close, it simply does not "see" the subject to record, while when the device is too far away, it registers too few 3D points, so that the final mesh is not accurate enough.
Unfortunately, I did not capture a screenshot of our test, but I think the images below illustrate the concept: in the first picture my hand is too close to the sensor and appears completely black, while in the second picture the Kinect can see my hand, which appears pink, but the resolution is too low.

The sensor is too close to the subject

The distance between the sensor and the subject is adequate, but the resolution is too low
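The usable range described above can be expressed as a simple depth filter. The sketch below is illustrative only: the near/far limits are approximate figures commonly cited for the first-generation Kinect, not values measured in our test, and the depth frame is invented:

```python
import numpy as np

# Approximate usable range of the first-generation Kinect, in millimetres
# (assumed values for illustration; outside this window the sensor returns
# no reliable depth, which matches the black/low-resolution images above).
NEAR_MM, FAR_MM = 500, 4000

def valid_depth_mask(depth_mm):
    """Boolean mask of pixels whose depth falls inside the usable range."""
    return (depth_mm >= NEAR_MM) & (depth_mm <= FAR_MM)

# A hypothetical 2x3 depth frame: 0 is the sensor's "no reading" value.
frame = np.array([[120, 600, 2500],
                  [4500, 800, 0]])
print(valid_depth_mask(frame))
```

Points failing this mask are exactly the ones missing from the final mesh, which is why a small object must sit inside the window to be documented at all.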

However we used Kinect to document something in the Museum of Anthropology: a wooden Egyptian sarcophagus. 
As you can see in the short movie below, we registered just one side of the object, for the same reason I explained before: when the Kinect is too close to the subject it does not work properly. In this case the sarcophagus was positioned too close to the wall (about 50 cm) and to a glass showcase (about 20 cm). It would have been possible to scan all three visible faces and join them together in post-processing with MeshLab, but this was just an experiment, so we concentrated on the Taung cast.



However, the movie also shows another interesting characteristic of the Kinect: being an infrared-based device, it cannot see through glass, which is registered like a normal opaque object.

I hope it was useful, have a nice day!


Monday, 8 October 2012

Kinect 3D indoor: excavation test

To complete the "Kinect trilogy", today I am writing this post about our first test during real archaeological fieldwork.
Also in this case we (Alessandro Bezzi and I) used our "hacked Kinect" with the external battery, connected to the rugged PC, and, again, the software chosen for data acquisition was RGBDemo. This time we documented a layer in 3D during an "indoor" excavation, to avoid the problems with direct sunlight I described in this post.
The video below tries to summarize this operation...




... and here are some screenshots to have an idea of the final result:

The pointcloud (frontal view)


The pointcloud (side view)

The mesh

The mesh (wireframe)

As you can see, the general quality is lower than the results we can obtain with other techniques (e.g. SfM and IBM), but Kinect and RGBDemo have the benefit of acquiring and elaborating the data almost at the same moment, with the possibility of watching the documentation process in real time.
Ultimately the Kinect is one more option to consider for 3D indoor documentation, depending on the peculiarities of the archaeological project (the light conditions, the available time, the required level of detail, etc...). Our experiments will now go on with some tests in particular situations where this technique could be the best option (especially in underground environments).
Have a nice day!

Friday, 27 July 2012

July 27, 2011 - July 27, 2012: a year of ATOR

Hello all,
today ATOR reaches its first year, so I thought I would post and analyze some statistics to see how this experiment in shared research is progressing.
So far we have six active authors, who have written 79 posts. The community reacted with 96 comments, although most of them were written by the authors in response to direct questions from readers. Overall the blog counted 21484 visits (8695 since the activation of the Revolver Maps plugin, as you can see in the image below).


Today ATOR has 25 members and the general trend is still growing, but single posts may affect the statistics, with an increase of visitors related both to the quality of the post and to the interest aroused by the topic. A good example of this situation is Cicero Moraes's post about forensic facial reconstruction, which captured the attention of the communities of 3D modelers and physical anthropologists, reaching the traffic peak you can see in June 2012 in the graph below.





The post also caught the attention of Ton Roosendaal (original creator of Blender), who wrote a tweet about it:


Anyway, the main strength of ATOR, and of an open approach to research, remains the active collaboration between researchers operating in different fields (not only archaeologists). Examples include the new collaborations with the 3D artist Cicero Moraes (already mentioned) and with the anthropologist Moreno Tiziani, creator of the anthropological association Antrocom Onlus, which publishes the Online Journal of Anthropology.
We will keep working with this open philosophy in archaeology, inspired by the Free/Libre and Open Source Software movement, and keep trying to further increase the quality of ATOR's posts with the help of the community. As usual, if you want to collaborate, just contact us!
Thank you. 
This work is licensed under a Creative Commons Attribution 4.0 International License.