Wednesday 29 August 2012

3D documentation of small archaeological finds

Today I want to present the results of an experiment we carried out in 2011, thanks to the collaboration of Nicolò Dell'Unto (Lund University).
The goal of the project was to compare different methodologies for documenting small archaeological finds. In such a wide field of application we decided to evaluate mainly two technologies: Computer Vision (Structure from Motion and image-based modeling) and optical sensors.
In both groups we compared open source and closed source options, documenting the same artefact with Microsoft Photosynth (closed source software), the Python Photogrammetry Toolbox (open source software), then with a NextEngine scanner (closed source hardware) and finally with a Three Phase Scan (open source hardware).
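Both Photosynth and PPT ultimately rest on the same geometric core: recovering 3D points from features matched across photographs. As a rough illustration of that core idea (a textbook two-view DLT triangulation, not the actual pipeline of either tool, with toy camera matrices I made up for the example):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the same point in each view
    Returns the 3D point in Euclidean coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A,
    # i.e. the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity intrinsics, second camera shifted 1 unit on x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it
X_true = np.array([1.0, 2.0, 10.0])
x1h = P1 @ np.append(X_true, 1.0)
x2h = P2 @ np.append(X_true, 1.0)
x1, x2 = x1h[:2] / x1h[2], x2h[:2] / x2h[2]

print(triangulate(P1, P2, x1, x2))  # recovers approximately [1, 2, 10]
```

A real SfM pipeline repeats this for thousands of matched features while also estimating the camera matrices themselves, but the triangulation step is the reason a handful of overlapping photos is enough to produce a point cloud.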
For the test we used a souvenir from Lund: the small statuette of a female Viking (we called her Viky) that you can see in the image below.

The test object: Viky

The characteristics of such an object gave us a medium-high level of difficulty (a good benchmark for our technologies).
For all the post-processing we used MeshLab (even when other commercial software was available, as with the NextEngine scanner), because we consider it a "killer application" for mesh editing (by the way, for future experiments I would also consider CloudCompare, another great open source project). The screenshot below shows the four results of our experiment.

Comparison of the four 3D documentations

To better observe the differences between the four 3D models, you can check the following 2D panoramic (sorry, I did not clean all the models, but I think it is enough to understand the general quality):

Python Photogrammetry Toolbox

MS Photosynth

Three Phase Scan
NextEngine Scanner

As you can see there is not a big difference between the two Computer Vision documentations, as they are based on similar software (both with a common origin in the Photo Tourism project). The comparison between the NextEngine and the Three Phase Scan is simpler: for now the open source instrument is not accurate enough to record small objects (note the error on Viky's arms), and its quality is too low for archaeological purposes. It has to be said, however, that the two optical sensors belong to different categories: the NextEngine is a triangulation scanner (very accurate and precise), while the Three Phase Scan is a structured-light scanner. Unfortunately we had no time to also consider the open source MakerScanner, which would belong to the same category as the NextEngine.
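To make the "triangulation scanner" idea concrete: the scanner projects a laser spot from a known baseline and angle, the camera observes where the spot falls on the sensor, and depth follows from simple trigonometry. Here is a toy sketch of that geometry (idealised pinhole setup with made-up numbers, not the NextEngine's actual calibration):

```python
import math

def depth_from_triangulation(baseline, focal_px, u_px, laser_angle_rad):
    """Depth of a laser spot seen by a camera at the origin.

    The laser emitter sits at (baseline, 0) and shines at laser_angle_rad
    from the optical axis; the camera images the spot at pixel offset u_px
    (focal length focal_px, both in pixels). Intersecting the two rays gives:
        z = baseline / (u_px / focal_px + tan(laser_angle))
    """
    return baseline / (u_px / focal_px + math.tan(laser_angle_rad))

b, f, angle = 0.10, 1000.0, math.radians(30)  # 10 cm baseline, 30 deg laser
z = depth_from_triangulation(b, f, 100.0, angle)
print(round(z, 4))  # -> 0.1476 (metres)

# A one-pixel error in locating the spot shifts the estimated depth by a
# fraction of a millimetre here; the error grows as the rays become parallel.
dz = depth_from_triangulation(b, f, 99.0, angle) - z
print(round(dz * 1000, 3))  # depth error in millimetres for a 1 px shift
```

This sensitivity to the baseline/angle geometry is one reason triangulation scanners like the NextEngine can be so precise at close range on small objects.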
In conclusion, our opinion about 3D recording of small finds is that the NextEngine is currently a very good low-cost solution (sadly not open source, IMHO) and an optimal choice for projects where a large number of objects must be documented in a short time. If we also consider the price, then the best applications seem to be the Computer Vision based software Photosynth and the Python Photogrammetry Toolbox (PPT). Personally I prefer the second one for two reasons: it is FLOSS (GPL licensed) and it needs no internet connection (while with Photosynth it is necessary to upload the pictures to external servers). This characteristic makes PPT a perfect tool in difficult situations, like, for example, an archaeological mission abroad. In these conditions it is possible to record 3D models with very simple hardware (just a digital camera and a laptop), in a short time (at least for data acquisition) and without an internet connection (as I described in this post). Moreover, PPT (and Computer Vision applications in general) also gave us good results in "extreme" projects, as we illustrated in the slides (sorry, Italian only) of our contribution to the Low-Cost 3D workshop held in Trento in March 2012.

2016-04-07 Update

In 2011 we wrote an article about this experience:

ResearchGate: Article
Academia: Article

I hope it will be useful, even if it is no longer up to date.

Wednesday 15 August 2012

Facial reconstruction of a Neanderthal

I have already reconstructed some faces using the skulls of ancient people, but I always had tissue depth measurements from modern humans to help me do it.

This time I chose to reconstruct a Neanderthal man, using only the facial musculature as a reference.

I used the coordinates of the Manchester method shown in Caroline Wilkinson's book "Forensic Facial Reconstruction". But, as I wrote, I did not use the tissue depth markers. The reason is obvious... it is impossible to obtain this data, because there are no living Homo neanderthalensis.

The Process

I tried to find a CT-scan of a Neanderthal skull, but I found only one, and it was not compatible with my software:

I tested more than twenty CT-scan programs to convert the INR file into a mesh. Unfortunately, none of them worked this time.

Then I tried to use SfM to reconstruct the skull from sequences of images, videos and other sources. But all in vain, as you can see in my album of attempts:

To get an answer about a CT-scan I sent more than thirty e-mails to scientists, students and institutes asking about Neanderthal skulls. But no one could help me.

I spent many days trying in vain, until I concluded that I would have to model the skull from scratch, even though this is not the most accurate way to obtain the data.

First of all, I found some good skull references here:

So, the skull was modelled in Blender.

Then the muscles were placed, using a technique called metaballs.
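For readers unfamiliar with metaballs: each ball contributes a smooth field that falls off with distance, and the modelled surface is wherever the summed field crosses a threshold, which is why neighbouring muscle blobs blend into one organic shape. A minimal sketch of the idea (using a common inverse-square field; Blender's internal falloff formula differs):

```python
def metaball_field(point, balls):
    """Sum of inverse-square contributions; each ball is (center, radius)."""
    total = 0.0
    for center, radius in balls:
        d2 = sum((p - c) ** 2 for p, c in zip(point, center))
        if d2 > 0:
            total += radius ** 2 / d2
    return total

# Two overlapping "muscle" blobs on the x axis
balls = [((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 1.0)]
threshold = 1.0  # the implicit surface is where the field equals 1

# Midway between the balls the two fields add up and exceed the threshold,
# so the blobs merge into a single smooth surface there:
mid = (0.75, 0.0, 0.0)
print(metaball_field(mid, balls) > threshold)  # True

# Far away the field has decayed and we are outside the surface:
far = (10.0, 0.0, 0.0)
print(metaball_field(far, balls) < threshold)  # True
```

This blending behaviour is what makes metaballs convenient for roughing out musculature quickly: you place blobs along each muscle and the software fuses them automatically.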

The skin over the muscles.

Obs.: Note that I initially gave the eyes a blue color. My friend Moacir Elias Santos, a Brazilian archaeologist, told me that blue eyes come from a genetic mutation that did not occur in Neanderthals, as you can see in these articles (thanks Moacir!):


The UV Mapping (texturing).

The first rendering... with right eye color.

The hair setup.

And the final results.

All the modelling was done with free software. I used:

Blender to model
Gimp to help with the textures
Inkscape to organize the references
All running under Linux Ubuntu.

I gave an image to Wikipedia, as you can see in these links:

The process was fun. The main goal was to model everything in a few hours with quality and precision... at least as much precision as possible.

I need to thank Arc-Team for motivating me to write this article despite my bad English.

I hope you enjoyed it. A big hug and see you in the next article.

Monday 13 August 2012

Software recovery: e-foto rectification module for 64 bit

As you can see from this mail (ArcheOS developer mailing list), since July 2011 one of the problems in maintaining the ArcheOS e-foto package has been related to the rectification module of this software. In fact this module seems to have been abandoned in the latest releases. Unfortunately this code is very important for our archaeological fieldwork, being connected with the Metodo Aramus (the procedure we use to obtain georeferenced photomosaics).

e-foto's rectification module at work (Metodo Aramus)

For this reason I first tried to contact the software developers (in the official forum) and then, having too little time to dedicate to this problem (I know, my fault...), I decided to upload the code to GitHub at this link. This solution should help keep the rectification module of e-foto ("rectify") alive as a stand-alone application, avoiding the risk of it becoming abandonware.
However, looking towards the development of the new ArcheOS release (codename Theodoric), there was still a big problem: I was not able to compile "rectify" with Qt4 for 64 bit as well, and ArcheOS 5 should have both a 32 and a 64 bit version.
To solve this situation I asked the community for help again, writing a post in the Italian Qt forum. As you can see from the discussion (sorry, Italian only), a user (Tom) helped me update the source code. It was necessary to modify just two files, matriz.cpp and matriz.h, so I made a new commit on GitHub and now the code is ready to be compiled with Qt4. I have not yet packaged rectify for 64 bit, but I will do it ASAP. Anyway, if someone has this kind of machine and needs to compile the module, they can use the source code on GitHub (it should work, but if there are problems please report them).
I hope this was useful.

The commit in the source code (Github)

Tuesday 7 August 2012

ArcheOS v.4 (Caesar) beta release presented during the ArcheoFOSS VII

With a big delay, I uploaded the slides of the official presentation of the ArcheOS 4 (Caesar) beta release, given during ArcheoFOSS VII, which took place in Rome in June 2012. The file can be seen here (Academia) or here (ResearchGate).
With our contribution we tried to satisfy the scientific committee's guidelines, illustrating not only the new software integrated in ArcheOS, but also the archaeological methodology connected with the system. We also presented some projects in which ArcheOS has been used, the community's feedback and a preview of future developments.

I would like to thank both Roberto Angeletti (aka BobMax) and Alessandro Furieri (SpatiaLite) for the fruitful discussions in Rome.

This work is licensed under a Creative Commons Attribution 4.0 International License.