Wednesday 29 August 2012

3D documentation of small archaeological finds

Today I want to present the results of an experiment we carried out in 2011, thanks to the collaboration of Nicolò Dell'unto (Lund University).
The goal of the project was to compare different methodologies for documenting small archaeological finds. In such a wide field of application we decided to evaluate mainly two technologies: Computer Vision (Structure from Motion and Image-Based Modeling) and optical sensors.
In both groups we compared open source and closed source options, documenting the same artefact with Microsoft Photosynth (closed source software), the Python Photogrammetry Toolbox (open source software), then with a NextEngine Scanner (closed source hardware) and finally with the Three Phase Scan (open source hardware).
For the test we used a souvenir from Lund: the small statuette of a female Viking (we called her Viky) that you can see in the image below.

The test object: Viky

The characteristics of such an object gave us a medium-high level of difficulty (a good benchmark for testing these technologies).
For all the post-processing we used MeshLab (even when commercial software was available, as with the NextEngine Scanner), because we consider it a "killer application" for mesh editing (by the way, for future experiments I would also consider CloudCompare, another great open source project). The screenshot below shows the four results of our experiment.

Comparison of the four 3D documentations

To better observe the differences between the four 3D models, you can check the 2D panoramics below (sorry, I did not clean all the models, but I think they are enough to show the general quality):

Python Photogrammetry Toolbox

MS Photosynth

Three Phase Scan

NextEngine Scanner

As you can see, there is not a big difference between the two Computer Vision documentations, as they are based on similar software (both having a common origin in the Photo Tourism project). The comparison between the NextEngine and the Three Phase Scan is simpler: so far the open source instrument is not accurate enough to record small objects (note the error on Viky's arms), and its quality is too low for archaeological aims. It has to be said, though, that the two optical sensors belong to different categories: the NextEngine is a triangulation scanner (very accurate and precise), while the Three Phase Scan is a structured-light scanner. Unfortunately we had no time to also consider the open source MakerScanner, which belongs to the same category as the NextEngine.
In conclusion, our opinion about 3D recording of small finds is that currently the NextEngine is a very good low-cost solution (sadly not open source, IMHO) and an optimal choice for projects where a large number of objects must be documented in a short time. If we also consider the price, then the best applications seem to be the Computer Vision based software: Photosynth and the Python Photogrammetry Toolbox (PPT). Personally I prefer the latter for two reasons: it is FLOSS (GPL licensed) and it needs no internet connection (while with Photosynth it is necessary to upload the pictures to external servers). This characteristic makes PPT a perfect tool in difficult situations, like, for example, an archaeological mission abroad. In these conditions it is possible to record 3D models with very simple hardware (just a digital camera and a laptop), in a short time (at least for data acquisition) and without an internet connection (as I described in this post). Moreover, PPT (and Computer Vision applications in general) also gave us good results in "extreme" projects, as we illustrated in the slides (sorry, Italian only) of our contribution to the Low-Cost 3D workshop, held in Trento in March 2012.
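Beyond the visual comparison above, the four models could also be compared numerically with a point-cloud distance. The classic measure is the Hausdorff distance (MeshLab offers it as a filter); here is a minimal NumPy sketch of the idea on toy point clouds. The function and the example data are illustrative only, not part of our original workflow:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point clouds of shape (N, 3) and (M, 3)."""
    # full pairwise distance matrix between the two clouds
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # for each point, find its nearest neighbour in the other cloud,
    # then take the worst (largest) of those nearest distances, both ways
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# toy example: the 8 corners of a unit cube vs. the same corners shifted 0.1 along x
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
shifted = cube + np.array([0.1, 0.0, 0.0])
print(round(hausdorff(cube, shifted), 3))  # → 0.1
```

On real scans (thousands of points) a k-d tree would replace the brute-force distance matrix, but the principle is the same: the larger the Hausdorff distance, the more the two acquisitions of the same object disagree.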

2016-04-07 Update

In 2011 we wrote an article about this experiment:

ResearchGate: Article
Academia: Article

I hope it will be useful, even if it is no longer up to date.


  1. Great article, very interesting!

  2. Thanks for the post. I think MeshLab has a specific tool to compare meshes. It is useful in cases like this, comparing different data acquisitions of the same object.

    Also in the realm of affordable structured-light scanning there's the David 3D scanner:

    Some years ago a friend tried it, but was not impressed. Apparently, to achieve accurate results you need an expensive camera and also an expensive laser. Using the project's recommended equipment made David almost as expensive as the NextEngine.

    But that happened some years ago; maybe things are different and more affordable now.

    P.S. Great job. I only missed a Kinect scan in the test ;-).


    1. Hi Ricardo,
      thank you, as soon as I have a little bit of time I will try to compare the different meshes with MeshLab as you suggested.

      With Alessandro I tried the David 3D Scanner (closed source) some years ago and it did not work badly, but for the experiment described in this post we decided to use the NextEngine instead (in the closed source field), because Lund University already had this hardware and was using it a lot. Anyway, if in the future they open the code of David, we will consider it as a valid alternative. In the meantime we will do new tests on the open source MakerScanner.

      Unfortunately we decided to buy a Kinect just some months ago :).


  3. Apparently that question isn't new on David's forum:

    David Laserscanner vs Nextengine: pricing/accuracy


    1. Interesting topic, thanks for the link Ricardo!

  4. Thanks to Pierre Moulon for the post review and for reporting this interesting project:


This work is licensed under a Creative Commons Attribution 4.0 International License.