Today I want to present the results of an experiment we carried out in 2011 thanks to the collaboration of Nicolò Dell'Unto (Lund University).
The goal of the project was to compare different methodologies for documenting small archaeological finds. In such a wide field of applications we decided to evaluate mainly two technologies: Computer Vision (Structure from Motion and image-based modeling) and optical sensors.
In both groups we compared open source and closed source options, documenting the same artefact with Microsoft Photosynth (closed source software), Python Photogrammetry Toolbox (open source software), then with a NextEngine Scanner (closed source hardware) and finally with Three Phase Scan (open source hardware).
For the test we used a souvenir from Lund: the small statuette of a female Viking (we called her Viky) you can see in the image below.
The test object: Viky
The characteristics of such an object gave us a medium-high level of difficulty (a good benchmark for testing these technologies).
For all the post-processing we used MeshLab (even when other commercial software was available, as with the NextEngine Scanner), because we consider it a "killer application" for mesh editing (by the way, for future experiments I would also consider CloudCompare, another great open source project). The screenshot below shows the four results of our experiment.
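Before cleaning a model in MeshLab, a quick sanity check on each exported file helps spot obvious acquisition problems (wrong scale, stray points). Below is a minimal sketch of such a check, assuming the models are exported as ASCII PLY files with x, y, z as the first three vertex properties; `ply_stats` is a hypothetical helper written for this post, not part of MeshLab:

```python
def ply_stats(path):
    """Return vertex count and axis-aligned bounding box of an ASCII PLY file."""
    with open(path) as f:
        n_verts = 0
        # read the header up to "end_header"
        for line in f:
            line = line.strip()
            if line.startswith("element vertex"):
                n_verts = int(line.split()[-1])
            elif line == "end_header":
                break
        # the first n_verts lines after the header are vertex records
        verts = [tuple(map(float, next(f).split()[:3])) for _ in range(n_verts)]
    lo = tuple(min(v[i] for v in verts) for i in range(3))
    hi = tuple(max(v[i] for v in verts) for i in range(3))
    return n_verts, lo, hi
```

Comparing the bounding boxes of the four models (after scaling them to the same reference) gives a first, rough idea of how consistent the documentations are.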
Comparison of the four 3D documentations
To better observe the differences between the four 3D models, you can check the 2D panoramic image below (sorry, I did not clean all the models, but I think it is enough to understand the general quality):
Python Photogrammetry Toolbox
MS Photosynth
Three Phase Scan
NextEngine Scanner
As you can see, there is not a big difference between the two Computer Vision documentations, as they are based on similar software (with a common origin in the Photo Tourism project). The comparison between the NextEngine and Three Phase Scan is more clear-cut: for now the open source instrument is not accurate enough to record small objects (you can observe the error on Viky's arms), so its quality is too low for archaeological aims. It has to be said, though, that the two optical sensors belong to different categories: the NextEngine is a triangulation scanner (very accurate and precise), while the Three Phase Scan is a structured-light scanner. Unfortunately we had no time to also test the open source MakerScanner, which would belong to the same category as the NextEngine.
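The visual comparison above can also be quantified: a common metric is the Hausdorff distance between two models, which is the same measure MeshLab offers for mesh comparison. Here is a minimal, brute-force Python sketch on small point sets (illustrative only; real scans need sampled meshes and a spatial index to be practical):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (lists of xyz tuples)."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

    def directed(src, dst):
        # worst-case distance from any point of src to its nearest point in dst
        return max(min(dist(p, q) for q in dst) for p in src)

    return max(directed(a, b), directed(b, a))

# toy example: four corners of a square vs. a copy with one corner shifted by 0.1
ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
test = [(0.1, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
print(hausdorff(ref, test))
```

A small Hausdorff distance between two documentations of the same object (after registration) indicates that the geometries agree well, which is essentially what the panoramic comparison shows qualitatively.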
In conclusion, our opinion about 3D recording of small finds is that the NextEngine is currently a very good low-cost solution (sadly not open source, IMHO) and an optimal choice for projects that need to document a large number of objects in a short time. If we also consider the price, then the best options seem to be the Computer Vision based software: Photosynth and Python Photogrammetry Toolbox (PPT). Personally I prefer the latter for two reasons: it is FLOSS (GPL licensed) and it needs no internet connection (while with Photosynth you have to upload the pictures to external servers). This characteristic makes PPT a perfect tool in difficult situations, like, for example, an archaeological mission abroad. In these conditions it is possible to record 3D models with very simple hardware (just a digital camera and a laptop), in a short time (at least for data acquisition) and without an internet connection (as I described in this post). Moreover PPT (and Computer Vision applications in general) also gave us good results in "extreme" projects, as we illustrated in the slides (sorry, Italian only) of our contribution to the Low-Cost 3D workshop, held in Trento in March 2012.
2016-04-07 Update
In 2011 we wrote an article about this experience:
ResearchGate: Article
Academia: Article
I hope it will be useful, even if it is no longer up to date.