Showing posts with label 3D. Show all posts

Wednesday, 24 October 2018

openMVGScript

Hi all,
this quick post is about the FLOSS package openMVG. This software is our first choice for documenting archaeological evidence in 3D (via SfM) from ground level (for aerial archaeology we often use MicMac).
Since in recent years we have gradually moved away from simple 2D documentation, our use of openMVG has increased significantly. For this reason we developed a small script that speeds up the use of this software (without its GUI, openMVG-GUI), adds some preliminary operations (like a general quality reduction of the pictures via ImageMagick) and records some statistics about the whole process. The script is released under the GNU General Public License and is freely downloadable here. At the same address (on GitHub), you can help us improve the script. As always, any kind of help is greatly appreciated (even simple language translations, since the script is currently in Italian). 
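The preliminary quality reduction mentioned above can be sketched with a few lines of Python driving ImageMagick. This is a hypothetical illustration, not the actual openMVGScript code: it assumes ImageMagick's `mogrify` tool is installed and uses the `2000x2000>` geometry, which only shrinks pictures larger than 2000 px.

```python
import subprocess

def resize_cmd(path, max_px=2000):
    """Build the ImageMagick call that caps both sides at max_px.
    The trailing '>' tells mogrify to leave smaller images untouched."""
    return ["mogrify", "-resize", f"{max_px}x{max_px}>", path]

def reduce_pictures(paths, max_px=2000):
    """Shrink every picture of the data set in place before the SfM run."""
    for p in paths:
        subprocess.run(resize_cmd(p, max_px), check=True)

print(resize_cmd("DSC_0001.jpg"))
```

Reducing the pictures first trades some reconstruction detail for much shorter processing times, which is usually a good deal for field documentation.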

Archaeological 3D done with openMVGScript (image quality reduced to 2000 px)


In the near future we would like to use ImageMagick to add a variable to the script, in order to optimize pictures for underwater 3D archaeological documentation, following the methodology we used in some of our past missions.

The process statistics reported by the script

Have a nice day!

Wednesday, 13 June 2018

Francesco Petrarca, the mocap experiment in Blender

This post is related to the Wikipedia editathon we are organizing for the open source exhibition "Imago Animi", a project derived from the previous experience of "Facce. I molti volti della storia umana".
This time I will write about the facial MoCap experiment we performed with the 3D model of the FFR (Forensic Facial Reconstruction) of Francesco Petrarca. The poet was indeed one of the five historical personalities connected with the city of Padua who were the protagonists of a specific session within the exhibition "Facce". Moreover, Petrarch is also present in "Imago Animi", since his mortal remains were studied by the scientist Giovanni Canestrini, born in Revò, a town very close to Cles (Trentino, Italy), where the exhibition is currently open to visitors.
The image below (Creative Commons Attribution 4.0 International License) is the result of the Forensic Facial Reconstruction of Francesco Petrarca, performed starting from the cast of the skull found in 2005 in the "fondo Canestrini" at the University of Padua.

The FFR portrait of Francesco Petrarca


This cast is the only data available for the FFR because, as the 2013 examination of the mortal remains revealed, the skeleton of Petrarch is currently buried with a female skull, dated (with the C14 technique) between 1134 and 1280 (almost one century before the life of the poet). The aDNA analysis performed in 2004 by Prof. David Caramelli (University of Florence) confirmed this thesis (the skeleton yielded male DNA, while the skull yielded female DNA) [1].
In 2015 Arc-Team was commissioned to perform the Forensic Facial Reconstruction of Petrarca and other historical personalities, in order to prepare the open exhibition "Facce". The work started with the 3D documentation of the cast of the "fondo Canestrini", done (with SfM techniques) by Luca Bezzi (Arc-Team). The cast was previously validated by Dott. Nicola Carrara (of the Anthropological Museum of the University of Padua) with osteometric measurements based on the drawing published by Giovanni Canestrini in his study of the mortal remains of the poet [2]. Cicero Moraes, the forensic specialist of Arc-Team, later performed the FFR in Blender, with the techniques developed over the years starting from this first post in ATOR: Forensic Facial Reconstruction with Free Software.
Once the final 3D model was achieved, we decided to test Blender's potential in facial MoCap, starting from previous experiences. In this case the idea was a short video in which Francesco Petrarca would "recite" one of his poems, in particular the proemial sonnet of the Canzoniere ("Voi ch'ascoltate in rime sparse il suono...").
The video below shows the final result...


... while this video shows the "making of".


For the two open exhibitions ("Facce" and "Imago Animi") a combination of the previous videos was chosen, in order to also show the technique of facial MoCap. The final product, which you can see below, was created by Cicero Moraes (Arc-Team) using the facial MoCap tools of Blender, starting from the original video recorded by Luca Bezzi (Arc-Team) with the technical help of Dott.ssa Emma Varotto and Dott. Nicola Carrara (Anthropological Museum of the University of Padua), who recorded the excellent performance of the actor Antonello Pagotto.



This post also wants to be a tribute to all the people involved in the project, for their professionalism and kindness!
Have a nice day!


Bibliography

[1] N. Carrara, L. Bezzi, Lo strano caso del cranio di Francesco Petrarca, in Imago Animi. Volti dal passato, 2018
[2] G. Canestrini, Le ossa di Francesco Petrarca, 1874

Tuesday, 2 January 2018

Lake Monticello exploration open data: 3D bathymetric chart

Hi all,
this second, brief post is intended to share more open data regarding our underwater archaeology missions in the inland waters of Trentino (Italy). 
As you know, this summer we joined the exploration of Lake Monticello (almost 2600 m asl, near Paradiso Pass), looking for evidence of WW1 on the Adamello front. If you missed the post, I described here the new methodology we used to achieve a complete 3D bathymetric chart, using just a low-cost sonar sensor. Today I uploaded the 3D data on our server, so that other researchers can use them if they find them of some interest.
Below I post a screenshot of the data loaded in QGIS:

The bathymetric chart of Lake Monticello

Here it is possible to download the 3D bathymetric chart of Lake Monticello. As always, the data are available under the following license:


Creative Commons License
Lake Monticello 3D bathymetric chart by Arc-Team is licensed under a Creative Commons Attribution 4.0 International License.

I hope this data will be useful. Have a nice day!

Saturday, 23 December 2017

Project Red Lake open data: 3D bathymetric chart

Hi all,
it has been a long time without posts here in ATOR, but this year we had to work on several different projects, without the possibility of giving quick feedback on our blog.
I am starting to write again today because some of these projects grabbed the attention of different institutions of the academic world; in particular, this happened with our underwater archaeology missions. 
Of course our primary interest during our dives is the archaeological perspective, but often the data we collect can be useful for other specialists (e.g. limnologists or biologists).
This is the reason why we decided to share our data, and we start today with the bathymetric chart of Lake Tovel (previous posts in ATOR: 1, 2). I processed this map while working on the Red Lake Project, a research project directed by Prof. Tiziano Camagna that studies the medieval submerged forest of Lake Tovel (Trentino, Italy). I produced a 3D model of the bathymetric chart of this lake by directly digitizing the map Edgardo Baldi made in the 1930s. Then I calibrated the result with the LIDAR model of the landscape, freely accessible from the geographic open data portal of the Autonomous Province of Trento (here is a short tutorial about how to download data from the webgis).

Here it is possible to download the file (an ESRI ASCII grid), ready to be integrated into most GIS software (below, a screenshot of the data in GRASS GIS).
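For readers who want to inspect the grid outside a GIS, the ESRI ASCII format is simple enough to read by hand: six header lines (ncols, nrows, xllcorner, yllcorner, cellsize, NODATA_value) followed by rows of space-separated values. The sketch below is a minimal, hypothetical reader; the sample values are made up and will differ from the real file.

```python
def read_esri_ascii(text):
    """Parse an ESRI ASCII grid into (header dict, list of rows)."""
    lines = text.strip().splitlines()
    header = {}
    for line in lines[:6]:          # six fixed header lines
        key, value = line.split()
        header[key.lower()] = float(value)
    rows = [[float(v) for v in line.split()] for line in lines[6:]]
    return header, rows

# A tiny made-up grid, just to show the layout of the format.
sample = """ncols 3
nrows 2
xllcorner 660000
yllcorner 5110000
cellsize 10
NODATA_value -9999
-9999 1178.5 1179.0
1177.2 1176.8 -9999"""

header, rows = read_esri_ascii(sample)
print(header["cellsize"], len(rows), len(rows[0]))
```

Because the format is plain text, the same file loads directly in GRASS GIS, QGIS or any other GIS without conversion.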

Lake Tovel bathymetric chart in GRASS GIS

The data are available under the following license:
Creative Commons License
Lake Tovel 3D bathymetric chart by Arc-Team is licensed under a Creative Commons Attribution 4.0 International License.

I hope this data will be useful. Have a nice day!

Monday, 24 April 2017

ArcheOS Hypatia Virtual Globe: Cesium

Hi all,
I am starting here a series of short posts to show some of the features of the main software selected for ArcheOS Hypatia, trying to explain the reasons for these choices. The first category I'll deal with is that of Virtual Globes. Among the many available FLOSS options, one of the applications that meets the needs of archaeology is certainly Cesium. This short video shows its capability to import complex geolocated 3D models, which is a very important possibility for archaeologists. In this example I imported into Cesium the 3D model (done with Structure from Motion) of a small boat which lies on the bottom of an alpine lake (more info in this post).


Soon I'll post other short videos to show other features of Cesium. Have a nice evening!

Wednesday, 9 November 2016

Torre dei Sicconi - Chapter 8 - Reconstruction

After surveying, digging and historical research, we started to think about what the castle looked like in the Middle Ages. 
Photos from the beginning of the 20th century, archaeological finds, 3D models, and the comparison with similar, preserved castles: these are the bases for the virtual reconstruction made by Cicero Moraes.
Watch the single steps of the 3D reconstruction with Blender in the next chapter of Arc-Team's "Torre dei Sicconi" series.

Enjoy!

Torre dei Sicconi - Chapter 8 - Virtual Reconstruction

Friday, 27 May 2016

ArcheOS Hypatia, a new tool for 3D documentation: openMVG-GUI

These days we are working very hard to package new software for ArcheOS v. 6 (codename Hypatia). We have just finished working on the new GUI Martin Greca developed for Pierre Moulon's software openMVG, setting up all the required dependencies. The result is a new tool for 3D photogrammetry in ArcheOS: openMVG-GUI. This software can be considered the evolution of the old Python Photogrammetry ToolBox; nevertheless, we are currently working to fix some bugs of the latter in order to keep providing it in ArcheOS, since it gave the best results in documenting underground environments.
Here below you can see a quick videotutorial I made for our brand new YouTube channel:



To speed up ArcheOS Hypatia development, we set up an unofficial new repository, which (for now) we will use just internally within our company, to be sure that everything works fine before releasing it publicly to all users. However, we will also share this repository during the university courses in which we will teach this year, like the one in Evora (Portugal) or the one in Venice, since in these conditions it is possible to work under strict control, avoiding problems with unresolved package dependencies. As soon as the new repository has been thoroughly tested, we will open it, adding its coordinates to the ArcheOS main branch.

The new GUI (by +Martin Greca) for openMVG (by +Pierre Moulon)
 

PS

If you are interested, there are still places available for the course in Evora (regarding open source technologies and cultural heritage). More info here.

Have a nice day!

Saturday, 26 December 2015

ArcheOS Hypatia 3DHOP package (call for testing)

This short post is related to the previous one (about Nexus) and is a call for testing of the deb binary package (arm64) of 3DHOP, the software developed by the Visual Computing Lab (CNR-ISTI, Pisa, Italy) to create interactive, multi-resolution web galleries of 3D objects.
Here below you can see an example of a gallery related to conflict archaeology (WW1), which we are developing these days.


Example of a 3D web-gallery with 3DHOP (work in progress)

Being a Debian Jessie derivative distro, ArcheOS Hypatia will install 3DHOP in the folder /var/www/html/3dhop-3.0.
The best way to practice with this software is to modify the code of the different examples the package comes with (they are placed in /var/www/html/3dhop-3.0/examples).
If you want to help ArcheOS development and test this package, I uploaded it here. Soon we will also prepare 32-bit packages.
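A convenient way to start practicing, following the suggestion above, is to clone one of the example folders and point the copy at your own model. The snippet below is a hypothetical sketch: the example name, gallery name and model file names are made up, and it simply assumes that the example's HTML pages reference the model file by name.

```python
import shutil
from pathlib import Path

def clone_example(examples_dir, example, gallery_name, old_model, new_model):
    """Copy a 3DHOP example folder next to the examples directory and
    swap the model file name referenced in its HTML pages."""
    src = Path(examples_dir) / example
    dst = Path(examples_dir).parent / gallery_name
    shutil.copytree(src, dst)
    for page in dst.glob("*.html"):
        page.write_text(page.read_text().replace(old_model, new_model))
    return dst
```

For instance, `clone_example("/var/www/html/3dhop-3.0/examples", "minimal", "my_gallery", "model.nxs", "trench1.nxs")` would create `/var/www/html/3dhop-3.0/my_gallery`; all the names in this call are purely illustrative.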

I hope this package will be useful! Merry Christmas and Happy new year!

PS

Marco Callieri (Visual Computing Lab, CNR-ISTI) notified me that Nexus and 3DHOP are under active development right now, so a new version will probably come out shortly. As soon as there are changes, I will update the packages with the new functionalities.

Wednesday, 28 October 2015

Australopithecus sediba

Australopithecus sediba is another important reconstruction done for the open source exhibition "Facce. I molti volti della storia umana" [1]. In getting access to the cast and producing the 3D model of the skull, in order to start the work of facial restitution, we were supported by Prof. Telmo Pievani, who put us in contact with the exposition "Homo sapiens" (and with its scientific material). Once the digital model of the cranium had been produced with photographic (SfM/MVSR [2]) techniques, Cícero Moraes could proceed with the protocol we developed for the Forensic Facial Reconstruction [4] of hominins (paleoart), with coherent anatomical deformation of a Pan troglodytes CT scan [3].
In order to continue the free sharing and disclosure, under open licenses (Creative Commons Attribution International: CC-BY-4.0), of the material we produced during the preparation of the exhibition "Facce", today I uploaded the result of this FFR to Wikimedia Commons.
Below is the final image, which was developed thanks to a joint effort of Luca Bezzi (Arc-Team) and Nicola Carrara (Anthropological Museum of the University of Padua) for the 3D model of the skull; Cícero Moraes (Arc-Team) for the main work of 3D FFR modeling; and Prof. Telmo Pievani (University of Padua, Biology Department) for scientific validation.


Facial Reconstruction of the Australopithecus sediba


The anatomical deformation technique, used for the facial reconstruction of Australopithecus sediba, is well illustrated in the following video (by Cícero Moraes):






Webography

[1] FaceBook, ATOR 1, 2, 3, 4, 5, TV7, oggiscienza, Archeomatica

[2] ATOR 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19

[3] ATOR 1, 2

[4] ATOR 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11

Monday, 27 July 2015

Documentation of a bas-relief on a cliff: the workflow

This summer, between May and June, we worked for a joint mission led by the University of Innsbruck (Institut für Alte Geschichte und Altorientalistik) and the Cultural Heritage, Handcrafts and Tourism Organization of Iran. The project took place in Firuzabad, in the Fars Province of Iran. We will write more details about this work in the next post. For now I just want to use some of the material we collected to illustrate the workflow of data acquisition during the archaeological documentation of a bas-relief on a cliff.
The video below shows the overall process.



You can see the initial preparation phase (1), during which we placed the Ground Control Points (GCPs) to perform normal 2D vertical photo-mapping and to rectify and georeference the 3D point cloud. Then (2) we collected pictures with three different flights of our DIY drone, in order to use them with different open source SfM/MVSR software (PPT, openMVG and MicMac) and reach the best possible result: a couple of flights with parallel camera axes, to get a good overlap over the whole bas-relief, and a higher acquisition to cover the upper details. In the meantime (3) another operator (Rupert Gietl) was collecting pictures from the ground, to register the lower perspective as well. Later (4), I set up the total station and surveyed the GCPs, thanks to some fixed points we had placed the day before (0) with our GPS. Finally Rupert Gietl took the last (very close) detail photos, using a ladder.
The entire process lasted more or less four hours, but we needed some more time the day before to place the fixed GCPs down in the valley (in the international geographic coordinate system). A good part of the work involved just the logistics of the approach to the site, and was slowed by the transportation of the necessary equipment (ladder, total station and drone) through a couple of passages where it was necessary to climb some rocks.
It is interesting to note that it would not have been possible to accomplish this mission with a commercial drone, due to the embargo rules (which are currently under revision), while with a DIY hexacopter it was simple to disassemble the components which were not allowed (like the FPV system or the GPS-controlled flight).
I hope this post was useful, have a nice day!

Thursday, 2 July 2015

The archaeometric excavation

Last year, on November 28, Arc-Team joined the conference "Lo scavo archeometrico: scienza e tecnologia applicate allo scavo archeologico" (en: "The archaeometric excavation: science and technology applied to the archaeological excavation"), which was held in Rovereto (Italy) at the Museo Civico.
During the meeting we gave a presentation titled "Professional archaeology. Innovations and best practice with free technology. Toward an Open Research." Today I uploaded the slides on our server, so that we can share this work (as always under Creative Commons Attribution, CC BY).
As usual, the presentation was made with impress.js through the graphical user interface Strut (both GPL licensed) and is optimized for Firefox or Iceweasel (best viewed here).




Here is a brief explanation of the single slides:

SLIDE 1
A fast presentation regarding Arc-Team.
SLIDE 2
An animation representing the importance of geocoding in archaeology (from space to site).

SLIDE 3
Differential GPS and Total Station: the main tools needed by archaeologists in the field (to georeference every single element of the archaeological record).

SLIDE 4
Some examples of geocoding in archaeology: everyday work, projects in extreme conditions and missions abroad...

SLIDE 5
... surveys and excavations

SLIDE 6
In survey projects the geocoding tolerance for archaeology is higher, so we are testing alternative solutions to build a low-cost and open source GPS with centimetric accuracy, using the software RTKLIB (or its Android port).

SLIDE 7
All the recorded data (in 2D and 3D) can be imported into an open source GIS.

SLIDE 8
For aerial archaeology, since 2008 we have been working with open source DIY UAVs, like the UAVP or the KKcopter (in the slide).

SLIDE 9
Our latest UAV prototype and an example of a 3D point cloud from aerial pictures.

SLIDE 10
Since 2014 we have been testing DIY cameras (using the filter of Public Lab) for NDVI and NGB pictures in archaeological remote sensing.

SLIDE 11
Just by removing the IR filter, a normal camera can be used for endoscopic prospections in low-light conditions.

SLIDE 12
In the field of geophysical prospections we use a DIY machine for Electrical Resistivity Imaging. The data can be visualized in a GIS (e.g. GRASS GIS in the slide), using the east and north coordinates and the resistivity values.

SLIDE 13
Some geoarchaeological analyses can be performed directly in the field, like the sedimentation test (using the soil triangle) for the texture, or the lithologic recognition for the skeleton.

SLIDE 14
Some basic analytical chemistry can also help during the excavation (giving indications on the ancient use of the soil), verifying the presence or absence of phosphates or of organic remains.

SLIDE 15
Other preliminary laboratory analyses (flotation and sieving) can prepare the samples for further investigation. Also in this case we use a DIY machine.

SLIDE 16
Colorimetry can be performed in many ways. Currently we are testing different options, like the open source spectrometer of Public Lab.

SLIDE 17
For some laboratory geoarchaeological analyses (e.g. microscopic morphology) we use normal optical microscopes, while for more advanced studies we outsource the service (e.g. SEM or energy dispersive X-ray spectroscopy).

SLIDE 18
Currently we are testing the potential of the FLOSS MorphoJ to speed up the recognition of carpological remains.

SLIDE 19
To document archaeozoological remains in the field, we use the standard digital documentation techniques (in 2D and 3D) with FLOSS (e.g. two-dimensional photomapping with the Aramus method, or 3D recording through SfM and MVSR).

SLIDE 20
In the field of evolutionary anthropology we developed a new technique (anatomical deformation) thanks to the FLOSS Blender.

SLIDE 21
The same software (Blender) is used in the process of archaeological forensic facial reconstruction.

SLIDE 22
Open source GIS (e.g. GRASS) are the main software we use to process and manage the recorded data.

SLIDE 23
Thanks to open source UAVs and Blender, we experimented with new ways to disclose archaeological data in a four-dimensional way (x, y, z, t).



A more detailed explanation of the entire presentation will come soon with the related article. For the topics which were already discussed in ATOR, I suggest reading the related posts (see the webography below). For the latest experiments (e.g. near infrared, NDVI and NGB; Electrical Resistivity Imaging; sedimentation test; lithologic recognition in the field; flotation and sieving; colorimetry; microscopic morphology; MorphoJ), we will try to write something as soon as possible.

Bibliography

Lo scavo archeologico professionale, innovazioni e best practice mediante metodologie aperte e Open research (here on ResearchGate and here on Academia)

Webography (from ATOR):

3D and 4D GIS

SfM and MVSR

Aerial 3D documentation

Archaeological endoscopy

Geoarchaeology

Archaeobotany

Evolutionary anthropology
Anatomical Deformation Technique (ADT): validation; ADT Paranthropus boisei; ADT Homo rodhesiensis;

Archaeoanthropology
Archaeological Forensic Facial Reconstruction (AFFR); Digital AFFR: technique validation; AFFR: state of the art; AFFR: poster;

Archaeological dissemination
Caldonazzo Castle 4D (case of study);

Sunday, 15 February 2015

CloudCompare on Debian Wheezy by pinning from ArcheOS5 Theodoric


CloudCompare is a 3D point cloud processing software. Its deb package is already built for ArcheOS 5 Theodoric, under development on https://github.com/archeos/ArcheOS . To install it on Debian Wheezy (7.8), just add Theodoric's repo:
sudo nano /etc/apt/sources.list
then add these lines and save (ctrl+o, then ctrl+x):
# ArcheOS 5 Theodoric
deb http://repos.archeos.eu/apt theodoric main contrib non-free
To validate the gpg keys, write:
gpg --keyserver pgpkeys.mit.edu --recv-key 5AC5D028
gpg -a --export 5AC5D028 | sudo apt-key add -
and update sources.list with:

sudo apt-get update
 
Now you can install any software from Theodoric's repo, just by pinning with "sudo apt-get install -t theodoric package-name". For example, to install CloudCompare:
sudo apt-get install -t theodoric cloudcompare

that's all.

Monday, 3 November 2014

QGIS: exporting 3D data in threejs

Hi all,
I am continuing to record small videotutorials about the software in ArcheOS 5 (codename Theodoric), trying to collect more material for the official documentation.
In order to avoid creating "wasted food" (videotutorials which are not connected with a real project risk being useless, because they are too theoretical and too little practical), I am collecting examples from our (Arc-Team) work.
This time I will show how to export 3D data from QGIS and visualize them in a browser thanks to the nice plugin "Qgis2threejs". I had to do this kind of operation just to create some screenshots to complete this very simple illustration, which gives a geological overview of the working area:


Of course this is not the only way to produce 3D views (I could do the same in GRASS with NVIZ), but this workflow is very fast for a small project.

Here is the videotutorial (I hope it will be useful):



As usual, the video is also uploaded to our Digital Archaeological Documentation Project.
Have a nice day!

Monday, 19 May 2014

MicMac and PPT: two FLOSS solutions for 3D data

Hi all,
last week I finished my lectures in the Master Open Techne about Free Software and 3D data (acquisition and processing). As last year, I could spend a lot of time researching new solutions and testing some applications. Some months ago, thanks to my friend Romain Janvier, I was introduced to the use of MicMac, a suite for the three-dimensional documentation of reality developed by the Institut national de l'information géographique et forestière (IGN). Like Python Photogrammetry Toolbox, MicMac can produce point clouds from sets of photos. There are two different ways to acquire images:

- the ground geometry mode (useful for zenithal pictures, such as a drone data-set, or for wall elevations) = take pictures perpendicular to the surface (ground or wall), with 60% overlap (both between images and between strips of images)



- the image geometry mode (useful for any kind of object that has more than one surface) = take a "cross" of images starting from the central one and then up, down, left and right; then take other images moving to the second position (frontal to another surface) and again a "cross" of images; go on in this way for all the surfaces of the object that you need to reconstruct.



The data acquisition is a little more complicated than with PPT, both in the way to shoot and in the camera settings (keep the same level of zoom, no auto-focus, no stabilization, no flash, ...), but the final point cloud is denser. PPT is more user-friendly (thanks to the Python scripts and the GUI) but slower in processing data (mostly in the camera pose estimation step of Bundler).





One of the advantages of MicMac is its fast development, which is improving the software and simplifying its usage. I'm waiting for the GUI ;)
Unfortunately Bundler and CMVS/PMVS have not had a new release in years.

Wednesday, 30 April 2014

The Austro-Hungarian emplacements on top of Mt. Roteck

(2390m)
Dolomites / South-Tyrol 

A case study for extensive survey and documentation on the occasion of the 100th anniversary (May 2015) of the beginning of WW1 on the Italian front.

As reported on ATOR in summer 2013, Arc-Team is pushing ahead with the plan of mapping extensive areas of the high alpine frontline of WW1, from the Swiss border to the Dolomites.
Our approach consists of a very detailed DGPS survey, terrestrial structure from motion, geolocated images, archaeological description and aerial survey by our drone.
Of course we are basing the whole working process on open source software and, where possible, also on open hardware.
Now we want to share the latest version of a presentation originally given on the occasion of the 7th Fields of Conflict Conference in Budapest (Hungary), October 18-21, 2012.
It outlines the characteristics of the high alpine working environment, the nature of the WW1 remains, the challenges to meet, our project strategy and the first results.

Thursday, 5 December 2013

From drone-aerial pictures to DEM and ORTHOPHOTO: the case of Caldonazzo's castle

Hi all,
I would like to present the results we obtained in the Caldonazzo castle project. Caldonazzo is a touristic village in Trentino (northern Italy), famous for its lake and its mountains. Few people know about the medieval castle (12th-13th century) whose tower actually appears in the coat of arms of the town. Since 2006 the ruins have been the subject of a valorization project of the Soprintendenza Archeologica di Trento (dott.ssa Nicoletta Pisu). As Arc-Team, we participated in the project with archaeological field work, historical study, digital documentation (SfM/IBM) and 3D modeling.
In this first post I will speak about the 3D documentation, the aerial photography campaign and the data elaboration.



1) The 3D documentation 

One of the final aims of the project will be the virtual reconstruction of the castle. To achieve that goal we need (as a starting point) an accurate 3D model of the ruins and a DEM of the hill. The first model was realized in just two days of field work and four days of computer work (most of the time without a direct contribution of the human operator). The castle's walls were documented using Computer Vision (Structure from Motion and Image-Based Modeling); we used Python Photogrammetry Toolbox to elaborate 350 pictures (Nikon D5000) divided into 12 groups (external walls, tower inside, tower outside, palace walls, fireplace, ...).


The different point clouds were rectified thanks to some ground control points. Using a Trimble 5700 GPS, the GCPs were connected to the Universal Transverse Mercator coordinate system. The rectification process was led by GRASS GIS using the Ply Importer add-on.


To avoid some problems encountered when using a universal coordinate system in mesh editing software, we preferred, in this first step, to work with only three digits before the decimal point.
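In practice this means translating the UTM coordinates by a constant offset before mesh editing, and adding it back afterwards. A minimal sketch of the idea in Python (the offset and the coordinates below are hypothetical, not the project's real values):

```python
def to_local(points, offset):
    """Translate UTM points into a small local frame for mesh editing."""
    ox, oy = offset
    return [(x - ox, y - oy, z) for x, y, z in points]

def to_utm(points, offset):
    """Inverse translation, restoring the georeferenced coordinates."""
    ox, oy = offset
    return [(x + ox, y + oy, z) for x, y, z in points]

# Hypothetical UTM point; the offset is chosen so that easting and
# northing keep only three digits before the decimal point.
offset = (671000.0, 5095000.0)
local = to_local([(671234.56, 5095678.90, 480.2)], offset)
print(local)  # small, mesh-editor-friendly numbers
```

The round offset keeps coordinates well within single-precision range, which is what many mesh editors use internally and what causes the precision problems with full UTM values.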



2) The aerial photography campaign 

After the wall documentation we started a new campaign to acquire the data needed to model the surface of the hill (DEM) where the ruins lie. The best solution for taking zenithal pictures was to pilot an electric drone equipped with a video platform. Thanks to Walter Gilli, an expert pilot and builder of aerial vehicles, we had the possibility to use two DIY drones (a hexacopter and an xcopter) mounting Naza DJI technology (Naza-M V2 control platform).


Both drones had a video platform. The hexacopter mounted a Sony NEX-7; the xcopter a GoPro HD Hero3. The table below shows the differences between the two cameras.


As you can see, the Sony NEX-7 was the best choice: it has a big sensor, a high image resolution and a perfect focal length (16 mm, equivalent to 24 mm on 35 mm film). Its only disadvantages are its greater weight and dimensions compared to the GoPro; that's why we mounted the Sony on the hexacopter (more propellers = more lifting capability). The main problem of the GoPro is the ultra-wide angle of the lens, which distorts reality at the borders of the pictures.
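The trade-off between sensor size, resolution and focal length can be made concrete by estimating the ground sampling distance (GSD), i.e. the footprint of one pixel on the ground. A small sketch, using approximate NEX-7 specs (23.5 mm sensor width, 6000 px image width) and a hypothetical flight altitude; these numbers are assumptions for illustration, not measurements from the project:

```python
def gsd_cm(sensor_width_mm, image_width_px, focal_mm, altitude_m):
    """Ground sampling distance in cm/pixel for a nadir shot.
    Pixel pitch on the sensor, projected to the ground by similar
    triangles: GSD [m] = pixel_pitch_mm * altitude_m / focal_mm."""
    return (sensor_width_mm / image_width_px) * (altitude_m / focal_mm) * 100.0

# Approximate Sony NEX-7 with the 16 mm lens at a hypothetical 30 m altitude
print(round(gsd_cm(23.5, 6000, 16.0, 30.0), 2))  # under 1 cm per pixel
```

A longer focal length or a lower flight lowers the GSD (finer detail) at the cost of covering less ground per picture, which is exactly the balance discussed above.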
The flight plan (image below) allowed us to take zenithal pictures of the entire surface of the hill (one day of field work).


The best 48 images were processed with Python Photogrammetry Toolbox (one day of computer work). The image below shows the camera positions in the upper part, and the point cloud, the mesh and the texture in the lower part.


At first the point cloud of the hill was rectified to the same local coordinate system as the walls' point clouds. The gaps of the zenithal view were filled with the point clouds realized on the ground (image below).


After the data acquisition and data elaboration phases, we sent the final 3D model to Cicero Moraes to start the virtual reconstruction phase.


3) The Orthophoto

The orthophoto was realized using the texture of the SfM 3D model. We exported from MeshLab a high quality orthogonal image of the top view, which we then rectified using the Georeferencer plugin of QuantumGIS.
As an experiment we also tried to rectify an original picture using the same method and the same GCPs. The image below shows the difference between the two images. As you can see, the orthophoto matches the GPS data very well (red lines and red crosses), while the original picture shows some discrepancies in the left part (the area farthest from the drone position, which was zenithal over the tower's ruin).



4) The DEM

The DEM was realized by importing (and rectifying) the point cloud of the hill inside GRASS 7.0svn using the Ply Importer add-on. The text file containing the transformation info was built using the relative coordinates extracted from CloudCompare (Point list picking tool) and the UTM coordinates of the GPS GCPs.
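The transformation info mentioned above boils down to estimating the translation between the local frame of the point cloud and the UTM frame of the GCPs. Given matched pairs (one point picked in CloudCompare, one measured with the GPS), a simple estimate is the mean of the differences. A hypothetical sketch with made-up coordinates:

```python
def mean_translation(local_pts, utm_pts):
    """Estimate the local->UTM translation as the mean difference over
    matched point pairs (assumes no rotation or scale change)."""
    n = len(local_pts)
    dx = sum(u[0] - l[0] for l, u in zip(local_pts, utm_pts)) / n
    dy = sum(u[1] - l[1] for l, u in zip(local_pts, utm_pts)) / n
    dz = sum(u[2] - l[2] for l, u in zip(local_pts, utm_pts)) / n
    return (dx, dy, dz)
```

Averaging over several GCPs smooths out the picking error of any single point; if the cloud were also rotated or scaled, a full Helmert transformation would be needed instead of a plain translation.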




After data importing, we used the v.surf.rst command (regularized spline with tension) to transform the point cloud into a surface (DEM). The images below show the final result in 2D and 3D visualization.



Finally we imported the orthophoto into GRASS.



That's all.

Thursday, 20 June 2013

Kinect - Infrared prospections

Despite what I wrote at the end of this post, it looks like Kinect is not really the best option for archaeological underground documentation, or for any other situation in which it is necessary to work in darkness.
I had already tested the hardware and the software (RGBDemo) at home, simulating the light conditions of an underground environment, and the result was that Kinect managed to scan only some parts of an object (a small table) in 3D, with great difficulty.
My hope was that Kinect's infrared sensors would be enough to record object geometry even in darkness, as indeed happened. The problem is that RGBDemo probably also needs RGB values (from the normal camera) to work properly. Without colour information the final 3D model is obviously black (as you can see below), but (and this is the real difficulty) the software seems to lose a fundamental parameter for tracking the object being documented, so that the operations become too slow and, in most cases, it is not possible to complete the recording of a whole scene. In other words, the documentation process often stops, so that afterwards it is necessary to start again, or simply to save several partial scans of the scene and reassemble them later.
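The geometry itself really does not depend on visible light: each pixel of the depth image produced by the IR sensor can be back-projected to a 3D point through a pinhole camera model. A sketch, where fx, fy, cx and cy are rough, uncalibrated values assumed for the Kinect v1 IR camera:

```python
def depth_to_point(u, v, depth_mm, fx=580.0, fy=580.0, cx=319.5, cy=239.5):
    """Back-project depth pixel (u, v), depth in millimetres, to a 3D point
    in the camera frame using a pinhole model. The intrinsics are rough
    Kinect v1 approximations, not calibrated values."""
    z = depth_mm / 1000.0        # metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# The centre pixel at 2000 mm maps straight ahead, 2 m from the camera.
print(depth_to_point(319.5, 239.5, 2000))  # → (0.0, 0.0, 2.0)
```

This is why the depth map survives in darkness, while the RGB-based feature tracking of RGBDemo does not.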
However, before discarding Kinect as an option for 3D documentation in darkness, I wanted to do one more experiment in a real archaeological excavation and, some weeks ago, I found the right test area: an ancient family tomb inside a medieval church.
As you can see in the movie below, the structure was partially damaged, with a small hole on the north side. This hole was big enough to insert the Kinect into the tomb, so I could try to get a fast 3D overview of the inside, also to understand its real extent (which was not identifiable from the outside).




As I expected, it was problematic to record the 3D characteristics of such a dark room, but I got all the information I needed to estimate the real perimeter. I guess that on this occasion RGBDemo worked better because of the ray of light that, entering the underground structure and illuminating a small spot on the ground, gave the software a good reference point for tracking all the surrounding areas.
Since the poor video quality makes it difficult to evaluate the low resolution of the 3D reconstruction, you can get a better idea from this other short clip, where the final point cloud is loaded in MeshLab.



This new test of Kinect in a real archaeological excavation seems to confirm that this technology is not (yet?) ready for documentation in the complete absence of light. However, the most remarkable result of the experiment was the use of one of RGBDemo's tools, which shows the infrared input directly on a monitor. This option proved to be a good prospection instrument to explore and monitor the inside of the burial structure without other invasive methodologies. As you can see in the screenshot, it is possible to see the condition of the inside of the tomb and to recognize some of the objects lying on the ground (e.g. wooden planks or human bones), but of course this could also have been done simply with a normal endoscope and some LED lights (as we did on this occasion).

RGBDemo infrared view
However, here it is possible to compare what the Kinect's normal RGB sensor is able to "see" in darkness and what its infrared sensors can do:

PS
This experiment was possible thanks to the support of Gianluca Fondriest, who helped me in every single step of the workflow.

Wednesday, 12 June 2013

Paranthropus boisei - forensic facial reconstruction

In my first works involving forensic facial reconstruction, it was important to me to model everything from scratch. Beyond the modeling, I created all the textures and lighting anew in each work.


With time and experience, I noticed that some properties of those works were constantly repeated.

Because of this, I developed a methodology to make the reconstruction faster, both with humans and with hominids.

In this post I'll show you how the reconstruction of a Paranthropus boisei was done. The work had the help of the archaeologist Dr. Moacir Elias Santos, who took some excellent photos that were the basis of the 3D scanning with PPT-GUI.

Using CT scans of a Pongo pygmaeus and a Pan troglodytes (chimpanzee) as references, the muscles were modeled.

Because of the morphology, we decided to use a CT scan of a chimpanzee as the reference to be deformed and matched to the mesh of the P. boisei. We used InVesalius to reconstruct the CT scan into a 3D mesh.


As I deformed the skull, the skin took on the appearance of a new hominid.

The resulting mesh was the reference for the final model.

Instead of modeling the P. boisei from scratch, I imported the mesh of an Australopithecus afarensis to be deformed and matched to the skin base derived from the CT scan.

By editing the mesh it was possible to conform it to the skull and the muscles of the P. boisei.

The editing of the mesh in Blender's Sculpt Mode was done with a Wacom Bamboo digital tablet (CTL-470). Surprisingly, it was not necessary to install any driver on Ubuntu Linux.


To finish the work, I did the texturing and added the hair. The render was done with Cycles.

I hope you enjoyed it.

A big hug!

This work is licensed under a Creative Commons Attribution 4.0 International License.