Showing posts with label GRASS GIS. Show all posts

Saturday, 23 December 2017

Project Red Lake open data: 3D bathymetric chart

Hi all,
it has been a long time since our last post here on ATOR, but this year we worked on several different projects, without the possibility of reporting fast feedback on our blog.
I am starting to write again today because some of these projects caught the attention of different institutions in the academic world; in particular, this happened with our underwater archaeology missions.
Of course our primary interest during our dives is the archaeological perspective, but the data we collect can often be useful for other specialists (e.g. limnologists or biologists).
This is the reason why we decided to share our data, and we start today with the bathymetric chart of Lake Tovel (previous posts in ATOR: 1, 2). I processed this map while working on the Red Lake Project, a research project directed by Prof. +Tiziano Camagna that studies the medieval submerged forest of Lake Tovel (Trentino - Italy). I produced a 3D model of the bathymetric chart of this lake by directly digitizing the map Edgardo Baldi drew in the '30s. Then I calibrated the result with the LIDAR model of the landscape, freely accessible from the geographic open data portal of the Autonomous Province of Trento (here a short tutorial about how to download data from the webgis).
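The calibration idea can be illustrated with a minimal Python sketch: the digitized depths are shifted to absolute elevations using the lake-surface elevation read from the LIDAR model (the elevation value below is purely illustrative, not the one actually used in the project):

```python
# Sketch: convert digitized relative depths to absolute elevations,
# assuming a lake-surface elevation taken from the LIDAR DTM.
LAKE_SURFACE_ELEVATION = 1178.0  # metres a.s.l. -- illustrative value

def depth_to_elevation(depth_m):
    """Turn a bathymetric depth (positive, metres below the surface)
    into an absolute elevation comparable with the LIDAR landscape model."""
    if depth_m < 0:
        raise ValueError("depth must be non-negative")
    return LAKE_SURFACE_ELEVATION - depth_m

# e.g. a 39 m sounding becomes an elevation of 1139.0 m a.s.l.
```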

Here it is possible to download the file (an ESRI ASCII grid), ready to be integrated into most GIS software (below a screenshot of the data in GRASS GIS).
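For readers who want to inspect the grid without a full GIS, a minimal Python sketch of an ESRI ASCII grid reader might look like this (a simplified parser for illustration, not the code used to produce the chart):

```python
import numpy as np

def read_esri_ascii(path):
    """Minimal reader for an ESRI ASCII grid (.asc): six header lines
    (ncols, nrows, xllcorner, yllcorner, cellsize, NODATA_value)
    followed by rows of cell values, top row first."""
    header = {}
    with open(path) as f:
        for _ in range(6):
            key, value = f.readline().split()
            header[key.lower()] = float(value)
        data = np.loadtxt(f)
    # Replace the no-data marker with NaN so statistics ignore it easily
    nodata = header.get("nodata_value")
    if nodata is not None:
        data = np.where(data == nodata, np.nan, data)
    return header, data
```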

Tovel Lake bathymetric chart in GRASS GIS

The data are available under the following license:
Creative Commons License
Lake Tovel 3D bathymetric chart by Arc-Team is licensed under a Creative Commons Attribution 4.0 International License.

I hope this data will be useful. Have a nice day!

Saturday, 26 August 2017

Mapping high alpine lakes for archaeological explorations

Hi all,
as you can see, we have been writing few posts on ATOR during the summer season, due to different field projects which take us away from home. Today I am trying to dedicate some time to our research blog again.
The topic of this post regards a solution we are currently using to help us in the archaeological exploration of high alpine lakes: the documentation of the bathymetry with a low-cost sonar.
As you may know, for a couple of years we have been working on underwater archaeology projects in the alpine lakes of our region (here an example). This kind of exploratory mission is difficult due to the altitude of the sites we have to investigate (almost always over 2000 meters a.s.l.), so that our divers have to acclimatize for one whole day before starting work. Also for this reason we started to study archaeorobotics again and developed, together with our friends of WitLab, an open hardware ROV called ArcheoROV (in order to help divers in exploratory missions).


The ArcheoROV (photo by WitLab)
 
This year we focused our research on finding a cheap solution to map the bathymetry of the lakes, while WitLab went on working on the Wi-Fi buoy which gives our ROV long-range operability (compared to the limitation of simple control from the shore). For this reason we tested a cheap sonar called Deeper, which is normally used as a fishfinder.
We started our tests on Lake Tovel, thanks to the help of Prof. +Tiziano Camagna, who has been leading the exploration project for many years. This lake is almost our playground for developing and testing new solutions for underwater archaeology, since it is a difficult environment, but not an extreme one (like other high mountain lakes). We chose this location also because, unlike other lakes, its bathymetry was documented by Edgardo Baldi in the '30s. We had already digitized this map, processing a 3D model in GRASS GIS, so we had some data against which to check the results of our small sonar (as you can see in the image below).

On the left the map drawn by Edgardo Baldi between 1937 and 1938; on the right the 3D derived map developed by Arc-Team in GRASS GIS

Some more details of the 3D map developed with GRASS GIS

To test the Deeper sonar, Prof. +Tiziano Camagna designed a small buoy which can be towed by a kayak. This solution stabilizes the sonar (which always remains in the right position) and, at the same time, avoids its submersion (which causes the loss of the GPS signal).

The stabilization buoy developed by Prof. +Tiziano Camagna 

First positive results (image below) encouraged us to use this solution on a real mission, at Lake Monticello (almost 2600 meters a.s.l.), at Paradiso Pass (near Tonale Pass, Trentino, Italy).


A comparison between the digitized map of E. Baldi (on the left) and the map (work in progress) obtained with the Deeper sonar (on the right)

The expedition was also joined by our friends of Team Nauticamare (Massimiliano Canossa and Nicola Boninsegna) and gave us the opportunity to accomplish a first mapping of Lake Monticello during the first day of acclimatization. This helped us very much during the underwater archaeological mission of the second day. As a result we now have a good 3D map of the bathymetry of the lake, which we will also use in the next expedition (September 2017). Here below is a short video (made with the +QGIS plugin qgis2threejs), which shows the 3D model of the lake.




PS
I recorded some video tutorials related to the processing of these data. I will try to upload them to our channel ASAP.

Have a nice day!


Monday, 12 May 2014

WebRTIViewer

Hi all,
I am writing this post to complete the one +Rupert Gietl wrote regarding Large Scale Reflectance Transformation Imaging. As you read in that article, Rupert, using +GRASS GIS, virtually re-built the light conditions necessary to process an RTI image of an entire archaeological area.
This is just one of the tests we are carrying out with RTI techniques, since we are trying to evaluate this methodology from different angles. Obviously, during our experiments, we come across interesting research carried out by other institutions.
This post regards one of the projects we found on our way (I will write soon about other related works) and, more precisely, a software package to share RTI images over the internet: WebRTIViewer. The source code of the application, an HTML5-WebGL viewer, is released under the terms of the General Public License 3 (GPL 3) on the website of its author: +Gianpaolo Palma.
Here is an example of its application, using Rupert's data of the archaeological site (better visualized here). To see it, just turn on the light and, holding the left button, move your mouse around.





The software comes with two binary tools (one for Windows 32 bit and the other for Windows 64 bit), which are necessary to prepare the RTI images for the viewer. For this reason I wrote to Gianpaolo Palma to ask if there would be the possibility to include WebRTIViewer and the other applications in ArcheOS (to do this we would need access to the source code of the binary tools, called webGLRTIMaker). He kindly answered that he likes the idea and would agree, but before releasing the code of webGLRTIMaker under an open license he will ask the opinion of his lab colleagues (the Visual Computing Lab). This institute, part of the Italian CNR-ISTI, is the same one that develops other nice Free/Libre and Open Source (FLOSS) software useful in archaeology, such as MeshLab, which often appears in our posts, or 3DHOP (a post about it coming soon). Hopefully, if everything goes well, we will have another nice tool to add to the ArcheOS software selection, helping Cultural Heritage professionals share data through RTI technologies.

Here below you can see webRTIViewer in action again (better visualized here), this time with data coming from the archaeological excavation of Khovle Gora (in Georgia), where we work for the University of Innsbruck (Austria) and technically support the fieldwork directed by Dr. Walter Kuntner of the Institut für Alte Geschichte und Altorientalistik.



Tuesday, 22 April 2014

Arc-Team tries Large Scale Reflectance Transformation Imaging (RTI)


With the data collected during our mission, presented recently in the post „@MAP“ the Arc-Team Mobile Mapping Platform, we have tried for the first time to apply a method called Reflectance Transformation Imaging (RTI) to a landscape:

Aerial photo of the project area taken from Arc-Team's drone

„RTI is a computational photographic method that captures a subject’s surface shape and color and enables the interactive re-lighting of the subject from any direction. RTI also permits the mathematical enhancement of the subject’s surface shape and color attributes. The enhancement functions of RTI reveal surface information that is not disclosed under direct empirical examination of the physical object. (...) RTI images are created from information derived from multiple digital photographs of a subject shot from a stationary camera position. In each photograph, light is projected from a different known, or knowable, direction. This process produces a series of images of the same subject with varying highlights and shadows. Lighting information from the images is mathematically synthesized to generate a mathematical model of the surface, enabling a user to re-light the RTI image interactively and examine its surface on a screen.“ (http://culturalheritageimaging.org/Technologies/RTI/)
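The mathematical synthesis described above can be sketched in a few lines: in the classic Polynomial Texture Map variant of RTI, each pixel's intensity is fitted as a biquadratic polynomial of the light direction, and new light directions are evaluated against the fitted coefficients. A minimal per-pixel sketch (an illustration of the idea, not the CHI implementation):

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Fit Polynomial Texture Map coefficients for one pixel.
    light_dirs: (N, 2) array with the (lu, lv) components of N known
    light directions; intensities: (N,) observed pixel values."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Biquadratic basis: lu^2, lv^2, lu*lv, lu, lv, 1
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted polynomial for a new light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return float(basis @ coeffs)
```

Repeating the fit for every pixel yields the interactive re-lightable image.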

We used the processing software and viewer of Cultural Heritage Imaging; their RTIBuilder software is made available under the GNU General Public License ver. 3.


RTI is usually applied to objects of small or medium size because of the difficulty or impossibility of illuminating whole structures or even areas / landscapes.


At this point GIS comes to our aid:

Starting from a DTM it is easy to create shaded reliefs with GRASS GIS's module r.shaded.relief.
The highlight of the module in our case is the capability to modify the altitude of the sun in degrees above the horizon and the azimuth of the sun in degrees east of north.



In this way we could artificially produce the data needed for our RTI-landscape attempt.
The next step was to export from GRASS a set of 60 images with different lighting positions, creating an imaginary light dome around the object:
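The shading that r.shaded.relief computes can be approximated in a few lines of NumPy; the sketch below is a simplified stand-in for the GRASS module (the aspect convention is an assumption), together with one possible layout for a 60-position light dome:

```python
import numpy as np

def hillshade(dtm, azimuth_deg, altitude_deg, cellsize=1.0):
    """Simplified shaded relief of a DTM for a given sun azimuth
    (degrees east of north) and altitude (degrees above the horizon)."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(dtm, cellsize)          # terrain slopes per cell
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)                 # assumed aspect convention
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# A light dome of 60 positions: 12 azimuths x 5 altitudes
dome = [(az, alt) for alt in (15, 30, 45, 60, 75)
                  for az in range(0, 360, 30)]
```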


At this point we reached the first bottleneck of our approach:

Usually, you include at least one reflective sphere in each shot. 

The reflection of the light source on the spheres enables the processing software to calculate the light direction for that image. 

So we had to create and copy into every image a fake sphere with the reflection corresponding to the sunlight direction chosen in GRASS.
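The geometry behind the fake sphere can be sketched as follows: for a distant light and an orthographic view straight from above, the specular highlight sits where the surface normal is the half-vector between the light and view directions. A hypothetical helper (an illustration, not the procedure actually used) could place the highlight like this:

```python
import numpy as np

def highlight_offset(azimuth_deg, altitude_deg, sphere_radius_px):
    """Return the (x, y) pixel offset from the sphere centre where the
    highlight should be painted, for a sun at the given azimuth
    (degrees east of north) and altitude (degrees above the horizon),
    assuming an orthographic view from above."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    light = np.array([np.sin(az) * np.cos(alt),   # x: east
                      np.cos(az) * np.cos(alt),   # y: north
                      np.sin(alt)])               # z: up
    view = np.array([0.0, 0.0, 1.0])
    h = light + view                              # half-vector = mirror normal
    h /= np.linalg.norm(h)
    return h[0] * sphere_radius_px, h[1] * sphere_radius_px
```

With the sun at the zenith the highlight lands on the sphere centre; the lower the sun, the further the highlight drifts toward the rim.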

It was a stiff piece of work!

In the end everything was ready to process the images in RTIBuilder. The single steps in the software are very easy to execute and are well described in the ProcessingGuide.

We just had some problems with the size of our images (8200 x 6500 pixels), which the software couldn't process, but maybe it was because of the age of our hardware...

After reducing the image size everything worked fine...



In the end, after also installing RTIViewer, we held in our hands an interactive scene of an archaeological site of nearly 10.000 m2, which is almost invisible from the ground.


Monday, 21 April 2014

„@MAP“ the Arc-Team Mobile Mapping Platform


In summer 2013 Arc-Team was charged with the task of surveying a micro-DTM on an archaeological area of about 10.000 m2.
The underlying archaeological remains on the site cause small differences in height on the surface, and the shape of a nearly 60 x 60 meter structure was known from aerial photographs.


We had only 10 hours of fieldwork at our disposal, and exploring our options we ran some numbers:
  • Doing the job with a total station would allow us to take an average of 5 points per minute, which means a (very optimistic) total amount of 3000 points in 10 hours (2 operators).

  • Using our DGPS, the rate increases up to a maximum of 15 points per minute, working in continuous point capturing mode, with an operator on the field stepping forward, putting down the pole and balancing the bubble every 4 seconds. The total amount in this case is about 9000 points. This means an average of only 0,9 points / m2. That would be far too few...
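The number games above can be written down as a tiny helper (an illustrative sketch using the figures from the text):

```python
def survey_estimate(points_per_minute, hours, area_m2):
    """Back-of-the-envelope estimate for a survey method: total points
    captured in the available time, and the resulting point density."""
    total = points_per_minute * hours * 60
    return total, total / area_m2

# The two options weighed in the text, over 10 hours and ~10.000 m2:
total_ts, dens_ts = survey_estimate(5, 10, 10_000)     # total station
total_gps, dens_gps = survey_estimate(15, 10, 10_000)  # DGPS, continuous mode
```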


So what were we going to do?

On this occasion we had the idea to adapt a unicycle in order to have a rollable vehicle carrying the GPS antenna and maintaining a constant distance to the ground.
You can admire the result in the illustration below.



With the help of this tool we were able to increase the rate to 42 points / minute and a total amount of almost 25.000 points. This means an average of at least 2,5 points / m2.




The result of our efforts was quite lovely: GRASS GIS produced a high quality DTM from which we derived 3D views, isolines and shaded reliefs.

The official name of the trolley is „@MAP“ Arc-Team Mobile Mapping Platform. ;-)

Thursday, 5 December 2013

From drone-aerial pictures to DEM and ORTHOPHOTO: the case of Caldonazzo's castle

Hi all,
I would like to present the results we obtained in the Caldonazzo castle project. Caldonazzo is a tourist village in Trentino (North Italy), famous for its lake and its mountains. Few people know about the medieval castle (XII-XIII century) whose tower actually appears in the coat of arms of the town. Since 2006 the ruins have been the subject of a valorization project by the Soprintendenza Archeologica di Trento (dott.ssa Nicoletta Pisu). As Arc-Team we participated in the project with archaeological fieldwork, historical study, digital documentation (SFM/IBM) and 3D modeling.
In this first post I will talk about the 3D documentation, the aerial photography campaign and the data elaboration.



1) The 3D documentation 

One of the final aims of the project is the virtual reconstruction of the castle. To achieve that goal we need (as a starting point) an accurate 3D model of the ruins and a DEM of the hill. The first model was realized in just two days of fieldwork and four days of computer work (most of the time without a direct contribution of the human operator). The castle's walls were documented using Computer Vision (Structure from Motion and Image-Based Modeling); we used Python Photogrammetry Toolbox to elaborate 350 pictures (Nikon D5000) divided into 12 groups (external walls, tower-inside, tower-outside, palace walls, fireplace, ...).


The different point clouds were rectified thanks to some ground control points. Using a Trimble 5700 GPS, the GCPs were connected to the Universal Transverse Mercator coordinate system. The rectification process was carried out in GRASS GIS using the Ply Importer add-on.


To avoid some problems encountered when using the universal coordinate system in mesh editing software, we preferred, in this first step, to work with only three digits before the decimal point.
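The "three digits before the decimal point" trick amounts to shifting the coordinates to a local origin, since mesh editors that store vertices as 32-bit floats lose fine detail on full seven-digit UTM values. A sketch of the idea (the offset values below are illustrative, not the ones actually used):

```python
# Hypothetical local origin: subtracting it leaves only three digits
# before the decimal point; it is added back when the model returns to GIS.
OFFSET_E, OFFSET_N = 666_000.0, 5_100_000.0  # illustrative values

def to_local(easting, northing):
    """UTM -> local coordinates for mesh editing."""
    return easting - OFFSET_E, northing - OFFSET_N

def to_utm(x, y):
    """Local -> UTM coordinates when re-importing into GIS."""
    return x + OFFSET_E, y + OFFSET_N
```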



2) The aerial photography campaign 

After the wall documentation we started a new campaign to acquire the data needed for modeling the surface of the hill (DEM) where the ruins lie. The best solution for taking zenithal pictures was to pilot an electric drone equipped with a video platform. Thanks to Walter Gilli, an expert pilot and builder of aerial vehicles, we had the possibility to use two DIY drones (a hexacopter and a xcopter) mounting Naza DJI technology (Naza-M V2 control platform).


Both drones had a video platform. The hexacopter mounted a Sony Nex-7; the xcopter a GoPro HD Hero3. The table below shows the differences between the two cameras.


As you can see, the Sony Nex-7 was the best choice: it has a big sensor size, a high image resolution and a perfect focal length (16 mm digital = 24 mm compared to 35mm film). Its only disadvantage is its greater weight and size compared to the GoPro; that's why we mounted the Sony on the hexacopter (more propellers = more lifting capability). The main problem of the GoPro is the ultra-wide angle of the lens, which distorts reality at the borders of the pictures.
The flight plan (image below) allowed us to take zenithal pictures of the entire surface of the hill (one day of fieldwork).


The best 48 images were processed with Python Photogrammetry Toolbox (one day of computer work). The image below shows the camera positions in the upper part; the point cloud, the mesh and the texture in the lower part.


First, the point cloud of the hill was rectified to the same local coordinate system as the walls' point clouds. The gaps in the zenithal view were filled by the point clouds captured on the ground (image below).


After the data acquisition and data elaboration phases, we sent the final 3D model to Cicero Moraes to start the virtual reconstruction phase.


3) The Orthophoto

The orthophoto was realized using the texture of the SFM 3D model. We exported from MeshLab a high quality orthogonal image of the top view, which we then rectified using the Georeferencer plugin of QuantumGIS.
As an experiment we also tried to rectify an original picture using the same method and the same GCPs. The image below shows the difference between the two images. As you can see, the orthophoto matches the GPS data very well (red lines and red crosses), while the original picture has some discrepancies in the left part (the area farthest away from the drone position, which was zenithal over the tower's ruin).
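In its simplest form, a GCP-based rectification like the one done in the Georeferencer fits an affine transform from image to world coordinates by least squares. A minimal NumPy sketch of that model (an illustration, not the QGIS code):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping image (col, row) GCPs
    to world (E, N) coordinates. Needs at least 3 non-collinear GCPs."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.column_stack([src, np.ones(len(src))])   # rows: [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params                                    # shape (3, 2)

def apply_affine(params, pts):
    """Map image points through the fitted transform."""
    pts = np.asarray(pts, float)
    return np.column_stack([pts, np.ones(len(pts))]) @ params
```

With more than three GCPs the least-squares fit also averages out small measurement errors, which is why extra control points improve the result.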



4) The DEM

The DEM was realized by importing (and rectifying) the point cloud of the hill into GRASS 7.0svn using the Ply Importer add-on. The text file containing the transformation info was built using the relative coordinates extracted from Cloud Compare (Point list picking tool) and the UTM coordinates of the GPS GCPs.




After importing the data, we used the v.surf.rst command (regularized spline with tension) to transform the point cloud into a surface (DEM). The images below show the final result in 2D and 3D visualization.
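To illustrate what such an interpolation does (v.surf.rst itself uses regularized splines with tension; the sketch below substitutes a much simpler inverse-distance weighting), scattered survey points are turned into values on a regular grid:

```python
import numpy as np

def idw_surface(xs, ys, zs, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation, a simple stand-in for
    the spline interpolation of v.surf.rst: turn scattered (x, y, z)
    points into a regular grid of elevations."""
    xs, ys, zs = (np.asarray(a, float) for a in (xs, ys, zs))
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.empty_like(gx, dtype=float)
    for i in np.ndindex(gx.shape):
        d = np.hypot(xs - gx[i], ys - gy[i])
        if d.min() < 1e-12:              # grid node sits on a data point
            out[i] = zs[d.argmin()]
        else:
            w = 1.0 / d**power
            out[i] = (w * zs).sum() / w.sum()
    return out
```

Unlike IDW, the spline in v.surf.rst produces a smooth differentiable surface, which is why it is preferred for DEMs; the sketch only shows the gridding idea.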



Finally we imported the orthophoto into GRASS.



That's all.

Wednesday, 9 November 2011

More info about the archaeological automatic drawing technique

Yesterday I was looking at the statistics of this blog and I noticed that one of the most popular posts is the one about the automatic drawing technique we (Alessandro Bezzi, Simone Cavalieri and me) proposed some years ago. I also noticed that I forgot to upload to Arc-Team's open library the presentation we gave in Foggia (at ArcheoFOSS 5) on this topic (sorry, just in Italian for now...). Now the link is active and you can download the presentation here, or on Academia.edu.
As the slides are in Italian, I summarize here the experiment we did on that occasion. We divided archaeological finds into four classes, according to the kind of documentation they normally need.

1) photographic documentation (e.g. coins)
2) simple drawing (e.g. flint)
3) drawing + shading (e.g. normal artefacts)
4) drawing + shading + section (e.g. pottery)


Then we developed a four-step technique to get the appropriate documentation for each class in an automatic or semi-automatic way (using only FLOSS, of course):


  1. rectified photo (GRASS - efoto)
  2. rectified photo + vector drawing (GRASS - efoto - OpenJUMP)
  3. rectified photo + vector drawing + shading (GRASS - efoto - OpenJUMP - stippler - Inkscape)
  4. rectified photo + vector drawing + shading + section (GRASS - efoto - OpenJUMP - stippler - Inkscape - hardware)


Here is an image with the original picture of the archaeological finds we used as test and the final layout.


All the finds come from the excavation in the church of S. Andrea in Storo (TN - Italy) and gave us positive results (I just used too many points in stippler for the drawing of the pottery... anyway it is now easy to change this parameter with the new Python interface Alessandro developed).
In the slides you will also find our first test on Lena picture:


The image has nothing to do with sexism; she is just a kind of standard for raster image tests since the '70s... by the way, she is beautiful :)

2016-04-28 Post updated

In 2010 we wrote an article (in Italian) about this technique:

"Proposta per un metodo informatizzato di disegno archeologico" (here in ResearchGate and here in Academia).
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.