
Friday, 11 December 2020

4D in archaeology: 3D documentation VS 3D reconstruction

 Hi everybody,

On 23 April 2020 I was asked by my friend Piergiovanna Grossi to give a lesson about 3D and archaeology at the University of Verona. Unfortunately I cannot share this lesson here (at least not yet), due to some restrictions. Nevertheless I would like to write a quick post about one of the topics that seem to have surprised the students: the difference, in archaeology, between 3D documentation and 3D reconstruction.

To keep it simple (KISS principle), consider that we can describe our reality (at least in a simple way) in 4D, through three spatial dimensions (x, y, z) and a temporal dimension (t). For this reason, when we work on an archaeological project (excavation, survey, etc.) and we want to document something, we have to be aware that we are not simply registering data in 3D: we are making a digital copy of the object of our investigation during a specific time span, recording its physical aspect (morphology) as it is at the moment in which we are working on it. In other words, we are recording a 3D model of the object as we see it now (though it would be more correct to say that we are recording it in 4D: x, y, z and t). This is what we call archaeological documentation, but we have to keep in mind that the object as we see it can be very different from the shape it had in the past (just as the ruins of a castle are different from the castle itself). Moreover, a single object may have had various shapes over time (a castle could be the result of several architectural stages). This leads us to the main difference between archaeological documentation (which records the object as we see it when we study it) and archaeological reconstruction (which tries to rebuild the original shape the object had in the past).

This difference is important also because, in Digital Archaeology, 3D documentation and 3D reconstruction are performed with different kinds of software. In the first case we can use SfM-MVS techniques with FLOSS like Meshroom, OpenMVG, etc., while in the second we use 3D suites like Blender, even if, recently, Cicero Moraes wrote an add-on able to join these two aspects into a single application: OrtogOnBlender.

Of course, working in Arc-Team together with Cicero Moraes, it is obvious for me to mention him on this topic, but this is due not only to his effort in developing OrtogOnBlender, but also to the fact that, in order to explain to the students this fundamental difference between archaeological documentation and reconstruction, I found that the best way was to show some examples from our past projects related to Forensic Facial Reconstruction (FFR). In fact I started by showing some examples regarding the medieval site of Torre dei Sicconi and the Roman site of Villa di Valdonega, like the image below...

 

The Roman site of Villa di Valdonega: 3D documentation (top) and 3D reconstruction (bottom)

... but everything suddenly became much clearer when I showed an example of FFR, like this one:

 

The FFR of St. Valentine of Monselice: on the left, the reconstructive model; on the right, the documentation of the skull

Indeed, during an archaeological FFR project, it is pretty simple to understand that the 3D model of the skull represents a 3D documentation, while the 3D model of the face is a 3D reconstruction.

I hope this post was useful. Have a nice day!



Wednesday, 24 October 2018

openMVGScript

Hi all,
this quick post regards the FLOSS openMVG. This software is our first choice for documenting archaeological evidence in 3D (via SfM) from the ground level (for Aerial Archaeology we often use MicMac).
Since, in the last few years, we have started to gradually abandon simple 2D documentation, our use of openMVG has increased significantly. For this reason we developed a small script to speed up the use of this software (without its GUI: openMVG-GUI), adding some preliminary operations (like a general quality reduction of the pictures via ImageMagick) and registering some statistics about the whole process. The script is released under the GNU General Public License and is freely downloadable here. At the same address (on GitHub), you can help us improve the script. As always, any kind of help is greatly appreciated (including simple language translations, since the script is currently in Italian).
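To give an idea of what the script automates, here is a minimal bash sketch of the same logic (not the actual openMVGScript: the folder layout, the log file and the sensor database path are illustrative assumptions; the openMVG binaries are the standard ones shipped with the library, but their options change between versions):

    #!/bin/bash
    # Minimal sketch of an openMVG wrapper: resize the photos, run the
    # SfM chain and register some statistics about the whole process.
    set -e
    PROJECT=$1                        # folder containing an "images" subfolder
    MATCHES="$PROJECT/matches"
    RECON="$PROJECT/reconstruction"
    LOG="$PROJECT/statistics.log"
    mkdir -p "$MATCHES" "$RECON"

    # Preliminary operation: reduce the pictures to max 2000 px (ImageMagick)
    mogrify -resize '2000x2000>' "$PROJECT"/images/*.jpg

    start=$(date +%s)
    openMVG_main_SfMInit_ImageListing -i "$PROJECT/images" -o "$MATCHES" \
        -d /usr/share/openMVG/sensor_width_camera_database.txt
    openMVG_main_ComputeFeatures -i "$MATCHES/sfm_data.json" -o "$MATCHES"
    openMVG_main_ComputeMatches  -i "$MATCHES/sfm_data.json" -o "$MATCHES"
    openMVG_main_IncrementalSfM  -i "$MATCHES/sfm_data.json" -m "$MATCHES" -o "$RECON"
    end=$(date +%s)

    # Register some statistics about the whole process
    echo "$(ls "$PROJECT/images" | wc -l) images processed in $((end-start)) s" >> "$LOG"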

Archaeological 3D done with openMVGScript (image quality reduced to 2000 px)


In the near future we would like to use ImageMagick to add a variable to the script, in order to optimize pictures for underwater 3D archaeological documentation, following the methodology we used in some of our past missions.
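As a preview of that methodology, one common ImageMagick recipe for recovering the colours of underwater pictures is a per-channel histogram stretch, which compensates for the strong blue/green cast (a sketch, not necessarily the exact commands the script will adopt):

    # Separate the RGB channels, normalize each one independently and
    # recombine them: a simple automatic colour restoration for
    # underwater photos, applied before the SfM processing.
    for f in *.jpg; do
        convert "$f" -separate -normalize -combine "corrected_$f"
    done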

The process statistics reported by the script

Have a nice day!

Sunday, 3 September 2017

3D documentation of ancient millstones: preliminary tests


The traditional drawing of ancient millstones consists of a plan (possibly with shading to give a sense of the three dimensions) and one or more cross sections of the object. This is not always easy because of the dimensions, weight and sometimes shape (pre-Roman millstones, in particular, are irregular and asymmetric) of this type of artefact. Furthermore, millstones are generally kept in museums or storerooms: in these places it is often difficult to move the objects, or to have enough time to draw quietly and check all the details well. In short, drawing a millstone is not like drawing a sherd of pottery!
For these reasons, it could be useful to apply a methodology based on Structure from Motion (SfM) techniques in addition to the traditional drawing methods. In this post I'm going to present the preliminary results of a test aimed at the three-dimensional documentation of a fragment of an Iron Age millstone from Northern Italy (a so-called "Olynthus mill" or "hopper rubber").

The first step was the construction of a rectangular wooden frame made of 4 rods of different lengths (40, 60, 80 cm), so that it is possible to build frames of different areas according to the dimensions of the millstone to be drawn. On the surface of the rods, equally spaced cross marks are drawn: these marks will be used as reference points with known coordinates for the rectification of the 3D point cloud, mesh and (possibly) texture (something like GCPs, Ground Control Points).


Four height-adjustable bolts hold the frame together and allow it to be levelled perfectly. Once the frame is ready, you need to place the millstone inside it, in such a way as to leave sufficient space between the stone and the rods for taking pictures.
Some recognizable markers should be placed at different points of the millstone: these are for aligning and merging the two point clouds that will be generated (see below). A simple solution is to use small spheres of coloured modelling clay, clearly visible in the point clouds.



At this stage, you start with the typical SfM workflow. You take an appropriate number of pictures of the upper surface first; then you do the same for the lower surface, turning the millstone upside down inside the frame.
The pictures can be processed with the software you prefer. I used Regard3D / OpenMVG to generate two point clouds (one for the upper surface and one for the lower) and CloudCompare to edit/clean the point clouds and to perform their rectification (thanks to the cross marks on the frame), alignment and merging (thanks to the coloured markers on the stone). CloudCompare and MeshLab have also been used to generate meshes and to compute other parameters, among which the measurements.
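For the record, recent versions of CloudCompare can also run part of this workflow from the command line; a minimal sketch of the fine registration step (file names are illustrative and the options depend on the CloudCompare version; the coarse alignment on the coloured markers is still done manually in the GUI):

    # Fine ICP registration of the two half-clouds after the coarse
    # manual alignment on the markers; the registered clouds are saved as PLY.
    CloudCompare -SILENT \
        -O upper_surface.ply -O lower_surface.ply \
        -ICP -OVERLAP 30 \
        -C_EXPORT_FMT PLY -SAVE_CLOUDS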



The final result is a point cloud and a mesh of the millstone.


Using MeshLab you could also obtain the texture of the object, but for my aims a 3D model (point cloud or mesh) from which I can get a plan, some cross sections and all the measurements I need is enough. Thanks to these data, I can detail or check my handmade drawing, or do it from scratch.


In conclusion, the use of a homemade wooden frame makes data acquisition for SfM easier and more precise, and makes the documentation of this kind of artefact faster and more complete. The method described leaves room for improvements and developments; it could become a "standard" documentation technique for ancient millstones and for other archaeological objects with analogous drawing issues.

Thank’s to Alessandro Bezzi (Arc-Team).

Denis Francisci




Wednesday, 28 December 2016

The devils' boat

This year, thanks to Prof. Tiziano Camagna, we had the opportunity to test our methodologies during a particular archaeological expedition, focused on the localization and documentation of the "devils' boat".
This unusual wreck is a small boat built by Italian soldiers, the "Alpini" of the battalion "Edolo" (nicknamed the "Adamello devils"), during World War I, near the mountain hut J. Payer (as reported in Luciano Viazzi's book "I diavoli dell'Adamello").
The mission was a derivation of the project "La foresta sommersa del lago di Tovel: alla scoperta di nuove figure professionali e nuove tecnologie al servizio della ricerca” ("The submerged forest of lake Tovel: discovering new professions and new technologies at the service of scientific research"), a didactic program conceived by Prof. Camagna for the high school Liceo Scientifico B. Russell of Cles (Trentino - Italy).
As already mentioned, the target of the expedition was the small boat currently lying on the bottom of lake Mandrone (Trentino - Italy), previously localized by Prof. Camagna and later photographed during an exploration in 2004. The lake is located at 2450 meters above sea level. For this reason, before involving the students in such a difficult underwater project, a preliminary mission was accomplished, in order to check the general conditions and perform some basic operations. This first mission was directed by Prof. Camagna and supported by the archaeologists of Arc-Team (Alessandro Bezzi and Luca Bezzi for the underwater documentation, Rupert Gietl for the GNSS/GPS localization and boat support), by the explorers of the Nautica Mare team (Massimiliano Canossa and Nicola Boninsegna) and by the experts of Witlab (Emanuele Rocco, Andrea Saiani, Simone Nascivera and Daniel Perghem).
The primary target of the first mission (26 and 27 August 2016) was the localization of the boat, since the exact place where the wreck lay was not known. Once the boat had been re-discovered, all the operations necessary to georeference the site were performed, so that the team of divers could concentrate on the proper archaeological documentation of the boat. In addition to the objectives mentioned above, the mission was an occasion to test for the first time, in a real operating scenario, the ArcheoROV, the open hardware ROV developed by Arc-Team and WitLab.
Target 1 was achieved quickly and easily during the second day of the mission (the first day was dedicated to the divers' acclimatization at 2450 m a.s.l.), since the weather and environmental conditions were particularly good, so that the boat was visible from the lake shore. Target 2 was reached by positioning the GPS base station on a referenced point of the "Comitato Glaciologico Trentino" ("Glaciological Committee of Trentino") and using the rover, from an inflatable kayak, to register some Control Points on the surface of the lake, connected through a reel with strategic points on the wreck. Target 3 was completed by collecting pictures for a post-mission 3D reconstruction through simple SfM techniques (already applied in underwater archaeology). The open source software packages used in post-processing were PPT and openMVG (for the 3D reconstruction), MeshLab and CloudCompare (for mesh editing), MicMac (for the orthophoto) and QGIS (for the archaeological drawing), all of them running on the (still) experimental new version of ArcheOS (Hypatia). Unlike what we did in other projects, this time we preferred to recover the original colours of the underwater photos (to help the SfM software in the 3D reconstruction), using a series of commands of the open source software suite ImageMagick (soon I'll write a post about this operation). Once the primary targets were completed, the spare time of the first expedition was dedicated to secondary objectives: testing the ArcheoROV (as mentioned before), with positive feedback, and the 3D documentation of the landscape surrounding the lake (to improve the free LIDAR model of the area).
What could not be foreseen in the first mission was serendipity: before emerging from the lake, the divers of the Nautica Mare team (Nicola Boninsegna and Massimiliano Canossa) found a tree on the bottom of the lake. From an archaeological point of view, it was soon clear that this could be an important discovery, as the surrounding landscape (periglacial grassland) is without wood (the timberline lies almost 200 meters below). The technicians of Arc-Team geolocated the trunk with the GPS, in order to perform a sampling during the second mission.
For this reason, the second mission changed its priority and was focused on recovering core samples by drilling the submerged tree. Later analysis (performed by Mauro Bernabei, CNR-IVALSA) demonstrated that the tree was a Pinus cembra L., with the last ring dated back to 2931 B.C. (4947 years old). Nevertheless, the expedition maintained its educational purpose, teaching the students of the Liceo Russell the basics of underwater archaeology and performing with them some tests of a low-cost sonar, in order to map part of the lake bottom.
All the operations performed during the two underwater missions are summarized in the slides below, which come from the lesson I gave to the students in order to complete our didactic task at the Liceo B. Russell.



Acknowledgements

Prof. Tiziano Camagna (Liceo Scientifico B. Russell), for organizing the missions

Massimiliano Canossa and Nicola Boninsegna (Nautica Mare Team), for the professional support and for discovering the tree

Mauro Bernabei and the CNR-IVALSA, for analysing and dating the wood samples

The Galazzini family (tenants of the refuge “Città di Trento”), for the logistic support

The wildlife park “Adamello-Brenta” and the Department for Cultural Heritage of Trento (Office of Archaeological Heritage) for close cooperation

Last but not least, Dott. Stefano Agosti, Prof. Giovanni Widmann and the students of the Liceo B. Russell: Borghesi Daniele, Torresani Isabel, Corazzolla Gianluca, Marinolli Davide, Gervasi Federico, Panizza Anna, Calliari Matteo, Gasperi Massimo, Slanzi Marco, Crotti Leonardo, Pontara Nicola, Stanchina Riccardo


Wednesday, 7 December 2016

Comparing 7 photogrammetry systems. Which is the best one?


by Cicero Moraes
3D Designer of Arc-Team.

When I explain to people that photogrammetry is a 3D scanning process based on photographs, I always get a look of mistrust, as it seems too fantastic to be true. Just imagine: take several pictures of an object, send them to an algorithm, and it returns a textured 3D model. Wow!

After presenting the model, the second question always orbits around precision. What is the accuracy of a 3D scan from photos? The answer is: submillimetric. And again I am met with a look of mistrust. Fortunately, our team wrote a scientific paper about an experiment that showed an average deviation of 0.78 mm, that is, less than one millimeter compared to scans done with a laser scanner.

Just like the market for laser scanners, in photogrammetry we have numerous software options to carry out the scans. They range from proprietary and closed solutions to open and free ones. And precisely, in the face of this variety of programs and solutions, comes the third question, hitherto unanswered, at least officially:

Which photogrammetry software is the best?

This is more difficult to answer, because it depends a lot on the situation. But thinking about it, and in the face of the many approaches I have tried over time, I decided to respond in the way I thought was broadest and fairest.


The skull of the Lord of Sipan


In July 2016 I traveled to Lambayeque, Peru, where I stood face to face with the skull of the Lord of Sipan. In analyzing it, I realized that it would be possible to reconstruct his face using the forensic facial reconstruction technique. The skull, however, was broken and deformed by the years of pressure it had suffered in its tomb, found intact in 1987 in one of the greatest feats of archaeology, led by Dr. Walter Alva.


To reconstruct the skull I took 120 photos with an Asus Zenfone 2 smartphone, and with these photos I proceeded with the reconstruction work. In parallel, professional photographer Raúl Martin, from the Marketing Department of the Inca University Garcilaso de la Vega (sponsor of my trip), took 96 photos with a Canon EOS 60D camera. Of these, I selected 46 images for the experiment.

Specialists of the Ministry of Culture of Peru starting the digitization of the skull (in the center)


The day after the photographic survey, the Peruvian Ministry of Culture sent specialists in laser scanning to scan the skull of the Lord of Sipan with a Leica ScanStation C10. The final point cloud was delivered 15 days later; that is, by the time I received the data from the laser scanner, all the models surveyed by photogrammetry were ready.

We had to wait for this data, since the model produced by this equipment is the gold standard: all the meshes generated by photogrammetry would be compared, one by one, with it.

Full point cloud imported into MeshLab after the conversion done in CloudCompare
The point clouds resulting from the scan came as .LAS and .E57 files ... and I had never heard of these formats before. I had to do a lot of research to find out how to open them on Linux using free software. The solution was CloudCompare, which offers the possibility of importing .E57 files. I then exported the model as .PLY, to be able to open it in MeshLab and reconstruct the 3D mesh through the Poisson algorithm.
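For reference, the conversion can also be scripted with CloudCompare's command-line mode (a sketch with an illustrative file name; options may vary between versions):

    # Convert the laser scanner .E57 file to .PLY for MeshLab
    CloudCompare -SILENT -O sipan_skull.e57 -C_EXPORT_FMT PLY -SAVE_CLOUDS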

3D mesh reconstructed from the point cloud: vertex colors (above) and surface with a single color (below)

As you can see above, the jaw and the surface of the table where the pieces were placed were also scanned. The part related to the skull was isolated and cleaned for the experiment. I will not deal with these details here, since the scope is different; I have already written other material explaining how to delete unimportant parts of a point cloud / mesh.

For the scanning via photogrammetry, the chosen systems were:

1) OpenMVG (Open Multiple View Geometry library) + OpenMVS (Open Multi-View Stereo reconstruction library): the sparse point cloud is computed by OpenMVG and the dense point cloud by OpenMVS.

2) OpenMVG + PMVS (Patch-based Multi-view Stereo Software): the sparse point cloud is computed by OpenMVG and then PMVS computes the dense point cloud.

3) MVE (Multi-View Environment): a complete photogrammetry system.

4) Agisoft® Photoscan: a complete, closed photogrammetry system.

5) Autodesk® Recap 360: a complete online photogrammetry system.

6) Autodesk® 123D Catch: a complete online photogrammetry system.

7) PPT-GUI (Python Photogrammetry Toolbox with graphical user interface): the sparse point cloud is generated by Bundler and then PMVS generates the dense cloud.

* Run on Linux under Wine (PlayOnLinux).

Above is a table summarizing important aspects of each of the systems. In general, at least apparently, there is not one system that stands out much more than the others.


Sparse cloud generation + dense cloud generation + 3D mesh + texture; the time to upload the photos and download the 3D mesh (in the cases of Recap 360 and 123D Catch) is not considered.

Alignment based on compatible points

Aligned skulls
All meshes were imported into Blender and aligned with the laser scan.


Above we see all the meshes side by side. Some surfaces are so dense that we notice only the edges, as in the case of the 3D laser scan and OpenMVG + PMVS. First, a very important point: the texture of the scanned meshes tends to deceive us about the quality of the scan, so in this experiment I decided to ignore the texture and focus on the 3D surface. Therefore, I exported all the original models in .STL format, which carries no texture information.


Looking closely, we see that even the mesh with the fewest subdivisions is consistent with the original object. The ultimate goal of the scan, at least in my work, is to get a mesh that is consistent with the original object. If this mesh is simplified, as long as it is in harmony with the real volumetric aspect, it is even better: the fewer faces a 3D mesh has, the faster it is to process during editing.


If we look at the file sizes (.STL exported without texture), which are a good comparison parameter, we see that the mesh created by OpenMVG + OpenMVS, already cleaned, takes 38.4 MB, while the Recap 360 one takes only 5.1 MB!

After years of working with photogrammetry, I have realized that the best thing to do when we come across a very dense mesh is to simplify it, so that we can handle it smoothly in real time. It is difficult to know if this is indeed what happens, as these are proprietary and closed solutions, but I suppose both Recap 360 and 123D Catch generate complex meshes and then simplify them considerably at the end of the process, so that they run on any hardware (PCs and smartphones), preferably with WebGL support (interactive 3D in the internet browser).

We will return to this question of mesh simplification shortly; let us now compare the meshes.

How 3D Mesh Comparison Works


Once all the skulls had been cleaned and aligned to the gold standard (the laser scan), it was time to compare the meshes in CloudCompare. But how does this 3D mesh comparison technology work?

To illustrate this, I created some didactic elements. Let's look at them.


This didactic element consists of two planes with zero-thickness surfaces (this is possible in 3D digital modeling) forming an X.


So we have object A and object B. At the outer ends on both sides, the planes are some millimeters apart; where they intersect, the distance is, of course, 0 mm.


When we compare the two meshes in CloudCompare, they are colored with a spectrum that goes from blue to red. The image above shows the two planes already colored, but we must remember that they are two distinct elements and the comparison is made in two steps, one element against the other.

Now we have a clearer idea of how it works. Basically, we set a distance limit, in this case 5 mm. What is "outside" tends to be colored red, what is "inside" tends to be colored blue, and what is at the intersection, i.e. on the same line, tends to be colored green.


Now I will explain the approach taken in this experiment. Above we have an element whose central region tends to zero and whose ends are set at +1 and -1 mm. It does not appear in the image, but the element used for the comparison is a simple plane positioned at the center of the scene, right at the base of the 3D bells, both those "facing upwards" and those "facing downwards".


As I mentioned earlier, we set the comparison limit. Initially it was set at +2 and -2 mm. What if we change this limit to +1 and -1 mm? This was done in the image above: part of the surface now falls out of bounds.


In order for these off-limits parts not to interfere with visualization, we can erase them.


This results in a mesh comprising only the part of the structure we are interested in.

For those who understand a bit more about 3D digital modeling, it is clear that the comparison is made on the vertices rather than the faces. Because of this, we get a serrated edge.
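Incidentally, the same cloud-to-mesh distance computation can be launched from CloudCompare's command-line mode; a sketch (illustrative file names, and option names can differ between CloudCompare versions):

    # Compute the signed distances from the photogrammetry model to the
    # reference mesh, keeping only the points within the chosen limit
    # (here 1.0, i.e. +/- 1 mm if the clouds are expressed in mm).
    CloudCompare -SILENT \
        -O photogrammetry_skull.ply \
        -O laser_scan_skull.ply \
        -C2M_DIST -MAX_DIST 1.0 \
        -C_EXPORT_FMT PLY -SAVE_CLOUDS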

Comparing Skulls


The comparison was PHOTOGRAMMETRY vs. LASER SCANNING, with limits of +1 and -1 mm. Everything outside that range was erased.


OpenMVG+OpenMVS


OpenMVG+PMVS


Photoscan


MVE


Recap 360


123D Catch


PPT-GUI


Putting all the comparisons side by side, we see that there is a strong tendency towards zero: the seven photogrammetry systems are effectively compatible with laser scanning!


Let's now turn to the issue of file sizes. One thing that has always bothered me in comparisons involving photogrammetry results is the counting of the subdivisions generated by the mesh reconstruction algorithms. As I mentioned above, this does not make much sense, since in the case of the skull we can simplify the surface and it still keeps the information necessary for anthropological survey and forensic facial reconstruction work.

In the face of this, I decided to level all the files, making them comparable in size and subdivision. To do this, I took as a baseline the smallest file, the one generated by 123D Catch, and used MeshLab's Quadric Edge Collapse Decimation filter set to 25,000. This resulted in 7 STLs of 1.3 MB each.
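This kind of batch leveling can also be automated with meshlabserver (a sketch; the filter and parameter names can vary between MeshLab releases, and the file names are illustrative):

    # decimate.mlx is a MeshLab filter script (it can be saved from the
    # MeshLab GUI after applying the decimation filter once) that calls
    # "Simplification: Quadric Edge Collapse Decimation" with a target
    # of 25,000 faces.
    for mesh in *.stl; do
        meshlabserver -i "$mesh" -o "leveled_$mesh" -s decimate.mlx
    done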

With this leveling we now have a fair comparison between photogrammetry systems.


Above we can visualize the work steps. In the Original field the skulls are shown as initially aligned; in Compared we see the skulls with only the areas of interest kept; and finally, in Decimated, we have the skulls leveled in size. To an unsuspecting reader it looks like a single image placed side by side.


When we visualize the comparisons as "solid" surfaces, we realize better how compatible they all are. Now, let's go to the conclusions.


Conclusion


The most obvious conclusion is that, overall, with the exception of MVE, which showed less definition in the mesh, all the photogrammetry systems had very similar visual results.

Does this mean that the MVE is inferior to the others?

No, quite the opposite. MVE is a very robust and practical system. On another occasion I will present its use in a case of prosthesis making with millimetric quality. Beyond that case, it was also used in other prosthesis-making projects, a field that demands a lot of precision, and it was successful. One case was even published on the official website of Darmstadt University, the institution that develops it.

Which is the best system overall?

It is very difficult to answer this question, because it depends a lot on the user's style.

What is the best system for beginners?

Undoubtedly, it's Autodesk® Recap 360. This is an online platform that can be accessed from any operating system with an Internet browser supporting WebGL. I have even tested it directly on my smartphone and it worked. In the courses I teach about photogrammetry, I have used this solution more and more, because students tend to understand the process much faster than with the other options.

What is the best system for modeling and animation professionals?

I would suggest Agisoft® Photoscan. It has a graphical interface that makes it possible, among other things, to create a mask over the region of interest of the photogrammetry, as well as to limit the calculation area, drastically reducing the processing time. In addition, it exports to the most varied formats, and offers the possibility to show where the cameras were at the moment they photographed the scene.

Which system do you like the most?

Well, personally I appreciate all of them in certain situations. My favorite today is the mixed OpenMVG + OpenMVS solution. Both are open source and can be driven from the command line, allowing me to control a series of properties, adjusting the scan to the need at hand, be it to reconstruct a face, a skull or any other piece. Although I really like this solution, it has some problems, such as the misalignment of the cameras in relation to the models when the sparse cloud scene is imported into Blender. To solve this I use PPT-GUI, which generates the sparse cloud with Bundler, and there the match, that is, the alignment of the cameras in relation to the cloud, is perfect. Another problem with OpenMVG + OpenMVS is that it occasionally does not generate a full dense cloud, even when the sparse cloud shows all the cameras aligned. To solve this I use PMVS which, although generating a less dense cloud than OpenMVS, is very robust and works in almost all cases.

Another problem with the open source options is the need to compile the programs. Everything works very well on my computers, but when I have to pass the solutions on to students or other interested people it becomes a big headache. For the end user, what matters is having a piece of software into which images enter on one side and a 3D model leaves on the other, and this is what the proprietary solutions offer in a straightforward way. In addition, the licenses of the resulting models are clearer in these applications: I feel safer, in the professional modeling field, using models generated in Photoscan, for example. Technically, you pay for the license and can generate models at will, using them in your works. The same applies, more or less, to the Autodesk® solutions.
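For the curious, the OpenMVG + OpenMVS combination mentioned above is typically chained on the command line like this (a sketch assuming the default output names of the two projects; the binaries are the standard ones, but flags change between versions):

    # After openMVG's incremental SfM, convert the scene to the OpenMVS
    # format, then densify, mesh and texture it.
    openMVG_main_openMVG2openMVS -i reconstruction/sfm_data.bin \
        -o scene.mvs -d undistorted_images
    DensifyPointCloud scene.mvs          # dense point cloud
    ReconstructMesh scene_dense.mvs      # mesh from the dense cloud
    TextureMesh scene_dense_mesh.mvs     # textured mesh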

Acknowledgements


To the Inca University Garcilaso de la Vega for coordinating and sponsoring the project of facial reconstruction of the Lord of Sipán, responsible for taking me to Lima and Lambayeque in Peru. Many thanks to Dr. Eduardo Ugaz Burga and to MSc. Santiago Gonzáles for all the help and support. I thank Dr. Walter Alva for his confidence in opening the doors of the Tumbas Reales de Sipán museum so that we could photograph the skull of the historical figure that bears his name. These thanks extend to the technical staff of the museum: Edgar Bracamonte Levano, Cesar Carrasco Benites, Rosendo Dominguez Ruíz, Julio Gutierrez Chapoñan, Jhonny Aldana Gonzáles, Armando Gil Castillo. I thank Dr. Everton da Rosa for supporting the research, not only by acquiring a Photoscan license for it, but also by using photogrammetry technology in his orthognathic surgery plans. Dr. Paulo Miamoto for brilliantly presenting the results of this research during the XIII Brazilian Congress of Legal Dentistry and the II National Congress of Forensic Anthropology in Bahia. To Dr. Rodrigo Salazar for accepting me into his research group on the facial reconstruction of cancer victims, which opened my eyes to many possibilities related to photogrammetry in the treatment of humans. To the members of the Animal Avengers group, Roberto Fecchio, Rodrigo Rabello, Sergio Camargo and Matheus Rabello, for allowing solutions based on photogrammetry in their research. Dr. Marcos Paulo Salles Machado (IML RJ) and the members of IGP-RS (SEPAI) Rosane Baldasso, Maiquel Santos and coordinator Cleber Müller, for adopting the use of photogrammetry in official forensic work. To you all, thank you!
This work is licensed under a Creative Commons Attribution 4.0 International License.