Friday, 28 December 2012

How to make a 3D scan with pictures and the PPT GUI







More than ever before, 3D models have become a "physical" part of our lives, as we can see on the internet with its many 3D printing services.

Some people have a lot of difficulty getting a model to print... well, not only to print, but also to write a scientific article, do a job, or just have fun.

With this tutorial you'll learn how to scan 3D objects and use the results the way you want.

First of all, I would like to thank all the friends who helped me write this tutorial, mainly Bob Max of the ExporttoCanoma blog, who publishes interesting posts about GIS and is now interested in SfM (like every good nerd who works with 3D).

It's impossible to forget Pierre Moulon, the developer of the Python Photogrammetry Toolbox (PPT), and Luca Bezzi and Alessandro Bezzi, developers of ArcheOS and the PPT GUI.

This tutorial includes many examples and some source files that will help you learn how PPT works.

So, let's go!


The image above shows the object that we'll scan in this tutorial.



First of all, it is necessary to download the Python Photogrammetry Toolbox from: http://www.arc-team.homelinux.com/arcteam/ppt.php




After downloading and unzipping it, you have to edit the ppt_gui_start file, putting in the right path to the program (highlighted in orange).


Now, if you are on Linux, you only have to run the edited script:
$ ./ppt_gui_start
Once the program is opened, click on “Check Camera Database”.

With the Terminal/Prompt at your side, click on “Select Photos Path”.

Choose the path and then click on “Open”.

Click on “Run” and wait a little.




If all is OK, you’ll see a message in the Terminal:

Camera is already inserted into the database
If not, you can add your camera to the database with the help of this video tutorial:

Now, make a copy of the path.


1) Go to “Run Bundler”.
2) Paste it into “Select Photos Path”.

1) To get a good scan quality, click on “Scale Photos with a Scaling Factor”; by default, the value will be 1. If your computer has less processing power, skip this step (1) and go directly to step (2) below.

2) Click on “Run”.

Wait a few minutes while the program solves the point cloud.


You will know that the solve is done when the following message appears in the Terminal:

Finished! See the results in the '/tmp/DIRECTORY' directory

In this case the message was:
Finished! See the results in the '/tmp/osm-bundler-ibBZV9' directory

Nautilus will also open, showing the directory with the files.




OBS.: If you are really curious, you can open the bundle directory and view the .PLY files in Meshlab. But it's better to wait, because this point cloud is not good enough to be reconstructed/converted into a mesh.



Go to the Terminal, where the path of the solve appeared, and copy it.



1) Go to “or run PMVS without CMVS”.
2) Click on “Use directly PMVS2 (without CMVS)”.



1) Paste the path in “Select Bundler Output Path”
2) Click on “Run”.




When the process is done, you’ll see a new directory named “pmvs” appear.



Then enter the “models” directory and look for a file named “pmvs_options.txt.ply”. If all is OK, this is the final result of the solving process.


OBS.: It’s a good idea to copy the osm-* directory to your home, because everything in the /tmp directory is lost on the next boot.
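Since everything under /tmp disappears at reboot, this backup step can be scripted. Below is a small, hypothetical Python sketch (the osm-bundler-* pattern matches the output directories above; the ~/ppt-results destination is just my own choice, not part of PPT):

```python
import glob
import os
import shutil

def backup_ppt_results(tmp_dir="/tmp", dest_dir=os.path.expanduser("~/ppt-results")):
    """Copy every osm-bundler-* results directory out of /tmp before reboot."""
    os.makedirs(dest_dir, exist_ok=True)
    copied = []
    for src in glob.glob(os.path.join(tmp_dir, "osm-bundler-*")):
        target = os.path.join(dest_dir, os.path.basename(src))
        if not os.path.exists(target):  # do not overwrite an earlier backup
            shutil.copytree(src, target)
            copied.append(target)
    return copied
```

Running it right after a solve keeps the bundler output (including the pmvs directory) safe in your home.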

When you open the “pmvs_options.txt.ply” file in Meshlab, you’ll see that the point cloud is really dense now, with almost the quality of a picture.



It may seem that only a picture or a mesh appears... but notice that “Points” is the view mode selected.



If you select “Flat Lines”, for example, the point cloud will disappear... because, obviously... it’s a --points-- cloud.



Click on “Points” again to see the point cloud, and:


1) Click on “Show Layer Dialog” (A)
2) A new element will appear in the interface with the name of the object, in this case “pmvs_options.txt.ply” (B)



Go to “Filters” -> “Remeshing, simplification and reconstruction” -> “Surface Reconstruction: Poisson”



A new window will appear with the default values of “Octree Depth” and “Solver Divide”.

1) Change the values to:
Octree Depth: 11
Solver Divide: 9


2) Click on “Apply”


OBS.: These values can crash the program if your computer does not have enough processing power.



If all runs OK, you will notice two things:


1) A lot of new white points over the reconstruction.
2) A new layer in the upper right named “1 Poisson mesh *”



But when we go back to “Flat Lines” to see the mesh, strange things can happen. In this case, the Poisson algorithm created a kind of ball around the model when reconstructing the mesh.



We can see it better when we orbit away from the model.



So, to make the door visible, we:


1) Go back to the “Points” view (A)
2) Orbit the scene to see the side of the door.
3) Click on “Select faces in a rectangular region”



So:


1) Make a window selection over the region that will be deleted (1A-2A).
2) Click on “Delete the current set of selected faces”.



Now we can see the mesh from the correct side.



But when we change the view type to “Smooth”, we see the mesh in white, without the colors of the point cloud.



To paint the mesh with the colors of the point cloud, we can go to:
Filters -> Sampling -> Vertex Attribute Transfer


A substantial part of this step was learned from this video: http://vimeo.com/14783202



A new window will appear.



You’ll have to swap the objects, because “pmvs_options.txt.ply” is the real source mesh, the base used to paint, and the “Poisson mesh” will receive the colors, so it is the Target Mesh.



When you click on “Apply”, you’ll immediately see the mesh colored, like the image above.



If you want to send this mesh to other software, like Blender, you can go to:


File -> Export Mesh As..


Choose a place to save the .PLY file.





If all is OK, the mesh will be imported into Blender (or other software) perfectly.


Other examples:






If you want, you can download a sequence of pictures of the Taung Child (animation above) to run your own test here: 

And see if it matches the final result here:





I hope it has been useful to you.


A big hug, and see you in the next article!







Tuesday, 25 December 2012

Forensic Facial Reconstruction of Virtual Mummy (1997)


Indeed, according to the Virtual Mummy's official site, the CT scan was made in 1991, so it is more than twenty years old! 

I chose the year 1997 because that was when the Virtual Mummy was created, a project in which some researchers of the University of Hamburg-Eppendorf shared a couple of interactive .MOV files used to reconstruct this mummy.


The original project can be visited here: http://www.voxel-man.de/gallery/virtual_mummy/

The links of the interactive movies can be viewed here: http://www.voxel-man.de/gallery/virtual_mummy/scenes.html

I was not able to find much information about the mummy, only that she was 30 years old when she died, more than 2300 years ago.


I would like to note the power of Blender's video editor for selecting the area of interest and isolating it.

In this case, the small area of the CT scan.

Even with the small dimensions of the CT scan, I was able to get a good-quality 3D reconstruction using IMG2DCM and InVesalius.


To put the model in the right proportions, I used the images inside this article: http://www.uke.de/institute/medizinische-informatik/downloads/institut-medizinische-informatik/pommert-mc1991.pdf






With the skull in my hands, the reconstruction was done.

I hope it can be useful and enjoyable for you, like it was for me.

A big hug, and see you in the next article!


Saturday, 22 December 2012

The Taung Child



Dr. Nicola Carrara, curator of the Museum of Anthropology at the University of Padua, sent us a note on the Taung project which we willingly share:
Paleoanthropology is littered with nicknames assigned to various discovered hominin fossils: Lucy (Australopithecus afarensis), Mrs Ples (Australopithecus africanus), Ardi (Ardipithecus ramidus), Twiggy (Homo habilis), the Turkana Boy (Homo ergaster), the Hobbit (Homo floresiensis) and the old man of Cro-Magnon (Homo sapiens) are some of the most famous examples.
Naming the living beings is one of the tasks of Adam and Eve in the Garden of Eden. Giving a name to someone is the first way to know them, take them into our circle and, somehow, pigeonhole them.
But the name alone is not enough. To know someone, we also need to see their face. Name and face are an inseparable pair for framing a person, so much so that we often go into crisis when someone greets us and we recognize their face but cannot remember the associated name.
Or, when we go back through our memories, we feel uncomfortable because we remember the names of some people but not their features. Incidents of this type are common and reveal a fundamental process of our brain: we do better when we know both a person's name and their appearance!
The Taung child (Australopithecus africanus) is a fundamental fossil in the history of Paleoanthropology. Discovered by Raymond Dart at Taung, South Africa, in 1924, the find consisted of the entire face, including teeth and jaws, and the endocranial cast of the brain. Dated between 2 and 3 million years ago, the child was about 3 years old and had a cranial capacity of 410 cc, which would have been 440 cc in adulthood.
The fossil surprised the discoverer for the modernity of some of its features: the large and "rounded" brain, the small canines, different from those of apes, and especially the relatively advanced position of the foramen magnum compatible with bipedalism.
The cast of this fossil is in many museums around the world, and it's the evidence of the evolutionary history of our species. The Museum of Anthropology at the University of Padua keeps three copies of this find, along with those of many other Hominins.
When I was approached to join the "Taung" project, the first feeling was one of curiosity: I could finally learn what the face of the child looked like, whose fossilized skull was in the closet behind my desk!
My curiosity was fueled by the progress of Arc-Team's work, which reached me through Dr. Moreno Tiziani. A few weeks ago, the strictly scientific work of the team gave a face to the Taung child.
As an anthropologist and a curator of a museum, this result is very important. All the museography linked to human evolution has for some time been moving to make our ancestors more human, removing from our heads the mistaken belief in the uniqueness of our humanity. There were many ways of being "human" and there have been many attempts to reach humanity. The Taung child is fully embedded in this story.
The times when the French paleontologist Marcellin Boule, between 1911 and 1913, erroneously reconstructing the skeleton of the Neanderthal of La Chapelle-aux-Saints, removed him from humanity because, due to his anatomy, he was incapable of "raising his eyes to heaven", are really far away.
Today, thanks to the dedicated work of many scholars such as Arc-Team, when I open the closet behind my desk it's nice to see a familiar face.

Padua, December 4, 2012
Nicola Carrara
Translation: Moreno Tiziani



Sunday, 16 December 2012

The flight of the penguin: eight years of ArcheoFOSS


I always liked this Linux commercial, it makes you believe that anything is possible... and, in a way, sometimes it is. 
When I think how the workshop ArcheoFOSS started, I find it incredible that it will reach its eighth edition.
It was 2005 and we were sitting on a table of Cafe Einstein in Vienna:

"The decision to start this workshop was taken one evening in November 2005 at Cafe Einstein (a few steps from Vienna town hall), during the conference “Archäologie und Computer 2005: Workshop 10” (Böener W., 2006), together with Alessandro Bezzi, Luca Bezzi and Denis Francisci of Arc-Team.
The idea behind the proposal to realize the workshop was mainly to take stock of the situation regarding the application of Free/Libre and Open Source Software philosophy to archeology."

[Introduction to the first workshop proceedings, Grosseto 2006 - edited by G. Macchi Janica and R. Bagnara]

Nevertheless, here we are! The eighth edition of ArcheoFOSS will be held in Catania on 18 and 19 June 2013, organized by Giovanni Gallo and Filippo Stanco of the Image Processing Lab (Catania University).

The elephant, symbol of Catania (and PostgreSQL)

To celebrate the event, I made a short video showing the path of the workshop from Vienna to Catania. It is a kind of "Flight of the penguin" in archeology, through eight years and more than 3742 km.



For FLOSS nerds, I made the video with OpenShot Video Editor, simply using the animate title option called "World Map" (thanks to Luca Delucchi for the tip!)
See you in Catania!

Sunday, 9 December 2012

Virtual Terrain Project

VTerrain.org is an open source project whose goal is to "foster the creation of tools for easily constructing any part of the real world in interactive, 3d digital form", as its own website declares. Born from an idea of Ben Discoe, an experienced 3D programmer, the Virtual Terrain Project dates back to 1997 and is still one of the most important sites with updated documentation on real-time terrain visualization.



The numerous pages of VTerrain.org constitute a sort of wikipedia, where you can find and compare the various techniques related to 3D programming.



But VTerrain is not just theory. In fact, VTP is primarily software consisting of two programs, VTBuilder and Enviro; the first is a "tool for viewing and processing many kinds of geospatial data".
As Ben says, "Enviro is the VTP runtime environment. It provides an interactive, realtime 3D navigation of your virtual terrain".

Basically, what can you do with the VTerrain programs?

VTBuilder is a 2D viewer and processor of geospatial data; it allows complex operations such as merging and resampling elevation data, and coordinate transformations, including the possibility of adding regional shift values.

VTBuilder allows you to cut and join elevation grids from open sources, such as OpenDEM, SRTM or LiDAR surveys.





VTBuilder's "resampling" allows you to save Digital Terrain Model grids in the BT format, viewable, through the GDAL library, in QuantumGIS, OpenEV, TatukGIS, gvSIG, and many other GIS programs.


Enviro, instead, is a 3D visualization tool that allows you to view the terrain with color theming based on height, and to superimpose vector and raster layers draped over rough terrain. Enviro has proved particularly powerful and flexible at the same time, which led us to develop an integration with QuantumGIS.

The VTerrain plugin for QGIS is a Python module that you can load into QGIS through the "Fetch Python Plugins" panel; it allows you to view in 3D, with Enviro, images loaded among the QGIS layers, provided they have an associated elevation grid or a TIN (Triangulated Irregular Network).


Version 1.21 of the Virtual Terrain Project has now been released!

You can find the installer for Windows (XP and 7) and detailed instructions at the following link:



A version of VTP integrated into Linux can be found in ArcheOS, included in the live ISO available at this link:



The following describes the operations to perform on ArcheOS Linux to configure VTP to work on projects in the user's HOME.

Open the terminal and copy into your home directory the contents of
/usr/share/archeos/vtp-svn111229/TerrainApps/Data

Create, again under your home, a directory vtp/Data, and copy the directories into it (in fact only a few of them would be needed, but we do it this way to make things easier).
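A hypothetical Python sketch of this setup step (it copies the sample Data directory straight into ~/vtp/Data; the source path is the one given above, and this is an illustration, not an official ArcheOS script):

```python
import os
import shutil

SRC = "/usr/share/archeos/vtp-svn111229/TerrainApps/Data"

def setup_vtp_data(src=SRC, home=None):
    """Copy the VTP sample Data directory into ~/vtp/Data for Enviro/VTBuilder."""
    home = home or os.path.expanduser("~")
    dest = os.path.join(home, "vtp", "Data")
    if not os.path.exists(dest):  # skip if already set up
        shutil.copytree(src, dest)
    return dest
```

After running it, the sample elevation files (such as Elevation/crater_0513.bt) will be available under your home.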


Afterwards, run VTBuilder, found under the "Applications > 3D" menu of ArcheOS Linux.
A terminal window will appear with some messages, and then VTBuilder will open.


At first launch, if no elements are visible in the 2D window, enable "World Map" and "Show UTM Boundaries" in the "View" menu.

To view a test area, drag into VTBuilder the file "crater_0513.bt", which can be found in the directory /home/utente/vtp/Data/Elevation/



If the terrain is not shown, and instead a rectangle with diagonals appears, as in the image below, the option "Show outline only" is probably enabled in the "View > Options" panel. In that case, select "Artificial color by elevation value" instead.





Let us now try Enviro. We run it through the "Applications > 3D" menu of ArcheOS Linux.





At first launch, Enviro's list of projects is empty. To add the first project, you must open the "Terrain Manager" panel and press "Add Path".
Add the vtp/Data directory that we copied into our home, and press OK. Now we will find (or should find) "Simple Terrain" among the selectable terrains (if not, go back to the "Terrain Manager" panel and delete the first, superfluous "../Data" path).

Selecting "Simple Terrain" and pressing OK will open the "VTP Enviro OSG" panel and display the 3D terrain, with which we can interact.

Well, VTP is now running on your machine.



BobMaX

Saturday, 8 December 2012

That's enough! We want drivers!

Hi all,
as you know, ATOR's main topic is normally related to archeology and open source ("open archeology"), reporting news, data, tests and research with free/libre and open source software and hardware. 
This time I write about something more general, but in a way connected with these arguments: the lack of good GNU/Linux driver support from some hardware manufacturers. 
This problem is not new and, through the years, we have had to face it during the development of ArcheOS, but it has never been as hard as this time when, working on the next release (ArcheOS v. 5 - codename Theodoric), we ran into the NVIDIA Optimus graphics card (the video below explains the issue from the "Linux gamers" point of view).




That's why today I decided to repost the call of the Istituto di Istruzione Secondaria Statale Ettore Majorana.
Here I simply translate their post into English:

We ask for Linux drivers!

This is the struggle we will fight together with other sites, blogs and forums, and with the help of you all. We will simply ask the hardware manufacturers, with a simple email, for Linux drivers (which are often not provided). Maybe many of you will not believe it, but in this way we have already won some battles. Anyway, it costs nothing...
We must make our voice heard. We are tired of buying PCs with just Windows drivers. If they will not listen to us, in the future we will buy only from the hardware manufacturers more receptive to this problem. 

Battles already won 

Surely this time is different, but this method can work. Here are two examples of struggles already won, thanks to your help:

1) Free Software, finally also Italy has decided

2) Hands off Majorana

... So let's go, everybody. Send e-mails and spread the initiative (Facebook, Twitter, blogs, websites, e-mails, friends, acquaintances, etc...)! Maybe this time, if there are many of us, things will change.

This initiative comes from our site and ItaliaUnix (managed by Gianmaria Generoso). With the hope that many others will join us...
To all of you who will participate, thanks in advance!

E-mail

Here is a suggestion to copy and paste for the subject and the text of the e-mail (of course, you can also write one that suits you best and send it to the companies you want). We start with the well-known Asus.

To:
info@asus.it

Subject:
Serious problems to use your hardware

Text:
I'm using one of your devices, but it doesn't support Linux as well as Microsoft Windows because I don't have the necessary drivers. Please provide the right drivers for Linux, otherwise I will not buy anything from you anymore. I will also invite friends and other people not to buy anything made by Asus because of its incompatibility with Linux or other open source operating systems. Regards.

The initiative's logo
Do you also support the Majorana Institute initiative, like I do? In that case, you can send an email to hardware companies that do not provide GNU/Linux drivers, or help spread the struggle.
Thank you!


Wednesday, 5 December 2012

Georeferencing 3D pointclouds with open source tools


Hi all,
since every pointcloud created with Structure-from-Motion comes in its own relative coordinate system, you often need to georeference the pointclouds in order to use them for archaeological purposes.

I would like to post some notes about a pretty easy way to georeference 3D pointclouds.
In GRASS GIS 7 the modules v.in.ply, v.rectify and v.out.ply are now available (thanks to Markus Metz of the GRASS team and TOPOI Berlin), which allow you to georeference pointclouds. There will soon be a module (v.ply.rectify) that does it all – import, transformation and export for the use of the pointcloud in other applications such as Meshlab or CAD systems – in one step. These modules can easily be installed under "Install extensions from addons".

To accomplish the transformation of the coordinates, you need to create a simple .txt file containing the coordinates from the pointcloud and the real coordinates.
For this it is useful, beforehand in the field, to put at least four (more is better) markers on or near the object you want to model with SfM. The control points need to be visible in the point cloud, so it can help to use a distinctive color for the targets. After taking the photos, don't forget to take the measurements of the control points!
If you have your pointclouds and the measured coordinates, open the point cloud with Cloud Compare, click on it and open “point list picking”.
Pick the control points one after the other and save the list (xyz) as .txt with the same name as the .ply file, in the same folder. Remember the right order (or put numbers on them)!

Picking control points in Cloud Compare 
Then open the .txt and add the coordinates taken in the field to the picked coordinates in the following order:
xyz (from the pointcloud) – xyz (real coordinates) – 0 or 1
The coordinates have to be separated by a space. Use a point as the decimal separator, and no thousands separator.
For example:
0.215062 -0.873330 -5.366510 323249.970 5907123.953 91.777 1
0.118612 -0.559895 -5.421750 323249.566 5907123.228 92.772 0
0.174586 -0.641221 -5.727870 323248.603 5907123.008 91.996 1

The 0 or 1 at the end of each line indicates whether the coordinates in this line are used for transformation (1) or not (0). At least four lines have to be active!
Save the .txt-file with the same name as your point cloud.
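Before importing, it can help to sanity-check the file, since the format above expects exactly seven space-separated values per line and at least four active points. This is a hypothetical Python check I wrote for illustration; it is not part of PPT or GRASS:

```python
def parse_control_points(text):
    """Parse lines of: x y z (pointcloud)  x y z (real)  flag (0 or 1)."""
    points = []
    for n, line in enumerate(text.strip().splitlines(), start=1):
        parts = line.split()
        if len(parts) != 7:
            raise ValueError(f"line {n}: expected 7 values, got {len(parts)}")
        *coords, flag = parts
        if flag not in ("0", "1"):
            raise ValueError(f"line {n}: flag must be 0 or 1")
        points.append((tuple(float(c) for c in coords), int(flag)))
    # The transformation needs at least four active (flag 1) control points.
    if sum(flag for _, flag in points) < 4:
        raise ValueError("at least four control points must be active (flag 1)")
    return points
```

Feeding it the file contents before running v.rectify catches typos (a missing column, a comma instead of a point) early.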
After importing the pointcloud into GRASS with v.in.ply, open v.rectify and add the point cloud in the required field. You have to check "Perform 3D transformation" and load the .txt file in the appropriate field under the optional tab. By checking the "Print RMS errors" box you can see the errors of the individual control points and choose the best ones (by changing 0 and 1 in the .txt file) before the actual computation (you then have to uncheck the RMS box).
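For the curious, the idea behind the 3D transformation can be illustrated in a few lines of Python. This is only a conceptual sketch (a 3D affine fit by least squares; v.rectify's actual implementation may differ), with the RMS value playing the same role as the "Print RMS errors" output:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares fit of dst ≈ src @ A + t from paired control points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Augment the source points with a column of ones for the translation term.
    X = np.hstack([src, np.ones((len(src), 1))])
    coeffs, _, _, _ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = coeffs[:3], coeffs[3]
    residuals = X @ coeffs - dst
    rms = float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
    return A, t, rms

def apply_affine_3d(points, A, t):
    """Apply the fitted transformation to new points."""
    return np.asarray(points, dtype=float) @ A + t
```

This also makes clear why at least four well-spread (non-coplanar) control points are needed: the affine transformation has twelve unknowns.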

GRASS GIS
When the transformation is done, you can export the georeferenced 3D pointcloud again with v.out.ply. Here you should enter red,green,blue,alpha (without spaces) in the "Name of attribute columns to be exported" field to preserve the color of the pointcloud.
In some cases you will want to edit the coordinates so you can use the pointcloud in graphics software like Meshlab, or in a CAD environment. To use the real coordinates in Meshlab, you have to cut off the first couple of digits, which can easily be done in GRASS GIS with v.transform.
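Conceptually, cutting the first digits just means subtracting a constant offset from every coordinate, which keeps the numbers small enough for Meshlab. A minimal illustrative sketch (the offset values below are made up for the example):

```python
def subtract_offset(points, offset):
    """Shift coordinates by a constant offset, e.g. to drop large UTM prefixes."""
    ox, oy, oz = offset
    return [(x - ox, y - oy, z - oz) for (x, y, z) in points]

# e.g. subtract_offset([(323249.970, 5907123.953, 91.777)],
#                      (323000.0, 5907000.0, 0.0))
# leaves manageable local coordinates around (249.97, 123.95, 91.78).
```

Keep a note of the offset you used, so the real coordinates can be restored later.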

All in all, using an SfM kit, Meshlab, Cloud Compare and GRASS GIS offers the possibility of creating nice, georeferenced 3D models that can be used for archaeological purposes.
Maybe this is useful for someone; some more information is provided in the GRASS manuals or here.

Three archaeological layers as original SfM output (left) and georeferenced (right, moved only in height)




This work is licensed under a Creative Commons Attribution 4.0 International License.