Sunday, 16 September 2012

Converting pictures into a 3D mesh with PPT, MeshLab and Blender



Note: please read the article http://arc-team-open-research.blogspot.com.br/2012/09/extreme-sfm-fast-data-acquisition-and.html for important complementary technical information about the technique, the place and the way the photographs were taken.

SfM (Structure from Motion) is a powerful technology that allows us to convert a sequence of pictures into a point cloud.

MeshLab is a useful 3D scanning tool, under constant development, that can be used to reconstruct a point cloud into a 3D mesh.

Blender is the most popular open source modeling and animation software, with an intuitive UV mapping process.

Joining the three programs gives us a complete picture-scanning solution.


The process will be described only superficially, for those who already have some knowledge of the tools used for this reconstruction.



First of all, a group of pictures was needed; it was converted into a point cloud with the Python Photogrammetry Toolbox.

The pictures were taken without flash. This makes the process harder later, when the image is needed as a reference to create the relief of the surface.

MeshLab was used to convert the point cloud into a 3D mesh with Poisson reconstruction.

The surface was painted with vertex colors.

The 3D mesh and the point cloud were imported into Blender.

The point cloud was imported because it holds the information about the camera positions (orange points).

Using these points it was possible to place the camera in the right position.

The vanishing points were matched using the focal length of the camera. But, as we can see in the image above, the mesh didn't match the reference picture.
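As an aside (not part of the original workflow), the link between focal length and framing is the pinhole camera model: the horizontal field of view is 2·atan(sensor width / 2·focal length). A small Python sketch, assuming a full-frame 36 mm sensor:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view (degrees) of a pinhole camera.

    focal_length_mm: lens focal length, e.g. from the photo's EXIF data.
    sensor_width_mm: sensor width; 36 mm (full frame) is an assumption here.
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 35 mm lens on a full-frame sensor covers roughly 54 degrees horizontally.
print(round(horizontal_fov(35.0), 1))
```

Setting the Blender camera to the same focal length and sensor size as the real camera makes the rendered perspective match the photograph.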

To aim the camera it was necessary to orbit it manually.

Blender has a good set of UV mapping tools. It is possible to use only the region of interest of each picture to build the final texture map, as we can see in the infographic above.

So, in this process each viewpoint texture was projected using one picture. Above we can see the original image on the right and the mesh with the projected texture on the left. It looks perfect because the viewpoint of the camera is the same as the viewpoint of the picture.

But if the 3D scene is orbited, we can see that the projection works well only from that one viewpoint.

So, a good way to make the final texture is to use the viewpoint of each picture to paint only the area of interest.

When the scene is orbited we can see that only the area of interest was painted.

The surface has to be painted from several viewpoints, completing the entire texture bit by bit.

We can see the finished process above. It isn't necessary to use all the pictures taken to build the final texture. Depending on the complexity of the model, only four images may be needed to complete the entire texture.

Now we can compare the texture process and the vertex-paint process. In this case the texture process was the more interesting one to use.

The resulting mesh has a high level of detail and can nevertheless be viewed in real time (see the video at the top).

To increase the mesh quality, we can use the Displace modifier in Blender. It projects the relief of the surface using the texture as a reference.
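The modifier's arithmetic is simple: each vertex is moved along its normal by strength × (texture value − midlevel). A minimal Python sketch of that idea (the function and parameter names are illustrative, not Blender's API):

```python
def displace(vertex, normal, texture_value, strength=1.0, midlevel=0.5):
    """Offset a vertex along its normal, as a Displace modifier does:
    offset = strength * (texture_value - midlevel)."""
    offset = strength * (texture_value - midlevel)
    return tuple(v + n * offset for v, n in zip(vertex, normal))

# A mid-grey texel (0.5) leaves the vertex in place; white (1.0) pushes it out.
print(displace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0, strength=0.2))
```

This is why a good, sharp texture matters: the grey values drive the relief directly.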
The final result:








CREDITS:
This article was possible thanks to the kindness of Dott.ssa Paola Matossi L'Orsa and Dott.ssa Sara Caramello, and to the permission of the "Fondazione Museo delle Antichità Egizie di Torino".

Wednesday, 12 September 2012

Young anthropologists meeting in Florence

Hi all,
just a fast post to announce that tomorrow the first Italian meeting of the "Young Anthropologists" will start in Florence (September 13-14, 2012). The event is under the patronage of the AAI (Associazione Antropologica Italiana) and of the ISItA (Istituto Italiano di Antropologia); it will take place in the Anthropology Laboratories of the Department of Evolutionary Biology "Leo Pardi" (Florence University). Here is the official program of the conference. We (Arc-Team) will participate with a contribution by Cicero Moraes, Giuseppe Naponiello and Silvia Rezza ("An experimental methodology of craniofacial digital reconstruction with FLOSS") and, during the final discussion about "Open Source and Open Data in Italian anthropology and archaeology", with a presentation by Alessandro Bezzi and Luca Bezzi ("Anthropology and Open Source, the experience of Arc-Team").

The official logo

Friday, 7 September 2012

Building an Xcopter

Hi all,
last week I tried to re-build our xcopter. The model I had definitively destroyed was assembled with the help of an expert in model aircraft (Walter Gilli). The mainboard is a kkMultiCopter Controller sold by kkmultikopter.kr, which is based on Rolf R Bakke's original PCB (public domain). The other parts are:
  • 1 power distribution board,
  • 1 LiPo battery,
  • 1 low-voltage alarm,
  • 4 brushless outrunner motors,
  • 4 ESCs (electronic speed controllers),
  • 2 counter-rotating propellers,
  • 2 normal propellers,
  • some silicone wire pieces, connectors and leads,
  • a homemade frame composed of 4 aluminum arms.
I put the first prototype on the "operating table" (see picture below) and started to remove individual parts to reassemble them into the new xcopter.


The first step was to create the electrical network using the power distribution board (picture below), which transmits electricity from the LiPo battery to the motors. A switch makes turning the xcopter on and off easier.


The second step was to create a plate on which to fix the mainboard and the receiver of the remote control. I modified an empty CD/DVD box (picture below).


Then I started to remove the ESCs and the motors from the first prototype and to solder them into the new model (picture below).


I was careful to respect the order of the xcopter schema: type of propellers and rotation of the motors (picture below).


Finally I fixed the mainboard and the receiver of the remote control on the CD/DVD box (picture below).


The picture below shows the "operating table" after the "transplant" procedure.


I closed the top with the CD/DVD box cover (picture below) and I was ready for the first flight. The remote control had already been set up correctly with the first prototype; I only needed to adjust the Roll and Pitch pots on the mainboard a little. Have fun!


Tuesday, 4 September 2012

SfM/IBM of old data

Hi all,
I was organizing data on an old storage medium and found some pictures of a work we did in the Aramus excavation during the 2006 season. The documentation of a walled-up door was a hard test for 2D digital documentation ("metodo Aramus"). The picture below shows the logistical difficulty of taking pictures usable for a photomosaic: due to the morphology of the site it was not possible to stand in front of the wall.


Finally we took 14 photos to document an area which could have been covered by a single image under normal conditions. The schema below shows the different areas covered by the 14 photos: bigger in the upper stripe and obviously smaller in the lowest.


A selection of the 14 photos is represented in the image below.


In the field we also took a group of 14 images from different points of view. We intended to process the photo set with the software Stereo. In the end we didn't, because the 2D photomosaic reached good quality and sufficient accuracy, and Stereo's data elaboration is time-consuming and depends totally on human work. The picture below shows six of the photos taken for 3D documentation.


After six years I found this data again and tried to process it with the Python Photogrammetry Toolbox, which is not time-consuming because the software runs the process automatically. The result is an accurate 3D model. It is surprising that pictures taken two years before the development of Bundler could be used to create precise documentation of a no-longer-accessible archaeological context. The movie below shows the mesh of the walled-up door.

  
Thanks to Sandra Heinsch and Walter Kuntner (University of Innsbruck - Institut für Alte Geschichte und Altorientalistik) for sharing the data.

Sunday, 2 September 2012

Converting a Video of Computed Tomography into a 3d Mesh

 
Note: please watch this video before reading the article.

CT scanning is an excellent technology for research in many areas. Unfortunately, it is an expensive service to contract.

If you are a researcher in Egyptian archaeology or facial reconstruction, this article will show you an easy way to get CT scan data.


An archaeological example of the use of the technique


Describing the Technique

The technique consists of downloading a video from YouTube, Vimeo or any other movie site on the internet.

An example of a Firefox add-on that can be used to download videos is DownloadHelper; you can get it here:

https://addons.mozilla.org/pt-BR/firefox/addon/video-downloadhelper/

If you use another browser, there is possibly a version of DownloadHelper for it, or you can use another solution.


For this example, a CT scan video was downloaded from the Virtual Pig Head site.


A .MOV video was downloaded directly from that page, so DownloadHelper was not needed in this case.

Note: if you like dinosaurs or articles about CT scans, you cannot miss Witmer's Lab site. It is an interesting place to find good material for your research or pleasure.

Once the video was acquired, it turned out to have a lot of labels on the screen.

To erase them, the video editor Kdenlive was used. The solution was to create black areas over the bigger labels.

So a new video was generated without those labels. To convert this video into an image sequence you can use FFmpeg, a command-line tool that converts video into a series of different formats:

$ ffmpeg -i Video.mpeg -sameq sequence/%04d.jpg


Where:

-i Video.mpeg is the input file.

-sameq preserves the quality of the frame in the JPEG output file.

sequence/%04d.jpg — sequence is the directory where the files will be created, and %04d.jpg means the resulting files will be numbered with four zero-padded digits, like 0001.jpg, 0002.jpg, 0003.jpg.


Note: the $ sign only means that the command has to be typed in a terminal.
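The extraction can also be scripted. The sketch below is only illustrative: it builds the same FFmpeg argument list (actually running it requires FFmpeg to be installed) and shows what the %04d padding produces:

```python
def ffmpeg_extract_cmd(video, out_dir="sequence", pattern="%04d.jpg"):
    """Build the argument list for: ffmpeg -i <video> -sameq sequence/%04d.jpg"""
    return ["ffmpeg", "-i", video, "-sameq", "%s/%s" % (out_dir, pattern)]

print(" ".join(ffmpeg_extract_cmd("Video.mpeg")))
# ffmpeg -i Video.mpeg -sameq sequence/%04d.jpg

# %04d pads the frame number to four digits:
print("%04d.jpg" % 1)   # 0001.jpg
print("%04d.jpg" % 23)  # 0023.jpg
```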

OK, now you have the JPEG sequence, but InVesalius (the CT scan software) uses DICOM files to convert images into 3D meshes.

A DICOM file is not just an image file, but an image file carrying patient data, the distance between slices, and so on.


So, to convert images into DICOM files you'll need a specific application called IMG2DCM, which can be downloaded here. With this command-line application you can convert image files such as TIFF, PNG and JPEG into a sequence of .dcm (DICOM) files and, if necessary, set the information about the patient, the distance between slices, and so on.

The conversion is quite easy:

$ python img2dcm.py -i sequence_directory -o output_directory -t jpg



To open the DICOM files and convert them into a 3D mesh you can use InVesalius, a powerful open source CT scan application.


As the screenshot shows, when the reconstruction is made, the muscle-name labels that were in the video are reconstructed too. This isn't a problem, because they will be deleted afterwards in the 3D editor.



We can import the .STL file exported from InVesalius into Blender 3D.

The .STL file comes out big and with a lot of subdivisions. You need to simplify it, with Remesh for example, to edit the mesh comfortably.



Blender has a sculpt mode, where you can polish away the little warts created by the muscle-name texts.

Because of the labels, the right ear came out incomplete.



You can solve this by mirroring the model and completing the missing area.
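In Blender this is typically done with a Mirror modifier; conceptually it just reflects the intact half across the symmetry plane. A toy sketch of that reflection (illustrative names, not Blender's API):

```python
def mirror_x(vertices):
    """Reflect vertices across the YZ plane (negate X), which is what a
    Mirror modifier on the X axis does before the halves are merged."""
    return [(-x, y, z) for (x, y, z) in vertices]

# A vertex of the intact left ear maps onto the missing right side:
print(mirror_x([(1.0, 2.0, 3.0)]))  # [(-1.0, 2.0, 3.0)]
```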



This video shows a good technique for doing this.


When the JPEG sequence was converted into a DICOM sequence, the slice-distance data wasn't set. Because of this, the pig's face was generated stretched. After polishing the warts and mirroring the ear, we can rescale the face to correct the proportions (with a clean mesh).
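If the real slice spacing is known, the stretch can also be corrected numerically: scale the mesh along the scan axis by real spacing / assumed spacing. A hypothetical sketch (the spacing values below are made up for illustration):

```python
def fix_slice_stretch(vertices, assumed_spacing, real_spacing, axis=2):
    """Rescale a mesh along the scan axis to compensate for a wrong
    slice distance: scale = real_spacing / assumed_spacing."""
    scale = real_spacing / assumed_spacing
    out = []
    for v in vertices:
        v = list(v)
        v[axis] *= scale
        out.append(tuple(v))
    return out

# Slices were assumed to be 1.0 apart but were really 0.5 apart:
print(fix_slice_stretch([(0.0, 0.0, 2.0)], 1.0, 0.5))  # [(0.0, 0.0, 1.0)]
```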


Usually artists remodel a complex mesh with fewer subdivisions using a technique called retopology.

But this article is geared toward scientific and archaeological solutions, so the texturing of the model will be configured on the complex mesh, without retopology.



The final step is to render the images and make the animation, as you saw at the start of the article.



If you want, you can download the textured .OBJ file here.

Forensic facial reconstruction from a skull reconstructed from a video
 
The video used to reconstruct the mummy from the start of the article (and above) can be watched here.


Notes


1) Something that made the writer of this article very proud was Mr. Witmer's mention of it on his Facebook page:





This is a good demonstration that people who like to share information generate a great number of solutions. Most importantly, the technique was described, so everyone has a chance to learn it and build a better solution.

2) The original post that motivated this article was written in Portuguese: http://www.ciceromoraes.com.br/?p=430
This work is licensed under a Creative Commons Attribution 4.0 International License.