Friday, 7 September 2012

Building an Xcopter

Hi all,
last week I tried to rebuild our xcopter. The model I had definitively destroyed was assembled with the help of an expert in aircraft models (Walter Gilli). The mainboard is a kkMultiCopter controller sold by kkmultikopter.kr, based on Rolf R Bakke's original PCB (public domain). The other parts are:
  • 1 power distribution board,
  • 1 LiPo battery,
  • 1 low-voltage alarm,
  • 4 brushless outrunner motors,
  • 4 ESCs (electronic speed controllers),
  • 2 counter-rotating propellers,
  • 2 normal-rotation propellers,
  • some silicone wire, connectors and leads,
  • a homemade frame composed of 4 aluminum arms.
I put the first prototype on the "operating table" (see picture below) and started to remove individual parts to reassemble them into the new xcopter.


The first step was to create the electrical network using the power distribution board (picture below), which carries power from the LiPo battery to the motors. A switch simplifies turning the xcopter on and off.


The second step was to create a plate on which to fix the mainboard and the receiver of the remote control. I modified an empty CD/DVD box (picture below).


Then I started to remove the ESCs and the motors from the first prototype and to solder them into the new model (picture below).


I was careful to respect the xcopter schema: the type of propeller and the direction of rotation of each motor (picture below).
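The schema constraints (propeller type and motor rotation) can be sketched in a few lines. Note that the layout below is one common convention for an X-frame quadcopter, used here only as an illustration; it is not necessarily the exact schema of this build, so always check your own controller board's documentation before wiring.

```python
# Illustrative motor layout for an X-frame quadcopter.
# Diagonal motors spin in the same direction, adjacent motors in
# opposite directions, and each motor gets a propeller matching its
# rotation (counter-rotating props on the CW motors in this sketch).
MOTORS = {
    "front-left":  {"rotation": "CW",  "propeller": "counter"},
    "front-right": {"rotation": "CCW", "propeller": "normal"},
    "rear-right":  {"rotation": "CW",  "propeller": "counter"},
    "rear-left":   {"rotation": "CCW", "propeller": "normal"},
}

def torques_balanced(motors):
    """True when CW and CCW motors are equal in number, so the
    reaction torques cancel and the frame does not spin on its own."""
    cw = sum(1 for m in motors.values() if m["rotation"] == "CW")
    return 2 * cw == len(motors)
```

The key property to preserve when soldering is the one the check encodes: as many clockwise as counter-clockwise motors, arranged diagonally.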


Finally I fixed the mainboard and the receiver of the remote control on the CD/DVD box (picture below).


The picture below shows the "operating table" after the "transplant" procedure.


I closed the top with the CD/DVD box cover (picture below) and I was ready for the first flight. The remote control was already correctly set up with the first prototype; I only needed to adjust the Roll and Pitch pots on the mainboard a little. Have fun!


Tuesday, 4 September 2012

SfM/IBM of old data

Hi all,
i was organizing data from an old storage medium and I found some pictures of a work we did at the Aramus excavation during the 2006 season. The documentation of a walled-up door was a hard test for 2D digital documentation ("metodo Aramus"). The picture below shows the logistical difficulty of taking pictures usable for a photomosaic: due to the morphology of the site, it was not possible to stand in front of the wall.


Finally we took 14 photos to document an area that, under normal conditions, could have been covered by a single image. The schema below shows the area covered by each of the 14 photos: it is bigger in the upper stripe and obviously smaller in the lowest.


A selection of the 14 photos is represented in the image below.


In the field we also took a second group of 14 images from different points of view, intending to process the photo set with the software Stereo. In the end we did not, because the 2D photomosaic reached good quality and sufficient accuracy, while Stereo's data elaboration is time consuming and depends totally on human work. The picture below shows six of the photos taken for 3D documentation.


After six years I found this data again and tried to process it with the Python Photogrammetry Toolbox, which is not time consuming because the process runs automatically. The result is an accurate 3D model. It is surprising that pictures taken two years before the development of Bundler could be used to create precise documentation of an archaeological context that is no longer accessible. The movie below shows the mesh of the walled-up door.

  
Thanks to Sandra Heinsch and Walter Kuntner (University of Innsbruck - Institut für Alte Geschichte und Altorientalistik) for sharing the data.

Sunday, 2 September 2012

Converting a Video of Computed Tomography into a 3d Mesh

 
Obs.: please watch this video before reading the article.

CT scanning is an excellent technology for research in many areas; unfortunately it is an expensive service to contract.

If you are a researcher in Egyptian archaeology or facial reconstruction, this article will show you an easy way to obtain CT scan data.


An archaeological example of the use of the technique


Describing the Technique

The technique consists in downloading a video from YouTube, Vimeo or any other movie site on the internet.

An example of a Firefox add-on that can be used to download videos is DownloadHelper, and you can download it here:

https://addons.mozilla.org/pt-BR/firefox/addon/video-downloadhelper/

If you use another browser, a version of DownloadHelper is possibly available for it, or you can use another solution.


For this example a CT scan video was downloaded from the site of the Virtual Pig Head.


A .MOV video was downloaded directly from this page, making DownloadHelper unnecessary in this case.

OBS.: if you like dinosaurs or articles about CT scanning, you cannot miss Witmer's lab site. It can be an interesting place to find good material for your research or pleasure.

Once the video was acquired, it turned out to have a lot of labels on the screen.

To erase them, the video editor Kdenlive was used. The solution was to create some black areas over the bigger labels.

So a new video was generated without those labels. To convert this video into an image sequence you can use FFmpeg, a command line tool that converts video into a series of different formats:

$ ffmpeg -i Video.mpeg -sameq sequence/%04d.jpg


Where:

-i Video.mpeg is the input file.

-sameq preserves the same quality of the frame in the jpg output file.

sequence/%04d.jpg: sequence is the directory where the files will be created, and %04d.jpg means that the files will be numbered with four zero-padded digits, like 0001.jpg, 0002.jpg, 0003.jpg.


Obs.: the $ sign only means that the command has to be typed in a terminal.
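The %04d pattern is ordinary zero-padded formatting, which you can reproduce in any language to predict which file names ffmpeg will produce (the directory name sequence/ below just matches the command above):

```python
# Reproduce ffmpeg's %04d numbering to predict the output file names.
def frame_name(n, directory="sequence"):
    """Name of the n-th extracted frame, zero-padded to four digits."""
    return f"{directory}/{n:04d}.jpg"

# ffmpeg numbers frames starting from 1:
first_frames = [frame_name(i) for i in range(1, 4)]
# -> ['sequence/0001.jpg', 'sequence/0002.jpg', 'sequence/0003.jpg']
```

The zero padding matters for the next steps: it keeps the slices in the correct order when the files are sorted alphabetically.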

Ok, now you have the jpg sequence, but InVesalius (the CT scan software) uses DICOM files to convert images into 3D meshes.

A DICOM file is not only an image file, but an image file carrying metadata such as the patient's data, the distance between slices, etc.


So, to convert images into DICOM files you'll need a specific application called IMG2DCM, which can be downloaded here. With this command line application you can convert image files like TIF, PNG and JPG into a sequence of .dcm (DICOM) files and, if necessary, set the information about the patient, the distance between slices, etc.

Doing the conversion is quite easy:

$ python img2dcm.py -i sequence_directory -o output_directory -t jpg



To open the DICOM files and convert them into a 3D mesh you can use InVesalius, a powerful open source application in the CT scan area.


As the screenshot shows, when the reconstruction is made, the muscle name labels that were in the video are reconstructed too. This isn't a problem, because they will be deleted later in the 3D editor.



We can import the .STL file exported from InVesalius into Blender 3D.

The .STL file comes out big and with a lot of subdivisions. You need to simplify it, with Remesh for example, to edit the mesh comfortably.



Blender has a sculpt mode, where you can polish the little warts created by the texts with the names of the muscles.

Because of the labels, the right ear came out incomplete.



You can solve this by mirroring the model and completing the missing area.



This video shows a good technique for doing this.


When the jpg sequence was converted into a DICOM sequence, the distance between slices was not set up. Because of this, the pig's face was generated stretched. After polishing the warts and mirroring the ear, we can rescale the face to correct the proportions (with a clean mesh).
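The stretch can be undone with simple arithmetic: if the real spacing between CT slices differs from the spacing assumed at import time, the model only needs to be scaled along the slice axis by the ratio of the two. A sketch with hypothetical numbers (the real slice spacing of the pig dataset is not known here):

```python
def z_scale_factor(real_slice_spacing_mm, assumed_slice_spacing_mm):
    """Factor to apply along the slice (Z) axis in the 3D editor
    to restore the true proportions of the reconstructed volume."""
    return real_slice_spacing_mm / assumed_slice_spacing_mm

# e.g. slices actually 0.5 mm apart, but imported as if 1.0 mm apart:
# the mesh comes out stretched 2x along Z and must be scaled by 0.5.
factor = z_scale_factor(0.5, 1.0)
```

When the real spacing is unknown, as in this case, you can instead rescale by eye against a reference photo, which is effectively what was done here.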


Usually artists remodel a complex mesh with fewer subdivisions using a technique called retopo.

But this article is geared towards scientific and archaeological solutions, so the texturing of the model will be configured on the complex mesh, without retopo.



The final step is rendering the images and making the animation, as you saw at the start of the article.



If you want, you can download the textured .OBJ file here.

Facial forensic reconstruction of a skull reconstructed from a video
 
The video used to reconstruct the mummy at the start of the article (and above) can be watched here.


Notes


1) Something that made the writer of this article very proud was Mr. Witmer's mention on his Facebook page:





This is a good demonstration that people who like to share information generate a large number of solutions. The most important thing is that the technique was described, so everyone has a chance to learn it and build a better solution.

2) The original post that motivated this article was written in Portuguese: http://www.ciceromoraes.com.br/?p=430

Wednesday, 29 August 2012

3D documentation of small archaeological finds

Today I want to present the results of an experiment we did in 2011 thanks to the collaboration of Nicolò Dell'Unto (Lund University).
The goal of the project was the comparison of different methodologies for documenting small archaeological finds. In such a wide field of application we decided to evaluate mainly two technologies: Computer Vision (Structure from Motion and Image-Based Modeling) and optical sensors.
In both groups we compared open source and closed source possibilities, documenting the same artefact with Microsoft Photosynth (closed source software), Python Photogrammetry Toolbox (open source software), then with a NextEngine Scanner (closed source hardware) and finally with Three Phase Scan (open source hardware).
For the test we used a souvenir from Lund: the small statuette of a female viking (we called her Viky) you can see in the image below.

The test object: Viky

  
The characteristics of such an object gave us a medium-high level of difficulty (good for testing our technologies).
For all the post-processing we used MeshLab (even when other commercial software was available, as for the NextEngine Scanner), because we consider it a "killer application" for mesh editing (by the way, for future experiments I would also consider CloudCompare, another great open source project). The screenshot below shows the four results of our experiment.

 
Comparison of the four 3D documentations

To better observe the differences between the four 3D models, it is possible to check the next 2D panoramic (sorry, I did not clean all the models, but I think it is enough to understand the general quality):




Python Photogrammetry Toolbox





MS Photosynth








Three Phase Scan
NextEngine Scanner










As you can see, there is not a big difference between the two Computer Vision documentations, as they are based on similar software (with a common origin in the Photo Tourism project), while the comparison between the NextEngine and the Three Phase Scan is simpler: until now the open source instrument is not accurate enough to record small objects (it is possible to observe the error on the arms of Viky), having too low a quality for archaeological aims. Anyway, it has to be said that the two optical sensors belong to different categories: the NextEngine is a triangulation scanner (very accurate and precise), while the Three Phase Scan is a structured-light scanner. Unfortunately we had no time to also consider the open source MakerScanner, which would belong to the same category as the NextEngine.
In conclusion, our opinion about 3D small-find recording is that at present the NextEngine is a very good low-cost solution (sadly not open source, IMHO) and an optimal choice for those projects where a big amount of objects has to be documented in a short time. If we also consider the price, then the best options seem to be the Computer Vision based software Photosynth and Python Photogrammetry Toolbox (PPT). Personally I prefer the second one for two reasons: it is FLOSS (GPL licensed) and it needs no internet connection (while with Photosynth it is necessary to upload the pictures to external servers). This characteristic makes PPT a perfect tool in difficult situations, like, for example, an archaeological mission abroad. In these conditions it is possible to record 3D models with very simple hardware (just a digital camera and a laptop), in a short time (at least for data acquisition) and without an internet connection (as I described in this post). Moreover PPT (and in general Computer Vision applications) also gave us good results in "extreme" projects, as we illustrated in the slides (sorry, just Italian) of our contribution to the Low-Cost 3D workshop, held in Trento in March 2012.


2016-04-07 Update

In 2011 we wrote an article about this experience:

ResearchGate: Article
Academia: Article

I hope it will be useful, even if it is no longer up to date.

Wednesday, 15 August 2012

Facial reconstruction of a Neanderthal



I have already reconstructed some faces using the skulls of ancient people, but I always had modern tissue depth measurements to help me.

This time I chose to reconstruct a Neanderthal man, using only the facial musculature as a reference.

I used the coordinates of the Manchester method shown in Caroline Wilkinson's book "Forensic Facial Reconstruction". But, as I wrote, I did not use the tissue depth markers. The reason is obvious... it is impossible to obtain this data, because we don't have any Homo neanderthalensis alive.

The Process

I tried to find a CT scan of a Neanderthal skull, but I found only one, and it was not compatible with my software: http://foveaproject.free.fr/availableDataFossilEng.html

I tested more than twenty CT scan applications to convert the INR file into a mesh. Unfortunately, none of them worked this time.

Then I tried to use SfM to reconstruct it from image sequences, videos and other sources. But all in vain, as you can see in my attempts album: https://picasaweb.google.com/115430171389306289690/NeanderthalAttemp1

Looking for a CT scan, I sent more than thirty e-mails to scientists, students and institutes asking about Neanderthal skulls. But no one could help me.

I spent many days trying in vain, until I concluded that I should model the skull from scratch, even though this is not the most accurate way to obtain the data.

First of all, I found good skull references here: http://www.indiana.edu/~ensiweb/lessons/skulls2.html

So, the skull was modelled in Blender.




Then the muscles were placed, using a technology called metaballs.




The skin over the muscles.

Obs.: note that I initially gave the eyes a blue color. My friend Moacir Elias Santos, a Brazilian archaeologist, told me that blue eyes are a genetic mutation that did not occur in Neanderthals, as you can see in these articles (thanks Moacir!):
http://occupycorporatism.com/blue-eyes-originated-10000-years-ago-in-the-black-sea-region/

http://newswatch.nationalgeographic.com/2008/09/17/neanderthal_woman_is_first_rep/



The UV Mapping (texturing).

The first rendering... with the right eye color.

The hair setup.




And the final results.




All the modelling was made with free software. I used:

Blender for modelling
GIMP to help with the textures
Inkscape to organize the references
All running under Ubuntu Linux.

I gave an image to Wikipedia, as you can see in these links:

http://en.wikipedia.org/wiki/Neanderthal_anatomy

http://en.wikipedia.org/wiki/Neanderthal

The process was fun. The main goal was to model everything in a few hours with quality and precision... at least as much precision as possible.


I need to thank the Arc-Team for motivating me to write this article with my bad English.

I hope you enjoyed it. A big hug and see you in the next article.

Monday, 13 August 2012

Software recovery: e-foto rectification module for 64 bit

As you can see from this mail (ArcheOS developer mailing list), since July 2011 one of the problems in maintaining the ArcheOS e-foto package has been related to the rectification module of this software. In fact this module seems to have been abandoned in the latest releases. Unfortunately this code is very important for our archaeological field work, being connected with the Metodo Aramus (the procedure we use to obtain georeferenced photomosaics).

e-foto's rectification module at work (Metodo Aramus)

For this reason I first tried to contact the software developers (in the official forum) and then, having too little time to dedicate to this problem (I know, my fault...), I decided to upload the code to GitHub at this link: https://github.com/archeos/rectify. This solution should help keep the rectification module of e-foto ("rectify") alive as a stand-alone project, avoiding the risk of it becoming abandonware.
However, looking at the development of the new ArcheOS release (codename Theodoric), there was still a big problem: I was not able to compile "rectify" with Qt4 for 64 bit as well, and ArcheOS 5 should have both a 32 and a 64 bit version.
To solve this situation I again asked the community for help, writing a post in the Italian Qt forum. As you can see from the discussion (sorry, just Italian), a user (Tom) helped me update the source code. It was necessary to modify just two files, matriz.cpp and matriz.h, so I made a new commit on GitHub and now the code is ready to be compiled with Qt4. I have not yet packaged rectify for 64 bit, but I will do it ASAP. Anyway, if someone has this kind of machine and needs to compile the module, they can use the source code on GitHub (it should work, but if there are problems please report them).
I hope it was useful.
Ciao.

The commit in the source code (Github)

Tuesday, 7 August 2012

ArcheOS v.4 (Caesar) beta release presented during the ArcheoFOSS VII

Hi,
with a big delay I uploaded the slides of the official presentation of the ArcheOS 4 (Caesar) beta release during ArcheoFOSS VII, which took place in Rome (23 and 23 June 2012). The file can be seen here (Academia) or here (ResearchGate).
With our contribution we tried to satisfy the scientific committee guidelines, illustrating not only the new software integrated into ArcheOS, but also the archaeological methodology connected with the system. We also presented some projects in which ArcheOS has been used, the community feedback and a preview of future developments.


I would like to thank both Roberto Angeletti (aka BobMax) and Alessandro Furieri (SpatiaLite) for the fruitful discussions in Rome.


This work is licensed under a Creative Commons Attribution 4.0 International License.