
Sunday, 3 September 2017

3D documentation of ancient millstones: preliminary tests


The traditional drawing of an ancient millstone consists of a plan (possibly with shading to give a sense of the three dimensions) and one or more cross sections of the object. This is not always easy because of the dimensions, weight and sometimes shape of this type of artefact (Pre-Roman millstones in particular are irregular and asymmetric). Furthermore, millstones are generally kept in museums or storerooms, where it is often difficult to move the objects or to have enough time to draw quietly and check all the details. In short, drawing a millstone is not like drawing a sherd of pottery!
For these reasons, it could be useful to apply a methodology based on Structure from Motion (SfM) techniques in addition to the traditional drawing methods. In this post I present the preliminary results of a test aimed at the three-dimensional documentation of a fragment of an Iron Age millstone from Northern Italy (a so-called “Olynthus mill” or “hopper rubber”).

The first step was the construction of a rectangular wooden frame made of four rods of different lengths (40, 60, 80 cm), so that frames of different areas can be built according to the dimensions of the millstone to be drawn. Equally spaced cross marks are drawn on the surface of the rods: these marks are used as reference points with known coordinates for the rectification of the 3D point cloud, mesh and (possibly) texture (something like GCPs, Ground Control Points).


Four height-adjustable bolts hold the frame together and allow it to be levelled perfectly. Once the frame is ready, you place the millstone inside it, in such a way as to leave sufficient space between the stone and the rods for taking pictures.
Some recognizable markers should be placed at different points on the millstone: these are used for aligning and merging the two point clouds that will be generated (see below). A simple solution is to use small spheres of coloured modelling clay, easily visible in the point clouds.



At this stage, the typical SfM workflow begins. Take an appropriate number of pictures of the upper surface first; then do the same for the lower surface, turning the millstone upside down inside the frame.
The pictures can be processed with the software of your choice. I used Regard3D / OpenMVG to generate two point clouds (one for the upper surface and one for the lower) and CloudCompare to edit/clean the point clouds and to perform their rectification (thanks to the cross marks on the frame), alignment and merging (thanks to the coloured markers on the stone). CloudCompare and MeshLab were also used to generate meshes and to compute other parameters, including measurements.
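For readers who want to script this step, here is a minimal sketch (Python/NumPy) of the rigid alignment that CloudCompare performs when you match the clay markers: given the same markers picked in both clouds, it computes the rotation and translation (Kabsch method, no scaling) that map the lower-surface cloud onto the upper one. Array and variable names are placeholders.

import numpy as np

def rigid_transform(src, dst):
    # Kabsch: rotation R and translation t such that dst_i ~ R @ src_i + t
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# markers_lower / markers_upper: Nx3 coordinates of the same clay markers
# picked in the two clouds (e.g. with CloudCompare's point picking tool)
# R, t = rigid_transform(markers_lower, markers_upper)
# lower_aligned = cloud_lower @ R.T + t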



The final result is a point cloud and a mesh of the millstone.


Using MeshLab you could also obtain a texture of the object, but for my purposes a 3D model (point cloud or mesh) is enough, since from it I can get a plan, some cross sections and all the measurements I need. Thanks to these data, I can refine or check my handmade drawing, or make it from scratch.


In conclusion, the use of a homemade wooden frame makes data acquisition for SfM easier and more precise, and makes the documentation of this kind of artefact faster and more complete. The method described leaves room for improvements and developments; it could become a “standard” documentation technique for ancient millstones and for other archaeological objects with analogous drawing issues.

Thanks to Alessandro Bezzi (Arc-Team).

Denis Francisci




Tuesday, 16 June 2015

OpenMVG VS PPT

Hi all,
as I promised, we are back with new posts on ATOR. We start today with an experiment we have wanted to do for a long time: a comparison between two Structure from Motion - Multi-View Stereo (SfM - MVS) suites. The first is the Python Photogrammetry Toolbox, developed by +Pierre Moulon some years ago and integrated into ArcheOS 4 (Caesar) with the new GUI (PPT-GUI) written in Python by +Alessandro Bezzi and me. The second is the evolution of PPT: openMVG, which Pierre has been developing for some years and which will be integrated into the next releases of ArcheOS.
Our small test involved just four pictures taken with a Nikon D5000 on an old excavation. We want to point out the speed of the overall process in openMVG, which gave a result comparable with that of PPT.
In the image below you can see an overview (in +MeshLab) of the two point clouds generated by the different software: openMVG produced a PLY file with 391197 points, while PPT gave us a result with 425296 points.


Comparison of the two models generated by openMVG and PPT

The main difference lies in the processing time. In fact, while PPT needed 16 minutes and 11.25 seconds, openMVG completed the model in just 3 minutes and 28.2 seconds.
Here below I report the log file of openMVG, where you can see each step of the process:

STEP 1: Process Intrisic analysis - openMVG_main_CreateList took: 00:00:00:00.464
STEP 2: Process compute matches - openMVG_main_computeMatches took: 00:00:01:13.73764
STEP 3: Process Incremental Reconstruction -  openMVG_main_IncrementalSfM took: 00:00:00:47.47717
STEP 4: Process Export to CMVS-PMVS - openMVG_main_openMVG2PMVS took: 00:00:00:00.352
STEP 5: Process CMVS-PMVS took: 00:00:01:25.85958
--------------------
The whole detection and 3D reconsruction process took: 00:00:03:28.208258
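For the record, the whole chain can also be scripted; below is a minimal sketch reproducing the steps of the log with Python's subprocess module, assuming the openMVG binaries of that era are on the PATH. The command-line flags varied between openMVG versions, so treat the directory arguments and flags as placeholders rather than the exact invocation.

import subprocess, time

steps = [
    ["openMVG_main_CreateList", "-i", "images/", "-o", "matches/"],
    ["openMVG_main_computeMatches", "-i", "images/", "-o", "matches/"],
    ["openMVG_main_IncrementalSfM", "-i", "images/", "-m", "matches/", "-o", "out/"],
    ["openMVG_main_openMVG2PMVS", "-i", "out/", "-o", "pmvs/"],
]

t0 = time.time()
for cmd in steps:
    start = time.time()
    subprocess.run(cmd, check=True)          # stop on the first failing step
    print(f"{cmd[0]} took {time.time() - start:.1f} s")
print(f"whole process took {time.time() - t0:.1f} s")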

We will keep working on and testing openMVG, hopefully posting news about this nice software soon.

Have a nice day!

Acknowledgment

Many thanks to +Pierre Moulon and +Cícero Moraes for the help!

Saturday, 13 December 2014

Forensic Facial Reconstruction, the state of the art

As many of you know, last week a team from the University of Leicester publicly revealed that they have discovered, in all likelihood, the tomb of Richard III. The results seem to be supported by the analysis of mitochondrial DNA, while the discrepancy on the Y chromosome could be explained by a false paternity. The study was completed with a forensic facial reconstruction of the king, performed by the experts of the University of Dundee, led by Caroline Wilkinson, Professor of Craniofacial Identification.
Given the opportunity, I decided to publish here our state of the art in this particular field (forensic facial reconstruction applied to archaeology), namely the presentation I gave during the study day in honour of Prof. Franco Ugo Rollo (Ascoli Piceno, November 26, 2014).

You can see the presentation here below (better visualized at this link)...
 



... and here is a brief explanation of each slide:

SLIDE 1

A remembrance of Franco Ugo Rollo, professor at the University of Camerino. I did not have the fortune of knowing Prof. Rollo personally, but his name is certainly well known in my discipline (archaeology) too.

SLIDE 2

"Digital faces: new technologies for the forensic facial reconstruction of the historical figures".
The presentation is intended as an overview of the digital methodologies of FFR with FLOSS, developed over the last two years on the ATOR blog with the spontaneous contributions of different authors.

SLIDE 3

The traditional work-flow involves several operations: 3D scanning the skull, preparing a replica, performing the anthropological analyses, placing the tissue depth markers, reconstructing the profile, modeling the muscles and skin, calibrating the model with the available sources and dressing it.

SLIDE 4

The same operations are necessary for the digital work-flow. Our main work has been to turn the traditional process into a digital one, using only FLOSS.

SLIDE 5

There are different technologies for obtaining a 3D digital copy of the original skull. The two main ones we use are SfM - IBM and X-ray CT.

SLIDE 6

In 2009 Arc-Team performed the first test applying SfM - IBM with FLOSS to Cultural Heritage, during its participation in the TOPOI excellence cluster in Berlin.

SLIDE 7

The test developed into a collaboration with the French researcher +Pierre Moulon (Université Paris-Est and Mikros Image; currently at Acute3D) to integrate SfM - IBM software into ArcheOS 4 (codename Caesar).

SLIDE 8

The first test (TOPOI Löwe) gave positive results

SLIDE 9

The process is mainly based on several photos taken from different orientations, computing the displacement of common points between images.

SLIDE 10

To complete the 3D documentation of an object, the next step is the so-called mesh-editing, which can be performed in the software MeshLab (developed by the Visual Computing Lab at the ISTI - CNR of Pisa, Italy)

SLIDE 11

In order to validate the digital method of FFR, some unconventional procedures (derived from the hacker culture) have been adopted. With reverse engineering techniques, based on SfM, it has been possible to digitally replicate the process of past FFR projects and to compare the results.

SLIDE 12

The anthropological validation was performed by comparing the 3D models obtained with SfM - IBM against the corresponding results from 3D scanning (the observed distortion remained within a range of 1 mm).

SLIDE 13

In several projects it is possible to work with DICOM data. In these cases the anthropological analysis is more accurate (3D vs voxel).

SLIDE 14

The main software we use for DICOM data is InVesalius, developed mainly at the Renato Archer Information Technology Center, an institute of the Brazilian Ministry of Science and Technology.

SLIDE 15

"X-ray computed tomography (X-ray CT) is a technology that uses computer-processed X-rays to produce tomographic images (virtual 'slices') of specific areas of the scanned object, allowing the user to see inside without cutting." (Wikipedia)

SLIDE 16

Also in this case, the process was validated with unconventional procedures derived from hacker culture. By reverse engineering CT videos it was possible to rebuild the DICOM data and 3D models of different skulls, replicating FFR projects and comparing the results.

SLIDE 17

It is necessary to check and validate the protocol through continuous methodological comparison with all the available resources. For this reason, we also tried the FFR of Henri IV, a project in which Prof. Rollo was involved, rejecting (with other scholars) the attribution of the mummified head to the French king. Our test in this case was just an experiment, starting from low-quality data, but it is a good example of some of the benefits of digital FFR, such as the possibility of quickly modifying the reconstructed face (e.g. closing the mouth in order to perform a superimposition with the death mask), an operation that is not so simple with tangible models.

SLIDE 18

Once the 3D model is obtained, digital anthropological analyses do not differ from traditional ones.

SLIDE 19

In some cases, a virtual restoration of the model is necessary. The solution comes from the symmetry and boolean operations of 3D modeling software (Blender).

SLIDE 20

The whole 3D modeling process is currently performed in the software Blender.

SLIDE 21

The first operation is to fix the 3D skull in the Frankfurt plane, which replicates the head position of a standing human figure.

SLIDE 22

Then the tissue depth markers are placed. The software automatically keeps the correct normal for each marker.

SLIDE 23

In our work, for the tissue depth markers, we use the tables of De Greef et al. (2006).
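As an illustration of how such a marker can be placed programmatically along a surface normal, here is a minimal sketch using Blender's Python API (bpy, 2.8+ conventions); the object name, vertex index and depth value are placeholders for illustration, not our actual protocol.

import bpy

def place_depth_marker(skull_name, vertex_index, depth_mm):
    skull = bpy.data.objects[skull_name]
    v = skull.data.vertices[vertex_index]
    # world-space position and normal of the chosen landmark vertex
    co = skull.matrix_world @ v.co
    n = (skull.matrix_world.to_3x3() @ v.normal).normalized()
    # a thin cylinder of the tabulated tissue depth, spanning from the bone
    # outwards along the normal
    bpy.ops.mesh.primitive_cylinder_add(radius=1.0, depth=depth_mm,
                                        location=co + n * (depth_mm / 2))
    marker = bpy.context.object
    marker.rotation_mode = 'QUATERNION'
    marker.rotation_quaternion = n.to_track_quat('Z', 'Y')  # align axis to normal
    return marker

# e.g. place_depth_marker("skull", 1234, 5.5)  # 5.5 mm at a hypothetical landmark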

SLIDE 24

A second step is the profile reconstruction.

SLIDE 25

For the nose shape we refer to G. Lebedinskaya's method.

SLIDE 26

The validation of the method came mainly from the comparison between FFR models and the facial DICOM data of living people, a simple task with digital techniques, using the software CloudCompare. All these experiments were conducted as blind tests (the artist did not know the identity or physiognomy of the people).

SLIDE 27

According to the blind tests, the main deviations were detected on the cheeks.

SLIDE 28

Like the other 3D operations, muscle modeling is performed in Blender.

SLIDE 29

The technique has been continuously rationalized and optimized. For instance, once the main muscles have been modeled with metaballs in Blender, the result can be reused in subsequent reconstructions through an anatomical deformation.

SLIDE 30

It is possible to reach more realistic results through specific modeling tools, like the "sculpt mode" in Blender.

SLIDE 31

Skin modeling is also an operation performed in Blender.

SLIDE 32

Again, the technique has been optimized: in order to simplify and speed up the process, a neutral facial model has been created.

SLIDE 33

The neutral model can be anatomically deformed on different skulls to meet gender and age dimorphism.

SLIDE 34

At the same time, the neutral model can be deformed to meet the anatomical criteria which determine the individual dimorphism.

SLIDE 35

After the reconstruction process, two main models are defined:  one with hair and one hairless.

SLIDE 36

Thanks to the latest developments of the software MakeHuman it is now possible to further simplify and speed up the technique. Our current research is following this direction.

SLIDE 37

The first tests carried out in 2014 have yielded positive results, thanks to the new feature which loads base raster images. The software is also perfectly compatible with Blender.

SLIDE 38

A further development of the protocol will make it possible to obtain high-quality forensic facial reconstructions in less time, without the need to master 3D modeling techniques.

SLIDE 39

At the end of the FFR process, the final model is calibrated with historical, archaeological and medical sources.

SLIDE 40

In the case of historical reconstructions, the model's appearance (hairstyle and clothing) is calibrated according to era and culture, while the physical characteristics (colour of hair and eyes) are set based on ancestry.

SLIDE 41

3D printing technologies allow the materialization of the model at different levels of detail.

SLIDE 42

A case study: the forensic facial reconstruction of St. Anthony of Padua 


SLIDE 43

The 3D scan was carried out on the bronze cast made by R. Cremesini in 1981.

SLIDE 44

The cast made by R. Cremesini is very important, because it derives from the temporary anatomical reconnection of the skull and the jaw, which had been separated since the first survey of the tomb (1263).

SLIDE 45 

The 3D scan was performed with the SfM - IBM software of the archaeological GNU/Linux distribution ArcheOS.

SLIDE 46

The final model was presented on Tuesday, June 10 at the event "Scoprendo il volto di Antonio" at the Centro Culturale S. Gaetano in Padua (Italy).

SLIDE 47 - 50

Digital FFR makes it possible to further refine the details of the model to reach a more realistic result.

SLIDE 51

Thanks to the collaboration with the Centro de Tecnologia da Informação Renato Archer - CTI (Ministério da Ciência e Tecnologia do Brasil), the model was 3D printed.

SLIDE 52

One of the materialized models was repainted by the Brazilian artist Mari Bueno, who specializes in religious art.


SLIDE 53

Thank you for your attention!


 

Thursday, 5 December 2013

From drone-aerial pictures to DEM and ORTHOPHOTO: the case of Caldonazzo's castle

Hi all,
I would like to present the results we obtained in the Caldonazzo castle project. Caldonazzo is a tourist village in Trentino (Northern Italy), famous for its lake and its mountains. Few people know about the medieval castle (12th-13th century) whose tower actually appears in the town's coat of arms. Since 2006, the ruins have been the subject of an enhancement project by the Soprintendenza Archeologica di Trento (Dr. Nicoletta Pisu). As Arc-Team we participated in the project with archaeological fieldwork, historical study, digital documentation (SfM/IBM) and 3D modeling.
In this first post I will talk about the 3D documentation, the aerial photography campaign and the data processing.



1) The 3D documentation 

One of the final aims of the project will be the virtual reconstruction of the castle. To achieve that goal we need (as a starting point) an accurate 3D model of the ruins and a DEM of the hill. The first model was produced in just two days of fieldwork and four days of computer work (most of it without the direct contribution of a human operator). The castle's walls were documented using Computer Vision (Structure from Motion and Image-Based Modeling); we used the Python Photogrammetry Toolbox to process 350 pictures (Nikon D5000) divided into 12 groups (external walls, tower inside, tower outside, palace walls, fireplace, ...).


The different point clouds were rectified thanks to some ground control points. Using a Trimble 5700 GPS, the GCPs were connected to the Universal Transverse Mercator coordinate system. The rectification process was carried out in GRASS GIS using the Ply Importer add-on.


To avoid some problems encountered when using a universal coordinate system in mesh editing software, we preferred, in this first step, to work with coordinates having only three digits before the decimal point, i.e. subtracting a constant offset from the UTM values, as sketched below.
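A minimal sketch (Python/NumPy) of this trick, with a hypothetical offset and file names: subtract a constant from the UTM coordinates before mesh editing, and add it back afterwards.

import numpy as np

OFFSET = np.array([666000.0, 5096000.0, 0.0])   # hypothetical UTM offset (m)

cloud_utm = np.loadtxt("cloud_utm.xyz")          # Nx3 points in UTM
cloud_local = cloud_utm - OFFSET                 # small values, safe for 32-bit float tools
np.savetxt("cloud_local.xyz", cloud_local)
# ... edit in MeshLab / CloudCompare, then restore the offset:
cloud_restored = np.loadtxt("cloud_local_edited.xyz") + OFFSET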



2) The aerial photography campaign 

After the wall documentation we started a new campaign to acquire the data needed to model the surface of the hill (DEM) on which the ruins lie. The best solution for taking zenithal pictures was to pilot an electric drone equipped with a camera platform. Thanks to Walter Gilli, an expert pilot and builder of aerial vehicles, we had the chance to use two DIY drones (a hexacopter and an X-copter) mounting DJI Naza technology (Naza-M V2 control platform).


Both drones had a camera platform: the hexacopter mounted a Sony NEX-7, the X-copter a GoPro HD Hero3. The table below shows the differences between the two cameras.


As you can see, the Sony NEX-7 was the best choice: it has a big sensor, a high image resolution and a perfect focal length (16 mm on the digital sensor, equivalent to 24 mm on 35 mm film). Its only disadvantages are its greater weight and size compared to the GoPro, which is why we mounted the Sony on the hexacopter (more propellers = more lifting capability). The main problem with the GoPro is the ultra-wide-angle lens, which distorts reality at the borders of the pictures.
The flight plan (image below) allowed us to take zenithal pictures of the entire surface of the hill (one day of fieldwork).


The best 48 images were processed with the Python Photogrammetry Toolbox (one day of computer work). The image below shows the camera positions in the upper part and the point cloud, the mesh and the texture in the lower part.


First, the point cloud of the hill was rectified to the same local coordinate system as the walls' point clouds. The gaps in the zenithal view were filled with the point clouds acquired on the ground (image below).


After the data acquisition and processing phases, we sent the final 3D model to Cícero Moraes to start the virtual reconstruction phase.


3) The Orthophoto

The orthophoto was produced using the texture of the SfM 3D model. We exported from MeshLab a high-quality orthogonal image of the top view, which we then rectified using the Georeferencer plugin of QuantumGIS.
As an experiment, we also tried to rectify an original picture using the same method and the same GCPs. The image below shows the difference between the two images. As you can see, the orthophoto matches the GPS data very well (red lines and red crosses), while the original picture shows some discrepancies in the left part (the area farthest from the drone position, which was directly above the tower's ruins).
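The QGIS Georeferencer drives GDAL under the hood, so the same rectification can also be scripted. Below is a minimal sketch calling the gdal_translate / gdalwarp tools from Python; the pixel and UTM values of the GCPs and the file names are invented placeholders (EPSG:32632 is the UTM zone covering Trentino).

import subprocess

# (pixel, line, easting, northing) for each ground control point
gcps = [
    (120, 340, 666123.4, 5096321.7),
    (980, 310, 666158.9, 5096318.2),
    (540, 900, 666140.1, 5096290.5),
]

cmd = ["gdal_translate"]
for px, ln, e, n in gcps:
    cmd += ["-gcp", str(px), str(ln), str(e), str(n)]
cmd += ["topview.png", "topview_gcp.tif"]
subprocess.run(cmd, check=True)

# first-order polynomial warp into UTM zone 32N
subprocess.run(["gdalwarp", "-t_srs", "EPSG:32632", "-order", "1",
                "topview_gcp.tif", "orthophoto.tif"], check=True)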



4) The DEM

The DEM was produced by importing (and rectifying) the point cloud of the hill into GRASS 7.0svn using the Ply Importer add-on. The text file containing the transformation info was built using the relative coordinates extracted from CloudCompare (Point list picking tool) and the UTM coordinates of the GPS GCPs.




After importing the data, we used the v.surf.rst command (regularized spline with tension) to transform the point cloud into a surface (DEM); a minimal scripted version is sketched below. The images below show the final result in 2D and 3D visualization.
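For reference, this interpolation step can be scripted with the GRASS Python API; a minimal sketch, assuming the rectified cloud has already been imported as a vector map (map names and parameter values are placeholders):

import grass.script as gs

# set the computational region to the extent of the imported points
gs.run_command('g.region', vector='hill_points', res=0.05)
# regularized spline with tension: points -> DEM raster
gs.run_command('v.surf.rst', input='hill_points', elevation='hill_dem',
               tension=40, smooth=0.5)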



Finally we imported the orthophoto into GRASS.



That's all.

Monday, 5 August 2013

3D scanning by photos taken with a simple smartphone

Example of a composed scene (scanned objects + modeled objects)

3D scanning from photos using Structure-from-Motion (SfM) and Image-Based Modeling (IBM) has provoked people's admiration and piqued their curiosity.

Some of these people just want to try the technology out, but others have concrete ambitions, such as reconstructing architectural heritage, creating art for advertising, or even forensic research, as already shown here in other posts.

The objective of this blog is to share knowledge with everyone who wants to learn about free software technology. With this large audience in mind, today we show the result of scanning with the camera of a simple smartphone, the Galaxy Y Duos from Samsung.

The next phase of this research will be to collect photos from other cell phones and smartphones, both more and less sophisticated than the one presented here.

Configuration


Before anything else, it is necessary to know that the scanning is not done inside the smartphone, but on a personal computer with PPT-GUI installed. A few months ago a post was published here with a tutorial on 3D scanning from photos, which you can follow to make your own test.

The objective of this post is to prove that the result is literally at your fingertips, even if your hardware is not the most sophisticated (sorry, the screenshots are only in Portuguese).


In this case, the model used was a GT-S6102B with Android 2.3.6. It is a fairly simple smartphone sold in Brazil.

All the photos were taken with the default illumination settings.

Evidently, no scanning was done with photos taken at night.

Some of the scenes were in sunny environments, some in the shade, and others inside houses.

The only change to the default configuration was to use the full 3.2-megapixel resolution of the camera, i.e. 2048x1536 pixels.

The result is shown in the video at the top of this post.

I hope you enjoyed it.

See you next time!






Wednesday, 17 July 2013

Forensic facial reconstruction of a living individual using open-source software (blind test)


Studying alone is often the only solution when one cannot find support or understanding for something new and exciting that does not yet appeal to the general public.

Still, when it comes to evolving and adapting scientific knowledge for the benefit of human beings, there is nothing better than having people around with the same goals, motivated to work towards a better world, more accessible to those interested in that area of knowledge.

In early 2012 I began my studies in the field of forensic facial reconstruction. Now, a year and a half later, over forty reconstructions have gone by: mostly modern humans, some hominids and even a saber-toothed tiger.

Over that time, in the lectures I taught, in the e-mails I received and in the courses I offered, people often questioned me about the precision of the method and whether I had tested it on skulls of known people (living or not).

Graph representing the precision of a reconstruction (in millimeters) relative to the skin of the volunteer, obtained by optical scanning. The blue areas are regions where the face was reconstructed deeper than the real face, while the yellow areas are regions where the real face was deeper than the reconstructed mesh.


I had already done some experiments, but for technical reasons and in order not to disclose the identity of the volunteers, I did not publish them. Instead, I limited myself to showing the work of great artists such as Gerasimov from Russia, Caroline Wilkinson from England and Karen T. Taylor from the USA.

Fortunately, a few days ago, my research partner Dr. Paulo Miamoto sent me a scanned skull at my request, so I could test a newly developed technique to "wear" the skin over the virtual muscles. This skull, sent without much background but with its "owner's" permission for reconstruction, was the first opportunity I had to show a case of facial reconstruction of a living person, exposing the degree of accuracy that such work can reach.

Development of the Work

A few days ago, I began to test a series of Blender modifiers, seeking an option that would allow me to "wear" the skin over a reconstruction at the muscle stage. The goal was to make the process faster, and therefore more accessible to those who wish to replicate it, whether they are gifted with artistic skills or not.

I managed to find a solution with a modifier called Shrinkwrap (plus a number of adaptations), as seen in the video above. The skull shown in the video is from another reconstruction in progress. The difference may seem almost imperceptible to a layman in forensic facial reconstruction, but it is a "blessing" for those who are just starting to work on virtual sculpture.
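For those who want to try it, the core of the trick is a few lines in Blender's Python API (bpy); the object names and the offset value below are placeholders for illustration, not the exact setup used here.

import bpy

skin = bpy.data.objects["skin_template"]     # generic skin mesh
muscles = bpy.data.objects["muscle_stage"]   # reconstructed musculature

mod = skin.modifiers.new(name="WrapToMuscles", type='SHRINKWRAP')
mod.target = muscles
mod.wrap_method = 'NEAREST_SURFACEPOINT'     # snap each skin vertex to the closest surface point
mod.offset = 2.0                             # keep a small soft-tissue gap above the muscles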

Back to the skull provided by Dr. Paulo Miamoto: it offered me the possibility of reconstructing a living person who was known only to him. He asked me for help with the configuration of the skull, since he would have to "assemble" the structure, because the CT had been acquired with a cone beam tomograph.

Usually a cone beam CT captures only a portion of the skull, due to the reduced field of view of the hardware. It is equipment widely used for dental purposes and is usually cheaper than a medical CT scanner.


An interesting fact in this story is that the whole process was done with open-source software. Initially, Dr. Miamoto opened the scans in InVesalius and filtered out the part corresponding to the bones. For this step he used a tutorial that I wrote explaining the basic operation of InVesalius (translated from Portuguese): http://bit.ly/18mN6TR

Then he imported the three parts into MeshLab and aligned them in 3D space so that they formed a coherent skull. All the steps of this process were done thanks to the tutorials available on Mister P's channel on YouTube: https://www.youtube.com/user/MrPMeshLabTutorials

After aligning the meshes, the skull was exported as a .ply file and sent with the following anthropological data for the orientation of the reconstruction:

- Gender: Male;

- Ancestry: mixed xanthoderm (of Japanese descent) and Caucasian (white);

- Age: 20-30 years.

Upon receiving the skull I had to simplify the mesh, because the reconstructed CT had generated some areas with significant noise, inherent to the image capture technique of cone beam CT scanners. Then I rebuilt the missing area of the skull by aligning it with another skull from my database, as recommended by authors in the field. This way, the work could be done more easily, with more spatial references.

With the skull cleaned and properly positioned in the Frankfurt plane, the virtual pegs used as references for soft tissue depth were placed, and sketches of the projections of the nose and face profile were drawn. Since Asian and Native American individuals share physical anthropological traits that make their skulls similar, a soft tissue depth table for the native Indians of southwestern South America (Rhine, 1983) was used.

To speed up the process, a whole set of muscles, cartilage and glands was imported from another file. Obviously, some changes needed to be made in order to fit it to the studied skull.

Gradually, one by one, the muscles were deformed and adapted to the skull.

In the end all the elements were positioned; contrary to what many people think, even with all the muscles of the face in place, it is hard to get an idea of how the final work will look once finished.

For the configuration of the skin, the work followed the same method used for the muscles: a kind of general template is imported from another file.


It is then adapted until it fits the shape outlined by the profile sketch, the muscles and the soft tissue depth pegs.

It is possible to visualize the progressive shape transformation undergone by the skin mesh.

While placing the skin and "wearing" it over the muscles, I began to suspect that the skull belonged to Dr. Miamoto. The shape of the chin and the side view highlighted some features that are evident in photographs (I have not met Dr. Miamoto in person). Upon questioning him, since in this field one cannot work with uncertainty, he told me that yes, it was his skull.

Needless to say I was extremely pleased with the result.

Then it was time to test the quality of the reconstruction against the face of the skull's "owner".

A test was done with a photograph, over which the reconstructed mesh was placed and viewed from the same point of view. Note that the lips almost line up with the 3D model.

Dr. Paulo then used the same process to filter the skin from the CT and sent it to me in another .ply file. The file was aligned with the reconstruction, showing rather good compatibility.

Finally, an optical scan of Dr. Paulo's face (done separately from the CT scan) was aligned with the reconstructed face. Note that again the line of the lips is quite compatible, as is the nose breadth.


The data of the reconstructed mesh and the optical scan mesh were loaded into CloudCompare and a 3D compatibility graph was generated. A significant part of the reconstructed mesh differed by only a few millimeters from the optically scanned mesh.
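For readers who want to reproduce this kind of comparison outside CloudCompare, here is a minimal sketch (Python with NumPy/SciPy) that computes, for every vertex of the reconstructed mesh, the distance to the nearest point of the optical scan; the file names are placeholders and the meshes are assumed to be already aligned.

import numpy as np
from scipy.spatial import cKDTree

recon = np.loadtxt("reconstruction.xyz")   # Nx3 vertices of the FFR mesh
scan = np.loadtxt("optical_scan.xyz")      # Mx3 points of the face scan

dist, _ = cKDTree(scan).query(recon)       # nearest-neighbour distance per vertex
print(f"mean deviation: {dist.mean():.2f} mm")
print(f"95th percentile: {np.percentile(dist, 95):.2f} mm")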

The part in blue, comprising the cheeks, traditionally differs from scans of the living individual because the soft tissue depth table used as a reference was compiled on cadavers, which may have undergone a slight change in shape (due to dehydration and the action of gravity at the time of measurement).

This was an example of how a facial reconstruction done with open-source software can achieve a rather satisfactory degree of compatibility with the living individual, provided it follows current, already validated protocols.

The use of new technologies and specific tools in Blender 3D contributes to a satisfactory degree of compatibility in the expression lines of the face, thus making the process faster and easier for those who wish to perform a reconstruction but often do not have an artistic training background.

The findings of this study are currently being structured as a scientific article. I hope to publish them in a peer-reviewed forensic journal, so that the technical aspects of using exclusively open-source software for forensic facial reconstruction can be adequately exposed and disseminated among those interested in this field.

Acknowledgements

To Dr. Paulo Miamoto, for the continued partnership on several fronts of research involving open-source computer graphics for forensic science (and for translating this article into decent English, thank you!)

To the Biotomo Imaging Clinic staff from Jundiaí-SP: Dr. Roberto Matai and Dr. Caio Bardi Matai for the CT scan of the reconstructed skull.

To the Laboratoř Morfologie a Forenzní Antropologie team, from Faculty of Sciences at Masaryk University in Brno, Czech Republic: Prof. Petra Urbanová, MSc. Mikoláš Jurda, MSc. Zuzana Kotulanová and BS. Tomáš Kopecký, for access to the collection of skeletal material of the Department of Anthropology, aid in research of photographic technique for photogrammetry purposes and optical scans.

To the Laboratório de Antropologia e Odontologia Forense (OFLAB-FOUSP) team, from Faculty of Dentistry at University of São Paulo: Prof. Rodolfo Francisco Haltenhoff Melani and MSc. Thiago Leite Beaini for supporting the works in Brazil.

To the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES): for granting a scholarship through the Abroad Doctoral Internship Program (PDSE).
This work is licensed under a Creative Commons Attribution 4.0 International License.