Showing posts with label Blender. Show all posts

Wednesday, 13 June 2018

Francesco Petrarca, the mocap experiment in Blender

This post is related to the Wikipedia editathon we are organizing for the open source exhibition "Imago Animi", a project derived from the previous experience of "Facce. I molti volti della storia umana".
This time I will write about the facial MoCap experiment we performed with the 3D model of the FFR (Forensic Facial Reconstruction) of Francesco Petrarca. The poet was indeed one of the five historical personalities connected with the city of Padua who were the protagonists of a specific section of the exhibition "Facce". Moreover, Petrarch is also present in "Imago Animi", since his mortal remains were studied by the scientist Giovanni Canestrini, born in Revò, a town very close to Cles (Trentino - Italy), where the exhibition is currently open to visitors.
The image below (Creative Commons Attribution 4.0 International License) is the result of the Forensic Facial Reconstruction of Francesco Petrarca, performed starting from the cast of the skull found in 2005 in the "fondo Canestrini" at the University of Padua.

The FFR portrait of Francesco Petrarca


This cast is the only data available for the FFR because, as the 2013 examination of the mortal remains revealed, the skeleton of Petrarch is currently buried with a female skull, dated (with the C14 technique) between 1134 and 1280 (almost one century before the life of the poet). The aDNA analysis performed in 2004 by Prof. David Caramelli (University of Florence) confirmed this thesis (the skeleton yielded male DNA, while the skull yielded female DNA) [1].
In 2015 Arc-Team was commissioned to perform the Forensic Facial Reconstruction of Petrarca and other historical personalities, in order to prepare the open exhibition "Facce". The work started with the 3D documentation of the cast of the "fondo Canestrini", done (with SfM techniques) by Luca Bezzi (Arc-Team). The cast had previously been validated by Dott. Nicola Carrara (of the Anthropological Museum of the University of Padua), with osteometric measurements based on the drawing published by Giovanni Canestrini in his study of the mortal remains of the poet [2]. Cicero Moraes, the forensic specialist of Arc-Team, later performed the FFR in Blender, with the techniques developed over the years starting from this first post in ATOR: Forensic Facial Reconstruction with Free Software.
Once the final 3D model was achieved, we decided to test Blender's potential in facial MoCap, starting from previous experiences. In this case the idea was a short video in which Francesco Petrarca would "recite" one of his poems, in particular the proemial sonnet of the Canzoniere ("Voi ch'ascoltate in rime sparse il suono...").
The video below shows the final result...


... while this video shows the "making of".


For the two open exhibitions ("Facce" and "Imago Animi") a combination of the previous videos was chosen, in order to also show the facial MoCap technique. The final product, which you can see below, was created by Cicero Moraes (Arc-Team) using the facial MoCap tools of Blender, starting from the original video recorded by Luca Bezzi (Arc-Team), with the technical help of Dott.ssa Emma Varotto and Dott. Nicola Carrara (Anthropological Museum of the University of Padua), who recorded the excellent performance of the actor Antonello Pagotto.



This post is also meant as a tribute to all the people involved in the project, for their professionalism and kindness!
Have a nice day!


Bibliography

[1] N. Carrara, L. Bezzi, Lo strano caso del cranio di Francesco Petrarca, in Imago Animi. Volti dal passato, 2018
[2] G. Canestrini, Le ossa di Francesco Petrarca, 1874

Wednesday, 7 December 2016

Comparing 7 photogrammetry systems. Which is the best one?


by Cicero Moraes
3D Designer of Arc-Team.

When I explain to people that photogrammetry is a 3D scanning process from photographs, I always get a look of mistrust, as it seems too fantastic to be true. Just imagine, take several pictures of an object, send them to an algorithm and it returns a textured 3D model. Wow!

After presenting the model, the second question from interested parties always orbits around precision. What is the accuracy of a 3D scan from photographs? The answer is: submillimetric. And again I am met with a look of mistrust. Fortunately, our team wrote a scientific paper about an experiment that showed an average deviation of 0.78 mm, that is, less than one millimeter compared to scans done with a laser scanner.
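A deviation figure like this is usually obtained by taking, for every vertex of the photogrammetry mesh, the distance to the nearest point of the laser scan, and averaging. A minimal stdlib-Python sketch of the idea (a toy computation, not the actual protocol of the paper):

```python
import math

def mean_deviation(cloud_a, cloud_b):
    """Mean nearest-neighbour distance (mm) from each point of cloud_a
    to cloud_b. Brute force: fine for toy clouds; real tools such as
    CloudCompare use accelerated spatial structures for millions of points."""
    total = 0.0
    for p in cloud_a:
        total += min(math.dist(p, q) for q in cloud_b)
    return total / len(cloud_a)

# toy example: a "laser" cloud and a photogrammetry cloud offset 0.5 mm in Z
laser = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
photo = [(x, y, z + 0.5) for (x, y, z) in laser]

print(mean_deviation(photo, laser))  # 0.5
```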

Just like the market for laser scanners, in photogrammetry we have numerous software options for carrying out scans. They range from proprietary, closed solutions to open and free ones. And precisely in the face of this variety of programs and solutions comes the third question, hitherto unanswered, at least officially:

Which photogrammetry software is the best?

This is more difficult to answer, because it depends a lot on the situation. But thinking it over, and drawing on the many approaches I have tried over time, I decided to respond in the way I thought was broadest and fairest.


The skull of the Lord of Sipan


In July of 2016 I traveled to Lambayeque, Peru, where I stood face to face with the skull of the Lord of Sipan. In analyzing it I realized that it would be possible to reconstruct his face using the forensic facial reconstruction technique. The skull, however, was broken and deformed by years of pressure in its tomb, found intact in 1987 in one of the greatest feats of archaeology, led by Dr. Walter Alva.


To reconstruct the skull I took 120 photos with an ASUS ZenFone 2 smartphone and proceeded with the reconstruction work from these photos. In parallel, professional photographer Raúl Martin, from the Marketing Department of the Inca University Garcilaso de la Vega (sponsor of my trip), took 96 photos with a Canon EOS 60D camera. Of these, I selected 46 images for the experiment.

Specialist of the Ministry of Culture of Peru initiating the process of digitalization of the skull (in the center)


A day after the photographic survey, the Peruvian Ministry of Culture sent specialists in laser scanning to scan the skull of the Lord of Sipan, equipped with a Leica ScanStation C10. The final point cloud was delivered 15 days later; by the time I received the laser scanner data, all the photogrammetry models were ready.

We had to wait for it, since the model produced by this equipment is the gold standard: all the meshes produced by photogrammetry would be compared with it, one by one.

Full point cloud imported into MeshLab after conversion in CloudCompare
The point clouds resulting from the scan were .LAS and .E57 files... and I had never heard of them. I had to do a lot of research to find out how to open them on Linux using free software. The solution was CloudCompare, which offers the possibility of importing .E57 files. I then exported the model as .PLY so I could open it in MeshLab and reconstruct the 3D mesh through the Poisson algorithm.

3D mesh reconstructed from a point cloud. Vertex color (above) and surface with a single color (below).

As you may have noticed above, the jaw and the surface of the table where the pieces were placed were also scanned. The part corresponding to the skull was isolated and cleaned for the experiment. I will not deal with these details here, since the scope is different; I have already written other material explaining how to delete unimportant parts of a point cloud or mesh.

For the scanning via photogrammetry, the chosen systems were:

1) OpenMVG (Open Multiple View Geometry library) + OpenMVS (Open Multi-View Stereo reconstruction library): The sparse cloud of points is calculated in OpenMVG and the dense cloud of points in OpenMVS.

2) OpenMVG + PMVS (Patch-based Multi-view Stereo Software): The sparse cloud of points is calculated in OpenMVG and later PMVS calculates the dense cloud of points.

3) MVE (Multi-View Environment): A complete photogrammetry system.

4) Agisoft® Photoscan: A complete and closed photogrammetry system.

5) Autodesk® Recap 360: A complete online photogrammetry system.

6) Autodesk ® 123D Catch: A complete online photogrammetry system.

7) PPT-GUI (Python Photogrammetry Toolbox with graphical user interface): The sparse cloud of points is generated by Bundler and later PMVS generates the dense cloud.

* Run on Linux under Wine (PlayOnLinux).

Above we have a table summarizing important aspects of each system. In general, at least on the surface, no system stands out much more than the others.


Sparse cloud generation + dense cloud generation + 3D mesh + texture, not counting the time to upload photos and download the 3D mesh (in the cases of Recap 360 and 123D Catch).
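For options 1 and 2 above, the scan is not a single program but a chain of command-line tools. The sketch below only assembles the typical openMVG → openMVS command sequence, without running anything; the binary names follow the two projects' documentation, but exact flags vary between versions and the paths here are hypothetical:

```python
def openmvg_openmvs_pipeline(images_dir, out_dir):
    """Build (without running) the typical openMVG -> openMVS command chain."""
    m = f"{out_dir}/matches"         # openMVG working directory
    r = f"{out_dir}/reconstruction"  # sparse reconstruction output
    return [
        # 1) sparse point cloud in openMVG
        ["openMVG_main_SfMInit_ImageListing", "-i", images_dir, "-o", m],
        ["openMVG_main_ComputeFeatures", "-i", f"{m}/sfm_data.json", "-o", m],
        ["openMVG_main_ComputeMatches", "-i", f"{m}/sfm_data.json", "-o", m],
        ["openMVG_main_IncrementalSfM", "-i", f"{m}/sfm_data.json", "-m", m, "-o", r],
        ["openMVG_main_openMVG2openMVS", "-i", f"{r}/sfm_data.bin", "-o", f"{out_dir}/scene.mvs"],
        # 2) dense cloud, mesh and texture in openMVS
        ["DensifyPointCloud", f"{out_dir}/scene.mvs"],
        ["ReconstructMesh", f"{out_dir}/scene_dense.mvs"],
        ["TextureMesh", f"{out_dir}/scene_dense_mesh.mvs"],
    ]

# each entry could be handed to subprocess.run(cmd, check=True)
for cmd in openmvg_openmvs_pipeline("photos", "work"):
    print(" ".join(cmd))
```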

Alignment based on compatible points

Aligned skulls
All the meshes were imported into Blender and aligned with the laser scan.


Above we see all the meshes side by side. Some surfaces are so dense that we notice only the edges, as in the case of the 3D laser scan and OpenMVG + PMVS. First, a very important point: the texture of a scanned mesh tends to deceive us about the quality of the scan, so in this experiment I decided to ignore the texture results and focus on the 3D surface. Therefore, I exported all the original models in .STL format, which is known to carry no texture information.


Looking closely, we will see that even the least dense mesh, with fewer subdivisions, is consistent with the others. The ultimate goal of scanning, at least in my work, is to get a mesh that is consistent with the original object. If the mesh is simplified yet in harmony with the real volume, so much the better: the fewer faces a 3D mesh has, the faster it is to process during editing.


If we look at the file sizes (.STL exported without texture), which are a good comparison parameter, we will see that the mesh created by OpenMVG + OpenMVS, already cleaned, has 38.4 MB, while the Recap 360 one has only 5.1 MB!

After years of working with photogrammetry, I have realized that the best thing to do with a very dense mesh is to simplify it, so we can handle it smoothly in real time. It is difficult to know whether this is indeed the case, as they are proprietary, closed solutions, but I suppose that both Recap 360 and 123D Catch generate complex meshes and then simplify them considerably at the end of the process, so that they run on any hardware (PCs and smartphones), preferably with WebGL support (interactive 3D in the web browser).

We will return to this question of mesh simplification soon; for now, let us compare the meshes.

How 3D Mesh Comparison Works


Once all the skulls have been cleaned and aligned to the gold standard (the laser scan), it is time to compare the meshes in CloudCompare. But how does this 3D mesh comparison technology work?

To illustrate this, I created some didactic elements. Let's look at them.


This didactic element consists of two planes with zero-thickness surfaces (possible in 3D digital modeling) forming an X.


We have object A and object B. At the ends of both sides the planes are a few millimeters apart. Where they intersect, the distance is, of course, 0 mm.


When we compare the two meshes in CloudCompare, they are pigmented with a color spectrum that goes from blue to red. The image above shows the two planes already pigmented, but we must remember that they are two distinct elements and the comparison is made in two passes, one with respect to the other.

Now we have a clearer idea of how it works. Basically, we set a distance limit, in this case 5 mm. What is "out" tends to be pigmented red, what is "in" tends to be pigmented blue, and what is at the intersection, i.e. on the same line, tends to be pigmented green.
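In code, this banding amounts to clamping each signed distance to the chosen limit and mapping it onto the spectrum. A deliberately simplified three-band sketch (CloudCompare actually uses a continuous blue-to-red gradient):

```python
def classify(distances_mm, limit=5.0):
    """Map signed point-to-reference distances (mm) to a simplified
    three-band version of the blue -> green -> red spectrum.

    Negative = "in" (blue), near zero = on the reference (green),
    positive = "out" (red); `limit` clamps the scale, as in the text.
    """
    bands = []
    for d in distances_mm:
        d = max(-limit, min(limit, d))   # clamp to the comparison limit
        if d < -limit / 3:
            bands.append("blue")
        elif d > limit / 3:
            bands.append("red")
        else:
            bands.append("green")
    return bands

print(classify([-5.0, -0.2, 0.0, 0.4, 5.0]))
# ['blue', 'green', 'green', 'green', 'red']
```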


Now I will explain the approach taken in this experiment. Above we have an element whose central region tends to zero and whose ends are set at +1 and -1 mm. It does not appear in the image, but the element used for comparison is a simple plane positioned at the center of the scene, right at the base of the 3D bells, both those "facing up" and those "facing down".


As I mentioned earlier, we set a comparison limit. Initially it was set at +2 and -2 mm. What if we change this limit to +1 and -1 mm? This was done in the image above; part of the surface now falls out of bounds.


So that these out-of-bounds parts do not interfere with the visualization, we can erase them.


This results in a mesh comprising only the part of the structure that is of interest.

For those who know a bit more about 3D digital modeling, it is clear that the comparison is made at the vertices rather than the faces. Because of this, the edges look jagged.

Comparing Skulls


The comparison was made as PHOTOGRAMMETRY vs. LASER SCANNING with limits of +1 and -1 mm. Everything outside that spectrum was erased.


OpenMVG+OpenMVS


OpenMVG+PMVS


Photoscan


MVE


Recap 360


123D Catch


PPT-GUI


By putting all the comparisons side by side, we see a strong tendency towards zero: the seven photogrammetry systems are effectively compatible with laser scanning!


Let's now turn to the issue of file sizes. One thing that has always bothered me in comparisons involving photogrammetry results was the counting of the subdivisions generated by the mesh-reconstruction algorithms. As I mentioned above, this does not make much sense, since in the case of the skull we can simplify the surface and still keep the information necessary for anthropological survey and forensic facial reconstruction work.

In the face of this, I decided to level all the files, making them comparable in size and subdivision. To do this, I took as a base the smallest file, the one generated by 123D Catch, and used MeshLab's Quadric Edge Collapse Decimation filter set to 25,000 faces. This resulted in 7 STLs of 1.3 MB each.
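The ~1.3 MB figure follows directly from the binary STL format: a fixed 84-byte header plus 50 bytes per triangle, so the face count alone determines the file size:

```python
def binary_stl_size_bytes(n_triangles):
    """Binary STL: 80-byte header + 4-byte triangle count
    + 50 bytes per triangle (12 floats + 2-byte attribute word)."""
    return 84 + 50 * n_triangles

size = binary_stl_size_bytes(25000)
print(size, "bytes =", round(size / 1e6, 2), "MB")  # 1250084 bytes = 1.25 MB
```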

With this leveling we now have a fair comparison between photogrammetry systems.


Above we can visualize the work steps. In the Original row are the skulls initially aligned; in Compared we see the skulls with only the areas of interest kept; and finally in Decimated we have the skulls leveled in size. To an unsuspecting reader they look like a single image repeated side by side.


When we visualize the comparisons as "solid" surfaces we see even better how compatible they all are. Now, let's move on to the conclusions.


Conclusion


The most obvious conclusion is that, overall, with the exception of MVE, which showed less definition in the mesh, all photogrammetry systems gave very similar visual results.

Does this mean that the MVE is inferior to the others?

No, quite the opposite. MVE is a very robust and practical system. On another occasion I will present its use in a case of prosthesis-making with millimetric quality. It has also been used in other prosthetics projects, a field that demands a lot of precision, and it was successful. One case was even published on the official website of Darmstadt University, the institution that develops it.

What is the best system overall?

It is very difficult to answer this question, because it depends a lot on the user's style.

What is the best system for beginners?

Undoubtedly, it's Autodesk® Recap 360. This is an online platform that can be accessed from any operating system with an Internet browser supporting WebGL. I have even tested it directly on my smartphone and it worked. In the courses I teach about photogrammetry, I have used this solution more and more, because students tend to understand the process much faster than with the other options.

What is the best system for modeling and animation professionals?

I would recommend Agisoft® Photoscan. Its graphical interface makes it possible, among other things, to create a mask over the region of interest of the photogrammetry, and to limit the calculation area, drastically reducing the machine's processing time. In addition, it exports to the most varied formats, including the possibility of showing where the cameras were at the moment they photographed the scene.

Which system do you like the most?

Well, personally I appreciate each of them in certain situations. My favorite today is the mixed OpenMVG + OpenMVS solution. Both are open source and can be driven from the command line, allowing me to control a series of properties and adjust the scan to the need at hand, be it to reconstruct a face, a skull or any other piece.

Although I really like this solution, it has some problems, such as the misalignment of the cameras in relation to the models when the sparse-cloud scene is imported into Blender. To solve this I use PPT-GUI, which generates the sparse cloud with Bundler, and there the match, that is, the alignment of the cameras in relation to the cloud, is perfect. Another problem with OpenMVG + OpenMVS is that it occasionally fails to generate a full dense cloud, even when the sparse cloud shows all the cameras aligned. To work around this I use PMVS which, although it generates a less dense cloud than OpenMVS, is very robust and works in almost all cases.

Another problem with the open source options is the need to compile the programs. Everything works very well on my computers, but when I have to pass the solutions on to students or other interested people it becomes a big headache. For the end user, what matters is having software into which images go on one side and a 3D model comes out on the other, and this is what the proprietary solutions offer in a straightforward way. In addition, the licenses of the resulting models are clearer in these applications; I feel safer in the professional modeling field using models generated in Photoscan, for example. Technically, you pay the license and can generate models at will, using them in your works. The same goes, more or less, for the Autodesk® solutions.

Acknowledgements


To the Inca University Garcilaso de la Vega for coordinating and sponsoring the project of facial reconstruction of the Lord of Sipán, responsible for taking me to Lima and Lambayeque in Peru. Many thanks to Dr. Eduardo Ugaz Burga and to Msc. Santiago Gonzáles for all the strength and support. I thank Dr. Walter Alva for his confidence in opening the doors of the Tumbas Reales de Sipán museum so that we could photograph the skull of the historical figure that bears his name. These thanks extend to the technical staff of the museum: Edgar Bracamonte Levano, Cesar Carrasco Benites, Rosendo Dominguez Ruíz, Julio Gutierrez Chapoñan, Jhonny Aldana Gonzáles and Armando Gil Castillo. I thank Dr. Everton da Rosa for supporting the research, not only by acquiring a Photoscan license for it, but also by using photogrammetry technology in his orthognathic surgery planning. I thank Dr. Paulo Miamoto for brilliantly presenting the results of this research during the XIII Brazilian Congress of Legal Dentistry and the II National Congress of Forensic Anthropology in Bahia. I thank Dr. Rodrigo Salazar for accepting me into his research group on the facial reconstruction of cancer victims, which opened my eyes to many possibilities of photogrammetry in the treatment of humans. I thank the members of the Animal Avengers group, Roberto Fecchio, Rodrigo Rabello, Sergio Camargo and Matheus Rabello, for allowing photogrammetry-based solutions in their research. I thank Dr. Marcos Paulo Salles Machado (IML RJ) and the members of IGP-RS (SEPAI) Rosane Baldasso, Maiquel Santos and coordinator Cleber Müller, for adopting the use of photogrammetry in official forensic work. To you all, thank you!

Thursday, 17 November 2016

Torre dei Sicconi - Chapter 9 - Rebirth

After surveying, digging, historical research and virtual reconstruction, here is the final result:

Watch, in the last chapter of Arc-Team's "Torre dei Sicconi" series, our idea of how the castle looked in the Middle Ages.

Enjoy!

Torre dei Sicconi - Chapter 9 - Rebirth


Wednesday, 9 November 2016

Torre dei Sicconi - Chapter 8 - Reconstruction

After surveying, digging and historical research, we started to think about what the castle looked like in the Middle Ages.
Photos from the beginning of the 20th century, archaeological finds, 3D models, and comparison with similar, preserved castles: these are the bases for the virtual reconstruction made by Cicero Moraes.
Watch, in the next chapter of Arc-Team's "Torre dei Sicconi" series, the individual steps of the 3D reconstruction with Blender.

Enjoy!

Torre dei Sicconi - Chapter 8 - Virtual Reconstruction

Thursday, 3 November 2016

Blender Magazine Italia is back!

Hi all,
this quick post is just to report the news that Blender Magazine Italia is back! 
This November the Blender Italia community will celebrate its 14th year of activity; accordingly, the official website was renovated in February with a brand new look, a forum and new sections. If you want to join, just visit this link: https://www.blender.it/
Moreover, the 18th issue of Blender Magazine Italia is online today; it can be read directly online here, or downloaded as a pdf here. This issue also includes an article about the open source exhibition "Facce. I molti volti della storia umana", which was realized with just Free/Libre and Open Source Software, and in particular with Blender.
I hope you will enjoy reading!

The article about the exhibition "Facce"


Monday, 22 August 2016

St. Paolina Visintainer. Recovering a lost smile

On Thursday 9 June 2016 a conference was held in Vigolo Vattaro (Trentino - Italy) regarding the figure of St. Paolina Visintainer, who was born in this town in 1865, and, more generally, the issue of emigration.
Among the other interesting contributions, a special mention must be made of the work about Italian immigration to Brazil during the XIX century, presented by Cesar Augusto Prezzi, which focused on the states of Rio Grande do Sul and Santa Catarina. This research illustrated the hard journey Italian and, in general, European migrants had to face to reach the New World, often losing their relatives along the way (in many cases buried at sea), in order to escape poverty and war (a story that sadly recalls the current journeys of refugees).


Ship with immigrants in Santos

Our (Arc-Team) contribution to the conference regarded a more specific topic connected with the person of St. Paolina: her Forensic Facial Reconstruction, aimed at recovering her smile. Indeed, Mother Paolina is remembered as a smiling and good-natured person but, since the only photos we have were taken in sad moments of her life, we have no representation of her more natural expression. From this singular issue, Cícero Moraes started to work on recovering her lost smile, with the help of the artist Mari Bueno and Prof. José Luis Lira.
Below I uploaded the video of the presentation I gave during the conference, in which the partners of the project, the main workflow and the final result are presented:



while here is the clip shown in the final part of the slides:



Below is the final image of the Forensic Facial Reconstruction of S. Paolina Visintainer, performed by Arc-Team's forensic artist Cicero Moraes, who also carried out the 3D documentation of the skull with SfM techniques. As always in ATOR, this material is released under a CC-BY license.

The Forensic Facial reconstruction of S. Paolina Visintainer (with the reconstructed smiling expression)


Have a nice day!

Saturday, 13 December 2014

Forensic Facial Reconstruction, the state of the art

As many of you know, last week a team from the University of Leicester publicly revealed that they had, in all likelihood, discovered the tomb of Richard III. The results seem supported by the analysis of mitochondrial DNA, while the discrepancy on the Y chromosome could be explained by a false paternity. The study was completed with a forensic facial reconstruction of the king, performed by the experts of the University of Dundee, led by Caroline Wilkinson, Professor of Craniofacial Identification.
Given the opportunity, I decided to publish here our state of the art in this particular field (forensic facial reconstruction applied to archaeology), publishing the presentation that I gave during the study day in honor of Prof. Franco Ugo Rollo (Ascoli Piceno, November 26, 2014).

You can see the presentation here below (better visualized at this link)...
 



... and here is a brief explanation of each slide:

SLIDE 1

A remembrance of Franco Ugo Rollo, professor at the University of Camerino. I did not have the fortune of knowing Prof. Rollo personally, but his name is surely well known in my discipline (archaeology) as well.

SLIDE 2

"Digital faces: new technologies for the forensic facial reconstruction of the historical figures".
The presentation is intended as an overview of the digital FFR methodologies with FLOSS developed over the last two years on the blog ATOR, with the spontaneous contribution of different authors.

SLIDE 3

The traditional work-flow involves several operations: 3D scanning the skull, preparing a replica, performing the anthropological analyses, placing the tissue depth markers, reconstructing the profile, modeling the muscles and skin, calibrating the model with the available sources and dressing it.

SLIDE 4

The same operations are necessary for the digital work-flow. Our main work has been to turn the traditional process into a digital one, using only FLOSS.

SLIDE 5

There are different technologies to obtain a 3D digital copy of the original skull. The two main ones we are using are SfM - IBM and X-ray CT.

SLIDE 6

In 2009 Arc-Team performed the first test in applying SfM - IBM with FLOSS to Cultural Heritage, during its participation in the TOPOI excellence cluster of Berlin.

SLIDE 7

The test developed into a collaboration with the French researcher Pierre Moulon (Université Paris-Est and Mikros Image; currently at Acute3D) to integrate SfM - IBM software in ArcheOS 4 (codename Caesar).

SLIDE 8

The first test (TOPOI Löwe) gave positive results.

SLIDE 9

The process is mainly based on different photos taken with different orientations, computing the displacement of common points between the images.

SLIDE 10

To complete the 3D documentation of an object, the next step is so-called mesh editing, which can be performed in the software MeshLab (developed by the Visual Computing Lab at ISTI - CNR in Pisa, Italy).

SLIDE 11

In order to validate the digital FFR method, some unconventional procedures (derived from hacker culture) have been adopted. With reverse engineering techniques based on SfM, it has been possible to digitally replicate the process of past FFR projects and compare the results.

SLIDE 12

The anthropological validation was performed by comparing 3D models obtained with SfM - IBM and the corresponding results coming from 3D scanning (the observed distortion remained within the range of 1 mm).

SLIDE 13

In several projects it is possible to work with DICOM data. In these cases the anthropological analysis is more accurate. (3D VS Voxel)

SLIDE 14

The main software we use for DICOM data is InVesalius, mainly developed at the Renato Archer Information Technology Center, an institute of the Brazilian Ministry of Science and Technology.

SLIDE 15

"X-ray computed tomography (X-ray CT) is a technology that uses computer-processed X-rays to produce tomographic images (virtual 'slices') of specific areas of the scanned object, allowing the user to see inside without cutting." (Wikipedia)

SLIDE 16

Also in this case, the process was validated with unconventional procedures derived from hacker culture. With reverse engineering of CT videos it has been possible to rebuild DICOM data and the 3D model of different skulls, replicating FFR projects and comparing the results.

SLIDE 17

It is necessary to check and validate the protocol with a continuous methodological comparison with all the available resources. For this reason, we also tried the FFR of Henry IV, a project in which Prof. Rollo was involved, rejecting (with other scholars) the attribution of the mummified head to the French king. Our test in this case is just an experiment, starting from low-quality data, but it is a good example of some benefits of digital FFR, like the possibility of quickly modifying the reconstructed face (e.g. closing the mouth in order to perform a superimposition with the death mask), an operation not so simple with tangible models.

SLIDE 18

Once the 3D model is obtained, digital anthropological analyses do not differ from traditional ones.

SLIDE 19

In some cases, a virtual restoration of the model is necessary. The solution comes from symmetrical and boolean operations of 3D modeling software (Blender).

SLIDE 20

The whole 3D modeling process is currently performed in the software Blender.

SLIDE 21

The first operation is to fix the 3D skull on the Frankfurt plane, which replicates the head position of a standing human figure.

SLIDE 22

Then the tissue depth markers are placed. The software automatically keeps the correct normal for each marker.

SLIDE 23

In our work, for tissue depth markers, we use the tables of De Greef et al. (2006).

SLIDE 24

A second step is the profile reconstruction.

SLIDE 25

For the nose shape we refer to G. Lebedinskaya's method.

SLIDE 26

The validation of the method came mainly from the comparison between FFR models and facial DICOM data of living people, a simple task with digital techniques, using the software CloudCompare. All these experiments were conducted as blind tests (the artist did not know the identity and physiognomy of the subjects).

SLIDE 27

According to the blind tests, the main deviations were detected on the cheeks.

SLIDE 28

Like the other 3D operations, muscle modeling has been performed in Blender.

SLIDE 29

The technique has been continuously rationalized and optimized. For instance, once the main muscles are modeled with metaballs in Blender, the result can be reused in successive reconstructions through an anatomical deformation.

SLIDE 30

It is possible to reach more realistic results through specific modeling tools,
like the "sculpt mode" in Blender.

SLIDE 31

Skin modeling is also an operation performed in Blender.

SLIDE 32

Again the technique has been optimized: in order to simplify and speed up the process, a neutral facial model has been created.

SLIDE 33

The neutral model can be anatomically deformed on different skulls to meet gender and age dimorphism.

SLIDE 34

At the same time, the neutral model can be deformed to meet the anatomical criteria which determine the individual dimorphism.

SLIDE 35

After the reconstruction process, two main models are defined:  one with hair and one hairless.

SLIDE 36

Thanks to the latest developments of the software MakeHuman it is now possible to further simplify and speed up the technique. Our current research is following this direction.

SLIDE 37

The first tests carried out in 2014 yielded positive results, thanks to the new feature that loads base raster images. The software is also perfectly compatible with Blender.

SLIDE 38

A further development of the protocol will make it possible to obtain high-quality forensic facial reconstructions in less time, without the need to master 3D modeling techniques.

SLIDE 39

At the end of the FFR process, the final model is calibrated with historical, archaeological and medical sources.

SLIDE 40

In the case of historical reconstructions, the model's appearance (hairstyle and clothing) is calibrated depending on era and culture, while the physical characteristics (color of hair and eyes) are set based on ancestry.

SLIDE 41

3D printing technologies allow the materialization of the model at different levels of detail.

SLIDE 42

A case study: the forensic facial reconstruction of St. Anthony of Padua 


SLIDE 43

The 3D scan was carried out on the bronze cast made by R. Cremesini in 1981.

SLIDE 44

The cast made by R. Cremesini is very important because it derives from the temporary anatomical reconnection of the skull and the jaw, which had been separated since the first survey of the tomb (1263).

SLIDE 45 

The 3D scan was performed with the SfM - IBM software of the archaeological GNU/Linux distribution ArcheOS.

SLIDE 46

The final model was presented on Tuesday, June 10, at the event "Scoprendo il volto di Antonio" at the Centro Culturale S. Gaetano in Padua (Italy).

SLIDE 47 - 50

Digital FFR makes it possible to further refine the details of the model and reach a more realistic result.

SLIDE 51

Thanks to the collaboration with the Centro de Tecnologia da Informação Renato Archer - CTI (Ministério da Ciência e Tecnologia do Brasil), the model was printed in 3D.

SLIDE 52

One of the materialized models was painted by the Brazilian artist Mari Bueno,
who is specialized in religious art.


SLIDE 53

Thank you for your attention!


 

Wednesday, 3 December 2014

Space archaeology

Per aspera ad astra

When you start a new research project you know where your path begins, but you do not know where it will end (or where it will take you).
As many of you know, we also work with 3D printing of archaeological objects: here (1 and 2) are the two posts +Leonardo Zampi wrote about the Taung Project, and here is a post regarding some Augmented Reality applications, in one of which a 3D printed skull was used (watch the first video).
Most of these experiments are connected with the open source exhibition "Facce. I molti volti della storia umana" (please, do not forget our crowdsourcing campaign: send us your images!). For this event, whose English title is "Faces. The many aspects of human history", we planned to use 3D printed objects for different Augmented Reality applications and to expand the accessibility of the digital exhibits for the visitors (reducing the restrictions connected with disability). This post reports a preliminary overview of the event (given during the European Academic Heritage Day 2013), presenting the main topics of the exhibition and the problems and solutions we planned to apply (sorry, the slides are in Italian; I will translate the text ASAP).
Today, working on new research for this exposition, I tested different possibilities to reconstruct a 3D model from a single image. Normally our (Arc-Team) workflow starts with a 3D model obtained from Structure from Motion and Image-Based Modeling (using different software) or from x-ray Computed Tomography (as in the paleoart or mummiology projects), but in archaeology it can happen that we have to use Single View Reconstruction techniques when there are no other solutions. This post by +Cícero Moraes is a good example of a reconstruction in Blender based on perspective and vanishing points. Of course this technique is optimized for the architectural documentation of structures, but it is almost unusable for more irregular objects.
To avoid this problem I studied different possibilities and decided to use the same software, Blender, but in a different way. I looked on the internet for an archaeological picture that could meet my requirements: not too simple, but with a correct light exposure. My problem in finding a good base image comes from the fact that archaeological artifact photography has codified rules and normally the light source is located in the upper-left corner; otherwise a bas-relief (convex) would appear as a counter-relief (concave) and vice versa (due to the Hollow-Face optical illusion).
After a while I found this image, which has an almost correct light exposure (sorry, I no longer know the source of the photo and I did not find information about the author).

The base image
I modified the picture with GIMP in order to obtain a grayscale photo, then I imported it into Blender.

The grayscale image
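The grayscale step matters because each pixel's brightness becomes the height of the relief. A desaturation of this kind is usually a weighted average of the color channels; the sketch below uses the Rec. 709 luma weights (GIMP offers several desaturation modes, so this is an illustration of the idea, not necessarily the exact formula GIMP applied here; the function name is mine).

```python
def to_grayscale(r, g, b):
    """Rec. 709 luma: a weighted average of the RGB channels,
    the kind of formula a desaturate filter typically applies."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure green reads much brighter than pure red to the human eye,
# so it will produce a higher relief after displacement
red_value = to_grayscale(255, 0, 0)
green_value = to_grayscale(0, 255, 0)
```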
There I used the "displace" modifier and automatically obtained a fast 3D model of the object (of course nothing comparable with the SfM and IBM techniques, but enough for my SVR needs).

The displace modifier
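What the Displace modifier does here can be sketched conceptually in a few lines of Python. This is a simplified, pure-Python illustration of the idea (the function name is my own), not Blender's implementation: each pixel of the grayscale image pushes the corresponding vertex of a subdivided plane along Z.

```python
def displace_plane(heightmap, strength=1.0):
    """Displace the vertices of a regular grid along Z according to the
    pixel values of a grayscale image (0-255): the basic idea behind
    applying Blender's Displace modifier to a subdivided plane."""
    verts = []
    for y, row in enumerate(heightmap):
        for x, value in enumerate(row):
            verts.append((float(x), float(y), value / 255 * strength))
    return verts

# A tiny 2x2 "image": black, mid gray / white, black
heightmap = [[0, 128], [255, 0]]
verts = displace_plane(heightmap, strength=2.0)
```

In Blender the equivalent steps are: add a plane, subdivide it enough times to match the image resolution, add a Displace modifier, and load the grayscale image as its texture; the modifier's Strength value plays the role of the `strength` parameter above.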
After some additional smoothing operations in Blender (you can directly use the related modifier), the model was ready to be saved as an .stl file, loaded into Cura and printed in 3D.

The stl file in Cura
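For the curious, the ASCII variant of the STL format the model is exported to is simple enough to write by hand; the minimal sketch below (the function name is mine) shows its structure. Note that Blender's exporter typically produces binary STL with computed normals, while here the normals are left at zero, which slicers like Cura usually tolerate by recomputing them from the vertex order.

```python
import os
import tempfile

def write_ascii_stl(triangles, path):
    """Write triangles (each a tuple of three (x, y, z) vertices) to a
    minimal ASCII STL file. Normals are written as zero; most slicers
    recompute them from the vertex winding order."""
    with open(path, "w") as f:
        f.write("solid mesh\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid mesh\n")

# One triangle, written to a temporary file
path = os.path.join(tempfile.mkdtemp(), "demo.stl")
write_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 1))], path)
```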
At this point I was ready to adapt the entire process to my needs for the exhibition "Facce", but here is where my research took a completely different turn.
On my desk was lying a local newspaper with a photo of the Italian astronaut Samantha Cristoforetti, who is currently on board the International Space Station (ISS). Dr. Cristoforetti was born in Milano, but her family originally comes from a town (Malé) very close to the one in which I live (Cles), which is the reason why the local press is following her scientific mission very closely. Reading the article, I thought it would be nice to print in 3D something that could be a tribute to her work and to the whole mission: something related to space exploration and archaeology. Suddenly a black and white picture appeared in my mind, one which most of you probably know and which dates back to July 20, 1969, so I decided to test the process on this image and see the result.
I searched the NASA website for material on the Apollo 11 mission and found what I was looking for: the photo of the first footprints on the Moon. I turned the picture into a grayscale image and repeated the Single View Reconstruction protocol with this data.

The grayscale image
The video below shows the whole workflow and is a new video tutorial for the Digital Archaeological Documentation Project.



Of course the result has no metric nor topographic value, and it is more an artistic reconstruction than a 3D documentation, but this time it was just for fun and a tribute to women's contribution to space exploration. BTW, on board the ISS astronauts are currently testing 3D printing in space (Made in Space).
If you want to print the .stl file I made, you can download it directly at this link. Otherwise, in this post you can find all you need to do the process by yourself.
Have fun! 

Saturday, 13 September 2014

Arc-Team wins a prize in an international conference in Brazil

Dr. Miamoto and the winner poster

Last week, from September 4th to 6th, the 12th Brazilian Congress of Forensic Dentistry took place in Florianópolis. The biennial event featured conferences and workshops by forensic professionals from Brazil, Uruguay, Peru and USA.

The attendees could also submit posters and short oral presentations to compete for the best academic work awards. The oral presentation "Protocol for Forensic Facial Reconstruction with open software: method simplification using MakeHuman" was one of the winners.

In this work, authors Cicero Moraes (Arc-Team member) and Dr. Paulo Miamoto explained how the application of MakeHuman to forensic facial reconstruction can aid this technique by simplifying and individualizing the anatomic modeling process, as well as allowing the operator to adjust the 3D humanoid template to soft tissue pegs and other objective parameters using the Blender export mode.


The winner poster (in Portuguese)
The method was also presented by Dr. Miamoto at one of the official conferences of the event. Moraes, a 3D designer, and Miamoto, a forensic dentist, are members of the NGO "Brazilian Team of Forensic Anthropology and Legal Dentistry - Ebrafol", a non-profit organization that aims at the promotion of Human Rights by applying knowledge of the aforesaid sciences. One of Ebrafol's goals is to provide official forensic units with training in 3D technology using open software.

Originally published at: http://www.makehuman.org/blog/makehuman_for_forensic_face_reconstruction_and_crime_investigation.html

Monday, 5 August 2013

3D scanning by photos taken with a simple smartphone

Example of a composed scene (scanned objects + modeled objects)

3D scanning from photos with Structure from Motion (SfM) and Image-Based Modeling (IBM) has provoked admiration and sparked people's curiosity.

Some of these people only want to test the technology casually, but others have concrete goals, like reconstructing architectural heritage, producing art for advertising, and even forensic research, which has already been shown here in another post.

The objective of this blog is to share knowledge with all the people who want to learn about free software technology. With this large audience in mind, today we show the result of scanning with the camera of a simple smartphone, the Galaxy Y Duos from Samsung.

The next phase of the research will be to collect photos from other cell phones and smartphones, both more and less sophisticated than the one presented here.

Configuration


Before anything else, it is necessary to know that the scanning is not done inside the smartphone, but on a personal computer with PPT-GUI installed. A few months ago a post was published here with a tutorial on 3D scanning by photos, which you can follow to make your own test.

The objective of this post is to prove that the result is literally at your fingertips, even if your hardware is not the most sophisticated (sorry, the screenshots are only in Portuguese).


In this case, the model used was a GT-S6102B, with Android 2.3.6. It is a quite simple smartphone sold in Brazil.

All the photos were taken with the default illumination configuration.

Evidently, no scan was made with photos taken during the night.

Some of the scenes were in a sunny environment, some in the shade, and others inside houses.

The only change to the default configuration was to use the full 3.2 megapixel resolution of the camera, 2048x1536 pixels.

The result is shown in the video at the top of this post.

I hope you enjoyed it.

See you in the next post!






This work is licensed under a Creative Commons Attribution 4.0 International License.