Sunday 27 July 2014

July 27, 2014: just another birthday for ATOR

... and here we are: the third birthday of ATOR! 
To respect the "tradition" (1st anniversary, 2nd anniversary), today I'll publish some data about our "open research" blog.
This year, unfortunately, there are no new entries among the active authors (or AuThORs, as someone already says), but the number of posts has (obviously) increased from 160 to 215, leading to 767 comments from the community. Currently (22:31 in Italy) we have had 253116 page views (41708 since the last reset of the revolvermaps counter... this time it was my fault). Our members are, up to now, 85, which means we have 24 new regular readers.

In my opinion, our little experiment in "sharing tests, problems and results" of our research is working, as shown by some events that have occurred over the past three years:

  1. through ATOR, Arc-Team's research in archaeology increased in terms of development and results
  2. ATOR's posts were also useful for other disciplines and sciences (soon more news about this topic)
  3. older projects (e.g. ArcheOS) took advantage of ATOR's visibility
  4. new projects (e.g. the Taung project) and derivative subprojects (e.g. the exhibition "Facce") started also thanks to ATOR
  5. we have improved our English :)

As I wrote last year: "This short post is intended as a thanks for all the people composing the community of ATOR, readers and authors as well", but this time I want to mention the authors (in alphabetical order), without whom our blog could not exist:
Bernhard Fischer 

Thank you all!

Sunday 20 July 2014

Lattice deform 3D: Modern man + chimp = H. rhodesiensis

This post is meant to showcase the use of Blender's Lattice Modifier on the facial reconstruction of hominid ancestors.

As we do not have soft tissue tables for them, we had to use the skulls of a modern human and a chimpanzee and deform them together with the soft tissue, although the latter stays in another layer.

Final image, with details made in Sculpt Mode

The goal was to create a model resembling a computer tomography: it is meant to be hairless and without a defined color, promoting the study of form.

Deforming a chimp skull until it is converted into almost a human skull
Initially we were going to deform the head of a modern man, but instead we took advantage of the situation and tried a new approach: a few days earlier I had deformed a chimpanzee's skull using the skull of a man as a reference. We expected the result to be an individual very different from us humans, but what we saw was amazing: it looked like an average human being, or at least a caricature of one.

The man, the chimp and the result of the final deformation.
In view of this result, I figured it would be a good idea to deform the head of a modern man and that of a chimpanzee, and in the end merge the two according to the anatomical conformation of the fossil.
At left the model made by MM Gerasimov

In the book The Face Finder, the Russian master of forensic facial reconstruction MM Gerasimov had already noted that the structure of Homo rhodesiensis had characteristics of both modern humans and apes.

In the end I joined the two models in a single deformed mesh and made minor adjustments. The result of our study was fairly consistent with that reached by MM Gerasimov.

Thanks to the Primate Research Institute, Kyoto University (KUPRI) for the CT scan of the chimp. Thanks to the OsiriX developers for the DICOM file of a modern human. Thanks to Dr. Moacir Elias Santos for the 3D scanned skull of a H. rhodesiensis. Thanks to Claudio Marques Sampaio for the help with the English translation.

Homo rhodesiensis after retopology of the mesh with BSurfaces in Blender, allowing the application of facial expressions.

Thursday 10 July 2014

ArchaeoSection 0.1.1 (new release)

A new release of ArchaeoSection (with some important bug fixes) is now available.

ArchaeoSection is a simple tool for translating and rotating points measured on a section line, in order to make section drawing easier.

More details and downloadable files are available here:

Wednesday 9 July 2014

Faces of Evolution - validating the methodology for facial reconstruction of hominids

Face of a Homo erectus pekinensis reconstructed from the deformation of the reconstructed CT scan of a modern man
In facial reconstruction, the most reliable information is that obtained from soft tissue thickness tables. These are compiled by measuring, at specific points spread over the head, the distance from the outer surface of the skin, through muscles, fat and other soft tissues, down to the bone; the number of landmarks usually varies between 21 and 33, depending on the protocol used. These thicknesses can be obtained from people who have died recently, or even from living individuals, using ultrasound or CT scans.

And what use do these measures have in facial reconstruction? It's very simple: they work as a reference for the artist or scientist, who uses these points to perform a kind of "reverse engineering", because from the bones of the skull one can estimate how much soft tissue there was at these specific points, and approximate the volumetric shape of that individual based on a statistical method.
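As a toy sketch of that "reverse engineering" step (the landmark name, coordinates, normal and tissue depth below are all made up for illustration), each tabulated thickness simply pushes a point on the bone outward along the surface normal to estimate where the skin surface was:

```python
import math

def skin_point(bone_point, outward_normal, thickness_mm):
    """Estimate the skin surface at a landmark: offset the point on the
    bone along the (normalized) outward surface normal by the tabulated
    soft tissue depth."""
    nx, ny, nz = outward_normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    ux, uy, uz = nx / length, ny / length, nz / length   # unit normal
    x, y, z = bone_point
    return (x + thickness_mm * ux,
            y + thickness_mm * uy,
            z + thickness_mm * uz)

# Hypothetical glabella landmark with 5.5 mm of tissue straight ahead (+y):
glabella_skin = skin_point((0.0, 92.0, 40.0), (0.0, 1.0, 0.0), 5.5)
# glabella_skin is (0.0, 97.5, 40.0)
```

Repeating this over all 21-33 landmarks gives the cloud of skin points that guides the reconstruction.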

Stages of the adaptation of skull and skin of a modern man over the skull of a H. erectus pekinensis.
The problem arises when dealing with cases without soft tissue tables, such as our hominid ancestors. How can this research be done for such beings, which have been extinct for thousands or even hundreds of thousands of years?

To overcome this problem I thought of a solution that is conceptually simple but demands a certain skill to be applied. In the case of hominids that look more like modern man, such as neanderthalensis, pekinensis and rhodesiensis, we can use the scans of modern humans, filter out the skin and the skull, and then deform them until the man's skull suits the hominid skull. Of course, the skin is put in another layer so that it does not visually interfere with the work; at the same time, this allows the focus to be restricted to the skull, which is the only sound piece left of that animal.

Paranthropus boisei, a hominid reconstructed from the deformation of the skull and skin of a Pan troglodytes (chimpanzee).
For other hominids, such as Paranthropus boisei, Homo habilis, Australopithecus afarensis and the like, we apply the same procedure but using a reconstructed CT scan of a Pan troglodytes as the object for deformation.

So far so good; it seemed an ingenious way out... but would it be valid? Would that deformation be compatible with the volumetrics of the hominid in question?

To answer these questions I leaned on archaeologist Luca Bezzi's rationale, put forward during a meeting in Italy in which we participated, on the occasion of the preparations for FACCE. I molti volti della storia umana (FACES. The many faces of human history). Bezzi proposed a simple and interesting experiment: if the method was valid, theoretically it could convert a chimpanzee into a gorilla (Gorilla gorilla) and vice versa. I found that assumption fantastic and decided to perform it as soon as I returned to Brazil.

To get a gorilla, I resorted to a database of CT scans from KUPRI, the Primate Research Institute of Kyoto University, in Japan (PRICT. 296). Despite having an "open mouth", the model I found seemed good, because the head was complete and it was an adult, like the chimpanzee used as the source object of the deformation.

Although they look like the same creature at first glance, there are many structural differences between a chimpanzee and a gorilla. Having adapted the skull of the first, using the second as a reference, I was quite anxious to see the final result. Even with a seemingly intelligent and well-thought-out solution, carrying out a test with such rigor and the need to reach a pre-determined outcome, I confess that I feared falling into the arms of failure.

As I finished the adjustments and turned on the layer containing the already deformed chimpanzee skin, I realized that the method had achieved a high degree of compatibility. I still have to test the deformation of a gorilla... but I will leave that for another occasion, when free time allows me to do it. For now I will take some time and just enjoy the delight that this experience has given me.

OBS.: I have to thank Dr. Paulo Miamoto, who made this translation from the original post in Portuguese. Dr. Miamoto holds a Ph.D. in Dentistry and is coordinator of EBRAFOL - the Brazilian Team of Forensic Anthropology and Legal Dentistry.

A big hug!

Saturday 5 July 2014

Geometric Classification Method in QGIS

Graduated symbology for vector layers is largely used in archaeology, e.g. for classifying excavation grids, survey squares, site points, etc. with different colors, sizes or symbols.
Graduated symbology is built by the "classification" of a numeric variable: for example, if we have an excavation grid of 60 squares, each recording the weight of the finds it contains, classifying the vector layer consists of defining the number of weight classes (e.g. 5), the intervals in which the data are grouped (0-10 g, 11-20 g, 21-30 g, etc.) and the colors or symbols representing each class.
This classification may differ depending on the mathematical or statistical method applied. Different classification methods yield different results, as you can see in this picture:
In order to choose the proper classification method, it is necessary to know the shape of the frequency distribution of the variable we want to classify: for uniform distributions the "Equal Interval" or "Pretty Breaks" methods are good; for normal distributions the "Standard Deviation" or "Quantile" methods are better; for bi- or polymodal distributions the "Natural Breaks (Jenks)" method is the best choice.
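As a small illustration of how much the choice of method matters (the weights and class count below are invented), compare "Equal Interval" breaks with "Quantile" breaks on a skewed sample: the first spreads the class bounds evenly over the range, the second packs them where the observations actually are.

```python
def equal_interval_breaks(values, k):
    """k classes of identical width between the minimum and the maximum."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / k
    return [lo + step * i for i in range(k + 1)]

def quantile_breaks(values, k):
    """k classes each holding (roughly) the same number of observations."""
    s = sorted(values)
    n = len(s)
    return [s[round(i * (n - 1) / k)] for i in range(k + 1)]

weights = [1, 1, 2, 2, 3, 3, 4, 5, 10, 100]   # skewed find weights (g)
ei = equal_interval_breaks(weights, 5)   # classes 19.8 g wide, mostly empty
qu = quantile_breaks(weights, 5)         # [1, 2, 3, 3, 5, 100]
```

On this sample the equal-interval classes above 20 g are almost empty, while the quantile bounds crowd the low values; neither shape fits a skewed distribution well, which is where geometric intervals come in.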

In archaeology, exponential (positively skewed) distributions are frequent. Here you can see an example of an exponential distribution of data:

When we deal with an exponential distribution of our data, the proper classification method is "Geometric Intervals" (Dent B. D. 1999, Cartography. Thematic Map Design. Fifth Edition, London, pp. 146; 406. Conolly J., Lake M. 2006, Geographical Information Systems in Archaeology, Cambridge, pp. 141-­145).
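A minimal sketch of that geometric progression (this is just the formula from Dent 1999, not the actual script described below; the sample weights are invented): the class bounds share a constant ratio r = (max/min)^(1/k), so classes widen as values grow, matching a positively skewed distribution.

```python
def geometric_breaks(values, num_classes):
    """Class bounds forming a geometric series between min and max:
    each bound is the previous one multiplied by r = (max/min)**(1/k)."""
    vmin, vmax = min(values), max(values)
    if vmin <= 0:
        raise ValueError("geometric intervals need strictly positive values")
    r = (vmax / vmin) ** (1.0 / num_classes)
    return [vmin * r ** i for i in range(num_classes + 1)]

weights = [1, 2, 2, 3, 5, 8, 13, 40, 90, 250]   # hypothetical find weights (g)
breaks = geometric_breaks(weights, 5)
# consecutive bounds keep the same ratio: breaks[i + 1] / breaks[i] == r
```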

As far as I know, FLOSS GIS applications (QGIS, GRASS, gvSIG, OpenJUMP, SAGA, etc.) don't include this classification method. I tried to develop it for QGIS using the opportunities provided by its Python Console.

I developed a simple Python script for geometric classification, using the formula suggested by Dent 1999, p. 146, and with the help of the web community, in particular the blog of Carson Farmer ( and a mailing list reply of Kelly Thomas (how-to-apply-a-graduated-renderer-in-

You can find more details and download the script from the programming section of the site:

Here I would like to post an example of usage step by step:

1. Open the script in a text editor and modify your variables (the field of numeric data to classify and the number of classes). Save the modified file.

2. In QGIS load your shapefile, select it and open the Python Console from the Plugins menu. In the Console type:

###For GNU-Linux:

###FOR Win:

Press ENTER (1) and then the UPDATE button in the QGIS toolbar (2). Open the vector style manager (double click on your vector layer) to view the legend and change the color ramp (3).

3. The resulting map should look like this:

Remember that my script is under development and in need of improvement and testing: use it without warranties! Tests and suggestions are welcome.

That's all. 

Denis Francisci

Tuesday 1 July 2014

MakeHuman – Tests with facial reconstruction and human evolution

Face of a reconstruction turning into an Australopithecus afarensis. Model originally created in MakeHuman and later deformed in Blender with shape keys. Animated GIF and effects generated with ImageMagick.

I remember that in late 2011 I bought a classic book called Forensic Art and Illustration. After studying the process of facial reconstruction I decided to apply it to a skull acquired from a CT scan and, to my happiness, it was all successful.

The year 2012 was prolific in reconstructing faces, and by 2013 I had developed a methodology for facial reconstruction based on pre-structured models, in partnership with Dr. Paulo Miamoto. We aimed at a more rapid completion of the work and, more than that, we also thought about giving beginners the ability to make their own reconstructions without the need to study for years to master sculpting a 3D face digitally.

Face adapted from a photo in the background view. The goal was only to define the volumetric shape, without setting beard, hair and accessories. The terracotta face on the left is the standard MakeHuman model.
Even with the development of a pre-configured template, the task of revealing faces from skulls was still difficult for those who were starting out, and it forced us to look for more reasonable alternatives. That's when MakeHuman appeared.

First of all, it must be explained how our "assembly line" of faces works. Initially I get a skull, usually without much information. When I say "get a skull", in fact it is either a CT or a laser scan. Then I send the material to Dr. Miamoto, who observes it and estimates the sex, age and ancestry of the individual, based on knowledge from forensic protocols.

Face adaptation in MakeHuman, using a background image as a reference.
For many months, Dr. Miamoto and I held hours and hours of virtual meetings, so-called "hangouts", in which we exchanged information. He handed me knowledge inherent to Forensic Dentistry and Anthropology and I taught him how to use open 3D computer graphics software. Gradually he learned the processes, and started handing me the skull aligned and with the soft tissue markers placed (those little pegs on the surface of the skull that are estimates of the thickness of the skin, fat and muscle in the region).

Just the fact that he handed me this stage of the work already done quickened the workflow considerably. Our challenge became another step... the basic modeling of the face. Even with all the knowledge gained in computer graphics, Dr. Miamoto found himself struggling to handle the template we had developed, because even though it was preset, handling it demanded some ability of the user, which could only be gained by many months of dedication.

Luck turned to our side the moment I received the news of the new version of the free software MakeHuman. Not that I had not used it before; in fact I knew it fairly well, but I didn't find it very suitable in previous versions, due to some incompatibilities when importing human models into Blender. However, as I tested the scripts provided by the developers I had a pleasant surprise: I realized that everything worked perfectly.
For those unfamiliar with it, long story short: MakeHuman is a "human factory" tool. When we open the software we are presented with a standard model which may be adapted to any gender, age or ancestry just by manipulating a series of attributes intuitively organized in its interface. Everything is visual and in real time.

In theory we can create any type of human being, and the most interesting is that given its ease of use, anyone can operate it in a few minutes.

The idea that came to mind was that, since Dr. Miamoto had mastered the first part of the process, he could also set up the model based on the anthropological profile assessed from the skull, and send me the file so that I could adapt it to the shape defined by the skull itself, the markers, the nasal tissue and other projections.
The first test I did was with a skull that I had reconstructed two years ago. I sent the skull, he observed it and assessed sex, age and ancestry. Then he set up a model in MakeHuman based on the data collected and clues provided by the skull. Once finished he sent it to me via email.

Back in 2012, when I modeled it from scratch, that face took about 8 hours to be done. By adapting the face according to the data received from Dr. Miamoto, it did not take me more than 20 minutes to finish the volumetric face! I was amazed when I imported the mesh that I had made in 2012 and verified that the two were very compatible with each other.
The test was successful and, more interestingly, it was not only the head but a whole, already articulated body, ready to be animated if necessary!

Skull aligned and profile sketched by Dr. Miamoto
Above, an example of soft tissue pegs being positioned over a skull and then the outline of the profile, made by Dr. Miamoto.

Adaptation process of the 3D face set in MakeHuman and then imported into Blender, done by Dr. Miamoto.
After setting up the face of the individual based on the anthropological assessment, he imported the model from MakeHuman into Blender and gradually adapted the facial tissue according to the projections of the profile.
Final stage of the adaptation of the face, using the profile sketch as a reference.
Note that Blender and the other programs are running under Linux. According to him, this made the job much easier and more practical.
Fourth class of the graduate course in Forensic Dentistry at the Faculty of Dentistry of Ribeirão Preto (FORP-USP). Classes on forensic facial reconstruction with open software.
Dr. Miamoto explaining how photography-based 3D scanning works.
Below are some words of Dr. Miamoto himself about MakeHuman and the proposed new methodology for digital facial reconstruction addressed in this post:

"In Brazil, the postgraduate courses in Forensic Dentistry increasingly include in their curricula content on forensic facial reconstruction. Generally, students are initiated in the manual technique and soon feel motivated to deepen their knowledge. There is an almost instinctive curiosity about the digital techniques, often with romanticized views (influenced by TV series), in which it is believed that there is a unique piece of software that does all the work and everything is automatic. There are also intimidated views, as if the digital world were an extremely hostile terrain for "outsiders" such as dentists. The partnership that Cicero and I are working on, always with the broad support of many partners in Brazil and abroad, helps make teaching the digital techniques to beginners a reality. The protocol we developed is constantly improving, and undoubtedly MakeHuman constitutes a keystone, taking the weight of digitally modeling a coherent face from a few polygons off the backs of beginners (and also the experienced). In other words, although artistic skills are and always will be desirable, it is possible to start working in a 3D environment even if one is not an extremely skilled forensic artist. In addition, MakeHuman's intuitive interface, combined with its excellent software development, also allows for a real demonstration of the key features of each ancestry, the differences between sizes and shapes of faces and cranial vaults, the development and aging of the human body, as well as other nuances of human variation that are very useful for the teaching of Forensic Dentistry and Forensic Anthropology. By mastering some functions and basic concepts of Blender, the avatar that is imported from MakeHuman can be adapted to the skull with relative ease, while maintaining the characteristics of sex, age and ancestry, providing the student with the crystallization of a technique regarded as very complex or difficult, namely, computerized forensic facial reconstruction. 
The satisfaction and pride that a student feels when actually modeling a face on the computer is a spark that we believe may trigger changes in the panorama of Brazilian forensic science, which certainly has, in the motivation and training of its own human resources, one of the cornerstones of its growth and consolidation. Thanks to the work of the open source software community, new solutions to old problems can be glimpsed, with the advantage of wide accessibility to the tools as well as their low or even nonexistent cost. Long live open software applied to the Forensic Sciences!”

Paulo Eduardo Miamoto Dias, DDS, MDS, PhD
CROSP 91.834

To make the task even more interesting, and to prove once and for all that the job gets easier, I decided to test the deformation of the model we reconstructed with MakeHuman into a hominid.

I used an Australopithecus afarensis reconstruction, performed a few months ago for shows in Brazil and Italy, as a reference.

Using Blender's shape keys, which allow one to deform an object while preserving the information of the original object, it is possible to track the "morph" of the face.
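Conceptually, a shape key stores a second position for every vertex, and the key's value blends linearly between the basis mesh and that target. A stand-alone sketch of the blend, outside Blender and with made-up vertex coordinates:

```python
def shape_key_blend(basis, target, value):
    """Linear shape-key morph: each vertex moves from its basis position
    toward the target position as value goes from 0.0 to 1.0."""
    return [tuple(b + value * (t - b) for b, t in zip(vb, vt))
            for vb, vt in zip(basis, target)]

basis  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # original (human) vertices
target = [(0.0, 0.5, 0.0), (1.0, 0.0, 0.5)]   # deformed (hominid) vertices

halfway = shape_key_blend(basis, target, 0.5)   # the "morph" at 50%
# halfway[0] is (0.0, 0.25, 0.0)
```

Animating the key's value from 0 to 1 over the whole mesh is what produces the face-to-hominid transition shown in the GIF.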

Technically we could convert the model into many other mammals, like a saber-toothed tiger, a mouse, etc., not only other hominids.

If we can make such a different adaptation like this, why would the mild conversion of a human face into another be a problem?

Most interestingly, since we are dealing with open software and shareable content, there is a great possibility of establishing a partnership with MakeHuman's development team to collaborate on this very interesting project, making accessible to the general public an art seen as extremely complex: facial modeling.

Breathing the atmosphere of the World Cup, I would say that the kickoff has been given, that contacts have been made and that now everything depends on a little time and effort for things to start happening.

See you next post!

Obs.: Thanks to Dr. Miamoto for the translation into English from the original post in Portuguese.
This work is licensed under a Creative Commons Attribution 4.0 International License.