Friday, 2 November 2012

Taung Project: 2D forensic facial reconstruction – a study for 3D modeling.




In the previous post we showed the process of modeling missing parts of the Taung child skull.
The face was drawn with Inkscape, a vector graphics editor, and GIMP, a raster image editor, both free software.



Before starting to work, it is important to study the faces of both primates and human beings, since Australopithecus africanus appears more ape-like than human; human anatomy, however, is far better documented.


Once the skull was completed, it was ready to be rendered.


In order to use an image in a 2D reconstruction process, we need to render it with an orthographic camera.
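For reference, the camera switch can also be scripted; here is a minimal sketch for Blender's Python console, assuming the scene's active camera frames the skull (the scale value and output path are illustrative):

```python
import bpy

cam = bpy.context.scene.camera.data   # data-block of the active camera
cam.type = 'ORTHO'                    # orthographic: no perspective distortion
cam.ortho_scale = 0.5                 # illustrative: adjust to frame the skull

# Render the view to a still image (path is illustrative)
bpy.context.scene.render.filepath = '/tmp/taung_ortho.png'
bpy.ops.render.render(write_still=True)
```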

Now it is time to open Inkscape and start the reconstruction, placing the eyeball first. Note that some reference images are placed inside the document; these will help during the drawing process.
Now the muscles are placed, starting with the back ones (the temporalis and masseter muscles).

We continue with the muscles at the front of the face (orbicularis oris, buccinator, depressor labii inferioris, depressor anguli oris, zygomaticus minor and zygomaticus major). At this step, it is very important to have a good knowledge of human anatomy, in order to make the muscles match the skull.

The last ones are the orbicularis oculi muscles. Note the reference image on the right side. You can read a good article with an anatomical description of a chimpanzee here.

With the muscles finished, it's time to draw the eyes.


Then we draw the nose and the expression lines.

The ears are placed using reference pictures of juvenile apes.
It's a good idea to hide some parts of the face, in order to see if the structure is OK.

With the face finished, it is now time to put on some hair. Since this is a quick test, the hair was kept fairly coarse, which means less work.


One last view of the structure.

And the vector drawing is finished.


The vector drawing is exported as an image and handed over to GIMP, in order to add some effects that resemble a painting.
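This kind of painterly effect can also be scripted in GIMP; below is a minimal Python-Fu sketch using the Oilify plug-in as a stand-in for the effects applied here (the file names and mask size are illustrative assumptions, not the exact filters used):

```python
# Run from GIMP's Python-Fu console (GIMP 2.x), where `pdb` is predefined
image = pdb.gimp_file_load('/tmp/taung_face.png', 'taung_face.png')
drawable = pdb.gimp_image_get_active_drawable(image)

# Oilify gives a rough painted look; the mask size sets the stroke scale
pdb.plug_in_oilify(image, drawable, 8, 0)

pdb.gimp_image_flatten(image)
pdb.file_png_save(image, pdb.gimp_image_get_active_drawable(image),
                  '/tmp/taung_face_painted.png', 'taung_face_painted.png',
                  0, 9, 1, 1, 1, 1, 1)
```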



And to finish all the work, the classic split image showing the skull on one side and the reconstructed face on the other.





You can download the vector file HERE.

The next step will be the 3D reconstruction of the face. I'll wait for you there.

A big hug!


P.S.: Thanks to FAR, who helped me with the English.

Thursday, 1 November 2012

Taung Project: Recovering the missing parts of the skull


As was published in the last articles, we are working on the Taung Project, which involves the reconstruction of a 2.5-million-year-old fossil; not just reconstructing the face with soft tissue, but restructuring the entire skull as well.

The most important thing in this project is the technology that will be used, because evidently, all the results will be shared with the community. And the ‘community’ means everyone.


This article will describe the techniques in recovering the missing parts of the Taung child skull.

It's important to state at this point that all members of the Arc-Team work hard in their professions, so there will be times when one of us publishes an article before another, whenever there is free time to share knowledge. Having said that, this article was written during someone's free time, in the hope that it might be useful to others who read this blog. Below you'll find the description of how the skull was scanned in 3D.

Describing the process



The skull was scanned in great detail by Luca Bezzi. The model was prepped for importing into Blender.


Unfortunately (or fortunately, for the nerds), a significant part of the skull was missing, as indicated by the purple lines. For a complete reconstruction, the missing parts needed to be recovered.

The first step was to recover the parts using a mirrored mesh in Blender 3D. You can see a time-lapse video of the process here.
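For the curious, the mirroring boils down to Blender's Mirror modifier; a minimal sketch from the Python side (the object name is an assumption, and the mesh origin must sit on the plane of symmetry):

```python
import bpy

skull = bpy.data.objects['taung_skull']   # hypothetical object name
mirror = skull.modifiers.new(name='Mirror', type='MIRROR')
mirror.use_x = True      # mirror across the sagittal (X) plane;
                         # newer Blender versions use mirror.use_axis
mirror.use_clip = True   # keep vertices from crossing the mirror plane
```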

This was enough to cover a large part of the missing area.

But even with the mirroring, a few parts were still missing.
How can this be solved?


An option was to use the CT scan of primates to reconstruct the missing parts at the mandible and other areas.

Obviously, the CT scans chosen were those of infant and juvenile primates.

You can find the tomographies at this link. They can be used for research purposes. To download the files, you'll have to create an account.

The mandible is from a juvenile chimpanzee (Pan troglodytes), viewed in InVesalius.

The CT-scan reconstruction was imported (.ply) into Blender.

And placed on the skull.
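The import and the rough placement can be done through Python as well; a minimal sketch (the file path and transform values are illustrative):

```python
import bpy

# Import the mesh reconstructed from the CT scan
bpy.ops.import_mesh.ply(filepath='/tmp/chimp_mandible.ply')

mandible = bpy.context.selected_objects[0]   # the freshly imported object
mandible.location = (0.0, -0.02, -0.05)      # illustrative offsets under the skull
mandible.rotation_euler = (0.0, 0.0, 0.0)
```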


But, besides being oversized, the Australopithecus didn't have such big canines.

Using the Blender sculpting tools, it was possible to deform the teeth to make them appear less “carnivorous”…


…and make them compatible with the Taung skull.

To complete the top, the cranium of an infant chimpanzee (Pan troglodytes) was chosen.

Following the same process as before, the reconstructed mesh was imported into Blender…


 …and made compatible with the Taung cranium.

The overlapping portion of the cranium was deleted.

The same was done with the mandible.

The skull was completed, but with a crudely formatted mesh because of the process of combining different meshes.

The resulting mesh was very dense, as you can see in the orange wireframe part.

Why didn’t we use the Decimate tool? Because the computer (a Core i5) often crashes when it is used.

Why didn’t we make a manual reconstruction of the mesh? To avoid a subjective reconstruction.

How was this solved?

A fake tomography needed to be done to reconstruct a clean mesh in InVesalius. How? We know that when you illuminate an object, the surface reflects the light, but inside it's totally dark because of the absence of light.

So, since Blender lets the user choose where the camera view starts (the clipping distance), you can set up the camera to "cut" through space and see inside the objects.
The background has to be colored white, so that only the dark interior of the skull appears.

To invert the colors (because the bones have to be white in the CT scan), you can use Blender nodes…

…and render an animated image sequence (frame 1 to 120) of 120 slices.
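The whole "fake tomography" render can be scripted; here is a minimal sketch of the idea for the Blender Internal renderer of the time (the depth range and output path are illustrative, and the color inversion is left to the node setup described above):

```python
import bpy

scene = bpy.context.scene
cam = scene.camera.data

# White background, so the hollow interior reads as a black silhouette
scene.world.horizon_color = (1.0, 1.0, 1.0)   # Blender Internal world color

# Animate the near clipping plane so each frame "cuts" one slice deeper
cam.clip_start = 0.01                          # illustrative start depth
cam.keyframe_insert(data_path='clip_start', frame=1)
cam.clip_start = 0.25                          # illustrative end depth
cam.keyframe_insert(data_path='clip_start', frame=120)

# Render frames 1..120 as an image sequence (one image per slice)
scene.frame_start, scene.frame_end = 1, 120
scene.render.filepath = '/tmp/slices/slice_'
bpy.ops.render.render(animation=True)
```

For evenly spaced slices, the two keyframes should be set to linear interpolation in the F-curve editor.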


Using the Python script IMG2DCM, the image sequence was converted into a DICOM sequence, which was imported into InVesalius and reconstructed as a 3D mesh.

With IMG2DCM it is possible to establish the distances between the DICOM slices manually, but in this case the conversion was made with the default values (which is why the model comes out flattened), and the mesh will simply be rescaled later on.
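IMG2DCM takes care of this conversion; just to illustrate the idea, here is a rough sketch of one slice's conversion using today's pydicom and Pillow libraries (this is not the author's script, and the tags and spacing values are illustrative):

```python
import numpy as np
from PIL import Image
from pydicom.dataset import Dataset, FileDataset
from pydicom.uid import ImplicitVRLittleEndian, generate_uid

series_uid = generate_uid()   # shared by all slices, so they form one series

def slice_to_dicom(png_path, dcm_path, index, spacing=1.0):
    arr = np.array(Image.open(png_path).convert('L'))   # 8-bit grayscale slice

    meta = Dataset()
    meta.MediaStorageSOPClassUID = '1.2.840.10008.5.1.4.1.1.7'  # Secondary Capture
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ImplicitVRLittleEndian

    ds = FileDataset(dcm_path, {}, file_meta=meta, preamble=b'\x00' * 128)
    ds.SOPClassUID = meta.MediaStorageSOPClassUID
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.SeriesInstanceUID = series_uid
    ds.Modality = 'OT'
    ds.InstanceNumber = index
    ds.ImagePositionPatient = [0.0, 0.0, index * spacing]  # slice distance
    ds.PixelSpacing = [1.0, 1.0]    # default spacing -> flattened model
    ds.Rows, ds.Columns = arr.shape
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = 'MONOCHROME2'   # bright pixels = "bone"
    ds.BitsAllocated = ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = arr.tobytes()
    ds.save_as(dcm_path)
```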





The reconstructed mesh is then imported and rescaled to match the original model.
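The rescaling can be done by eye, or more directly by copying the bounding-box dimensions; a small sketch (the object names are assumptions):

```python
import bpy

clean = bpy.data.objects['skull_invesalius']   # hypothetical: mesh from InVesalius
original = bpy.data.objects['taung_skull']     # hypothetical: original model
clean.dimensions = original.dimensions         # match the bounding-box size
```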


The result is a clean mesh that can be modified with the Remesh modifier to obtain an object made of 4-sided faces (quads).
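That step is a single modifier; a minimal sketch (the octree depth is an assumption to tune against the model's detail):

```python
import bpy

obj = bpy.data.objects['skull_invesalius']   # hypothetical object name
remesh = obj.modifiers.new(name='Remesh', type='REMESH')
remesh.mode = 'SMOOTH'     # rebuild the surface entirely out of quads
remesh.octree_depth = 8    # illustrative: higher values preserve more detail
# Apply the modifier (from the UI or bpy.ops.object.modifier_apply)
# to make the quad mesh permanent before sculpting.
```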

 Now, we only needed to use the sculpt tool for "sanding" the mesh.


 

To create the texture, the original mesh was used. A description of the technique can be viewed here.

When the mapping was finished, the rendering was done, and this step of the project was completed.

You can download the Collada file (3D) here.

I hope this article was useful and/or interesting for you. The next step is a preliminary 2D reconstruction, as training for making the final 3D model.

See you there…a big hug!


Kinect 3D limits: documentation of small objects

As Moreno Tiziani wrote in his post, last Monday (October 22) I was in Padua to start the "Taung Project". The first step of this research was indeed the 3D documentation of the cast of the Taung Child, preserved in the Museum of Anthropology of Padua University.
To digitally register our subject we chose SfM/IBM techniques (using ArcheOS and PPT) because, as I reported in this post, the methodology is accurate enough to document small objects. Nevertheless, I also brought our hacked Kinect to Padua, to show Moreno how this system works in 3D recording operations.

Red circle: Kinect. Green circle: Taung Child's cast. Blue circle: RGBDemo compiling on ArcheOS

As we thought, the cast was too small to be documented with the Kinect. The reason is clear: when the Kinect is too close, it simply does not "see" the subject to record, while when the device is too far away, it registers too few 3D points, so the final mesh is not accurate enough.
Unfortunately, I did not capture a screenshot of our test, but I think the images below illustrate the concept: in the first picture my hand is too close to the sensor and it appears completely black, while in the second picture the Kinect can see my hand, which appears pink, but the resolution is too low.

The sensor is too close to the subject

The distance between the sensor and the subject is adequate, but the resolution is too low
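The near/far window shows up directly in the raw depth data; a minimal sketch using the libfreenect Python bindings (the thresholds are illustrative raw values, not calibrated distances):

```python
import numpy as np
import freenect   # Python wrapper of the open libfreenect Kinect driver

depth, _ = freenect.sync_get_depth()   # 640x480 array of raw 11-bit values

# 2047 means "no reading": the subject is too close or unreadable,
# exactly like the black hand in the first picture above
invalid = depth == 2047

# Keep only readings inside a usable window; outside it, too few points
# survive for an accurate mesh (thresholds are illustrative)
usable = ~invalid & (depth > 400) & (depth < 900)
print('usable pixels: %.1f%%' % (100.0 * usable.mean()))
```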

However, we did use the Kinect to document something in the Museum of Anthropology: a wooden Egyptian sarcophagus.
As you can see in the short movie below, we registered just one side of the object, for the same reason I explained before: when the Kinect is too close to the subject it does not work properly. In this case the sarcophagus was positioned too close to the wall (almost 50 cm away) and to a glass showcase (almost 20 cm away). It would have been possible to scan all three visible faces and join them together in post-processing with MeshLab, but this was just an experiment, so we concentrated on the Taung cast.



However, the movie also shows another interesting characteristic of the Kinect: being an infrared-based device, it is not able to see through glass, which gets registered like a normal opaque object.

I hope it was useful, have a nice day!


This work is licensed under a Creative Commons Attribution 4.0 International License.