Thursday 20 June 2013

Kinect - Infrared prospections

Despite what I wrote at the end of this post, it looks like Kinect is not really the best option for archaeological underground documentation, or for any other situation in which it is necessary to work in darkness.
I had already tested the hardware and the software (RGBDemo) at home, simulating the light conditions of an underground environment, and the result was that Kinect managed to scan only some parts of an object (a small table) in 3D, and with great difficulty.
My hope was that Kinect's infrared sensors would be enough to record the geometry of objects even in darkness, and this actually turned out to be the case. The problem is that RGBDemo probably also needs the RGB values (from the normal camera) to work properly. Without color information the final 3D model is obviously black (as you can see below), but, and this is the real difficulty, the software seems to lose a fundamental parameter for tracking the object being documented, so the operation becomes too slow and, in most cases, it is not possible to complete the recording of a whole scene. In other words, the documentation process often stops, after which it is necessary to start again, or simply to save several partial scans of the scene to be reassembled later.
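That the depth acquisition itself does not depend on visible light can be checked outside RGBDemo, for example with the Python bindings of libfreenect (one of the drivers RGBDemo can use). Here is a minimal sketch, assuming the freenect module and NumPy are installed: it grabs one depth frame and one RGB frame, and in darkness the RGB frame comes out nearly black while the depth map still contains valid measurements.

```python
# Check that Kinect's structured-light depth works without visible light,
# using the libfreenect Python bindings ("freenect") and NumPy.
import freenect
import numpy as np

depth, _ = freenect.sync_get_depth()   # raw 11-bit depth map (480x640)
video, _ = freenect.sync_get_video()   # 8-bit RGB frame from the color camera

# In darkness the RGB frame is almost uniformly black...
print("mean RGB intensity:", video.mean())

# ...while the depth map still holds valid readings
# (2047 is libfreenect's "no measurement" value).
valid = depth[depth < 2047]
print("valid depth pixels:", valid.size, "of", depth.size)
```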
However, before discarding Kinect as an option for 3D documentation in darkness, I wanted to do one more experiment in a real archaeological excavation and, some weeks ago, I found the right test area: an ancient family tomb inside a medieval church.
As you can see in the video below, the structure was partially damaged, with a small hole on the north side. This hole was big enough to insert Kinect into the tomb, so that I could try to get a fast 3D overview of the inside, also to understand its real extent (which could not be determined from the outside).




As I expected, it was problematic to record the 3D characteristics of such a dark room, but I got all the information I needed to estimate the real perimeter. I guess that on this occasion RGBDemo worked better because of the ray of light that entered the underground structure and illuminated a small spot on the ground, giving the software a good reference point from which to track all the surrounding areas.
Since the poor quality of the video makes it difficult to evaluate the (low) resolution of the 3D reconstruction, you can get a better idea by looking at this other short clip, where the final point cloud is loaded in MeshLab.
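For anyone who wants to try the MeshLab step, a raw Kinect depth frame can be turned into a point cloud with an ordinary pinhole back-projection. The sketch below is only an approximation of what RGBDemo exports, not its actual code: the intrinsics and the raw-to-meters formula are average calibration values circulated in the OpenKinect community, and the output file name is just an example.

```python
# Back-project one Kinect depth frame into a point cloud and save it as an
# ASCII PLY file that MeshLab can open. Intrinsics and the raw-depth-to-meters
# formula are approximate community calibrations, not per-device values.
import freenect
import numpy as np

FX, FY = 594.21, 591.04    # approximate focal lengths in pixels
CX, CY = 339.5, 242.7      # approximate principal point

depth, _ = freenect.sync_get_depth()                 # 480x640, raw 11-bit
z = 1.0 / (depth * -0.0030711016 + 3.3309495161)     # meters (approximate fit)

u, v = np.meshgrid(np.arange(640), np.arange(480))   # pixel coordinates
mask = (depth < 2047) & (z > 0)                      # drop invalid readings
x = (u - CX) * z / FX
y = (v - CY) * z / FY
points = np.column_stack((x[mask], y[mask], z[mask]))

with open("tomb_scan.ply", "w") as f:                # hypothetical file name
    f.write("ply\nformat ascii 1.0\n"
            f"element vertex {len(points)}\n"
            "property float x\nproperty float y\nproperty float z\n"
            "end_header\n")
    np.savetxt(f, points, fmt="%.4f")
```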



This new test of Kinect in a real archaeological excavation seems to confirm that this technology is not (yet?) ready for documentation in the complete absence of light. However, the most remarkable result of the experiment was the use of one of RGBDemo's tools, which shows the infrared input directly on the monitor. This option turned out to be a good prospection instrument to explore and monitor the inside of the burial structure without more invasive methodologies. As you can see in the screenshot, it is possible to check the condition of the tomb's interior and to recognize some of the objects lying on the ground (e.g. wooden planks or human bones), but of course the same could have been done simply with a normal endoscope and some LED lights (as we did on this occasion).

RGBDemo infrared view
However, the two screenshots below compare what Kinect's normal RGB sensor is able to "see" in darkness with what its infrared sensor can do.
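A live view similar to RGBDemo's IR window can also be reproduced in a few lines of code. This is a minimal sketch, assuming the freenect Python bindings and OpenCV are installed; the image is lit only by Kinect's own IR projector, so it works in complete darkness.

```python
# Minimal live infrared viewer, similar in spirit to RGBDemo's IR view.
import cv2
import freenect

while True:
    # 8-bit IR frame from the depth camera's sensor, lit by the IR projector
    ir, _ = freenect.sync_get_video(format=freenect.VIDEO_IR_8BIT)
    cv2.imshow("Kinect IR", ir)
    if cv2.waitKey(10) == 27:    # press Esc to quit
        break
```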

PS
This experiment was possible thanks to the support of Gianluca Fondriest, who helped me in every single step of the workflow.

9 comments:

  1. I'm no expert, but apparently Kinect uses independent channels to collect the 3D and the color information from a scene. First, Kinect uses a "structured light" approach to get the 3D info: an embedded IR projector and an IR sensor collect the depth data. The camera is not involved; it is only used afterwards to collect the RGB color information from the scene.

    http://users.dickinson.edu/~jmac/selected-talks/kinect.pdf

    If you operate in total darkness, you can only obtain the infrared information emitted by the Kinect IR projector. No visible light means no RGB color information, only pseudo-color (see the images at http://en.wikipedia.org/wiki/Kinect). But you still get an accurate 3D model, because it is obtained independently with the laser/sensor pair.

    If you want some kind of color, you need to install a visible light source in the scene (normal lamps).

    The Kinect camera is also IR sensitive. Theoretically, in total darkness, it is possible to improve the recording capabilities of an IR-sensitive camera sensor by increasing the amount of IR light in the scene (i.e. using an IR LED (light-emitting diode) flashlight, or an IR LED ring around the camera). This would be good for obtaining a better pseudo-color map, but would not improve the quality of the 3D metric data.
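    To illustrate the point about pseudo-color: images like the ones on the Wikipedia page can be generated from the depth stream alone, without any visible light. A minimal sketch, again assuming the freenect bindings plus OpenCV and NumPy (the choice of color map is arbitrary):

    ```python
    # Pseudo-color rendering of the depth map alone: the structured-light
    # 3D data needs no visible light at all.
    import cv2
    import freenect
    import numpy as np

    depth, _ = freenect.sync_get_depth()          # raw 11-bit values, 0..2047
    norm = np.clip(depth, 0, 2047) / 2047.0       # scale to 0..1
    img = cv2.applyColorMap(
        (255 - norm * 255).astype(np.uint8),      # near = warm, far = cool
        cv2.COLORMAP_JET)
    cv2.imwrite("depth_pseudocolor.png", img)     # hypothetical file name
    ```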

    1. Hi Riccardo,
      I am just testing Kinect in my free time, also because with this tool we cannot reach the quality of an SfM/IBM documentation. What I noticed in this experiment is that in darkness Kinect with RGBDemo often "loses" the target (more often than during a documentation with artificial light). It is just an empirical observation... my only clue is that this may also be connected with the software (not only with the hardware), and that the software may use the RGB values to keep tracking the scene, but it is just my guess. Of course Kinect also works in darkness for 3D (because of the infrared sensors), but the process is too slow for effective use on a working site (at least for now).
      I was thinking of adding an LED ring to Kinect, but to do so I have to connect the LEDs to the lead battery and make some modifications to the power system (to keep the tool portable). It is a small hack, but until now I have not found the time to do it. When I do, I will write a post about it.
      However, Rupert's work (http://arc-team-open-research.blogspot.co.at/2013/03/sfm-for-underground-documentation.html) with SfM techniques in darkness reached satisfactory results (also because it is always possible to play with the camera's ISO and exposure time).
      We will go on testing both technologies. Thank you very much for the feedback (I answered here on the blog as well, to keep the discussion public).

  2. ...and thank you very much for sharing, once again, these cool experiments.

  3. How Kinect works:
    http://gamerant.com/kinect-night-vision-video-dyce-51156/

    1. Thank you very much, really interesting post!

  4. About the new Kinect model:

    http://www.technologyreview.com/view/515276/what-will-hackers-do-with-the-new-kinect/

    http://blogs.msdn.com/b/kinectforwindows/

  5. Kinect 3D mapping

    http://hackaday.com/2013/06/08/3d-mapping-of-rooms-again/

    1. This is an amazing piece of software; Kintinuous is probably the best solution for a Kinect SLAM documentation. Unfortunately I did not find the source code. It seems the authors are supposed to contribute to the Kinect Fusion project and to PCL, but so far they do not appear to have released the code. If someone can find the source code, it would be nice to test this tool (it would turn Kinect into a really interesting documentation tool for underground environments). Thanks!


This work is licensed under a Creative Commons Attribution 4.0 International License.