
Monday, 8 August 2016

Kinect, a sleeping research branch reactivated

As you probably noticed, one of the topics of ATOR is hardware hacking, with the aim of building new archaeological devices from ordinary objects and tools (33).
This concept is close to that of "reuse" (using an artefact for a purpose completely different from its original function), a pretty common phenomenon in archaeology; architecture has something similar, called "spolia" (but maybe our interest in hacking things is just a kind of MacGyver syndrome of people who grew up in the 80s).
However, this post is about hacking a common game device, the Kinect, to use its characteristics for archaeological real-time 3D documentation. If you are a regular reader of ATOR, you will know that we have already faced this challenge, performing a first test (1) with RGBDemo in February 2012 and checking the accuracy and precision of the device in March of the same year (2), after a discussion with some FBK researchers during the workshop "Low cost 3D: sensori, algoritmi e applicazioni". Encouraged by the results of those first experiments, we modified the hardware for outdoor projects (3), but soon ran into the limits of this technology in areas with direct sunlight (4) and in documenting small objects (5, 25). Despite these drawbacks, Kinect worked pretty well in indoor excavations (6), helping us in difficult situations (related to workplace safety) and for particular purposes, like infrared prospections in dark environments (7).
After all these experiences, our final advice about Kinect was that the device has potential in archaeology, but its real employment in professional work is restricted to peculiar conditions; in most cases SfM-based techniques are the best option, due to their versatility, which makes them a perfect choice for missions abroad (8), for small finds documentation (9, 10) and for underwater and aerial archaeology (11, 12, 13), considering also the speed of open source SfM and MVSR software development (14) and the wide range of possibilities offered by the different tools (15, 16).
Well, at least that was our opinion until now... We are currently changing our minds about Kinect, due to our professional engagement in underground archaeology (17) and to our renewed interest in robotics. Let's deal with these two points separately.

Underground archaeology

Documenting an underground semi-submerged structure in Firuzabad (Iran)

As in any other archaeological 3D documentation operation, the tolerance regarding accuracy and precision is variable and depends on several factors, mainly: research purposes, logistics, and the characteristics of the structures to be documented.
Leaving aside some important exceptions (e.g. prehistoric rock shelters, which are often simple to document with SfM techniques), most of the structures related to underground archaeology (WW1 artificial caves, medieval mines, etc.) are connected with large-scale survey projects (where a "big data" approach is important, raising the tolerance in data acquisition to increase the number of documented structures); with logistically difficult areas (high mountains, glaciers (18, 19), etc.); and with structures often characterized by vast surfaces without important small details, which (when present) can be recorded with targeted SfM or RTI (21, 22) documentation (e.g. graffiti, inscriptions (20), manufacture traces, etc.). For this reason, in most of these projects it is necessary to trade some precision (keeping checkpoints thanks to other TOF instruments, like total stations) in order to gain a real-time response from the selected device; from this point of view, Kinect is often a good solution, considering also that its infrared sensor helps very much in low-light conditions (7).

Documenting WW1 caves in South Tyrol (Italy)

Archeorobotics

Arc-Team's UAV during an aerial archaeology project in Storo (Trentino - Italy)

In 2006, when we joined an aerial archaeological project in Armenia (23), we started to work on "archaeorobotics", trying to develop robotic devices able to help us in the most difficult archaeological missions.
The first positive results we reached in this field were related to aerial archaeology and the building of an open hardware UAV (in 2008), even if at the beginning we underestimated the time needed to practice with our new tool (24). Soon our experience grew as we built different drones, based on open and closed solutions (like kk multicopter (26) or Naza dji (27) models). The benefits of this research branch were clear (28, 29), and soon other research institutions, like the CNR-ITAB of Rome (30), the University of Lund (31) and the CNR-ISTI of Pisa (32), asked us to give lessons on this topic.
Another field of archaeorobotics we explored is the one related to CNC machines and especially 3D printers. On this topic, precious help came from the company Kentstrapper and from Leonardo Zampi (aka +Exekias 87), who helped us in 3D printing the cast of the Taung Child (34, 35). Since the RepRap project started (in 2005), 3D printers have evolved very fast. Of course our interest in these machines is mainly oriented to Cultural Heritage, which is also why we built a Fa)(a 3D from scratch (36), but the results with this kind of instrument can be very impressive, especially considering the wide range of scientific applications (37, 38, 39, 40, 41), even if sometimes you have to deal with difficult boolean operations (42).
However, none of the robotic projects we have developed so far needed Kinect, being based on UAVs (to 3D document archaeological sites) or on CNC machines (to quickly replicate archaeological artefacts). Our renewed interest in Kinect for archaeorobotics comes from our new challenge of developing a ROV (Remotely Operated Vehicle) to assist us in our underwater archaeological missions. Indeed, in the last months we have started a collaboration with WitLab, the FabLab of Rovereto (Trentino - Italy), to develop a new Open Hardware ROV, especially designed for archaeological aims. One of the main points in developing such an instrument is that the new robot will be oriented not only to 3D documentation, but also to the exploration of unknown areas. For this reason SfM and MVS software are no longer enough, and we had to start testing Open Source SLAM (Simultaneous Localization And Mapping) algorithms again, since we need both to register the submerged landscape in 3D (Mapping) and to recover the path the "ArcheoROV" followed (Localization) to reach new hidden archaeological evidence (for a better planning of human operations).
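The two halves of the SLAM problem can be illustrated with a deliberately simple sketch (a 2D dead-reckoning toy with made-up numbers, not what a real SLAM system like RTAB-Map does internally, which also corrects drift with loop closures): "Localization" is recovering the vehicle's path from motion estimates, while "Mapping" is projecting sensor readings from each pose into a common world frame.

```python
import math

# Toy 2D illustration of the two SLAM outputs (all numbers hypothetical):
# "Localization" = the path of poses; "Mapping" = sensed points in the world frame.

def integrate_odometry(start, steps):
    """Dead-reckon a path from (forward, dtheta) steps; returns a list of (x, y, theta)."""
    x, y, theta = start
    path = [start]
    for forward, dtheta in steps:
        theta += dtheta
        x += forward * math.cos(theta)
        y += forward * math.sin(theta)
        path.append((x, y, theta))
    return path

def to_world(pose, local_points):
    """Transform sensor readings (range, bearing) into world coordinates (the map)."""
    x, y, theta = pose
    return [(x + r * math.cos(theta + b), y + r * math.sin(theta + b))
            for r, b in local_points]

# Drive 1 m forward, then turn 90 degrees and drive 1 m again,
# seeing one obstacle 0.5 m straight ahead at every pose.
path = integrate_odometry((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)])
world_map = [p for pose in path for p in to_world(pose, [(0.5, 0.0)])]
print(path[-1])   # the recovered final pose ("localization")
print(world_map)  # the accumulated obstacle points ("mapping")
```

In a real system the odometry comes from the RGB-D frames themselves (visual odometry) and accumulated error is corrected when a previously seen place is recognized, but the mapping/localization split is exactly this one.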

Testing the ArcheoROV at night


Testing Open Source SLAM solutions

The importance of SLAM algorithms for exploring devices is the main reason why we started experimenting with Kinect again. Indeed, although Kinect cannot be used as an on-board optical device on our ArcheoROV (due to its infrared camera), this tool is a perfect system for checking SLAM software.
If you have ever worked on robotics, sooner or later you probably stepped into ROS (Robot Operating System), an Open Source (BSD License) collection of software frameworks for robots. Of course SLAM is a very important task for any robotic vehicle, and the ROS package RTAB-Map is a perfect solution to implement this capability in any autonomous or remotely operated machine, like our ArcheoROV. For this reason, before starting experiments with more sophisticated (and complicated) systems, we checked RTAB-Map's performance with an old Kinect; here is the video of the result:



As you can see, the real-time 3D performance is pretty responsive compared with our old experiments with the Open Source software RGBDemo (also considering that the Kinect used in this video is the first version, which is now pretty obsolete) and, most importantly, the localization function of the SLAM algorithm works very well. As I wrote at the beginning of the post, our current impression is that this combination of hardware (Kinect) and software (ROS) can be a good solution for underground environment documentation, while the software can be the right choice for archaeological exploring robotic devices.
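The first step that any RGB-D pipeline of this kind (RGBDemo as well as RTAB-Map) performs is re-projecting each depth pixel into a 3D point with the pinhole camera model. A minimal sketch follows; the intrinsic parameters are rough values often quoted for an uncalibrated Kinect v1, used here only as an assumption:

```python
# Minimal sketch of RGB-D re-projection: each depth pixel (u, v) with depth z
# becomes a 3D point in the camera frame via the pinhole model.
# The intrinsics below are rough, commonly quoted Kinect v1 values (assumption),
# not the result of a real calibration.
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point for a 640x480 depth image

def depth_to_points(depth, fx=FX, fy=FY, cx=CX, cy=CY):
    """depth: dict {(u, v): z_meters}; returns a list of (X, Y, Z) camera-frame points."""
    points = []
    for (u, v), z in depth.items():
        if z <= 0:          # zero means "no reading" on the sensor
            continue
        points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A hypothetical three-pixel depth frame:
cloud = depth_to_points({(319.5, 239.5): 2.0, (419.5, 239.5): 2.0, (100, 100): 0})
print(cloud)  # the pixel on the principal point maps to (0.0, 0.0, 2.0)
```

SLAM then boils down to estimating, frame after frame, the rigid transformation that best aligns each new cloud with the previous ones.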

I hope this long post will be useful; if you have any feedback, please write a comment below. Have a nice day!

PS:

We will present the ArcheoROV at ArcheoFOSS (43) in Cagliari (Sardinia - Italy) this year. Our partners from WitLab will be with us too!

Webography

ATOR:

(1) Kinect, real-time 3D; (2) Kinect accuracy and precision with RGBDemo; (3) Kinect 3D outdoor: hacking the hardware; (4) Kinect 3D outdoor: first test; (5) Kinect 3D limits: documenting small objects; (6) Kinect 3D indoor: excavation test; (7) Kinect - Infrared prospections; (8) Aramus 2014: 2D and 3D documentation of archaeological excavation; (9) 3D for archaeological finds; (10) Taung Project: 3D with SfM & IBM; (11) Extreme SfM: underwater archaeology; (12) From drone-aerial pictures to DEM and ORTHOPHOTO: the case of Caldonazzo's castle; (13) Documentation of a bas-relief on a cliff: the workflow; (14) CMVS/PMVS2 40% faster; (15) OpenMVG VS PPT; (16) MicMac and PPT: two FLOSS solutions for 3D data; (17) SfM for Underground Documentation; (18) Archaeology as a profession; (19) Glacial Archaeology: About the challenge to work in extreme conditions; (20) WW1: High Alpine Survey Data - Work in Progress; (21) Arc-Team tries Large Scale Reflectance Transformation Imaging (RTI); (22) WebRTIViewer; (23) UAVP (Universal Aerial Video Platform); (24) UAVP indoor flight; (25) 3D documentation of small archaeological finds; (26) Building an Xcopter; (27) Arc-Team's UAVP: testing the NAZA dji; (28) Xcopter drone and SFM techniques; (29) ArcheOS and UAVP for archaeological remote sensing; (30) Open Source Remote Sensing Platform; (31) Remote sensing with UAV in archeology (lessons at Lund University); (32) Aerial archaeology with FLOS Hardware and Software; (33) A DIY endoscope for emergencies during excavation fieldwork; (34) 3D printing the past: some issues; (35) The Taung Child is now touchable, thanks to 3D printing; (36) 3D printing for Cultural Heritage; (37) Space archaeology; (38) 3D printing Google Maps is now easy; (39) When Veterinary Medicine and 3D printing meet each other; (40) Three more animals are saved with the aid of Blender and 3D printing; (41) Augmented Reality at Cultways; (42) Boolean operations - the powerful Cork!; (43) ArcheoFOSS 2016 in Cagliari!

Kentstrapper website: http://kentstrapper.com/

Fa)(a 3D website: http://www.falla3d.com/

WitLab website: http://www.witlab.io/

ROS website: http://www.ros.org/

RTAB-Map website: http://introlab.github.io/rtabmap/

Thursday, 1 November 2012

Kinect 3D limits: documentation of small objects

As Moreno Tiziani wrote in his post, last Monday (October 22) I was in Padua to start the "Taung Project". The first step of this research was the 3D documentation of the cast of the Taung Child, preserved in the Museum of Anthropology of Padua University.
To digitally register our subject we chose SfM/IBM techniques (using ArcheOS and PPT), because, as I reported in this post, the methodology is accurate enough to document small objects. Nevertheless, I also brought our hacked Kinect to Padua, to show Moreno how this system works in 3D recording operations.

Red circle: Kinect. Green circle: Taung Child's cast. Blue circle: RGBDemo compiling on ArcheOS

As we expected, the cast was too small to be documented with Kinect. The reason is clear: when Kinect is too close, it simply does not "see" the subject, while when the device is too far away, it registers too few 3D points, so the final mesh is not accurate enough.
Unfortunately, I did not capture a screenshot of our test, but I think the images below illustrate the concept: in the first picture my hand is too close to the sensor and appears completely black, while in the second picture Kinect can see my hand, which appears pink, but the resolution is too low.

The sensor is too close to the subject

The distance between the sensor and the subject is adequate, but the resolution is too low
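The working envelope described above can be summed up in a few lines of code. The limits used here (about 0.5 m near, about 4 m as a practical far distance for the first-generation Kinect) are indicative values from our experience, not datasheet figures:

```python
# Sketch of the Kinect depth working envelope described above.
# NEAR_M / FAR_M are indicative values (assumption), not datasheet limits.
NEAR_M = 0.5   # below this the sensor is blind
FAR_M = 4.0    # beyond this too few points fall on a small subject

def classify_distance(z_m):
    """Classify a subject distance (meters) for Kinect-style depth capture."""
    if z_m < NEAR_M:
        return "too close: sensor is blind (the 'black hand' case)"
    if z_m > FAR_M:
        return "too far: too few points per cm, mesh not accurate"
    return "usable"

for d in (0.2, 1.2, 6.0):
    print(d, "->", classify_distance(d))
```

For an object only a few centimeters across, even the "usable" band is problematic: at the minimum distance the point spacing on the surface is still too coarse, which is exactly the small-finds limit discussed in this post.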

However, we did use Kinect to document something in the Museum of Anthropology: a wooden Egyptian sarcophagus.
As you can see in the short movie below, we registered just one side of the object, for the reason I explained before: when Kinect is too close to the subject it does not work properly. In this case the sarcophagus was placed too close to the wall (about 50 cm) and to a glass showcase (about 20 cm). It would have been possible to scan all three visible faces and join them together in post-processing with MeshLab, but this was just an experiment, so we concentrated on the Taung cast.



However, in the movie it is also possible to observe another interesting characteristic of Kinect: being an infrared-based device, it cannot see through glass, which is registered like a normal opaque object.

I hope it was useful, have a nice day!


Monday, 8 October 2012

Kinect 3D indoor: excavation test

To complete the "Kinect trilogy", today I am writing this post about our first test during real archaeological fieldwork.
Also in this case, we (Alessandro Bezzi and I) used our "hacked Kinect" with the external battery connected to the rugged PC and, again, the software chosen for data acquisition was RGBDemo. This time we documented a layer in 3D during an "indoor" excavation, to avoid the problems with direct sunlight I described in this post.
The video below tries to summarize this operation...




... and here are some screenshots to have an idea of the final result:

The pointcloud (frontal view)


The pointcloud (side view)

The mesh

The mesh (wireframe)

As you can see, the general quality is lower compared to the results we can obtain with other techniques (e.g. SfM and IBM), but Kinect and RGBDemo have the benefit of acquiring and elaborating the data almost at the same moment, with the possibility to see the documentation process in real time.
Ultimately, Kinect is one more option to consider for 3D indoor documentation, depending on the peculiarities of the archaeological project (light conditions, available time, required level of detail, etc.). Our experiments will now go on with some tests in particular situations where this technique could be the best option (especially in underground environments).
Have a nice day!

Saturday, 6 October 2012

Kinect 3D outdoor: first test

It was a sunny September Sunday, so I decided to take a walk with my wife Kathi and show her one of the hermitages located in the valley in which we live (Val di Non, Trentino, Italy). 
My second thought was that the ramble was a perfect opportunity to test the hacked Kinect and try to document in 3D the main wall of S. Gallo's ruins (the remains of the hermitage). So I packed the backpack with the Kinect, the external battery and the rugged PC we normally use on archaeological excavations.
After half an hour's walk through apple orchards and woods we reached the hermitage. Along the way we also found a stunned rooster. That was strange! A rooster, in Italian "gallo", at S. Gallo's hermitage...
However, we began trying to document the main wall of the ruins, which you can see in the picture below...

S. Gallo's hermitage, with the rooster

... but, probably due to the direct sunlight, Kinect and RGBDemo were not working properly.
In fact, as you can read in the poster by M. Dalla Mura, M. Aravecchia and M. Zanin (presented during the "LOW COST 3D: sensori, algoritmi e applicazioni" workshop), "...The main issue is due to direct Sun illumination that leads to saturation in the depth acquisition...". Moreover, the software (RGBDemo) was reacting very slowly, but this was probably due to the hardware (a Panasonic Toughbook), which is less powerful than the laptop I normally use for work. Secondly, RGBDemo also seems to work better on GNU/Linux (ArcheOS), the operating system running on my laptop, than on Windows, the rugged PC's OS (but this could be just my impression).
Not being satisfied with the results I got from the 3D documentation of the ruins (software too slow to manage the whole data recording process, high errors on the sunny parts of the wall, etc.), I decided to look for another subject to document. Luckily, there is no lack of caves at S. Gallo's hermitage, so, with Kathi's help, I made a fast digital 3D copy of the cave you see in the picture below.

S. Gallo's cave


This time the software worked well, fast enough to be used in the field and with negligible errors in data acquisition. In the movie below you can see the final pointcloud (not complete, but big enough to judge the quality of a 3D "field" documentation with Kinect).



After this test, our Kinect was ready for the "trial by fire" of a real (indoor) archaeological excavation, which will be the topic of one of the next posts on ATOR.
Ciao.
This work is licensed under a Creative Commons Attribution 4.0 International License.