Friday 3 February 2012

Aramus' method: level of detail

Many people have asked us what level of detail can be reached with the "metodo Aramus". It depends on several variables:
  • the computer's RAM,
  • the size of the area,
  • the resolution chosen in GRASS (see the sketch after this list),
  • the "time budget" to spend on the processing.
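Regarding the GRASS resolution, the working region (which bounds the detail of the final orthophoto/DEM) is controlled with g.region. Below is a minimal sketch using Python and grass.script; the 0.005 value is only an illustrative assumption, not the resolution we actually used at Aramus:

    # minimal sketch: set the GRASS working resolution, which limits the
    # detail of any raster (orthophoto/DEM) produced in the location;
    # assumes it is run inside an already started GRASS session
    import grass.script as gs

    # hypothetical target resolution of 0.005 map units (5 mm if the
    # location is in metres); the "p" flag prints the resulting region
    gs.run_command("g.region", res=0.005, flags="p")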
The picture below shows the result obtained by processing 52 pictures on a PC with 2 GB of RAM (4 hours of work).




3 comments:

  1. Well done! Thanks for sharing, but I have some questions ;)

    Do you use some kind of automatic color/brightness equalization to process the images, or is all the equalization done manually?

    Do you know of any osm-bundler implementation, or a similar open/libre tool, that can produce a mesh and a texture map using the same set of pictures?

    A mesh texture created by point-color transfer is probably good enough for structures, but to record earth-color nuances (different S.U.), frescoes or mosaics, a photographic texture is much more accurate.
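    By point-color transfer I mean something like the following: each mesh vertex simply takes the color of its nearest point in the colored cloud. A rough Python sketch (NumPy/SciPy, with hypothetical array names; MeshLab and similar tools do this far more robustly):

        # minimal sketch of per-vertex point-color transfer: each mesh vertex
        # is assigned the color of its nearest neighbour in the colored cloud;
        # cloud_xyz, cloud_rgb and mesh_vertices are assumed NumPy arrays (N, 3)
        from scipy.spatial import cKDTree

        def transfer_point_colors(cloud_xyz, cloud_rgb, mesh_vertices):
            tree = cKDTree(cloud_xyz)                 # spatial index on the cloud
            _, idx = tree.query(mesh_vertices, k=1)   # nearest cloud point per vertex
            return cloud_rgb[idx]                     # per-vertex colors for the mesh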

    rgaidao

  2. Hi Ricardo,

    Normally we equalize the photos manually in GIMP (but I guess this process could be semi-automated in GIMP using colored GCPs; maybe in the next excavation we will have the time to try...). If I remember correctly, our friend Francesco Pirotti (http://www.cirgeo.unipd.it/francescopirotti/index.html) of TeSAF (http://www.tesaf.unipd.it/itn/) once developed a GRASS script to automatically equalize the photos of a photomosaic. It was work related to Giotto's frescoes in the Cappella degli Scrovegni in Padua. The article was published in Geomatics Workbooks (http://geomatica.como.polimi.it/workbooks/n5/articoli/pir-vet_en.pdf).
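    As a simple semi-automatic alternative, each photo of the mosaic can be matched against a reference photo by histogram matching. This is not Francesco's GRASS script, just a generic sketch in Python/NumPy with hypothetical file names:

        # minimal sketch of histogram matching: remap each channel of `source`
        # so that its cumulative histogram follows that of `reference`
        import numpy as np
        from PIL import Image

        def match_histogram(source, reference):
            matched = np.empty(source.shape, dtype=np.float64)
            for c in range(source.shape[2]):                  # loop over R, G, B
                src = source[..., c].ravel()
                ref = reference[..., c].ravel()
                _, s_idx, s_cnt = np.unique(src, return_inverse=True,
                                            return_counts=True)
                r_val, r_cnt = np.unique(ref, return_counts=True)
                s_cdf = np.cumsum(s_cnt) / src.size           # source CDF
                r_cdf = np.cumsum(r_cnt) / ref.size           # reference CDF
                mapped = np.interp(s_cdf, r_cdf, r_val)       # CDF-to-CDF lookup
                matched[..., c] = mapped[s_idx].reshape(source.shape[:2])
            return matched.astype(source.dtype)

        ref = np.asarray(Image.open("photo_01.jpg"))          # hypothetical files
        src = np.asarray(Image.open("photo_02.jpg"))
        Image.fromarray(match_histogram(src, ref)).save("photo_02_eq.jpg")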
    One of the goals of our future research will be to transfer all the steps of the "metodo Aramus" into GRASS, but we still have to evaluate whether, in the next months, it will be simpler to work directly in 3D, which is also the topic of your next question...

    Yes, the next release of MeshLab should be able to extract a high-resolution texture map from the colored point cloud. For now these are just rumors, but in a week we will be in Sweden (Lund University, http://www.lunduniversity.lu.se/), where we will meet Nicolò Dell'Unto (http://www.humlab.lu.se/people/personnel/nicolodellunto), who knows this topic very well, being in close contact with Matteo Dellepiane (http://vcg.isti.cnr.it/~dellepiane/Research_ita.html) of ISTI-CNR, one of the developers of MeshLab. We hope to have good news soon, but until it is possible to get high-resolution textures from SfM/IBM 3D models, it is better to go on with the normal 2D photomapping...

    I hope it helps.
    Ciao

  3. Thanks for the help and especially for the fantastic news about MeshLab. Marco Callieri mentioned that:

    "we will finish to integrate inside MeshLab the complete color pipeline:
    - Image-to-3D precise alignment
    - Color to mesh projection (per-vertex and texture)
    Everything is already there in the code, we are finalizing the interface..."


    http://vcg.isti.cnr.it/~callieri/blendercourse.html

    rgaidao


This work is licensed under a Creative Commons Attribution 4.0 International License.