ATOR (Arc-Team Open Research).
The blog spreads tests, problems and results of Arc-Team research in archaeology, following the guidelines of the OpArc (Open Archaeology) project.
It has been a long time since the last post here on ATOR, but this year we had to work on several different projects, without the possibility of giving quick feedback on our blog.
Today I start writing again, since some of these projects caught the attention of different institutions in the academic world; in particular, this happened with our underwater archaeology missions.
Of course our primary interest during our dives is archaeological, but often the data we collect can be useful for other specialists (e.g. limnologists or biologists).
This is the reason why we decided to share our data, and we start today with the bathymetric chart of Lake Tovel (previous posts in ATOR: 1, 2). I processed this map while working on the Red Lake Project, a research project directed by Prof. +Tiziano Camagna which studies the medieval submerged forest of Lake Tovel (Trentino - Italy). I produced a 3D model of the bathymetric chart of this lake by directly digitizing the map Edgardo Baldi drew in the '30s. Then I calibrated the result with the LIDAR model of the landscape, freely accessible from the geographic open data portal of the Autonomous Province of Trento (here a short tutorial about how to download data from the webgis).
Here it is possible to download the file (an ESRI ASCII grid), ready to be imported into most GIS software (below, a screenshot of the data in GRASS GIS).
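For readers who want a quick look at the file before loading it into a GIS: an ESRI ASCII grid is plain text, with a short header (ncols, nrows, corner coordinates, cell size, NODATA value) followed by rows of values. As a rough sketch, here is a minimal Python reader; the tiny sample grid is invented for illustration and is not the Tovel data:

```python
# Minimal reader for the ESRI ASCII grid format (header lines + rows of values).
# The sample grid below is made up for illustration; it is NOT the Tovel chart.

HEADER_KEYS = ("ncols", "nrows", "xllcorner", "yllcorner",
               "cellsize", "nodata_value")

def read_esri_ascii(text):
    """Parse an ESRI ASCII grid into (header dict, list of value rows)."""
    header, rows = {}, []
    for line in text.strip().splitlines():
        parts = line.split()
        if parts[0].lower() in HEADER_KEYS:
            header[parts[0].lower()] = float(parts[1])
        else:
            rows.append([float(v) for v in parts])
    return header, rows

sample = """\
ncols 3
nrows 2
xllcorner 0.0
yllcorner 0.0
cellsize 10.0
NODATA_value -9999
-5.0 -12.5 -9999
-3.0 -8.0 -11.0
"""

header, rows = read_esri_ascii(sample)
# Ignore NODATA cells when looking for the deepest point of the toy grid
depths = [v for row in rows for v in row if v != header["nodata_value"]]
print(min(depths))  # -> -12.5
```

Real GIS packages (GRASS, QGIS, GDAL) of course handle this format directly; the sketch only shows how simple the file layout is.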
Tovel Lake bathymetric chart in GRASS GIS
The data are available under the following license:
As you see, we have written few posts on ATOR this summer season, due to different field projects which took us away from home. Today I try to start dedicating some time again to our research blog.
The topic of this post is a solution we are currently using to help us in the archaeological exploration of high alpine lakes: documenting their bathymetry with a low-cost sonar.
As you may know, for a couple of years we have been working on underwater archaeology projects in the alpine lakes of our region (here an example). These exploratory missions are difficult, due to the altitude of the sites we have to investigate (almost always over 2000 meters asl), so that our divers have to acclimatize for one whole day before starting work. Also for this reason we started again to study archeorobotics and to develop, together with our friends at WitLab, an open hardware ROV called ArcheoROV (in order to help divers in exploratory missions).
The ArcheoROV (photo by WitLab)
This year we focused our research on finding a cheap solution to map the bathymetry of the lakes, while WitLab went on working on the Wi-Fi buoy which gives our ROV long-range operability (compared with the limitation of simple control from the shore). For this reason we tested a cheap sonar called Deeper, which is normally used as a fishfinder.
We started our tests in Lake Tovel, thanks to the help of Prof. +Tiziano Camagna, who has been leading the exploration project for many years. This lake is almost our playground for developing and testing new solutions for underwater archaeology, since it is a difficult environment, but not an extreme one (like other high mountain lakes). We chose this location also because, unlike other lakes, its bathymetry was documented by Edgardo Baldi in the '30s. We had already digitized this map, processing a 3D model in GRASS GIS, so that we had some data against which to check the results of our small sonar (as you can see in the image below).
On the left the map drawn by Edgardo Baldi between 1937 and 1938; on the right the 3D derived map developed by Arc-Team in GRASS GIS
Some more details of the 3D map developed with GRASS GIS
To test the Deeper sonar, Prof. +Tiziano Camagna designed a small buoy which can be towed by a kayak. This solution stabilizes the sonar (which always remains in the right position) and, at the same time, avoids its submersion (which causes the loss of the GPS signal).
The first positive results (image below) encouraged us to use this solution on a real mission, at Lake Monticello (almost 2600 meters asl), at Paradiso Pass (near Tonale Pass, Trentino, Italy).
A comparison between the digitized map of E. Baldi (on the left) and the map (work in progress) obtained with the Deeper sonar (on the right)
The expedition was also joined by our friends of Team Nauticamare (Massimiliano Canossa and Nicola Boninsegna) and gave us the opportunity to accomplish a first mapping of Lake Monticello during the first day of acclimatization. This helped us very much during the archaeological underwater mission of the second day. As a result we now have a good 3D map of the bathymetry of the lake, which we will also use in the next expedition (September 2017). Here below is a short video (done with the +QGIS plugin qgis2threejs), which shows the 3D model of the lake.
PS
I recorded some video tutorials related to the processing of these data. I will try to upload them to our channel ASAP.
The 21st Conference on Cultural Heritage and New Technologies (CHNT 21, 2016) took place in Vienna in the first week of November 2016. On that occasion we gave a presentation entitled "Digitizing the excavation. Toward a real-time documentation and analysis of the archaeological record". Today I found the time to publish it on our blog, to share our research on this topic and in particular some interesting "archeorobotics" projects we are working on.
Here below you can see the video of the presentation, done as always with the open source software impress.js and Strut...
... and here is a short description of each slide:
SLIDE 1
The title (strictly related to Digital Archaeology in general)
SLIDE 2
A short presentation of Arc-Team
SLIDE 3
All the work has been done thanks to Free/Libre and Open Source Software. In order to keep going on with our research regarding archaeological methodology we need the source code!
SLIDE 4
The fundamental schema of the archaeological cognitive process elaborated by G. Leonardi in 1982. The schema shows the progressive reduction of the information regarding human actions before and during the archaeological excavation (human activities --> traces on the soil --> natural and anthropological degradation of the record --> archaeological excavation --> archaeological documentation), until interpretative knowledge starts to recover information during the post-excavation stage (with analytical data interpretation and reconstructive hypotheses)
SLIDE 5
A practical example of the schema from the site of Torre dei Sicconi in Italy (a medieval castle):
1. Human activities (summarized in the building of the castle, the medieval battle and the destruction of the main structure and the controlled explosion during the Great War)
2. Traces on the soil (summarized in the evidence of the battle, of the controlled explosion and of recent agrarian activities, while only negative layers were found regarding the construction of the structure)
3. Natural and anthropological degradation (summarized in the battle, the explosion, the agrarian activities and the normal natural dynamics)
4. Archaeological excavation (the most destructive investigation: in Torre dei Sicconi all the layers concerning the tower and the main central building have been removed by this activity)
5. The importance of archaeological documentation derives from the destructive nature of the analysis (excavation). Being a long-term project, Torre dei Sicconi was documented with both traditional and digital methodologies
6. Data analysis. During this stage our knowledge of the site started to grow again. In this case both archaeological and historical techniques have been used
7. Reconstructive hypotheses represent the maximum increase of our (interpretative) knowledge of the site. For Torre dei Sicconi this stage has been achieved just for the central part of the castle (tower and main building)
SLIDE 6
The archaeological excavation is the most critical (destructive) stage of our knowledge regarding a site.
SLIDE 7
Arc-Team's excavation strategies:
1. increasing the amount of information registered while decreasing the time-consuming operation of archaeological documentation
2. on-site direct observation for a better interpretation, avoiding at the same time any kind of data selection
3. moving the lab into the field (chemical and physical analyses)
SLIDE 8
A milestone of our research: in 2006 the development of the "Metodo Aramus" gave us better (more precise and accurate), faster and correct (equalized) 2D digital documentation with FLOSS.
SLIDE 9
Another milestone. Between 2008 and 2009 the migration from pure photogrammetric software to SfM and MVSR methods (through the development of a GUI for +Pierre Moulon's application Python Photogrammetry Suite) gave us better and faster 3D digital documentation
SLIDE 10
Even today we still use a combination of 2D and 3D techniques to meet different requirements of various archaeological projects
SLIDE 11
2D digital documentation through GIS is fast enough for on site interpretation during emergency excavation
SLIDE 12
A software like +QGIS allows direct interpretation in the field without the need for long post-processing
SLIDE 13
3D documentation gives better results, but needs longer processing time (even if data acquisition in the field, which is always performed, does not take long)
SLIDE 14
Thanks to open hardware (archeorobotics), we achieved (lower quality) 3D data acquisition with the fundamental characteristic of being real-time
SLIDE 15
Our experience in archeorobotics dates back to 2006 with our first prototype of a UAV, which could be used professionally only in 2008.
SLIDE 16
Currently our archeorobotics research regards our latest prototype of Archeodrone (a UAV specifically designed for aerial archaeology)...
SLIDE 17
... some CNC machines and, above all, the Fa)(a 3D, an open hardware 3D printer which, without any kind of modification, was able to satisfy our archaeological needs (like 3D printing casts of unique finds or extracting and printing DICOM data from X-ray CT scans)...
SLIDE 18
... and the ArcheoROV, the open hardware underwater Remotely Operated Vehicle which we developed with the +Witlab FabLab
SLIDE 19
Some pictures of the first test of the ArcheoROV
SLIDE 20
A first step into 3D real-time documentation through SLAM (Simultaneous Localization and Mapping) techniques has been done with the open source ROS (Robot Operating System) and RTAB-Map via Kinect...
SLIDE 21
... and tested for 3D real-time documentation in wooded areas (where SfM and MVSR or laser scanning would have been too slow), reaching in about one hour of work a model (with real dimensions) of 75,000 points.
SLIDE 22
A benefit of archaeorobotic systems like these (which are ROS capable) is the possibility of changing the sensor in order to adapt the hardware to different situations, using monocular or stereo cameras (for odometry) as well as LIDAR or SONAR devices.
SLIDE 23
Another benefit is the wide range of possibilities offered by the different open source software (e.g. RTAB-Map, LSD-SLAM, REMODE, Cartographer, etc.)
SLIDE 24
Currently the precision/accuracy level of a real-time 3D archaeological documentation cannot be compared with the results achieved with post-processing through traditional SfM - MVSR systems, but there are good prospects for improvement.
SLIDE 25
Nowadays, based on our professional experience, the best use of such devices seems to be during extreme operations, such as high mountain archaeology, glacial archaeology, underwater archaeology or speleoarchaeology
SLIDE 26
Another important step to improve the reaction time of professional archaeology, in order to avoid errors during the critical stage of the excavation, is the possibility of performing some basic archaeometrical analyses (chemical and physical) directly in the field.
SLIDE 27
Considering that the composition of any archaeological layer is based on two different elements, the skeleton (macroscopic) and the fine earth (microscopic), it is obvious that different analyses can be performed in different work environments.
SLIDE 28
For instance, in the case of the skeleton, a fast petrographic (ontoscopic) analysis can be easily performed directly in the field (defining allogenic elements), while further (more specific) investigations need an equipped laboratory.
SLIDE 29
Also in the case of the fine earth, some raw descriptive analyses can be performed in the field, while laboratory investigations can reach very detailed results (e.g. with the Scanning Electron Microscope).
SLIDE 30
The field analysis of the fine earth is more problematic (compared with the skeleton): the most common tests (e.g. soil texture by feel) are anametric and subjective
SLIDE 31
For this reason, archaeometric tests are the better choice (e.g. the sedimentation test)
SLIDE 32
The sedimentation test in the field can be improved with basic physical analysis (e.g. applying Stokes' Law in order to distinguish sand, silt and clay by the time they need to settle)
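To give an idea of the physics involved, here is a small Python sketch of Stokes' Law applied to a sedimentation column. The default values (quartz grain density, water viscosity at about 20 °C) and the USDA grain-size boundaries are generic textbook assumptions, not the exact parameters of our field procedure:

```python
# Stokes' Law sketch: how long do grains of different sizes take to settle?
# Defaults are textbook assumptions (quartz in still water at ~20 °C),
# NOT calibrated values from an actual Arc-Team protocol.

def settling_velocity(diameter_m, rho_particle=2650.0, rho_fluid=1000.0,
                      viscosity=0.001, g=9.81):
    """Stokes' Law terminal velocity (m/s) for a small sphere:
    v = (2/9) * (rho_p - rho_f) * g * r^2 / mu"""
    r = diameter_m / 2.0
    return (2.0 / 9.0) * (rho_particle - rho_fluid) * g * r**2 / viscosity

def settling_time(depth_m, diameter_m):
    """Seconds for a particle to settle through depth_m of still water."""
    return depth_m / settling_velocity(diameter_m)

# Indicative USDA boundaries: sand/silt at 0.05 mm, silt/clay at 0.002 mm,
# timed over a 10 cm settling column.
for name, d_mm in [("sand/silt boundary", 0.05), ("silt/clay boundary", 0.002)]:
    t = settling_time(0.1, d_mm / 1000.0)
    print(f"{name}: {t:,.0f} s")
```

With these assumptions, grains at the sand/silt boundary settle through 10 cm of water in under a minute, while clay-sized particles need several hours; timing the sedimentation therefore gives a metric estimate of texture instead of a subjective one.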
SLIDE 33
Another field implementation for the sedimentation test is the possibility of storing the data directly in a PostgreSQL/PostGIS database (through some specific fields of the archaeological recording sheet), using the open source application geTTexture.
SLIDE 34
An example of the use of geTTexture
SLIDE 35
Other archaeometric tests which are simple to perform directly during the excavation are based on basic chemical analyses, specifically the quantification of compounds like phosphates or nitrates.
SLIDE 36
Moreover, with some simple workarounds, it is possible to turn anametric (boolean) analyses of carbonates or organic substances into metric (quantitative) observations.
SLIDE 37
The archaeological excavation is a destructive process, subject to fatal (irreversible) errors. Moreover, the reduced time and budget in professional and emergency archaeology increase stress during decision-making stages.
Real-time 3D mapping can speed up data interpretation, avoiding data selection in the field, while on-site chemical and physical analyses (geoarchaeology and archaeometry) can help define a better (data-driven) digging strategy.
I hope this presentation can be useful. Have a nice day!
This is the second presentation we gave at ArcheoFOSS 2016. This time the topic is more related to geoarchaeology and regards geTTexture (the open source application we developed in order to speed up the sedimentation test).
Here below is the link to the original presentation, for the reader who wants to see it directly online:
For those who prefer to see it on youtube, I just uploaded it on our channel:
As for the last post, I report here below a short abstract, briefly describing each slide of the presentation:
SLIDE 1
Title and overview
SLIDE 2
Compiling the archaeological recording sheet is one of the most time-consuming operations in an archaeological project, whether doing it manually...
SLIDE 3
... or using a database.
SLIDE 4
Considering the Italian standards (ICCD, "Istituto Centrale per il Catalogo e la Documentazione"), new archaeologists often have difficulties in describing the composition of an archaeological layer.
SLIDE 5 and 6
SLIDE 7 and 8
No particular difficulties are detected in describing the artificial elements.
SLIDE 9 and 10
Describing the organic and organogenic elements is considered a little more complicated.
SLIDE 11 and 12
The most difficult field is considered the geological one.
SLIDE 13
Geological materials are split into two categories: skeleton and fine earth
SLIDE 14 and 15
The skeleton is normally simpler to identify (both in the field and in the lab).
SLIDE 16 and 17
The fine earth is maybe the most complicated archaeological element to identify in the field, while specialists (geoarchaeologists) need specific equipment in the lab.
SLIDE 18
Fine earth definition in the field is often carried out with anametric and subjective methodologies.
SLIDE 19
Like the feel, ball and ribbon tests
SLIDE 20
The sedimentation test gives more objective results with a minimum metric value.
SLIDE 21
Arc-Team validated the use of the sedimentation test also in emergency excavations (which have a stricter timetable compared with other archaeological projects)
SLIDE 22
Thanks to +Mattia Segata (Arc-Team's geoarchaeologist at ATLAB), the basic methodology has been improved considering Stokes' Law.
SLIDE 23
+Giuseppe Naponiello (Arc-Team's database and WebGIS expert) improved a PostgreSQL database, developed around the Italian archaeological recording sheet. The database is able to integrate the data coming from the sedimentation test.
SLIDE 24
Future integrations are planned for basic analytical chemistry in the field.
SLIDE 25
And for more specific laboratory analyses (e.g. Energy Dispersive X-ray Spectrometry).
SLIDE 26
The DataBase can be easily integrated into a WebGIS
SLIDE 27
The slide is just a demonstration of the software (the code is taken from a prototype).
SLIDE 28
The slide is just an example of one of the video tutorials Arc-Team is producing to explain the sedimentation test and the use of geTTexture.
SLIDE 29
geTTexture will be one of the open source applications for archaeology which Arc-Team is developing and which will compose the Arc-Tool suite.
SLIDE 30
Another extension of geTTexture Arc-Team is working on is related to colorimetry. The idea is to integrate a tool to record anametric analyses
SLIDE 31
or metric data coming from Open Hardware devices (e.g. Public Lab spectrometer)
As I wrote in this post, I am building a very simple Android app in order to use geTTexture (the automatic soil texture triangle +Giuseppe Naponiello developed) from a mobile device, directly in the field.
Thanks to the work of +Giuseppe Naponiello, the website is now optimized to be viewed on a mobile screen as well (the soil texture triangle can now be automatically scaled), so that, if you have an internet connection at your excavation, you can use geTTexture to define the soil texture of your archaeological layers in a semi-automatic way.
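To make the idea of semi-automatic texture definition concrete, here is a hypothetical Python sketch that maps sand/silt/clay percentages to a coarse texture class. The thresholds below are a crude simplification of a few USDA texture classes and are not the actual rules implemented in geTTexture:

```python
# Greatly simplified soil-texture classifier from sand/silt/clay percentages.
# The class boundaries are a rough sketch of a few USDA texture classes,
# NOT the real classification logic used by geTTexture.

def classify_texture(sand, silt, clay):
    """Return a coarse texture class from percentages (must sum to ~100)."""
    if abs(sand + silt + clay - 100) > 1:
        raise ValueError("percentages must sum to 100")
    if clay >= 40:
        return "clay"
    if sand >= 70:
        return "sand"
    if silt >= 80:
        return "silt"
    return "loam"

print(classify_texture(20, 30, 50))  # -> clay
print(classify_texture(40, 40, 20))  # -> loam
```

The full USDA triangle has twelve classes with polygonal boundaries, which is exactly why an automatic tool is more reliable than reading the triangle by eye in the field.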
Here below are two screenshots of the mobile app:
geTTexture mobile (upper screen)
geTTexture mobile (lower screen)
By now the mobile app is a simple link to geTTexture, developed with MIT App Inventor. Nevertheless, I uploaded the source code of the project to Arc-Team's GitHub repository (released under the General Public License), where you can also find the main software (released under Creative Commons Attribution). Here you can find the binary apk file, to install the app on your mobile. If you want to use geTTexture on your PC or laptop, then you can use this direct link.
Have a nice day!
As you probably noticed, one of the topics of ATOR is hardware hacking, with the aim of building new archaeological devices from ordinary objects and tools (33).
This concept is close to that of "reuse" (using an artefact for a purpose completely different from its original function), a pretty common phenomenon in archaeology; in architecture there is something similar, called "spolia" (but maybe our interest in hacking things is just a kind of MacGyver syndrome of people who grew up in the '80s).
However, this post is about hacking a common game device like Kinect to use its characteristics in archaeological 3D real-time documentation. If you are a regular reader of ATOR, you will know that we already faced this challenge, performing a first test (1) with RGBDemo in February 2012, and checking the accuracy and precision of the device in March of the same year (2), after a discussion with some of the researchers of FBK during the workshop "Low cost 3D: sensori, algoritmi e applicazioni". Due to the encouraging results achieved in our first experiments, we worked on the hardware in order to modify it for outdoor projects (3), but soon we experienced the limits of this technology when applied in areas with direct sunlight (4) or in documenting small objects (5, 25). Despite these drawbacks in our research, Kinect worked pretty well in indoor excavations (6), helping us in difficult situations (related to workplace safety) and for particular purposes, like infrared prospections in dark environments (7).
After all these experiences, our final advice about Kinect is that the device has potential in archaeology, but its real employment in professional work is restricted to particular conditions, while in most cases SfM-based techniques are the best option, due to their versatility, which makes them a perfect choice during missions abroad (8), for small finds documentation (9, 10), and for underwater and aerial archaeology (11, 12, 13), considering also the speed which characterizes SfM and MVSR open source software development (14) and the wide range of possibilities among the different tools (15, 16).
Well, at least this was our opinion until now... Currently we are changing our mind about Kinect, due to our professional engagement in underground archaeology (17) and to our renewed interest in robotics. Let's deal with these two points separately.
Underground archaeology
Documenting an underground semisubmerged structure in Firuzabad (Iran)
Like any other operation in archaeological 3D documentation, the tolerance regarding accuracy and precision is variable and influenced by some factors, and mainly: research purposes, logistics, characteristics of the structures to be documented.
Without considering some important exceptions (e.g. prehistoric rock shelters, which are often simple to document with SfM techniques), most of the structures related to underground archaeology (WW1 artificial caves, medieval mines, etc.) are connected with large-scale survey projects (where a "big data" approach is important, raising the tolerance in data acquisition to increase the number of documented structures); with logistically difficult areas (high mountains, glaciers (18, 19), etc.); and with structures often characterized by vast surfaces without important small details, which (when present) can be recorded with a targeted SfM or RTI (21, 22) documentation (e.g. for graffiti, inscriptions (20), manufacturing traces, etc.). For this reason, in most of these projects it is acceptable to compromise on precision in documenting (keeping checkpoints thanks to other TOF instruments, like total stations) in order to gain a real-time response from the selected device; from this point of view, Kinect is often a good solution, considering also that its infrared sensor helps very much in low-light conditions (7).
Documenting WW1 caves in South Tyrol (Italy)
Archeorobotics
Arc-Team's UAV during an aerial archaeology project in Storo (Trentino - Italy)
In 2006, when we joined an aerial archaeological project in Armenia (23), we started to work on "archaeorobotics", trying to develop robotic devices able to help us in the most difficult archaeological missions.
The first positive results we reached in this field were related to aerial archaeology and the building of an open hardware UAV (in 2008), even if at the beginning we underestimated the time needed to practice with our new tool (24). Soon our experience increased as we built different drones, based on open and closed solutions (like kk multicopter (26) or DJI Naza (27) models). The benefits of this research branch were clear (28, 29), and soon other research institutions, like the CNR-ITAB of Rome (30), the University of Lund (31) and the CNR-ISTI of Pisa (32), asked us to give lessons on this topic.
Another field of archaeorobotics we explored is the one related to CNC machines and especially 3D printers. For this topic, precious help came from the company Kentstrapper and Leonardo Zampi (aka +Exekias 87), who helped us in 3D printing the cast of the Taung Child (34, 35). Since the RepRap project started (in 2005), 3D printers have evolved very fast. Of course our interest in these machines is mainly oriented toward Cultural Heritage, and this is also the reason why we built a Fa)(a 3D from scratch (36), but the results with this kind of instrument can be very impressive, especially considering the wide range of scientific applications (37, 38, 39, 40, 41), even if sometimes you have to deal with difficult boolean operations (42).
However, none of the robotic projects we developed until now needed Kinect, being based on UAVs, to 3D document archaeological sites, or on CNC machines, to quickly replicate archaeological artefacts. Our renewed interest in Kinect for archeorobotics is due to our new challenge of developing a ROV (Remotely Operated Vehicle) to assist us in our underwater archaeological missions. Indeed, in the last months, we started a collaboration with WitLab, the FabLab of Rovereto (Trentino - Italy), to develop a new open hardware ROV, especially designed for archaeological aims. One of the main points in developing such an instrument is that the new robot will be oriented not only to 3D documentation, but also to the exploration of unknown areas. For this reason SfM and MVS software are no longer enough, and we had to start testing open source SLAM (Simultaneous Localization And Mapping) algorithms again, since we need to register the submerged landscape in 3D (Mapping), but also to recover the path the "ArcheoROV" took (Localization) to reach new hidden archaeological evidence (for a better planning of human operations).
Testing the ArcheoROV at night
Testing Open Source SLAM solutions
The importance of SLAM algorithms for exploring devices is the main reason why we started experimenting with Kinect again. Indeed, although Kinect cannot be used as an on-board optical device in our ArcheoROV (due to the infrared camera), this tool is the perfect system to check SLAM software.
If you have ever started working on robotics, sooner or later you probably stepped into ROS (Robot Operating System), an open source (BSD License) collection of software frameworks for robots. Of course SLAM is a very important task for any robotic vehicle, and the ROS package RTAB-Map is a perfect solution to implement this capability in any autonomous or remotely operated machine, like our ArcheoROV. For this reason, before starting experiments with more sophisticated (and complicated) systems, we checked RTAB-Map's performance with an old Kinect, and here is the video of the result:
As you can see, the performance of real-time 3D is pretty responsive compared with our old experiments with the open source software RGBDemo (also considering that the Kinect used in this video is the first version, which is now pretty obsolete) and, most importantly, the localization function within the SLAM algorithm works very well. As I wrote at the beginning of the post, our current impression is that this combination of hardware (Kinect) and software (ROS) can be a good solution for underground environment documentation, while the software can be the right choice for archaeological exploring robotic devices.
I hope that this long post will be useful, if you have any feedback, please just write your comment below. Have a nice day!
PS:
we will present the ArcheoROV at ArcheoFOSS (43) in Cagliari (Sardinia - Italy) this year. Our partners from WitLab will also be with us!
This short post is written for archaeologists who frequently perform common data analysis and visualisation tasks in Excel, SPSS or similar commercial packages. It was motivated by my recent observations at the Society for American Archaeology meeting in San Francisco - the largest annual meeting of archaeologists in the world - where I noticed that the great majority of archaeologists use Excel and SPSS. I wrote this post to describe why those packages might not be the best choices, and to explain what one good alternative might be. There's nothing specifically about archaeology in here, so this post will likely be relevant to researchers in the social sciences in general. It's also cross-posted on the Software Sustainability Institute blog.
Prevailing tools for data analysis and visualization in archaeology have severe limitations
For many archaeologists, the standard tools for any kind of quantitative analysis include Microsoft Excel, SPSS and, for more exotic methods, PAST. While these packages are widely used, they have a few limitations that are obvious to anyone who has worked with them for a long time, and that raise the question of what alternatives are available. Here are three key limitations:
File formats: each program has its own proprietary format, and while there is some interoperability between them, we cannot open their files in any program that we wish. And because these formats are controlled by companies rather than a community of researchers, we have no guarantee that the Excel or SPSS file format of today will be readable by any software 10 or 20 years from now.
Click-trails: the main interaction with these programs is by using the mouse to point and click on menus, windows, buttons and so on. These mouse actions are ephemeral and unrecorded, so that many of the choices made during a quantitative analysis in Excel are undocumented. When researchers want to retrace the steps of their workflow days, months or years after the original effort, they are dependent on their memory or some external record of many of the choices made in the analysis. This can make it very difficult for another person to understand how an analysis was conducted, because many of the details are not recorded.
Black boxes: the algorithms that these programs use for generating results are not available for convenient inspection by the researcher. The programs are a classic black box, where data and settings go in, and a result comes out, as if by magic. For moderately complicated computations, this can make it difficult for the researcher to interpret their results, since they do not have access to all of the details of the computation. This black box design also limits the extent to which the researcher can customise or extend built-in methods for new applications.
How to overcome these limitations?
For a long time archaeologists had few options to deal with these problems because there were few alternative programs. The general alternative to using a point-and-click program is writing scripts to program algorithms for statistical analysis and visualisations. Writing scripts means that the data analysis workflow is documented and preserved, so it can be revisited in the future and distributed to others for them to inspect, reuse or extend. For many years this was only possible using ubiquitous but low-level computer languages such as C or Fortran (or exotic higher level languages such as S), which required a substantial investment of time and effort, and a robust knowledge of computer science.

In recent years, however, there has been a convergence of developments that have dramatically increased the ease of using a high level programming language, specifically R, to write scripts to do statistical analysis and visualisations. As an open source programming language with special strengths in statistical analysis and visualisations, R has the potential to be a solution to the three problems of using software such as Excel and SPSS. Open source means that all of the code and algorithms that make the program operate are available for inspection and reuse, so that there is nothing hidden from the user about how the program operates (and the user is free to alter their copy of the program in any way they like, for example, to increase computation speed).
Three reasons why R has become easier to use
Although R was first released in 1993, it has only been in the last five years or so that it has really become accessible and a viable option for archaeologists. Until recently, only researchers steeped in computer science and fluent in other programming languages could make effective use of R. Now the barriers to getting started with R are very low, and archaeologists without any background with computers and programming can quickly get to a point where they can do useful work with R. There are three factors that are relevant to the recent increase in the usability of R, and that any new user should take advantage of:
the release of an Integrated Development Environment, RStudio, especially for R
the shift toward more user-friendly idioms of the language resulting from the prolific contributions of Hadley Wickham, and
the massive growth of an active online community of users and developers from all disciplines.
1. RStudio
For the beginner user of R, the free and open source program RStudio is by far the easiest way to quickly get to the point of doing useful work. First released in 2011, it has numerous conveniences that simplify writing and running code, and handling the output. Before RStudio, an R user had little more than a blinking command-line prompt to work with, and might struggle for some time to identify efficient methods for getting data in, running code (especially if more than a few lines) and then getting data and plots out for use in reports, etc. With RStudio, the barriers to doing these things are lowered substantially. The biggest help is having a text editor right next to the R console. The editor is like a plain text editor (such as Notepad on Windows), but has many features to help with writing code. For example, it is code-aware and automatically colours the text to make it much easier to read (functions are one colour, objects another, etc.). It also has a comprehensive auto-complete feature that shows suggested options as you type and gives in-context access to the help documentation, which makes spelling mistakes rare when writing code. There is a plot pane for viewing visualisations, with buttons for saving them in various formats, and a workspace pane for inspecting the data objects you've created. These kinds of features lower the cognitive burden of working with a programming language, and make it easier to be productive with a limited knowledge of the language.
2. The Hadleyverse
A second recent development that makes it easier for a new user to be productive with R is a set of contributed packages affectionately known in the R user community as the Hadleyverse. User-contributed packages are add-on modules that extend the functionality of base R. Base R is what you get when you download R from r-project.org, and while it is a complete programming language, the 6000-odd user-contributed packages provide ready-made functions for a vast range of data analysis and visualization tasks. Because the large number of packages can make discovering relevant ones challenging, they have been organised into 'task views' that list packages relevant to specific areas of analysis. There is a task view for archaeology, providing an annotated list of R packages useful for archaeological research. Among these user-contributed packages is a set by Hadley Wickham (Chief Scientist at RStudio and adjunct Professor at Rice University) and his collaborators that makes plotting better, simplifies common data analysis activities, speeds up importing data into R (including from Excel and SPSS files), and improves many other common tasks. The overall result is that for many people, programming in R is shifting from the base R idioms to a new set of idioms enabled by Wickham's packages. This is an advantage for the new user of R because code written with Wickham's packages is easier for people to read, as well as being highly efficient to compute. This is because it simplifies many common tasks (so the user doesn't have to specify exotic options if they don't want to), uses common English verbs ('filter', 'arrange', etc.), and uses pipes. Pipes mean that functions are written one after the other, in the order they would appear if you explained the code to another person in conversation.
This is different from the base R idiom, which doesn't have pipes: functions are instead nested inside each other, requiring them to be read from the centre (the inside of the nest) outward to the left (the outside of the nest), or broken up with temporary objects, which is a counter-intuitive flow for most people new to programming.
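To make the contrast concrete, here is a small sketch (the data frame `sherds` and its columns `site` and `count` are invented for illustration; `filter`, `arrange` and the `%>%` pipe come from Wickham's dplyr and magrittr packages):

```r
library(dplyr)

# Base R / nested idiom: read from the inside of the nest outward
result <- arrange(filter(sherds, site == "A"), desc(count))

# Pipe idiom: steps read top to bottom, in the order you would say them:
# "take sherds, then keep site A, then sort by count, descending"
result <- sherds %>%
  filter(site == "A") %>%
  arrange(desc(count))
```

Both versions compute the same thing; the second simply matches the order in which you would describe the analysis to a colleague.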
3. Big open online communities of users
A third major factor in the improved accessibility of R to new users is the growth of an active online community of R users. There has long been an email list for R users, but more recently, user communities have formed around websites such as Stackoverflow. Stackoverflow is a free question-and-answer website for programmers using any language. The unique concept is that it gamifies the process of asking and answering questions, so that if you ask a good question (i.e. well-described, including a small self-contained example of the code that is causing the problem), other users can reward your effort by upvoting your question. High-quality questions can attract very quick answers because of the size of the community active on the site. Similarly, if you post a high-quality answer to someone else's question, other users can recognise this by upvoting your answer. These voting processes make the site very useful even for the casual R user searching for answers (who may not care about voting), because they can identify the high-quality answers by the number of votes they've received. It's often the case that if you copy and paste an error message from the R console into the Google search box, the first few results will be Q&A pages on Stackoverflow. This is a very different experience compared to using the r-help email list, where help can come slowly, if at all, and where, when searching the archives, it's not always clear which is the best solution. Another useful output from the online community of R users is the set of blogs that document how to conduct various analyses or produce visualizations (some 500 blogs are aggregated at http://www.r-bloggers.com/). The key advantage of Stackoverflow and blogs, aside from their free availability, is that they very frequently include enough code for the casual user to reproduce the described results.
They are like a method exchange, where you can collect a method in the form of someone else's code, and adapt it to suit your own research workflow.
There's no obvious single explanation for the growth of this online community of R users. Contributing factors might include a shift in many academic departments from SAS (a commercial product with licensing fees) to R as the software for teaching students, after the Global Financial Crisis of 2008 forced budget reductions at many universities. This led to a greater proportion of recent generations of graduates being R users. The flexibility of R as a data analysis tool, combined with the rise of data science as an attractive career path and demand for data-mining skills in the private sector, may also have contributed to the convergence of people who are active online and are also R users, since so many of the user-contributed packages are focused on statistical analyses.
So What?
The prevailing programs used for statistical analyses in archaeology have severe limitations resulting from their corporate origins (proprietary file formats, uninspectable algorithms) and mouse-driven interfaces (impeding reproducibility). The generic solution is an open source programming language with tools for handling diverse file types and a wide range of statistical and visualization functions. In recent years R has become a very prominent and widely used language that fulfills these criteria. Here I have briefly described three recent developments that have made R highly accessible to the new user, in the hope that archaeologists who are not yet using it might adopt it as a more flexible and useful program for data analysis and visualization than their current tools. Of course it is quite likely that the popularity of R will rise and fall like that of many other programming languages, and ten years from now the fashionable choice may be Julia or something that hasn't even been invented yet. However, the general principle that a scripted analysis using an open source language is better for archaeologists, and for science generally, will remain true regardless of the details of the specific language.
this quick post is intended to be an overview of the new open source software Polygontool, an application our friend +Szabolcs Köllö (aka +keulemaster) developed for Arc-Team. This tool is helping us define an automatic data processing protocol, in order to directly convert raw data files (collected with RTK GPS or total station during survey campaigns) into GIS-readable formats. Currently the tool is in a hard testing phase, being used during an Interreg project (led by +Rupert Gietl) about the Great War along the Austrian-Italian border, but it has already had positive effects on our workflow, reducing the time-consuming operations of manual data processing. The short video below is a demo to explain how the software works and what it can do.
The source code (in Python) can be found on GitHub and it is already usable (if you want to test it) and open to contributions (if you want to help us with the development). Currently the configuration files (in the "config" folder) are optimized for our Interreg project, but you can, of course, modify the terminology to make them fit any other archaeological database.
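To give an idea of the kind of conversion involved, here is a minimal Python sketch of the general approach (Polygontool's real input format, configuration and code are on GitHub; the tuple format, function name and feature codes below are invented for illustration):

```python
# Illustrative sketch only: group raw survey points (id, code, x, y) by
# feature code and emit a WKT POLYGON for each feature.
from collections import defaultdict

def points_to_wkt_polygons(rows):
    """rows: iterable of (point_id, feature_code, x, y) tuples, assumed
    to be recorded in drawing order around each feature's outline."""
    by_code = defaultdict(list)
    for pid, code, x, y in rows:
        by_code[code].append((x, y))
    wkt = {}
    for code, pts in by_code.items():
        if pts[0] != pts[-1]:          # WKT rings must be closed
            pts = pts + [pts[0]]
        coords = ", ".join(f"{x} {y}" for x, y in pts)
        wkt[code] = f"POLYGON(({coords}))"
    return wkt

rows = [
    (1, "US1", 0.0, 0.0),
    (2, "US1", 4.0, 0.0),
    (3, "US1", 4.0, 3.0),
    (4, "US1", 0.0, 3.0),
]
print(points_to_wkt_polygons(rows)["US1"])
# POLYGON((0.0 0.0, 4.0 0.0, 4.0 3.0, 0.0 3.0, 0.0 0.0))
```

The WKT strings produced this way can be loaded directly into most GIS software, which is what makes this kind of automatic conversion save so much manual processing time.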
Soon I will post more reports about Polygontool. For now I hope this preview will be useful for some of you (and maybe for us, if someone joins the project).
for those who are interested, the deadline of the 9th edition of ArcheoFOSS has been extended until April 25. Here is the official announcement (from +piergiovanna grossi):
"IX Workshop Free/Libre and Open Source Software and Open Format in the archaeological research processes.
From survey to data sharing. Technologies, methodologies and languages of open archaeology.
Verona, 19-20 June 2014 (IT)
To encourage the submission of proposals, the deadline has been extended until April 25. The organizing committee's aim is to support the broadest participation in the joint construction of a workshop of increasing quality, hoping that new proposals can be submitted by scholars, researchers, students, professionals, archaeological companies and associations working in the field of Cultural Heritage and FLOSS applications. For proposal submission, please refer to the Call for Proposals page. For more information on the workshop, you can visit the page of ArcheoFOSS 2014."
The Arena of Verona (CC-BY-SA 3.0, author: Lo Scaligero)
I continue recording basic video tutorials about FLOSS in archaeology. This time I show how to turn raw data (from the total station) into WKT, starting with the simplest shape (a point).
As always, I will upload this material to the DADP wiki, updating the old tutorial (I am using a preview version of ArcheOS Theodoric).
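The core of the point conversion shown in the video can be sketched in a few lines of Python (the comma-separated input format used here is invented for illustration; real total station exports vary from instrument to instrument):

```python
def raw_point_to_wkt(line):
    """Convert one raw survey line 'id,code,x,y,z' into a WKT POINT Z string."""
    pid, code, x, y, z = line.strip().split(",")
    return f"POINT Z({x} {y} {z})"

print(raw_point_to_wkt("1,US2,657321.44,5121874.90,204.12"))
# POINT Z(657321.44 5121874.90 204.12)
```

Lines, polygons and more complex shapes follow the same logic, just with more coordinates per feature.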
Hi all,
today I simply post the call for papers of ArcheoFOSS 2014, which will be held in Verona. The text below has been written by +piergiovanna grossi, of the committee that is organizing the event:
The logo of ArcheoFOSS workshops (done by +Paolo Cignoni)
IX Workshop Free/Libre and Open Source Software and Open Format in
the processes of archaeological research
From research to shared knowledge. Technologies, methodologies and
language of open archaeology
Verona, 19-20th June 2014
The IX Workshop ArcheoFOSS will be hosted by the Department Time, Space,
Image and Society and Department of Computer Science, University of
Verona. The workshop will focus on the use of free and open source
software and on the opening and sharing of data related to archeology
and cultural heritage. Key topics range from field research activities,
to analysis and lab studies, to sharing and dissemination via web,
including the presentation of excavation, research, study activities and
of projects aimed at data processing and dissemination.
The thematic lines are:
- FLOSS systems and tools in archaeological and cultural heritage research;
- FLOSS systems and tools in management, preservation and enhancement of
archaeological and cultural heritage;
- FLOSS systems of representation, analysis, sharing and web publishing
of archaeological and cultural heritage;
- Projects focused on opening and disseminating archaeological and
cultural heritage data.
Workshop sessions, in which the proposals may be included, are:
1. talks
2. seminars/workshops
3. presentations of dissertations or small projects in progress
4. barcamp
5. install party or programming section
6. other (to be specified by the proposer)
Proposals will be included in the program based on contents, quality of
work and number of proposals received.
The deadline for submissions is April 15, 2014, at 24.00.
today I recorded a quick video tutorial about the GIS OpenJUMP (one of the geographic applications integrated in ArcheOS).
Normally, when you add a new vector layer to your project, you get a new empty level without any database schema, so that you can start to draw your features; but if you want to attach some information, you have to describe your db schema manually, as in the video below:
Of course, if you have to draw many different layers with a common database schema (as always happens in a typical archaeological GIS), this operation can be time-consuming (and boring). For this reason, in the videotutorial below I try to show how to write a short script which automatically adds a new vector layer with a customized database schema in OpenJUMP:
Since I fear the quality of the video is too poor on YouTube, I prepared an image in which you can see the source code of the script more clearly:
The code of the script
To work correctly, the script has to be placed in your OpenJUMP folder, in /lib/ext/BeanTools/, and, as you see in the video, you have to refresh the menu in OpenJUMP (Customize --> BeanTools --> RefreshScriptMenu) to find it (in ArcheOS Caesar you will find the script already in the menu; just modify the code according to your needs).
As always, I added the tutorial to our ArcheOS wiki (DADP project), to continue building a free documentation system for Digital Archaeology:
I also uploaded the code of the script to a specific GitHub repository, so that, if you want, you can contribute to its development. We can use the comment space of this post to discuss the schema and its possible modifications (or you can simply download the script and modify it to fulfill your specific needs).
As Luca Bezzi said in his presentation in Catania, the next step in the Taung project was 3d printing; in a previous post, I explained some issues we found in the original mesh. But thanks to Cicero's suggestions, the problems have been fixed, and 3 days ago Kentstrapper finally printed the Taung Child skull.
Here are some images:
The .stl model
Kentstrapper strongly believes that 3d printing can be a real revolution in education and culture. And, of course, in archaeology 3d printing could also be a great change for museum expositions: facial reconstructions, scale models of ancient buildings or (as in this case) plastic copies of finds could make archaeology much more easily understandable for visitors.
HERE you can download the final .stl file of the skull.