Digitizing natural heritage
What is natural heritage? Is it linked with cultural heritage? Should it be conserved or considered for digitization?
With 3D digitization, the spotlight is usually on cultural heritage items and monuments, because they were made by us, humans, by our ancestors. They are part of our story and we want to understand them better. But that does not mean we should ignore natural heritage, which is also linked to our ancestors and which might even give us more important answers about ages long gone, when humans were not the apex predator on this planet.
Natural heritage encompasses all the elements of biodiversity (flora, fauna, ecosystem types) together with geological structures and formations (geodiversity). "Heritage" means that we inherited all of this, maintain it in the present and preserve it for future generations. According to the UNESCO World Heritage Convention (1972), and also to the Romanian law (which I think just translated the same phrases), natural heritage includes:
- natural features
- geological and physiographical formations
- natural sites
As with cultural heritage, natural heritage can benefit greatly from new documentation technologies. 3D digitization has long been used for scanning landscapes, caves, rupestrian formations and fossils, each with different purposes and applications.
About the collection
Today I’m going to write briefly about the digitization of a small collection of cave bear mandibles found in several caves in Romania. Part of the INTEGRATE project’s focus, this collection is made of 55 pieces collected from three caves: Muierilor Cave, Urșilor Cave and Bisericuța Cave. Each of these locations has a very interesting story that is not mine to tell, but you can check the INTEGRATE project, which is currently including them in a more complex investigation. These caves are both fascinating and very important for the scientific community.
The fossils ranged between 8 cm and 40 cm in length, and between 2 cm and 7 cm in absolute width (the distance between the farthest projected points on both sides of the fossil). The heaviest was under 2 kg. This first assessment was important for choosing the equipment and the workspace setup. The automatic rotating table I used supports items of at most 2 kg, so this was an important constraint. The largest lighting tent I have offers a 60 cm by 60 cm working area, so the sizes mattered here as well. In the end, I could use my favorite tent and the automatic rotating table for all the fossils. Cool!
A more important aspect of the fossil sizes and shapes (or their ratios) was determining the camera working parameters. Setting an optimal focal length, working distance and lens aperture is crucial for the quality of the scan. The shape of the object dictates the approach and the positions used during the scan. The important thing is to handle the object as little as possible during the scans, so you have to figure out the object positions that let you record all the images needed for a complete 360° scan.
Another possible problem for 3D scanning (whether photogrammetry or laser scanning) was the enameled teeth on these mandibles. Teeth usually exhibit both specular reflections and subsurface scattering, light interaction phenomena that can lead to erroneous results.
The digitization process
Workspace setup
The size and weight of the subjects allowed the use of the light rotating table from OrangeMonkie. The automated process allowed really fast data acquisition (under 10 minutes per object).
For a correct and precise photogrammetry project, one of the most important aspects is the lighting conditions, together with white balance/color correction. The Foldio tent, with its LED lighting and two vertical LED bars, each positioned at 45° from the optical axis of the lens, generated enough uniform light on the surface of the scanned object. This lighting setup allowed me to use narrow apertures for greater depth of field. Special charts were used for color and white balance corrections.
For scale precision I used either a right-angle scale bar or a custom board with a few Metashape coded targets.
Data acquisition and image processing
For this project I used my new Tamron 28-75mm G2 lens on the Sony A7R III with a circular polarizing (CPL) filter. The focal lengths used, depending on the subject, were mainly 50 mm and 75 mm. The aperture throughout the whole project was f/20. Working distances varied from subject to subject, but the goal was to get as close to the object as possible while maintaining a reasonable depth of field (which varied between 10 and 30 cm).
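As a rough sanity check on those numbers, the depth of field can be estimated with the standard thin-lens formulas. This is only a sketch: the 0.030 mm circle of confusion is the conventional full-frame value, and the ~0.7 m working distance is my assumption, not something stated in the post.

```python
def depth_of_field(f_mm, n, s_mm, c_mm=0.030):
    """Approximate near/far limits of acceptable sharpness.

    f_mm: focal length, n: f-number, s_mm: subject distance,
    c_mm: circle of confusion (0.030 mm is typical for full frame).
    """
    h = f_mm ** 2 / (n * c_mm) + f_mm                 # hyperfocal distance
    near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
    far = s_mm * (h - f_mm) / (h - s_mm) if s_mm < h else float("inf")
    return near, far

# 75 mm at f/20, subject ~0.7 m away (assumed working distance)
near, far = depth_of_field(75, 20, 700)
print(f"DoF ≈ {far - near:.0f} mm")   # roughly 94 mm, near the 10 cm end quoted
```

With these assumed inputs the result lands near the lower end of the 10-30 cm range mentioned above, which is consistent with the longer focal length at close range.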
Each object was recorded with an average of 303 images (48 images × 3 rounds × 2 sides, plus specific single shots for obscure angles) from three height levels on each side of the object (base level, 45 degrees and roughly 75-80 degrees). Including the reshoots (two pieces were rescanned), 16,304 photos were shot in total. All images were recorded in RAW format at 42 megapixels.
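The per-object count breaks down as a quick bit of arithmetic; note the ~15 extra shots per object are inferred from the stated average of 303, not counted directly.

```python
images_per_round = 48
rounds = 3
sides = 2

base = images_per_round * rounds * sides   # systematic turntable shots per object
extras = 303 - base                        # extra single shots for obscure angles

print(base, extras)   # 288 systematic + ~15 extra per object
```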
Because the fragments were not all recorded in a single day, slight lighting differences occurred between recording sessions (especially with the morning/evening cycles). To mitigate this, calibration chart photos were taken at the start of each session, and from them a calibration file was generated per session to be applied during the image processing step.
The RAW images were edited in a batch editing application, Adobe Lightroom. The color calibration files were applied, and a custom white balance was set from the white balance chart recorded in the same session as the images. Slight adjustments to shadows and highlights were also carried out. Some parts located deep inside the bone structure were not properly recorded, as they were completely dark.
Edited images were exported with the maximum resolution and quality as JPEG files.
Photogrammetric reconstruction
Photogrammetric reconstruction was carried out in Agisoft Metashape Pro 1.8. The workflow was pretty basic: image import, image alignment at the High quality setting, marker and scale bar creation, alignment optimization, mesh generation with the depth maps method and, finally, texture generation. The meshes were all decimated to 1 million polygons, while the textures (diffuse and normal maps) were baked from the high-poly models at 8K resolution.
The calculated ground sampling distance (GSD) was 0.063 mm/pixel, this being the smallest detail that could be resolved given the camera system and working distance. Due to time constraints, and because this type of project did not require that level of detail, the models were processed at a slightly lower resolution.
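For reference, the GSD follows directly from the pixel pitch and the lens geometry in a simple pinhole model. A sketch, with stated assumptions: the pixel pitch is derived from the A7R III's 42 MP, 35.9 mm wide sensor, and the ~0.7 m working distance is my guess, not a measured value.

```python
SENSOR_WIDTH_MM = 35.9   # Sony A7R III full-frame sensor width
IMAGE_WIDTH_PX = 7952    # horizontal resolution at 42 MP

pixel_pitch = SENSOR_WIDTH_MM / IMAGE_WIDTH_PX   # ~0.0045 mm per pixel

def gsd(focal_mm, distance_mm):
    """Object-space size of one pixel (mm/pixel), pinhole approximation."""
    return pixel_pitch * distance_mm / focal_mm

# e.g. the 50 mm focal length at an assumed ~0.7 m working distance
print(f"{gsd(50, 700):.3f} mm/pixel")
```

With these assumed values the formula lands close to the 0.063 mm/pixel quoted above.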
The purpose of this project was to deliver mesh files of the digitized content in a universal format. My choice was glTF (Graphics Language Transmission Format) in its binary form, .glb. No other deliverables were required, but the original source files and projects were archived for any possible future re-processing at a higher level of detail.
Interactive viewer
Below you can view and analyze one of the fragments, actually the first one that was scanned. The viewer is 3DHOP, which is open source and free to use and adapt to your needs. It requires some HTML knowledge and a 3D file format conversion (it only works with the Nexus format) before you can customize it, but it is a very powerful tool in this regard. You can zoom in and out, pan, measure distances on the surface and play with a light source to analyze small details in raking light (you can also remove the texture for a better view of the details). Another useful tool is the section cut, which lets you cut profiles along all three axes.
So this would be all. Thank you for bearing with me (pun intended) so far, and I hope you enjoyed this short journey. You can always leave a comment, a question or just a cheer below or on social media.
Cheers!
Laurentiu.