Last week, we told you about a test of our Digitization Manager’s relief digitization process, under development thanks to a UA Libraries’ Innovation Grant.
The project: a process to capture relief data using (1) a regular mounted digital camera setup, (2) a piece of simple secondary hardware, (3) the ubiquitous photo editing program Adobe Photoshop, and (4) some homegrown open-source software.
For the last few months, Jeremiah's been working simultaneously on designing the hardware (in a way that others can replicate), writing the software (a Ruby script), and working out the logistics of the technique. While last week's post focused on how the captures are made, this one looks more closely at how and why the technique works.
Light and Dark
You might be wondering, Why do you need an extra piece of hardware? The short answer: light. The relief capture process depends on using brightness values to determine height, and the hardware Jeremiah built helps with exactly that.
The movable mask (green) makes the mounted lights illuminate an object in just the right way as it comes into view through the opening in the mask. The object, down below the mask, is shot from above and illuminated at an angle.

Where both lights cross and hit the object, the brightness values will be highest; where neither light hits the object, the brightness values will be lowest. Dozens of slices of the object are shot, each illuminated in this way.
Height and Depth
When the slices are layered into a single image, called a height map, the composite brightness values can be used to create a sense of the object’s height and depth. Here’s what the height map looks like for Mr. Hoole’s key:
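Jeremiah's actual script isn't shown here, but the layering idea can be sketched in a few lines of Ruby. In this toy version (the method name `composite_height_map` and the per-pixel-maximum rule are our assumptions, not necessarily how his script works), each slice is a grayscale image stored as a 2D array of 0–255 brightness values, and the composite keeps the brightest value seen at each pixel:

```ruby
# Toy height-map compositing sketch (not the project's actual script).
# Each slice is a 2D array of 0-255 grayscale values; the composite
# keeps the brightest value recorded at each pixel across all slices.
def composite_height_map(slices)
  rows = slices.first.length
  cols = slices.first.first.length
  Array.new(rows) do |y|
    Array.new(cols) do |x|
      slices.map { |slice| slice[y][x] }.max  # brightest = tallest
    end
  end
end

# Two tiny 2x2 "slices," each lighting up a different part of the object.
slice_a = [[200, 10], [10, 10]]
slice_b = [[10, 10], [10, 180]]
p composite_height_map([slice_a, slice_b])  # => [[200, 10], [10, 180]]
```

The key point is that no single slice sees the whole object well lit; only the composite carries a brightness (and therefore height) value for every pixel.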
Of course, this is in black and white, since it’s only using the lights and darks of the brightness channel. Color values return when a single, un-sliced shot of the object, called a texture map, is combined with the height map in Jeremiah’s software. The software transforms the data into a .x3d file so the object can be viewed in all its lovely dimensions.
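The post doesn't say how Jeremiah's software builds its .x3d files, but one standard way to express a height map in X3D is an `ElevationGrid` node, whose `height` field lists an elevation for every grid point. Here's a hedged sketch (the method name, the 0.01 scale factor, and the omission of the texture map are all our simplifications):

```ruby
# Hypothetical height-map-to-X3D sketch using X3D's ElevationGrid node.
# Each brightness value becomes a scaled elevation; a real pipeline
# would also attach the texture map (e.g. via an ImageTexture node).
def height_map_to_x3d(height_map, scale: 0.01)
  rows = height_map.length
  cols = height_map.first.length
  heights = height_map.flatten.map { |v| (v * scale).round(3) }.join(" ")
  <<~X3D
    <X3D version="3.2">
      <Scene>
        <Shape>
          <ElevationGrid xDimension="#{cols}" zDimension="#{rows}"
                         xSpacing="1" zSpacing="1"
                         height="#{heights}"/>
        </Shape>
      </Scene>
    </X3D>
  X3D
end

puts height_map_to_x3d([[200, 10], [10, 180]])
```

An X3D viewer reading this would render a 2×2 grid of vertices whose heights mirror the brightness values, which is the basic trick behind viewing the key in three dimensions.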
The first time we processed the key, though, the dimensions weren’t so lovely:
Looks kind of fuzzy, no? We picked up so much detail from the surface of the key, so many light and dark tones, that it appears to have wildly variable height and depth, even from pixel to pixel.
The solution in this case was to blur the height map a bit:
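The post doesn't say which blur was applied (Photoshop's Gaussian Blur would be a likely candidate), but even a simple box blur shows why this tames the height values: each pixel becomes the average of itself and its in-bounds neighbors, so isolated bright spikes get flattened toward their surroundings. A minimal sketch, with the method name and 3×3 window as our assumptions:

```ruby
# Minimal 3x3 box blur over a 2D height map (an illustration, not the
# blur actually used). Each pixel is replaced by the integer average of
# itself and its in-bounds neighbors, smoothing out per-pixel spikes.
def box_blur(map)
  rows = map.length
  cols = map.first.length
  Array.new(rows) do |y|
    Array.new(cols) do |x|
      neighbors = []
      (-1..1).each do |dy|
        (-1..1).each do |dx|
          ny, nx = y + dy, x + dx
          neighbors << map[ny][nx] if ny.between?(0, rows - 1) && nx.between?(0, cols - 1)
        end
      end
      neighbors.sum / neighbors.length
    end
  end
end

# A lone bright pixel (a "spike" in the height map) gets spread out:
spiky = [[0, 255, 0], [0, 0, 0], [0, 0, 0]]
p box_blur(spiky)  # => [[63, 42, 63], [42, 28, 42], [0, 0, 0]]
```

After the blur, the 255-high spike has been knocked down and shared with its neighbors, which is exactly the effect that stopped the key's surface detail from reading as wild pixel-to-pixel height swings.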
Okay, that looks like a lot of blur, but it was necessary to tame the height values. The texture map, however, wasn't blurred, so the overall detail is still good:
(You might also notice the background doesn’t show up in this 3D rendering. This wasn’t because of the blur applied but because Jeremiah manually blacked out the background areas, now that he was satisfied with the way the rest turned out.)
Jeremiah’s still in the process of testing the technique and tweaking the software and hardware designs where needed. You’ll see more about this project once it’s out of the testing phase and ready for its public debut. 🙂