3D Digitization Project: testing phase

This entry was posted in Digitization, Projects.

Our Digitization Manager, Jeremiah, has been working on a pretty exciting project, and we thought we’d share some pictures from the testing phase.

Late last year, we got a grant from UA Libraries to develop a digitizing process for relief objects, including an inexpensive apparatus used to facilitate the image capture (see these posts about making the components) and a Ruby program that creates the 3D files. The project is now in the testing phase.

For my first test run last week, we chose one of the large keys that used to be in the possession of our special collections library's namesake, W. S. Hoole. (Thank you, Associate Dean Mary Bess Paluzzi!) They're not only cool shapes to work with, but they also have some nice textures:

In Jeremiah’s process, dozens of shots of an object are taken and composited together. To make those captures with our usual mounted-camera digitization setup, we’ve added a secondary apparatus, one with a mask that moves back and forth over the object, targeting portions of it for capture:

Most of what I learned this go-around was how to set up the object for digitization, orienting it to the apparatus. Here it is on a platform under the apparatus (with mask removed):

Once the object is in place, it can’t be moved. Instead, everything else moves, structuring the way light is cast onto the item:

  1. The position of the apparatus changes after each pass over the object. At minimum, we shoot the object from two different orientations, usually three. (If you look closely at the wide shots above, you'll see circular position marks on the table.)
  2. The position of the opening (or aperture) in the green mask moves in increments during the digitization process, so that we capture multiple different views of the object through the aperture.
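The two kinds of movement above can be thought of as a simple grid: every apparatus orientation is paired with every aperture position, and each pairing yields one shot. The sketch below illustrates that capture plan in Ruby; the names and counts (three orientations, twelve aperture steps) are made up for illustration and are not the project's actual parameters.

```ruby
# Hypothetical capture plan: one shot per (orientation, aperture step) pair.
# Orientation names and step counts are illustrative, not the real values.

orientations   = [:position_a, :position_b, :position_c] # marks on the table
aperture_steps = (0...12).to_a                           # increments of the mask opening

shots = orientations.product(aperture_steps).map do |orientation, step|
  { orientation: orientation, aperture_step: step }
end

puts "Total captures: #{shots.size}" # 3 orientations x 12 steps = 36 shots
```

The point of enumerating the plan up front is that nothing about the object itself changes between shots; only the position of the apparatus and the aperture do.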




Why is the mask bright green? For exactly the reason you might expect — to use a green-screen process. It’s not too different from how actors are shot in front of a green background so computer-generated effects can be added in around them. In this case, the green signals to the Ruby software which portions of the image should be cut out — namely, the parts that aren’t the slice of the object.
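At its core, that green-screen step is just a per-pixel decision: is this pixel "mostly green" (mask) or not (object)? Here is a minimal sketch of that idea in Ruby, operating on bare RGB triples; the `green_pixel?` helper and its thresholds are my own illustration, not the actual test the project's software uses, and a real run would read the pixel values from the captured image file.

```ruby
# Minimal chroma-key sketch: discard pixels that read as "mostly green",
# keep everything else as part of the object slice. Thresholds are
# illustrative assumptions, not the project's actual values.

def green_pixel?(r, g, b)
  g > 150 && g > r * 1.5 && g > b * 1.5
end

# Each pixel is an [r, g, b] triple.
pixels = [
  [40, 220, 60],   # bright green  -> masked out
  [180, 170, 160], # brass-colored key metal -> kept
  [35, 200, 50],   # bright green  -> masked out
]

object_pixels = pixels.reject { |r, g, b| green_pixel?(r, g, b) }
puts object_pixels.size # only the key-metal pixel survives the mask
```

The same decision, applied to every pixel of every capture, is what lets the software keep only the slice of the object visible through the aperture.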

After all the capture is done, the images are combined into a whole. It looks like this, before we map a color shot onto it:


In a subsequent post, we'll talk about how the compositing technique works. It has a lot to do with lighting. In the meantime, this is a screenshot of the final product of the key test:


