It’s been some time since we updated you on Jeremiah’s 3D digitization project. The project has been through one round of prototyping and testing and is now in its second phase. We thought we’d talk about where things stand and show off an early 3D printing test.
The Old Process
The first version of the process — using our Canon EOS 6D plus custom hardware and software — was successful in exactly the way you want an exploratory project to be: it worked well enough to confirm proof of concept and provide feedback to guide further development.
The old process had two main drawbacks. First, it wasn’t entirely automatic. Someone still had to capture the item by hand, using the custom lighting rig and aperture mask (pictured below) to shift the perspective incrementally over dozens of photos.
Photoshop was also needed for parts of the process: for visual assessment (checking that the greenscreen step went as expected) and for creating the final composite image. Only then could that composite “height map” be combined with the “texture map” in his software to produce the 3D image.
Second, there was a concern about the resulting 3D models. Because the process used exposure to determine depth, highlights on shiny objects or changes in color across an item’s surface could be read as changes in lightness and darkness, and therefore as changes in depth, producing anomalies in the finished product.
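To make that failure mode concrete, here’s a toy sketch (not Jeremiah’s code, and the brightness numbers are made up) of what a brightness-as-depth mapping does with a specular highlight: the bright spot reads as a tall spike even though the surface is flat.

```ruby
# Toy illustration: depth inferred directly from brightness (0-255).
# A flat, evenly lit surface with one specular highlight in the middle.
row_brightness = [120, 122, 121, 250, 123, 121, 120]

# Old approach: brighter pixel == "closer" pixel, so scale brightness to depth.
depths = row_brightness.map { |b| (b / 255.0).round(2) }

puts depths.inspect
# => [0.47, 0.48, 0.47, 0.98, 0.48, 0.47, 0.47]
# The highlight becomes a spurious bump in the height map,
# even though the physical surface is flat.
```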
The New Process
The second phase of the project involved trying a different version of the same compositing system. It meant building on previous work but also rethinking some aspects of the process.
First, the automation problem. Jeremiah discovered that a particular line of digital cameras, Canon’s PowerShot series, is highly hackable, lending itself to control by software. Here’s the used model he bought, in all its hot pink glory. 🙂
With a computer script telling the camera what to do, it can automatically capture the full range of images needed for the compositing process, no human intervention required. Here it is in action, mounted on a camera stand and aimed at a test environment.
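Jeremiah’s actual capture script runs against the hacked PowerShot and isn’t reproduced here. Just to illustrate the shape of the idea, here’s a minimal Ruby sketch that drives a tethered camera through the open-source gphoto2 command-line tool instead; the frame count is a placeholder, and the `manualfocusdrive` setting (used to nudge focus between shots) is camera-dependent, so treat it as an assumption rather than a recipe.

```ruby
#!/usr/bin/env ruby
# Rough sketch of an automated capture loop (not the project's actual script).
# Assumes a tethered camera controllable with the gphoto2 command-line tool;
# the focus-stepping config name varies by camera model and is a placeholder.

FRAME_COUNT = 40  # number of focus slices to capture (hypothetical value)

FRAME_COUNT.times do |i|
  # Capture one frame and save it with a predictable, sortable name.
  system("gphoto2", "--capture-image-and-download",
         "--filename", format("slice_%03d.jpg", i)) or abort "capture failed"

  # Nudge focus slightly before the next frame.
  # NOTE: 'manualfocusdrive' and its values are camera-dependent (assumption).
  system("gphoto2", "--set-config", "manualfocusdrive=4")
  sleep 1  # give the camera a moment to settle
end

puts "Captured #{FRAME_COUNT} focus slices."
```

The point is simply that once a script can trigger the shutter and step the focus, the whole stack of images comes out in a known order with no one standing at the camera.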
The process now relies on sharpness, rather than light, to determine depth: a much more reliable method that accommodates a wider range of objects with varied textures and colors. It also eliminates the need for the special lighting and aperture hardware.
Even better, it allows for automated compositing, with his Ruby program doing the interpreting work based on contrast from pixel to pixel. No more wrangling in Photoshop — in fact, no Photoshop at all!
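We haven’t seen the internals of Jeremiah’s Ruby program, but the general idea (for each pixel, pick the focus slice where that pixel is sharpest) can be sketched in a few dozen lines. This version uses the chunky_png gem and a simple neighbor-difference measure of local contrast; the folder name and filenames are hypothetical, the slices are assumed to have been converted to PNG, and the real program presumably does something more sophisticated.

```ruby
require "chunky_png"  # gem install chunky_png

# Load the focus slices in capture order (hypothetical paths).
paths  = Dir.glob("slices/*.png").sort
slices = paths.map { |p| ChunkyPNG::Image.from_file(p) }
abort "need at least two slices" if slices.size < 2

width, height = slices.first.width, slices.first.height

# Average brightness of a pixel.
def brightness(img, x, y)
  c = img[x, y]
  (ChunkyPNG::Color.r(c) + ChunkyPNG::Color.g(c) + ChunkyPNG::Color.b(c)) / 3.0
end

# Local contrast at (x, y): how different a pixel is from its right and lower
# neighbors. Sharper (in-focus) regions score higher.
def contrast(img, x, y)
  b = brightness(img, x, y)
  (b - brightness(img, x + 1, y)).abs + (b - brightness(img, x, y + 1)).abs
end

# Height map: for each pixel, record which slice was sharpest there,
# scaled into a 0-255 grayscale value.
height_map = ChunkyPNG::Image.new(width, height, ChunkyPNG::Color::BLACK)

(0...height - 1).each do |y|
  (0...width - 1).each do |x|
    best = (0...slices.size).max_by { |i| contrast(slices[i], x, y) }
    height_map[x, y] = ChunkyPNG::Color.grayscale((best * 255) / (slices.size - 1))
  end
end

height_map.save("height_map.png")
puts "Wrote height_map.png from #{slices.size} slices."
```

Because the decision at each pixel is “which slice is sharpest here,” specular highlights and color changes no longer masquerade as depth the way they did in the exposure-based version.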
A Test Product
One of the desired outcomes from the project is not just a web-displayable 3D object, but also a computer file that will allow you to 3D print that object. Jeremiah recently took one of his earlier test files to be printed. The object he captured is pictured here for reference, with lighting from multiple angles so you can get a sense of its contours.
Captured and reproduced, it came out like this (evenly lit on the left, lit from an angle on the right).
You’ll notice two things: it’s rough, and it’s backwards. The backwardness is probably fixable; a tweak to the software should flip the output the right way around. The roughness will take more involved adjustments: the 3D printer we used couldn’t really handle the fineness of detail in his STL file, so the software will need to adapt its output to what the printer can actually resolve. It’s like using a Sharpie marker to fill out a form designed for a ballpoint pen: it’ll work, but it’ll be messy.
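Both fixes are easy to see in miniature if you imagine the height map being turned into an STL surface. The sketch below (again, not Jeremiah’s software; the filenames, sampling step, and height scale are all placeholders) mirrors the map left-to-right so the print isn’t backwards, and samples it more coarsely so the triangles stay within what a hobby-grade printer can resolve. It writes just the top surface as ASCII STL, not a watertight solid, which is enough to show the idea.

```ruby
require "chunky_png"  # gem install chunky_png

STEP    = 4    # coarser sampling -> fewer, larger triangles (hypothetical value)
Z_SCALE = 0.1  # millimeters of height per grayscale step (hypothetical value)

map    = ChunkyPNG::Image.from_file("height_map.png")
width  = map.width
height = map.height

# Height in mm at a sample point, with the x axis mirrored to un-reverse the model.
def z_at(map, x, y)
  ChunkyPNG::Color.r(map[map.width - 1 - x, y]) * Z_SCALE
end

File.open("relief.stl", "w") do |stl|
  stl.puts "solid relief"
  (0...(height - STEP)).step(STEP) do |y|
    (0...(width - STEP)).step(STEP) do |x|
      # Four corners of one grid cell, as [x, y, z] triples.
      a = [x,        y,        z_at(map, x,        y)]
      b = [x + STEP, y,        z_at(map, x + STEP, y)]
      c = [x,        y + STEP, z_at(map, x,        y + STEP)]
      d = [x + STEP, y + STEP, z_at(map, x + STEP, y + STEP)]

      # Two triangles per cell; most slicers recompute normals, so 0 0 0 is fine.
      [[a, b, c], [b, d, c]].each do |tri|
        stl.puts "  facet normal 0 0 0"
        stl.puts "    outer loop"
        tri.each { |vx, vy, vz| stl.puts format("      vertex %f %f %f", vx, vy, vz) }
        stl.puts "    endloop"
        stl.puts "  endfacet"
      end
    end
  end
  stl.puts "endsolid relief"
end

puts "Wrote relief.stl"
```

None of this is the real pipeline, of course; it just shows why a left-right flip and a coarser sampling step are the natural places to intervene for the two problems in the test print.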
Jeremiah tells me there’s still a ways to go with the new process, but hopefully one of its output files will be ready for print testing soon. We’ll keep you posted.