DIY Swept Plane 3D Scanner
This is an attempt to make a 3D scanner with a webcam, projector, and a web app.
In broad strokes the steps to produce some 3D data are:
- Calibrate the intrinsics of the projector and the camera, which essentially means determining the focal lengths and screen sizes.
- Calibrate the extrinsics of the camera, which essentially means determining its location and rotation in space.
- Take a reference image of the scene to be scanned.
- Project plane onto scene.
- Sweep plane in one direction.
- Convert camera view and reference image to greyscale and subtract reference from camera view.
- Threshold the resulting image.
- Determine median of groups of white pixels.
- Normalize points in camera and projector space and find intersection between camera view line and projected plane to determine point in real space.
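The final step, intersecting a camera view line with the projected plane, can be sketched like this (a minimal numpy illustration; the function and variable names are ours, not the app's actual code):

```python
import numpy as np

def intersect_ray_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the point where a camera ray meets the projected plane.

    The ray is origin + t * dir; solving n . (origin + t * dir - p) = 0
    for t gives t = n . (p - origin) / (n . dir).
    """
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None  # ray is (nearly) parallel to the plane
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    return ray_origin + t * ray_dir

# Example: camera at the origin looking along +z, plane at z = 10
p = intersect_ray_plane(np.array([0.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 0.0, 10.0]),
                        np.array([0.0, 0.0, 1.0]))
# p is [0, 0, 10]
```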
Intrinsics Calibration
This is a well-studied topic with well-validated calibration techniques available in tools like OpenCV. To save time we did this quick and dirty and just looked up the focal lengths for the projector and camera. They are defined in the initial state like so:
camera: {
focalLength: 1460,
width: 1920,
height: 1080
}
projector: {
focalLength: 1750,
width: 1280,
height: 720
}
The focal lengths are in pixel units, i.e. the physical focal length divided by the pixel size.
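If you only have a focal length in millimeters, you can convert it to pixel units from the sensor width and image resolution. The numbers below are illustrative, not our hardware's actual specs:

```python
# Convert a physical focal length to pixel units:
# f_px = f_mm * image_width_px / sensor_width_mm
f_mm = 3.6             # lens focal length in mm (assumed)
sensor_width_mm = 4.7  # sensor width in mm (assumed)
image_width_px = 1920  # horizontal resolution

f_px = f_mm * image_width_px / sensor_width_mm
# roughly 1470 px, the same ballpark as the 1460 above
```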
Extrinsic Calibration
To calibrate the extrinsics we create a reference image in real space which we will line up with a projected image. Once again this is a hack, but it's quick! In our case we drew the corners of a rectangle on a piece of cardboard.
We then project this image over the cardboard and line up the corners.
To project this image:
- Open the web app.
- Turn on the video stream.
- Open the background and place it on the projector.
- Make the background full screen.
Now you're ready to project background images.
Select crosses and hit draw background.
Then move the projector to match the corners on the cardboard.
Now let's align the camera rotation by projecting two parallel lines and painting two red lines on our camera view which are all parallel. Select "vertical-lines" and check the "Overlay Lines" box.
Now turn off the overlay and project the crosses again. Click on each cross in the camera view to log its pixel coordinates in the console.
We'll plug these into an OpenCV Python program to generate the extrinsic calibration. Set the points in the imgPoints variable in calibrate_cam.py. Here is an example; the order is left-top, right-top, left-bottom, right-bottom:
imgPoints = np.array([[
[600.015625, 388.015625],
[1325.015625, 374.015625],
[611.015625, 886.015625],
[1321.015625, 880.015625]
]], dtype=np.float32)
That should produce this output:
Use the second array of this output to set the camera position variable in index.js. You'll have to invert the signs.
state.cameraPos: [
4.43436471,
-72.2131717,
-43.49579285
]
Notice that for now we are assuming the rotations of the camera are negligible.
Setting Reference
To set your reference, place your object in the scene, set the "blank" background, then click "Set Camera Reference".
That will show you the reference image and a snapshot of the current camera view minus the reference.
Scanning
Select a plane to project as a background. You can use a rectangle or a Gaussian.
Clicking "process" will show the average of the white pixel clumps in each column.
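Finding the stripe in each column amounts to averaging the row indices of the white pixels per column. A numpy sketch, assuming mask is the thresholded difference image (names are ours, not the app's code):

```python
import numpy as np

def column_centroids(mask):
    """For each image column, return the mean row index of the white
    pixels, or NaN where the column has none."""
    rows = np.arange(mask.shape[0])[:, None]  # column vector of row indices
    counts = mask.sum(axis=0)                 # white pixels per column
    sums = (mask * rows).sum(axis=0)          # sum of their row indices
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(counts > 0, sums / counts, np.nan)

# Tiny example: a 4x3 mask with a "stripe" in columns 0 and 2
mask = np.array([
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
    [0, 0, 0],
])
print(column_centroids(mask))  # [1.5, nan, 0.5]
```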
Hit "scan" to sweep the plane across the scene, which will generate a height map and download a PLY file.
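The PLY format itself is simple; a minimal ASCII writer for a point cloud looks like this (a sketch of the format, not the app's actual export code):

```python
def write_ply(path, points):
    """Write an Nx3 list of (x, y, z) points as an ASCII PLY file."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ply("scan.ply", [(0.0, 0.0, 10.0), (1.0, 2.0, 9.5)])
```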
Scanner running:
Resulting in this height map:
And these point clouds:
These results are imprecise, probably mostly attributable to our rough calibration, but despite that we can see that we are getting some 3D data.