Wednesday, April 30, 2014

Time to talk to Unity...

Right now I've been using the Unity3D engine to drive my Oculus Rift. I'm not the best programmer in the world, and the drag-and-drop functionality, along with all the boilerplate it handles and the ease of compiling, really influenced this decision. I was fortunate enough to get a 4-month trial of Unity Pro, so we'll see where I go when that is up...

Anyways, onto the meat of the post. Getting the Rift to work in Unity was as easy as importing the Oculus Rift package, and dropping an OVRCamera into the scene. In minutes I was looking around a virtual white box.

Once I hooked up my two webcams, I was painfully reminded of how long it's been since I've done any programming. My only formal training was 3 years ago, and in Java. C# is similar, so I've been able to trudge along, but it's not without its frustrations! (Basically... I know that my code is probably terrible. Feel free to post any corrections you may see!)

The first step was to create some planes for the C310s to map their textures onto. I have an empty GameObject attached as a child to each of the cameras (CameraLeft and CameraRight) in the OVRCameraController. These GameObjects are both positioned 20 units out along the Z axis. A quad with a scale of [1,1,1] is placed as a child of each of these GameObjects.
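
I set all of this up by hand in the editor, but the same hierarchy in script form would look roughly like this (the class and anchor names here are hypothetical, not from my project):

    using UnityEngine;

    // Hypothetical script-based version of the hierarchy described above;
    // I actually built mine by dragging things around in the editor.
    public class FeedQuadSetup : MonoBehaviour
    {
        public Camera leftEye;   // CameraLeft from the OVRCameraController
        public Camera rightEye;  // CameraRight

        void Start()
        {
            CreateFeedQuad(leftEye.transform, "LeftFeedAnchor");
            CreateFeedQuad(rightEye.transform, "RightFeedAnchor");
        }

        void CreateFeedQuad(Transform eye, string anchorName)
        {
            // Empty GameObject, pushed 20 units out along the camera's Z axis.
            GameObject anchor = new GameObject(anchorName);
            anchor.transform.parent = eye;
            anchor.transform.localPosition = new Vector3(0f, 0f, 20f);
            anchor.transform.localRotation = Quaternion.identity;

            // Quad at scale [1,1,1], childed to the anchor.
            GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            quad.transform.parent = anchor.transform;
            quad.transform.localPosition = Vector3.zero;
            quad.transform.localRotation = Quaternion.identity;
            quad.transform.localScale = Vector3.one;
        }
    }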

The quads each have their own material assigned: copies of a blank material, renamed LeftWebcamFeed and RightWebcamFeed, respectively.

The last setting to put in place is to create two new layers (CameraLeft and CameraRight), and to set the left and right GameObjects to them. CameraLeft then has a mixed "Culling Mask", where it can see everything EXCEPT the "CameraRight" layer. CameraRight has the opposite -- it can see everything EXCEPT the "CameraLeft" layer. This prevents double vision, and lets each eye actually see the correct image.
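
For reference, the same culling-mask setup can be done in code. I set mine in the Inspector, so this is purely an illustration, with the layer names matching the ones above:

    using UnityEngine;

    // Illustrative only: the culling-mask setup from the Inspector, in code.
    public class EyeLayerSetup : MonoBehaviour
    {
        public Camera leftEye;
        public Camera rightEye;

        void Start()
        {
            // Each eye renders everything EXCEPT the other eye's layer,
            // so each eye only ever sees its own webcam quad.
            leftEye.cullingMask &= ~(1 << LayerMask.NameToLayer("CameraRight"));
            rightEye.cullingMask &= ~(1 << LayerMask.NameToLayer("CameraLeft"));
        }
    }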

Next I created a script (called WebcamTextureScript) and added it as a component to each of the quads.


Here's the code in its entirety, hosted on Pastebin.

Please note, this code is barely functional. In the future, I plan to convert the WebcamTexture captured from the C310s into an EmguCV image, which should let me run computer vision tasks on it. There's a lot of muck in there that's commented out, and nearly all of the commented-out lines relate to failed attempts at EmguCV conversions.

The code, as it stands, simply grabs the WebcamTextures from the two cameras and assigns them to either the right or the left eye's quad. It then rotates each quad 90 degrees so the image is oriented correctly. You may have to play with some values to get the correct webcam on the correct eye!
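
The Pastebin link above has the real thing, warts and all. As a minimal sketch of the working core, stripped of the EmguCV muck, it looks something like this; the device index, rotation axis, and offset handling are my guesses at a clean version rather than a line-for-line copy:

    using UnityEngine;

    // Minimal sketch of WebcamTextureScript's working core. Device index,
    // rotation axis, and offset handling are assumptions, not a verbatim copy.
    public class WebcamTextureScript : MonoBehaviour
    {
        public Camera cam;              // the left or right camera (parent of the quad)
        public Material mat;            // LeftWebcamFeed or RightWebcamFeed
        public GameObject quad;         // the quad this script is modifying
        public int deviceIndex = 0;     // swap this if the eyes end up reversed
        public float imageOffsetX = 0f; // manual nudges to line the two views up
        public float imageOffsetY = 0f;

        private WebCamTexture webcamTexture;

        void Start()
        {
            // Grab one of the C310s and stream it into the quad's material.
            webcamTexture = new WebCamTexture(WebCamTexture.devices[deviceIndex].name);
            mat.mainTexture = webcamTexture;
            webcamTexture.Play();

            // The feed comes in sideways, so rotate the quad 90 degrees
            // (around local Z here; the actual axis may differ)...
            quad.transform.Rotate(0f, 0f, 90f);

            // ...and apply the manual alignment offsets.
            quad.transform.localPosition += new Vector3(imageOffsetX, imageOffsetY, 0f);
        }
    }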

Once the script is added to the quad, it is important to assign the correct public variables.

  • "cam" is the Left or Right Camera, and should be the parent of the quad.
  • "mat" is the material, and should be either LeftWebcamTexture or RightWebcamTexture
  • "Quad" is the quad that the script is modifying.
"Image Offset X" and "Image Offset Y" are helper variables. I've found that the camera views don't always line up perfectly, and I usually move one quad around to compensate for that. I haven't thought of a good way to calibrate this automatically... but anyone who uses one of these will have already gone through hacking a few webcams, so what's the big deal in changing some numbers?

Next update will be focused on getting EmguCV (a C# wrapper of OpenCV) working with Unity, and hopefully a bit of image processing.
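
For the curious, the conversion I've been fumbling with is roughly this shape: a naive per-pixel copy from the WebcamTexture's Color32 array into an EmguCV Image<Bgr, byte>. Consider it a sketch of the plan rather than working code, since my attempts so far have failed:

    using UnityEngine;
    using Emgu.CV;
    using Emgu.CV.Structure;

    // Sketch of the planned WebcamTexture -> EmguCV conversion. A naive
    // per-pixel copy; slow, but easy to reason about. Untested in Unity.
    public static class WebcamToEmgu
    {
        public static Image<Bgr, byte> ToImage(WebCamTexture tex)
        {
            Color32[] pixels = tex.GetPixels32();
            Image<Bgr, byte> img = new Image<Bgr, byte>(tex.width, tex.height);

            for (int y = 0; y < tex.height; y++)
            {
                for (int x = 0; x < tex.width; x++)
                {
                    // GetPixels32 returns rows starting at the bottom-left,
                    // so flip vertically while copying.
                    Color32 c = pixels[(tex.height - 1 - y) * tex.width + x];
                    img.Data[y, x, 0] = c.b;  // EmguCV images are BGR
                    img.Data[y, x, 1] = c.g;
                    img.Data[y, x, 2] = c.r;
                }
            }
            return img;
        }
    }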


Thursday, April 24, 2014

Modifying a Logitech C310 HD webcam

As per William Steptoe's recommendation, I chose to augment my Oculus Rift DK1 with two Logitech C310 webcams, modified to have a wider FOV.

The C310s can be found easily on Amazon for around 40 bucks, with Prime shipping. Not too bad.

Steptoe also cannibalized two Genius WideCam F100s. The F100s have an FOV of 120°, while the C310s only have an FOV of about 40°. Rather than spend an extra 80 dollars, I looked into sourcing my own lenses.

To achieve an FOV of 120°, a focal length of around 2.5mm is needed. I promptly went onto Amazon and picked up a pair of 2.8mm CCTV lenses. They arrived with my cameras, and I set to hacking.

My first issue arose when I found that the threads on the lenses I had ordered were the wrong size! CCTV lenses have a standard mount, called the "S-mount": a 12mm thread with a 0.5mm pitch (M12x0.5). The C310s have an M8x0.5 thread! I searched to no avail for the smaller size, and eventually modeled and printed a new lens adapter. This new lens adapter can be found on Thingiverse.

This worked! I swapped the lenses, and turned on my cameras to measure my FOV... and I was right around 60°.

What? With a 2.8mm lens, the FOV should be at LEAST 115°! A little more research brought me to my answer: the FOV depends on both the focal length of the lens and the size of the camera sensor! The lenses that I purchased were meant for a 1/3" sensor. I carefully measured the sensor on the C310 board, and used this helpful site to find my sensor size. It turns out that the C310 sensor is 3.6mm x 2.7mm, with a diagonal of 4.5mm. This corresponds to a 1/4" sensor!
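
For anyone checking my numbers, the underlying relationship is the standard pinhole approximation (real wide-angle lenses distort, so their rated FOVs come out higher than this predicts):

    \mathrm{FOV} = 2\arctan\!\left(\frac{d}{2f}\right)

where d is the relevant sensor dimension and f is the focal length. Plugging in the C310's 3.6mm sensor width and the 2.8mm lens gives 2·arctan(3.6/5.6) ≈ 65°, right in line with the ~60° I measured.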

There aren't many lenses made for a 1/4" sensor... at least, not cheap ones. I found this site, which let me convert focal lengths between 1/3" and 1/4" sensors. To get an FOV of about 120° on a 1/4" sensor, I needed a lens rated for an FOV of about 160° on a 1/3" sensor.

After a fair amount of digging, I finally came across a reasonably priced 170° FOV lens. This should theoretically put me at about 130° FOV on the C310s... but who knows! When they arrive, I'll test them out.



Below are some random links. I don't remember why I pasted them here, but I'll leave them for posterity.

http://petapixel.com/2013/06/15/a-mathematical-look-at-focal-length-and-crop-factor/

http://www.peauproductions.com/blog/2009/07/23/m12-lens-and-distance-calculator-formulas/

Saturday, April 19, 2014

And now for something completely different...

This isn't strictly related to 3D printing, per se, but I needed a build log, and I may as well document it online.

I'm currently in my last quarter at the University of Washington, which of course means finishing up those one or two credits that you need. I chose to fulfill one of my science credits with PHYS 207: Physics of Music.

Now, this is a bit of a weird choice for me. While I'm not exactly tone deaf, I'm pretty close. A great example is the following video:

[embedded video]

It's a great song, and it makes many people cringe, but I can't really hear anything wrong with it.

Why is this important? It's really not, but it helps the next part make a bit more sense. For Physics of Music, I'm apparently supposed to both write a term paper (easy) and perform musically in front of the class (terrible).

So I figured that I'd base my term paper on teaching someone (me) how to play piano! The title of my paper is:

Teaching Music Through Augmented Reality

Essentially, I am going to convert my Oculus Rift DK1 (an excellent pair of virtual reality goggles) into functional augmented reality goggles through the addition of two webcams. I will then create a virtual scene in which a pair of virtual hands plays a piano piece. This virtual imagery will be overlaid onto the webcam textures, so the student will be able to follow the hands and play along.

This project was heavily inspired by William Steptoe's documentation of his AR-Rift. (The cameras I use, for example, are what he suggests).

Eventually, all of the hardware and software for this project will be opened up and released... but first I need to actually build it all!

Stay tuned....