Wednesday, April 30, 2014

Time to talk to Unity...

Right now I'm using the Unity3D engine to drive my Oculus Rift. I'm not the best programmer in the world, and the drag-and-drop workflow, along with all the boilerplate it handles and the ease of compiling, really influenced this decision. I was fortunate enough to get a 4-month trial of Unity Pro, so we'll see where I go when that is up...

Anyways, onto the meat of the post. Getting the Rift to work in Unity was as easy as importing the Oculus Rift package, and dropping an OVRCamera into the scene. In minutes I was looking around a virtual white box.

Once I hooked up my two webcams, I was painfully reminded of how long it's been since I've done any programming. My only formal training was 3 years ago, and in Java. C# is similar, so I've been able to trudge along, but it's not without its frustrations! (Basically... I know my code is probably terrible. Feel free to post any corrections you see!)

The first step was to create some quads for the C310s to map their textures onto. I have an empty GameObject attached as a child to each of the cameras (CameraLeft and CameraRight) in the OVRCameraController. These GameObjects are both positioned 20 units out along the Z axis. A quad with a scale of [1,1,1] is placed as a child of each GameObject.
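The setup above can be sketched in code. This is a hypothetical helper, not my actual setup (I did it by hand in the editor); it assumes the eye cameras under the OVRCameraController are named CameraLeft and CameraRight.

```csharp
using UnityEngine;

// Hypothetical sketch: builds the anchor + quad hierarchy described above.
// Attach to the OVRCameraController; names are assumptions.
public class FeedQuadSetup : MonoBehaviour
{
    void Start()
    {
        CreateFeedQuad(transform.Find("CameraLeft"), "LeftFeedAnchor");
        CreateFeedQuad(transform.Find("CameraRight"), "RightFeedAnchor");
    }

    void CreateFeedQuad(Transform eyeCamera, string anchorName)
    {
        // Empty GameObject parented to the eye camera, pushed 20 units out along Z.
        GameObject anchor = new GameObject(anchorName);
        anchor.transform.parent = eyeCamera;
        anchor.transform.localPosition = new Vector3(0f, 0f, 20f);
        anchor.transform.localRotation = Quaternion.identity;

        // A [1,1,1]-scale quad as a child of the anchor.
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.transform.parent = anchor.transform;
        quad.transform.localPosition = Vector3.zero;
        quad.transform.localScale = Vector3.one;
    }
}
```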

Each quad has its own material assigned: copies of a blank material, renamed LeftWebcamFeed and RightWebcamFeed, respectively.

The last piece of setup is to create two new layers (CameraLeft and CameraRight) and assign the left and right GameObjects to them. CameraLeft then has a mixed "Culling Mask", where it can see everything EXCEPT the "CameraRight" layer. CameraRight has the opposite -- it can see everything EXCEPT the "CameraLeft" layer. This prevents double vision, and lets the user actually see the correct images.
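I set the culling masks in the editor, but the same thing can be done from a script. A minimal sketch, assuming the "CameraLeft" and "CameraRight" layers already exist and the camera references are wired up in the Inspector:

```csharp
using UnityEngine;

// Hypothetical sketch of the culling-mask setup described above.
public class EyeCullingSetup : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        int leftLayer = LayerMask.NameToLayer("CameraLeft");
        int rightLayer = LayerMask.NameToLayer("CameraRight");

        // Each eye camera renders everything EXCEPT the other eye's layer.
        leftCamera.cullingMask = ~(1 << rightLayer);
        rightCamera.cullingMask = ~(1 << leftLayer);
    }
}
```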

Next I created a script (called WebcamTextureScript) and added it as a component to each of the quads.


Here's the code in its entirety, hosted on Pastebin.

Please note, this code is barely functional. In the future, I plan to convert the WebCamTexture obtained from the C310s into an EmguCV image, which should let me do computer-vision tasks on it. There's a lot of muck in there that's commented out, and nearly all of the commented-out lines are failed attempts at EmguCV conversions.

The code, as it stands, simply grabs the WebCamTextures from the two cameras and assigns them to either the right or the left eye's quad. It then rotates each quad 90 degrees so the image is oriented correctly. You may have to play with some values to get the correct webcam on the correct eye!

Once the script is added to the quad, it is important to assign the correct public variables.

  • "cam" is the Left or Right Camera, and should be the parent of the quad.
  • "mat" is the material, and should be either LeftWebcamFeed or RightWebcamFeed.
  • "Quad" is the quad that the script is modifying.
  • "Image Offset X" and "Image Offset Y" are helper variables.

I've found that the camera views don't always line up perfectly, and I usually move one quad around to compensate for that. I haven't thought of a good way to calibrate this automatically... but anyone who uses one of these will have already gone through hacking a few webcams, so what's the big deal in changing some numbers?
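For anyone who doesn't want to dig through the Pastebin muck, here's a stripped-down sketch of what the script does. The public variable names match the post; the deviceIndex field and the way the offsets are applied are my assumptions, not necessarily what the real script does.

```csharp
using UnityEngine;

// A minimal sketch of WebcamTextureScript's behavior, as described above.
// deviceIndex and the offset handling are assumptions.
public class WebcamTextureScript : MonoBehaviour
{
    public Camera cam;          // the left or right eye camera (parent of the quad)
    public Material mat;        // LeftWebcamFeed or RightWebcamFeed
    public GameObject Quad;     // the quad this script is modifying
    public float imageOffsetX;  // manual alignment tweak
    public float imageOffsetY;
    public int deviceIndex;     // which webcam to use; swap if eyes are reversed

    void Start()
    {
        // Grab a live feed from one of the attached webcams.
        WebCamDevice[] devices = WebCamTexture.devices;
        WebCamTexture feed = new WebCamTexture(devices[deviceIndex].name);
        feed.Play();

        // Assign the feed to the quad's material.
        mat.mainTexture = feed;
        Quad.GetComponent<Renderer>().material = mat;

        // Rotate 90 degrees so the image is oriented correctly,
        // then nudge by the manual offsets.
        Quad.transform.Rotate(0f, 0f, 90f);
        Quad.transform.localPosition += new Vector3(imageOffsetX, imageOffsetY, 0f);
    }
}
```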

Next update will be focused on getting EmguCV (A C# wrapper of OpenCV) working with Unity, and hopefully a bit of image processing.

