
Microsoft Research Brings Kinect-Style Depth Perception to Ordinary Cameras

timothy posted about 4 months ago | from the how-far-away-you-are dept.

Hardware Hacking 31

mrspoonsi (2955715) writes "Microsoft has been working on ways to make any regular 2D camera capture depth, meaning it could do some of the same things a Kinect does. As you can see in the video below, the team managed to pull this off, and we might see this tech all around in the near future. What's really impressive is that this works with many types of cameras. The research team used a smartphone as well as a regular webcam, and both achieved some impressive results. The cameras have to be slightly modified, but only to permit more IR light to hit the sensor." The video is impressive, but note that so are several of the other projects that Microsoft has created for this year's SIGGRAPH, in particular one that makes first-person sports-cam footage more watchable.


Leapmotion anyone? (0)

daid303 (843777) | about 4 months ago | (#47654105)

This is pretty much how the leap-motion works. Nothing really new to see here, move along.

Re:Leapmotion anyone? (4, Informative)

Saint Gerbil (1155665) | about 4 months ago | (#47654137)

Leap motion uses two monochromatic IR cameras and three infrared LEDs.

This claims to use one 2d Camera.

Apples and pears.

Re:Leapmotion anyone? (3, Insightful)

Tx (96709) | about 4 months ago | (#47654189)

It is apples and pears in one sense. However, the fact that the camera needs a modification, however small, means you will still be buying a special bit of hardware to make your gesture control work, so in that sense it is in the same boat as the Leap. Except, of course, that the piece of hardware in question should be a lot cheaper, and could easily be included in laptops/tablets/monitors at minimal extra cost, if it really works that well and the idea takes off.

Re:Leapmotion anyone? (0)

Anonymous Coward | about 4 months ago | (#47654401)

Still much cheaper though, and that's assuming improvements to the algorithm don't negate the need for changing the IR filter.

Re:Leapmotion anyone? (1)

Anonymous Coward | about 4 months ago | (#47654661)

As fast as the price of camera hardware has been dropping, adding a 2nd camera doesn't seem like much of a price to pay to add a dimension to the resulting product....

Re:Leapmotion anyone? (0)

Anonymous Coward | about 3 months ago | (#47676651)

Lisa: By your logic I could claim that this rock keeps tigers away.
Homer: Oh, how does it work?
Lisa: It doesn't work.
Homer: Uh-huh.
Lisa: It's just a stupid rock.
Homer: Uh-huh.
Lisa: But I don't see any tigers around, do you?
[Homer thinks of this, then pulls out some money]
Homer: Lisa, I want to buy your rock.

Re:Leapmotion anyone? (2)

bloodhawk (813939) | about 4 months ago | (#47654393)

It is completely and utterly different from how Leap Motion works; they are not even vaguely similar. This is about using a single 2D camera, not multiple cameras and LEDs.

Re:Leapmotion anyone? (0)

daid303 (843777) | about 4 months ago | (#47654501)

I mean, it works in the same way, as in, it does not really work for anything but a tech demo.

Re:Leapmotion anyone? (0)

Anonymous Coward | about 3 months ago | (#47657125)

It is completely and utterly different from how leap motion works, they are not even vaguely similar. this is about using a single 2D camera not multiple cameras and LED's

Um... it's pretty close.

At 33 seconds into the video you'll see all the parts labeled for the modifications they did to the camera.

There are additional IR LEDs and an IR bandpass filter.

So basically the only difference is they're using 1 camera instead of 2. Even that's not a large advance, given that they're guessing the surface absorption of IR and then using the IR intensity to judge distance, instead of using 2 cameras to exploit interocular distance and image disparity for direct measurement of distance.

I'd be impressed if...

1) they were able to use 1 camera and get distance without resorting to guessing surface absorption of IR (and suffer significant problems because of the guesswork).
2) or they were also able to use the camera in normal mode. This mod basically turns the camera into an IR camera with an IR source (something I did back in 1990 with my camcorder and a spotlight).
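The intensity-based guesswork described above boils down to inverting a brightness falloff model. Here's a minimal sketch assuming a simple inverse-square law and a guessed albedo; the function name, units, and numbers are all illustrative, not Microsoft's actual algorithm:

```python
import math

def depth_from_ir_intensity(observed, emitted_power, assumed_albedo):
    """Estimate distance from reflected IR brightness.

    Assumes an inverse-square falloff with a guessed surface albedo:
        observed = assumed_albedo * emitted_power / distance**2
    so  distance = sqrt(assumed_albedo * emitted_power / observed).
    Any error in the albedo guess shows up directly as depth error.
    """
    return math.sqrt(assumed_albedo * emitted_power / observed)

# A surface reflecting half the light, emitter power 100 (arbitrary
# units), observed brightness 12.5 -> distance 2.0 in the same units.
print(depth_from_ir_intensity(12.5, 100.0, 0.5))  # 2.0
```

This makes the commenter's point concrete: the math only works if the albedo guess is right, which is exactly the "suffer significant problems because of the guesswork" caveat.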

Microsoft still can't get the UI right (1)

mwfischer (1919758) | about 4 months ago | (#47654167)

Why isn't this a split screen of without and with?

Re:Microsoft still can't get the UI right (0)

plover (150551) | about 4 months ago | (#47654191)

Tell you what: you build and program such a system, and see if anyone on Slashdot crucifies you for your demo's UI. Oh, what's that? You've never built anything so cool in your life? Guess that won't happen then.

Re:Microsoft still can't get the UI right (1)

Thanshin (1188877) | about 4 months ago | (#47654275)

So you have no opinion on anything you have no personal experience in?

So that excludes conversation about politics (unless you've managed a country), religion (unless you're a god), the opposite sex of yours, the weather (again, unless god), ...

Re:Microsoft still can't get the UI right (0)

Anonymous Coward | about 4 months ago | (#47654323)

So that excludes conversation about politics (unless you've managed a country), religion (unless you're a god), the opposite sex of yours, the weather (again, unless god), ...

To be fair, directly experiencing the effects of such things qualifies a person to have an opinion. For instance, if a bunch of religious people try to ban atheism/agnosticism, I think I'd be qualified to tell them to shut up.

But, this demo really didn't impact anyone in any noticeable way other than to say "oh hey, that's pretty cool." Therefore, critiques about the demo UI can only really be made by anyone who has done a better demo UI.

Re:Microsoft still can't get the UI right (0)

Anonymous Coward | about 4 months ago | (#47654547)

The whole concept is pretty horrible when you realize you can point at things, but not click... (you stand there in front of an xbox, and point... and then point some more... and wiggle your finger.... or whatever, just to avoid pressing a damn button that would be trivial to press, but we *have* to jump through hoops to avoid touching anything physical with this concept).

(e.g. is it really easier to raise your hands in front of the screen, grab the map with your fists, and scale it, than say... rolling the mouse wheel? It certainly looks cooler, but, eh, not practical at all....)

Re:Microsoft still can't get the UI right (1)

kryliss (72493) | about 3 months ago | (#47657513)

Yes, I know, I'm feeding the troll. Have you ever thought that this may some day be useful for people who have disabilities that keep them from using a mouse? My mom takes meds that make her hands shake all over, and she has a terrible time with a mouse; she sometimes has to grab her mouse hand with her other hand just so she can click the button in the right spot. How about people who have arthritis so bad that they can't even grab the mouse?

Re:Microsoft still can't get the UI right (1)

wonkey_monkey (2592601) | about 4 months ago | (#47654713)

Why isn't what a split screen of without and with what?

Doesn't the kinect use an ordinary camera? (1)

fuzzyfuzzyfungus (1223518) | about 4 months ago | (#47654193)

I thought that the Kinect, while nicer than the average cheapie camera in terms of optics and sensor, also used a fairly normal camera (well, one higher-resolution visual-band one for image and one IR one for depth), and that the real secret sauce was the IR laser device that projected the dot pattern on the environment for the camera to pick up and interpret. Am I remembering incorrectly?

Re:Doesn't the kinect use an ordinary camera? (2)

axedog (991609) | about 4 months ago | (#47654225)

You are indeed remembering incorrectly. Kinect has a colour camera, an IR camera and an IR Emitter. http://msdn.microsoft.com/en-u... [microsoft.com]

Re:Doesn't the kinect use an ordinary camera? (1)

axedog (991609) | about 4 months ago | (#47654247)

Sorry, I read your post a tad hurriedly. Kinect has the three components that you mentioned, but they make it something other than a "fairly normal camera".

Re:Doesn't the kinect use an ordinary camera? (1)

qpqp (1969898) | about 4 months ago | (#47656325)

I believe the question was whether there's been an advance in the depth-sensing algorithm, in the sense that you no longer need a specific IR pattern (i.e. a grid) like the Kinect's, and just a couple of IR emitters are enough.

Re:Doesn't the kinect use an ordinary camera? (4, Insightful)

plover (150551) | about 4 months ago | (#47654273)

You are correct. The IR laser and IR camera are used to measure depth, while the visual light camera only picks up the image.

The cool thing about the Kinect's IR pair is that it senses depth in the same way a pair of eyes does, in that the delta between left and right eyes provides the depth info. But instead of using two eyes, it projects a grid from the location where one eye would be, and the camera in the other location measures the deltas of "where the dot is expected - where the dot is detected". The grid is slightly randomized so that straight edges can be detected. If you've ever stared into one of those Magic Eye random dot stereogram posters, you're doing pretty much the same thing the Kinect does.

This system is very different. The Kinect has a deep field of view, but all the demos show this working in a very short range. I haven't yet read the paper, but I'm wondering if that's the point of the IR.
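The "delta between left and right eyes" described above is standard triangulation: depth = focal length × baseline / disparity. A minimal sketch, with made-up calibration numbers rather than Kinect's real ones:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo / structured-light triangulation.

    A feature (or projected dot) seen from two viewpoints shifts by
    `disparity_px` pixels; the shift shrinks as depth grows:
        depth = focal_length * baseline / disparity
    """
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 580 px focal length, 7.5 cm emitter-to-camera
# baseline, 20 px disparity -> about 2.18 m.
print(depth_from_disparity(580, 0.075, 20))  # 2.175
```

The same formula covers both a two-camera stereo rig and the Kinect's projector-plus-camera pair, since the projector simply takes the place of one "eye".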

Re:Doesn't the kinect use an ordinary camera? (4, Informative)

OzPeter (195038) | about 4 months ago | (#47654319)

This system is very different. The Kinect has a deep field of view, but all the demos show this working in a very short range. I haven't yet read the paper, but I'm wondering if that's the point of the IR.

From watching the video, my understanding is that they illuminate the subject with a fixed IR source, map the drop-off of the reflected IR in 2D space, and then interpret that drop-off as a depth map of the object they are looking at. That looks surprisingly accurate for the sort of use cases they demonstrate. They also point out that this technique is not a general-purpose 3D system.

Re:Doesn't the kinect use an ordinary camera? (1)

ab8ten (551673) | about 4 months ago | (#47655955)

That's how the Kinect 1 works. It projects structured light and then reconstructs the world based on deviations from the expected pattern. It's built from off-the-shelf parts. The Kinect 2 measures the time it takes for an emitted laser light to be reflected back to the sensor. It's much more accurate and reliable, but requires purpose-made sensors, thus increasing the cost. Here's a good article with technical descriptions of the two methods: http://www.gamasutra.com/blogs... [gamasutra.com]

Re:Doesn't the kinect use an ordinary camera? (2, Informative)

Anonymous Coward | about 4 months ago | (#47654311)

I thought that the kinect, while nicer than the average cheapie camera in terms of optics and sensor, also used a fairly normal camera(well, one higher resolution visual band one for image and one IR one for depth) and that the real secret sauce was the IR laser device that projected the dot pattern on the environment for the camera to pick up and interpret. Am I remembering incorrectly?

Yes and no.

It's correct for Kinect 1. It uses a "structured light" approach (developed by PrimeSense), which projects a magic pattern and has a (regular) IR cam observing the distortion in the pattern.

Kinect 2, on the other hand, uses real time-of-flight imaging (or rather, it measures the phase difference between the modulated IR signal and the reflected IR light), very similar to laser distance meters, just 2D instead of a single point. (Versus Kinect 1, it provides better resolution and less noise.)
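The phase-difference measurement described above maps straightforwardly to distance. A sketch of the continuous-wave time-of-flight math; the 80 MHz modulation frequency is illustrative, not Kinect 2's actual value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Continuous-wave time-of-flight distance.

    The reflected signal comes back delayed by the round trip 2*d/c,
    which appears as a phase shift of 2*pi * mod_freq * (2*d/c), so:
        d = c * phase_shift / (4*pi * mod_freq)
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# At 80 MHz modulation, a pi/2 phase shift corresponds to about 0.47 m.
# Phase wraps at 2*pi, so the unambiguous range is c/(2*f) ~ 1.87 m;
# real sensors use multiple frequencies to extend it.
print(round(tof_distance(math.pi / 2, 80e6), 3))
```

This is the same principle as a laser distance meter, just measured per pixel across the whole sensor.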

As predicted in Better Off Ted (4, Insightful)

OzPeter (195038) | about 4 months ago | (#47654221)

At the very end of the video it describes how the system is tuned to skin albedo. The only problem with this is that various races around the world have different albedos - which does have a real world effect in photography when trying to expose correctly for skin. In the video they mentioned training the system on the user, but all users shown in the video were white - so I can't say how well it would work for non-whites. But in general I am impressed with what they have done.

Back in 2009, in Better Off Ted [wikipedia.org] episode 4, "Racial Sensitivity", they developed a security system that had issues with skin albedo and not detecting (from memory) dark-skinned people - which resulted in all sorts of hijinks for the African American employees.

Re:As predicted in Better Off Ted (2, Interesting)

Anonymous Coward | about 4 months ago | (#47654427)

That itself is probably based on the fact that there are biometric systems that do have issues with certain racial types. Iris scanners are one example, as the extra melanin means there's too little contrast to pick up enough detail to be as reliable without tuning for that type of iris. That's been known since way before 2009.

funny that they developed it in an Android phone (0)

Anonymous Coward | about 4 months ago | (#47654361)

amazing

Hyperlapse (1)

fgouget (925644) | about 4 months ago | (#47654545)

They should rename HyperLapse to SmoothLapse, StableLapse or CleanLapse.

Re:Hyperlapse (1)

Anonymous Coward | about 4 months ago | (#47654649)

ProLapse. (with apologies to RockStar)

Gorilla arms, gorilla arms everywhere! (0)

pixie.pt (963700) | about 4 months ago | (#47655547)

Thank god we don't have to suspend the mouse in front of the screen to move the pointer...
Well, the point being: why does the user have to wave their hands in the air when they could just as easily have them rested on a table? Too much Minority Report kool-aid?

yeah (1)

ClumsyNinjaCheats (3782921) | about 4 months ago | (#47655753)

It's hard to disagree. Microsoft is power!