Who needs VR when we have Mixed Reality?
Well, leave it to Magic Leap to go and blow up my inbox with inquiries. I mean, not directly... but you get the point. Whenever some new technology thing related to VR/AR hits the Internet, I get a stream of emails and messages asking for my opinion about it, and instead of answering the same question over and over again, I just write a blog post.
So, if you haven’t already seen it, Magic Leap released a concept video on their YouTube channel for what looks like an amazing mixed reality experience. I lean more in the direction of “concept video” and not an actual in-environment demo (yet).
But still, this is pretty much what I would envision the future of synthetic environments to be like.
There are, of course, some caveats a video like this doesn’t address, and those come down to back-end logistics, MEMS sensors, etc. A lot of things that I don’t think Magic Leap has actually solved yet.
That isn’t to say these are unsolvable problems. After all, we’re talking about a combination of a SLAM algorithm, an RGBD camera, and triangulation in conjunction with loose GPS tracking. GPS isn’t as accurate as we need it to be for this tracking system, and we obviously would not want to use external beacons or cameras, so that leaves triangulation in virtual space in conjunction with SLAM and the RGBD camera.
Effectively, the mixed reality environment is seen as a total VR environment by the system itself, and you are the avatar within it. While the end user of such a headset and system still sees mixed reality (the real world plus digital items interacting with it), the back-end system sees it all as a VR overlay, and it runs the calculations (the debug view) as a virtual reality world.
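To make that concrete, here’s a minimal sketch (in Python, with all names hypothetical; none of this comes from Magic Leap) of what treating mixed reality as one VR scene might look like: the SLAM-mapped room, the digital items, and the user’s avatar all live in a single world frame, and the “debug view” is just a rendering of that one environment.

```python
# A minimal sketch (all names hypothetical) of the back end treating mixed
# reality as one virtual scene: SLAM-reconstructed geometry, digital objects,
# and the user's avatar all share a single world coordinate frame.
from dataclasses import dataclass, field

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0  # heading in radians; a real system would use a quaternion

@dataclass
class SceneNode:
    name: str
    pose: Pose
    is_real_geometry: bool  # True for SLAM-mapped surfaces, False for digital items

@dataclass
class MixedRealityScene:
    """From the system's point of view, this is just a VR world."""
    nodes: list = field(default_factory=list)
    user_avatar: Pose = field(default_factory=Pose)

    def debug_view(self):
        # The "debug view" mentioned above: everything, real and virtual,
        # rendered as one environment with the user as an avatar inside it.
        for node in self.nodes:
            kind = "real" if node.is_real_geometry else "virtual"
            p = node.pose
            print(f"{node.name:12s} [{kind}] at ({p.x}, {p.y}, {p.z})")

scene = MixedRealityScene()
scene.nodes.append(SceneNode("office_wall", Pose(0, 0, 3), is_real_geometry=True))
scene.nodes.append(SceneNode("gun_turret", Pose(1, 0, 2), is_real_geometry=False))
scene.debug_view()
```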
In short, it really boils down to this: “If the SLAM algorithm has mapped the area, and it knows your starting position, then in conjunction with the RGBD camera polling it can figure out where you are in that scene relative to that starting location, which was polled from GPS and then corrected with triangulation to begin with.”
When you load up an application, it goes through this process (a rough sketch in code follows the list):
- Where are you? Check GPS to get rough position.
- SLAM algorithm and RGBD camera start building a map of your scene if one does not already exist.
- Software checks the discrepancy between where it thinks you are (GPS) and where you are relative to other objects in your scene as reported by the RGBD camera (where you are looking).
- System corrects your starting position and continues to triangulate from that point as you move around.
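Here’s a rough, runnable sketch of that start-up flow. Everything in it is assumed: the sensor functions and map store are stand-ins for whatever the real hardware/SLAM stack would provide, and the triangulation is a toy version of what would properly be a least-squares solve.

```python
# Hypothetical sketch of the start-up flow described above. The functions,
# coordinates, and readings are placeholders, not real hardware behavior.
import random

def gps_fix():
    """Rough position; consumer GPS is only good to a few meters."""
    return (48.2082 + random.uniform(-3e-5, 3e-5),   # lat, with meters of noise
            16.3738 + random.uniform(-3e-5, 3e-5))   # lon

def load_or_build_map(rough_position):
    """Return an existing SLAM map for this area, or start building one.
    A real system would query a geospatial map store here."""
    return {"anchor_points": [(0.0, 0.0, 2.5), (4.0, 0.0, 2.5), (0.0, 3.0, 2.5)]}

def rgbd_distances_to(anchor_points):
    """What the RGBD camera reports: distances to known landmarks (meters)."""
    return [2.6, 4.8, 3.9]  # placeholder readings

def triangulate(anchor_points, distances):
    """Correct the rough GPS start point against the mapped landmarks.
    Toy version: average the anchors weighted by inverse distance."""
    weights = [1.0 / d for d in distances]
    total = sum(weights)
    x = sum(p[0] * w for p, w in zip(anchor_points, weights)) / total
    y = sum(p[1] * w for p, w in zip(anchor_points, weights)) / total
    return (x, y)

rough = gps_fix()                                              # 1. rough position
scene_map = load_or_build_map(rough)                           # 2. SLAM map for the area
readings = rgbd_distances_to(scene_map["anchor_points"])       # 3. RGBD polling
corrected = triangulate(scene_map["anchor_points"], readings)  # 4. corrected start point
print(f"GPS said roughly {rough}, corrected local start point: {corrected}")
```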
Sounds complicated, and it’s tricky, but not impossible. It just takes some ninja logistics in the background to poll and correct your position via that sensor/hardware/SLAM updating. What I’m saying here is really an oversimplified explanation of the process, so keep that in mind.
As for the “drift” problem, it doesn’t really exist under this combination. Because the SLAM algorithm has mapped the scene, and because the RGBD camera is giving the distance in relation to other objects in that scene, and because of the starting point, it is just regularly polling and triangulating your position to correct it.
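A toy illustration of why: dead reckoning on its own compounds its error every step, while a periodic triangulation against the mapped scene has bounded, non-compounding error and pulls the estimate back. The numbers below are assumed for illustration, not real sensor characteristics.

```python
# Toy 1D illustration of drift correction via periodic re-anchoring.
# All noise figures are made up for the example.
import random

true_pos = 0.0
estimate = 0.0
for step in range(1, 101):
    true_pos += 0.1                          # user walks 10 cm per step
    estimate += 0.1 + random.gauss(0, 0.01)  # dead reckoning accrues error
    if step % 10 == 0:
        # Every few frames, triangulate against the mapped scene landmarks
        # (simulated here as a measurement with bounded noise).
        measured = true_pos + random.gauss(0, 0.02)
        estimate = 0.3 * estimate + 0.7 * measured  # blend toward the anchor
print(f"error after 100 steps: {abs(estimate - true_pos):.3f} m")
```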
That being said, there is still a long way to go.
After all, in the video it is apparent that an RGBD camera isn’t in use. If one were, the 3D objects would be depth-occluded properly and consistently. For instance, the turret that gets dropped and the enemies are occluded properly, but earlier in the video the interface and the things being grabbed (like the Gmail icon) aren’t depth-occluded.
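For reference, depth occlusion against an RGBD feed boils down to a per-pixel depth test: draw the virtual pixel only when the virtual surface is nearer than whatever the depth camera sees there. A simplified, single-pixel sketch (hypothetical function names):

```python
# Per-pixel depth occlusion against an RGBD depth map, simplified to one pixel.
def composite_pixel(real_depth_m, virtual_depth_m, real_rgb, virtual_rgb):
    """Show the virtual pixel only if the virtual surface is nearer than
    the real-world surface the RGBD camera reports at that pixel."""
    if virtual_depth_m is not None and virtual_depth_m < real_depth_m:
        return virtual_rgb  # virtual object in front: draw it
    return real_rgb         # real world in front: virtual pixel is occluded

# A turret behind a couch (couch at 1.2 m, turret at 2.0 m) should disappear:
print(composite_pixel(1.2, 2.0, "couch", "turret"))  # -> couch
print(composite_pixel(3.0, 2.0, "wall", "turret"))   # -> turret
```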
Let us not forget that Magic Leap as a system is really just a visual advancement with light field technology, which for all intents and purposes is a fancy brand name for Virtual Retinal Display tech (seen commercially from Avegant, and available from places such as TriLite), though there hasn’t been much in the way of knowing how the rest of it would work (at least from them). From their patent applications, it looks like a visual recognition system and not a geospatial recognition system.
Maybe with those flying dragons by the beach it’s just getting a basic GPS location and loading a scene for that area?
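If so, that part is comparatively easy; a coarse grid lookup keyed off the GPS fix would do it. A sketch, with made-up coordinates and scene names:

```python
# Hypothetical area-based content loading: bucket a GPS fix into a coarse
# grid cell and look up whatever scene is registered for that cell.
def area_key(lat, lon, cell_deg=0.001):
    """Bucket a GPS fix into a grid cell roughly 100 m across at mid-latitudes."""
    return (round(lat / cell_deg), round(lon / cell_deg))

scene_db = {area_key(36.6101, -121.8674): "dragons_over_the_beach"}

def scene_for(lat, lon):
    return scene_db.get(area_key(lat, lon), "default_empty_scene")

print(scene_for(36.61013, -121.86742))  # -> dragons_over_the_beach
```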
The bigger question is whether anyone is working out the logistics for a geospatial mixed reality system (I mean, I definitely am, but that’s inconsequential to this conversation at the moment), or if everyone is going with closed, localized AR-type systems. If we look at HoloLens, it seems very much like a closed localized system and not a dynamic geospatial one, and Magic Leap at the moment is giving mixed signals.
The patents are showing localized AR while their concept videos are showing an interest in geospatial mixed reality.
Regardless, I really do think this is the real future to keep an eye out for. Not so much virtual reality but instead a geospatial mixed reality system. When that system comes into existence, and it is done correctly, then anyone wearing such a headset (glasses) is going to be a digital god, manipulating the entire world with the wave of a hand.
Pretty much just imagine if what you can do in Second Life was able to be done in Real Life.