When Infinite Detail Isn’t a Hoax
There’s this company, and I’m sure by now you’ve heard of them, called Euclideon, which has been showing off insanely impressive videos of a rendering engine they are working on that supposedly handles “unlimited” detail in real-time using only software methods. I’ve read the responses from major players like @Notch at Mojang (of Minecraft fame), and even he is shouting that it’s a scam.
The story goes that Euclideon merely has a variation of a sparse voxel engine, and that it’s an open-and-shut case; I mean, how do you move around 512 petabytes of information for such a large island without requiring tons of storage space and rendering time?
This would be true if it weren’t for some related factors that people seem to have glossed over in their assessment of the situation.
The Euclideon system, aptly named Unlimited Detail, may actually be quite capable of doing exactly what they claim. To understand how, though, we first need to suspend our disbelief, start asking some important questions, and even get a little philosophical.
Overwhelming Complexity
Let’s say for a moment that this is some sort of elaborate hoax. After all, the sheer amount of data involved is mind-boggling if the entire 3D space uses what amounts to an atomic level of voxel detail without repeating the data, right? Similar arguments are given even by well-known game programmers like @Notch at Mojang, as quoted below:
Well, it is a scam.
They made a voxel renderer, probably based on sparse voxel octrees. That’s cool and all, but.. To quote the video, the island in the video is one km^2. Let’s assume a modest island height of just eight meters, and we end up with 0.008 km^3. At 64 atoms per cubic millimeter (four per millimeter), that is a total of 512 000 000 000 000 000 atoms. If each voxel is made up of one byte of data, that is a total of 512 petabytes of information, or about 170 000 three-terabyte hard drives full of information. In reality, you will need way more than just one byte of data per voxel to do colors and lighting, and the island is probably way taller than just eight meters, so that estimate is very optimistic.
Assuming is a big word here, and that is exactly what the gaming industry (and Notch) seems to be doing right now concerning this type of technology. 512 petabytes is a mighty lot of data to be moving around and storing, but we’ve left out a really simple explanation for why Unlimited Detail probably is not storing or moving around anywhere near that much data for the level of detail they are showing. Even if a well-known game programmer like Notch at Mojang insists this is impossible, we must understand that “impossible” oftentimes means we simply do not understand, or are unwilling to try, and those are two very different scenarios.
Below, I’d like to set aside my disbelief and try to wrap my head around how such a thing would be possible, and we’ll assume it is while we’re trying to figure it out, like a good magic trick. As you read on, just keep in mind that it’s all hypothetical, even if plausible. I have no actual idea how they are doing it, but I can make some very educated guesses instead of blindly dismissing it as a scam or hoax.
Non-Linear Data
When does 512 petabytes of data not actually equal 512 petabytes? The answer to this requires non-linear thinking, something the gaming industry seems to have been short on lately.
It is very likely that Euclideon actually is using a type of voxel rendering system, but nowhere near the traditional kind we immediately think of. In all likelihood, it is closer to a procedural voxel rendering engine than a traditional one. As is said in the videos, the screen space itself is the limiting factor, and it is also the key to all of this.
Take a Mandelbrot fractal, which goes on infinitely: all it really takes to make that happen is a simple equation. In the same manner, converting models into proceduralized point cloud data would be a likely approach for “unlimited” detail, on top of the built-in level of detail that voxel systems already provide, coupled with highly aggressive point cloud culling based on algorithmic rules. It wouldn’t store a 1:1 representation, but instead a procedural, algorithmic method to be solved against the screen space itself, in much smaller chunks, rather than worrying about the entire 3D world. Essentially, it would stream the point cloud data as required for the current screen space, with a very aggressive culling and search-based algorithm discarding (or, more likely, never calling into existence) anything it doesn’t need.
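To make the fractal point concrete, here’s a minimal Python sketch (my own toy, in no way Euclideon’s code) showing how one tiny equation yields endless zoomable detail, where the only cost driver is how many pixels you ask about:

```python
# A single tiny equation, unlimited zoom: nothing is stored ahead of
# time, so the work is bounded by the pixels on screen, not by a model.

def mandelbrot_escape(c: complex, max_iter: int = 100) -> int:
    """How quickly the point c escapes; evaluated purely on demand."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

def render(center: complex, scale: float, width: int = 80, height: int = 24):
    """Ask the equation only about the pixels currently on screen."""
    palette = " .:-=+*#%@"
    for row in range(height):
        line = ""
        for col in range(width):
            c = center + complex((col - width / 2) * scale,
                                 (row - height / 2) * scale)
            line += palette[min(mandelbrot_escape(c) // 10, 9)]
        print(line)

# Zoom anywhere, as deep as you like; the "file" never gets bigger.
render(center=-0.743 + 0.131j, scale=0.0001)
```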
And while a Mandelbrot fractal relies on repeating data, and voxel methods likewise use copies of data to minimize the memory requirements for unique instances, a procedural voxel point cloud would not have that constraint. We’re dealing with, just like a procedural texture, a very small algorithmic file which, when resolved to some number of steps, produces the shapes and details, except in a 3D setting via point cloud data.
What is actually on the screen at any given moment is only the viewable data; everything else is ignored in real time. It’s like asking for only 1/10th of a model because only 1/10th of it is visible at the moment, so you only need to load 1/10th of the file itself. But beyond even that is the idea that the data itself is algorithmic, and thus not a bloated 1:1 representation of the detail, just the procedural, algorithmic representation which can be solved into said 3D object. Again, this is quite possible: I’ve seen procedural demos weighing in at a mere 177 KB that give modern games, weighing in at many gigabytes, a run for their money.
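As a toy illustration of that on-demand idea (entirely my own construction, not Euclideon’s file format), picture a “model file” that is nothing but a seed and a rule. Points materialize deterministically for whichever cells the camera can see, and simply never exist otherwise:

```python
# Toy sketch: the whole "model" is one seed plus one rule. The same
# cell always yields the same points, so nothing needs to be stored.
import hashlib
import struct

def points_in_cell(seed: int, cx: int, cy: int, cz: int, density: int = 8):
    """Deterministically conjure the points inside one 1 m grid cell."""
    points = []
    for i in range(density):
        h = hashlib.sha256(struct.pack("<5q", seed, cx, cy, cz, i)).digest()
        # Map three 8-byte slices of the hash to offsets inside the cell.
        ox, oy, oz = (int.from_bytes(h[j:j + 8], "little") / 2**64
                      for j in range(0, 24, 8))
        points.append((cx + ox, cy + oy, cz + oz))
    return points

# "Load 1/10th of the model": only the cells the camera can see exist.
visible_cells = [(x, y, 0) for x in range(3) for y in range(2)]
cloud = [p for cell in visible_cells for p in points_in_cell(42, *cell)]
print(len(cloud), "points materialized; the rest of the world is just math")
```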
So what we have is a voxel point cloud system, which is easy enough to conclude; what isn’t so obvious are the algorithmic methods employed to transcend the limitations voxels normally run into.
Another point I’d like to bring up is the statement by Euclideon that the system is working more like a search engine algorithm. Why is this important?
Well, since we’re dealing with a voxel system that uses point cloud data for the items in the 3D space, the search-type algorithm only returns (out of the fractional file described above) the points in the cloud that actually matter, based on the myriad criteria assigned in the engine itself: camera distance, and so on.
So now we have point cloud data for a voxel engine that requires a fraction of the file space of the same content represented as a linear file type. Of that, only a fraction is ever required to be loaded for the screen space. And even then, the internal sorting algorithms ask for only a fraction of that data to resolve and display, based on the sorting and screen-space requirement limiters.
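Here’s a rough sketch of how I imagine those layered reductions working together; the structure and numbers are my assumptions, not anything Euclideon has confirmed. Walk a sparse octree and stop subdividing the moment a node would cover no more than one pixel, so the work is bounded by the screen rather than by the world (a real engine would also cull any node outside the view before descending):

```python
# Fetch just enough points to cover the screen, and not one more: stop
# descending once a node's projection is at most a single pixel.
from dataclasses import dataclass, field

@dataclass
class Node:
    point: tuple                    # representative sample for this cube
    children: list = field(default_factory=list)

def collect_visible(node, size_m, distance_m, px_per_m_at_1m, out):
    projected_px = size_m * px_per_m_at_1m / distance_m
    if projected_px <= 1.0 or not node.children:
        out.append(node.point)      # one dot per pixel is plenty
        return
    for child in node.children:     # only subdivide what might be seen
        collect_visible(child, size_m / 2, distance_m, px_per_m_at_1m, out)

# Demo: the same tree costs 1 point from afar, 4 points up close.
leaves = [Node((x, y, 0.0)) for x in (0.25, 0.75) for y in (0.25, 0.75)]
tree = Node((0.5, 0.5, 0.0),
            children=[Node((0.25, 0.5, 0.0), children=leaves[:2]),
                      Node((0.75, 0.5, 0.0), children=leaves[2:])])
far, near = [], []
collect_visible(tree, 1.0, distance_m=1000.0, px_per_m_at_1m=1000.0, out=far)
collect_visible(tree, 1.0, distance_m=1.0, px_per_m_at_1m=1000.0, out=near)
print(len(far), "point from afar;", len(near), "points up close")
```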
To See Into Forever
Do we still have a basis for a claim of “Unlimited Detail” in this case? Well, yes and no. What is unlimited detail, if not the idea that no matter how closely we inspect something, it never loses resolution? In much the same manner, as we look closer into reality we see molecular structures, atomic structures, and sub-atomic structures, and we even theorize about the underlying energy of quantum structures.
However, I’m beginning to see a parallel between a procedural/algorithmic voxel system and the fundamental and philosophical questions posed by people who study reality itself. In a quantum reality, are the underlying levels of detail actually there if we aren’t viewing them, or is this all just a well-organized simulation where detail and complexity resolve based on the conscious observer? Given the nature of quantum mechanics, the latter may be the answer, in that things don’t resolve unless consciously observed.
This is some seriously deep stuff to think about.
The nature of infinity may be that of infinite detail on demand, while everything else is non-existent until observed. Sort of like an on-demand reality, which might explain what’s outside the observable universe: not a damned thing until we can see that far, at which point something will be there.
Think of it like streaming HD video. While you watch, it’s high definition and moving, but it clearly has no requirement of loading the entire file before you can watch it. In fact, it only needs to load a fraction of that file and keep that fraction streaming as it plays. Now, if the movie were converted to a procedural-method file, that file might be many orders of magnitude smaller, and it would only have to resolve a fraction of itself to create the buffered portion in play, because only the portion about to be displayed is actually resolved algorithmically on demand, while the rest of the movie doesn’t exist until called into a specific instance.
We’re not trying to resolve the entire file through the procedural algorithm, only 30 still frames per second, discarding each one and moving on to the next fractional batch. And the reason it knows which portion of the algorithmic representation to ask for comes back to the “more like a search engine” approach Euclideon mentions.
There is also the “limitation” of animating voxel data, and I’ve already seen this argument used for why a dynamic voxel scene is Euclideon’s Achilles’ heel. I hate to burst that bubble, but animation of voxel point cloud data is possible, and so is rigging, as demonstrated in a thesis by Dennis Bautembach named simply “Animated Sparse Voxel Octrees”.
Final Thoughts
Whether or not Euclideon is bluffing isn’t the point. Personally, I don’t actually know if they’ve accomplished what they say, but I do have a good idea of how it would be very possible to do if somebody tried. What it takes is the ability to ignore traditional thinking and really think dynamically. Procedural methods, highly optimized point cloud searching, and intelligent methods to limit the required data to only the pixel screen space can make such a system at least feasible without breaking the laws of physics (or a typical computer) in the process.
Unlimited Detail is actually possible if you understand that you don’t need to load all of infinity at once to make it happen, or even acknowledge the need to store infinity in the first place. Algorithms are elegant representations of things: we can represent a pine cone, and most of nature, not as bloated 1:1 geometry in a huge file, but as a simple equation requiring only a few bytes, or at most kilobytes. When resolving it, we can scan through the number set to find the exact part we actually need for the instance in 3D, but we don’t really need to solve the whole equation in infinite detail to get our tree or pine cone, now do we? No, we only need to solve to a reasonable depth of that equation before we can declare that any further detail would be pointless and non-observable for the current instance. This in itself is the basis for the idea of level of detail to begin with. Actively and aggressively ignoring data, and using fractions of the file itself (which may already be a procedural point cloud), adds more bang for the buck and invalidates quite a lot of the arguments which say this sort of thing is impossible.
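To put a number on “a reasonable depth” (my own back-of-envelope assumptions, purely illustrative): stop recursing once one more split would produce sub-pixel voxels.

```python
# How deep is deep enough? Stop when a voxel projects to under a pixel.
import math

def useful_depth(object_size_m, distance_m, screen_px, fov_deg=60.0):
    # Pixels per meter at this viewing distance for a given field of view.
    px_per_m = screen_px / (2 * distance_m * math.tan(math.radians(fov_deg) / 2))
    # Each level halves the voxel; find where it drops below one pixel.
    return max(0, math.ceil(math.log2(object_size_m * px_per_m)))

print(useful_depth(1.0, 10.0, 1024))  # a 1 m pine cone at 10 m: ~7 levels
print(useful_depth(1.0, 0.5, 1024))   # nose pressed against it: ~11 levels
```

Even with your nose against the model, roughly eleven levels of subdivision cover it; everything deeper is non-observable and never gets solved.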
Since we aren’t required to solve the entire equation, only the portions relevant to the exact screen space at that still-frame moment, the amount of CPU/GPU power required drops substantially, as it is solving fractional data. So all of this talk about petabytes of data being required is actually nonsense. That is simply the old style of thought for 3D environments, not the new school of thought. They are wildly at odds with each other, much like classical physics and quantum physics don’t see eye to eye.
That’s a good analogy, actually... currently we’re using methodologies that resemble classical physics, while the next generation will be using what appears to be magic (quantum thinking) to the last generation of onlookers.
Arthur C. Clarke said it best:
Any sufficiently advanced technology is indistinguishable from magic.
Remember this the next time somebody pulls infinity out of their hat.
I don't disagree with what you're saying, but it's one of two scenarios that I see. A: it really does require 512 petabytes, or B: it will require an almost unlimited amount of processing power to do this procedurally. Either way you're talking about a lot of computing power to achieve it; perhaps something like this might work well in a cloud or something of that nature, where it can scale up rapidly. But personally I do not see this happening on desktop computers for a very long time. Perhaps one day soon we will have that power on our desktops, but it's pretty far off still if you ask me.
@Nebadon
1. It likely doesn't require anywhere near 512 petabytes.
2. It is also not likely to require unlimited processing.
Both of your assumptions are based on linear file approaches and not dynamic thinking. Do we need to infinitely solve the math for a Tree to get a 3D tree? Absolutely not. Do we even need to store petabytes of data for that procedural model? Again, no, because it's dynamic.
All we actually need to solve is what is visible in the 2D screen space, and that is the key factor here. Couple that with the built-in voxel LOD and an aggressive search algorithm that culls the data even further, and the engine would only ever be looking for gradual, fractional algorithmic solutions to render detail constrained to a 2D space.
Like I said, it doesn't require infinite processing or infinite filespace to render an infinite 3D Fractal. It requires a small math problem, and the ability to know that anything in infinity you can't see right at this moment... doesn't exist.
Aeonix --
So the viewer would send a message: "I'm standing at this point, looking in that direction."
Then the server would do a search, and come up with a set of equations that would generate the view, and send it back to the viewer.
And the viewer would run the equations as far as out as it needed for the display resolution and plot the pixels on the screen?
Like sending over "x^2 + y^2 = 1" instead of all the coordinates of each point in the circle, so if you've got a low-res game you can generate a circle in a dozen steps, and if you've got a high-res screen, you'd go out a hundred steps to get a finer level of detail?
Or are you talking about non-linear, recursive algorithms where a simple set of starting instructions can generate very complex structures in just a few steps?
Aren't graphics files already encoded like this?
Or are you talking about extending the same approach to the physical shapes of the objects, as well?
@Maria
ReplyDelete"Like sending over "x+y=1" instead of all the coordinates of each point in the circle, so if you've got a low-res game you can generate a circle in a dozen steps, and if you've got a high-res screen, you'd go out a hundred steps to get a finer level of detail?"
I believe that is a likely *starting* point for this; however, when they mentioned that it's working more like a search engine algorithm coupled with the 3D engine, it dawned on me:
Normally you'd go out 100 steps to get the finer detail, right? So it's like asking: what is the 100th decimal place of Pi? Normally we'd start at 3.141... and calculate each digit leading up to the 100th digit we actually need, but in a search algorithm, you'd simply say "skip to the 100th decimal place and calculate just that number," ignoring everything before it.
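Funny enough, that skip-ahead trick isn't fantasy where Pi is concerned: the Bailey-Borwein-Plouffe formula really does compute the nth hexadecimal digit of Pi without computing any of the digits before it (decimal-digit variants exist as well). A sketch:

```python
# Jump straight to the n-th hex digit of pi, skipping all earlier ones,
# via the Bailey-Borwein-Plouffe formula.

def pi_hex_digit(n: int) -> int:
    """The n-th hex digit after the point (n = 0 gives the first)."""
    def series(j: int) -> float:
        # Fractional part of the sum over k of 16^(n-k) / (8k + j).
        total = 0.0
        for k in range(n + 1):       # 16^(n-k) is an integer here
            total = (total + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1                    # plus a few shrinking tail terms
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                return total
            total = (total + term) % 1.0
            k += 1

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)

print(hex(pi_hex_digit(0)))   # 0x2 -- pi is 3.243F6A88... in hex
print(hex(pi_hex_digit(99)))  # the 100th digit, no earlier digits computed
```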
So while it has access to infinite detail (just as Pi is infinite detail for a circle or sphere), it doesn't necessarily have to calculate everything leading up to the step it actually needs, nor does it have to calculate infinitely, because there is a detail threshold past which you simply couldn't see any difference, much like we have with True Color. So we have a visual threshold that limits the data needed, plus the idea of algorithmic methods, as asserted in the "search engine" analogy. I suspect procedural methods are involved as well: the official video (on the site) mentioned polygons not surpassing the quality anytime soon, "except for procedural methods at this time." So what Euclideon describes is likely a procedural method used in conjunction with the search algorithm, effectively skipping many of the procedural model calculations done in the linear manner and selecting only the calculation of the 1000th or so step, while aggressively culling the data further, limited to only the screen-space pixel constraint. If it's not in view, it literally doesn't exist, not even in file format...
@Maria
I'm talking about doing this for the physical shapes as well. Since it's point cloud data, if it were proceduralized, it goes from a linear 1:1 total representation (a mesh) to an infinite-detail procedural method. The trick is that we aren't necessarily having to solve all the steps leading up to the one we need to resolve the geometry; instead, the search algorithm takes only the 1000th step of the procedural equation to solve, skipping every step before it.
Sort of like the difference between walking and teleporting. When we're walking somewhere, we have to cross everything between point A and point B to reach our destination. That's pretty much how current technology works, and the usual solution is essentially to increase the speed at which you can travel down that road (faster cars). But somebody who can teleport would simply skip everything in between and just be at point B.
Such is the analogy for linear versus non-linear approaches. Linear methods say we have to store 1:1 files with all the data (as we currently do), while non-linear methods say we can store that as equations which would resolve to the model in infinite detail (if you were to continually process them). Adding non-linear access to that equation via a search method makes it that much more powerful: like knowing you need the 10,000th step of the equation, but not having to solve it in a linear manner to get there; we tell the search system to give us *just* the 10,000th step (iteration) and ignore everything before it. So it's like asking for only a minuscule fraction of the data, and the file was stored as an equation to begin with, a fraction of the storage space from the start... it would be absolutely tiny. But we're dealing with the equivalent of dynamic file sizes, 2 KB through to infinity, and that's where most people seem to have trouble comprehending it, because we are so accustomed to thinking about things in a linear manner that non-linear seems like we're breaking the laws of the universe and thus must be impossible.
I really do want to believe that everything you say is possible, but if everything you said were true, why wouldn't Euclideon be advertising hardware requirements for this level of technology, or at the very least post the hardware they used to run the demo featured in the video? What kind of technology company would do this? Would they not realize that the first question from anyone with half a brain would be: what are the requirements? If we go by the Occam's razor principle, none of what Euclideon has done makes any sense; they could have easily avoided all of this condemnation by simply posting hardware requirements. The fact that they have not done this leads me to believe the requirements are so high it's not worth posting them until after they get their funding. I always keep an open mind in these kinds of situations, but it's Euclideon's own actions that force one to take a step back and say WTF!!
@Nebadon
In regard to system requirements, it may not actually be possible to post them in the usual sense. If it uses a dynamic non-linear approach, then even the models themselves would be around 100 KB while yielding the equivalent of unlimited detail through non-linear methodologies. The requirements in this case aren't pegged to the total static data; the screen space is the requirement gauge. So, give or take, the processing power plus RAM required for a 1024x768 resolution would be something along the lines of whatever it takes to render the equivalent still image, so many times per second. By the GPU and CPU standards of today, what we have available to us might well be absurdly overpowered for making this happen.
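Some rough arithmetic on that, using numbers I'm assuming purely for illustration: if the cost is roughly one tree descent per pixel, the per-frame work depends on the screen and the tree depth, not on how big the world nominally claims to be.

```python
# The screen is the budget: per-frame work is pixels x descent depth,
# regardless of how much "world" nominally exists. Assumed numbers.
pixels = 1024 * 768                # the only budget that matters
depth = 40                         # ~2^40 voxels across: an enormous world
visits_per_frame = pixels * depth  # ~31 million node visits
print(f"{visits_per_frame:,} node visits per frame; "
      f"{visits_per_frame * 30:,} per second at 30 fps")
```

Around a billion simple node visits per second is demanding, but it's within the realm of a modern CPU, and crucially, it doesn't grow when the world does.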
Occam's razor works exceedingly well when applied to linear methodologies, but becomes inconsequential when applied to non-linear methods. We're actually processing far less to get far more detail; it's an inverse ratio when we really think about it. It would be impossible if we applied linear methods, and that is where most people are coming from when trying to assess this, but the moment the methods are non-linear and dynamic, it's suddenly the explosive opposite of impossible.
Whether or not Euclideon is using a proceduralized sparse voxel system with non-linear access is up in the air; however, it is the only plausible method I can deduce for how such a system would be possible, and quite practical at that.
I just don't know. I understand what you're saying, but let's take vector graphics, for instance, and apply it to "Unlimited Detail". Vector graphics work in an identical manner to what you describe, but in practice they only work well when you have very straight or smooth edges (http://en.wikipedia.org/wiki/Vector_graphics). When you try to get the same level of detail with vector graphics that you would see in a high-resolution bitmap, it requires more resources than the bitmap would use. When you are talking about things like fractals, where a lot of the shapes constantly repeat, it's very easy to do what you describe; but when you start applying those same routines to things that do not use repeating patterns, i.e. a broken rock, a weathered building, rotting wood, things that are completely random in nature, then these same algorithms become increasingly complex and require very intense amounts of computational power to resolve. I have a lot of experience with vector graphics in Flash development, and when you apply vector-type algorithms to extreme detail, i.e. down to the atomic level, it just does not scale as you describe; the equation becomes so huge that the amount of processing power required to decode it in real time goes up incredibly. So I must reiterate that in my opinion there are two options: unlimited amounts of storage, or unlimited amounts of processing power. And let's say for instance that it does not take 512 petabytes; let's say it takes 1/100th of that. That's still 5 petabytes for a tiny little island, which is still incredibly absurd. Let's say they get 500:1 compression; that's still 1 petabyte. My point being that to get super-compression of the nature you describe would require a massive amount of processing power. Remember, this is a real-time 3D world you're moving about in, with possible animation of leaves and water and possibly falling objects. I just cannot resolve this in my mind without supercomputer-level power. On the other hand, I do hope I am wrong, but right now I just cannot see it.
@Nebadon
*smiles* It helps if you step away from linear thinking before you try to wrap your mind around it. Trust me, your biggest hindrance in trying to reconcile all of this is that you're basing it on the preconception of everything being linear-access, static data.
It scales if, and only if, you are released from the constraints of linear access and static model thinking. Otherwise, this whole notion seems entirely impossible.
So ask yourself - what if the constraints of linear thinking were removed? What if files weren't static representations? More importantly - what if none of it had to exist unless needed?
It's not a matter of compression, that's the old-style linear thinking talking. It's a matter of not having to compress it in the first place because it was never there.
Take some time to read through what I've said in this article, and *really* wrap your mind around it.
It's all about recursion, and the funny thing about recursion is that the formula is the same no matter where in the recursion you are. Therefore it makes no sense to care about the part of the recursion you're not looking at, nor does it matter what it is before you get there. That's the point of recursion: you can basically assume the output without actually calculating it. Sort of similar to P vs NP, but not with enough similarity to be an example of P vs NP. Fractals and set theory, though, are probably the key to understanding this idea well enough to grasp it.
@Yoshiko
I agree, even though the details from my side are rough. The basic idea is there, though, and I still believe there are non-linear methods at work, as per the "search engine" aspect mentioned in their descriptions. A lot of the information simply need not exist, nor does it, in whatever they are employing.
Fractals and set theory are definitely a key factor here as well; thank you for the extra clarification. Wrapping my head around it is hard enough, just getting past the linear "Holy sh** this is impossible!" mentality. I'd say it is definitely a very different combination of ideas than we are used to in this industry, which is why we're ingrained to reject it without really trying to think about it from all angles. Impossible? No... but it's definitely going to bend our minds getting used to it.
One thing I would like to point out before this topic gets forgotten is that regardless of how the objects are stored, or the size and detail of the objects used, the true revolution here is the rendering speed. Using their technique with current game-quality meshes in, say, a VR world such as Second Life or WoW would be like a 5000% boost in render performance, basically removing viewer render lag from the equation. This is going to be a big issue with current methods as mesh arrives on the grid in SL and OS.