tag:blogger.com,1999:blog-21946045.post2493415521590715196..comments2022-11-03T11:39:13.662-05:00Comments on Andromeda Media Group: Quantum RushWill Burnshttp://www.blogger.com/profile/14369186130470176679noreply@blogger.comBlogger12125tag:blogger.com,1999:blog-21946045.post-71071438490793278642011-08-11T13:54:05.245-05:002011-08-11T13:54:05.245-05:00One thing I would like to point out before this to...One thing I would like to point out before this topic gets forgotten is that regardless of how the objects are stored for the game, or the size and detail of the objects used, the true revolution here is the rendering speed. Using their technique with a current game-quality mesh in, say, a VR world such as Second Life or WoW would be like a 5000% boost in render performance, basically removing viewer render lag from the equation. This is going to be a large issue under current methods, with mesh going onto the grid in SL and OS.Yoshiko Fazukuhttps://www.blogger.com/profile/04343191796288070151noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-58147083802094803612011-08-06T19:46:13.363-05:002011-08-06T19:46:13.363-05:00@Yoshiko
I agree, even though the details from my sid...@Yoshiko<br /><br />I agree, even though the details from my side are rough. The basic idea is there, though, and I still believe there are non-linear methods at work, as per the "search engine" aspect mentioned in their descriptions. A lot of the information simply need not exist, nor does it, in whatever they are employing.<br /><br />Fractals and Set Theory are definitely a key factor here as well. Thank you for a bit more clarification. Wrapping my head around it is hard enough - just getting past the linear "Holy sh** this is impossible!" mentality. I'd say it is definitely a very different combination of ideas than we may be used to in this industry, which is why we're ingrained to reject it without really trying to think about it from all angles. Impossible? No... but it's definitely going to bend our minds trying to get used to it.Will Burnshttps://www.blogger.com/profile/14369186130470176679noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-16292008381109035012011-08-06T18:41:43.200-05:002011-08-06T18:41:43.200-05:00It's all about recursion, and the funny thing abou...It's all about recursion, and the funny thing about recursion is that the formula is the same no matter where in the recursion you are. Therefore it makes no sense to care about the level of the recursion you're not looking at, nor does it matter what comes before you get there. That's the point of recursion: you can basically assume the output without actually calculating it. It's sort of similar to P vs NP, though not similar enough to be an example of P vs NP. Fractals and set theory, though, are probably the key to understanding this idea well enough to grasp it.Yoshiko Fazukuhttps://www.blogger.com/profile/04343191796288070151noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-30178786547954741562011-08-06T17:27:01.103-05:002011-08-06T17:27:01.103-05:00@Nebadon
*smiles* It helps if you step away from ...@Nebadon<br /><br />*smiles* It helps if you step away from linear thinking before you try to wrap your mind around it. Trust me, what seems to be your biggest hindrance in trying to reconcile all of this is that you're basing it on the preconception of everything being linear-access, static data. <br /><br />It scales if, and only if, you are released from the constraints of linear access and static-model thinking. Otherwise, this whole notion seems entirely impossible.<br /><br />So ask yourself - what if the constraints of linear thinking were removed? What if files weren't static representations? More importantly - what if none of it had to exist unless needed? <br /><br />It's not a matter of compression; that's the old-style linear thinking talking. It's a matter of not having to compress it in the first place, because it was never there.<br /><br />Take some time to read through what I've said in this article, and *really* wrap your mind around it.Will Burnshttps://www.blogger.com/profile/14369186130470176679noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-76818160091269647062011-08-06T16:27:36.360-05:002011-08-06T16:27:36.360-05:00I just don't know, I understand what you're sayi...I just don't know. I understand what you're saying, but let's take vector graphics, for instance, and apply them to "Unlimited Detail". Vector graphics work in an identical manner to the one you describe (http://en.wikipedia.org/wiki/Vector_graphics), but in practice they only work well when you have very straight or smooth edges. When you try to get the same level of detail with vector graphics that you would see in a high-resolution bitmap, it requires more resources than the bitmap would use.
When you are talking about things like fractals, where a lot of the shapes constantly repeat, it's very easy to do what you describe. But when you start applying those same routines to things that do not use repeating patterns - a broken rock, a weathered building, rotting wood, things that are completely random in nature - then these same algorithms become increasingly complex and require very intense amounts of computational power to resolve. I have a lot of experience with vector graphics in Flash development, and when you apply vector-type algorithms to extreme detail, i.e. down to the atomic level, it just does not scale as you describe; the equation becomes so huge that the amount of processing power required to decode it in real time goes up incredibly. So I must reiterate that in my opinion there are two options: unlimited amounts of storage, or unlimited amounts of processing power to achieve it. Let's say for instance that it does not take 512 petabytes; let's say it takes 1/100th of that. That's still 5 petabytes for a tiny little island, which is still incredibly absurd. Let's say they get 500:1 compression; that's still roughly a petabyte. My point is that to get super-compression of the nature you describe would require a massive amount of processing power. Remember, this is a real-time 3D world you're moving about in, with possible animation of leaves and water and possibly falling objects. I just cannot resolve this in my mind without supercomputer-level power. On the other hand, I do hope I am wrong, but right now I just cannot see it.Nebadonhttps://www.blogger.com/profile/17984022349519987961noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-43231514414669909942011-08-06T16:08:33.715-05:002011-08-06T16:08:33.715-05:00@Nebadon
In regard to system requirements, it may...@Nebadon<br /><br />In regard to system requirements, it may not actually be possible to post them. If it uses a dynamic non-linear approach, then even the models themselves could be around 100 KB while yielding the equivalent of unlimited detail through non-linear methodologies. The requirements in this case stem from the fact that they do not need the totally static method, and therefore screen space is the requirement gauge. So, give or take, the required processing power plus RAM for a 1024x768 resolution would be something along the lines of whatever it takes to render the equivalent still image each second. By GPU and CPU standards of today, what we have available to us might well be considered absurdly overpowered for making this happen.<br /><br />The Occam's Razor principle works exceedingly well when applied to linear methodologies, but becomes inconsequential when applied to non-linear methods. We're actually processing far less for a detail level that is far greater. It's an inverse ratio when we really think about it.
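To make the screen-space argument concrete, here is a hypothetical sketch of a procedural sparse-voxel lookup. Everything here is invented for illustration - the hash-based `occupied` function merely stands in for whatever generator a real system would use - but it shows the core idea: nothing is ever stored, and a query only descends as deep as the on-screen pixel footprint warrants.

```python
import hashlib

def occupied(x, y, z, depth):
    """Deterministic pseudo-random occupancy for an octree cell.

    No model data is stored anywhere: the 'model' is just this
    function, so only the cells a query actually visits ever exist."""
    key = f"{depth}:{x}:{y}:{z}".encode()
    return hashlib.sha256(key).digest()[0] < 160  # ~62% of cells filled

def query(x, y, z, depth, max_depth):
    """Descend the implicit octree until max_depth (the screen-space
    detail limit) or until an empty ancestor cell culls everything
    below it. Work is bounded by max_depth, not by any model size."""
    for d in range(depth, max_depth + 1):
        if not occupied(x >> (max_depth - d), y >> (max_depth - d),
                        z >> (max_depth - d), d):
            return d  # empty at this level: all finer detail is culled
    return max_depth  # solid down to the finest level we asked for
```

Note that `max_depth` would come from the pixel footprint: a distant surface never gets queried any deeper than one screen pixel can show.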
It would be impossible if we applied it to linear methods, and that is where most people are coming from when trying to assess this - but the moment we assume non-linear and dynamic methods, suddenly it's explosively the opposite of impossible.<br /><br />Whether or not Euclideon is using a proceduralized sparse voxel system with non-linear access is up in the air; however, this is the only plausible method I can deduce by which such a system is possible, and quite easily so in practice.Will Burnshttps://www.blogger.com/profile/14369186130470176679noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-60211272995013530922011-08-06T14:31:31.854-05:002011-08-06T14:31:31.854-05:00I really do want to believe that everything you sa...I really do want to believe that everything you say is possible, but if everything you said were true, why wouldn't Euclideon be advertising hardware requirements for this level of technology, or at the very least post the level of hardware they used to demo the technology featured in the demo video? What kind of technology company would do this? Would they not realize that the first question from anyone with half a brain in their head would be: what are the requirements? If we were to go by the Occam's razor principle, none of what Euclideon has done makes any sense. They could have easily avoided all of this condemnation by simply posting hardware requirements; the fact that they have not done this leads me to believe the requirements are so high it's not worth posting them until after they get their funding. I always keep an open mind in these kinds of situations, but it's Euclideon's own actions that force one to take a step back and say WTF!!Nebadonhttps://www.blogger.com/profile/17984022349519987961noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-72347105328874693492011-08-06T11:06:38.322-05:002011-08-06T11:06:38.322-05:00@Maria
I'm talking about doing the process fo...@Maria<br /><br />I'm talking about doing the process for the physical shapes as well. Since it's point cloud data, if it were proceduralized, then it goes from a linear 1:1 total representation (a mesh) to an infinite-detail procedural method. The trick is that we don't necessarily have to solve all the steps leading up to the one we need to resolve the geometry; instead the search algorithm solves only, say, the 1000th step in the procedural equation, skipping every step before it.<br /><br />Sort of like the difference between walking and teleporting. When we're walking somewhere, we have to cross everything between point A and point B to reach our destination. That's pretty much how current technology works, and the usual solution is essentially to increase the speed at which you can travel down that road (faster cars). But somebody who can teleport would simply skip everything in between and just be at point B. <br /><br />Such is the analogy for linear versus non-linear approaches. Linear methods say we have to store 1:1 files with all the data (as we currently do); non-linear methods say we can store that as equations which would resolve to the model in infinite detail (if you were to process them continually). Adding non-linear access to that equation via a search method makes it that much more powerful: it's like knowing you need the 10000th step of the equation but not having to solve it in a linear manner to get there - we tell the search system to give us *just* the 10000th step (iteration) and ignore everything before it. So it's like asking only for a minuscule fraction of the data, when the file would be stored as an equation, and thus a fraction of the file storage space, to begin with... it would be absolutely tiny.
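The walking-versus-teleporting analogy has a concrete small-scale counterpart in the Fibonacci "fast doubling" identities, which reach step n in roughly log2(n) jumps instead of n single steps. This is only an illustration of the linear-versus-direct-access idea, not anything Euclideon has described:

```python
def fib_walk(n):
    # The "walking" route: visit every step between A and B.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_teleport(n):
    # The "teleport" route (fast doubling): F(2m) = F(m)*(2*F(m+1)-F(m))
    # and F(2m+1) = F(m)^2 + F(m+1)^2. Not a literal one-hop skip, but
    # it reaches step n in ~log2(n) jumps, never visiting the vast
    # majority of intermediate steps.
    def fd(k):
        if k == 0:
            return (0, 1)  # (F(0), F(1))
        a, b = fd(k >> 1)
        c = a * (2 * b - a)   # F(2m)
        d = a * a + b * b     # F(2m+1)
        return (d, c + d) if k & 1 else (c, d)
    return fd(n)[0]
```

For large n the walk does n additions while the teleport does a few dozen, which is the flavour of speed-up being claimed for non-linear access.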
But we're dealing with the equivalent of dynamic file sizes - 2 KB -> infinity - and that's what most people seem to have trouble comprehending, because we are so accustomed to thinking about things in a linear manner that non-linear seems like we're breaking the laws of the universe and thus must be impossible.Will Burnshttps://www.blogger.com/profile/14369186130470176679noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-13791723767176802812011-08-06T10:28:38.093-05:002011-08-06T10:28:38.093-05:00@Maria
"Like sending over "x^2+y^2=1" ...@Maria<br /><br />"Like sending over "x^2+y^2=1" instead of all the coordinates of each point in the circle, so if you've got a low-res game you can generate a circle in a dozen steps, and if you've got a high-res screen, you'd go out a hundred steps to get a finer level of detail?"<br /><br />I believe that is a likely *starting* point for this; however, when they mentioned that it works more like a search engine algorithm coupled with the 3D engine, it dawned on me: <br /><br />Normally you'd go out 100 steps to get the finer detail, right? So it's like asking: what is the 100th decimal place of Pi? Normally we'd start at 3.141... and calculate each digit leading up to the 100th digit we actually need, but with a search algorithm, you'd simply say "skip to the 100th decimal place and calculate just that number", ignoring everything before it. <br /><br />So while it has access to infinite detail (just as Pi is infinite detail for a circle or sphere), it doesn't necessarily have to calculate everything leading up to the step it actually needs, nor does it have to calculate infinitely, because there is a detail threshold past which you simply couldn't see any difference if it got better, much like we have with True Color. So we have a visual threshold, which limits the data needed, plus the idea of algorithmic methods, as asserted in the "search engine" analogy. I suspect procedural methods as well, as hinted at when the official video (on the site) mentioned polygons not surpassing the quality anytime soon, "except for Procedural methods at this time". So what Euclideon is describing likely uses a procedural method too, but in conjunction with the search algorithm, to effectively skip many of the linear procedural model calculations and select only the calculation of the 1000th or so step, while aggressively culling the data further, limited only by the screen-space pixel constraint.
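The Pi example is real mathematics, with one caveat: the Bailey-Borwein-Plouffe formula computes the nth digit of Pi without producing any of the digits before it, but it works in base 16, not decimal (no decimal analogue is known), a detail the analogy glosses over. A minimal sketch:

```python
def pi_hex_digit(n):
    """Return the (n+1)-th hexadecimal digit of Pi after the point,
    via the Bailey-Borwein-Plouffe formula. No earlier digits are
    ever produced - the 'skip straight to step n' behaviour."""
    def partial(j):
        # Fractional part of sum over k of 16^(n-k) / (8k + j),
        # using modular exponentiation to keep every term tiny.
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while True:  # rapidly vanishing tail terms
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                return s % 1.0
            s += term
            k += 1
    frac = (4 * partial(1) - 2 * partial(4)
            - partial(5) - partial(6)) % 1.0
    return int(frac * 16)
```

Pi in hex is 3.243F6A88..., and `pi_hex_digit(1000)` answers for position 1000 without ever emitting the first thousand digits.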
If it's not in view, it literally doesn't exist, not even in file format...Will Burnshttps://www.blogger.com/profile/14369186130470176679noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-16003271009178416062011-08-06T10:17:31.551-05:002011-08-06T10:17:31.551-05:00Aeonix --
So the viewer would send a message: &qu...Aeonix --<br /><br />So the viewer would send a message: "I'm standing at this point, looking in that direction."<br /><br />Then the server would do a search, come up with a set of equations that would generate the view, and send it back to the viewer.<br /><br />And the viewer would run the equations as far out as it needed for the display resolution and plot the pixels on the screen?<br /><br />Like sending over "x^2+y^2=1" instead of all the coordinates of each point in the circle, so if you've got a low-res game you can generate a circle in a dozen steps, and if you've got a high-res screen, you'd go out a hundred steps to get a finer level of detail?<br /><br />Or are you talking about non-linear, recursive algorithms where a simple set of starting instructions can generate very complex structures in just a few steps?<br /><br />Aren't graphics files already encoded like this?<br /><br />Or are you talking about extending the same approach to the physical shapes of the objects, as well?Maria Korolovhttps://www.blogger.com/profile/17986688121266319555noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-54895911110168752762011-08-05T23:11:55.137-05:002011-08-05T23:11:55.137-05:00@Nebadon
1. It likely doesn't require anywher...@Nebadon<br /><br />1. It likely doesn't require anywhere near 512 petabytes.<br /><br />2. It is also not likely to require unlimited processing.<br /><br />Both of your assumptions are based on linear file approaches rather than dynamic thinking. Do we need to infinitely solve the math for a tree to get a 3D tree? Absolutely not. Do we even need to store petabytes of data for that procedural model? Again, no, because it's dynamic. <br /><br />All we actually need to solve is what is visible in the 2D screen space, and that is the key factor here. Couple that with the built-in voxel LOD and an aggressive search algorithm that culls the data even further, and we're only ever looking for gradual, fractional algorithmic solutions to render detail constrained to a 2D space.<br /><br />Like I said, it doesn't require infinite processing or infinite file space to render an infinite 3D fractal. It requires a small math problem, and the ability to know that anything in infinity you can't see right at this moment... doesn't exist.Will Burnshttps://www.blogger.com/profile/14369186130470176679noreply@blogger.comtag:blogger.com,1999:blog-21946045.post-43941424729331820472011-08-05T23:03:13.434-05:002011-08-05T23:03:13.434-05:00I don't disagree with what you're saying, but it's one o...I don't disagree with what you're saying, but it's one of two scenarios that I see: A, it really does require 512 petabytes, or B, it will require an almost unlimited amount of processing power to do this procedurally. Either way you're talking about a lot of computing power to achieve it; perhaps something like this might work well in a cloud or something of that nature, where it can scale up rapidly.
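The infinite-fractal point made above is easy to demonstrate: the Mandelbrot set has unbounded detail and zero stored data, yet rendering any view costs at most width x height x iteration-cap, because points outside the view are simply never evaluated. A small illustrative sketch (the resolution and iteration cap are arbitrary choices):

```python
def render_view(cx, cy, span, width=32, height=16, max_iter=50):
    """ASCII-render one window onto the Mandelbrot set. The 'model'
    is a single equation with infinite detail and no stored data;
    the work done is bounded by the pixel count, not by how deep
    the detail goes."""
    rows = []
    for py in range(height):
        row = []
        for px in range(width):
            # Only points actually on screen are ever evaluated.
            x0 = cx + (px / width - 0.5) * span
            y0 = cy + (py / height - 0.5) * span
            x = y = 0.0
            for _ in range(max_iter):
                x, y = x * x - y * y + x0, 2 * x * y + y0
                if x * x + y * y > 4.0:
                    row.append(' ')  # escaped: outside the set
                    break
            else:
                row.append('#')  # stayed bounded: inside (to this cap)
        rows.append(''.join(row))
    return '\n'.join(rows)
```

Zooming in deeper does eventually call for a higher iteration cap to stay sharp, but the cost still tracks the pixels you are looking at rather than any notion of model size.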
But personally I do not see this happening on desktop computers for a very long time. Perhaps one day soon we will have that power on our desktops, but it's pretty far off still if you ask me.Nebadonhttps://www.blogger.com/profile/17984022349519987961noreply@blogger.com