May 25, 2011

Mesh is Dead. Long Live Mesh!

This is going to be one of those posts where I’ve been awake far too long and the Mountain Dew is no longer having its wondrous medicinal effects upon my psyche. Bear with me, though, because this is going to be an interesting post.

 

Mesh is dead. There, I’ve said it publicly.

 

Actually, the term is Dead on Arrival, but the problem is that nobody got the memo.

 

Sure, it’ll make waves and somehow revolutionize the way people create things in SecondLife, but that’s a temporary situation. It’s not really a revolution, but a nod to the past and the static nature of virtual worlds and the content created for them. It is this static approach that has caused the biggest headaches for developers and virtual worlds alike since the dawn of the pixel. I know right now it doesn’t seem like a problem, or that complexity is just a matter of waiting for hardware to catch up and render even more data at lightning speed. But in the end the question isn’t whether our hardware will eventually catch up to or surpass our current top-end expectations; it’s whether we’re intelligently building our systems to scale, offering the absolute best we can in less bandwidth and less computation at the same time.

 

I’m not even going to limit myself to just Mesh either, because that’s just a myopic view of the situation as a whole. No, let’s go full-tilt bozo with this and declare static methodologies defunct altogether.

 

Filesize does not equal quality. Just because you manage to produce a 25 MB PNG at 60,000x60,000 resolution does not mean the filesize (or the processing power required to load it) is even remotely justified. I’m going to apply this train of thought to mesh generation, textures, and every other aspect of environmental modeling, with the exception of audio, because quite frankly audio shouldn’t necessarily be dynamically generated (yet).

 

Mesh is a static bundle of NURBS, triangles, primitives, etc., wherein the entirety of the object’s complexity is stored up front. It has a very discernible limitation: a top end for fidelity. What price do we pay for this high-end mesh? We pay for it in filesize and in CPU/GPU render time. If the filesize doesn’t kill you, the Render Cost and LOD surely will.
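
To put rough numbers on that price, here is a back-of-the-envelope sketch in Python. The byte counts (32 bytes per vertex, 8 bytes per rule parameter) are illustrative assumptions, not measurements of any real format, but they show where the weight goes:

# Rough back-of-the-envelope comparison; the byte counts are illustrative
# assumptions, not measurements of any real asset format.

def static_mesh_bytes(triangle_count, bytes_per_vertex=32):
    # Approximate payload of a baked triangle mesh: ~3 vertices per triangle,
    # each carrying position, normal, and UV data (assumed 32 bytes each).
    return triangle_count * 3 * bytes_per_vertex

def procedural_bytes(parameter_count, bytes_per_param=8):
    # A procedural description only ships the rule parameters.
    return parameter_count * bytes_per_param

# A moderately detailed building baked down to 200,000 triangles...
print(static_mesh_bytes(200_000))   # ~19 MB of static geometry
# ...versus the same building described by a few hundred rule parameters.
print(procedural_bytes(300))        # ~2.4 KB of procedural input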

 

The same applies to textures. As far as I can recall, the solution for getting higher texture fidelity in a virtual environment has been to simply create a higher-resolution image, in powers of two of course (4, 8, 16, 32, 64, 128, 256, 512, 1024), and again there is an artificial ceiling to this approach. I remember when the big buzzword from id Software was “MegaTexture,” wherein absolutely gargantuan texture files were used to cover the entire game-world surface at high fidelity. Yes, it worked and the results were really good; however, the file sizes were overkill and wasteful, not to mention that by today’s standards they look like crap.

 

Over the years I’ve sort of sat around and observed the trends in virtual environments, the… process of creation, and similar questions always seem to arise at the table in the planning stages. Do we support things like Collada as a model format? Do we allow JPG, PNG, TIFF and others for textures? Even in the IEEE Virtual Worlds Standards Group I hear similar conversations, and it baffles me to no end.

 

The question shouldn’t be a choice between something like Collada, Maya, or Lightwave3D, because quite frankly the answer is none of the above. The same holds true for whether we should support PNG, JPG, GIF, APNG, MNG, PSD, or TIFF. The answer, ideally, is none of them at all. If anything, these static formats are the exception and not the rule in a truly dynamic and scalable Metaverse, used only as a fallback for legacy compatibility.

 

The future, assuming it ever arrives, is not in static methodologies and approaches; it is in dynamic ones. As far as the 3D mesh itself is concerned, the future is in things like the Generative Modeling Language (GML), where St. Patrick’s Cathedral in all of its 3D glory weighs in at under 50 KB (18 KB if the file is zipped). With generative modeling, we’re talking about a procedural methodology that can just as easily generate more or less detail depending on the hardware running it, but do so from the same information regardless. We’re talking about a procedural model of the world, here…

 

Traditionally, 3D objects and virtual worlds are defined by lists of geometric primitives: cubes and spheres in a CSG tree, NURBS patches, a set of implicit functions, a soup of triangles, or just a cloud of points.

 

The term 'generative modeling' describes a paradigm change in shape description, the generalization from objects to operations: A shape is described by a sequence of processing steps, rather than just the end result of applying operations. Shape design becomes rule design. This approach is very general and it can be applied to any shape representation that provides a set of generating functions, the 'elementary shape operators'. Its effectiveness has been demonstrated, e.g., in the field of procedural mesh generation, with Euler operators as complete and closed set of generating functions for meshes, operating on the halfedge level.

 

Generative modeling gains its efficiency through the possibility to create high-level shape operators from low-level shape operators. Any sequence of processing steps can be grouped together to create a new 'combined operator'. It may use elementary operators as well as other combined operators. Concrete values can easily be replaced by parameters, which makes it possible to separate data from operations: The same processing sequence can be applied to different input data sets. The same data can be used to produce different shapes by applying different combined operators from, e.g., a library of domain-dependent modeling operators. This makes it possible to create very complex objects from only a few high-level input parameters, such as for instance a style library.

 

Generative-Modeling.org
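
To make the “shape as a sequence of operations” idea concrete, here’s a toy sketch in Python. The operator names are invented for illustration and are not actual GML syntax; the point is that a high-level “combined operator” is built from lower-level ones and driven by a handful of parameters:

# A toy sketch of the generative idea: a "shape" is a list of operations,
# not a baked result. The operator names are invented for illustration
# and are not GML syntax.

def translate(dx, dy, dz):
    return ("translate", dx, dy, dz)

def box(w, h, d):
    return ("box", w, h, d)

def repeat(ops, count, spacing):
    # Combined operator: stamp a sub-sequence of operations 'count' times.
    out = []
    for i in range(count):
        out.append(translate(i * spacing, 0, 0))
        out.extend(ops)
    return out

def facade(window_count, window_width):
    # Higher-level combined operator built entirely from lower-level ones.
    # Swapping the parameters yields a different building from the same rule.
    window = [box(window_width, 1.5, 0.1)]
    return repeat(window, window_count, window_width * 1.5)

# The same rule, two very different facades, a few bytes of parameters apart.
small = facade(window_count=4,  window_width=1.0)
large = facade(window_count=40, window_width=0.8)
print(len(small), len(large))   # 8 vs 80 operations, generated on demand

The only thing that ever needs to travel over the wire is the rule and its parameters; the geometry itself is produced on the client.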

 

Algorithmic modeling technologies and methodologies are the future. Instead of worrying whether content will run on any given system, we can rest assured that (with common sense in the coding department) algorithmic content is displayed at whatever fidelity and resolution the system it runs on can comfortably maintain, streamed through the GPU.
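
In practice that scaling decision can be as simple as picking the highest detail level that still fits in a frame. A minimal sketch, with an assumed cost model and a 60 FPS budget (none of these numbers come from a real engine):

# A minimal sketch of "render at whatever fidelity this machine can hold":
# the cost model and budget numbers are assumptions for illustration only.

def pick_detail_level(ms_per_million_tris, budget_ms=16.6, max_level=8):
    # Choose the highest subdivision level whose estimated cost still fits
    # in one frame (16.6 ms ~= 60 FPS). Each level roughly quadruples triangles.
    level = 0
    for candidate in range(1, max_level + 1):
        tris_millions = 0.01 * (4 ** candidate)   # assumed base of 10k triangles
        if tris_millions * ms_per_million_tris <= budget_ms:
            level = candidate
    return level

print(pick_detail_level(ms_per_million_tris=2.0))   # fast GPU  -> level 4
print(pick_detail_level(ms_per_million_tris=40.0))  # weak GPU  -> level 2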

 

 

This entire demo is completely generated from 177 KB

 

Farbrausch .debris Real Time Demo: http://www.pouet.net/prod.php?which=30244

 

If you have a crappy computer, the content will scale back in fidelity to give you the much-needed extra FPS. Sure, the virtual environment won’t look as pretty as it does on your neighbor’s monster gaming rig, but you never should have expected it to.

 

Procedural textures are part of this foray as well, able to scale up and down to extremes based on the capability of the computer running them. Combine GML and procedural textures and we end up with about 100 KB of data that can theoretically scale upward in fidelity automatically to rival movie-studio quality, using the same data a home computer renders in real time at lower fidelity. What the hell, run it on a wimpy iPad or Android phone while we’re at it, since dynamic systems like these can scale down to meet the hardware without breaking a sweat.
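
A minimal sketch of what “resolution comes from evaluation, not from the file” means. This is crude, uninterpolated lattice noise purely for illustration, nothing like Allegorithmic’s actual pipeline, but the same few parameters can be rendered at any resolution the hardware can afford:

# Crude multi-octave lattice noise: the texture is a function, so resolution
# is chosen at evaluation time instead of being baked into a file.
import random

def lattice_noise(x, y, seed=42, octaves=4):
    # Sum a few octaves of pseudo-random lattice values at (x, y) in [0, 1].
    total, amplitude, frequency = 0.0, 1.0, 4.0
    for octave in range(octaves):
        xi, yi = int(x * frequency), int(y * frequency)
        key = (xi * 73856093) ^ (yi * 19349663) ^ (octave * 83492791) ^ seed
        total += random.Random(key).random() * amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total

def render(resolution):
    # Evaluate the same texture definition at whatever resolution is asked for.
    return [[lattice_noise(i / resolution, j / resolution)
             for i in range(resolution)] for j in range(resolution)]

thumbnail = render(64)    # cheap preview for weak hardware
detailed  = render(512)   # same handful of parameters, many more pixels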

 

 

Doing more in 2 KB than our best 200 KB PNG

 

Allegorithmic Substance: http://www.allegorithmic.com

 

Toss in the obligatory shaders, parallax mapping, and normal mapping, mix in tessellation routines that prioritize fidelity by range, and we end up with a system that makes anything we have today look like an Atari 2600 game in comparison.
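
The “prioritize fidelity by range” part is straightforward: nearer surfaces get more subdivision. A sketch with assumed falloff constants, not pulled from any real engine:

# A sketch of range-prioritized tessellation: nearer surfaces get more
# subdivision. The falloff constants are assumptions for illustration.

def tessellation_factor(distance_m, max_factor=64, min_factor=1, falloff_m=10.0):
    # Halve the subdivision factor every 'falloff_m' metres of distance.
    factor = max_factor / (2 ** (distance_m / falloff_m))
    return max(min_factor, min(max_factor, int(factor)))

for d in (1, 10, 50, 200):
    print(d, tessellation_factor(d))   # 1m -> 59, 10m -> 32, 50m -> 2, 200m -> 1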

 

We can do buildings in GML-type languages, procedural textures with something like Allegorithmic Substance, trees with fractal rule sets, grass with vertex shaders, depth and detail through tessellation routines, and even simulated surface relief and specular response using normal and specular maps. Hell, I’m even convinced the avatar itself could be generated from a well-constructed GML model, taking up under 50 KB of file size while remaining fully editable.
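
The “trees with fractal rule sets” bit is the easiest to show: a textbook L-system, where a rule set of a few dozen bytes expands into as much branching detail as you care to render. This is a generic example, not any particular engine’s rules:

# A toy L-system sketch of "trees from fractal rule sets": a few bytes of
# rules expand into arbitrarily detailed branching structure.

def expand(axiom, rules, iterations):
    # Rewrite the string once per iteration; detail grows with each pass.
    for _ in range(iterations):
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}   # F: grow, +/-: turn, []: branch

low_detail  = expand("F", rules, 2)   # coarse silhouette for distant trees
high_detail = expand("F", rules, 5)   # dense branching for close-ups
print(len(low_detail), len(high_detail))  # detail scales, the rule set does not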

 

Of course, I’m also a huge fan of synergistic networking processes, which is to say taking the dynamic approach a step further and applying it to the underlying network architecture itself, creating a hybrid decentralized system with centralized gateways for stability and security.

 

While we’re at it, we may as well address realistic lighting for trees and foliage.

 

 

[Image: real-time tree rendering by Kévin Boulanger]

http://kevinboulanger.com/trees.html

 

 

And for good measure let’s add some realistic grass generated through real-time shaders.

 

[Image: real-time grass rendering by Kévin Boulanger]

http://kevinboulanger.com/grass.html

 

 

The world is dynamic by nature. The same base genetic code provides the blueprint, and variations appear through nature or nurture. We should be designing our virtual worlds in the same manner: the underlying algorithmic data acts as a base, and then variations and influences are introduced to shape the final outcome. The result is breathtaking beauty and variance from the same data at a fraction of the file size and bandwidth, with fidelity free to scale upward to unimaginable heights without recreating the world manually.
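
In code, that “same genetic code, different individuals” idea is nothing more than a shared rule set plus a per-instance seed. A sketch with made-up parameters:

# A sketch of "one genome, many individuals": shared generator parameters
# plus a per-instance seed yield endless variation. Values are illustrative.
import random

BASE_TREE = {"trunk_height": 6.0, "branch_angle": 25.0, "leaf_density": 0.8}

def grow_instance(base, seed, variation=0.15):
    # Jitter every base parameter by up to +/- 15% using a per-instance seed.
    rng = random.Random(seed)
    return {key: value * rng.uniform(1 - variation, 1 + variation)
            for key, value in base.items()}

# A whole forest shipped as one small rule set plus one integer per tree.
forest = [grow_instance(BASE_TREE, seed=i) for i in range(10_000)]
print(forest[0], forest[9_999])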

 

Think about this for a moment.

 

The ability to create something that will only get better as hardware allows for more processing.

 

Automatically.

 

Seems like a no-brainer to me, but the future isn’t here yet (apparently), so Long Live Mesh!

 

I’ll check back in ten years to see if Mesh is still the new buzzword in virtual environments.

5 comments:

  1. Very interesting post. As I was delving into it I somehow expected to see Allegorithmic Substance mentioned in there :)

    This stuff is fascinating, I knew nothing about GML. However, I wonder... how does this fit with Second Life's model? Can GML be used inworld? And for external modeling, does it require special 3d tools? Or, can GML be generated as a translation of whatever 3d format is used?

    ReplyDelete
  2. The photorealism of Blue Mars is amazing. It hates my PC however and I have a fairly good one by most standards.

    It would be awesome to see that in Second Life. Great post Will.

    Skylar

    ReplyDelete
  3. that would be a great step in the right direction for sl if they would ever be willing to do a rewrite for it

    ReplyDelete
  4. @Indigo

    I mention GML as a possibility, but I am not aware of whether it can translate existing models into algorithmic models in the same manner that Allegorithmic offers Bitmap2Material, wherein static bitmapped images can be converted into dynamic material files. I would say it is very possible to import an existing model in, say, Collada, and then convert it to a GML-based dynamic model format, but as far as I am aware, no such exploration has been done.

    As for how it would fit into SecondLife's existing model, it likely doesn't. For one, the idea of Rendering Cost goes out the window, and with it the estimation of how to charge "per prim," since a dynamic model would have no preset Rendering Cost to calculate from (rendering is resolved client-side at widely varying levels of appropriate fidelity). In a dynamic metaverse system built this way end to end, the synergistic networking approach addresses this in that asset storage becomes less of a pricing issue, and the focus shifts squarely to in-world content and internal micro-transaction revenue.

    @Skylar

    BlueMars is actually a poor example of the dynamic object/texture paradigm, in that it uses Crytek's CryEngine 2, which doesn't scale well and is built for pure looks alone with little to no regard for scalability on lower-end architectures.

    @leonT

    You are correct in the assumption that it would require a total rewrite of the SL system in order to really make it dynamic and scalable. However, it isn't entirely out of the question: as I recall, ActiveWorlds as a system came out of an internal skunkworks project at Worlds Inc called Alpha Tech, wherein the WorldsPlayer was called Gamma. In this light, SecondLife (Linden Lab) would have to devote some of its own team to a skunkworks project to develop the next evolution of SecondLife from the ground up (its own Alpha Tech).

    ReplyDelete
  5. Very interesting post - thought provoking and with lots of links to stuff to research more. i love the procedural and algorithmic approaches, hungry to read more about them now!

    ReplyDelete