Dec 1, 2018

The Future of Engagement

Why I Don’t Believe In Virtual Worlds…

Virtual Worlds Header

“Those who cannot remember the past are condemned to repeat it.” – George Santayana

Back in the mid 1990s, there was this new thing called virtual reality that was taking the world by storm. The term itself became so saturated and misappropriated that it no longer held any real meaning. By the end of the 1990s, it had jumped the shark.

It was precisely because virtual reality had no proper definition of what it was and wasn’t that marketers and companies latched onto it like a bunch of hungry vultures looking to peddle whatever it was they were making. During this time we saw the lowest common denominator spring up under the guise of legitimate offerings – the Nintendo Virtual Boy, VZone, the PowerGlove and more.


In the early 1990s there were, of course, things like Worlds Inc, a virtual world where the spaces had to be pre-designed and downloaded up front before you could enter. Their claim to fame at the time was that Aerosmith themselves had a world there that you could explore. The interactivity of it all was sparse at best, and Worlds Inc soon faded out in favor of its in-house successor, ActiveWorlds, in which the end-users could create the environment in real time. This in and of itself was a major breakthrough in the days of dial-up connections.


What is most interesting about the Worlds Inc model was that rooms, or spaces, were compartmentalized and each had to be downloaded in its entirety before entering. It was a failed model at the time, and later on others tried to adopt the same model with similar results. There was no sense of continuity with these spaces, nor contiguous spatial presence. Each space existed on its own.

The irony of that tidbit of knowledge is that the Alpha tech, or AlphaWorld as it was known early on, was actually an in-house system competing against the “Gamma” tech that Worlds Inc already used.

Worlds Inc didn’t see the merit of the AlphaWorld model and so discarded it in favor of their existing Gamma tech model.

The artists and designers at Worlds Inc raised the capital themselves to buy the rights to AlphaWorld and soon launched their own system independently and became ActiveWorlds.


The interesting thing about ActiveWorlds was that it immediately ditched the self-contained room model and the download-up-front aspect for spaces, instead implementing dynamic asset downloads in a sprawling world. The main world, AlphaWorld, was a public building space where anyone could claim land and build. More importantly, this contiguous landmass was roughly the size of the state of California.

There were no border crossings like in Second Life – so you really could drive a vehicle for hundreds of miles uninterrupted. While I can hear the numerous outcries of RP enthusiasts from Second Life waiting to respond with “Umm, Actually…” – I’ll point out that while you can drive vehicles in Second Life the experience is lackluster by comparison to literally everything.

Yes, you can race a car on a track with others, but the audience to watch the race is limited, and most tracks during actual races pretty much require you to remove everything including your shoes just to sit and watch. You can fly a plane in Second Life, but you and I both know full well that every 1024 meters you fly you’re going to hit that sim border crossing, and it’s not very smooth. We both know full well that you can sail a boat around some waters in Second Life, but the experience isn’t actually that good.

That’s the self-delusion kicking in. Telling yourself the experience is top notch when it isn’t even close. As a matter of fact, to the mainstream world that experience is completely unacceptable. I’ve had responses to that implying that I somehow don’t know about all these options and shouldn’t be making comments on them…


And yet, there I am in my own UNO Mark II super-car in Second Life. Let me tell you about that experience:

The car itself is fantastic and a feat of virtual engineering. It has dials and gauges, color options, tons of tuning options, etc. This car can tear up a track in no time flat. It has fifteen gears, and under the best circumstances my friends and I have only ever managed to get it up to tenth gear before the car is moving too quickly for the sim to keep up.

Needless to say, we’ve launched this car over embankments into the water, over the railings, and managed to cross a finish line at high speed launched upside down.

I love this car to death… but I’d enjoy driving it more in Forza Motorsport.

Assuming its real-life counterpart, the Mono, were available as an option:


Which it actually is:


And when you’re comparing your flight/sailing/driving experience from Second Life to anything mainstream, you’re flatly outclassed at literally every conceivable turn. The assertion that the experience in Second Life for those things is somehow on par with anything at all is downright laughable.

You and your fellow RP’ers will agree that it’s a great experience, but that sentiment doesn’t hold up to outside facts.

In ActiveWorlds however… we could very well drive on a track uninterrupted and quickly, we could have 50 or so spectators without incident. Nobody was ever asked to take off their shoes. Of course, our vehicles never reached a level of fidelity like the UNO/Mono.

Of course, I digress… so let’s get back on topic.

During those years, ActiveWorlds saw a rise in popularity but never did become mainstream. Though it should be noted that they were the predecessor of something you may know today – Second Life.

It was during those mid-to-late 1990s that another key figure from left field happened to notice this virtual world trend – someone who worked at another quintessentially 90s company, Real (who did early streaming video via RealPlayer, etc.).

When he departed from Real, he went on to form Linden Lab and start his own virtual world system, which borrowed heavily from ActiveWorlds while innovating in some areas to modernize the approach. It is because I saw these similarities in approach that I chose Second Life to migrate to from ActiveWorlds in 2007.

The interesting thing to understand here is that while you know this person as Philip Rosedale, what you likely didn’t know was that when asked why he didn’t make Second Life sooner, he was quoted as saying that the technology wasn’t possible at the time.

But this is a fabrication at best: ActiveWorlds existed from 1994 onward and shares many key structural elements with its successor, Second Life. Even if the predecessor is a much less refined example, it still served a similar premise and shared many key architectural decisions about how a virtual world was deployed, and for what purpose, to the end user. The key interaction paradigm itself was what Rosedale took away from the 90s through prior example.

That last part isn’t what bothers me, but instead it’s the deliberate and selective amnesia about that inspiration and process which grates on my nerves. It isn’t something that is unique to him, but across an entire industry as a whole. We continue to conveniently forget that our “new” ideas are actually unoriginal at best and have been done before. So long as it serves our own narrative and product, we’re willing to have selective amnesia.

That is something that bothers me enough to where I believe there is a certain level of futility in making honest assessments of the current and future state of virtual worlds as a whole (including augmented reality). It does no good to forecast to a group of people whose best interest lies in deliberately not acknowledging information that contradicts their own offerings.

The bigger problem is that, much like in the 1990s, the entire industry has its head in the sand, and any form of constructive criticism or forecast that contradicts their marketing narrative is ignored. There’s simply too much riding on those systems and products to admit anything contrary to that self-aggrandizement.

During the mid to late 1990s, we also saw a myriad of companies proclaiming that the future of virtual reality was in fact online web worlds. To this end we ended up with plugins for web browsers to play WRL and VRML content, and more specialized systems like Blaxxun Contact to manage multi-user social aspects in those worlds.

Blaxxun Cybertown

Somewhere in all of that, virtual reality hit the mainstream for a while, and we saw movies and expensive VR arcades popping up with headsets, charging $5.00 to play Wolfenstein 3D for five minutes…


What is most important to note in this editorial is not the history itself but that today you can quite easily see the modern day versions of all of this replaying out like a bad franchise reboot.

Second Life? It’s an iteration of ActiveWorlds.

SANSAR? Iteration of BlueMars, which is an iteration of the Worlds Inc model.

VR Headsets? VFX1

VR Arcades? Been there and done that twenty years ago.

OpenSim? Iteration of the countless third party ActiveWorlds systems that sprung up independently.

Web Worlds? They didn’t take off in the 1990s, so why on Earth would anyone think they would take off today?

We have reboot-itis in the industry. Take an old idea, add some more polish to it, and hope the average consumer doesn’t know we’re repackaging the old stuff to sell again.

There has been, and continues to be, a lack of real innovation in the industry – at least where that innovation counts in the bigger picture.

It’s no wonder the mainstream audience doesn’t take to these current offerings and in fact openly mock them.

They aren’t as willing to have selective amnesia as the companies that are trying to sell the products are.


Just as we patterned our original conceptions of The Metaverse on the cyberpunk novels of the 80s, so too the narrative must evolve. When I was part of (and technically still am) the IEEE Virtual Worlds Standard group, and prior to that was asked to submit a definition of The Metaverse that fit modern times for the Solipsis Decentralized Metaverse project, I thought long and hard about the future of virtual worlds as well as the history that helped us define them up until that point.

I extrapolated that information across many cyberpunk descriptions and distilled it into a common thread: an all-inclusive set of criteria for what The Metaverse would look like, how the architecture would support it, and what the overall experience would be.

What I ended up coming to as an answer was well ahead of its time, and for many years thought to be impossible. Even today, many in the industry still insist it’s not possible or that their particular methodologies are really the future.

What Is The Metaverse

I was asked to define this in 2007, and above you can see my answer. It was just the synopsis of the full answer, which was given in the ACM paper a few years later, then reiterated in the IEEE Virtual Worlds Standard group, and is now the running definition worldwide.

The Metaverse is a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet. The word metaverse is a portmanteau of the prefix "meta" (meaning "beyond") and "universe" and is typically used to describe the concept of a future iteration of the internet, made up of persistent, shared, 3D virtual spaces linked into a perceived virtual universe.

I want to note that the definition above was written by me verbatim and stands as the most comprehensive definition of The Metaverse that exists unless you count the 38 pages of the ACM paper that goes into heavy detail or the expansion of that for IEEE in the P1828 overview.

The entire point of the definition was to outline that Neal Stephenson’s Metaverse consisted of a single large planet that was persistent and traversable. That being said, Neal Stephenson wasn’t the only author to portray cyberpunk and virtual reality space during the time, and so I took into account a much larger Metaverse, a Universe structure, in which The Street was merely a gateway location in the bigger system.

Ergo, why the Solipsis Decentralized Metaverse definition I provided split things up into a hierarchy the way it did.

Whenever I hear anyone proclaim confidently that they are building a true Metaverse or already have one, my first reaction (as should anyone who hears such a claim) is uncontrollable laughter. If you aren’t checking off any of those boxes, sit your delusional ass down.

From the mouth of the guy who wrote the modern hierarchy and definition of Metaverse – allow me to tell you with absolute authority what The Metaverse will look like and act like:

OASIS from Ready Player One.

Full stop. No bullshit. No room for half-assed or sub-par.

If your virtual world system is not on par with a fully scalable virtual universe that is persistent and single shard, if it does not contain a full ecosystem and methods by which to support the building of entire virtual civilizations, if it is simply a rehash of the things that were done in the 1990s, then it’s not The Metaverse.

Not in your wildest dreams.

We live in a time where the current industry is crowing about its 100,000-user base and how a couple of hundred people showing up is proof that it’ll be the next big thing.

But in contrast – Minecraft puts you all to shame.

They have, on average, 91 million players. Of that demographic, ages 30+ make up 21%, which comes out to about 19 million players.

There are nearly a hundred million people that prefer Minecraft over anything this virtual world industry is offering.

Your numbers are at best laughable by comparison. We don’t like to compare those numbers, though – instead we just compare the other big fish in the small pond to each other: High Fidelity, SineWave, SANSAR, Second Life, VRChat, etc.

When I see charts from Wagner James Au asking who is most likely among those to become “The Metaverse”? I have to pin a post-it note to that chart and say “None of them”.

That post-it note reads: Minecraft and Dual Universe make this a moot point.

When I said at OSCC last year that what virtual worlds really need is added purpose – resources, gathering, etc.; nerf teleports, add scarcity…

Universally I was asked “Who would possibly want that? The market doesn’t want that…”

Except for those 91 million Minecraft players who enjoy exactly that – and probably the 30,000+ Dual Universe players who have each paid about a hundred bucks just to get an account while it’s still in pre-alpha.

How many people are knocking down the doors of High Fidelity like that? Nobody. How’s the response to SANSAR lately? Not so good. I’d go down this list but you get my point.

They (High Fidelity) are giving out gift cards to entice people to show up for their stress tests, and giving away VR headsets at conferences for PR… effectively, they have to bribe people to show up. Now compare that to the prior situation, where tens of thousands of people are paying a hundred dollars to get into a pre-alpha of Dual Universe.

And yet, Mr Rosedale asserts he’s building a true Metaverse?

I don’t think so.

The other issue here is – we’re talking about somebody who told you this before, and spectacularly misjudged what the market wanted, and slammed Second Life into a brick wall.

Lest you forget… Mr Rosedale is heading High Fidelity because he was ousted as CEO of Linden Lab. You’ve heard that song and dance from him before, bought into it, and took the ride into the brick wall with him at the wheel before. His inability to anticipate market wants/needs, deliberate misunderstanding of organic trends and fostering a Metaverse type structure, and his penchant for over-hyping his product is why he’s no longer CEO of Linden Lab.

If it’s all the same to you – I’ll take that assertion of his with a monumental salt lick.

Just as I took it with a massive grain of salt when everyone was jumping on the VR Headset bandwagon again. Linden Lab bet an entire project (SANSAR) on the mass adoption of VR Headsets… while I flatly said it won’t pan out.

That isn’t news now, but in hindsight I might as well have been predicting the Titanic would sink while people were boarding. Again, the people most invested both monetarily and emotionally in their chosen virtual world platform or devices do so at the complete disregard for the reality of the situation.

It’s much the same for why I politely bowed out of the Metaverse Alliance, though I’m sure my name is still up there as an advisor (which I’m happy to be). The reason here is that OpenSim is not the future… no more than Second Life is. Trying to retrofit OpenSim and strap stuff onto it just isn’t helpful in the bigger picture. It serves a niche purpose at best and for that it does well enough, but mainstream it will never be. It is (at best) a stop gap.

The Future of Engagement

Coming up on the 8th of December, I’ll be participating in OSCC as a panelist for The Future of Engagement discussion. I’ve opted not to do my own presentation this year, and if you’ve read this far, you’ll understand why.

When I asserted that the main problem with virtual worlds and adoption is the Bored God Syndrome – illustrated by an episode of The Twilight Zone (entitled “A Nice Place to Visit”) – and asked what Minecraft has that draws 91+ million players that SL and the others do not offer, I was met with the stock answer: “Ease of Use”.

This illustrates my point entirely. Ease of use is but a small part of the equation; it is the underlying premise and narrative itself – that scarcity model and the hero’s call to adventure – that brings purpose to the otherwise purposeless virtual life.

Minecraft adds overarching meaning and the tools to achieve those ends. It adds a scarcity model – and that is far more successful than anything the virtual worlds industry has come up with to date.

Nobody can seriously say that their virtual world system even comes close to those numbers.

Of course, that premise is one of the overarching criteria for The Metaverse. You saw it in OASIS. Which brings us to the other point at hand -

The reason your systems aren’t mainstream and likely will never be is because in order to be mainstream it has to appeal to the mainstream ideal of what the majority of people associate with as The Metaverse.

As of right now, hundreds of millions, if not billions of people on Earth think OASIS is The Metaverse. If you aren’t bringing that to the table, you’re going to get laughed at.

I have to agree with the notion of OASIS as Metaverse model because that’s exactly what I defined it as in 2007. Since 2007, I’ve had people constantly tell me I was wrong, or that The Metaverse is whatever they want it to be… I’ve had people tell me that the definition I came up with wasn’t even possible.

And yet here we are today.

Dual Universe exists. Nearly exact to my definition of The Metaverse. Single shard, persistent, full scale universe.

It’s still in alpha at this point, but even their alpha puts everything else to shame right now. I’m not even worried about Dual Universe as a game being The Metaverse… that’s irrelevant. What is relevant in this discussion is that the underlying structure and execution of that architecture is the basis for a full scale Metaverse on par with OASIS.

Dual Universe is the canary in the coal mine. It’s telling everyone to wake up and pay attention to the future, to real innovation… If for no other reason, it deserves to be in the discussion alongside all the other so-called virtual world players, even if it makes everyone else look bad by comparison.


Things like XAYA deserve to be in the discussion, especially if you combine it with Dual Universe. Minecraft also should be thrown into that discussion, because they are doing something right with the emergent behavior aspect we so desperately need (and could implement in Second Life) to a larger degree. It’s also something that Dual Universe took a keen interest in when developing their system.

That doesn’t mean I think Second Life could evolve into that Metaverse, but I will say that they can evolve further to be something similar in the meantime. Linden Lab isn’t entirely out of the running yet – they’re just betting on the wrong horse in this race at the moment.

The only thing that ultimately matters now is this:

Who is going to implement all this and when?

Answer those two questions, and you’ll know who is really ahead of the industry, and who to keep an eye on. You’ll also know who is selling a line of bullshit.

What I said in 2007 was and remains true. When Novaquark bills itself as “The Metaverse Company”, as the person who defined the modern Metaverse, I have to agree with them so far. They’re the only place that checks off the boxes I originally came up with.

Maybe not The Metaverse, but by god that technology will likely be the basis for it in the long run. If your system isn’t following suit, you’re pretty much relegated to a niche audience.

I don’t believe in virtual worlds, because I believe in The Metaverse. Nothing short of OASIS level virtual Universe. I’ve always held this belief, I’ve written the definition of Metaverse with this overarching structure in mind. Anything less is just a virtual world – however compelling… it’s still just a nice place to visit.

Virtual worlds are the equivalent to the participation trophy. They are constantly rehashing the past with a new coat of paint and trying to sell it like it’s the future. I don’t believe in virtual worlds… and now you know why.

If we all want better, it’s time to stop believing in virtual worlds and start believing in The Metaverse instead. Hold it to the highest criteria, don’t accept anything less.

Peace and Happy Holidays :)

May 29, 2018

Organic Nature

Impossible Audio in #SecondLife

Immersive Audio Banner

When it comes to creating an immersive virtual environment, the goal is to allow the participant to suspend disbelief. Unfortunately, when it comes to places like Second Life (and SANSAR), we find that even the best designed sims always seem to forget the basics.

You know what I’m talking about today, and you’ve likely experienced this for yourself (or rather, didn’t). You teleport to some highly recommended sim somewhere only to find that the music is blaring 24 hours a day. When you turn the music off, you begin to understand why -

The whole place is eerily dead silent.

So what gives?

Audio immersion is a fundamental design staple when it comes to virtual worlds, and there are a few ways that designs either go about it or decidedly do not.

On the low end, we have our typical sim that just forgoes immersive audio altogether in favor of sticking a music stream on the parcel. You’ll find more often than not that these locations are also quite sparse in their overall design – a shopping mall, or no real coherent planning.

Let’s say we happen across a sim that actually took some time to plan out its audioscape. Even here we’ll find that the audio often seems flat and repetitive.

There are plenty of options on the Marketplace when it comes to ambient audio, with the most popular being from SoundScapes. What I am about to say is in no way an indication of whether Hastur Piersterson has done a great job or not with his product. Given the circumstances, I believe the SoundScapes series of ambient audio is just fine for everyday use in your builds, and I’d definitely recommend it.

One thing you’ll notice, however, is that with the SoundScapes lineup the quality seems to be entirely hit or miss. Of course, this is totally subjective, and I’m merely looking at the reactions of the customers who give certain audio cubes a 3-star rating versus 5 stars.

This isn’t a situation relegated to just SoundScapes, and it is something that persists across most (if not all) ambient audio systems in Second Life.

Much of the problem comes from understanding what the limitations of Second Life are in relation to audio itself:

  • 44.1 kHz (44100 Hz) sample rate
  • Mono channel only
  • 10 seconds or less per clip
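Before paying for an upload, it’s worth sanity-checking clips against those limits offline. Here’s a minimal sketch using Python’s standard wave module – the helper name and the limits-as-constants are my own, not any official tool:

```python
import wave

# Second Life's stated upload limits for sound clips (as listed above).
MAX_RATE = 44100     # Hz
MAX_CHANNELS = 1     # mono only
MAX_SECONDS = 10.0   # maximum clip length

def check_sl_clip(path):
    """Return a list of reasons a WAV file would fail the upload limits."""
    problems = []
    with wave.open(path, "rb") as w:
        if w.getframerate() != MAX_RATE:
            problems.append(f"sample rate is {w.getframerate()} Hz, expected {MAX_RATE}")
        if w.getnchannels() != MAX_CHANNELS:
            problems.append(f"{w.getnchannels()} channels, but mono is required")
        duration = w.getnframes() / w.getframerate()
        if duration > MAX_SECONDS:
            problems.append(f"clip is {duration:.2f}s, max is {MAX_SECONDS:.0f}s")
    return problems
```

An empty list means the clip should pass; anything else tells you what to re-export.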

That doesn’t seem to be a lot to work with up front, but if you understand how audio works, this is more than enough for a virtual world, especially when you understand why Second Life insists on uploading Mono tracks only.

The first bit of information we need to understand is that mono tracks are required for Second Life to properly pan the audio sources and add Doppler. Well, not entirely… but this is the main stated reason, because this is how the audio engine works in Second Life.

The problem here is that when you’re doing ambient audio systems in Second Life, you take these awesome stereo tracks and effectively crush them down to mono, and in the process you lose what is called “side information”. A mono downmix effectively averages the left and right channels of a stereo track into a single mid channel.

Mono mixes will always sound different to stereo ones, and there is little that you can do about that. On a technical level, the mono mix contains only the 'mid' information whereas the stereo mix has both 'mid' and 'side' information.

The reason a stereo mix 'sounds massive' is because of the quantity and nature of the side signal. If there is a lot of out-of-phase information in the stereo mix it will tend to sound very big, but this information will largely be lost when listening to the mid signal only.

This is why most audio in Second Life sounds “flat” or low quality. We’ve simply stripped out the side information and uploaded the equivalent of the average.
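To make the mid/side relationship concrete, here’s a tiny illustrative sketch (plain Python lists standing in for sample buffers): the mid channel is what a mono downmix keeps, and the side channel – the difference between left and right – is what gets discarded.

```python
def mid_side(left, right):
    """Decompose stereo samples into mid (kept by a mono downmix)
    and side (the spatial information a mono downmix throws away)."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

# A hard-panned moment: all of the signal sits in the left channel.
mid, side = mid_side([1.0], [0.0])
# mid == [0.5] is all a mono upload keeps; side == [0.5] is lost.
```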

Now, you can get away with some tricks here in Mono and we’ll get to that in a moment. I want to address the 10 second limit first.

When we step up our game (pun intended) and start using ambient audio – maybe somebody has created a looped player in a cube (which is the most common approach) – you find that the 10-second loop sounds annoying as hell and fake. It’s simply not organic enough to introduce randomness, or it is so short that you can tell when it is looping.

This destroys the immersion.

Years ago when I was in ActiveWorlds, the AWGate world had a looping track of forest and birds. The problem was that this looped every 60 seconds or so. Those bird calls became predictable and ultimately annoying.

We could, of course, implement cubes that play some clips at random to break it up. We get the random crow in the distance or whatever. But let’s stay with our baseline for now.

Ok, so let’s say we step up our game again… this time we’re chaining together multiple 10 second clips to extend that loop further.

Excellent… we’re onto something now. But again, most stop around 30 seconds, or 1–2 minutes at the high end. At the very least, we should be shooting for a 2-minute loop.

But what of the sound quality?

We’re still stuck with this crushed mono track, right? We’ve lost that side information and kept only the mid-channel average. This still makes our audio sound flat.

Curse you Linden Lab!

Hold on… there is a light at the end of this tunnel.

We know that Second Life doesn’t allow stereo files, and we have to upload in mono only, at a maximum length of 10 seconds. But that isn’t necessarily a limitation if you understand audio editing.

So let’s say that we understand now that combining a stereo track into a mono track will effectively lose the side information and flatten our audio.

But what if we isolated the Left and Right channels of a stereo track, and saved them separately as Mono tracks?

Of course, we split them up into clips of 10 seconds or less to chain together in-world.

Now we’re sitting on split stereo audio – which Second Life will gladly accept for upload. Those two channels, saved individually, also retain their side information, making them sound bigger when played back together in sync. There’s more depth and spatialization to it – far more than you’d get out of a single mono track.
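The offline editing described above – isolating the left and right channels and cutting each into uploadable chunks – can be sketched with Python’s standard wave and array modules. The function name and the output file naming scheme are my own invention, and this assumes a 16-bit stereo WAV source:

```python
import array
import wave

def split_stereo_to_mono_clips(src_path, clip_seconds=10):
    """Split a 16-bit stereo WAV into left/right mono clips of at most
    clip_seconds each, ready to chain together in-world."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        rate = src.getframerate()
        raw = src.readframes(src.getnframes())
    # Stereo frames interleave samples as [L0, R0, L1, R1, ...].
    samples = array.array("h", raw)
    channels = {"L": samples[0::2], "R": samples[1::2]}
    clip_len = rate * clip_seconds  # frames per clip
    outputs = []
    for name, chan in channels.items():
        for n, start in enumerate(range(0, len(chan), clip_len)):
            out_path = f"{src_path}.{name}{n:02d}.wav"
            with wave.open(out_path, "wb") as out:
                out.setnchannels(1)  # mono, as required for upload
                out.setsampwidth(2)
                out.setframerate(rate)
                out.writeframes(chan[start:start + clip_len].tobytes())
            outputs.append(out_path)
    return outputs
```

In-world, the two chains of clips then get loaded into the left and right playback objects respectively.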

Now we’re stuck with solving the problem of how to play them simultaneously in Second Life. This is where the solution gets a little more complex, because we can’t just code a single cube and let it go… I mean, you could, but that would be kind of a nightmare and limiting, because you’d be using a single item’s contents and dumping everything in there.

So let’s say we have our main cube: a controller cube. The entire purpose of this object is to orchestrate the left and right channels of audio, which live in two other objects, each running a clone of our audio script and listening on an internal channel for the controller to tell them what to do.

You have the left channel audio files in one object, and the right channel audio files in the other object, both listening for the controller cube to tell them when to start and stop.

This should sound eerily similar to a Day/Night system, except instead of day and night with two separate loops, we’re treating the two internal objects like left and right speakers playing simultaneously, and giving them specialized audio that syncs together.

By doing it like this, we side-step the mono midrange problem and retain our side information, making the combined audio seem “bigger” and more robust. It just sounds more natural.

Of course, Second Life will pan those tracks and add Doppler, because it thinks they’re just two separate mono tracks and doesn’t know they’re correlated.

The purpose of doing it like this is predominantly to retain that side information which gives our audio depth. Now we have something that sounds more natural in the process, and whether Second Life is panning them is irrelevant because they are being panned in relation to each other, and that is what counts to retain the symmetry of the audio. We effectively are doubling the audio information being played back in-world, which to our ears sounds better.

These are what we can refer to as baseline ambient audio systems. Our “first layer” foundation. We build from here to create a totally immersive environment. Once we’ve sorted out the original audio information limitation and solved it, it becomes easier as we build with it.

Yes, we can have multiple cubes synced up, but I’ll be the first to tell you that you actually don’t want to do that. If you have multiple cubes like this playing out of sync, by default it makes your environment seem organic and “random”.

As you move around the environment, they cue up out of sync with each other but in sync with themselves (if that makes sense).

In the audio editing phase of such a project, we can apply some more tricks. What if we applied a wider spatialization to the stereo track before splitting it up? In-world, it would sound richer and more organic (within reason).
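One common way to widen a stereo track before splitting it – my own assumption about what that editing step might look like, not a reference to any specific tool – is to rebuild left and right from mid/side with the side signal boosted:

```python
def widen_stereo(left, right, width=1.5):
    """Rebuild left/right from mid/side with the side signal scaled.
    width > 1 widens the field, width < 1 narrows it, width == 0 is mono."""
    out_left, out_right = [], []
    for l, r in zip(left, right):
        mid, side = (l + r) / 2, (l - r) / 2
        out_left.append(mid + width * side)
        out_right.append(mid - width * side)
    return out_left, out_right
```

In practice you’d re-normalize afterward, since boosting the side signal can push peaks into clipping – hence the “within reason” above.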

Once we understand how the audio system works, and a bit of audio theory for editing, we should be able to figure out how to get around the limits within reason.

I wouldn’t suggest that we can nail down true binaural audio in Second Life this way. But if we approached it slightly differently, then yes, we actually could.

Let’s say we applied the same technique to a pair of virtual headphones in Second Life. The headphones have two objects, one each for the left and right channels, and the headphone itself is the controller for them. We apply the same technique of stereo splitting and synchronization as above, but now from a fixed position in relation to the listener.

With this setup, we could replicate full binaural audio in Second Life, albeit in a manner which is highly controlled. You wouldn’t get real-time panning like this, but in the bigger picture, maybe you could invent a pair of ASMR headphones for Anxiety Relief that plays back a head massage for fifteen minutes in binaural?

At this point you should understand that there is a bit of a downside to this, depending on what you’re trying to accomplish.

  • Synchronized Stereo Costs Twice As Much

Developing such a system in Second Life also entails that your sound cube systems now cost roughly twice as much to make – obviously, because you have to upload double the audio files.

You therefore wouldn’t want to apply this technique to everything, but instead figure out where this technique would best be suited.

There is also another cost factor when trying to do this for a Day/Night system. Instead of two sets of audio in mono, you’re now using two stereo sets, so a Day/Night ambient system would run roughly quadruple what a single mono loop would have cost.

I suppose in the bigger picture, we’re talking about that up-front cost and investment, and whether the benefits outweigh the costs. I’d imagine we would have to charge a slight premium for these HD Audio systems, but as long as it was still reasonable I think the end-user would still pay for it.

For me, the benefits definitely outweigh the costs. Audio using such a system sounds far better in Second Life than the typical flat mono loops we’re used to. It has better dynamic range, sounding less flat and more robust – and even though Second Life pans the audio around as you move, that side information is still there and (interestingly) complements it (I’ll get to that in a moment).

One could most definitely add the ability to “widen” the audio field in-world by allowing the end-user to expand or contract the left and right channel distance – which is just a fancy way of saying move those two cubes internally farther apart or closer together.

The Cetera Algorithm & HRTF 

The Cetera Algorithm is a reference to Starkey Labs and their hearing aid technology which makes the hearing aid seem invisible to the brain. Cetera removes the barrier between sound and the brain’s ability to process signals, and helps retain the subtle differences in arrival time between left and right ears so that your brain can process positional audio.

In the world of virtual reality, we also refer to this as understanding the acoustic properties of the Head-Related Transfer Function (HRTF), which models 3D sound both in the room and in how it arrives at your ears. We process that pattern of information unconsciously, but it means a lot to the brain when trying to determine position, whether something sounds “real” or not, and so on.
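One of those subtle arrival-time differences can be put in numbers. Woodworth’s classic spherical-head approximation gives the interaural time difference (ITD) as a function of where the source sits; the head radius and speed of sound below are typical textbook values, not measurements:

```python
import math

def interaural_time_difference(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's approximation for a spherical head:
    ITD = (r / c) * (theta + sin(theta)),
    where theta is the source azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))
```

A source directly to one side (90 degrees) arrives roughly 0.65 milliseconds earlier at the near ear – a tiny difference, yet one the brain resolves effortlessly when localizing sound.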

Let’s take an audio journey as an example:

Synthetic HRTF Audio Test

Of course, this is a massive oversimplification. So far as Second Life is concerned, yes, it can be done, but it likely will not happen anytime soon. We’re not talking about simple panning of left and right channels with a mono track, but instead a panning stereo track – and even then, a stereo track that was recorded in a very specific manner. Yes, we can effectively fake it in Second Life to a degree and under very controlled circumstances, but for our purposes here we’re discussing how to at least up the ante with stereo and the extended information at that level.

Suffice it to say, when somebody says that the difference between CD audio and Vinyl is “all in your head”, they don’t quite seem to understand how right they are (for all the wrong reasons).

While we may not get a full HRTF model in Second Life, we can approximate things a bit. We can also take this information-and-subtle-cues approach to further our understanding of how to approach and apply audio in the virtual world, even with our current limitations.

If we know that the extra information is paramount for our brain to process the audio better, then we can look for ways to reasonably retain that information and those higher frequencies whenever possible, for a more natural listening experience.

Planning Your Scene

Even if we know all these crazy details about human hearing and perception, and create a tool that exploits how Second Life works and effectively doubles the perceptual audio resolution, the best tools are still only as effective as the person using them.

For instance, we don’t actually want all of these ambient cubes synchronized to each other. With themselves, yes (and for obvious reasons). But because of the nature of Second Life itself, and because such a system would invariably have a preloading delay anyway, it’s not a big deal. It’s actually preferable to have the cubes not synced together, because then they sit offset around your sim, playing out of sync with each other and effectively randomizing the soundscape based on where the end-user is located and moving.

The next part to understand is that we aren’t using these singular cubes as the end-all, be-all. We have to plan ahead for a soundscape, and include those little details to layer things beyond the baseline.

A random cube that plays, say, crows during the day and owls at night – or woodpeckers, or whatever – is a good addition to the baseline.

The real trick here is walking around the sim as you’re building and asking yourself:

What does this sound like?

There’s really no such thing as “silence”. Whether it’s the soda machine making a compressor hum, distant walla in a city (like white noise), a door opening, a bell ringing as you enter the store, whatever… things make noise on their own or when interacted with.

It all adds up.

AES Audio

A lot of this post comes out of a long-term project and R&D from AMG (Andromeda Media Group) in Second Life. One of those projects has been improving audio by thinking about what these systems could really use, and what we weren’t happy with out of the box.

There is, of course, more to it than “We’ve doubled the audio resolution”, however impressive that may be. Things like Dynamic Crosstalk Suppression (DCS) are also included in our current prototypes.

Should another audio brand in Second Life wish to upgrade their own systems with this information, I wouldn’t mind. Whatever makes the experience in Second Life better overall is a win for everyone.

That being said, I’m not going to explain how we’re pulling off Dynamic Crosstalk Suppression. That’s our little secret.

As a final note, let’s recap how to upgrade our ambient audio systems in Second Life:

  • Stereo Synchronization
  • Understanding Audio Information
  • Using loops measured in minutes, not seconds
  • Optional Stereo Widening before splitting
  • Understanding the circumstances of how it will be heard
  • Dynamic Crosstalk Suppression
  • Optional User Defined Channel Widening
  • Using additional randomized audio to break it up

Mar 1, 2018

A Tale of Two Labs

Insight from my trip to Linden Lab and Beyond

Aech's Garage

Clarity From Chaos

The sun shone down that day, caressing the silver Passat on I-5 as my assistant and I rolled through the unending fields of almonds which so graciously seem to blanket much of the California hills. Aside from the occasional confused cow (which I named Carl), there was little else to keep my mind occupied.

The radio in these parts seemed to assault my senses with either Spanish, Country Music, or Evangelism. It was clear I needed to turn that noise-box off for a bit and clear my head.

Sipping Carolina Honey Argo Tea (my favorite pretentious tea), I contemplated what was ahead and the sheer insanity of it all.

After all, I somehow decided one day out of the blue to cart my balls around in a wheelbarrow and invite Ebbe Altberg, Samy Montechristo and Orion Simprini to hang out. It’s at this point I usually get the obligatory “How did you get a meeting with them?” question.

A: I simply asked.

Now, whether that speaks to me being important or not is up in the air. Maybe if you asked, they’d schedule you in as well?

Ok, Michelle will schedule you in… she’s Ebbe’s assistant. Maybe the guy wanted a court jester for an hour? Maybe Ebbe stared out the window as I left and said “What the hell just happened?” Who knows…

At least Orion and I have talked back and forth for years and have always wanted to hang out with each other… so weirdly, asking a freakin pop-star to hang out is less crazy than asking the CEO of Linden Lab if he wanted to hang out, but I figured “Why the hell not?”

We had been on a road trip down to San Diego (from Portland, Oregon) and decided a few pit stops along the way were essential if we were going to make the best of this coastal crusade.

Will in San Diego

Of course, a few days in San Diego with friends were mandatory, as were a few days in Carlsbad at the Grand Pacific Palisades Resort. Walking into the place, a staff member asked if I would like some champagne as my assistant was checking us in. Pretty fancy stuff, that. I’d highly recommend booking there if you’re in the area, especially if you get the concierge service.

Further on our way back up, I had arranged to meet up with Orion Simprini (lead singer for the Orion Experience) around Los Angeles. He and I had been bantering back and forth with each other since around 2011, and with both of us from the Tri-State originally (NY, NJ, PA), we always figured hanging out would be easy…

And yet, all these years our schedules and locations never managed to coincide. Until today, where I sat at the corner of Manhattan Avenue and Pier Avenue at Java Man Coffee House by Hermosa Beach sipping my mocha latte, and listening to Children of the Stars when the man himself strolled up.

Will & Orion

We hung out around Hermosa Beach on the pier, kicking our feet up and blowing each other’s minds. Orion was really interested in the idea of doing 360 music videos and VR experiences, so I walked him through the different approaches and what they all meant in terms of production and control.

My boy just sat there with his mind blown at the world of possibilities. At the end of it all, we both hit on a film project that seemed crazy enough to work and I agreed to be a technical advisor and collaborator.

“In the virtual world, you’re God. Nothing exists in your space unless you put it there, and how good it is depends entirely on your attention to detail.”

Of course, you’re not here for that story…

The San Francisco Treat…

As awesome as hanging out with Orion was, I still had other business to attend and a tight schedule (as my assistant loves to keep reminding me). So we bid farewell and continued our drive back up to Redding where we would spend another night.

On the 21st, I was scheduled to roll into San Francisco and stroll into Linden Lab.

People have asked me a plethora of questions about this visit, and I’d like to preface with the disclaimer that the moment you walk into Linden Lab and sign in, you also sign an NDA. For the sake of not disclosing confidential information, this post will focus on just the things that I’ve said and the obvious general information (that is already discernible or public), but not specific answers from Ebbe.

Just to make clear: I absolutely do not speak on behalf of Linden Lab, and anything disclosed here is either:

A) My opinion

B) Common sense (and/or public)

What I knew going in was a bit scarce. So I simply took anything I saw there in the offices as “NDA” and assumed that Ebbe has been trained by ninjas to seek and destroy.

I hadn’t really spent a ton of time in SANSAR prior to this day, but I did think it was kinda impressive for what it was. When answering the Facebook question about Ready Player One and Linden Lab, I was answering generally that they hadn’t dropped the ball there – but wasn’t going into details.

When Ebbe strapped me into the Oculus headset and toured me around SANSAR, I just took everything as “under NDA” as a precaution. Of course, Aech’s Garage was impressive, and answers the question: “Did they forget about a Ready Player One tie-in?”

Well, no… but I’m guessing if you keep up with the news in the world of Linden Lab (and our wonderful bloggers around the grid), then Aech’s Garage is old news by now. I haven’t been keeping up on those news bites, so Aech’s Garage is new to me (and impressive).

Will & Ebbe

I did get some interesting insights though during that discussion that maybe weren’t openly reported… more general thought experiments from Ebbe. I wouldn’t necessarily say this is top secret, but more like – we really hadn’t thought of things like that yet, but it would be common sense if we thought about it.

Of course, Linden Lab worked with Warner Brothers, Intel and HTC for Aech’s Garage. It’s a great CES demo for sure – at least for the stage that SANSAR is at. Steven Spielberg has seen it first-hand with Ebbe – but that is par for the course, since I know Spielberg is really anal about how his creative works are represented elsewhere – so Aech’s Garage (like anything else) would need his blessing. Hearing it from Ebbe that way made me chuckle internally, because it’s not the first time I’ve heard similar in the industry about Spielberg.

Would Spielberg or Ernest Cline pop into Second Life or SANSAR for the movie release? I’d prolly say no, but then we already know that Ernest Cline has an account in Second Life, so how do we know the dude isn’t hanging out right now?

Spielberg… ok, that’s likely a “Hell no”. He doesn’t have time for that beyond a cursory one time stop-in. He’s Steven effing Spielberg.

There’s a good reason why SANSAR would be the go-to for companies like Warner Brothers for virtual world experiences. It’s generally because they want absolute authoritarian control over their intellectual properties, and Second Life (being driven by you, the users) scares the ever-loving hell out of them.

To them, Second Life is like the inmates running the asylum. Of course, I always have contended that the chaos is manageable with the right approach.

And while SANSAR looks gorgeous, it still feels quite sterile. It’s more a static museum of spaces, versus Second Life which is entirely organic in nature. Each has a purpose, and I’m absolutely enthralled by the future.

But it still stands that the general public wants more than “pretty” when it comes to VR. They want (need) it to be of substance.

Aech's Garage - SANSAR

[HTC] Ready Player One – Aech’s Garage

Picture: Inara Pey

Of course, I could be wrong about Linden Lab throwing the doors open for Ready Player One… but I’m actually hoping I’m not wrong. Because opening the floodgates to SANSAR right now, with Spielberg in Aech’s Garage would be very, very, bad for SANSAR.

SANSAR is a work in progress, and noticeably so. That doesn’t mean it can’t get better over time. But if you were to throw it into the spotlight partially finished, the public would shred it and hold it up as an example of why virtual reality is failing rather than succeeding. The public would compare it harshly to other, more polished systems… and that’s unfair at this stage of development. In short, it would be the same mistake Philip Rosedale made with Second Life, so I’ll give Ebbe the benefit of the doubt and assume he knows not to repeat history.

I believe SANSAR is a good beginning, but it’s late to the party. There are places like VRChat and Sinewave.Space that are much farther ahead as platforms for developing VR content.

I’m sure there are other platforms as well, but I can’t be bothered to write out a laundry list.

Not that it matters. When it comes to VR Headsets, we’re still in a phase where engagement is measured often in minutes (20-30 minutes), whereas non-headset VR worlds like Second Life see an engagement rate measured in many hours.

SANSAR is a nice place to visit, but I wouldn’t want to live there.

Why Linden Lab is so hellbent on pushing SANSAR while effectively ignoring Second Life, or treating it like the wicked red-headed step-child internally, is anybody’s guess. But I’ll touch on this more later in the post, as I feel it deserves its own section.

That being said, the insight gained here was along the lines of all the assets these companies have already sitting around in 3D Models. IKEA for instance has their entire catalog in 3D models, Warner Brothers obviously has a massive asset library at their disposal, and so on.

So there is this opportunity to connect these Intellectual Properties into the virtual world – and that’s something Ebbe and I discussed while I was there. Not just “officially” converting existing assets from these companies into say, Second Life and SANSAR, but (and since I brought it up, I’ll assume it’s not covered in NDA – just the answer that Ebbe gave me) by converting the pre-existing user-generated content in those systems from IP Infringing content to Sanctioned under a “Brand Ambassador” program.

If you read this blog, then you know I’ve talked before about the premise of VIPER Licensing – and the question would be:

We all know if you cut off one head of the Hydra, ten more grow back. Why fight this when you can effectively leverage it so everyone wins?

The idea is to let Linden Lab be the “Steven Spielberg” figure and pre-negotiate those licenses on behalf of the userbase, create best-practices guidelines for the usage of those IPs, and then stamp the pre-existing content that we all know exists on Marketplace but “shouldn’t” as brand ambassadors.

Of course, Linden Lab would up your transaction fee to 25% on real world branded merchandise you were making virtually. 10% for Linden Lab and 15% for the IP holder (Coca-Cola, etc). But that still leaves you with 75% of the revenue, and a massive amount of creative freedom without the DMCA fears so long as you follow guidelines for representing IP in a way that doesn’t tarnish the brand.

Now, I can’t tell you what Ebbe said in that office concerning this. Whether Linden Lab explores this is anybody’s guess, but I put it on their whiteboard and tried my best to explain it to him.

The IP problem still persists in SANSAR as well, since an arcade full of Nintendo IP likely isn’t sanctioned by Nintendo for SANSAR. And while I absolutely think this stuff should still exist (and will, regardless of what we do), I think there is a better way to address how we deal with this prosumer culture in the 21st century and especially with user-generated content platforms.

DMCA just isn’t getting the job done, and is likely throwing gasoline on the fire via the Streisand Effect. Being authoritarian about it all and trying to tightly control the creation process by requiring approval for uploaded assets would likely stifle and kill SANSAR – unless Linden Lab negotiated all those pre-existing assets from IP holders into SANSAR and Second Life themselves… which kinda defeats the purpose of user-generated content. And companies never seem to know what use cases people need in a virtual world, so people will still end up filling in the gaps with their own content.

So maybe the best approach is to leverage and manage the chaos instead of trying to eliminate it?

Land Prices

Another conversation I had with Ebbe was about those land prices. It was a question posed in the Facebook post so I made sure to ask. It was kind of a side-note to another conversation about the structure of Second Life being based on land sales, and a way to balance the equation through other means.

I won’t say what those other options were that were presented in the conference room. I will say that Linden Lab is fully aware of the land price “issue” and looking into ways to lower the costs.

Now, after I had stated this for Wagner James Au on his blog (New World Notes), I was contacted in-world by a few Land Barons who (you could say) were slightly paranoid about this situation. If land prices went down, they would reason, then everyone would just buy land and get into business for themselves, putting them out of business and killing their investments…

I don’t see it that way, honestly.

I think competition is healthy, and spurs innovation. So I guess you just can’t get away with an automated system and a couple of volunteers working for tips and commission anymore. You may have to invest in treating those companies more like companies, I guess?

Like I said, if land prices came down – it’s not without a sacrifice somewhere. Linden Lab is a company, not a charity. So you better believe they’re making the money up somewhere else if they do.

As per the responses given to Wagner for NWN, I’d like to point out that I was using a cell phone in the middle of a 4-hour drive up the coast. Not exactly the best situation to collect thoughts, which is why you can consider the NWN post a “rough draft”, so to speak, while this post collects my thoughts better.


The other obvious question while meeting with the man himself came as a side question from Facebook that I saw concerning addressing lag in Second Life. I had already planned on presenting something that would drastically reduce lag (and bring a plethora of other benefits) in the right hands, so “reducing lag” in Second Life is a side-effect of this particular topic at hand.

The thing about Second Life is that lag is often inevitable. The way that game engines work is that you’re either in edit mode or the simulation is published. When you play video games, it’s all published and locked down. This gives the engine a chance to pre-bake textures, add in lighting calculations and optimize the scene. Before that, in all game engines, there is the live editing system which lets you place and design the scene with assets in a lower quality mode prior to optimizations and lighting calculations (and so on).

Unreal Editor

Unreal Engine: In-Editor Testing

Now, the “low quality” edit mode has come a long way over the years, to the point where it is on par with amazing graphics nonetheless. I remember editing levels in DOOM in the ’90s, where it was just a top-down wireframe map before calculating the nodes and publishing it, after which I could walk around and explore.


Second Life (as with any live edit system like ActiveWorlds) is just the game engine in permanent edit/preview mode. Those extra optimizations to the scene cannot happen and so things are slower.

That doesn’t mean it’s hopeless.

We can begin by optimizing our assets for a game scene, and choosing a balance between complexity and looks. There is, of course, the idea that Second Life could introduce User Generated Zones.

For this, I’m borrowing from my years in the ActiveWorlds universe (before Second Life existed).


The idea of Zones is simply that you use Prims with extended properties to better control spaces in a scene. Whether that is to say – Particles originating outside this zone do not enter it (or vice versa), or to say Assets within this zone do not load until a user enters the zone.

These are all options that were and remain available in ActiveWorlds to this day (20 years later). The idea of user generated zones would instantly make things like Screen Space Particles (Weather) possible without it raining in your house. Doing this, you could also (in theory) run a script that changes the weather on the sim based on a location API call to a weather service.

The zones can also be used for user-generated culling spaces, which (if used properly) can drastically reduce lag in a virtual world – especially in a highly dense and populated sim.
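To make the culling idea concrete, here’s a small sketch of how deferred-load zones could work, in the spirit of ActiveWorlds zones. Everything here – the class, the flag names, the containment rule – is illustrative; nothing like this exists in the current Second Life API:

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """An axis-aligned box with extended properties (illustrative)."""
    min_corner: tuple
    max_corner: tuple
    defer_asset_loading: bool = False  # contents load only on entry

    def contains(self, point):
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.min_corner, point,
                                        self.max_corner))

def assets_to_load(avatar_pos, zones, asset_positions):
    """Skip assets inside a deferred zone until the avatar enters it."""
    loaded = []
    for pos in asset_positions:
        deferred = any(z.defer_asset_loading and z.contains(pos)
                       and not z.contains(avatar_pos) for z in zones)
        if not deferred:
            loaded.append(pos)
    return loaded
```

The point is how cheap the check is: a handful of comparisons per asset decides whether a whole apartment’s worth of content even gets touched, which is exactly why zone-based culling pays off in a dense sim.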

Of course, there are other benefits to Zones. One of them is assigning build permissions or other rights to smaller spaces in 3D. An example here would be:

Think of an apartment building with maybe 20 apartments in it. Now let’s say you could offer each apartment privacy, localized building, etc. – like parcelling, but without the all-or-nothing blunt instrument, using a more surgical approach instead.

For a more in-depth idea of what zones can do, just click the link above for the ActiveWorlds Wiki description.


Suddenly, weather built into Second Life doesn’t seem so crazy anymore. When last I checked, Windward Mark was wholly owned by Linden Lab (its parent company). So if Linden Lab got Windlight out of the acquisition, I would imagine they also got the weather system that came with it by default.

Yes, I personally backed up the Windlight demonstration video from Windward Mark Interactive about 7 years ago on my YouTube Channel precisely for this purpose.

Even if they somehow didn’t get the weather system, it’s not like a screen space particle system isn’t a turn-key solution these days from other 3rd parties or even in-house.

It’s there… sitting on a shelf unimplemented. The point made above concerning implementing Zones solves the original problem for not implementing Weather in Second Life 7 years ago. Of course, Zones solve a ton of other things while it’s at it.


About the only thing I have to say about particles is that it should be integrated into the toolbox (create) menu floater with the rest of the tools.

Yes, you should still be able to script particle emitters if you wish by hand, but there’s really no reason to have left the WYSIWYG editor for it out of the toolbox.

Again, ActiveWorlds made this happen years ago, so it boggles me why this is still a point of contention in Second Life development.

One of the things you’ll notice on the Wiki link is that there is a Particle Type drop-down. Sprite is selected in the picture by default and it’s the “Screen Space Particle” used (most often) for Weather effects.


“What about the user made HUDs sold on Marketplace to handle Particles?” you may ask.

It is my contention that when your user base creates elaborate work-arounds to fill in functionality gaps for your product, it’s time to make it a priority for native integration.

See Also: Somehow nobody thought a web based marketplace system would be needed officially for Second Life until XStreet sprang up via the end-users and eventually was bought out by Linden Lab.

This should have been native in the toolbox from the beginning, and it’s mind boggling that this many years into Second Life, it still isn’t.

Of course, I say that about weather and zones as well…

Making Mirrors

Among the other subjects I brought up was how to implement mirrors in Second Life without causing a ton of lag.

I’ll just let the link above explain it.

The Tale of Two Labs



And now we get to the bigger question.

As noted by Bixyl Shuftan, and his apparently keen eye, Linden Lab has laid off about 12 people recently, and their VP of Product apparently has resigned to go work elsewhere.

My view of Linden Lab during my visit lines up with what Bixyl found in recent Glassdoor reviews by former employees (one of which I’ll share here):

The other review from an anonymous employee, "Make LL Great Again" was less critical. But he still had a negative outlook, "Company is too focused on its new product; those who work on the mainstay feel set aside and taken for granted. HR is nearly non-existent; most team members are remote or barely in the office. Too many org changes lately have left folks feeling insecure, morale is low and sinking. I hope they can turn it around." His advice to the Lab's leaders, "You have some really awesome people there; get out of their way."

While I was at Linden Lab, I definitely got the feeling that SANSAR was the main focus with a near total avoidance of discussing Second Life or its future. It’s technology evangelism at its peak.

As far as Ebbe is concerned, he’s all-in for SANSAR while Second Life is … somewhere in the basement level with the engineers.

On one side of the equation I can see why Ebbe would be all-in for SANSAR. I’d assume Linden Lab spent a stupid amount of money developing it and couldn’t afford to pull the plug, and so he was likely told to produce an ROI come hell or high water.

Welcome to the board of directors world.

In a way, I’d assess that Rodvik made a mess and Ebbe is still trying to clean up and/or salvage things.

That being said, I have to agree with said anonymous former employee.

As a CEO, Ebbe has a choice to make – He is the captain of the Linden Lab ship, but he also decides what sort of captain he wants to be:

Captain Picard or Captain Ahab.

Right at this moment, he’s showing qualities of Captain Ahab, in the blind pursuit of SANSAR (Moby Dick). But I believe he’s intelligent and an overall great guy. Smart enough not to sabotage his own efforts and company.

After all, Second Life is still the goose that laid the golden egg. It didn’t die, it’s just being actively starved and strangled by the aforementioned organizational changes and CEOs.

Which is really unfortunate, because I also believe Linden Lab has some brilliant and creative people there with their hands tied – people who absolutely love Second Life and want to make it better.

While I don’t have a Linden nametag myself, I pushed my devotion for virtual worlds as far as I could take it by crossing the country (New Jersey to California) and making sure I was physically parked in front of Ebbe at Linden Lab, with a whiteboard, to discuss some of these things (and more).

That’s some devotion right there, and… sadly, is as far as I can/could take things short of working at Linden Lab officially.

What Linden Lab does with the information is up to them. I’ve done all that I can in the process, and will turn my attention back to other projects.

I don’t suppose I’ll be coming back to San Francisco unless somebody gives me a damned good reason.