May 29, 2018

Organic Nature

Impossible Audio in #SecondLife


Immersive Audio Banner



When it comes to creating an immersive virtual environment, the goal is to let the participant suspend disbelief. Unfortunately, in places like Second Life (and SANSAR), we find that even the best-designed sims always seem to forget the basics.


You know what I’m talking about, and you’ve likely experienced this for yourself (or rather, haven’t). You teleport to some highly recommended sim somewhere only to find the music blaring 24 hours a day. When you turn the music off, you begin to understand why:


The whole place is eerily dead silent.


So what gives?


Audio immersion is a fundamental design staple when it comes to virtual worlds, and there are a few ways that designers either go about it or decidedly don’t.


On the low end, we have the typical sim that forgoes immersive audio altogether in favor of sticking a music stream on the parcel. More often than not, these locations are also quite sparse in their overall design, amounting to a shopping mall with no real coherent planning.


Let’s say we happen across a sim that actually took some time to plan out its audioscape. Even here, we’ll find that the audio often seems flat and repetitive.


There are plenty of options on Marketplace when it comes to ambient audio, the most popular being from SoundScenes. What I am about to say is in no way an indication of whether Hastur Piersterson has done a great job with his product. Given the circumstances, I believe the SoundScapes series of ambient audio is just fine for everyday use in your builds, and I’d definitely recommend it.


One thing you’ll notice, however, is that the quality across the SoundScapes lineup seems to be entirely hit or miss. Of course, this is totally subjective; I’m merely looking at the reactions of customers, who give certain audio cubes 3-star ratings and others 5.


This isn’t a situation limited to SoundScapes; it persists across most (if not all) ambient audio systems in Second Life.


Much of the problem comes down to understanding Second Life’s limitations in relation to audio itself:


  • 44.1 kHz (44,100 Hz) sample rate
  • Mono channel
  • 10 seconds or less
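Those limits can be restated in code as a quick sanity check before you pay the upload fee. This is a minimal Python sketch using the standard-library `wave` module; the helper name is my own, and the pass/fail rules are simply the bullet list above.

```python
import io
import wave

# Sanity-check a WAV clip against the stated Second Life limits:
# 44.1 kHz sample rate, mono, 10 seconds or less.
def meets_sl_limits(wav_bytes: bytes) -> bool:
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        rate = w.getframerate()          # samples per second
        channels = w.getnchannels()      # 1 = mono, 2 = stereo
        seconds = w.getnframes() / rate  # clip length in seconds
    return rate == 44100 and channels == 1 and seconds <= 10.0

# Build a 2-second, 16-bit mono clip of silence at 44.1 kHz to test.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * (44100 * 2))

print(meets_sl_limits(buf.getvalue()))  # True
```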


That doesn’t seem like a lot to work with up front, but if you understand how audio works, it’s more than enough for a virtual world, especially once you understand why Second Life insists on mono uploads only.


The first thing we need to understand is that mono tracks are required so Second Life can properly pan audio sources and apply Doppler. Well, not entirely… but this is the main stated reason, because it’s how the audio engine in Second Life works.


The problem is that when you build ambient audio systems in Second Life, you take these awesome stereo tracks and effectively crush them down to mono, and in the process you lose what is called “side information”. A mono downmix effectively averages the left and right channels into a single mid channel.



Mono mixes will always sound different to stereo ones, and there is little that you can do about that. On a technical level, the mono mix contains only the 'mid' information whereas the stereo mix has both 'mid' and 'side' information.


The reason a stereo mix 'sounds massive' is because of the quantity and nature of the side signal. If there is a lot of out-of-phase information in the stereo mix it will tend to sound very big, but this information will largely be lost when listening to the mid signal only.



This is why most audio in Second Life sounds “flat” or low quality. We’ve simply stripped out the side information and uploaded the equivalent of an average.
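The arithmetic behind that “average” is easy to see with toy numbers. In this sketch (the sample values are made up), the mono downmix keeps only the mid signal, while the side signal, the difference between the channels, is discarded entirely:

```python
# Toy samples showing what a mono downmix keeps and what it throws away.
left  = [0.8, 0.2, -0.5, 0.1]
right = [0.2, 0.8,  0.5, -0.1]

mid  = [(l + r) / 2 for l, r in zip(left, right)]  # what mono keeps
side = [(l - r) / 2 for l, r in zip(left, right)]  # what mono loses

print(mid)  # [0.5, 0.5, 0.0, 0.0]

# Mid plus side reconstructs the left channel exactly (and mid minus
# side the right), so nothing is lost while both are kept...
assert all(abs((m + s) - l) < 1e-9 for m, s, l in zip(mid, side, left))
# ...but from the mid signal alone, left and right are unrecoverable:
# the first two sample pairs above collapse to the very same mid value.
```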


Now, you can get away with some tricks here in Mono and we’ll get to that in a moment. I want to address the 10 second limit first.


When we step up our game (pun intended) and start using ambient audio – say somebody has created a looped player in a cube, which is the most common approach – you find that a 10-second loop sounds annoying as hell and fake. It’s simply too short to introduce any randomness, and you can tell exactly when it loops.


This destroys the immersion.


Years ago when I was in ActiveWorlds, the AWGate world had a looping track of forest and birds. The problem was that this looped every 60 seconds or so. Those bird calls became predictable and ultimately annoying.


We could, of course, implement cubes that play some clips at random to break it up. We get the random crow in the distance or whatever. But let’s stay with our baseline for now.


Ok, so let’s say we step up our game again… this time we’re chaining together multiple 10 second clips to extend that loop further.


Excellent… we’re onto something now. But again, most stop around 30 seconds, or 1–2 minutes at the high end. At the very least, we should be shooting for a 2-minute loop.


But what of the sound quality?


We’re still stuck with this crushed mono track, right? We’ve lost the side information and kept only the averaged mid signal. That still makes our audio sound flat.




Curse you Linden Lab!


Hold on… there is a light at the end of this tunnel.


We know that Second Life doesn’t accept stereo files, and that we have to upload mono clips of 10 seconds or less. But that isn’t necessarily a limitation if you understand audio editing.


We now understand that collapsing a stereo track into a mono track loses the side information and flattens our audio.


But what if we isolated the Left and Right channels of a stereo track, and saved them separately as Mono tracks?


Of course, we split them into clips of 10 seconds or less to chain together in-world.


Now we’re sitting on split stereo audio, which Second Life will gladly accept for upload. Those two channels, saved individually, also retain their side information, making them sound bigger when played back together in sync. There’s more depth and spatialization to it – far more than you’d get out of a single mono track.
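The editing step being described can be sketched in a few lines of Python. This works on bare sample pairs rather than real WAV files, and the tiny pretend sample rate is just to keep the example readable; the logic (split the channels, then cut each into clips of at most 10 seconds) is the same either way.

```python
RATE = 4          # pretend samples per second (real clips use 44100)
MAX_SECONDS = 10  # Second Life's per-clip length limit

def split_and_chunk(stereo):
    """stereo: list of (left, right) sample pairs.
    Returns the left and right channels, each cut into <=10 s clips."""
    left  = [l for l, _ in stereo]
    right = [r for _, r in stereo]
    max_len = RATE * MAX_SECONDS
    def chunk(track):
        return [track[i:i + max_len] for i in range(0, len(track), max_len)]
    return chunk(left), chunk(right)

# 25 "seconds" of stereo audio becomes three clips per channel:
# 10 s + 10 s + 5 s, ready to chain back together in-world.
stereo = [(i, -i) for i in range(RATE * 25)]
left_clips, right_clips = split_and_chunk(stereo)
print([len(c) / RATE for c in left_clips])  # [10.0, 10.0, 5.0]
```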


Now we’re stuck with solving the problem of how to play them simultaneously in Second Life. This is where the solution gets a little more complex, because we can’t just code a single cube and let it go. I mean, you could, but that would be a nightmare and quite limiting, since you’d be dumping everything into a single object’s contents.


So let’s say we have our main cube, it’s a controller cube. The entire purpose of this object is to orchestrate the left and right channels of audio, which are in two other objects running a clone of our audio script and listening on an internal channel for the controller to tell them what to do.


You have the left channel audio files in one object, and the right channel audio files in the other object, both listening for the controller cube to tell them when to start and stop.
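In-world this would be three LSL scripts; the sketch below only models the message flow in Python so the pattern is clear. The channel number, the “PLAY &lt;index&gt;” command format, and the clip names are all invented for illustration.

```python
CHANNEL = -77123  # hypothetical private channel the speakers listen on

class Speaker:
    """Stands in for a child object running the cloned audio script."""
    def __init__(self, name, clips):
        self.name, self.clips, self.played = name, clips, []

    def listen(self, channel, message):
        # Mirrors an LSL listen() handler: ignore other channels,
        # then act on a "PLAY <index>" command from the controller.
        if channel != CHANNEL:
            return
        cmd, idx = message.split()
        if cmd == "PLAY":
            self.played.append(self.clips[int(idx)])

left  = Speaker("left",  ["forest_L_1", "forest_L_2"])
right = Speaker("right", ["forest_R_1", "forest_R_2"])

# Controller loop: one broadcast command drives both channels in lockstep.
for i in range(2):
    for speaker in (left, right):
        speaker.listen(CHANNEL, f"PLAY {i}")

print(left.played)   # ['forest_L_1', 'forest_L_2']
print(right.played)  # ['forest_R_1', 'forest_R_2']
```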


This should sound eerily similar to a Night/Day system, except instead of day and night with two separate loops, we’re treating the two internal objects like left and right speakers playing simultaneously, and giving them paired audio that syncs together.


By doing it like this, we side-step the mono midrange problem and retain our side information, making the combined audio seem “bigger” and more robust. It just sounds more natural.


Of course, Second Life will still pan those sounds and apply Doppler, because it thinks they’re just two separate mono tracks and doesn’t know there’s a correlation between them.


The purpose of doing it this way is predominantly to retain the side information, which gives our audio depth. We end up with something that sounds more natural, and whether Second Life pans the tracks is irrelevant, because they are panned in relation to each other – and that’s what counts for retaining the symmetry of the audio. We’re effectively doubling the audio information being played back in-world, which sounds better to our ears.


These are what we can refer to as baseline ambient audio systems – our “first layer” foundation. We build from here to create a totally immersive environment. Once we’ve solved the original audio-information limitation, everything we build on top of it becomes easier.


Yes, we can have multiple cubes synced up, but I’ll be the first to tell you that you actually don’t want that. Having multiple cubes like this playing out of sync with each other makes your environment seem organic and “random” by default.


As you move around the environment, each cube stays in sync with itself but cues up out of sync with the others (if that makes sense).


In the audio editing phase of such a project, we can apply some more tricks. What if we applied wider spatialization to the stereo track before splitting it up? In-world, it would sound richer and more organic (within reason).
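One common way to do that widening is in the mid/side domain: convert, boost the side signal, convert back, and only then split for upload. A small Python sketch; the 1.5 gain is an arbitrary example value, and real widening should be judged by ear, since overdoing it sounds unnatural.

```python
def widen(left, right, side_gain=1.5):
    """Boost the side (L-R) signal to widen the stereo field."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid, side = (l + r) / 2, (l - r) / 2
        side *= side_gain            # exaggerate the channel difference
        out_l.append(mid + side)     # rebuild left from mid/side
        out_r.append(mid - side)     # rebuild right from mid/side
    return out_l, out_r

# A hard-panned sample gets pushed wider; an identical (centered)
# sample is left untouched, since its side signal is zero.
l, r = widen([1.0, 0.5], [0.0, 0.5])
print(l, r)  # [1.25, 0.5] [-0.25, 0.5]
```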


Once we understand how the audio system works, and a bit of audio theory for editing, we should be able to figure out how to get around the limits within reason.


I wouldn’t suggest that we can nail down true binaural audio in Second Life this way. But if we approached it slightly differently, then yes, we actually could.


Let’s say we applied the same technique to a pair of virtual headphones in Second Life. The headphones contain two objects, one per channel, with the headphones themselves acting as the controller. We apply the same stereo splitting and synchronization as above, but now from a fixed position in relation to the listener.


With this setup, we could replicate full binaural audio in Second Life, albeit in a highly controlled manner. You wouldn’t get real-time panning this way, but in the bigger picture, maybe you could invent a pair of ASMR headphones for anxiety relief that play back a fifteen-minute head massage in binaural?


At this point you should understand that there is a bit of a downside to this, depending on what you’re trying to accomplish.


  • Synchronized Stereo Costs Twice As Much


Developing such a system in Second Life also means your sound-cube systems now cost roughly twice as much to make, simply because you have to upload twice as many audio files.


You therefore wouldn’t want to apply this technique to everything, but instead figure out where this technique would best be suited.


There is also another cost factor when doing this for a Day/Night system. Instead of two sets of mono audio, you’re now using two stereo sets, so a Day/Night ambient system would cost quadruple what a single mono loop would have.


I suppose in the bigger picture, we’re talking about that up-front cost and investment, and whether the benefits outweigh the costs. I’d imagine we would have to charge a slight premium for these HD Audio systems, but as long as it was still reasonable I think the end-user would still pay for it.


For me, the benefits definitely outweigh the costs. Audio using such a system sounds far better in Second Life than the typical flat mono loops we’re used to. It has better dynamic range, and it sounds less flat and more robust – even though Second Life pans the audio around as you move, that side information is still there and (interestingly) complements it (I’ll get to that in a moment).


One could most definitely add the ability to “widen” the audio field in-world by allowing the end-user to expand or contract the distance between the left and right channels – which is just a fancy way of saying move those two internal cubes farther apart or closer together.





The Cetera Algorithm & HRTF 



The Cetera Algorithm is a reference to Starkey Labs and their hearing aid technology which makes the hearing aid seem invisible to the brain. Cetera removes the barrier between sound and the brain’s ability to process signals, and helps retain the subtle differences in arrival time between left and right ears so that your brain can process positional audio.


In the world of virtual reality, we also talk about this in terms of the acoustic properties of the Head-Related Transfer Function (HRTF), which models how 3D sound behaves in the room and how it arrives at your ears. That pattern of information is processed unconsciously, but it means a lot to the brain when determining position, whether something sounds “real”, and so on.
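To get a feel for the magnitudes involved, the classic Woodworth approximation estimates the interaural time difference (ITD) for a spherical head – one of the arrival-time cues described above. The head radius and azimuth values below are illustrative defaults, not anything Second Life exposes.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth ITD approximation: (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source dead ahead arrives at both ears at once; a source at 90
# degrees arrives at the far ear roughly 0.66 ms late. That tiny gap
# is exactly the kind of cue an HRTF model preserves.
print(round(itd_seconds(0) * 1e6))   # 0 microseconds
print(round(itd_seconds(90) * 1e6))  # 656 microseconds
```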


Let’s take an audio journey as an example:


Synthetic HRTF Audio Test


Of course, this is a massive oversimplification. As far as Second Life is concerned: yes, it can be done, but likely not anytime soon. We’re not talking about simple panning of left and right channels with a mono track, but a panning stereo track – and even then, a stereo track recorded in a very specific manner. Yes, we can effectively fake it in Second Life to a degree and under very controlled circumstances, but for our purposes here we’re discussing how to at least up the ante with stereo and the extended information at that level.


Suffice it to say, when somebody says that the difference between CD audio and Vinyl is “all in your head”, they don’t quite seem to understand how right they are (for all the wrong reasons).


While we may not get a full HRTF Model in Second Life, we can approximate things a bit. We can also take this information and subtle cues approach to help us further our understanding on how to approach and apply audio in the virtual world, even with our current limitations.


If we know that this extra information is paramount for the brain to process audio well, then we can look for ways to reasonably retain it (and the higher frequencies) whenever possible, for a more natural listening experience.




Planning Your Scene


Even if we know all these crazy details about human hearing and perception, and create a tool that exploits how Second Life works to effectively double the perceptual audio resolution, the best tools are still only as effective as the person using them.


For instance, we don’t actually want all of these ambient cubes synchronized to each other. With themselves, yes, for obvious reasons. But because of the nature of Second Life itself, and because such a system would invariably have delays for preloading anyway, it’s not a big deal. It’s actually preferable to leave the cubes unsynced: offset around your sim and playing out of sync with each other, they effectively randomize the soundscape based on where the end-user is and how they move.


The next thing to understand is that these individual cubes aren’t the end-all, be-all. We have to plan the soundscape ahead of time, and include the little details that layer things beyond the baseline.


A random cube that plays crows during the day and owls at night (or woodpeckers, or whatever) is a good addition to the baseline.


The real trick here is walking around the sim as you’re building and asking yourself:


What does this sound like?


There’s really no such thing as “silence”. The soda machine’s compressor hums, there’s distant walla in a city (like white noise), a door opens, a bell rings as you enter the store, whatever… things make noise on their own or when interacted with.


It all adds up.




AES Audio


A lot of this post comes out of long-term R&D at AMG (Andromeda Media Group) in Second Life. One of those projects has been improving audio by thinking about what these systems could really use, and what we weren’t happy with out of the box.


There is, of course, more to it than “We’ve doubled the audio resolution”, however impressive that may be. Things like Dynamic Crosstalk Suppression (DCS) are also included in our current prototypes.


Should another audio brand in Second Life wish to upgrade their own systems with this information, I wouldn’t mind. Whatever makes the experience in Second Life better overall is a win for everyone.


That being said, I’m not going to explain how we’re pulling off Dynamic Crosstalk Suppression. That’s our little secret.


As a final note, let’s recap how to upgrade our ambient audio systems in Second Life:


  • Stereo Synchronization
  • Understanding Audio Information
  • Using loops measured in minutes, not seconds
  • Optional Stereo Widening before splitting
  • Understanding the circumstances of how it will be heard
  • Dynamic Crosstalk Suppression
  • Optional User Defined Channel Widening
  • Using additional randomized audio to break it up
















Mar 1, 2018

A Tale of Two Labs

Insight from my trip to Linden Lab and Beyond


Aech's Garage



Clarity From Chaos


The sun shone down that day, caressing the silver Passat on I-5 as my assistant and I rolled through the unending fields of almonds which so graciously seem to blanket much of the California hills. Aside from the occasional confused cow (which I named Carl), there was little else to keep my mind occupied.


The radio in these parts seemed to assault my senses with either Spanish, Country Music, or Evangelism. It was clear I needed to turn that noise-box off for a bit and clear my head.


Sipping Carolina Honey Argo Tea (my favorite pretentious tea), I contemplated what lay ahead and the sheer insanity of it all.


After all, I somehow decided one day, out of the blue, to cart my balls around in a wheelbarrow and invite Ebbe Altberg, Samy Montechristo and Orion Simprini to hang out. It’s at this point I usually get the obligatory “How did you get a meeting with them?” question.


A: I simply asked.


Now, whether that speaks to me being important or not is up in the air. Maybe if you asked, they’d schedule you in as well?


Ok, Michelle will schedule you in… she’s Ebbe’s assistant. Maybe the guy wanted a court jester for an hour? Maybe Ebbe stared out the window as I left and said “What the hell just happened?” Who knows…


At least Orion and I have talked back and forth for years and have always wanted to hang out with each other… so weirdly, asking a freakin pop-star to hang out is less crazy than asking the CEO of Linden Lab if he wanted to hang out, but I figured “Why the hell not?”


We had been on a road trip down to San Diego (from Portland, Oregon) and decided a few pit stops along the way were essential if we were going to make the best of this coastal crusade.


Will in San Diego


Of course, a few days in San Diego with friends were mandatory, as were a few days in Carlsbad at the Grand Pacific Palisades Resort. Walking into the place, a staff member asked if I would like some champagne while my assistant was checking us in. Pretty fancy stuff, that. I’d highly recommend booking there if you’re in the area, especially if you get the concierge service.


Further on our way back up, I had arranged to meet up with Orion Simprini (lead singer for the Orion Experience) around Los Angeles. He and I had been bantering back and forth with each other since around 2011, and with both of us from the Tri-State originally (NY, NJ, PA), we always figured hanging out would be easy…


And yet, all these years our schedules and locations never managed to coincide. Until today, where I sat at the corner of Manhattan Avenue and Pier Avenue at Java Man Coffee House by Hermosa Beach sipping my mocha latte, and listening to Children of the Stars when the man himself strolled up.


Will & Orion

We hung out around Hermosa Beach on the pier, kicking our feet up and blowing each other’s minds. Orion was really interested in the idea of doing 360 music videos and VR experiences, so I walked him through the different approaches and what they all meant in terms of production and control.


My boy just sat there with his mind blown at the world of possibilities. At the end of it all, we both hit on a film project that seemed crazy enough to work and I agreed to be a technical advisor and collaborator.


“In the virtual world, you’re God. Nothing exists in your space unless you put it there, and how good it is depends entirely on your attention to detail.”


Of course, you’re not here for that story…




The San Francisco Treat…


As awesome as hanging out with Orion was, I still had other business to attend and a tight schedule (as my assistant loves to keep reminding me). So we bid farewell and continued our drive back up to Redding where we would spend another night.


On the 21st, I was scheduled to roll into San Francisco and stroll into Linden Lab.


People have asked me a plethora of questions about this visit, and I’d like to preface with a disclaimer: the moment you walk into Linden Lab and sign in, you also sign an NDA. For the sake of not disclosing confidential information, this post will focus on things I said and the obvious general information (already discernible or public), but not specific answers from Ebbe.


Just to make clear: I absolutely do not speak on behalf of Linden Lab, and anything disclosed here is either:


A) My opinion

B) Common sense (and/or public)


What I knew going in, was a bit scarce. So I simply took anything I saw there in the offices as “NDA” and that Ebbe has been trained by ninjas to seek and destroy.


I hadn’t really spent a ton of time in SANSAR prior to this day, but I did think it was kinda impressive for what it was. When answering the Facebook question about Ready Player One and Linden Lab, I was answering generally that they hadn’t dropped the ball there – but wasn’t going into details.


When Ebbe strapped me into the Oculus headset and toured me around SANSAR, I just took everything as “under NDA” as a precaution. Of course, Aech’s Garage was impressive, and it answers the question “Did they forget about a Ready Player One tie-in?”


Well, no… but I’m guessing if you keep up with news from the world of Linden Lab (and our wonderful bloggers around the grid), then Aech’s Garage is old news by now. I haven’t been keeping up on those news bites, so Aech’s Garage is new to me (and impressive).


Will & Ebbe

I did get some interesting insights during that discussion that maybe weren’t openly reported… more general thought experiments from Ebbe. I wouldn’t necessarily say this is top secret – more like, we really hadn’t thought of things like that yet, but it would be common sense if we had.


Of course, Linden Lab worked with Warner Brothers, Intel and HTC on Aech’s Garage. It’s a great CES demo for sure – at least for the stage SANSAR is at. Steven Spielberg has seen it first-hand with Ebbe – but that’s par for the course, since I know Spielberg is really anal about how his creative works are represented elsewhere, so Aech’s Garage (like anything else) would need his blessing. Hearing it from Ebbe that way made me chuckle internally, because it’s not the first time I’ve heard similar things about Spielberg in the industry.


Would Spielberg or Ernest Cline pop into Second Life or SANSAR for the movie release? I’d prolly say no, but then we already know that Ernest Cline has an account in Second Life, so how do we know the dude isn’t hanging out right now?


Spielberg… ok, that’s likely a “Hell no”. He doesn’t have time for that beyond a cursory one time stop-in. He’s Steven effing Spielberg.


There’s a good reason why SANSAR would be the go-to for companies like Warner Brothers for virtual world experiences. Generally, they want absolute authoritarian control over their intellectual properties, and Second Life (being you, the users) scares the ever-loving hell out of them.


To them, Second Life is like the inmates running the asylum. Of course, I always have contended that the chaos is manageable with the right approach.


And while SANSAR looks gorgeous, it still feels quite sterile. It’s more a static museum of spaces, versus Second Life which is entirely organic in nature. Each has a purpose, and I’m absolutely enthralled by the future.


But it still stands that the general public wants more than “pretty” when it comes to VR. They want (need) it to be of substance.



Aech's Garage – SANSAR


[HTC] Ready Player One – Aech’s Garage

Picture: Inara Pey




Of course, I could be wrong about Linden Lab throwing the doors open for Ready Player One… but I’m actually hoping I’m not wrong. Because opening the floodgates to SANSAR right now, with Spielberg in Aech’s Garage would be very, very, bad for SANSAR.


SANSAR is a work in progress, and noticeably so. That doesn’t mean it can’t get better over time. But if you were to throw it into the spotlight partially finished, the public would shred it and hold it up as an example of why virtual reality is failing rather than succeeding. The public would compare it harshly to other, more polished systems… and that’s unfair at this stage of development. In short, it would be the same mistake Philip Rosedale made with Second Life, so I’ll give Ebbe the benefit of the doubt and assume he knows not to repeat history.


I believe SANSAR is a good beginning, but it’s late to the party. There are places like VRChat and Sinewave.Space that are much farther ahead as platforms for developing VR content.


I’m sure there are other platforms as well, but I can’t be bothered to write out a laundry list.


Not that it matters. When it comes to VR Headsets, we’re still in a phase where engagement is measured often in minutes (20-30 minutes), whereas non-headset VR worlds like Second Life see an engagement rate measured in many hours.


SANSAR is a nice place to visit, but I wouldn’t want to live there.


Why Linden Lab is so hellbent on pushing SANSAR while effectively ignoring Second Life, or treating it internally like the wicked red-headed step-child, is anybody’s guess. But I’ll touch on this more later in the post, as I feel it deserves its own section.


That being said, the insight gained here was along the lines of all the assets these companies have already sitting around in 3D Models. IKEA for instance has their entire catalog in 3D models, Warner Brothers obviously has a massive asset library at their disposal, and so on.


So there is this opportunity to connect these Intellectual Properties into the virtual world – and that’s something Ebbe and I discussed while I was there. Not just “officially” converting existing assets from these companies into say, Second Life and SANSAR, but (and since I brought it up, I’ll assume it’s not covered in NDA – just the answer that Ebbe gave me) by converting the pre-existing user-generated content in those systems from IP Infringing content to Sanctioned under a “Brand Ambassador” program.


If you read this blog, then you know I’ve talked before about the premise of VIPER Licensing – and the question would be:


We all know if you cut off one head of the Hydra, ten more grow back. Why fight this when you can effectively leverage it so everyone wins?


The idea is to let Linden Lab be the “Steven Spielberg” figure and pre-negotiate those licenses on behalf of the userbase, create best-practices guidelines for the usage of those IPs, and then stamp the pre-existing content we all know exists on Marketplace but “shouldn’t” as brand ambassadors.


Of course, Linden Lab would up your transaction fee to 25% on real world branded merchandise you were making virtually. 10% for Linden Lab and 15% for the IP holder (Coca-Cola, etc). But that still leaves you with 75% of the revenue, and a massive amount of creative freedom without the DMCA fears so long as you follow guidelines for representing IP in a way that doesn’t tarnish the brand.


Now, I can’t tell you what Ebbe said in that office concerning this. Whether Linden Lab explores this is anybody’s guess, but I put it on their whiteboard and tried my best to explain it to him.


The IP problem still persists in SANSAR as well, since an arcade full of Nintendo IP likely isn’t sanctioned by Nintendo for SANSAR. And while I absolutely think this stuff should still exist (and will, regardless of what we do), I think there is a better way to address how we deal with this prosumer culture in the 21st century and especially with user-generated content platforms.


DMCA just isn’t getting the job done, and is likely throwing gasoline on the fire via the Streisand Effect. Being authoritarian about it all and trying to tightly control the creation process by requiring approval for uploaded assets would likely stifle and kill SANSAR – unless Linden Lab negotiated all those pre-existing assets from IP holders into SANSAR and SL themselves, which kind of defeats the purpose of user-generated content. Besides, companies never seem to know what use cases people need in a virtual world, so people will still end up filling in the gaps with their own content.


So maybe the best approach is to leverage and manage the chaos instead of trying to eliminate it?




Land Prices


Another conversation I had with Ebbe was about those land prices. It was a question posed in the Facebook post so I made sure to ask. It was kind of a side-note to another conversation about the structure of Second Life being based on land sales, and a way to balance the equation through other means.


I won’t say what those other options were that were presented in the conference room. I will say that Linden Lab is fully aware of the land price “issue” and looking into ways to lower the costs.


Now, after I had stated this for Wagner James Au on his blog (New World Notes), I was contacted in-world by a few Land Barons who (you could say) were slightly paranoid about this situation. If land prices went down, they would reason, then everyone would just buy land and get into business for themselves, putting them out of business and killing their investments…


I don’t see it that way, honestly.


I think competition is healthy, and spurs innovation. So I guess you just can’t get away with an automated rental system and a couple of volunteers working for tips and commission anymore. You may have to start treating those companies like actual companies, I guess?


Like I said, if land prices came down – it’s not without a sacrifice somewhere. Linden Lab is a company, not a charity. So you better believe they’re making the money up somewhere else if they do.


As for the responses given to Wagner for NWN, I’d like to point out that I was on a cell phone in the middle of a four-hour drive up the coast. Not exactly the best situation for collecting thoughts, which is why you can consider the NWN post a rough draft of sorts; this post collects my thoughts better.




F!@#ING LAG!


The other obvious question while meeting with the man himself came as a side question from Facebook that I saw concerning addressing lag in Second Life. I had already planned on presenting something that would drastically reduce lag (and bring a plethora of other benefits) in the right hands, so “reducing lag” in Second Life is a side-effect of this particular topic at hand.


The thing about Second Life is that lag is often inevitable. Game engines work in one of two modes: you’re either in edit mode or the simulation is published. When you play video games, everything is published and locked down, which gives the engine a chance to pre-bake textures, add lighting calculations and optimize the scene. Before that, every game engine has a live editing system that lets you place and design the scene with assets in a lower-quality mode, prior to optimizations and lighting calculations (and so on).



Unreal Editor

Unreal Engine: In-Editor Testing



Now, the “low quality” edit mode has come a long way over the years, to the point where it can look amazing nonetheless. I remember editing levels for DOOM in the ’90s, when the editor was just a top-down wireframe map; only after calculating the nodes and publishing the map could I walk around and explore it.



Doom Builder: Map Editing



Second Life (like any live-edit system, ActiveWorlds included) is essentially the game engine in permanent edit/preview mode. Those extra scene optimizations cannot happen, and so things are slower.


That doesn’t mean it’s hopeless.


We can begin by optimizing our assets for a game scene, choosing a balance between complexity and looks. There is also, of course, the possibility that Second Life could introduce User Generated Zones.


For this, I’m borrowing from my years in the ActiveWorlds universe (before Second Life existed).




Zones 


The idea of Zones is simple: you use prims with extended properties to better control spaces in a scene. That might mean "particles originating outside this zone do not enter it" (or vice versa), or "assets within this zone do not load until a user enters the zone."
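As a rough sketch of the concept (in Python, purely illustrative, since Second Life has no native zone API and the class and flags here are my own invention), a zone is little more than an axis-aligned box with behavior flags attached:

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """Hypothetical zone: an axis-aligned box with extended behavior flags."""
    min_corner: tuple  # (x, y, z) of the box's low corner
    max_corner: tuple  # (x, y, z) of the box's high corner
    block_outside_particles: bool = False  # rain stays outside this box
    lazy_load_assets: bool = True          # contents load on first entry

    def contains(self, point):
        """True if point lies inside the box on all three axes."""
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.min_corner, point, self.max_corner))

# A 20x20x15 meter "house" zone that keeps weather particles out.
house = Zone((10, 10, 0), (30, 30, 15), block_outside_particles=True)
inside = house.contains((20, 20, 5))    # True: weather sprites culled here
outside = house.contains((50, 20, 5))   # False: weather renders normally
```

The containment test is the whole trick: every zone behavior (particle blocking, lazy loading, privacy) reduces to asking which box an avatar or effect is currently inside.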


These options were, and remain, available in ActiveWorlds to this day (20 years later). User-generated zones would instantly make things like screen-space particles (weather) possible without it raining inside your house. With that in place, you could also (in theory) run a script that changes the weather on the sim based on a location-aware call to a weather service API.
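That weather-service idea is simple to sketch. The following Python is a hypothetical outline (the URL, JSON field names, and particle presets are all invented for illustration; a real build would use an actual weather API and the sim's particle system):

```python
import json
import urllib.request

# Hypothetical endpoint; a real script would use a genuine weather API and key.
WEATHER_URL = "https://example.com/weather?lat=37.77&lon=-122.42"

# Map reported conditions to sim-wide screen-space particle presets.
CONDITION_TO_PARTICLES = {
    "rain": {"texture": "raindrop", "rate": 200},
    "snow": {"texture": "snowflake", "rate": 80},
    "clear": None,  # no weather particles at all
}

def fetch_condition(url=WEATHER_URL):
    """Poll the (hypothetical) weather service for the current condition."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("condition", "clear")

def particle_preset(condition):
    """Pick the particle settings for a condition; None means clear skies."""
    return CONDITION_TO_PARTICLES.get(condition)
```

A sim script would run `particle_preset(fetch_condition())` on a timer and feed the result to the emitters, while the zone flags from the previous section keep the rain out of people's living rooms.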


The zones can also serve as user-generated culling spaces, which (used properly) can drastically reduce lag in a virtual world, especially in a highly dense and populated sim.
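Zone-based culling is also easy to illustrate. In this self-contained Python sketch (again hypothetical; the function names and data shapes are mine), an asset tagged with a zone box is only drawn while the avatar is inside that box, so everything behind closed apartment doors simply never loads:

```python
def in_box(point, lo, hi):
    """Axis-aligned containment test on all three axes."""
    return all(a <= p <= b for a, p, b in zip(lo, point, hi))

def visible_assets(avatar_pos, zoned_assets, unzoned_assets):
    """Culling sketch: zoned assets draw only while the avatar occupies
    their zone box; unzoned assets (terrain, skyline) always draw."""
    drawn = list(unzoned_assets)
    for asset, (lo, hi) in zoned_assets:
        if in_box(avatar_pos, lo, hi):
            drawn.append(asset)
    return drawn

# Two apartments' furniture, each tagged with its apartment's zone box.
scene = [("sofa", ((0, 0, 0), (10, 10, 5))),
         ("bed",  ((20, 0, 0), (30, 10, 5)))]

visible_assets((5, 5, 1), scene, ["terrain"])  # avatar in apartment one
```

With the avatar standing in the first apartment, only the terrain and the sofa render; the neighbor's bed costs nothing until someone actually walks in.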


Of course, there are other benefits to Zones. One is assigning smaller 3D spaces for building rights and permissions. An example:


Think of an apartment building with, say, 20 apartments. Now imagine you could offer each apartment privacy, localized building rights, and so on, much like parcelling, but as a surgical tool rather than an all-or-nothing blunt instrument.
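Per-zone rights could be as simple as a permissions table keyed by zone rather than by parcel. This Python sketch is hypothetical (the zone IDs, avatar names, and fields are invented), but it shows the "surgical" granularity the parcel system lacks:

```python
# Hypothetical per-zone rights table, replacing parcel-level all-or-nothing control.
apartment_rights = {
    "apt_101": {"owner": "alice", "can_build": {"alice"}, "private": True},
    "apt_102": {"owner": "bob", "can_build": {"bob", "carol"}, "private": True},
}

def may_build(zone_id, avatar):
    """True only if the avatar is on this specific zone's build list."""
    rights = apartment_rights.get(zone_id)
    return rights is not None and avatar in rights["can_build"]
```

Here carol can decorate bob's apartment without either of them touching the rest of the building: `may_build("apt_102", "carol")` is true, while `may_build("apt_101", "bob")` is false.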


For a more in-depth idea of what zones can do, just click the link above for the ActiveWorlds Wiki description.




Weather


Suddenly, weather built into Second Life doesn't seem so crazy anymore. When last I checked, Windward Mark Interactive was wholly owned by Linden Lab. So if Linden Lab got WindLight out of that acquisition, I would imagine they also got the weather system that came with it.


Yes, I personally backed up the Windlight demonstration video from Windward Mark Interactive about 7 years ago on my YouTube Channel precisely for this purpose.


Even if they somehow didn’t get the weather system, it’s not like a screen space particle system isn’t a turn-key solution these days from other 3rd parties or even in-house.


It’s there… sitting on a shelf, unimplemented. The Zones proposal above solves the original reason weather wasn’t implemented in Second Life 7 years ago, and of course Zones solve a ton of other things while they’re at it.




Particles


About the only thing I have to say about particles is that they should be integrated into the toolbox (create) floater with the rest of the tools.


Yes, you should still be able to script particle emitters by hand if you wish, but there’s really no reason to have left a WYSIWYG editor for them out of the toolbox.


Again, ActiveWorlds made this happen years ago, so it boggles my mind why this is still a point of contention in Second Life development.


One of the things you’ll notice at the Wiki link is the Particle Type drop-down. Sprite is selected by default in the picture, and it’s the “screen-space particle” most often used for weather effects.


Particle properties dialog


“What about the user made HUDs sold on Marketplace to handle Particles?” you may ask.


It is my contention that when your user base creates elaborate workarounds to fill functionality gaps in your product, it’s time to prioritize native integration.


See Also: Somehow nobody thought Second Life would officially need a web-based marketplace until XStreet sprang up from the end users, and it was eventually bought out by Linden Lab.


This should have been native in the toolbox from the beginning, and it’s mind-boggling that this many years into Second Life, it still isn’t.


Of course, I say that about weather and zones as well…




Making Mirrors


One of the other subjects I brought up was how to implement mirrors in Second Life without causing a ton of lag.


I’ll just let the link above explain it.





The Tale of Two Labs


Why Linden Lab is so hellbent on pushing SANSAR while effectively ignoring Second Life, or treating it internally like the wicked red-headed stepchild, is anybody’s guess. I’ll touch on this more later in the post, as I feel it deserves its own section.






And now we get to the bigger question.


As noted by Bixyl Shuftan and his apparently keen eye, Linden Lab recently laid off about 12 people, and their VP of Product has apparently resigned to work elsewhere.


That squares with my impression of Linden Lab during my visit, and with what Bixyl found in recent Glassdoor reviews from former employees, one of which I’ll share here:


The other review from an anonymous employee, "Make LL Great Again" was less critical. But he still had a negative outlook, "Company is too focused on its new product; those who work on the mainstay feel set aside and taken for granted. HR is nearly non-existent; most team members are remote or barely in the office. Too many org changes lately have left folks feeling insecure, morale is low and sinking. I hope they can turn it around." His advice to the Lab's leaders, "You have some really awesome people there; get out of their way."


While I was at Linden Lab, I definitely got the feeling that SANSAR was the main focus with a near total avoidance of discussing Second Life or its future. It’s technology evangelism at its peak.


As far as Ebbe is concerned, he’s all-in for SANSAR while Second Life is … somewhere in the basement level with the engineers.


On one side of the equation, I can see why Ebbe would be all-in for SANSAR. I’d assume Linden Lab spent a stupid amount of money developing it and couldn’t afford to pull the plug, so he was likely told to produce an ROI come hell or high water.


Welcome to the board of directors world.


In a way, I’d assess that Rodvik made a mess and Ebbe is still trying to clean up and/or salvage things.


That being said, I have to agree with said anonymous former employee.


As a CEO, Ebbe has a choice to make – He is the captain of the Linden Lab ship, but he also decides what sort of captain he wants to be:


Captain Picard or Captain Ahab.


Right at this moment, he’s showing qualities of Captain Ahab in his blind pursuit of SANSAR (his Moby Dick). But I believe he’s intelligent and an overall great guy, smart enough not to sabotage his own efforts and company.


After all, Second Life is still the goose that lays the golden eggs. It didn’t die; it’s just being actively starved and strangled by the aforementioned organizational changes and CEOs.


Which is really unfortunate, because I believe Linden Lab also has some brilliant and creative people with their hands tied, people who absolutely love Second Life and want to make it better.


While I don’t have a Linden nametag myself, I pushed my devotion to virtual worlds as far as I could take it: crossing the country (New Jersey to California) to make sure I was physically parked in front of Ebbe at Linden Lab, whiteboard at hand, to discuss some of these things (and more).


That’s some devotion right there, and, sadly, it’s as far as I can take things short of officially working at Linden Lab.


What Linden Lab does with the information is up to them. I’ve done all that I can in the process, and will turn my attention back to other projects.





I don’t suppose I’ll be coming back to San Francisco unless somebody gives me a damned good reason.