Jun 24, 2012

Pixel Perfect

Exploring the premise of an ideal virtual environment | #Secondlife


If you spend enough time in virtual worlds, you eventually come to a number of realizations concerning pretty much every conceivable facet involved, from the very basics of the hardware foundations to the high-level sociological implications. For the most part, we partake in virtual worlds initially out of curiosity, and then invariably move on to more complex interactions such as creating items for a marketplace or scripting. There is, of course, marketing, building, and quite a lot of other “services” one could get into in a virtual existence, such as the jobs of host, manager, DJ, dancer (exotic or not), and maybe even the more extreme situations where you enter the sex industry in some fashion.


Underlying all of this, however, there comes a point where you’ve reached a plateau in your experience of synthetic environments and turn your attention to the underlying aspects of what makes such a place tick. I like to call this a pixelated enlightenment.


What this post will involve is precisely this train of thought as an exploration into how to make this synthetic environment culture better overall by making some fundamental changes to how the structure works. For all intents and purposes, let’s say this is an article about what I personally believe would constitute the Perfect Virtual World.





An image from the upcoming game RESET from Theory Interactive. This is real-time, in-game.



Resource Based Economy


On the surface, the problem with sandbox-based virtual environments is that there is a lack of perceived value and scarcity of resources. In Second Life, for example, the only resources you have are prims and land impact, coupled with your financial depth. This works in a rudimentary manner to create a sense of worth for items created, traded and sold in the virtual environment, but does little to instill a sense of actual worth in those items.


When you purchase or create items in Second Life, those items are essentially infinite. Without the scarcity of restriction, those infinite items rapidly depreciate in perceptual value – and I am often boggled by how people can apply worth to virtual items that are infinite to begin with. It’s not like a scarce resource in limited supply or availability – as long as the listing exists on Marketplace or in a store, you can always get a copy of that item just as perfect as the first one the creator made.


I believe in the grand scheme of things that this methodology has to go.


I play a lot of Minecraft myself, and I think one of the most telling aspects of that game is that, despite the rudimentary graphics, it is highly popular. So clearly, there is an element at play which is compelling enough to bring in players. I’d argue that the underlying idea which compels players to continually take part in the game is the idea of resources and what you can do with them.


As a game, it takes into account by its very nature the underlying principle of resources and limits the players to being able to make only what they can afford in harvested resources. While this is compelling in a basic form, I still see an issue with this methodology.


Minecraft is still a static game world, and so the players are mostly limited in what they can create based on the predefined “recipes” in the game. We can see why a majority of the creative process in Minecraft falls under building structures or Redstone circuitry in this model, because those are the two main options for creative exploration – assuming we ignore the modding community, which I will do for the moment.


In a sandbox virtual environment such as Second Life, it would be better to tie various means of creation to built-in resources in the virtual world. While this is very unlikely to happen given the structure of the current system (limitations of perceived spatial occupancy in regions and terrain mechanics), I believe that if one were to sit down and reconstruct an entirely new synthetic environment system from the ground up, that resource scarcity should be an underlying priority.


Economics, at its core, is defined as the efficient management of scarce resources on a regional or global scale, with money acting as the incentive and substitute that ties everything to a homogenized measure of value. But in a virtual environment, we’re not managing any scarce resource other than processing impact, so the “prim count” structure really just meters how many CPU cycles you are allowed to utilize on any given instance.


So let’s take into account a set of resources up front in order to better manage an actual economy. After all, the things we create in a virtual world should be made of “something” and those somethings should be in limited supply based on the harvesting of the users within that system.



Minecraft Blocks



The aspect that I like about Minecraft and resources is not the recipes for item creation but instead the resources idea itself. In the context of a sandbox virtual environment like Second Life, where you can create anything, it would be pretty stupid to limit people to pre-defined recipes.


That being said, the element of natural scarcity and resources should remain when building anything in a sandbox virtual environment. So, let’s say we pre-define only the resource types from which items can be made, or which can be used for things such as certain elements of scripting.


This is a very good foundation for our perfect virtual world scenario, because now we’ve tied inherent value to items created in the system, with nothing lasting “forever”. Items would come with a built-in durability, resource cost, and scarcity based on how much of a certain resource is available at the time of purchase/creation – which, in turn, is based on how much of their time the population of the virtual environment are spending to collect/harvest those resources to sell to the system or keep for themselves.


In this manner, we define our resources as:


1. Mundane – Wood, Stone, Plastic, Glass, etc.

2. Rare – Precious metals/gems

3. Exotic – Imaginary type resources (float-stone, alchemy, etc)

4. Energy – Resources used for the production of electricity, fuel or food


Mundane items are abundant but have lower durability – clearly wood, stone, and so on in this case. Wood would be the lowest common denominator as a fuel source (burning), along with perhaps coal. Rare resources have a much lower probability of being found and carry higher durability or “worth”, such as gems or precious metals. Exotic materials like “float-stone” would be exceedingly rare but hold exotic properties, like the ability to grant unconventional flight to an object built using it. Then there is the Energy resource, which can be mundane, rare or exotic – in this case we say wood/coal is mundane, refined resources like gasoline are rare, while exotic energy sources could come from crystallized sources like zero-point crystals.


Here’s the more detailed breakdown:


Let’s say you want to build a house in Second Life. Every prim or mesh you use would have a setting for the resource it is made of (and technically they already do, with Materials). In the context of link sets, the cumulative resources of all of the prims make up the final resource cost for that particular item, and the overall durability of that item is the average of all durability ratings.
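The link-set arithmetic above can be sketched in a few lines of Python. The material table, costs and durability figures here are invented purely for illustration; nothing like this exists in the actual Second Life API:

```python
from dataclasses import dataclass

# Hypothetical per-material properties (illustrative values only)
MATERIALS = {
    "wood":  {"durability": 0.3, "cost": 1},
    "stone": {"durability": 0.6, "cost": 2},
    "gold":  {"durability": 0.9, "cost": 25},
}

@dataclass
class Prim:
    material: str
    volume: float  # arbitrary units of resource consumed

def linkset_cost(prims):
    """Cumulative resource cost: each prim's material cost times its volume."""
    return sum(MATERIALS[p.material]["cost"] * p.volume for p in prims)

def linkset_durability(prims):
    """Overall durability: the average of the prims' durability ratings."""
    return sum(MATERIALS[p.material]["durability"] for p in prims) / len(prims)

house = [Prim("wood", 4.0), Prim("stone", 2.0), Prim("gold", 0.1)]
print(linkset_cost(house))                  # 4*1 + 2*2 + 0.1*25 = 10.5
print(round(linkset_durability(house), 2))  # (0.3 + 0.6 + 0.9) / 3 = 0.6
```

Note the side effect of the sum-of-costs / average-of-durabilities rule: mixing a little rare material into a mostly mundane build raises the durability without exploding the resource cost.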


The only time that the resources are actually in play is for the initial creation and subsequent copies. Other than that, they are essentially used for the calculation of scarcity, durability and behavior of the item.


In the context of a house, it is made of many different types of resources – though I’m sure people would think it is clever to make them entirely out of an abundant resource like “wood” or “stone”. With the resource comes the underlying properties of durability and availability. So when you make the item, even if it’s a bunch of link sets, the pieces first have to make sure that the central economy can support deducting the resource for the instantiation of that item before it can be “rezzed”.


If there is a run on a particular resource that the item is made of, then the item cannot be copied into the virtual world space until the resources become available again – either from the world economy or from your own personal stores. This is why you have the option of either keeping your resources to yourself as you harvest them or selling them back to the world economy for resource credits. It’s a choice as to what path you want to take, and each has benefits and detriments.
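A toy version of that rez-time check might look like the following; the resource names and pool structures are assumptions made for the sketch:

```python
# An item copy only instantiates if its resource requirements can be
# deducted from the world economy or the owner's personal stockpile.

def try_rez(requirements, world_pool, personal_pool, use_personal=False):
    pool = personal_pool if use_personal else world_pool
    if all(pool.get(res, 0) >= qty for res, qty in requirements.items()):
        for res, qty in requirements.items():
            pool[res] -= qty
        return True
    return False  # resource run: item stays un-rezzed until supply recovers

world = {"wood": 100, "float-stone": 0}
mine  = {"wood": 10, "float-stone": 3}

print(try_rez({"wood": 5}, world, mine))                            # True
print(try_rez({"float-stone": 1}, world, mine))                     # False (world run)
print(try_rez({"float-stone": 1}, world, mine, use_personal=True))  # True
```

The use_personal flag captures the keep-or-sell choice described above: hoarded resources let you rez through a world-economy shortage, at the cost of depleting your own stockpile.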


Of course, I’m also hearing people crying bloody murder already about what if they just want the freedom to create without these resource restrictions much like they already do in Second Life?


I did think about this, and to that end factored in an infinite resource called Omnite, which is essentially the universal resource of abundance and high durability. The caveat of using Omnite for your creations is that while it comes with high durability and is always plentiful, the trade-off is cost. Omnite is the most expensive resource in the economy because it can substitute for any and all resources.


Omnite isn’t an infinitely durable resource either, however high its durability is. In order to obtain Omnite from the economy, you would have to spend more resource credits (which you can earn or buy outright) – resource credits being the universal currency in the virtual environment.


What you would have in the end are items made of conventional resources harvested by the population (with varying durability and qualities), which end up being less costly as an incentive, versus items that skip the resource requirements entirely and are made purely of Omnite at a higher cost to the end buyer.


Therein we maintain a balance between a real resource economy and traditional style “creative” forces in a virtual environment.


Certain commands in a scripting environment would also come with a resource caveat in order to perform a scripted task – for example, to fly, an item would need X amount of float-stone in its build and would consume X energy units per minute to maintain flight. You could script the item accordingly, but it would eat resources and also break down over time.
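Sketching that idea: a scripted flight ability that burns an assumed energy rate per minute and simply stops working when the reserve runs dry. The class, rate and units are all invented for illustration:

```python
class FlightScript:
    ENERGY_PER_MINUTE = 2  # assumed consumption rate

    def __init__(self, energy_units):
        self.energy = energy_units
        self.flying = False

    def tick_minute(self):
        """Called once per simulated minute while flight is engaged."""
        if self.energy >= self.ENERGY_PER_MINUTE:
            self.energy -= self.ENERGY_PER_MINUTE
            self.flying = True
        else:
            self.flying = False  # out of fuel: the item drops out of flight

craft = FlightScript(energy_units=5)
for _ in range(3):
    craft.tick_minute()
print(craft.flying, craft.energy)  # third minute fails: False 1
```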


Scripters could still substitute Omnite for any or all components since it’s the universal resource, but that would make their creation more expensive to operate and purchase. This being said, the other thought on this resource based economy is that when a particular resource is not abundant enough to facilitate the creation of an item, you can always substitute Omnite at the point of copy/sale for a higher resource credit cost.
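A minimal pricing rule for that Omnite fallback could look like the following, where the markup multiplier is a made-up number purely for illustration:

```python
OMNITE_MULTIPLIER = 4  # assumed markup: Omnite substitutes for anything, at a premium

def price_copy(requirements, base_prices, available):
    """Price a copy, substituting Omnite for any resource that is out of stock."""
    total = 0
    for res, qty in requirements.items():
        if available.get(res, 0) >= qty:
            total += base_prices[res] * qty
        else:
            # scarce resource: fall back to Omnite at a higher credit cost
            total += base_prices[res] * qty * OMNITE_MULTIPLIER
    return total

prices = {"wood": 1, "float-stone": 10}
stock  = {"wood": 50, "float-stone": 0}
print(price_copy({"wood": 5, "float-stone": 2}, prices, stock))  # 5 + 80 = 85
```

Under a rule like this, scarcity feeds straight into price: the same item gets more expensive exactly when the population stops harvesting the resources it is made of.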


Now we have a reason to maintain a healthy resource economy, while putting vast numbers of people to work in various endeavors. The healthier the resource economy is, the lower the cost of goods within it. The worse it is, the more items end up defaulting to Omnite and thus costing more.


There’s our fair trade for both ends of the spectrum.



Graphical Fidelity


Without a doubt, graphical fidelity should be as close to photorealistic as possible, with as little hardware requirement as possible to achieve it. Photorealism is a hallmark of an advanced Metaverse system not because of the photorealistic quality itself, but because it implies a top-end capability that includes every lesser quality level at the same time.


Photorealism does not exclude lower quality virtual environment spaces, but instead simply widens the premise of the entire system fidelity.


There are two schools of thought on how this will be accomplished, and both are likely to achieve it over time. On one side we have polygons, taking the brute-force approach to increasing graphics fidelity; on the other we have efforts like Atomontage and Euclideon with procedural and point-cloud data approaches.


Personally, I’m in the cheering section for Euclideon – mostly because they aren’t using a brute-force approach to graphical fidelity but instead intelligent framing of the problem. In the grand scheme of things, I suppose the analogy is thus:


Polygon methods are the jocks in high school: the answer to everything is brute force. It gets the job done, but not very intelligently. Euclideon is like the geeks in the chess club – they’re taking their time analyzing and working smarter, not harder, to find an elegant and less expensive solution to the same problem.


Regardless of which methodology reaches this point, it seems assured that photorealism is inevitable over time in either scenario. The real question is how long it will take to get there, and whether it will require a supercomputer to pull off.







Binaural Audio


If you’ve never heard binaural audio before, you’re in for a treat. Essentially it’s 3D audio that takes head-related transfer function (HRTF) calculations into account to create positional audio. It’s a way of recording stereo audio that preserves spatial cues without the usual stereo fatigue.
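For a feel of the math involved, here is one of the simplest HRTF-related cues, the interaural time difference (ITD), computed with the classic Woodworth approximation. A real binaural renderer applies full measured HRTFs per ear; this single cue is just a back-of-envelope sketch:

```python
import math

HEAD_RADIUS = 0.0875    # meters, roughly an average adult head
SPEED_OF_SOUND = 343.0  # m/s in air

def itd_seconds(azimuth_rad):
    """Woodworth approximation: ITD = (a / c) * (theta + sin(theta))."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

# A source directly to one side (90 degrees) arrives roughly 0.66 ms
# earlier at the near ear than at the far one:
print(round(itd_seconds(math.pi / 2) * 1000, 2))  # ~0.66 ms
```

Delays on that sub-millisecond scale, combined with level and spectral differences between the ears, are what let the brain place a sound in 3D space.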


Use Headphones for the following audio tracks.





Decentralized Nature


The underlying structure should be a hybrid decentralized structure to avoid any particular choke points of access or failure while still maintaining some semblance of control for the important things.



Data Agnostic


Everyone has a cache, yet nobody is sharing; instead, a central datacenter is too busy trying to serve everyone with redundant information and eventually (inevitably) failing over time. A perfect virtual world wouldn’t need to constantly ask a central server for data, but would instead ask other users in the network. Not only that, but if the data itself is agnostic (without inherent meaning), then the individual user’s local cache could be scanned first when making requests, before downloading anything from the network – meaning that, over time, there is less and less need to download anything at all, because users’ caches already hold the relevant data for new items.
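One common way to make data “agnostic” in this sense is content addressing: name every asset by the hash of its bytes, so any copy – local, peer or server – is interchangeable and verifiable. A rough sketch, with network_fetch standing in for whatever transport is actually used:

```python
import hashlib

local_cache = {}

def fetch_asset(content_hash, network_fetch):
    """Return asset bytes, preferring the local cache over the network."""
    if content_hash in local_cache:      # 1. check the local cache first
        return local_cache[content_hash]
    data = network_fetch(content_hash)   # 2. fall back to peers / server
    # because the name IS the hash, any source can be verified:
    assert hashlib.sha256(data).hexdigest() == content_hash
    local_cache[content_hash] = data
    return data

blob = b"mesh data"
h = hashlib.sha256(blob).hexdigest()

calls = {"network": 0}
def network_fetch(content_hash):
    calls["network"] += 1
    return blob

fetch_asset(h, network_fetch)
fetch_asset(h, network_fetch)
print(calls["network"])  # 1 -- the second request never touched the network
```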



Real World Inclusive


Items in the virtual context should have ties to the real-world context. This is where the virtual world comes together with augmented reality to create a hybrid space: virtual items brought out of an online inventory and placed digitally in your real-world living room via augmented reality. Why not? We can scan real life into the virtual world, so why not make the exchange work in both directions?


More importantly, it’s a marketing/advertising opportunity that has yet to really blossom.



Academic, Non-Profit and Commercial Licenses


There is absolutely no reason to charge full-scale commercial licensing costs for personal or academic usage of such a system. It’s downright ignorant on all levels to lump them all together like that. In a perfect virtual environment, non-commercial and academic usage of the system would be free or steeply discounted, while commercial usage would carry standard pricing tiers.



The Platform That Scales Itself


One system for every possible device and scenario – not stripped down or reworked for each context. How things look on a standard computer is how they look on a cell phone or tablet. High definition on the native desktop client also means high definition, and nothing different, for the embedded client in Facebook – aside from interface changes to accommodate usage abilities.


Procedural technologies inherently built in to allow automatic fidelity scalability into the future.



These are just some of my thoughts about what points would create a perfect virtual environment for the next phase of this industry. Clearly this isn’t something you’re going to get in the current generation such as Second Life, so the best I can hope for is the next generation to pull this stuff off.



Comment Question


What things would you like to see in a next-generation synthetic environment that would help make it perfect for you? Drop some comments below and let me know.





Jun 20, 2012

Microsoft Surface

Oh look, it’s a laptop/tablet hybrid design.


On Monday, Microsoft had this huge press conference to unveil plans for something they said would be revolutionary and, more to the point, a paradigm shift in computing interfaces. Of course, this is a really loose paraphrase because for all I know they were promising that they’d invented a new way to slice bread.


Over the weekend, millions of people were taking shots at what this amazing revelation could be, while I, on the other hand, already knew. To be honest, I knew this was coming a year ago, so this is yet another case of the future catching up to me instead of the other way around. When I originally said it, I asserted (as I have continued to do) that the only way Windows 8 Metro made any sense at all was if it was married to a hybrid tablet/laptop design, and that Microsoft was clearly designing an OS for devices that didn’t exist yet but that they anticipated would exist.


Let me introduce to you the beginning of what will be the convergence of computing.


When I say “convergence”, what I mean is that for all the times people have told me that tablets and mobile are the future, I just laugh and tell them they’re wrong. It’s not about tablets or mobiles, and it’s not about whether or not PCs in any form are dead. What this is about is hybrid computing environments, as I said last year in the article The Future: Now In Tablet Form.







A tablet used as the screen which is detachable from the keyboard? No way!



Last year I asked the question: what if the future is just a scenario where the screen on your laptop is the tablet, and it detaches from the base unit to become a mobile device? Well, here it is. That being said, this is only the beginning of that future. The second part of my question concerned the keyboard section being the more powerful base station, while the tablet/screen portion was the mobile computer on the go. When you docked the tablet to the base station, processing would switch over from the tablet to the base station to handle tasks in a more robust manner.


We’re not quite there yet, but there are clearly prototypes in the works. The base station concept is a logical next step for computer manufacturers at this point, instead of just slapping a keyboard on a tablet and calling it a day. At least I really hope so, because it would suck horribly if we were all being forced into a future where we’re paying the same price as an entry-level laptop and getting half the power and capabilities.


In this scenario, it’s also likely that systems like this would be wirelessly connected to the home computer as a server or something to offset the fact that you’re getting royally screwed on tablet storage. It’s 2012 and I still think cloud computing and storage as it is currently presented is a rip-off, and you can quote me on this freely.



In which Microsoft sells you half the computer and OS for the same price while calling it innovation.



Renting Your Data


I’m not OK with this idea and never have been. For cloud storage services, the cost you pay is astronomical in relation to the actual storage space you get in return. At roughly $50.00 USD per month for 1TB of cloud storage, within two months you’ve already paid enough in rent to have bought an external 1TB hard drive outright.


For instance, let’s take Dropbox as an example. On average, 1,000GB of storage is $15.00 USD per user, per month, with a maximum of 500 users. For an average household of five people (Mom, Dad and three kids), that’s $75 per month. Right there, in the first month, you may as well have bought a 1TB external HD and hooked it up to your home network. As a matter of fact, over the life of using cloud storage you would have spent so much money for the convenience of accessing your stuff “everywhere” that it becomes overkill.


Don’t get me wrong, cloud storage and access are going to continue being integrated into nearly everything, and you are going to be pushed into using them at every turn. For evidence of this, just look at the tablet/laptop hybrid above and ask whether a 64GB hard drive is really enough for you.


Of course not.


That’s why Windows 8 conveniently comes with SkyDrive cloud storage, where you can rent storage space continuously at a price which may as well be highway robbery. The best part about cloud storage is that there is no guarantee that your data is safe, let alone that it will remain available. Let’s be clear about what cloud storage is: renting space on somebody else’s computer, where they will gladly “keep it safe” for you for a rental fee that far exceeds the cost of just buying an external drive altogether. Overall, you may as well be paying a premium to store your own files – to the tune of roughly 45x the cost of the hard drive. Of course cloud computing and storage are big industry when there is that ridiculous amount of money to be made.





Apparently I have $4,500 worth of Cloud Storage in the palm of my hand.



Then there is Google, which also offers Drive cloud storage services and generously gives 5GB freely. I am aware that there are plenty of cloud computing applications that are a benefit to humanity such as GMail, Google Docs, etc… I’m not disputing that at all. What I’m getting at is that cloud storage as it is currently presented seriously bothers me.


My personal thoughts are that the security and privacy of your data are about what you would expect with, say, Facebook. Which is to say, they all talk a big game but likely fall flat on their faces in practice. There is nothing to say that the data center won’t have a cascade failure, or that the service itself won’t be discontinued after you’ve spent all that time investing in it with your data.


If you think that’s not likely, just ask yourself about Apple and their now-defunct MobileMe cloud system that so many people jumped on board with. When Apple moved over to iCloud, one of the things that wasn’t included in the transition just happened to be the MobileMe iDisk storage service. Essentially, users were prompted to download all of their data locally to avoid losing it.



What happens to the files on my MobileMe iDisk?


You can continue using MobileMe iDisk through June 30th, 2012, even after moving to iCloud. After June 30th, iDisk will no longer be available. You should save copies of all files stored on iDisk before that date. Please read this article for details.




You wouldn’t think this is a big deal, but in the future scenarios like this will likely be far more of a problem. The only reason it’s even feasible right now to follow those directions from Apple about downloading all of your files from iDisk is that your local hard drive is actually big enough to handle it… The real question I’m left asking internally is: why the hell didn’t Apple simply migrate iDisk data over to iCloud for their users automatically?


But with tablets and hybrids, the internal storage is intentionally smaller in order to create a need to use these built-in cloud storage options (like iCloud or SkyDrive) so you actually become wholly dependent on it for your main drive.


So let’s fast-forward five years, to where your average computer is a tablet hybrid and adequate internal storage would be a 500GB SSD. Even granting the premise that cloud storage will get cheaper, in a scenario like the one above – where a migration or storage failure requires you to back up your stuff locally – you likely won’t have that option, because you’ve had no need for an adequate local storage system, and it was so much more compelling and cheaper to buy a tablet with only 128GB of internal storage (if you’re lucky).





April 1997 is when I got my Hotmail account. Now Microsoft just attaches everything to it.



Of course, there is the other caveat of this cloud storage paradigm: it’s only useful if you have internet access. Otherwise, if you’re in a dead zone or don’t have WiFi where you are, you’re pretty much screwed out of most of your data.


Cloud storage and applications should be treated as a secondary option, not your primary option. But unfortunately, they are being integrated into operating systems and devices under the premise of being your primary. That’s what bothers me… well, that and no longer just buying a program but instead indefinitely renting it – which I’ll concede isn’t totally the norm at this point, but does seem to be happening with gaming and other media forms (ebooks). You may as well be renting your games indefinitely at this point if you’re tied to a subscription or account with an online gaming service like Steam, Xbox Live, Origin, etc. just to be able to play the game you actually bought.


Ten years ago you would have been up in arms about this. You just bought a video game at retail price (let’s say $50), you get home and install it. You fire that bad boy up and find out in order to play it you have to have internet access and the game must be able to check into the server periodically (or all the time) for it to work. The moment you no longer have an internet connection, the game stops working. The moment the publisher’s servers stop working, so does your game. The moment they discontinue their authentication server for your game, your game becomes worthless.


When Square|Enix did Final Fantasy XI Online, I immediately stopped playing Final Fantasy games entirely. I understand the interest in online multiplayer gaming, I really do. I’m an avid user of Second Life and also have a Minecraft account. But the art of single-player, or multiplayer at home, is pretty much a dying breed anymore. There’s a reason, I suspect, that Flash games online are a big ticket for entertainment: those games are modeled after the simpler games we used to play years ago, which were more about fun than trying to cram in graphics. They have to make up for the lack of graphics with actual storyline and gameplay.



Less is More?


The idea behind all of this is that the future is not as great as we like to believe. If anything, we’re paying about the same price for considerably less than what we’ve had in the past. And I’m not talking about less in a good way, either – like smaller components or more efficient circuitry…


No, I’m saying less is the new more, in the sense that you’re paying the same amount of money for a thinner tablet that does much less than the laptop you bought a few years ago. A 500GB or 1TB hard drive was standard on that laptop, but on the tablet (even the hybrid) the storage is more likely 32GB, 64GB or, for the most expensive model, 128GB – for the same price as your laptop from a few years ago with its 500GB or 1TB internal drive.


To make up for giving you less internal storage, these companies found a solution that essentially screws you even harder than you just got screwed: connecting cloud storage to your device and offering to rent you space for a monthly fee that, over the expected lifespan of that hardware, may as well total 45 times the cost of the hard drive.


Yes, you read that correctly.


If an external 1TB hard drive costs about $100 (which mine did), and the average expected lifespan of that hard drive is about five years, then I’ve paid $100 for 1TB of storage that will last five years. In comparison, at $75 per month under the average scenario above for access to 1TB of cloud storage, I will pay $4,500 over the course of those same five years for the same storage capacity.
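Worked out in a couple of lines, using the article’s 2012 figures:

```python
# Local drive vs. renting the same capacity from a cloud provider
# over a five-year hardware lifespan.
drive_cost = 100    # USD, one 1TB external drive
cloud_monthly = 75  # USD/month for ~1TB across a 5-person household
years = 5

cloud_total = cloud_monthly * 12 * years
print(cloud_total)               # 4500
print(cloud_total / drive_cost)  # 45.0 times the price of owning the drive
```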


Take also into account that the operating system (Windows 8 in this example) is less robust than Windows 7 before it. The Metro UI is stripped down and very basic compared to the Windows 7 working environment. No Aero, no real multitasking… everything in Metro is pretty much full screen or nothing. There’s “legacy” support for applications that ran on Windows 7, so you get windows like you’re used to, but the focus is entirely on Metro. Which brings me to another point: the outlet for Metro-enabled applications is a choke point, an app store controlled tightly by Microsoft. I have the same issues with Apple’s App Store as well, so don’t think I’m just harping on Microsoft here.


Even the bootloader is locked down, and so is access to the Win32 API, which most applications previously enjoyed free access to when developing for Windows. Now the rules (as if there were rules before this) say the Win32 API is off limits, along with a whole slew of other design requirements and guidelines to be met for acceptance into the Metro store.


Good Christ… when did this happen?


Not specifically, but as a collective of humanity… when did we start really settling for less and paying more for it? When did we start saying it’s ok to have our options taken away from us and strictly controlled? When did we say it was alright to no longer do with the hardware, software and media we legally bought what we want to do with it?


When did we stop demanding the freedom to innovate on our own?



And then I calm down… sort of.


I have my gripes about all of this, but my assertion is still sound. I’m not interested in owning a tablet until such time as it becomes a full hybrid that isn’t screwing me over. I love the idea of the screen detaching to become a mobile tablet as my secondary computing paradigm – but not at the expense of my primary computing paradigm in the process.


I’m not accepting an either/or scenario.


When my tablet/screen is attached to the base, my “real” computer will take over the processing and power. My real computer then, will be my storage where the mobile tablet screen is underwhelming. My real computer, then, will have the desktop paradigm because it makes absolutely no sense to use a tablet interaction paradigm while I’m using a desktop, no more than it makes any sense whatsoever to use a desktop paradigm on a tablet computer.


Both will have access to my cloud drive which is a secondary backup but never my primary storage. It’ll be convenient for me because I’ll have access to my files on the go, but I won’t be worried about the service because a majority will be locally stored.


Cloud storage to me is a supplementary option that is worth paying for in that context, but never expected to be treated like my primary storage on account of my local system short changing me up front by design.


As far as the operating system goes, I like the freedom to run whatever the hell I want on my computer. I like the idea of multitasking, and by god if I want a media player that looks like a Polar Bear on LSD then so be it.



Bear on LSD


Meet Windows Media Player – Aeonix Edition.




I don’t like the idea of having a central application store, under tight constraints, as my only option for accessing programs. So this is what I’m expecting shortly after the release of Windows 8:


It’s going to get jailbroken in a hurry. That Start Menu button is going to be there on the Desktop view whether Microsoft likes it or not, and if they fight too hard to force people to use the Metro screen instead, they’ll just be shooting themselves in the foot. The bootloader will be unlocked, and ultimately, with enough people pissed off about this situation, Windows 8 will work the way advanced users demand it to work and not the way Microsoft demands.


Either that or Microsoft is about to lose quite a lot of customers, with or without their new tablet.


It doesn’t help the situation when you aren’t willing to build an operating system that a majority of your customers are asking for. Luckily, that’s what Linux is for.


As a matter of fact – I might just go back to using Ubuntu Linux thanks to Windows 8.







Unlike Windows 8, I’ll have the option to choose whether I want to keep the Unity interface of Ubuntu 12.04 or simply go back to using Gnome (Fallback) like I’m used to. That’s the point I believe Microsoft is entirely missing here: people don’t necessarily like or hate Metro; they hate being given no option to choose what they think is best for their own working environment.


We’ve had this desktop metaphor since the dawn of the GUI in the 1980s. That is a hell of a long time for the computing world to get accustomed to something. Forcing a radical change on people now, in 2012, will likely go over about as well as Microsoft BOB or forcing Clippy on people.


Generally speaking, the bottom line is this:


A hybrid tablet/laptop is a good idea when put into proper perspective. Right now, however, it’s a veritable clusterfsk of half-baked ideas. Without the ability to choose a familiar desktop paradigm when using a keyboard and mouse, the Metro UI being forced on people will invariably be a no-go unless they’re using one of those new tablet hybrids.


Essentially, it’s an operating system that is trying too hard to please tablet users at the expense of the interaction paradigm that has defined the entire course of personal computing history. While I’m not against change, even radical change, I still like the ability to make that choice on my own instead of being handed an all-or-nothing approach.


Given that the only option is Metro in Windows 8 with the desktop paradigm being an afterthought, I will likely end up exercising my right to choose and skip Windows 8 altogether.


I don’t really care how amazing they claim the tablet with a keyboard is. It’s a revolution I’ll do without, thank you.


As far as the tablet/hybrid itself:


The disturbing trend has been to build half of a computer and sell it to you at full price, while renting you access to the other half indefinitely for 450 times the actual cost. From the business end of things, this seems like a technology wet-dream scenario, because they’re making money hand over fist off of it. Even more surprising is that society as a whole (that means you) is gladly falling for it.


Of course, after seeing the first picture in this post, you probably ignored everything that came afterward.














Jun 19, 2012

Cloud Party [Beta]

An average virtual world for average people on Facebook | #SecondLife


Last night was pretty quiet for me as I enjoyed some downtime playing Minecraft, but all that came to an abrupt halt when the phone rang. On the other end of the line was Kevin Simkins, who is currently out in California addressing some details for a project in conjunction with NASA. What he had to say piqued my interest only marginally: the gist was that a new virtual world named Cloud Party had gone into open beta only a few minutes prior, and he wanted my analysis of it.


Clearly Kevin was excited, so I didn’t want to rain on his parade – not that particle weather exists in virtual worlds yet. When it comes to these virtual-world-in-a-browser implementations, I always have a moment of immediate ennui. I’ve been in the industry for a long time, and my first thought whenever somebody tells me about a new browser-based virtual environment is “This will be underwhelming at best.”


So, let’s get this review over with before I yawn so hard my jaw dislocates.




Cloud Party on Facebook_1340109255893


A familiar logo graces the party. Clearly Mesh Import is an option.



Cloud Party, for those who don’t know, is a virtual world built as an app inside Facebook. From what I can discern, the name itself hints at the technical underpinnings: I would guess this is a cloud-based server system, something similar to Kitely, which (I believe) is an OpenSim implementation on cloud architecture. The front end looks like a slightly more polished version of the web-embed system I saw in alpha a year or so ago for OpenSim, or maybe an iteration of what Jibe has done in their camp. I know Jibe uses Unity and is made by ReactionGrid, which for all intents and purposes has little in common with OpenSim implementations, but it seems very likely that this is a flavor of Second Life.


I’ve always had this bias against virtual worlds implemented inside a web browser, and for a very good reason. In all the years I’ve been in the virtual worlds industry, I have yet to see a virtual world in a web browser that isn’t stripped to the bare bones to compensate for not using a native client. There’s always a trade-off when you do this, and usually it’s usability and features that take the hit.






A screenshot of ExitReality, another virtual world in web browser implementation.



Immediately upon entering the party, a term I’m going to use ironically, I was slapped in the face with just how limited this is compared to a full viewer/native client implementation. The premise of Cloud Party is that each “area” is a floating island, and those islands are interconnected via bubbles you see in the sky. Click a bubble showing another floating island and you “teleport” to it, so at least this is a half-decent method of addressing tablet use and making the spaces easy to access for people who aren’t accustomed to something more robust.


Building is something I didn’t really tackle, but from what I can tell it works very differently than something like Second Life.



Cloud Party on Facebook_1340112074092


I dropped about 3 safes on the ground before I gave up.



Cloud Party on Facebook_1340112251090


I’m a bit torn on the preset objects library. Seems like a very basic Inventory.



I didn’t see anything familiar for moving items around, such as the X, Y, Z axis arrows in Second Life. The building palette is comprised of premade objects plus the ability to upload your own from a local source, though I have to admit there was no immediate indication of which mesh formats it accepts for upload.


I’m sure the upload of objects in conjunction with a marketplace is something Cloud Party will be implementing later on, because they have to pay the bills somehow.


Interface Stuff


The basics of the interface seem to be tied to a PDA in the upper-right corner, though somebody told me it’s actually a smartphone icon. I suppose we’d be arguing semantics at that point, since a smartphone essentially is a PDA these days: the digital Swiss Army knife.


But I digress, as usual.




Cloud Party on Facebook_1340115801306


When you think about it, a Tablet in your Metaverse in your Web Browser is pretty Meta.




Being in beta, there really isn’t much in the way of content. This goes for the Outfits section and the avatar customization options, which are nowhere near as comprehensive as a seasoned user would expect. Essentially it boils down to a short list of preselected clothing (a couple of t-shirts, a few pairs of jeans, the same style of hair in a few different colors), but nothing to get excited about. The experience here is pretty homogenized, with nothing to really differentiate your avatar or make it personalized. Everyone more or less looks the same at this point, which you could chalk up to this being an early beta. That being said, I’m disturbed by this trend of companies launching things to the public half-finished and working on them further in a live state. This really should have stayed in a closed beta, with tiered admission of more users over time, so it could be far more polished before the floodgates opened entirely.


I’d like the shape tools to let you really make your avatar look the way you want, instead of the equivalent of the newbie Ruth in Second Life.


Cloud Party on Facebook_1340109151111

I mean, while I really do understand they are in early beta, being forced to look like this on opening day seems a bit dated at best. Not exactly the technology launch I was expecting. If anything, Cloud Party seems more like a technology preview than a finished application.


I would have hoped that a serious virtual world offering on Facebook would have been more polished. I’m already biased against these things because the virtual worlds in a browser I see always end up lackluster in finished form, but being lackluster out of the gate is just an insult. The entire point of trying to get people excited about a product is to put your best foot forward at the public launch. In the world of social media, and especially on Facebook, you are in a world of hurt as hundreds of millions of potential users check out your launch, say “What the hell is this?”, and move on to something like Castleville or Angry Birds.


Like it or not, that’s the audience you’re playing to if you’re trying to get into social media, and Zynga will bitch-slap you if you don’t bring your A-game to the table from the beginning. One of the comments I’ve heard over and again while using Cloud Party is that it’s like Second Life circa 2004, which to me isn’t a good thing. I find it hard to call this a leap forward when, technologically speaking, it’s being openly compared to technology from 8 years ago.


However there is some good…


Far be it from me to be completely negative here, because I really do think there is a bit of good to be had from this. What it lacks in presentation and technical prowess, I’d say, it makes up for as a sociological on-ramp. At the very least, this is a very lightweight virtual world implementation stripped down for the Facebook short-attention-span crowd. Being in a web browser is good for people who can’t be bothered with a full-fledged client, so you can’t exactly expect Crysis-level graphics. That being said, I’m still not impressed by Cloud Party. We could very well say that avayal.live does a far better job of a virtual world in a browser, and if it weren’t for the sandbox ability of Cloud Party, there wouldn’t really be anything to add merit to its existence. Essentially, the sandbox ability is the only saving grace here against many other virtual worlds in a web browser.






An early version of web.alive (avayalive) shows a much more polished interface.



Not to disappoint Kevin, who for all intents and purposes is the resident virtual worlds bad-ass: there are a number of sociological merits to implementing a virtual environment in a web browser, but they come at the cost of lowest-common-denominator graphics and/or functionality (at least with current-generation technology). In the long haul, it still offers something compelling for light usage, but I wouldn’t go so far as to say they’ve “done the impossible,” as I read on their forums from another user.


The underlying question for me has always been that fine line of metaphor interaction, where we have to ask whether a stripped-down virtual world in a web browser is really the future, or whether we should instead be focusing on integrating the web into a native virtual world client. I clearly fall into the latter camp: a web browser is low-hanging fruit to incorporate into a virtual world client, while a virtual world incorporated into a web browser clearly has to suffer along a lot of lines to make it happen. Why take something that by all means should be graphically pleasing (a metaverse client) and deliberately bastardize it to work within a constraint that is arbitrary?


HTML 5 be damned, it’s still not powerful enough to compete against a native client for virtual worlds.


What this boils down to is the “separate but equal” mentality for virtual worlds. Making a stripped down version of a more powerful system to embed in a web page while taking away a lot of the interface expectations and abilities in the process creates a fractured ecosystem. What you come to expect in something like Cloud Party isn’t what you get when you log into OpenSim, ReactionGrid or Second Life using a native client. This only compounds the problem already in progress with a plethora of virtual worlds all doing their own thing and nobody can agree to work together.


From a virtual worlds standards perspective, this just adds more fuel to the fire. Essentially, I may as well state that Cloud Party is about as innovative as the age-old industry dream, dating to the mid-1990s, of cramming a virtual world into a web browser. I really have no idea what the obsession with this idea is, as if the web itself were the appropriate container for a full-fledged virtual environment.


My understanding of the Metaverse is the following analogy:


The Metaverse is like the real world you walk around and interact with. The World Wide Web is the newspaper you read in the morning while you’re drinking your coffee in that Metaverse.


Therefore, the newspaper exists inside the Metaverse but the Metaverse doesn’t exist inside your newspaper.



Kevin Simkins - Resident Bad-Ass


This is Kevin Simkins, known in the virtual worlds industry by some as “Commander Bad-Ass”



The Bottom Line


Here’s a quick checklist of things that would have to happen before I ever accept the validity of a virtual environment in a web browser:


1. The end-user experience isn’t different between the web version and full client.

2. Both web client and native client are sandbox virtual environments.

3. Graphically, both instances should be on par with modern 3D Engine graphics.

4. Both the web client and the fully native client should access the exact same service.

5. Both are massive multi-user environments, sharing the same perceptual space.

6. If Google Plus and Facebook can have Voice and Video chat, a virtual world should as well.

7. High level of avatar customization is a must.

8. If I can log into the web version with a social media account, so should the native client.

9. User generated content and marketplace is a must.


and finally…


10. Be bold and actually do something worth getting excited about.


So who is Cloud Party good for?


Bottom line: it’s good for people who have never seen an advanced virtual environment like Second Life before. It’s geared toward the casual user, so in the grand scheme of things it’s good as an entry-level indoctrination to the ecosystem. For the average user of Facebook, it’ll provide the quick-fix, attention-deficit social environment they expect, but I’m loath to say it’ll be anything deeper.


I can see this being used for low-impact spaces, maybe meetings, some basic games, or collaboration, but again I have to state that if you’ve become accustomed to more robust virtual environments, Cloud Party will likely not be as appealing since the experience is considerably stripped down. Maybe in a few more years this will finally mature, but right now it’s a blast from the past. Who am I kidding… we’ve been saying virtual worlds in a web browser are the future since the 1990s. As far as I’m concerned, it’ll always be “just a few more years to mature”.


It’ll fill a niche, but I’m not expecting a revolution.



Jun 14, 2012

It’s All Been Done Before

When history becomes our future and time has no meaning


I’ve been thinking about the future a lot lately, and more importantly the relationship between society and media. For the record, I believe that the future in general is going to be really amazing going forward but not without some serious turbulence in transition. This, of course, is barring a societal reset which in the grand scheme of things seems entirely likely given our past. In order to really understand the future, we have to get some sort of grasp on our past and what our actions throughout history seem to propagate into our (repeated) futures.


I’m going to warn you in advance that this post is going to delve into the fringe area of thinking. In order to really grasp the totality of the subject matter, we have to entertain quite a lot of things that up until now we’ve been more than happy to just ignore. The best way to ignore stuff, it seems, is to nervously laugh and say “Of course it’s a conspiracy theory! That person is just laughable…” which really is our collective selves being made very uncomfortable by some realizations we’d rather not entertain. Who likes being lied to? It’s much easier to just pretend we’re the sane people and laugh off anything to the contrary. But let’s take a moment here and put that aside… you pretty much have to if you want to get a glimpse of the future.


I’m not looking for anyone to agree or disagree with what I am going to write in this post. It’s just a train of thought I’m putting down here, because these are thoughts that have been prominent on my mind for a number of years. There’s far more behind it than what you’ll read here, but I’m just not going to get into deeper details. You can either take it or leave it as you please.









For a little while, let’s suspend disbelief and entertain the notion that the way the world and universe really works turns out to be quite different than we believed.


Take for instance the premise of copyright in general. Originally it was supposed to grant something along the lines of seventeen years of exclusivity to the creator before the work fell into the public domain, which would afford the creator enough time to capitalize on their creation before having to move on to something new. Over the past century or so, what we’ve seen is a curious sabotaging of that premise: copyright has continuously been redefined and extended well past anything meaningful for society as a whole.


This, in turn, has turned what should have been a balanced arrangement into a copyright scenario that falls heavily in favor of creators and not society. What has actually transpired is that society eventually begins to ignore copyright restrictions en masse, because people realize that what is being presented as original work is often just a rehashing of old material cleverly disguised in new packaging.


But this post isn’t necessarily about copyright per se, but more about the overall effects of societal change at an accelerating pace, as well as some other correlations. I’d like to first point to an excellent book by Dr. Raymond Kurzweil entitled The Age of Spiritual Machines, and I’ll leave the choice up to you, via the embedded link, as to how you wish to get your hands on a copy. That prior sentence is actually quite relevant to this discussion because, as you will notice, I didn’t just link to Amazon.com or Audible, but instead to a Google search where a plethora of options are available. Of course, some of those options aren’t so legal, but the real question is whether those options are actually ethical in the grand scheme of things. More importantly, it is a clear indication of what I am talking about in this post, which I will term “Intellectual Critical Mass”: the point at which the culmination of recorded history and achievement reaches diminishing returns on creating anything wholly new or authentic.


Originally published in 1999, The Age of Spiritual Machines was out of print for quite some time and unavailable for purchase. This led to an interesting situation whereby the question had to be asked


What legal means of obtaining this information are there?


For the longest time, the answer was incredibly limited or nonexistent, so the alternative of simply “stealing” the book became widespread in lieu of being able to legally purchase it. As for myself, I own a hardcover version of the book purchased in 1999, and later, when I wanted an audiobook version, I found there were few options available since it was out of print.


In my mind, an audiobook is a transformative version of a product I had already purchased the right to access, so I simply downloaded an audiobook version via BitTorrent at the time, and while I was at it, I grabbed a PDF version. This seems dubious at best in legality, but let me ask you a really important question: if media companies are treating their works as indefinite rentals, then at what point are you actually entitled to own a copy of what you purchase, and better yet, at what point should you simply be entitled to access the transformative versions thereafter? Wouldn’t you think that copyright in a digital age would cover access to transformative variations for those who have legally purchased access to a particular work? I mean, why not? If creators of media are telling you that you are essentially renting this stuff in perpetuity, then they should concede that you also have access to all media forms of that work for as long as you are “renting” it. But clearly it doesn’t work that way, now does it?


This goes back to the music/movie industry, where just by changing the format of the media they are entitled to force you into repurchasing, time and again, things you’ve already bought. Think back to vinyl records, then the onset of cassette tapes, then compact discs, and later digital music files in every conceivable encoding “format”, with or without ridiculous DRM schemes. Chances are you’ve owned your music on a prior format like a cassette tape (and let’s just ignore 8-tracks). Then CDs came out and promised higher fidelity (which was ultimately a lie compared to vinyl), and you rushed out and bought the CDs and a new CD player to play them. When the CD player became the de facto standard, your cassette deck began phasing out, so even if you owned all those cassettes, you were hard pressed to listen to them if you didn’t have the hardware to play them back.


Of course, CDs are phasing out (if not entirely gone now) due to digital music and your iPods. So all the music you’ve bought on CD either gets ripped to a digital format or you repurchase your music collection yet again. With Apple starting to omit the physical media drive from their laptops, you can see where the trend is going for the industry: physical media is going to disappear and be replaced entirely by the digital media paradigm. So all the physical media you have today will once again be obsolete and useless.


I remember when I was younger, I bought a copy of Pink Floyd – Dark Side of the Moon on CD. It was at a retailer (The Wall) who had a lifetime replacement guarantee. If that CD ever broke, scratched or whatever, all I had to do was bring it back and they’d replace it with the exact same CD brand new. Seemed great until The Wall went out of business because of digital MP3s and iPods.


Over the course of a number of years I had actually bought Dark Side of the Moon no less than twelve times, and often in an entirely different format (hard and digital) for what amounted to the exact same music. I think this is part of the problem with the media industry as a whole, in that the only reason they are making so much money to begin with is because they are endlessly reselling their back catalogs into near perpetuity across many different formats, and when they think they aren’t making enough money, they just invent a new format altogether and try to enact a paradigm shift to make the old format obsolete and force the population into repurchasing what they already bought many times before.


Of course, what is old is actually new again. A lot of the “new” music you buy is actually rehashed from the back catalog blatantly or discreetly, and repackaged for you to buy (yet) again. As if buying the same stuff twelve times and in ten different formats for many different devices wasn’t enough, the supposed new stuff is actually just covers of old music/movies redone and remixed back catalog with a new face on it.


The entirety of media is simply recycled on many levels.



Everything is a Remix Part 1 from Kirby Ferguson on Vimeo.



Therein is a profound realization in and of itself, and also where we get a little philosophical about the nature of civilization as a whole. If you think back throughout recorded history, there have been a lot of collapses and “lost” information. Take for instance the common light bulb.


You’d think that Thomas Edison somehow “invented” it, but really he just perfected a transformative variation of an existing concept that was (quite possibly) thousands of years old. This is no different than, say, a hardware change or a format change in media. The transformative variation of the light bulb goes back to ancient Egypt and the pyramids, whose builders likely had light bulbs of their own, as well as the batteries to power them.




egyptian light bulb




Of course, you’d probably think “Well, this isn’t really an indication of anything…” But then you realize that electrical engineers and scientists thought the same thing when seeing these depictions on the walls of pyramids and decided to test it out. Sure enough, the replica they made worked just fine and would have been possible in ancient Egyptian times via Baghdad-style batteries.








What this means is that civilization seems to repeat itself a lot. The old saying “Those who do not remember history are doomed to repeat it” holds true, but more interestingly, in the case of recorded history, the side effect may be that it is ultimately a good thing that civilization continuously collapses and is rebuilt.


Well, at least in the context of intellectual critical mass. When you begin recording history and the advancements of the planet, over time you begin to notice that we start going creatively bankrupt and repeating ourselves. There are only so many harmonically pleasing combinations we can assert for music, despite ever increasing complexity – and so that severely limits the amount of “new” music we can create as a planet, or more importantly any sort of creative or general advancement. It’s when the entirety of recorded music prior to today becomes spliced together as a new transformative work altogether that you start to see the beginning of the end of our creative stride in history.


Maybe that’s not an entirely accurate assertion. What I really mean to say is that when we reach that point in recorded history we ultimately are faced with a choice that up until recently has always been to collapse society and start again as if none of it existed prior. Whether this is intentional or not is up for debate, because all I can say with certainty is that it’s all been done before. Whether the inevitable collapses have happened because of or as a by-product of this intellectual critical mass should be something to consider as a separate thought experiment.


The crossroads of intellectual critical mass aren’t limited to merely music, but to all creative output on the planet. Scientific discoveries may even be a part of this collective amnesia, as well as any other recorded “creative” output. If we forgot something as important as a battery and light bulb in history, what’s to say that far more advancements in history weren’t lost as well?


We talk about the myth of Atlantis and all the supposed high technology and advancement they had in the ancient world. Some say they had autonomous robots and a high-tech method of power generation through crystals or something. We laugh it off today and say those stories are absurd, and how convenient that the city of Atlantis sank into the sea without a trace.


But let’s think about that from the intellectual critical mass standpoint.


For a moment, let’s transpose the circumstances of Atlantis onto the modern world and ask if it would be possible to recreate the myth in relation to our own future society.


Right now a vast majority of the human achievement and recorded history is in digital form, with no end in sight for “digitizing” it all. Clearly the Internet and whatever comes after this technological construct is one of the greatest achievements of mankind, becoming the storehouse of the collective of humanity and known knowledge past and present. The Internet, in effect, is the modern version of the Library of Alexandria, except it is far more powerful and all-encompassing.


So what happens when the electricity goes away or society collapses? While it’s all well and good that we have the collective of human intelligence and achievement at our disposal today via computers, all of our society is essentially recorded in machine-readable form and not human-readable form, and it is wholly dependent on a specific variation of electrical power.


Should our society suffer some sort of catastrophic event which negates this in some manner, who in the future will believe we existed? We have robotics and worldwide information and video-phones, and things that only fifty years ago seemed like science-fiction. We would, therefore, become the new myth of Atlantis by definition. It’s not like our artifacts today would be understood in a few hundred or thousand years, and to make things worse, our future society would have been reset and re-discovering technology we already had while thinking it is wholly a new discovery and invention. Without the recorded past to explain to our future selves, we would have no indication that we were actually repeating ourselves.


This is that interesting crossroads for intellectual critical mass. We either transcend that situation and move forward to a technological singularity, or we somehow cause a collapse of society (likely through the power struggle and greed) and reset back to about the middle ages possibly, with our current society becoming the new Atlantis myth while our future (less advanced) selves find indications of our advancement over time in left over artifacts.


There are a number of interesting points to be made here, but the most important I can state is that we should at least put to rest the knee-jerk dismissal of the premise that advanced extra-terrestrial life exists. I’ve thought long and hard about this one for many years, and it seems quite absurd to me that an entire planet of otherwise rational beings can cast aside the very likely possibility that there is advanced life outside of this planet, and more so that such extra-terrestrial life has played an important part in our own history of advancement.


When we think about it in any logical manner, it would be downright asinine to state otherwise.


Our history is rife with examples of such extra-terrestrial involvements, right down to blatant pictograms on the walls of pyramids showing UFOs, Tanks, Helicopters and our own modern day astronauts. Greek Mythology seems likely the personification of many extra-terrestrials that came from the “heavens” and either helped or hindered mankind with what appeared to be god-like powers or “magic”. Even when we look at modern day religions, it seems far more likely that such things as “angels” are extra-terrestrials.


So I could be coming across as that guy on the History channel that everyone mocks citing everything in our history as alien influenced, but I say – So what? Why is it so hard to accept this premise as a global society?


The odds are much higher in favor of this than against it, and yet it seems so unlikely that it’s possible. I chalk that reasoning up to the human paradox. We’re on a planet that is essentially equivalent to a grain of sand in the universe, and somehow we as a collective society refuse to take it as read that we’re not alone.


I can’t even begin to explain how fundamentally ignorant that premise is.



Ancient Astronauts and that Aliens Meme guy…



Here’s the thing… can anyone actually give a plausible reason why humanity wasn’t fundamentally aided by extra-terrestrials throughout our history? As a matter of course, there is far more evidence to support this premise than what we as a society take as read for religion. At least, in the context of religion being separate from extra-terrestrial intervention. If extra-terrestrials showed up in history, they’d be described exactly like they have been in religious texts we read today.


I get the impression from looking at recorded history that humanity goes through this reset process far more often than we realize. Usually, about the moment we begin to approach technological singularity or some comparable level of advancement, something happens with these extra-terrestrials and humanity gets set back quite a lot – or at least enough that it takes a few hundred or a few thousand years to get back to where we were. Along the way, we only have the long-term records to hint at what we used to have, while the short-term records the society actually relied on are missing – much in the same way our own society relies on short-term digital media in everyday life while leaving almost nothing durable for the future.


The side-line premise here is that I’m writing this in the year 2012, and if we look at things like the Mayan calendar, something is apparently up this year around December 21st. Do I know what that event is? No more than anyone else does. There are a bunch of theories, and heck… a lot of people just say “Nothing will happen.”


Any number of those things could be right, but I’m not psychic.


I have, however, noticed quite a lot of things in my life that seem quite curious leading up to this. I’ve become a firm believer that media itself is a conduit for the sociological introduction of technological or paradigm shifts in thinking. We seem to get a lot of media about certain concepts or scenarios well in advance of those scenarios actually happening for real, as if it’s a way to acclimate society to the possibility of such things becoming common knowledge.


Quite literally, we may be facing the idea that it really has all been done before, and that those in some echelon of power are privy to this and are in charge of reintroducing the rest of us to the things we’ve lost. I’d even go so far as to say that (just like in the premise of ancient civilizations) extra-terrestrials would essentially be asking to be taken to our leaders because our leaders are in charge of managing that societal change without dropping it all on us at once.


Of course, we are at a point where the acceleration of advancement is so quick that we as a collective society can readily see that popular culture predicted many things we now actually have, as if people in our past somehow had some idea what was coming. Your iPad isn’t new if you know tablets like it were shown in science fiction like Star Trek. Nor is your cell phone anything to brag about as “novel” when it looks just like the Star Trek communicator.


Voice recognition and computers? Star Trek. Landing on the Moon? Predated in science fiction.


Even the Internet you’re using right now to read this was interestingly named as if it had a deeper functionality. The concept was originally referred to as the “Intergalactic Computer Network” (J.C.R. Licklider’s term), which actually makes sense in the context of extra-terrestrials. I mean, if extra-terrestrials were sharing information with our society in a top-down fashion, from our governments to the masses, what would be the trade from Earth? Extra-terrestrials would say they want access to the sum of human knowledge at any given moment, and in the 40s and 50s that would have been awfully hard to accomplish with just the written book.


So let’s say the aliens just helped us design a planetary network of digital communication and information storage, with the caveat of having their own dedicated access line. It makes sense in that context when you think about why on Earth the Internet would ever be referred to as an intergalactic network.


The whole pop-culture angle gets interesting, though, when you fast-forward a bit and ask about things like Stargate SG-1 and movies like Independence Day. Did you ever get the feeling that you’re being prepared for something? Take the premise of Stargate: it runs on the idea that the ancient Egyptians were ruled by something like the Goa'uld, whereby the Pharaohs of the time were considered “gods” incarnate. It doesn’t take much to make that logical leap, since about 90% of the idea is already there in our real-life collective culture. Then there are the Asgard in the Stargate SG-1 universe, which for all intents and purposes are what we consider Greys. And what if we look at the Men in Black movies?


I could go in either direction here concerning the significance of all of this… but I’d just like to state that when we look at history, and even recent history and science fiction, we seem to be uncannily introducing ideas and concepts to the world via popular culture about twenty or so years before such things become reality (or, more likely, are publicly released).


So what is going on in places like Area 51?


We love to talk about the place that doesn’t officially exist, and interestingly enough, the wisdom of the crowd is said to have an accuracy of around 91% on average, even when the individuals aren’t certain of the answer. There’s an interesting fact for you right there… so if we apply that understanding to society, there’s a good chance the crowd is right that Area 51 really is a hotbed for extra-terrestrials and technological wonders. Something is definitely up with humanity and this planet…




Individually we may be wrong, but together we’re more accurate than we know




A look into the Future


In order to really get a grasp on what the future likely holds for us, we first have to reach out into the fringe of collective culture and look at the bigger picture. This, of course, means wading into some fringe territory before we can get back to a more balanced center of logical progression. I could go on about extra-terrestrials and ancient civilizations, and while we’re at it, I could let my hair stand on end and look like a total lunatic. Maybe I’m just spouting conspiracy theories?


What I can say about the future is that there are some really interesting, and logical, events coming. Assuming that global society doesn’t get reset again by way of collapse (and I’m genuinely hopeful it won’t), we can look at this from the perspective of the singularity.


Again, assuming that we don’t just get a reset, I can make a few predictions about the near and long term future with confidence:


1. Copyright is dead.

2. The Singularity is near.

3. What you know today is just a fraction of what there really is to know.


Let’s explore these three initial points.


Copyright is dead. The premise of copyright is already dead right now; it just hasn’t gotten the memo. As a society we’re on the verge of creative bankruptcy, at least at the onset. There is still creative exploration and there are transformative works, but the mainstream application of creative output is slowing down. Scenarios and creative output are reaching an intellectual critical mass, and over the next ten to twenty-five years, thanks to accelerating returns, it’ll be all but pointless to try to copyright anything going forward.


There is already quite a lot of creative bleed-over from prior works in history, and because we’re accelerating advancement, the sheer volume of media and output will overwhelm society. It’s unlikely that the future of creative output will be copyrightable, or at the very least it’s unlikely that official copyrights will be respected in the traditional manner. We’re already seeing that in society today, despite the backlash from traditional rights holders.


The singularity is near. I really don’t know when it will happen, but there is a good estimate from Dr. Kurzweil. We’re already primed for that mode of existence and just don’t realize it yet. You have on-demand access to the sum of human knowledge in an instant, we exist as multiple personas through social media and virtual environments, and within twenty years the lines between reality and synthetic environments will be so blurred that the result becomes its own space entirely: an augmented existence. At some point computers will become fast enough to handle human-level intelligence, so overall this is not a matter of if but when. Even more interesting is that, leading up to it, we’re in transition to that way of living, but the ubiquity of it all is so pervasive that the subtlety of the transition goes all but unnoticed by the majority.


When you really think about it, we’re being primed to be demi-gods. Obviously this is contingent on perspective and context, but overall, in relation to our past, we’re pretty powerful and god-like already. That in and of itself is profound, because if we can accelerate this far in such a short amount of time, can you imagine what a few million years of advancement would look like in relation to us today? If you can imagine that, then you’ve pretty much nailed the bigger picture of what aliens would be in context to us.


What you know today is just a fraction of what there really is to know. This isn’t just a general statement about the universe having a lot left to discover. What I’m getting at is that what is actually known today far exceeds what is publicly known. It’s not in the best interest of anyone higher up the chain to let the population in on much of what they already know, simply on the premise that a majority of the world’s population is in no way prepared to know it, let alone comprehend it rationally. It is not implausible that a majority of this planet is kept at a certain level of disclosure while a whole other level of things goes on.


Have you ever watched the movie Lord of War? It was based on a true story, and even then it gives you just a glimpse of how world governments operate at the level of arms dealing. Imagine that level of operation across a large number of dealings. What we know is clearly not how the world actually works – in essence, we’re living in a perceptual bubble. It’s quite possible that we’re staring at a breakdown of that perceptual bubble going forward. We’re already surprised at the revelations brought on by Wikileaks, but we’re surprised only because we think those things are abnormal, when really that’s business as usual in the real world. No wonder world governments (especially the United States) want to drag Julian Assange into the deepest, darkest hole they can find and never let him out. Of course, then, Bradley Manning will be prosecuted very harshly for leaking information.


There is a penalty for pulling back that global curtain for the public to see.


However, that being said… I will tell you that there is far more going on in the world than you are likely aware of. “Profound” is an understatement for the information being withheld from you. Do I actually know what that information is?




It wouldn’t be hard to really figure it out, though, would it?


Remember, there is wisdom in the crowd. What we know collectively as a planet, versus individually, would astound you and give you a much clearer perspective. Whatever the answers are, there is a high probability they can be found somewhere in the average of our cumulative understanding. It makes sense, then, does it not, that the “social graph” is such a hot topic for companies these days, as are things like the Google Zeitgeist. Clearly somebody knows something about the world that you do not… or, more ironically, that you actually do know but don’t know you know.


The future is really interesting… all you need to know going forward is that it’s time to start thinking like a unified planet instead of divided. Once we do, it’ll all become much clearer from there.


Whatever December 2012 has in store, even if it’s absolutely nothing… I think it’s a good time to really reflect on the future of humanity and its place in the universe. A time to think of each other as a unified species and, most importantly, to move forward together.


We can do better, together.




Relax… we’ve been there and done this before.