Jun 21, 2011

Human 2.0

Some dialog about the evolution of the Human Digerati. #SecondLife



“Pay attention!”

I can remember those words being repeated time and again in the classroom by a teacher at the end of their wits with the students. In the age of computers and ubiquitous technology, our students, and likely many of us born around or after the 1980s, behave strangely compared to prior generations. For one, we simply can’t be bothered to pay attention; our minds wander from thought to idea in a seemingly chaotic manner.

From the viewpoint of “legacy” teachers, to borrow a term from computer lingo, these digital natives have the attention span of a gnat and are oftentimes simply impossible to teach. I beg to differ with that analysis, mainly because it assumes these students (many of whom are now full-grown adults) are somehow deficient, and that is a dangerous path to entertain. Down that path, we simply label children and adults as having Attention Deficit Disorder and prescribe some drug to “normalize” them.




Personally, I’m against the whole idea of A.D.D. and ADHD being considered a disorder or diagnosis at all, because of the implication that these children and adults are somehow broken and need to be fixed. In reality, they are Digital Natives, and their very minds have been altered through rapid, long-term exposure to technological ubiquity. These children and adults aren’t suffering from poor attention span and focus, but quite the opposite: they’ve become exceedingly proficient at parallel thinking and multitasking, and as a result, anything linear is ignored.

I recently had a conversation with a friend in #SecondLife where she explained that she had been told her son had A.D.D. and required medication in order to correct the “problem”. Of course, she refused to have her son drugged up and instead chose a more hands-on approach, which included actually figuring out what was going on (to the best of her abilities). She asked me what I thought about the situation, and I essentially told her that she had done the right thing, because I didn’t think the boy needed medication for A.D.D., nor did I think there was anything wrong with him. In short, her son is the product of being a Digital Native, and he should be celebrated instead of shunned for it. As far as I am concerned, it is a gift and not a curse.

It’s not a matter of being unable to pay attention or focus, but a matter of lightning-quick prioritization: a weighted analysis of importance, meaning, and attention. Unfortunately, the result is that anything linear is given far less weight, marginalized at best in the minds of Digital Natives as “not important”.

I can say with certainty that I fall into the category of Digital Native, in that I personally disliked school, and while that alone isn’t enough to justify the point, there lies a deeper reasoning below the surface. It wasn’t really that I disliked school; the courses were bland and uninteresting to me. At first I believed I was somehow failing to grasp what was being taught, but later I realized a bigger truth: the teachers were failing to grasp the attention of the students or understand how they truly learned.

There are two paths to consider when deciding whether this apparent lack of attention has deeper meaning, and most of the time we immediately insist there is something wrong with the students. I’d like to venture out and say the problem lies not with the students but with the teachers, as well as the legacy society trying to interact with these Digerati kids: they are Digital Immigrants (Legacy Thinkers in a Digital Native environment) and speak a very different language than the Digital Natives do. Schools teach in a very linear fashion, where text comes first and maybe pictures afterward: boring lectures that carry on far longer than required, dry material, and absolutely no immersion or hypermedia context. On the other side of the desk are the students, digital natives casually engaged in multitasking and parallel thinking, listening to teachers demand that they instead devote 100% of their focus to what is being said.

It’s just not going to work like that, and we know it. Growing up, I was constantly told that listening to music or watching television was counter-productive to studying or work, because (as the theory goes) you can’t do two things at once without sacrificing the overall performance of both. For some things I’ll buy that line – when driving, you shouldn’t be texting people – but clearly we have radios installed in our cars, and listening to music while driving isn’t cited as a leading cause of accidents, even as we constantly see reports of drivers crashing while using their cell phones. It really depends on what we’re doing and whether we’re properly thinking in parallel: clearly the radio isn’t causing accidents but the cell phones are, unless you’re using a Bluetooth headset to talk.

The point is, the idea that we’re somehow less capable simply because we’re multitasking is absolutely bunk, at least if we’re testing against Digital Natives and not Digital Immigrants.





At some point, research was done for Sesame Street revealing that children do not actually watch television continuously, but “in bursts.” They tune in just enough to get the gist and be sure it makes sense.  In one key experiment, half the children were shown the program in a room filled with toys.  As expected, the group with toys was distracted and watched the show only about 47 percent of the time as opposed to 87 percent in the group without toys.  However, when the children were tested for how much of the show they remembered and understood, the scores were exactly the same.  “We were led to the conclusion that the 5-year-olds in the toys group were attending quite strategically, distributing their attention between toy play and viewing so that they looked at what was for them the most informative part of the program.  The strategy was so effective that the children could gain no more from increased attention.” [1]

If a 5-year-old can think in parallel effectively, so can you. They’re more aware of their surroundings than we give them credit for.

Heck, whenever I’m working on a project, I am most often listening to music or multitasking across multiple browser tabs while answering messages. How do you think I wrote my book chapter? By nature I’m an “Immersive Learner”, which is a fancy way of saying that all methods of learning interaction – Visual, Kinesthetic, and Auditory – work best for me simultaneously and fall exceedingly short individually. For a hypermedia frame of mind that thinks in parallel, the problem may be that we’re not being taught in a manner that is immersive or takes into account what type of learner we are. How better to address this problem than to teach in an Immersive Learning environment that caters to all types at the same time?

I mean, the most effective forms of education are turning out to come not from real teachers but from fictional characters in video games, television, or unrestricted access to hypermedia.



Maybe Animaniacs should have been required television?


It isn’t a matter of children or adults having trouble focusing or paying attention, but quite the contrary: they devote no more attention than is required to multiple tasks, weighting their attention toward the things they can actually take something away from. If you’re speaking their language (Immersive Learning), then you’re likely to have their full attention.

There is also rising evidence that technology actually does rewire the brain, changing the way we think, which leads me to believe that we’re not actually deficient but instead more advanced than the prior generations of legacy thinkers. Digital Natives are the human equivalent of multi-core processors, versus Digital Immigrants (Legacy Thinkers), who are single-core processors.

That’s not to say that Digital Natives are overall better than Digital Immigrants. Oh yes, we have our drawbacks as well as our advancements. We’re literally wired to think in non-linear and random order, a hypermedia frame of mind, where we are free to explore every notion and idea we have instantaneously and oftentimes in multiplicity. Of course, the problem here is that it may be literally impossible for us to go back to the prior way of thinking (at least not without a lot of time, effort, or drugs). The question, though, is why we would actually want to impede our advancement.

Which brings me to the point I raised earlier about this whole A.D.D. and ADHD diagnosis. I don’t think it’s a diagnosis that actually addresses the root of the problem, which to me is that legacy thinkers are unable to interface with these new technorati children. Thus, instead of learning how to better do so, they drug the ever-loving hell out of these kids and bring them back down to their level, intentionally retarding their ability to reach maximum potential.

That should be a global crime against humanity.


"They were always hanging cores on me to adjust my behavior" – GLaDOS


We live in a world where children can readily memorize hundreds of Pokemon, along with their stats and characteristics, but teachers can’t seem to convince them to memorize the nations of the world and the statistics that go with them. Clearly it isn’t because the children are somehow deficient or incapable – quite the opposite. I’d venture to say Digital Natives are not just capable but hyper-capable, with an astounding ability to parse mountains of information in seconds if it suits them. The only way it will suit them, however, is in a manner that warrants 100% of their attention – which is Immersive Education.

I can admit that I’ve learned more playing Where in the World is Carmen Sandiego? (and watching the subsequent cartoon Where on Earth is Carmen Sandiego?) than all of my Geography classes combined. I also learned my World History from teachers who knew better than to toss a book in front of me and tell me to read – no, they engaged us in thought experiments and non-linear methods of exploration. To that point, I will always thank Mr. George Gelderman for giving me a keen interest in Social Studies and World History. He is, and was, a brilliant example of non-linear education at its best and I can’t thank that man enough.



Let’s face it, we all had a really good reason to find Carmen Sandiego… *ahem*


I didn’t have an interest in the fine arts or theatre until I actually watched Hamlet as a movie and started going to the theatre in person. Before that, classics like Hamlet, Beowulf, and Romeo & Juliet really meant nothing to me. To a Digital Native, the text-only versions oftentimes simply do not parse. You fail to visualize what is going on or the relevance inherent in the text. It’s almost as if your mind marks it as unimportant and ignores it until it can be explained in a hypermedia manner suited for parallel thinking. Going to those theatre performances, I immediately took in all that was going on and loved the premise, story, and plots immensely. When I was in school, there was a time when such experiences were made fun and engaging; later, they were offered as dry text and lectures. It was no wonder I lost interest!

There comes a point in education where we realize it is no longer fun or engaging. Prior to that moment, we are sponges, soaking up vast amounts of academic stimuli in short periods of time. We end up with issues in the learning chain when we forget the reason children (and adults) continue to learn: because it is fun.

The problem isn’t the children or adults who are Digital Natives; it’s simply a legacy communication problem. The reason they can memorize hundreds of Pokemon is that it’s a non-linear, immersive learning experience, coupled with the learning environment that suits them best – gamification and hypermedia. It’s entirely about immersive, experiential learning.

A frequent objection I hear from Digital Immigrant educators is "this approach is great for facts, but it wouldn’t work for 'my subject.'" Nonsense. This is just rationalization and lack of imagination.  In my talks I now include "thought experiments" where I invite professors and teachers to suggest a subject or topic, and I attempt - on the spot - to invent a game or other Digital Native method for learning it.  Classical philosophy?  Create a game in which the philosophers debate and the learners have to pick out what each would say.  The Holocaust?  Create a simulation where students role-play the meeting at Wannsee, or one where they can experience the true horror of the camps, as opposed to the films like Schindler's List.  It's just dumb (and lazy) of educators - not to mention ineffective - to presume that (despite their traditions) the Digital Immigrant way is the only way to teach, and that the Digital Natives' "language" is not as capable as their own of encompassing any and every idea. [2]

This is why I love the idea of using virtual world environments for education. It’s not a magic bullet, any more than slapping a cartoon character on the textbook is going to suddenly show results. It’s a tool that should be used effectively, but can just as quickly be used ineffectively if not disastrously.

I wouldn’t say that trying to cram legacy, linear teaching methods into a virtual world space is a good idea. If anything, it’s a colossal waste of time. If you want to know how to make effective learning experiences, talk to companies like MadPea Games in SecondLife. Thanks to them, I learned quite a lot in a fun, intuitive, non-linear game atmosphere, and as a result I played The Kaaos Effect many times over simply because I enjoyed it.


The Kaaos Effect – 1943 German U-Boat Level: MadPea Games, SecondLife


This is the future of education, though I am quick to point out that legacy thinking need not apply. I’m pretty sure that if you took those same children diagnosed with ADD or ADHD and sat them down to play this game, they would learn it inside and out in no time. As it turns out, this is the type of language they (we) speak: an immersive, hypermedia learning experience that is non-linear.

Digital Natives aren’t broken. They don’t need to be fixed with drugs. We’re just really bored with linear, legacy interactions. We’re wired differently in the head, and that’s not a bad thing; on the contrary, it’s actually the best thing for humanity. The question is whether the legacy teachers and methods will learn to speak our language, or whether they’ll continue to find excuses to dumb us down.

I was playing Portal 2 recently, and a line in the game from GLaDOS really hit home. The A.I. GLaDOS remembered that when she started becoming “too intelligent”, “too quick”, and “too powerful”, the scientists tried everything to slow her down. Eventually they resorted to attaching defective units to her that spewed gibberish into her head, constantly distracting her on purpose. The sole point of this was to mentally weigh her down and restrict her any way they could in order to keep her manageable and under control.

In a world where our kids can blow through Portal 2, with its ridiculous complexity and problem-solving demands, we’re trying to convince them (and ourselves) that these Digital Natives must be drugged up and slowed down in order to “function” in society.

I don’t buy that, and neither should you. Welcome to Human 2.0.

References:

[1] Elizabeth Lorch, psychologist, Amherst College, quoted in Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little Brown & Company, 2000, p. 101.

[2] Marc Prensky, “Digital Natives, Digital Immigrants”, On the Horizon (NCB University Press, Vol. 9 No. 5, October 2001)

Jun 13, 2011

The Future: Now in Tablet Form

There has been some debate between myself and @atomicpoet about whether Apple has somehow killed the WIMP interface – the “window, icon, menu, pointing device” paradigm – with their iPad tablet (and maybe the iPhone & iPod Touch). Of course, I’m playing Devil’s advocate in this debate because, just like the Devil, I always have an Ace up my sleeve during the game.

Yes, I believe tablets and touch interfaces have a future, but at the same time I’m not about to say the classic WIMP paradigm is going to die either. After all, who would be the first person to write their entire book solely on their iPad 2? That would be computing suicide, simply for the lack of ergonomics involved in using the on-screen keyboard.

Of course, @atomicpoet then informed the world that a Bluetooth keyboard for the iPad is a must-have, and I replied asking if he was getting the wireless mouse to go with it. I know a wireless mouse doesn’t exist for the iPad, because there is no pointer on the tablet. Despite this, just like on any PC with a touch screen (HP, anyone?), you’re going to find out quickly why having a wireless mouse when the tablet is docked is a good idea.

Having to constantly reach out and tap, swipe, zoom, and pinch the screen in front of you while also using a wireless keyboard is going to make your arms tired.

Besides, I was told by a very reputable person (Steve… something or another) that laptops and PCs didn’t do anything well and that tablets simply did everything well. Except, apparently, the whole keyboard thing… or the whole “copy your entire music and video collection to the paltry internal storage” thing. And when you hook up that keyboard to type, doesn’t it seem like you’ve essentially reverted your “magical” tablet into… a half-assed, underpowered laptop computer?

Well, except for the lack of a mouse… which you’ll realize you need after a few hours of reaching forward to interact with your docked tablet in a now completely non-ergonomic and tiring way. Like I said, I fully believe tablets have a promising future, but like most things, they aren’t the be-all end-all solution for the advancement of our technology.

I get the idea that they are touch screens, and the interface is brilliant for a touch screen. But that assumes the touch screen remains the primary input, under the premise of a mobile computing device. The moment you realize you need to start adding peripherals to make the tablet more productive, you must also realize that it’s a glorified LCD screen meant for mobile access on the go. I call this “short attention span” computing, where everything is pretty much broken up into quickly digestible bites.

It doesn’t really do anything better, except under the premise of providing “apps”, which is essentially short for “applications”. I’d wager that “apps” is actually an excellent metaphor, since these are programs scaled down into short-attention-span snippets rather than the fully robust applications you would see on a full-out computer. Maybe I’m wrong about this… but I somehow don’t expect to see GarageBand or Photoshop running on an iPad anytime soon.

For one, the storage on these tablet devices simply doesn’t allow it, and if you’re dumb enough to “rent” space in the cloud to make up for the fact that they screwed you out of storage to begin with, then you deserve to lose your money (suckers). Second, the internal processing specifications are paltry compared to your actual laptop or home computer. Third, and foremost, these damned things are expensive appliances – they aren’t upgradable.

There is a solution to this collective brain-fart we call “marketing hype”, which also coincides with the crux of my argument.

Did it ever occur to anyone that the future is neither Tablets nor WIMP style computers, but a Hybrid of both? You know… a laptop where the screen detaches and becomes a mobile Tablet (however limited in ability) but then when docked to the Laptop itself, becomes much more powerful (1TB Hard Drive, 8GB RAM, nVidia Graphics Chip, etc)?

Remember, a Tablet is just a fancy touch screen. Imagine if the 17” LCD screen on your laptop today could be undocked from the laptop and instantly become a mobile Tablet.


Presenting: The Future




Lenovo IdeaPad U1 hybrid notebook



Seeing this picture gave me an epiphany as to why Windows 8 is being designed as a hybrid Tablet/WIMP operating system. With Windows 8 running on a hybrid notebook like the one above, Microsoft could tout a computing environment that truly “just did everything”, leaving iPad owners feeling pretty stupid for shelling out $699 for a device that is paltry in comparison.

I’ll have a Tablet one day, and I guarantee it. But mine will dock to a gaming laptop that will empower me to actually get some serious work (or play) done. My tablet will play Crysis 2 when docked, and you will tell me about the next iteration of Angry Birds or Fruit Ninja.

I’m not against tablets, but I believe they are an intermediary technology at best: a stepping stone to something much more powerful. On their own, they aren’t anything to be excited about, aside from the part where I say “I can’t wait until they become the screen on my laptop so I can dock and undock”.

I don’t know why we, as self-proclaimed rational thinking people, are acting as absurdly stupid as we are. We’re being easily conditioned to accept less and less for the same price. $699.00 for a device that comes with 64GB of storage? Have we lost our minds? Being told “more storage and power aren’t everything” and actually believing it makes me lose hope for the future of humanity.

I have a portable hard drive that fits in my pocket and gives me 1TB to keep my files on. Hell, the internal hard drive on my own laptop is 500GB and puts the iPad 2 to shame. To make up for the intentional storage screwing you’ve already taken, Apple will soon be offering cloud storage with 5GB free, but anything more than that will cost you rent. I have more storage than what Apple is offering in the left pocket of my jeans right now – in the form of an 8GB USB flash drive.

Seriously, you’re dumb enough to rent storage space to make up for the fact that Apple fsked you over intentionally. We’ve been reduced to drooling idiots, apparently. You can’t even do what you want with your own purchased device, as the App Store will confirm. There is a reason people are constantly trying to jailbreak these devices.

I’m not an Apple, Microsoft, or Linux fanboy by any means. I just look at what “just works” for what I need to do. Apple is too totalitarian about their computing ecosystem for me to ever bother, Microsoft (while not perfect) gives me at least some level of control over my computer, albeit not nearly as much as Linux would. Windows, to me, is a comfortable middle ground.

But aside from that, tablets alone aren’t a solution. In fact, they don’t seem to “just do” anything particularly well – other than constantly sucker you out of money and deliver less of everything as a result. So I suppose Apple has made the ultimate technology: one that convinces its customers that paying $700 – $900 for a tablet, essentially the price of an entry-level laptop many times more powerful, is acceptable. Oh, and also a walled garden under the rule and authority of Apple and only Apple, called the App Store. You’d think Steve Jobs hated Open Source, or even third-party developers.

I think the future is in the hybrid Tablet/Laptop running Windows 8, to be honest. It’s the best of both worlds, and it would run just about every existing Windows program on Earth out of the box, which makes the fabled Apple App Store a minority bragging right at best.

Don’t even get me started on the Kindle or Nook. My books don’t need batteries and are never “leased” or “licensed”.

Jun 9, 2011

Doing what Nintendon’t

I’m a huge fan of @Sega games, especially the Sonic the Hedgehog series – at least the old-school, side-scrolling games like Sonic 1 and Sonic 2, though I’m still unsure about Sonic the Hedgehog 3. Of course, we’ll just ignore the fact that there were a lot of piss-poor 3D versions of Sonic the Hedgehog that made the true fans cringe and scream in agony, while we move on to the latest foray into the franchise – Sonic the Hedgehog 4.

For what it’s worth, it’s a decent game. I’m sure plenty of people worked hard as part of Sonic Team to bring this game together in an attempt to get back to the roots of the franchise and bring back all the things the fans loved from the glory days. Unfortunately for @Sega, they managed to fubar quite a lot in their launch – namely, in being outdone by a single fan who, unfunded and in his spare time, produced Sonic Fan Remix, doing a better job than an entire team of highly skilled, highly paid programmers with a multi-million dollar development budget.


Sonic the Hedgehog 4 – while good, is nowhere as amazing as Sonic Fan Remix


What’s worse is that in the digital age of marketing, @Sega has managed to forget a few cardinal rules of the game, which I’ll be outlining here:

1. If a single fan can do more justice to your franchise than an entire team of paid professionals with a multi-million dollar budget, you don’t send a flurry of DMCA notices to have the content removed.

2. This is the age of the Internet. Anything truly spectacular is like Bebe’s Kids: it doesn’t die, it multiplies.

3. Any truly intelligent executive should know that the first question shouldn’t have been “How fast can we have that content taken down?” but instead “Why isn’t he working for us?”

Allow me to elaborate.

You see, if you were to follow that link to Sonic Fan Remix, you’d be greeted by a well-polished website and a gorgeous gameplay video of the beta. You would also notice a giant “Download” button leading to a file repository that has removed the game on “Copyright Violation” grounds – which is an ambiguous way of saying:

Sega realized they were having their asses handed to them by a single programmer/artist, working in his spare time without a budget, who created a Sonic the Hedgehog game that garnered more attention and praise than their own multi-million dollar production, Sonic 4. Instead of admitting they did a piss-poor job and had earned the scorn of multitudes of diehard Sonic fans (to the point of calls for boycotting the game), they got pissy and sent their lawyers after Sonic Fan Remix, having it removed on grounds of copyright violation.



Sonic Fan Remix | Actual HD Gameplay


In the history of prosumer culture, this has never once worked to the advantage of any corporate entity. I can point to the RIAA/MPAA versus Napster, Limewire, eDonkey, and others – a never-ending battle that rages on today, to the dismay (and wanton litigation losses) of the RIAA/MPAA in the process. Anything truly revolutionary in the digital world doesn’t die; it multiplies, or evolves into something greater, like a Phoenix.

To be honest, if I were working at @Sega and saw this game prior to the launch of Sonic 4, I would have probably pissed myself. It is superior to Sonic 4 in nearly every possible way, right down to the fact that it didn’t require a Hollywood-style blockbuster budget to create, nor an entire highly paid professional team. Add to that the meticulous detail the programmer put into actually studying and figuring out the game physics from the original games in order to replicate the experience precisely.

This one guy has managed to do what the entire current Sonic Team could not, with all of their resources and manpower.

However, in the spirit of corporate stupidity, instead of seeing this work and making it their number one priority to turn it into an official @Sega game, giving it the resources and manpower it deserves, @Sega decided to bawl about copyright violation and make a piss-poor attempt at sweeping their shame under the collective rug in hopes nobody would ever talk about it again.

So it goes with Sonic Fan Remix. While @Sega may think they have done due diligence and wiped Sonic Fan Remix from the face of the Earth, they have instead given the fans of the franchise even more reason to spread the game around the world to multiple server mirrors, making it available not in a single location but probably hundreds, propagating like wildfire while earning scorn for their own Sonic 4 in the process.

Ignoring the Streisand Effect in the online world is a cardinal sin. Ignoring what your customers actually want is among those seven deadly marketing sins as well. Not having enough common sense at the executive level to know the difference between filing a DMCA notice and making it a priority to hire the person is simply bad business in the prosumer culture.

Let me tell you what I see when I watch (and play) Sonic Fan Remix.

This is, hands down, the most brilliant example of a portfolio that could ever grace the desks of Sega Enterprises, and they tossed it aside, never understanding that they’d literally been handed the Goose that Lays the Golden Eggs.

This isn’t to say that @Sega should cancel Sonic the Hedgehog 4, by any means. What I’m getting at is that Sonic Fan Remix is the single most opportune event that could have come to their attention. A literal gold mine for the rebirth of the Sonic the Hedgehog franchise.

Here’s a plan of action for Sega:

1. Hire the guy behind Sonic Fan Remix.

2. Make Sonic Fan Remix an official Sega production.

3. Keep the name Sonic Fan Remix.

4. The premise of the title is that the fans submit levels with a level creator that Sega puts out in a contest, with the winners being included officially in the release.

5. The level creator remains in the game upon release.

6. Fan created levels are then facilitated to be uploaded to a central repository, with which other players can download such levels, rate the levels, and even attach gamer points and fan created achievements to their custom levels. (think Little Big Planet 2)

7. Sega can create official extended level packs in this Remix network (aside from fan created levels) as a Downloadable Content pack, as well as additional features, unlockables and characters for preset microtransaction payments.

8. Sega makes every Sonic fan deliriously happy, saves face, provides a Sonic game that successfully relaunches the franchise, acknowledges the ridiculous talent of the creator of Sonic Fan Remix, as well as every fan on Earth, while facilitating residual income from the process, and rebuilds/unites every Sonic the Hedgehog fan on the planet under a single banner called Sega.

Until such time as Sega comes to their senses, Sonic Fan Remix isn’t going anywhere, simply because the world is full of fans that want it. As such, Sega needs to quit trying to squash it and actually acquire it productively before it spreads further outside of their control.

I’ll leave my readers with a (currently) working link to download Sonic Fan Remix in the meantime.


Of course, in the event that @Sega hasn’t learned a damned thing from this article, I may provide another link to a mirrored copy of this game. However, I will shake my head at their corporate stupidity – as should you.

Link doesn't work? Try clicking here: Google Search [Sonic Fan Remix]

Update: The June 9th link was taken down and replaced with the June 11th link. Apparently @Sega hasn't learned a thing from reading this, or maybe they haven't bothered to read it and comprehend what is being said here. No matter how many times they DMCA the mirrors, there exist hundreds or thousands of copies of this file across the world, and that number grows every time they remove a single mirror (after all, wouldn't you expect people to say, "The link doesn't work, but here are three more mirrors I put up for you"?). Even if @Sega keeps killing the links I'm providing here, anybody with five minutes and access to Google can find "Sonic Fan Remix Download".

Any further link updates will be posted in the comments of this post in the future event that the link above stops working. Feel free to post your own DL links in the comments for Sonic Fan Remix as well.

Jun 4, 2011

Unreality

#AugmentedReality | Step into a virtual environment where virtual reality and reality merge.

 

One of the many things that cross my mind when thinking about the future of virtual environments is whether any single virtual environment will reign supreme. I hear a lot about how virtual environments will redefine the future of interaction, or about the latest advancements; far less often do I hear about how virtual environments will truly evolve.

 

While things like Second Life have a lot of merit and potential in the grand scheme of things, I truly believe the real focus for the future is Augmented Reality. Not the augmented reality we see today, though, because it is currently too limited to make a dent. The augmented reality we see today is tied to the notion of markers for the camera to recognize and translate into digital data, whether a QR code or a predefined picture.

 

Aside from that, it’s very limiting because we see the world through the eyes of the computer via a stationary camera, maybe a web camera or similar. So in that respect, augmented reality is really limited to stationary scenes or instances: holding up QR codes to the camera and viewing the results through the computer screen.

 

But what if this Augmented Reality was unshackled from these limitations?

 

 

 

Imagine this, in real time, from a first person perspective.

 

 

 

I suppose we would have to first take a look at the current limitations and address them properly, or at the very least make some speculations as to how we are to overcome them.

 

Let’s first address the issue of marker-based augmented reality. To me, it’s a good first step in the right direction, but far too limiting. To make this ubiquitous, we’d need to devise a method of markerless augmented reality and work from there. I’ve seen some really good steps in this direction from things like HandyAR, and in recent weeks, announcements from Sony have really piqued my interest in the arena of augmented reality. So let’s say that markerless augmented reality is a solid approach.

 

The question now stands as to how we unchain the view from a stationary aspect and make it truly mobile in nature.

 

I would imagine such a system would be a lightweight visor, using things like OLED displays, and maybe even a stereoscopic setup like we see in the Nintendo 3DS, to capture reasonable depth video of the surroundings in a mobile form. But of course, this isn’t enough. The camera on the visor must also convey the depth of the surroundings, so we turn to 3D cameras like the one found in the Kinect to manage that, though probably with higher fidelity and resolution (a matter of technology progression).

 

So far we’re on a good track with this.

 

 

 

We’re beginning to see the possibilities.

 

 

 

Now onto the actual environment and markerless tracking. If we don’t have an actual marker any longer, how do we handle the positioning and environment?

 

Well, an idea has been in my head for a number of years (despite not knowing how feasible it is) that could offer some insight: what if the unit had GPS? We have an idea of how accurate standard GPS tracking is these days, and we could create a virtual grid overlay using that data. I suspect latitude and longitude down to minutes and seconds (resolution allowing) would be enough to define that virtual grid overlay. In the event it is not accurate enough, we can always do client-side approximations and tracking to compensate.
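As a rough illustration of that grid overlay, here is a minimal sketch in Python. The one-arc-second cell size, the function name, and the constants are all my own assumptions for illustration, not any real system’s API:

```python
import math

# Hypothetical sketch of the GPS grid overlay described above: latitude and
# longitude are quantized into fixed-size cells, and virtual objects would be
# filed under the cell they fall in.

CELL_SIZE_DEG = 1 / 3600  # one arc-second per cell (~30 m at the equator)

def grid_cell(lat: float, lon: float) -> tuple[int, int]:
    """Map a GPS fix to an integer grid cell."""
    return (math.floor(lat / CELL_SIZE_DEG), math.floor(lon / CELL_SIZE_DEG))

# Nearby GPS fixes land in the same cell; client-side tracking would refine
# the position within a cell to compensate for GPS jitter.
print(grid_cell(40.748817, -73.985428))  # a fix in midtown Manhattan
```

The cell size is the knob: coarser cells tolerate more GPS error, finer cells allow more precise object placement.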

 

So we have a basic grid overlay, using GPS, and a visor to convey this Augmented virtual world.

 

Of course, this is still not quite enough, but it becomes a hell of a core framework to build on. Going forward, such a system would rely heavily on Wi-Fi for data, so it probably wouldn’t work too well outside Wi-Fi range, unless 4G data plans on cell phones could offer a solution (though, to be honest, cell phone companies have an archaic business model and a stranglehold on that system, charging for overages and per-minute usage).

 

I think we’re getting somewhere though…

 

Let’s say, then, that the initial limitation would be within a Wi-Fi range (with advances coming later to alleviate this). So we run the bulk of the software and computation on a cloud system, and stream results to the visor for output. In this manner, we take the grid overlay information, the user orientation and position, plus the location of 3D real world objects (as recorded by the Kinect type 3D Camera on the visor) and we have a virtual landscape to start with.

 

Just as the visor is broadcasting your own position to the cloud server, the cloud server is keeping track of others with a visor system in proximity to you, updating your visor accordingly.
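The cloud-side proximity tracking described above could be sketched roughly like this. All the names and the 50-meter radius are assumptions for illustration, and I’m using flat x/y coordinates in meters rather than real GPS math:

```python
import math

# Hypothetical sketch of the cloud server's visor tracking: each visor reports
# its position, and the server returns every other visor within some radius so
# the client can render their overlays.
positions: dict[str, tuple[float, float]] = {}  # visor_id -> (x, y) in meters

def report(visor_id: str, x: float, y: float) -> None:
    """A visor broadcasts its current position to the cloud."""
    positions[visor_id] = (x, y)

def nearby(visor_id: str, radius: float = 50.0) -> list[str]:
    """Return the other visors within `radius` meters of this one."""
    x, y = positions[visor_id]
    return [other for other, (ox, oy) in positions.items()
            if other != visor_id and math.hypot(ox - x, oy - y) <= radius]

report("alice", 0.0, 0.0)
report("bob", 30.0, 40.0)    # 50 m away from alice
report("carol", 200.0, 0.0)  # well out of range
print(nearby("alice"))
```

A real deployment would need spatial indexing rather than a linear scan, but the contract is the same: report in, query out.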

 

Alright, this is looking good so far.

 

Now, let’s say we want to create an item in the real-world grid from our virtual options. Having an inventory at our disposal would come in handy. Data objects would be phantom for the most part, but would respond to the user through physics or virtual touch (like buttons and such), which programming/scripting would accomplish. However, it wouldn’t be like pulling out a virtual chair, dropping it on the real-world grid, and sitting on it; it’s not really solid, so we accept that these digital items are mostly there for information or augmentation of the environment.

 

You may be noticing that my interpretation of the Augmented Grid isn’t a passive experience, like we normally see and expect from current AR. Sure, we’ll have widgets like weather apps and information displays, but those just feed you information on a one-way street. I’m interested in a dynamic augmented grid where we participate and create openly.

 

Keeping track of spatial alignment and position for virtual objects in this augmented reality would fall to the combination of GPS, the Kinect-style camera, and cloud software. So when we attach, say, a virtual digital information sign to a building (a virtual notecard, so to speak), the camera on the visor knows we are facing a solid real-world object, the gyroscope and internal compass know the direction, and the GPS knows the rough grid location where that data gets stored. Toss in an altimeter while we’re at it, to give access to a Z location and not just X and Y.
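Bundled together, the sensor readings for one of those virtual notecards might look like the record below. This is purely a sketch of the idea; every field name is my own invention:

```python
from dataclasses import dataclass

# Hypothetical anchor record: the visor's GPS, compass, and altimeter readings
# are stored together so the cloud can hand the object back to any other visor
# that looks at the same spot.
@dataclass
class SpatialAnchor:
    lat: float          # GPS latitude  (the grid's X)
    lon: float          # GPS longitude (the grid's Y)
    altitude_m: float   # altimeter reading (the Z the post argues for)
    heading_deg: float  # compass direction the user was facing when attaching
    payload: str        # e.g. the text of the virtual notecard

anchor = SpatialAnchor(40.7488, -73.9854, 15.0, 270.0, "Open house Saturday!")
print(anchor)
```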

 

Since this system runs mostly on a cloud architecture, we could include any number of virtual systems and widgets in our spatial view. Maybe Skype calls with video and audio? Hand gestures at this point become trivial for controlling the system (and subsequent programs) loaded into our visor, via the Kinect camera and motion-tracking APIs. The same mechanisms would also be used for manipulating the virtual overlay environment: grabbing and positioning virtual items, displaying an augmented keyboard overlay on a surface to type text onto a virtual notecard in 3D space; the list goes on.

 

In reality, all we see is a person wearing a visor headset.

 

There would be an overflow of data in the 3D world around us, so filters would be employed to keep the stream relevant to our interests. Hand gestures while facing locations (like restaurants, venues, or even other people) could allow us to access further information on demand.
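The interest filter could work as simply as tag matching. A toy sketch, with an invented tag vocabulary:

```python
# Hypothetical interest filter: every nearby data object carries tags, and the
# visor only renders the objects whose tags intersect the wearer's interests.
interests = {"food", "music"}

objects = [
    {"label": "Thai restaurant menu", "tags": {"food"}},
    {"label": "Open mic tonight",     "tags": {"music", "events"}},
    {"label": "Insurance billboard",  "tags": {"ads"}},
]

# Set intersection decides what survives the filter.
visible = [o["label"] for o in objects if o["tags"] & interests]
print(visible)  # the billboard is filtered out
```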

 

Then there is the augmented self.

 

It may seem trivial, but we do this already with virtual environments in that we customize our virtual avatars in the manner in which we wish to be perceived. It would be no different in the Augmented Reality space: since we are broadcasting our positions to the cloud architecture, the Kinect camera can overlay a virtual skeleton and track points, then overlay a virtual avatar on the person’s real-life body, controlled through real-life movements in real time.

 

I believe at this point we’re getting dangerously close to the realm of being masters of our reality.

 

We could be walking down the street, in real life, wearing a visor, and the view of the world around us would be vastly different. Other people would look like virtual representations, maybe avatars of all types. They could also have scripted items on them, like we do in Second Life, and would be represented in the Augmented Grid accordingly.

 

Imagine merging Second Life with Real Life, and then taking that a hundred-fold further with the amount of information and personal interaction access we’d have on-demand in spatial reality.

 

We would create games that ran in real life, through augmented grid space. Zombie invasions in our back-yard, Tower defense on our front lawns. Augmented Reality HALO games in the neighborhood park.

 

Augmented distance games against people around the world, on-demand.

 

Then there is a viable business use as well. Imagine being able to redecorate an entire house in real life with virtual furniture, painting all of your walls different colors in real time without touching a drop of paint. Want to see what you’d look like in a new pair of clothes? Try it on in Augmented Space. Just as such a system can project an avatar overlay, it could just as well be used to overlay a dress or shirt onto your augmented wireframe. Cloth physics of course would be applicable for that.

 

Maybe your avatar is also a hybrid? A real life self wearing augmented items?

 

Tour guides in art museums would be great as well, with audio and visual. Augmented art galleries, where you are in the environment itself or even in the paintings… who knows?

 

City walks would be phenomenal using such a visor system.

 

Want to see what a building will look like before it is built? Overlay the HD model on the AR Grid at the location where it is supposed to be, and get a feel for the presence and looks of the design in 1:1 scale.

 

This is the world I wait to live in. It is a world inspired by a dream I had years ago, where all of these things were not only possible, but commonplace. Experienced from a first person perspective it is absolutely wonderful, if not a bit confusing at first.

 

I’m sure this is a long-standing dream of many futurists… but that doesn’t stop me from dreaming :)

Jun 1, 2011

Poit Mesh Narf!

#Secondlife | I believe I’m getting the hang of this idea of blogging by the seat of my pants. Sure, it’s a little more raw than my usual meticulously over-polished approach, but it’s possible that these sorts of posts convey what’s really going on in my mind without too much of a PR filter.

 

This post is a little different than most, because I normally start out with a witty title and write around that idea. This time, however, I’m doing it in reverse: writing the content and then thinking up an appropriate title after the fact, based on whatever it was that I wrote.

 

Today’s prominent thought is the announcement by Linden Lab that Mesh will likely arrive in July. Whatever the actual timing, I cannot help but feel entirely underwhelmed by the announcement. To me, it’s starting to feel like virtual world companies are leveraging common features as though they were revolutionary, giving them the sort of hype and attention usually reserved for something truly innovative and new.

 

 

 


Tada! We’ve incorporated totally obvious things, eight years later!

 

 

Of course, Mesh will revolutionize SecondLife in some manner, but it’s like forcing Michelangelo to paint the Sistine Chapel with Crayola watercolors simply because you can’t be bothered to let him use professional-quality materials. Mesh, for all it’s worth, has been a staple of 3D virtual environments since the 1990s. It’s just another name for uploading and using 3D models in a virtual environment, and prior to the Linden Lab effort with Mesh, this is pretty much how every virtual environment in history that let users generate the world accomplished user-generated content.

 

I’m thinking back to ActiveWorlds, which has had this since 1996.

 

Linden Lab is the exception to the status quo in that, from the beginning, by some really odd fluke of reasoning (Shiny Pants!), nobody thought it important enough to allow (much like actually putting particle controls in the build menu). Of course, prims can be quite powerful to an extent, and allow for “rapid prototyping” in a virtual environment. The whole idea of Constructive Solid Geometry (which the prims approach closely resembles) is not foreign, or even a bad idea.

 

But ignoring the basic need for importing 3D models while most (if not all) other virtual environments considered it standard protocol seems silly. I suppose the worry was that a single object as one model, versus many prims, would somehow dilute the business model and circumvent the per-prim revenue that Linden Lab has enjoyed all along.

 

Not that the whole idea of Mesh is new where SecondLife is concerned, because (like most things concerning SL, it seems) this is an idea implemented years ago by offshoots like realXtend, and only recently getting the Linden Lab seal of approval. Kind of like body physics, or any number of things the community seemed to work out ahead of time. For complete information about realXtend, I’d highly suggest following and having a talk with @AdamFrisby on Twitter.

 

Back to the whole per-prim business model, though, for a moment. I’m sure that actual 3D models will come with a weighted prim cost based on model complexity, in order to keep the existing per-prim business model intact. I really don’t understand why something like Mesh should be hindered to sustain legacy interests. In ActiveWorlds, I know there is a cell limit where you can only place a certain number of objects per area, and a model counts as a single object regardless of its complexity. In SecondLife terms, that would translate to 1 model equals 1 prim; I know, it’s shocking, right?

 

But that doesn’t seem to be the case with SecondLife: a 3D model uploaded to the system undergoes some calculation of rendering cost, and the prim cost is extrapolated from that, so your single model then becomes the cost of many prims. To me, intentionally inflating the item count to make one model equal twenty seems… hell, I don’t even have a word for that. Maybe ass-backwards, or defeating the damned point of using a 3D model to begin with.
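To make the complaint concrete, here is a toy illustration of the kind of complexity-weighted scheme being objected to. This is not Linden Lab’s actual formula, just a guess at its general shape (the function name and the 1000-triangles-per-prim threshold are invented):

```python
# Toy complexity-weighted prim cost: one uploaded model is billed as if it
# were many prims, scaling with its triangle count.
def prim_equivalent_cost(triangle_count: int, triangles_per_prim: int = 1000) -> int:
    """Charge one prim per block of triangles, minimum one prim."""
    return max(1, -(-triangle_count // triangles_per_prim))  # ceiling division

print(prim_equivalent_cost(500))    # a simple model costs 1 prim
print(prim_equivalent_cost(20000))  # a detailed model is billed as 20 prims
```

Under a one-model-equals-one-prim policy like the ActiveWorlds cell limit described above, `prim_equivalent_cost` would simply always return 1.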

 

Then, of course, we’re dealing with preserving the legacy business model, which is charging per prim, so it stands to reason that the introduction of 3D Models into SecondLife would somehow be nerfed (NARF!) as a consequence. So we’re back to this whole rendering cost idea, which implies that some sort of rendering of these models and content must be happening on the server side and Linden Lab sees the need to offset that cost.

 

I’m getting this uneasy feeling that future implementations of anything in SecondLife will be at the mercy of outdated business decisions made years prior, and end up like a Ferrari with a speed cap set to 60MPH. In the same train of thought, though, I’m getting that feeling concerning just about every virtual environment I’ve encountered.

 

I don’t think I could reasonably work at a place like Linden Lab or any other virtual environment company, simply because I value my intelligence and sanity. Any point where you hear me say “Hey, Linden Lab! Hire Me!” is actually an inside joke between myself and a few close friends in the industry. Because let’s face it, Aeonix Linden would be about as effective as any other person working at Linden Lab to make appropriate change, which is to say, not really. I’d be just taking orders like any other peon in the ranks and under the gun to grind out the same old song and dance in accordance with the PR department.

 

I might be in a bit of a cynical mood lately, but a lot of this stuff seems like nonsense to me. The biggest news item for development from Linden Lab is in implementing a feature that is commonplace in every other system since the dawn of virtual environments, and treating it like everyone should be floored in amazement and awe at the coming of this thing called Mesh.

 

Of course, it will revolutionize SecondLife. But if you deliberately withhold any common thing for years and suddenly offer it, it’ll seem like the biggest thing since sliced bread. We could apply this to, say, food: if you didn’t have any for a really long time, even when it’s common elsewhere, and suddenly you’re offered some, that sandwich becomes the greatest thing you’ve ever tasted.

 

I’ll call this the starvation theory.

 

If you starve your customers of common things long enough, they’ll be grateful for just about any scraps you decide to throw in their direction later on. Inversely, they’ll be just as equally pissed off when you take away a Gourmet Meal and hand them a Happy Meal (Viewer 2). The same mentality applies to consciously deciding to remove integral keystones to the foundation of what makes SecondLife what it is and calling it a “basic” view. Stuff like… an inventory, landmarks, voice chat, and the ability to spend money; this latter one perplexed me to no end because I always thought it was in the best interest for Linden Lab to take people’s money.

 

Treating the use of 3D models in a virtual environment like the next big innovation… I really never thought I’d see the day. For my part, I’m grateful that Linden Lab decided to show a little initiative and get with the program, but I won’t be playing into the hype. It’s a feature as common as water, and the fact that Linden Lab voluntarily left people in the desert for years doesn’t change that.

 

It just makes Linden Lab about eight years late to the party.

 

Better late than never, I suppose.