
The Lessons of Lucasfilm's Habitat


Chip Morningstar and F. Randall Farmer
Electric Communities

This paper was presented at The First Annual International Conference on Cyberspace in 1990. It was published in Cyberspace: First Steps, Michael Benedikt (ed.), 1990, MIT Press, Cambridge, Mass.



Lucasfilm's Habitat was created by Lucasfilm Games, a division of LucasArts Entertainment Company, in association with Quantum Computer Services, Inc. It was arguably one of the first attempts to create a very large scale commercial multi-user virtual environment. A far cry from many laboratory research efforts based on sophisticated interface hardware and tens of thousands of dollars per user of dedicated compute power, Habitat is built on top of an ordinary commercial online service and uses an inexpensive -- some would say "toy" -- home computer to support user interaction. In spite of these somewhat plebeian underpinnings, Habitat is ambitious in its scope. The system we developed can support a population of thousands of users in a single shared cyberspace. Habitat presents its users with a real-time animated view into an online simulated world in which users can communicate, play games, go on adventures, fall in love, get married, get divorced, start businesses, found religions, wage wars, protest against them, and experiment with self-government.

The Habitat project proved to be a rich source of insights into the nitty-gritty reality of actually implementing a serious, commercially viable cyberspace environment. Our experiences developing the Habitat system, and managing the virtual world that resulted, offer a number of interesting and important lessons for prospective cyberspace architects. The purpose of this paper is to discuss some of these lessons. We hope that the next generation of builders of virtual worlds can benefit from our experiences and (especially) from our mistakes.

Due to space limitations, we won't be able to go into as much technical detail as we might like; this will have to be left to a future publication. Similarly, we will only be able to touch briefly upon some of the history of the project as a business venture, which is a fascinating subject of its own. Although we will conclude with a brief discussion of some of the future directions for this technology, a more detailed exposition on this topic will also have to wait for a future article.

The essential lesson that we have abstracted from our experiences with Habitat is that a cyberspace is defined more by the interactions among the actors within it than by the technology with which it is implemented. While we find much of the work presently being done on elaborate interface technologies -- DataGloves, head-mounted displays, special-purpose rendering engines, and so on -- both exciting and promising, the almost mystical euphoria that currently seems to surround all this hardware is, in our opinion, both excessive and somewhat misplaced. We can't help having a nagging sense that it's all a bit of a distraction from the really pressing issues. At the core of our vision is the idea that cyberspace is necessarily a multiple-participant environment. It seems to us that the things that are important to the inhabitants of such an environment are the capabilities available to them, the characteristics of the other people they encounter there, and the ways these various participants can affect one another. Beyond a foundation set of communications capabilities, the technology used to present this environment to its participants, while sexy and interesting, is a peripheral concern.

What is Habitat?

Habitat is a "multi-player online virtual environment" (its purpose is to be an entertainment medium; consequently, the users are called "players"). Each player uses his or her home computer as a frontend, communicating over a commercial packet-switching data network to a centralized backend system. The frontend provides the user interface, generating a real-time animated display of what is going on and translating input from the player into requests to the backend. The backend maintains the world model, enforcing the rules and keeping each player's frontend informed about the constantly changing state of the universe. The backend enables the players to interact not only with the world but with each other.

Habitat was inspired by a long tradition of "computer hacker science fiction", notably Vernor Vinge's novel, True Names [1], as well as many fond childhood memories of games of make-believe, more recent memories of role-playing games and the like, and numerous other influences too thoroughly blended to pinpoint. To this we add a dash of silliness, a touch of cyberpunk [2,3], and a predilection for object-oriented programming [4].

The initial incarnation of Habitat uses a Commodore 64 for the frontend. One of the questions we are asked most frequently is, "Why the Commodore 64?" Many people somehow get the impression that this was a technical decision, but the real explanation has to do with business, not technology. Habitat was initially developed by Lucasfilm as a commercial product for QuantumLink, an online service (then) exclusively for owners of the Commodore 64. At the time we started (1985), the Commodore 64 was the mainstay of the recreational computing market. Since then it has declined dramatically in both its commercial and technical significance. However, when we began the project, we didn't get a choice of platforms. The nature of the deal was such that both the Commodore 64 for the frontend and the existing QuantumLink host system (a brace of Stratus fault-tolerant minicomputers) for the backend were givens.

The largest part of the screen is devoted to the graphics display. This is an animated view of the player's current location in the Habitat world. The scene consists of various objects arrayed on the screen, such as the houses and tree you see here. The players are represented by animated figures that we call "Avatars". Avatars are usually, though not exclusively, humanoid in appearance. In this scene you can see two of them, carrying on a conversation.

Avatars can move around, pick up, put down and manipulate objects, talk to each other, and gesture, each under the control of an individual player. Control is through the joystick, which enables the player to point at things and issue commands. Talking is accomplished by typing on the keyboard. The text that a player types is displayed over his or her Avatar's head in a cartoon-style "word balloon".

A typical Habitat scene (© 1986 LucasArts Entertainment Company).

The Habitat world is made up of a large number of discrete locations that we call "regions". In its prime, the prototype Habitat world consisted of around 20,000 of them. Each region can adjoin up to four other regions, which can be reached simply by walking your Avatar to one or another edge of the screen. Doorways and other passages can connect to additional regions. Each region contains a set of objects which define the things that an Avatar can do there and the scene that the player sees on the computer screen.
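The region topology described above can be sketched as a simple adjacency structure. Everything here (class and field names, the `connect` helper) is a hypothetical illustration, not Habitat's actual data layout:

```python
# Hypothetical sketch of Habitat-style region adjacency (assumed names,
# not the real implementation). Each region can adjoin up to four others,
# one per screen edge; doors and passages provide extra connections.

EDGES = ("north", "south", "east", "west")

class Region:
    def __init__(self, name):
        self.name = name
        self.edges = dict.fromkeys(EDGES)   # at most four edge neighbors
        self.passages = {}                  # e.g. {"front door": other_region}
        self.objects = []                   # contents define the scene

    def connect(self, edge, other, back_edge):
        # Walking off one edge of the screen arrives at the matching
        # edge of the neighboring region.
        assert edge in EDGES and back_edge in EDGES
        self.edges[edge] = other
        other.edges[back_edge] = self

town = Region("town square")
forest = Region("enchanted forest")
town.connect("west", forest, "east")

assert town.edges["west"] is forest
assert forest.edges["east"] is town
```

With regions as nodes and edges plus passages as links, a 20,000-region world is just a large sparse graph, which is one reason each region can be simulated independently.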

Some of the objects are structural, such as the ground or the sky. Many are just scenic, such as the tree or the mailbox. Most objects, however, have some function that they perform. For example, doors transport Avatars from one region to another and may be opened, closed, locked and unlocked. ATMs (Automatic Token Machines) enable access to an Avatar's bank account. Vending machines dispense useful goods in exchange for Habitat money. Habitat contained its own fully-fledged economy, with money, banks, and so on. Habitat's unit of currency is the Token, owing to the fact that it is a token economy and to acknowledge the long and honorable association between tokens and video games.

Many objects are portable and may be carried around in an Avatar's hands or pockets. These include various kinds of containers, money, weapons, tools, and exotic magical implements. Listed here are some of the most important types of objects and their functions. The complete list of object types numbers in the hundreds.

Object Class: Function
ATM: Automatic Token Machine; access to an Avatar's bank account
Avatar: Represents the player in the Habitat world
Bag, Box: Containers in which things may be carried
Book: Document for Avatars to read (e.g., the daily newspaper)
Bureaucrat-in-a-box: Communication with system operators
Change-o-matic: Device to change Avatar gender
Chest, Safe: Containers in which things can be stored
Club, Gun, Knife: Various weapons
Compass: Points direction to West Pole
Door: Passage from one region to another; can be locked
Drugs: Various types; changes Avatar body state, e.g., cure wounds
Elevator: Transportation from one floor of a tall building to another
Flashlight: Provides light in dark places
Fountain: Scenic highlight; provides communication to system designers
Game piece: Enables various board games: backgammon, checkers, chess, etc.
Garbage can: Disposes of unwanted objects
Glue: System building tool; attaches objects together
Ground, Sky: The underpinnings of the world
Head: An Avatar's head; comes in many styles; for customization
Key: Unlocks doors and other containers
Knick-knack: Generic inert object; for decorative purposes
Magic wand: Various types; can do almost anything
Paper: For writing notes, making maps, etc.; used in mail system
Pawn machine: Buys back previously purchased objects
Plant, Rock, Tree: Generic scenic objects
Region: The foundation of reality
Sensor: Various types; detects otherwise invisible conditions in the world
Sign: Allows attachment of text to other objects
Stun gun: Non-lethal weapon
Teleport booth: Means of quick long-distance transport; analogous to a phone booth
Tokens: Habitat money
Vendroid: Vending machine; sells things


The following, along with several programmer-years of tedious and expensive detail that we won't cover here, is how the system works:

At the heart of the Habitat implementation is an object-oriented model of the universe.

The frontend consists of a system kernel and a collection of objects. The kernel handles memory management, display generation, disk I/O, telecommunications, and other "operating system" functions. The objects implement the semantics of the world itself. Each type of Habitat object has a definition consisting of a set of resources, including animation cels to drive the display, audio data, and executable code. An object's executable code implements a series of standard behaviors, each of which is invoked by a different player command or system event. The model is similar to that found in an object-oriented programming system such as Smalltalk [5], with its classes, methods and messages. These resources consume significant amounts of scarce frontend memory, so we can't keep them all in core at the same time. Fortunately, their definitions are invariant, so we simply swap them in from disk as we need them, discarding less recently used resources to make room.
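The swapping scheme described above amounts to a least-recently-used cache over invariant on-disk definitions. The following is a minimal sketch under that reading; the class name, capacity, and loader are assumptions for illustration, not the Habitat frontend's real code:

```python
# Hypothetical sketch of the frontend's resource swapping (assumed names).
# Object definitions -- cels, audio, behavior code -- are invariant, so
# they can be reloaded from disk on demand and the least recently used
# ones discarded when frontend memory runs low.

from collections import OrderedDict

class ResourceCache:
    def __init__(self, capacity, load_from_disk):
        self.capacity = capacity
        self.load_from_disk = load_from_disk   # resource_id -> bytes
        self.resident = OrderedDict()          # in-core resources, LRU order

    def fetch(self, resource_id):
        if resource_id in self.resident:
            self.resident.move_to_end(resource_id)  # mark recently used
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)   # evict least recently used
            self.resident[resource_id] = self.load_from_disk(resource_id)
        return self.resident[resource_id]

cache = ResourceCache(2, load_from_disk=lambda rid: b"def:" + rid.encode())
cache.fetch("tree"); cache.fetch("door"); cache.fetch("tree")
cache.fetch("avatar")                      # evicts "door", not "tree"
assert "door" not in cache.resident and "tree" in cache.resident
```

Because definitions never change, eviction needs no write-back step, which keeps this workable even on a disk as slow as the Commodore 64's.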

When an object is instantiated, we allocate a block of memory to contain the object's state. The first several bytes of an object's state information take the same form in all objects, and include such things as the object's screen location and display attributes. This standard information is interpreted by the system kernel as it generates the display and manages the run-time environment. The remainder of the state information varies with the object type and is accessed only by the object's behavior code.
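This fixed-header layout can be illustrated with a packed byte structure. The particular fields and widths below are invented for the example; the paper does not specify the actual byte layout:

```python
# Hypothetical illustration of the common state header (field names and
# sizes are assumptions). Every instantiated object starts with the same
# kernel-interpreted fields; the rest of the state block is private to
# the object type's behavior code.

import struct

# Kernel-visible header: x, y screen location plus a display-flags byte,
# packed the same way for every object type.
HEADER = struct.Struct("<HHB")   # x, y, display flags

def pack_state(x, y, flags, private_state=b""):
    return HEADER.pack(x, y, flags) + private_state

def unpack_header(state):
    # The kernel reads only the fixed prefix, ignoring type-specific state.
    return HEADER.unpack_from(state, 0)

state = pack_state(120, 80, 0x03, private_state=b"\x01")  # e.g. a door, open
assert unpack_header(state) == (120, 80, 3)
```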

Object behaviors are invoked by the kernel in response to player input. Each object responds to a set of standard verbs that map directly onto the commands available to the player. Each behavior is simply a subroutine that executes the indicated action; to do this it may invoke the behaviors of other objects or send request messages to the backend. Besides the standard verb behaviors, objects may have additional behaviors which are invoked by messages that arrive asynchronously from the backend.
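The verb-to-behavior mapping can be sketched as a dispatch table. The `Door` class, verb names, and `dispatch` function here are hypothetical, chosen only to show the shape of the mechanism:

```python
# Hypothetical sketch of verb dispatch (all names assumed). Each object
# maps the standard player verbs onto behavior subroutines; the kernel
# looks up the verb and invokes the matching behavior.

class Door:
    def __init__(self):
        self.open = False
        self.verbs = {"open": self.do_open, "close": self.do_close}

    def do_open(self):
        self.open = True
        return "The door swings open."

    def do_close(self):
        self.open = False
        return "The door closes."

def dispatch(obj, verb):
    # The kernel routes a player command to the object's behavior code.
    behavior = obj.verbs.get(verb)
    return behavior() if behavior else "Nothing happens."

door = Door()
assert dispatch(door, "open") == "The door swings open."
assert door.open
```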

The backend also maintains an object-oriented representation of the world. As in the frontend, objects on the backend possess executable behaviors and in-memory state information. In addition, since the backend maintains a persistent global state for the entire Habitat world, the objects are also represented by database records that may be stored on disk when not "in use". Backend object behaviors are invoked by messages from the frontend. Each of these backend behaviors works in roughly the same way: a message is received from a player's frontend requesting some action; the action is taken and some state changes to the world result; the backend behavior sends a response message back to the frontend informing it of the results of its request and notification messages to the frontends of any other players who are in the same region, informing them of what has taken place.
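That request/response/notification cycle can be sketched in a few lines. All names and message formats below are assumptions made for illustration; the real backend's state changes and wire format are elided:

```python
# Hypothetical sketch of the backend's request cycle (all names assumed).
# A frontend request mutates world state, then the backend answers the
# requester and notifies everyone else in the same region.

class Backend:
    def __init__(self):
        self.regions = {}        # region name -> set of player ids
        self.outbox = []         # (player, message) pairs "sent" to frontends

    def handle_request(self, player, region, action):
        # 1. Take the action; persistent world-state update elided here.
        result = f"you {action}"
        # 2. Response to the requesting frontend.
        self.outbox.append((player, result))
        # 3. Notifications to the other frontends in the same region.
        for other in self.regions.get(region, set()) - {player}:
            self.outbox.append((other, f"{player} did {action}"))

backend = Backend()
backend.regions["square"] = {"alice", "bob"}
backend.handle_request("alice", "square", "wave")
assert ("alice", "you wave") in backend.outbox
assert ("bob", "alice did wave") in backend.outbox
```

Scoping notifications to the requester's region is what keeps the traffic per action bounded by region population rather than by the size of the whole world.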

The Lessons

In order to say as much as we can in the limited space available, we will describe what we think we learned via a series of principles or assertions surrounded by supporting reasoning and illustrative anecdotes. A more formal and thorough exposition will have to come later in some other forum where we might have the space to present a more comprehensive and detailed model.

We mentioned our primary principle above:

A multi-user environment is central to the idea of cyberspace.

It is our deep conviction that a definitive characteristic of a cyberspace system is that it represents a multi-user environment. This stems from the fact that what (in our opinion) people seek in such a system is richness, complexity and depth. Nobody knows how to produce an automaton that even approaches the complexity of a real human being, let alone a society. Our approach, then, is not even to attempt this, but instead to use the computational medium to augment the communications channels between real people.

If what we are constructing is a multi-user environment, it naturally follows that some sort of communications capability must be fundamental to our system. However, we must take into account an observation that is the second of our principles:

Communications bandwidth is a scarce resource.

This point was rammed home to us by one of Habitat's nastier externally imposed design constraints, namely that it provide a satisfactory experience to the player over a 300 baud serial telephone connection (one, moreover, routed through commercial packet-switching networks that impose an additional, uncontrollable latency of 100 to 5000 milliseconds on each packet transmitted).

Even in a more technically advanced network, however, bandwidth remains scarce in the sense that economists use the term: available carrying capacity is not unlimited. The law of supply and demand suggests that no matter how much capacity is available, you always want more. When communications technology advances to the point where we all have multi-gigabaud fiber optic connections into our homes, computational technology will have advanced to match. Our processors' expanding appetite for data will mean that the search for ever more sophisticated data compression techniques will still be a hot research area (though what we are compressing may at that point be high-resolution volumetric time-series or something even more esoteric) [6].

Computer scientists tend to be reductionists who like to organize systems in terms of primitive elements that can be easily manipulated within the context of a simple formal model. Typically, you adopt a small variety of very simple primitives which are then used in large numbers. For a graphics-oriented cyberspace system, the temptation is to build upon bit-mapped images or polygons or some other graphic primitive. These sorts of representations, however, are invitations to disaster. They arise from an inappropriate fixation on display technology, rather than on the underlying purpose of the system.

However, the most significant part of what we wish to be communicating are human behaviors. These, fortunately, can be represented quite compactly, provided we adopt a relatively abstract, high-level description that deals with behavioral concepts directly. This leads to our third principle:

An object-oriented data representation is essential.

Taken at its face value, this assertion is unlikely to be controversial, as object-oriented programming is currently the methodology of choice among the software engineering cognoscenti. However, what we mean here is not only that you should adopt an object-oriented approach, but that the basic objects from which you build the system should correspond more-or-less to the objects in the user's conceptual model of the virtual world, that is, people, places, and artifacts. You could, of course, use object-oriented programming techniques to build a system based on, say, polygons, but that would not help to cope with the fundamental problem.

The goal is to enable the communications between machines to take place primarily at the behavioral level (what people and things are doing) rather than at the presentation level (how the scene is changing). The description of a place in the virtual world should be in terms of what is there rather than what it looks like. Interactions between objects should be described by functional models rather than by physical ones. The computation necessary to translate between these higher-level representations and the lower-level representations required for direct user interaction is an essentially local function. At the local processor, display-rendering techniques may be arbitrarily elaborate and physical models arbitrarily sophisticated. The data channel capacities required for such computations, however, need not and should not be squeezed into the limited bandwidth available between the local processor and remote ones. Attempting to do so just leads to disasters such as NAPLPS [7,8].
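A back-of-the-envelope comparison makes the behavioral-versus-presentation gap concrete. The message format and numbers below are illustrative assumptions, not measurements of Habitat's actual protocol:

```python
# A rough illustration of why behavioral messages are compact (invented
# message format; the figures are illustrative, not Habitat's protocol).
# Describing "object 17 walks to object 42" takes tens of bytes; shipping
# the pixels of the resulting animation takes orders of magnitude more.

import json

behavioral = json.dumps({"obj": 17, "verb": "walk_to", "target": 42})
assert len(behavioral) < 50          # tens of bytes on the wire

# A single 320x200 frame at 4 bits per pixel, before any animation:
frame_bytes = 320 * 200 // 2
assert frame_bytes == 32000          # one raw frame already dwarfs the message
```

At 300 baud (roughly 30 bytes per second), the behavioral message arrives in a couple of seconds, while even one raw frame would take the better part of twenty minutes, which is the whole argument in miniature.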

Once we begin working at the conceptual rather than the presentation level, we are struck by the following observation:

The implementation platform is relatively unimportant.

The presentation level and the conceptual level cannot (and should not) be totally isolated from each other. However, defining a virtual environment in terms of the configuration and behavior of objects, rather than their presentation, enables us to span a vast range of computational and display capabilities among the participants in a system. This range extends both upward and downward. As an extreme example, a typical scenic object, such as a tree, can be represented by a handful of parameter values. At the lowest conceivable end of things might be an ancient Altair 8800 with a 300 baud ASCII dumb terminal, where the interface is reduced to fragments of text and the user sees the humble string so familiar to the players of text adventure games, "There is a tree here." At the high end, you might have a powerful processor that generates the image of the tree by growing a fractal model and rendering it in three dimensions at high resolution, the finest details ray-traced in real time, complete with branches waving in the breeze and the sound of wind in the leaves coming through your headphones in high-fidelity digital stereo. And these two users might be looking at the same tree in the same place in the same world and talking to each other as they do so. Both of these scenarios are implausible at the moment, the first because nobody would suffer with such a crude interface when better ones are so readily available, the second because the computational hardware does not yet exist. The point, however, is that this approach covers the ground between systems already obsolete and ones that are as yet gleams in their designers' eyes. Two consequences of this are significant. The first is that we can build effective cyberspace systems today. Habitat exists as ample proof of this principle. The second is that it is conceivable that with a modicum of cleverness and foresight you could start building a system with today's technology that could evolve smoothly as tomorrow's technology develops. The availability of pathways for growth is important in the real world, especially if cyberspace is to become a significant communications medium (as we obviously think it should).

Given that we see cyberspace as fundamentally a communications medium rather than simply a user interface model, and given the style of object-oriented approach that we advocate, another point becomes clear:

Data communications standards are vital.

However, our concerns about cyberspace data communications standards center less upon data transport protocols than upon the definition of the data being transported. The mechanisms required for reliably getting bits from point A to point B are not terribly interesting to us. This is not because these mechanisms are not essential (they obviously are) nor because they do not pose significant research and engineering challenges (they clearly do). It is because we are focused on the unique communications needs of an object-based cyberspace. We are concerned with the protocols for sending messages between objects, that is, for communicating behavior rather than presentation, and for communicating object definitions from one system to another.

Communicating object definitions seems to us to be an especially important problem, and one that we really didn't have an opportunity to address in Habitat. It will be necessary to address this problem if we are to have a dynamic system. The ability to add new classes of objects over time is crucial if the system is to be able to evolve.

While we are on the subject of communications standards, we would like to make some remarks about the ISO Reference Model of Open System Interconnection [9]. This multi-layered model has become a centerpiece of most discussions about data communications standards these days. Unfortunately, while the bottom 4 or 5 layers of this model provide a more or less sound framework for considering data transport issues, we feel that the model's Presentation and Application layers are not so helpful when considering cyberspace data communications.

We have two main quarrels with the ISO model: first, it partitions the general data communications problem in a way that is a poor match for the needs of a cyberspace system; second, and more importantly, we think it is an active source of confusion because it focuses the attention of system designers on the wrong set of issues and thus leads them to spend their time solving the wrong set of problems. We know because this happened to us. "Presentation" and "Application" are simply the wrong abstractions for the higher levels of a cyberspace communications protocol. A "Presentation" protocol presumes characteristics of the display are embedded in the protocol. The discussions above should give some indication why we feel such a presumption is both unnecessary and unwise. An "Application" protocol presumes a degree of foreknowledge of the message environment that is incompatible with the sort of dynamically evolving object system we envision.

A better model would be to substitute a different pair of top layers: a Message layer, which defines the means by which objects can address one another and standard methods of encapsulating structured data and encoding low-level data types (e.g., numbers); and a Definition layer built on top of the Message layer, which defines a standard representation for object definitions so that object classes can migrate from machine to machine. One might argue that these are simply Presentation and Application with different labels, but we don't think the differences are so easily reconciled. In particular, we think the ISO model has, however unintentionally, systematically deflected workers in the field from considering many of the issues that concern us.
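The proposed pair of layers can be sketched as one framing function stacked on another. The encodings, field names, and addressing scheme here are invented for illustration; the text proposes the layering, not this wire format:

```python
# Hypothetical sketch of the two proposed top layers (framing invented
# here for illustration). The Message layer addresses objects and encodes
# structured data; the Definition layer rides on top of it to carry whole
# object class definitions between machines.

import json

def message(sender, receiver, payload):
    # Message layer: standard addressing plus encoded structured data.
    return json.dumps({"from": sender, "to": receiver, "data": payload})

def definition(class_name, resources, behaviors):
    # Definition layer: an object class expressed as data, so that new
    # classes can migrate from machine to machine at run time.
    return message("registry", "*", {
        "kind": "class-definition",
        "class": class_name,
        "resources": resources,       # e.g. animation cels, audio
        "behaviors": behaviors,       # verb -> behavior code reference
    })

msg = definition("Teleporter", ["cel:booth"], {"activate": "code:tp1"})
decoded = json.loads(msg)
assert decoded["data"]["class"] == "Teleporter"
```

The point of the layering is that a class definition is just another message: a receiver that understands the Message layer can accept object classes it has never seen before, which is exactly the dynamic evolution the Application layer's fixed foreknowledge rules out.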

World Building

There were two sorts of implementation challenges that Habitat posed. The first was the problem of creating a working piece of technology -- developing the animation engine, the object-oriented virtual memory, the message-passing pseudo operating system, and squeezing them all into the ludicrous Commodore 64 (the backend system also posed interesting technical problems, but its constraints were not as vicious). The second challenge was the creation and management of the Habitat world itself. It is the experiences from the latter exercise that we think will be most relevant to future cyberspace designers.

We were initially our own worst enemies in this undertaking, victims of a way of thinking to which we engineers are dangerously susceptible. This way of thinking is characterized by the conceit that all things may be planned in advance and then directly implemented according to the plan's detailed specification. For persons schooled in the design and construction of systems based on simple, well-defined and well-understood foundation principles, this is a natural attitude to have. Moreover, it is entirely appropriate when undertaking most engineering projects. It is a frame of mind that is an essential part of a good engineer's conceptual tool kit. Alas, in keeping with Maslow's assertion that, "to the person who has only a hammer, all the world looks like a nail", it is a tool that is easy to carry beyond its range of applicability. This happens when a system exceeds the threshold of complexity above which the human mind loses its ability to maintain a complete and coherent model.

One generally hears about systems crossing the complexity threshold when they become very large. For example, the Space Shuttle and the B-2 bomber are both systems above this threshold, necessitating extraordinarily involved, cumbersome and time-consuming procedures to keep the design under control -- procedures that are at once vastly expensive and only partially successful. To a degree, the complexity problem can be solved by throwing money at it. However, such capital intensive management techniques are a luxury not available to most projects. Furthermore, although these dubious "solutions" to the complexity problem are out of reach of most projects, alas the complexity threshold itself is not. Smaller systems can suffer from the same sorts of problems. It is possible to push much smaller and less elaborate systems over the complexity threshold simply by introducing chaotic elements that are outside the designers' sphere of control or understanding. The most significant such chaotic elements are autonomous computational agents (e.g., other computers). This is why, for example, debugging even very simple communications protocols often proves surprisingly difficult. Furthermore, a special circle of living Hell awaits the implementors of systems involving that most important category of autonomous computational agents of all, groups of interacting human beings. This leads directly to our next (and possibly most controversial) assertion:

Detailed central planning is impossible; don't even try.

The constructivist prejudice that leads engineers into the kinds of problems just mentioned has received more study from economists and sociologists [10-15] than from researchers in the software engineering community. Game and simulation designers are experienced in creating virtual worlds for individuals and small groups. However, they have had no reason to learn to deal with large populations of simultaneous users. Since each user or group is unrelated to the others, the same world can be used over and over again. If you are playing an adventure game, the fact that thousands of other people elsewhere in the (real) world are playing the same game has no effect on your experience. It is reasonable for the creator of such a world to spend tens or even hundreds of hours crafting the environment for each hour that a user will spend interacting with it, since that user's hour of experience will be duplicated tens of thousands of times by tens of thousands of other individual users.

Builders of online services and communications networks are experienced in dealing with large user populations, but they do not, in general, create elaborate environments. Furthermore, in a system designed to deliver information or communications services, large numbers of users are simply a load problem rather than a complexity problem. All the users get the same information or services; the comments in the previous paragraph regarding duplication of experience apply here as well. It is not necessary to match the size and complexity of the information space to the size of the user population. While it may turn out that the quantity of information available on a service is a function of the size of the user population, this information can generally be organized into a systematic structure that can still be maintained by a few people. The bulk, wherein the complexity lies, is the product of the users themselves, rather than the system designers -- the operators of the system do not have to create all this material. (This observation is the first clue to the solution to our problem.)

Our original specification for Habitat called for us to create a world capable of supporting a population of 20,000 Avatars, with expansion plans for up to 50,000. By any reckoning this is a large undertaking and complexity problems would certainly be expected. However, in practice we exceeded the complexity threshold very early in development. By the time the population of our online community had reached around 50 we were in over our heads (and these 50 were "insiders" who were prepared to be tolerant of holes and rough edges).

Moreover, a virtual world such as Habitat needs to scale with its population. For 20,000 Avatars we needed 20,000 "houses", organized into towns and cities with associated traffic arteries and shopping and recreational areas. We needed wilderness areas between the towns so that everyone would not be jammed together into the same place. Most of all, we needed things for 20,000 people to do. They needed interesting places to visit -- and since they can't all be in the same place at the same time, they needed a lot of interesting places to visit -- and things to do in those places. Each of those houses, towns, roads, shops, forests, theaters, arenas, and other places is a distinct entity that someone needs to design and create. We, attempting to play the role of omniscient central planners, were swamped.

Automated tools may be created to aid the generation of areas that naturally possess a high degree of regularity and structure, such as apartment buildings and road networks. We created a number of such tools, whose spiritual descendants will no doubt be found in the standard bag of tricks of future cyberspace architects. However, the very properties which make some parts of the world amenable to such techniques also make those same parts of the world among the least important. It is really not a problem if every apartment building looks pretty much like every other. It is a big problem if every enchanted forest is the same. Places whose value lies in their uniqueness, or at least in their differentiation from the places around them, need to be crafted by hand. This is an incredibly labor intensive and time consuming process. Furthermore, even very imaginative people are limited in the range of variation that they can produce, especially if they are working in a virgin environment uninfluenced by the works and reactions of other designers.

Running The World

The world design problem might still be tractable, however, if all players had the same goals, interests, motivations and types of behavior. Real people, however, are all different. For the designer of an ordinary game or simulation, human diversity is not a major problem, since he or she gets to establish the goals and motivations on the participants' behalf, and to specify the activities available to them in order to channel events in the preferred direction. Habitat, however, was deliberately open ended and pluralistic. The idea behind our world was precisely that it did not come with a fixed set of objectives for its inhabitants, but rather provided a broad palette of possible activities from which the players could choose, driven by their own internal inclinations. It was our intent to provide a variety of possible experiences, ranging from events with established rules and goals (a treasure hunt, for example) to activities propelled by the players' personal motivations (starting a business, running the newspaper) to completely free-form, purely existential activities (hanging out with friends and conversing). Most activities, however, involved some degree of pre-planning and setup on our part -- we were to be like the cruise director on an ocean voyage, but we were still thinking like game designers.

The first goal-directed event planned for Habitat was a rather involved treasure hunt called the "D'nalsi Island Adventure". It took us hours to design, weeks to build (including a 100-region island), and days to coordinate the actors involved. It was designed much like the puzzles in an adventure game. We thought it would occupy our players for days. In fact, the puzzle was solved in about 8 hours by a person who had figured out the critical clue in the first 15 minutes. Many of the players hadn't even had a chance to get into the game. The result was that one person had had a wonderful experience, dozens of others were left bewildered, and a huge investment in design and setup time had been consumed in an eyeblink. We expected that there would be a wide range of "adventuring" skills in the Habitat audience. What wasn't so obvious until afterward was that this meant that most people didn't have a very good time, if for no other reason than that they never really got to participate. It would clearly be foolish and impractical for us to do things like this on a regular basis.

Again and again we found that activities based on often unconscious assumptions about player behavior had completely unexpected outcomes (when they were not simply outright failures). It was clear that we were not in control. The more people we involved in something, the less in control we were. We could influence things, we could set up interesting situations, we could provide opportunities for things to happen, but we could not dictate the outcome. Social engineering is, at best, an inexact science (or, as some wag once said, "in the most carefully constructed experiment under the most carefully controlled conditions, the organism will do whatever it damn well pleases").

Propelled by these experiences, we shifted into a style of operations in which we let the players themselves drive the direction of the design. This proved far more effective. Instead of trying to push the community in the direction we thought it should go, an exercise rather like herding mice, we tried to observe what people were doing and aid them in it. We became facilitators as much as we were designers and implementors. This often meant adding new features and new regions to the system at a frantic pace, but almost all of what we added was used and appreciated, since it was well matched to people's needs and desires. We, as the experts on how the system worked, could often suggest new activities for people to try or ways of doing things that people might not have thought of. In this way we were able to have considerable influence on the system's development in spite of the fact that we didn't really hold the steering wheel -- more influence, in fact, than we had had when we were operating under the illusion that we controlled everything.

Indeed, the challenges posed by large systems are prompting some researchers to question the centralized, planning-dominated attitude that we have criticized here, and to propose alternative approaches based on evolutionary and market principles [16-18]. These principles appear applicable to complex systems of all types, not merely those involving interacting human beings.

The Great Debate

Among the objects we made available to Avatars in Habitat were guns and various other sorts of weapons. We included these because we felt that players should be able to materially affect each other in ways that went beyond simply talking, ways that required real moral choices to be made by the participants. We recognized the age-old storyteller's dictum that conflict is the essence of drama. Death in Habitat was, of course, not like death in the real world! When an Avatar is killed, he or she is teleported back home, head in hands (literally), pockets empty, and any object in hand at the time dropped on the ground at the scene of the crime. Any possessions carried at the time are lost. It was more like a setback in a game of "Chutes and Ladders" than real mortality. Nevertheless, the death metaphor had a profound effect on people's perceptions. This potential for murder, assault and other mayhem in Habitat was, to put it mildly, controversial. The controversy was further fueled by the potential for lesser crimes. For instance, one Avatar could steal something from another Avatar simply by snatching the object out of its owner's hands and running off with it.

We had imposed very few rules on the world at the start. There was much debate among the players as to the form that Habitat society should take. At the core of much of the debate was an unresolved philosophical question: is an Avatar an extension of a human being (thus entitled to be treated as you would treat a real person) or a Pac-Man-like critter destined to die a thousand deaths or something else entirely? Is Habitat murder a crime? Should all weapons be banned? Or is it all "just a game"? To make a point, one of the players took to randomly shooting people as they roamed around. The debate was sufficiently vigorous that we took a systematic poll of the players. The result was ambiguous: 50% said that Habitat murder was a crime and shouldn't be a part of the world, while the other 50% said it was an important part of the fun.

We compromised by changing the system to allow thievery and gunplay only outside the city limits. The wilderness would be wild and dangerous while civilization would be orderly and safe. This did not resolve the debate, however. One of the outstanding proponents of the anti-violence point of view was motivated to open the first Habitat church, the Order of the Holy Walnut (in real life he was a Greek Orthodox priest). His canons forbade his disciples to carry weapons, steal, or participate in violence of any kind. His church became quite popular and he became a very highly respected member of the Habitat community.

Furthermore, while we had made direct theft impossible, one could still engage in indirect theft by stealing things set on the ground momentarily or otherwise left unattended. And the violence still possible in the outlands continued to bother some players. Many people thought that such crimes ought to be prevented or at least punished somehow, but they had no idea how to do so. They were used to a world in which law and justice were always things provided by somebody else. Somebody eventually made the suggestion that there ought to be a Sheriff. We quickly figured out how to create a voting mechanism and rounded up some volunteers to hold an election. A public debate in the town meeting hall was heavily attended, with the three Avatars who had chosen to run making statements and fielding questions. The election was held, and the town of Populopolis acquired a Sheriff.

For weeks the Sheriff was nothing but a figurehead, though he was a respected figure and commanded a certain amount of moral authority. We were stumped about what powers to give him. Should he have the right to shoot anyone anywhere? Give him a more powerful gun? A magic wand to zap people off to jail? What about courts? Laws? Lawyers? Again we surveyed the players, eventually settling on a set of questions that could be answered via a referendum. Unfortunately, we were unable to act on the results before the pilot operations ended and the system was shut down. It was clear, however, that there were two basic camps: anarchy and government. This is an issue that will need to be addressed by future cyberspace architects. However, our view is that a virtual world need not be set up with a "default" government, but can instead evolve one as needed.

A Warning

Given the above exhortation that control should be released to the users, we need to inject a note of caution and present our next assertion:

You can't trust anyone.

This may seem like a contradiction of much of the preceding, but it really is not. Designers and operators of a cyberspace system must inhabit two levels of virtual world at once. The first we call the "infrastructure level", which is the implementation, where the laws that govern "reality" have their genesis. The second we call the "percipient level", which is what the users see and experience. It is important that there not be "leakage" between these two levels. The first level defines the physics of the world. If its integrity is breached, the consequences can range from aesthetic unpleasantness (the audience catches a glimpse of the scaffolding behind the false front) to psychological disruption (somebody does something "impossible", thereby violating users' expectations and damaging their fantasy) to catastrophic failure (somebody crashes the system). When we exhort you to give control to the users, we mean control at the percipient level. When we say that you can't trust anyone, we mean that you can't trust them with access to the infrastructure level. Some stories from Habitat will illustrate this.

When designing a piece of software, you generally assume that it is the sole intermediary between the user and the underlying data being manipulated (possibly multiple applications will work with the same data, but the principle remains the same). In general, the user need not be aware of how data are encoded and structured inside the application. Indeed, the very purpose of a good application is to shield the user from the ugly technical details. It is conceivable that a technically astute person who is willing to invest the time and effort could decipher the internal structure of things, but this would be an unusual thing to do as there is rarely much advantage to be gained. The purpose of the application itself is, after all, to make access to and manipulation of the data easier than digging around at the level of bits and bytes. There are exceptions to this, however. For example, most game programs deliberately impose obstacles on their players in order for play to be challenging. By tinkering around with the insides of such a program -- dumping the data files and studying them, disassembling the program itself and possibly modifying it -- it may be possible to "cheat". However, this sort of cheating has the flavor of cheating at solitaire: the consequences adhere to the cheater alone. There is a difference, in that disassembling a game program is a puzzle-solving exercise in its own right, whereas cheating at solitaire is pointless, but the satisfactions to be gained from it, if any, are entirely personal.

If, however, a computer game involves multiple players, delving into the program's internals can enable one to truly cheat, in the sense that one gains an unfair advantage over the other players of which they may be unaware. Habitat is such a multi-player game. When we were designing the software, our "prime directive" was, "The backend shall not assume the validity of anything a player computer tells it." This is because we needed to protect ourselves against the possibility that a clever user had hacked around with his copy of the frontend program to add "custom features". For example, we could not implement any of the sort of "skill and action" elements found in traditional video games wherein dexterity with the joystick determines the outcome of, say, armed combat, because you couldn't guard against someone modifying their copy of the program to tell the backend that they had "hit", whether they actually had or not. Indeed, our partners at QuantumLink warned us of this very eventuality before we even started -- they already had users who did this sort of thing with their regular system. Would anyone actually go to the trouble of disassembling and studying 100K or so of incredibly tight and bizarrely threaded 6502 machine code just to tinker? As it turns out, the answer is yes. People have. We were not 100% rigorous in following our own rule. It turned out that there were a few features whose implementation was greatly eased by breaking the rule in situations where, in our judgment, the consequences would not be material if people "cheated" by hacking their own systems. Darned if people didn't hack their systems to cheat in exactly these ways.
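The "prime directive" amounts to what would now be called a server-authoritative design: the backend accepts only statements of intent from the client, and computes every outcome itself from its own copy of the world state. A minimal sketch of the idea in Python follows; the message shape, combat rule, and names are entirely hypothetical illustrations, not Habitat's actual protocol:

```python
import random

# Server-side state: the backend, not the client, is the source of truth.
positions = {"alice": (3, 4), "bob": (3, 5)}
health = {"alice": 10, "bob": 10}

def adjacent(a, b):
    """True if two avatars are within one tile of each other."""
    (ax, ay), (bx, by) = positions[a], positions[b]
    return abs(ax - bx) <= 1 and abs(ay - by) <= 1

def handle_attack(attacker, target, rng=random):
    """Process an 'attack' *intent* received from a client.

    The client may only say "I attack bob"; whether the blow lands is
    decided here, from server-side state and a server-side die roll.
    A hacked frontend claiming "I hit!" changes nothing.
    """
    if target not in positions or not adjacent(attacker, target):
        return "miss"                 # reject impossible claims outright
    if rng.random() < 0.5:            # the server rolls the dice
        health[target] -= 1
        return "hit"
    return "miss"
```

The price of this discipline, as the authors note, is that twitch-style "skill and action" mechanics cannot safely be offered, since the dexterity being measured lives on the untrusted side of the wire.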

Care must be taken in the design of the world as well. One incident that occurred during our pilot test involved a small group of players exploiting a bug in our world database which they interpreted as a feature. First, some background. Avatars are hatched with 2000 Tokens in their bank account, and each day that they log in they receive another 100T. Avatars may acquire additional funds by engaging in business, winning contests, finding buried treasure, and so on. They can spend their Tokens on, among other things, various items that are for sale in vending machines called Vendroids. There are also Pawn Machines, which will buy objects back (at a discount, of course).

In order to make this automated economy a little more interesting, each Vendroid had its own prices for the items in it. This was so that we could have local price variation (i.e., a widget would cost a little less if you bought it at Jack's Place instead of The Emporium). It turned out that in two Vendroids across town from each other were two items for sale whose prices we had inadvertently set lower than what a Pawn Machine would buy them back for: Dolls (for sale at 75T, hock for 100T) and Crystal Balls (for sale at 18,000T, hock at 30,000T!). Naturally, a couple of people discovered this. One night they took all their money, walked to the Doll Vendroid, bought as many Dolls as they could, then took them across town and pawned them. By shuttling back and forth between the Doll Vendroid and the Pawn Shop for hours, they amassed sufficient funds to buy a Crystal Ball, whereupon they continued the process with Crystal Balls and a couple orders of magnitude higher cash flow. The final result was at least three Avatars with hundreds of thousands of Tokens each. We only discovered this the next morning when our daily database status report said that the money supply had quintupled overnight.
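The arithmetic of the exploit is easy to reconstruct. Each Doll round trip returns 100T on a 75T outlay, so an Avatar's capital grows by roughly a third per trip; starting from the 2000T hatching stake, the 18,000T Crystal Ball is within reach after only a handful of trips. A small simulation, using the prices quoted above (the loop itself is our reconstruction, not the players' actual procedure):

```python
DOLL_BUY, DOLL_PAWN = 75, 100          # Vendroid price vs. Pawn Machine price
BALL_BUY, BALL_PAWN = 18_000, 30_000
STARTING_TOKENS = 2_000                # every Avatar hatches with 2000T

def trips_to_afford_crystal_ball(tokens=STARTING_TOKENS):
    """Count Doll round trips until a Crystal Ball is affordable."""
    trips = 0
    while tokens < BALL_BUY:
        dolls = tokens // DOLL_BUY                  # buy all the Dolls funds allow
        tokens += dolls * (DOLL_PAWN - DOLL_BUY)    # pawn them: +25T apiece
        trips += 1
    return trips, tokens
```

Eight round trips suffice, and from there each Crystal Ball round trip nets 12,000T, which is how a few Avatars could accumulate hundreds of thousands of Tokens in a single night.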

We assumed that the precipitous increase in "T1" was due to some sort of bug in the software. We were puzzled that no bug report had been submitted. By poking around a bit we discovered that a few people had suddenly acquired enormous bank balances. We sent Habitat mail to the two richest, inquiring as to where they had gotten all that money overnight. Their reply was, "We got it fair and square! And we're not going to tell you how!" After much abject pleading on our part they eventually did tell us, and we fixed the erroneous pricing. Fortunately, the whole scam turned out well, as the nouveau riche Avatars used their bulging bankrolls to underwrite a series of treasure hunt games which they conducted on their own initiative, much to the enjoyment of many other players on the system.

Keeping "Reality" Consistent

The urge to breach the boundary between the infrastructure level and the percipient level is not confined to the players. The system operators are also subject to this temptation, though their motivation is expediency in accomplishing their legitimate purposes rather than the gaining of illegitimate advantage. However, to the degree to which it is possible, we vigorously endorse the following principle:

Work within the system.

Wherever possible, things that can be done within the framework of the percipient level should be. The result will be smoother operation and greater harmony among the user community. This admonition applies to both the technical and the sociological aspects of the system.

For example, with the players in control, the Habitat world would have grown much larger and more diverse than it did had we ourselves not been a technical bottleneck. All new region generation and feature implementation had to go through us, since there was no means for players to create new parts of the world on their own. Region creation was an esoteric technical specialty, requiring a plethora of obscure tools and a good working knowledge of the treacherous minefield of limitations imposed by the Commodore 64. It also required a lot of behind-the-scenes activity that would probably spoil the illusion for many. One of the goals of a next-generation Habitat-like system ought to be to permit far greater creative involvement by the participants without requiring them to ascend to full-fledged guru-hood to do so.

A further example of working within the system, this time in a social sense, is illustrated by the following experience. One of the more popular events in Habitat took place late in the test, the brainchild of one of the more active players who had recently become a QuantumLink employee. It was called the "Dungeon of Death".

For weeks, ads appeared in Habitat's newspaper, The Rant, announcing that that Duo of Dread, DEATH and THE SHADOW, were challenging all comers to enter their lair. Soon, on the outskirts of town, the entrance to a dungeon appeared. Out front was a sign reading, "Danger! Enter at your own risk!" Two system operators were logged in as DEATH and THE SHADOW, armed with specially concocted guns that could kill in one shot, rather than the usual 12. These two characters roamed the dungeon blasting away at anyone they encountered. They were also equipped with special magic wands that cured any damage done to them by other Avatars, so that they wouldn't themselves be killed. To make things worse, the place was littered with dead ends, pathological connections between regions, and various other nasty and usually fatal features. It was clear that any explorer had better be prepared to "die" several times before mastering the dungeon. The rewards were pretty good: 1000 Tokens minimum and access to a special Vendroid that sold magic teleportation wands. Furthermore, given clear notice, players took the precaution of emptying their pockets before entering, so that the actual cost of getting "killed" was minimal.

One evening, one of us was given the chance to play the role of DEATH. When we logged in, we found him in one of the dead ends with four other Avatars who were trapped there. We started shooting, as did they. However, the last operator to run DEATH had not bothered to use his special wand to heal any accumulated damage, so the character of DEATH was suddenly and unexpectedly "killed" in the encounter. As we mentioned earlier, when an Avatar is killed, any object in his hands is dropped on the ground. In this case, said object was the special kill-in-one-shot gun, which was immediately picked up by one of the regular players who then made off with it. This gun was not something that regular players were supposed to have. What should we do?

It turned out that this was not the first time this had happened. During the previous night's mayhem the special gun was similarly absconded with. In that case, the person playing DEATH was one of the regular system operators, who, used to operating the regular Q-Link service, simply ordered the player to give the gun back. The player considered that he had obtained the weapon as part of the normal course of the game and balked at this, whereupon the operator threatened to cancel the player's account and kick him off the system if he did not comply. The player gave the gun back, but was quite upset about the whole affair, as were many of his friends and associates on the system. Their world model had been painfully violated.

When it happened to us, we played the whole incident within the role of DEATH. We sent a message to the Avatar who had the gun, threatening to come and kill her if she didn't give it back. She replied that all she had to do was stay in town and DEATH couldn't touch her (which was true, if we stayed within the system). OK, we figured, she's smart. We negotiated a deal whereby DEATH would ransom the gun for 10,000 Tokens. An elaborate arrangement was made to meet in the center of town to make the exchange, with a neutral third Avatar acting as an intermediary to ensure that neither party cheated. Of course, word got around and by the time of the exchange there were numerous spectators. We played the role of DEATH to the hilt, with lots of hokey melodramatic shtick. The event was a sensation. It was written up in the newspaper the next morning and was the talk of the town for days. The Avatar involved was left with a wonderful story about having cheated DEATH, we got the gun back, and everybody went away happy.

These two very different responses to an ordinary operational problem illustrate our point. Operating within the participants' world model produced a very satisfactory result. On the other hand, what seemed like the expedient course, which involved violating this model, provoked upset and dismay. Working within the system was clearly the preferred course in this case.

Current Status

As of this writing, the North American incarnation of Lucasfilm's Habitat, QuantumLink's "Club Caribe", has been operating for almost two years. It uses our original Commodore 64 frontend and a somewhat stripped-down version of our original Stratus backend software. Club Caribe now sustains a population of some 15,000 participants.

A technically more advanced version, called Fujitsu Habitat, has recently started pilot operations in Japan, available on NIFTY-Serve. The initial frontend for this version is the new Fujitsu FM Towns personal computer, though ports to several other popular Japanese machines are anticipated. This version of the system benefits from the additional computational power and graphics capabilities of a newer platform, as well as the Towns' built-in CD-ROM for object imagery and sounds. However, the virtuality of the system is essentially unchanged, and Fujitsu has not made significant alterations to the user interface or to any of the underlying concepts.

Future Directions

There are several directions in which this work can be extended. Most obvious is to implement the system on more advanced hardware, enabling a more sophisticated display. A number of extensions to the user interface also suggest themselves. However, the line of development most interesting to us is to expand on the idea of making the development and expansion of the world itself part of the users' sphere of control. There are two major research areas in this. Unfortunately, we can only touch on them briefly here.

The first area to investigate involves the elimination of the centralized backend. The backend is a communications and processing bottleneck that will not withstand growth above too large a size. While we can support tens of thousands of users with this model, it is not really feasible to support millions. Making the system fully distributed, however, requires solving a number of difficult problems. The most significant of these is the prevention of cheating. Obviously, the owner of the network node that implements some part of the world has an incentive to tilt things in his favor there. We think that this problem can be addressed by secure operating system technologies based on public-key cryptographic techniques [19, 20].
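The cheating problem that the paper points at with [19, 20] is essentially one of verifiable provenance: when part of the world lives on someone else's node, other nodes must be able to check that a state update was actually authorized, not merely asserted. The shape of that check can be sketched with Python's standard library, using an HMAC as a simplified stand-in for the RSA-style public-key signature the paper envisions (a real system would use asymmetric keys, so that verifiers need not hold the signing secret; the message format here is purely illustrative):

```python
import hashlib
import hmac
import json

def sign_update(secret: bytes, update: dict) -> str:
    """Attach an authentication tag to a world-state update.

    Stand-in for a digital signature: in the public-key scheme of [19],
    the signer would use a private key and anyone could verify with the
    matching public key.
    """
    payload = json.dumps(update, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_update(secret: bytes, update: dict, tag: str) -> bool:
    """Reject any update whose tag does not match its contents."""
    return hmac.compare_digest(sign_update(secret, update), tag)
```

A node that quietly rewrote an Avatar's bank balance in a relayed update would produce a tag mismatch, and its neighbors would discard the forgery rather than tilt the world in its favor.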

The second fertile area of investigation involves user configuration of the world itself. This requires finding ways to represent the design and creation of regions and objects as part of the underlying fantasy. Doing this will require changes to our conception of the world. In particular, we don't think it will be possible to conceal all of the underpinnings from those who work with them. However, all we really need to do is find abstractions for those underpinnings that fit into the fantasy itself. Though challenging, this is, in our opinion, eminently feasible.


Conclusions

We feel that the defining characteristic of cyberspace is the shared virtual environment, not the display technology used to transport users into that environment. Such a cyberspace is feasible today, if you can live without head-mounted displays and other expensive graphics hardware. Habitat serves as an existence proof of this contention.

It seems clear to us that an object-oriented world model is a key ingredient in any cyberspace implementation. We feel we have gained some insight into the data representation and communications needs of such a system. While we think that it may be premature to start establishing detailed technical standards for these things, it is time to begin the discussions that will lead to such standards in the future.

Finally, we have come to believe that the most significant challenge for cyberspace developers is to come to grips with the problems of world creation and management. While we have only made the first inroads into these problems, a few things have become clear. The most important of these is that managing a cyberspace world is not like managing the world inside a single-user application or even a conventional online service. Instead, it is more like governing an actual nation. Cyberspace architects will benefit from study of the principles of sociology and economics as much as from the principles of computer science. We advocate an agoric, evolutionary approach to world building rather than a centralized, socialistic one.

We would like to conclude with a final admonition, one that we hope will not be seen as overly contentious:

Get real.

In a discussion of cyberspace on Usenet, one worker in the field dismissed Club Caribe (Habitat's current incarnation) as uninteresting, with a comment to the effect that most of the activity consisted of inane and trivial conversation. Indeed, the observation was largely correct. However, we hope some of the anecdotes recounted above will give some indication that more is going on than those inane and trivial conversations might indicate. Further, to dismiss the system on this basis is to dismiss the users themselves. They are paying money for this service. They don't view what they do as inane and trivial, or they wouldn't do it. To insist that it is presumes that one knows better than they what they should be doing. Such presumption is another manifestation of the omniscient central planner who dictates all that happens, a role that this entire article is trying to deflect you from seeking. In a real system that is going to be used by real people, it is a mistake to assume that the users will all undertake the sorts of noble and sublime activities which you created the system to enable. Most of them will not. Cyberspace may indeed change humanity, but only if it begins with humanity as it really is.


[1] Vinge, Vernor (1981), "True Names", Binary Star #5, Dell Publishing Company, New York.

[2] Gibson, William (1984), Neuromancer, Ace Books, New York.

[3] Sterling, Bruce, ed. (1986), Mirrorshades: The Cyberpunk Anthology, Arbor House, New York.

[4] Sussman, Gerald Jay, and Abelson, Harold (1985), Structure and Interpretation of Computer Programs, MIT Press, Cambridge.

[5] Goldberg, Adele, and Robson, David (1983), Smalltalk-80: The Language and Its Implementation, Addison-Wesley, Reading, Mass.

[6] Drexler, K. Eric (1986), Engines of Creation, Anchor Press/Doubleday, Garden City, New York.

[7] American National Standards Institute (December 1983), Videotex/Teletext Presentation Level Protocol Syntax, North American PLPS, ANSI.

[8] Alber, Antone F. (1985), Videotex/Teletext: Principles and Practices, McGraw-Hill, New York.

[9] International Standards Organization (June 1986), Information Processing Systems -- Open System Interconnection -- Transport Service Definition, International Standard number 8072, ISO, Switzerland.

[10] Hayek, Friedrich A. (1978), New Studies in Philosophy, Politics, Economics, and the History of Ideas, University of Chicago Press, Chicago.

[11] Hayek, Friedrich A. (1973), Law, Legislation and Liberty, Volume I: Rules and Order, University of Chicago Press, Chicago.

[12] Hayek, Friedrich A. (1989), The Fatal Conceit, University of Chicago Press, Chicago.

[13] Popper, Karl R. (1972), Objective Knowledge: An Evolutionary Approach, Oxford University Press, Oxford.

[14] Popper, Karl R. (1962), The Open Society and Its Enemies (fifth edition), Princeton University Press, Princeton, New Jersey.

[15] Sowell, Thomas (1987), A Conflict of Visions, William Morrow, New York.

[16] Miller, Mark S., and Drexler, K. Eric (1988), "Comparative Ecology: A Computational Perspective", in Huberman, B. A., ed., The Ecology of Computation, Elsevier Science Publishers, Amsterdam.

[17] Miller, Mark S., and Drexler, K. Eric (1988), "Markets and Computation: Agoric Open Systems", in Huberman, B. A., ed., The Ecology of Computation, Elsevier Science Publishers, Amsterdam.

[18] Drexler, K. Eric, and Miller, Mark S. (1988), "Incentive Engineering for Computational Resource Management", in Huberman, B. A., ed., The Ecology of Computation, Elsevier Science Publishers, Amsterdam.

[19] Rivest, R., Shamir, A., and Adleman, L. (February 1978), "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", in Communications of the ACM, Vol. 21, No. 2.

[20] Miller, Mark S., Bobrow, Daniel G., Tribble, Eric Dean, and Levy, David Jacob (1987), "Logical Secrets", in Shapiro, Ehud, ed., Concurrent Prolog: Collected Papers, MIT Press, Cambridge.

Apr 19, 2010

Defining the Metaverse - Revisited

"It's a construct of the mind, brought to fruition by our dreams and desires. A collaborative hallucination."

- William Burns

In 2006, I contributed to the Solipsis Decentralized Metaverse project on an intellectual basis, offering definitions and explanations of how such a system would fulfill a greater definition of a true Metaverse. In 2007, Solipsis gave a presentation that included those very definitions and explanations, and in 2008 they gave an updated presentation at IEEE, even going so far as to quote me directly for a full page alongside the likes of Philip Rosedale and others in the industry.

The ANR-RIAM Solipsis presentation given in 2007 can be found here:

The 2008 IEEE presentation version can be found here:
http://www.andromedaunderground.com/darian/Solipsis - A Decentralized Architecture for Virtual Environments.pdf

In the latter, pages 5 and 30 are of interest for this post. Page 5 gives the definitions of (Meta)Data, MetaWorld, MetaGalaxy, and finally Metaverse. These definitions, and more importantly the line of succession, are from the following article written in 2006 (and reposted here with some updates). Page 30 of the Solipsis presentation is a quotation from me concerning the actual nature of a full Metaverse. The reason I make a big deal of this today in 2010 is that, years later, companies like BlueMars and Linden Lab are still ignoring the original points from the Solipsis research (and points I've made since 1999, and that have been made since 1990 by Morningstar and Farmer).

The original article can be found here:


An electronic 3D representation of a real-world or fictional environment, populated by real people and by programs (known as bots or daemons). Within such an environment it is possible not only to interact with the scenery as you would in real life, but also to interact with other system users in 3D real time.

The Metaverse, as defined in the book Snow Crash (Neal Stephenson), is representative of a highly evolved Internet system in which the standard 2D GUI is all but replaced by a lush three-dimensional interactive system with approximately sixty million simultaneous users (referred to as being "in world").

What Classifies a Metaverse?

A common misconception concerning the fictional Metaverse is that it consisted of only a single world. It is my personal observation that the world known as "The Street" was in fact just one of many worlds within the Metaverse, though the most popular, as it was the default starting world of the Metaverse in the book's incarnation. There was obviously other literature concerning the fictional Metaverse structure and its purpose, starting with the father of the cyberpunk genre himself, William Gibson, whose short story Johnny Mnemonic (1981), popularized as a movie in 1995, takes place within the same timeline and space as a larger set of stories.

The idea here is that William Gibson's conception of the Metaverse differed greatly from Neal Stephenson's, and while the ideas of cyberpunk varied across authors, there was still an underlying Metaverse structure common to them; each author's vision simply emphasized different aspects of it. As a result, I come to the understanding that there wasn't exactly a single, giant virtual planet as in Snow Crash, but a wide decentralized system with centralized gateway servers to handle it. In this light, "The Street" from Snow Crash really represents a single virtual world among a decentralized Metaverse; something like The Street would have been managed as a gateway to the Metaverse, and was thus the most popular.

Since these fictional systems arose within the same culture, I would say it is safe, in a real-world environment, to incorporate all of these aspects into a total Metaverse concept.

Another argument in favor of this is that the very meaning of the word Metaverse demands more than a single-world approach, as we see with the terminology of (Meta)Data, MetaWorld, MetaGalaxy, and finally Metaverse, which denotes a MetaData Universe. Indeed, the term Metaverse derives from the word MetaData, and the progression MetaData, MetaWorld, MetaGalaxy, Metaverse follows this line of logic when outlining the scale of virtual space each term would inhabit. Also of note: Neal Stephenson, when writing Snow Crash, was influenced by the Unix operating system as his view of the computers of the day (hence the many Unix-type references in his book), so I cannot imagine that he meant a Metaphysical Universe rather than a Unix-styled MetaData Universe. The first would be a more holistic approach, while the latter sticks with the computer and Unix-styled theme.

I am highly confused by the twisting and misrepresenting of the term Metaverse by companies such as Linden Lab (makers of Second Life), which are attempting the old "bait and switch" approach to the word. Instead of its original meaning of MetaData Universe, they imply that it derives from Metaphysical Universe.

While this may work for promotion and advertising to the public, it does a grave disservice to the word itself and misleads the public into believing that the product (Second Life) is indeed a full-scale Metaverse, when it qualifies as only two MetaWorlds (Adult SL and Child SL). Even with their misleading advertising, they still do not qualify as a Universe structure.

Probing further into the subject, one also finds that There.com was misclassified as a Metaverse by the public (more than likely due to the misrepresentation of the word to begin with), when in fact There.com, too, qualified only as a single MetaWorld.

To give better insight into these classifications, I will present a step-by-step example of each:

  • MetaData: A single virtual object (a standalone VRML display of a single 3D object), or simply a 2D website.
  • MetaWorld: A perceived single location space (world), either single-user or massively multi-user.
  • MetaGalaxy: A group of MetaWorlds, more than likely massively multi-user, interconnected (not standalone). Example: Active Worlds.
  • Metaverse: Multiple MetaGalaxy systems linked into a perceived Virtual Universe, not existing on a central server (Morningstar and Farmer: Lessons Learned from Habitat, on decentralization). No system yet qualifies.
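As a rough illustration, the four-tier progression above can be modeled as nested containers. This is only a conceptual sketch in Python; every class and field name is invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class MetaData:
    """A single virtual object or flat page: the smallest unit."""
    name: str

@dataclass
class MetaWorld:
    """A perceived single location space containing objects."""
    name: str
    objects: list = field(default_factory=list)

@dataclass
class MetaGalaxy:
    """A group of interconnected MetaWorlds on one server."""
    name: str
    worlds: list = field(default_factory=list)

@dataclass
class Metaverse:
    """Multiple MetaGalaxies linked into a perceived universe,
    with no single central server."""
    galaxies: list = field(default_factory=list)

    def world_count(self) -> int:
        return sum(len(g.worlds) for g in self.galaxies)

# By these definitions, Second Life's two grids are two MetaWorlds,
# while an Active Worlds "Universe" server is a single MetaGalaxy.
sl = [MetaWorld("Adult SL"), MetaWorld("Child SL")]
aw = MetaGalaxy("Active Worlds Universe", worlds=[MetaWorld("AlphaWorld")])
mv = Metaverse(galaxies=[aw])
```

The point of the nesting is that each tier only aggregates the tier below it; nothing in the model requires the galaxies to live on one machine.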

Having already classified both Second Life and There.com as MetaWorld structures, I move on to another contender in this field: Active Worlds.

Active Worlds as a platform is the closest thing to an actual Metaverse that exists today. While not entirely a Metaverse, there is a much higher probability that it can become one if properly implemented. As of this writing, the Active Worlds Browser (AW Browser) is available in server configurations that reach only as high as a MetaGalaxy (multiple worlds, interconnected, centralized). This is despite their server classifications of Sol, Galaxy, and Universe, which, again, are incorrectly labeled when measured against the proper classifications.

The Active Worlds server comes in a Sol configuration, which by all standards is simply a MetaWorld configuration with no interconnection to other systems. Sol servers are small MetaWorld configurations accessed with a standalone browser (and usually with very small user-access limits as well).

The Galaxy configuration is essentially the same as the Sol configuration in that it is a single MetaWorld configuration; the difference is that it offers much larger virtual land and simultaneous-user limit options (whereas the Sol configuration does not). The misleading aspect here is that this is not actually a MetaGalaxy (or a galaxy by any proper definition, which would include many planets and even stars). Instead, the so-called "Galaxy" from Active Worlds is nothing more than a larger MetaWorld configuration than a Sol.

Which brings me to the "Universe" configuration. By definition, a Metaverse should be many galaxy systems linked together as a decentralized network of MetaWorlds. What we find with the Active Worlds Universe server is nothing more than an actual MetaGalaxy configuration, regardless of how far it can expand or how many worlds can exist within it. It is still a centralized system containing many worlds, not a decentralized system of many MetaGalaxies.

So, by the proper definitions of the terms in use, the Sol configuration for Active Worlds should be merged with the Galaxy server option and renamed the MetaWorld Server, while the Universe configuration should be renamed the MetaGalaxy Server. That leaves the Metaverse open for actual implementation once again, though not as an option for purchase, because a Metaverse constitutes decentralized, massive multi-MetaGalaxies linked through a single interface.

What would a Metaverse look like?

Within the current configurations of an Active Worlds system lies the innate ability to evolve into something much larger. Linking multiple world or galaxy structures is considered physically impossible within the Second Life platform (and even There.com), whether by limitation of software or hardware or by business decision; with the Active Worlds system, by contrast, there is a latent ability to evolve into a full-scale Metaverse.

There already exist many MetaGalaxy configurations for Active Worlds (Sol, Galaxy, and Universe, by their definitions), and all are currently standalone in nature. A full-scale Metaverse would evolve should the client browser be given the ability to traverse all of these independent MetaWorlds and MetaGalaxies to create a perceived Metaverse space.

This follows the writings of Morningstar and Farmer (Lessons Learned from Lucasfilm's Habitat), where they explain that a decentralized system is a must due to bandwidth constraints. The idea is that while it is feasible to house a great multitude of virtual inhabitants within a single perceived environment, the cost-to-user ratio skyrockets as those figures reach the hundreds of thousands and even millions of users. Within a decentralized system (many linked MetaGalaxies), however, the perceived space of all of the worlds and galaxies metaphorically creates a seamlessly integrated, single Metaverse structure.

The first area to investigate involves the elimination of the centralized backend. The backend is a communications and processing bottleneck that will not withstand growth above too large a size. While we can support tens of thousands of users with this model, it is not really feasible to support millions. Making the system fully distributed, however, requires solving a number of difficult problems. The most significant of these is the prevention of cheating. Obviously, the owner of the network node that implements some part of the world has an incentive to tilt things in his favor. We think that this problem can be addressed by secure operating system technologies based on public-key cryptographic techniques (Rivest, Shamir and Adelman, 1978; Miller et al, 1987). - Morningstar and Farmer: Lessons Learned From Lucasfilm's Habitat
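The cheating-prevention idea in that quotation, that nodes sign the state updates they emit so that peers can reject forgeries, can be sketched as follows. This toy uses HMAC from the Python standard library purely to stay self-contained; the authors propose true public-key signatures (e.g. RSA), under which each node signs with a private key and anyone can verify with the matching public key:

```python
import hashlib
import hmac
import json

def sign_update(node_key: bytes, update: dict) -> str:
    """Sign a world-state update; canonical JSON keeps signatures stable."""
    payload = json.dumps(update, sort_keys=True).encode()
    return hmac.new(node_key, payload, hashlib.sha256).hexdigest()

def verify_update(node_key: bytes, update: dict, signature: str) -> bool:
    """Peers recompute the signature and reject updates that don't match."""
    return hmac.compare_digest(sign_update(node_key, update), signature)

key = b"node-42-secret"  # stands in for the node's private signing key
update = {"object": "magic_lamp", "owner": "avatar_7"}
sig = sign_update(key, update)

# A tampered update fails verification, so a cheating node cannot
# silently rewrite world state it does not own.
tampered = {"object": "magic_lamp", "owner": "avatar_1"}
```

The design point is that trust moves from the central backend to the signatures themselves: any node can host part of the world, because other nodes verify rather than assume its claims.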

Admittedly, the Active Worlds Universe configuration is supposed to handle only 65,000 simultaneous users before the server essentially needs to be "cloned" into a mirrored version capable of handling, again, only another 65,000 simultaneous users. This limit applies to the entire server and is split among the worlds currently within it. The main drawback is that, under current configurations for Active Worlds, each cloned server would require a separate install of essentially the same browser in order to visit it: two copies of the same program for two systems that are essentially the same. Imagine trying to visit the Metaverse under these constraints, with each section requiring a separate download and install of essentially the same browser.

This idea is like downloading Firefox and finding out you can visit only 500 websites, and that to gain access to another 500 websites you must download and install a separate copy of the same program, over and over. This concept is the exact opposite of what the Internet itself was meant to be.

Under this configuration, it would be impossible to effectively host ten very popular worlds within a single server.

Popular websites play host to millions of users per hour. Under this scheme, if one were to implement a MetaWorld using the seemingly most capable system (Active Worlds), that MetaWorld would essentially be obliterated within the first 15 minutes (or less).

The idea of allowing the client to traverse all servers (Sol, Galaxy, and Universe) across the globe makes a Galaxy (in Active Worlds terms) less important, as a MetaWorld configuration (currently known as a Sol/Galaxy) could be configured for as many as 65,000 users for that single MetaWorld, while access to that MetaWorld could be accomplished through normal web means (or even teleport-linked in world).

This would create a nearly seamless (or, if done correctly, seamless) transition across multitudes of server configurations across the real world, creating a real MetaData Universe wherein the servers are not centralized and the perceived space for the user is infinite.
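A hypothetical sketch of such traversal: a world address resolves to whichever independent server hosts it, so one client hops between decentralized MetaGalaxies the way a browser follows links. Every name, scheme, and endpoint below is invented for illustration:

```python
# Directory mapping each independently operated MetaGalaxy to its server.
# In a real system this lookup would itself be decentralized (like DNS).
GALAXY_DIRECTORY = {
    "andromeda": "aw://galaxy.andromeda.example:7777",
    "orion":     "aw://orion-worlds.example:7777",
}

def resolve_world(address: str) -> str:
    """Map an address like 'orion/marketplace' to its hosting server."""
    galaxy, _, world = address.partition("/")
    server = GALAXY_DIRECTORY.get(galaxy)
    if server is None:
        raise KeyError(f"unknown galaxy: {galaxy}")
    return f"{server}/{world}"

def teleport(client_state: dict, address: str) -> dict:
    """One client, many servers: switch connections instead of
    installing a second copy of the browser per server."""
    return dict(client_state, connected_to=resolve_world(address))

state = {"avatar": "Aeonix", "connected_to": None}
state = teleport(state, "orion/marketplace")
```

The user perceives one continuous space, while the servers remain independently owned and operated; only the directory and the address format need to be shared.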

What is stopping them?

While much of the hardware and software needed to truly create a full Metaverse system already exists, the vast majority of companies with systems capable of it are keeping their programs closed to expansion, hindering the greater potential their software could realize.

Some "homebrew" applications are popping up across the Internet, interestingly enough in places like Sourceforge.net, to attempt to tackle this problem. Unfortunately, no homebrew software effort has yet managed to successfully capture the ease of use and power of a real Metaverse system.

Points of reference: "Snow Crash", Neal Stephenson; "The Diamond Age", Neal Stephenson.

Internet 2.0

The future of the Internet itself has yet to be determined, but one thing is for sure: it includes ever-increasing bandwidth options. While the fiber-optic Internet of fiction has yet to be realized, there are ongoing plans to utilize high-bandwidth systems in our immediate and distant future.

It is often said that technology doubles in capability roughly every 18 months, per Moore's Law. This is not limited to the speed of processors, but extends into every facet of technology connected to the CPU. Years ago it was famously claimed that no consumer could ever need more than 640K of RAM, and yet we now use gigabytes of RAM for the latest software, operating systems, and games.
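The doubling claim works out as simple arithmetic: capacity grows by a factor of 2^(years / 1.5) when it doubles every 18 months.

```python
def moores_law_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Growth factor after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Over a decade, that compounds to a roughly hundredfold increase,
# which is why yesterday's 640K ceiling looks absurd today.
factor_10y = moores_law_factor(10)
```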

So too must the demand for bandwidth grow as our online experience matures and the rate of data transfer increases. Where once we had simple HTML text websites with a few hyperlinks, we now have media-rich sites utilizing streaming audio and video. But these too will mature as bandwidth increases; there is only so much a person can experience via video and audio.

So what form will our future Internet take? The obvious answer is a Metaverse, or more aptly, multiple parallel MetaGalaxies. With broadband rapidly expanding from all directions (cable broadband is now up to 6 and 8 Mb/sec), information transfer capacity is expected to keep doubling accordingly. It is not merely viewable information that we will experience in the future, but immersive information, wherein objects representing concepts (metaobjects) are downloaded on demand: instead of a flat webpage, we will experience an entire online virtual space, collaboratively.

The future of the Internet is indeed broadband. And not the 3 Mb/sec you see today, but upwards of 90 Mb/sec and higher. Of course, current hardware will not support that onslaught of data transmission, and nobody expects it to. As bandwidth increases, it does so alongside ever-increasing storage capacities, CPU speeds, GPUs (graphics processing units), and onboard RAM requirements. See also the Abilene backbone [Internet2].

Some companies today possess systems that could easily qualify as a Metaverse (with some minor modification), though nowhere near as high-definition as the ones described in fiction. Everything has a start, and even the most advanced technology metamorphoses into something truly amazing with every year it exists. What we have is the beginning of the Metaverse as we know it: conceived in the late 1980s (Habitat), advanced in 1995 (Active Worlds), and finally gaining acceptance in 2006 with companies like Active Worlds and Second Life.

As well it should. While the real world grasps the meaning of what a true Metaverse is and how it should affect their lives, I write here the actual definitions and explanations of what these systems are and how they should be implemented. In the end, a Metaverse structure is nothing more than an entirely new media form.

What this boils down to is that the content contained within even a MetaWorld (no matter how well linked into a universe structure) can be described simply as a collaborative 3D website. A MetaWorld is nothing more than an advanced, collaborative, real-time 3D website. It is an entirely different media form altogether, but understanding it in this light helps a great deal.

A Metaverse can be described as a 3D Internet, plain and simple, wherein the content within can be seen as a 3D website or webpage. With this in mind, even an experienced technology expert (right down to the average person) should be able to easily grasp what this is all about. It is wholly incorrect to classify such a system as a game, or simply an educational tool, or even just a chat room. Just like the Internet now, a Metaverse can be everything you see on the Internet today, in a wholly new and remarkable form of media.

The browser chosen for this writing was selected based on a few discerning factors:

  • The Active Worlds platform is massively multi-user.
  • The browser can be embedded into a webpage.
  • The environment is easily scriptable and editable.
  • It offers multiple-MetaWorld capability.

While other companies such as Linden Lab (Second Life) indeed have impressive systems, they are missing half of those requirements to qualify as a true Metaverse. Both Second Life and There.com are categorized as single MetaWorlds, despite what they may tell the media, and are incapable of creating (or unwilling to create) multiple MetaWorlds linked together into a Metaverse structure.

Blaxxun Interactive, also listed, was supposedly trying to create the Metaverse using the VRML standard, adding its own back-end client for massive multi-user support, but it fell short of the requirements by not including scriptable and editable environments. (VRML is not a live language, but a precompiled world standard.)

Possible Issues In The Future

Since 2006, the issue of piracy has become a big talking point, and ISPs in the United States (and elsewhere) have been pushing to implement capped bandwidth for their customers as a result. Instead of upgrading their network capacity to alleviate congestion, many ISPs (including larger names with a stranglehold on their markets) are quietly imposing bandwidth limits on their users, which would drastically hurt the Metaverse going forward. This is shameful behavior at best, and until the issue is resolved it will effectively impede the creation of effective Metaverse systems.

What this boils down to is the need for developers to design better, more efficient systems that require drastically less bandwidth while retaining high quality. This seems a contradictory requirement, even impossible at first glance, but the main purpose of Andromeda Underground is to research such methods, and after a few years of doing so we have concluded that it is completely possible to accomplish. Not just possible: we have figured out exactly how to accomplish it.

And so, while companies employ cloud servers to pre-calculate scenes (BlueMars) or simulators that chew up bandwidth centrally (Second Life), we continue to look forward to the day when a true Metaverse is created, while enjoying the follies and misguided actions of current-generation systems.

William Burns

Project Leader for Andromeda Media Group

April Updates; or

50 Ways to Leave Your Simulator

Spring is in the air, and change is blooming all around us during the first quarter of 2010! The members of Andromeda Underground are all quite busy working on various projects to make the virtual environment market a better place to inhabit, and I'd like to take a moment, just sit right there, and I'll tell you how I became the Prince of a town called Bel-Air.
Wait.. no... sorry, got sidetracked.

Starting with the forums, a lot of updates and improvements have been made since the last post in March, so I'll start there and try to outline what's new :)

Forums - MPL Andromeda3D (GSK)

Starting April 14th, some user permissions are being revised. Administrators and Moderators are exempt from this change. Users must now meet the following requirements to access certain sections:
  • Users must now have at least 5 posts to view links and images, or post links and images.
  • Users must now have at least 5 posts to create visitor messages, create and view photo albums, and comment on photo albums.
  • Users must now have at least 10 posts to view attachments.
  • Users must now have at least 15 posts to send private messages. Users can send messages and emails to administrators and moderators without fulfilling this post count requirement.
  • Users must now have at least 20 posts to send emails to other users. Users can send messages and emails to administrators and moderators without fulfilling this post count requirement.
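For the curious, gating like this reduces to a simple threshold check. The sketch below mirrors the rules listed above, but it is illustrative only; the function and field names are made up, not vBulletin's actual permission API:

```python
# Post-count thresholds, matching the forum rules above.
THRESHOLDS = {
    "view_links_images": 5,
    "visitor_messages_albums": 5,
    "view_attachments": 10,
    "private_messages": 15,
    "email_users": 20,
}

def can(user: dict, action: str) -> bool:
    """Return True if `user` may perform `action` under the post-count rules."""
    # Administrators and moderators are exempt from post-count gating.
    if user.get("role") in ("admin", "moderator"):
        return True
    # Messages and emails to staff are allowed regardless of post count.
    if action in ("private_messages", "email_users") and user.get("target_is_staff"):
        return True
    return user.get("posts", 0) >= THRESHOLDS[action]

new_user = {"role": "member", "posts": 3}
regular = {"role": "member", "posts": 25}
```

A brand-new member with 3 posts would be denied link viewing, while a member with 25 posts clears every threshold.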
Users, both anonymous and registered members, can now report bugs, make feature requests, share ideas, and more. If you see a bug or something doesn't go right, click the [Feedback] button on the left of each page and file a report. This uses the [Get Satisfaction] service, so if you don't have an account yet, you will need to make one (you can use your Twitter, Facebook, or OpenID accounts to create a [Get Satisfaction] account). Creating a [Get Satisfaction] account is quick and easy. After your report is posted, the administrators, like myself, will be made aware and we'll look into it. Keep checking back for additional information and statistics on your report. You can access the [Get Satisfaction] hub by clicking [Support] in the navigation bar.

Additional Updates and Changes in the Forums:
  • April 1: April Fools!
  • April 5: Users with confirmed donations no longer see forum advertisements.
  • April 5: [Support] is now listed under [Apps].
  • April 6: In the event the online user count rises above 500 concurrent users, the forums will close to reduce server load until the count falls back below 500 concurrent users, at which point the forums will automatically re-open.
  • April 6: The April Fools Day 2010 prank was re-enabled. You can access the prank by clicking here or by selecting "APRIL FOOLS 2010" from the dropdown box at the bottom-left corner of each page of the forum. Likewise, you can return to the default style by selecting Underground v3 from the list.
  • April 6: It is now possible to receive text message alerts upon receipt of a new private message or visitor message. To set this up, visit your User Control Panel. Administrators and Moderators cannot see your phone number.
  • April 12: Fixed an error where all of the text on the forum was bold.
  • April 12: All users are now required to wait 20 seconds before they can accept revised Terms of Service.
  • April 12: The Terms of Service has been updated.
  • April 14: Added framework to allow for post count-based user permissions.
  • April 14: The list of available BBcodes has been updated.
  • April 18: A feature has been disabled after we became aware that it violates the vBulletin Terms of Service.
Of course, there is also the upgrade to vBulletin 4.0 and the subsequent new theme created for Andromeda Underground. Unfortunately we were unable to keep the old theme from the prior version, as it was incompatible with vB 4, so GSK and I collaborated on the new, updated version.

Pixel Labs in SecondLife

Some of the members of Andromeda Underground are participating in a Second Life collaborative named Pixel Labs. To date, the member roster includes:

  • Darian Knight (Aeonix Aeon)
  • Goshenta Silversmith
  • s1rux Forsythe
  • drduke Rehula
  • Jon Dragoone (nwasells)

As a result, many of our top forum posters have been preoccupied with collaborative projects. Some of the projects being worked on:

book Tablet: http://www.perpetualstudios.com/book/help/

And of course the koios presentation screen system.

The book Tablet has been gaining a lot of attention in Second Life, and even has an article written about it and the people of Pixel Labs -


Association of Virtual Worlds: Person of the Year Nominations

It seems that the Association of Virtual Worlds is holding a Person of the Year awards event, and our very own Darian Knight has two nominations as of this post! If you'd like to nominate Darian for the Person of the Year Award - check out the following link:


Of course, you can email your nomination to edita@associationofvirtualworlds.com
with a subject line of "Person Of The Year Nomination"

Be sure to include Darian's real name for the nomination, and mention that Darian Knight (Aeonix Aeon) is the alias. Not sure what the real name is? William Burns.

Happy Birthday to Darian Knight!

Yep, April 21st marks the birthday of the Project Leader of Andromeda3D, and this year we're not really sure what to do for the occasion. One year he threw a huge virtual-worlds concert for his birthday, and repeated the concert the next year. Another year he put on a public fireworks display, but this year he has said that the best birthday gift he could receive would be nominations for the Association of Virtual Worlds Person of the Year Award.

Nothing fancy this year -

May 22nd is the Official Anniversary of Andromeda Underground!

Coming in the next month are a number of gifts for our forum members to celebrate the two-year anniversary of Andromeda Underground. We're not going to say exactly what those gifts are, because it's a surprise!

We will, however, say that it's a good day to make sure you are logged in :)

Thanks for catching up with us this month, and hope to see you in the forums!


The Andromeda Underground