Mar 1, 2009

A Recipe For Awesome...

One of our recent endeavors here at Andromeda3D has been to find a solution to one of the oldest problems in virtual worlds: how to easily let users make their avatars look like themselves, should they choose to. One popular approach is to simply take two photos of the subject, one from the front and another from the side, and wrap a face map onto a generic head model. While this is fairly quick, the results aren't very good in the end, as the face map tends to warp and stretch over a mesh that doesn't match the subject's features.

The obvious solution would be to have an algorithm analyze the photo and then mold the facial features of the mesh to fit it. As you can imagine, this is no simple task, but a bit of research showed that there are definitely ways to accomplish it.

So we take a photo of the individual we would like to import into the system as a 3D representation of the face on an avatar, and we let the algorithm analyze the photo while actively morphing the model to match its features.
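The post doesn't describe the algorithm's internals, so here is a rough, purely illustrative sketch of one common approach, assuming some landmark detector has already produced matching feature points (eye corners, nose tip, mouth corners, and so on) on both the photo and the generic mesh. Each vertex of the mesh is then pulled by a distance-weighted blend of the landmark offsets, a crude radial-basis morph:

```python
import math

def morph_head_mesh(vertices, mesh_landmarks, photo_landmarks, radius=0.5):
    """Pull a generic head mesh toward landmark positions detected in a photo.

    vertices        -- list of (x, y) points on the generic mesh (front view)
    mesh_landmarks  -- (x, y) landmark positions on the generic mesh
    photo_landmarks -- matching landmark positions detected in the photo
    radius          -- falloff controlling how far each landmark's influence reaches
    """
    # Offset each landmark must travel to reach its photo position.
    offsets = [(px - mx, py - my)
               for (mx, my), (px, py) in zip(mesh_landmarks, photo_landmarks)]
    morphed = []
    for (vx, vy) in vertices:
        dx = dy = total = 0.0
        for (mx, my), (ox, oy) in zip(mesh_landmarks, offsets):
            d2 = (vx - mx) ** 2 + (vy - my) ** 2
            w = math.exp(-d2 / (radius * radius))  # nearby landmarks dominate
            dx += w * ox
            dy += w * oy
            total += w
        if total > 0:
            morphed.append((vx + dx / total, vy + dy / total))
        else:
            morphed.append((vx, vy))
    return morphed
```

A vertex sitting exactly on a landmark follows that landmark's offset completely, while vertices between landmarks get a smooth blend, which is what keeps the mesh from tearing.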

In our test, we chose a familiar face to work with -

If we take into account that the photograph imported into the system was obviously not an ideal front shot for 3D modeling, we can see that the result of the algorithm and mesh matching is indeed impressive, as shown below:

But this is by no means the end of the brainstorming and technology think-tank session. The facial generation system also rigs the animation points on the model so that phonemes are incorporated as well. The first thing this brought to mind was allowing the mouth to move realistically when speaking, and not in a post-process sort of manner.

What if we marry this technology with the VoIP system, and then marry those two with a Real Time LipSync system? Not only would the user be able to look like themselves in the virtual world by importing a single photo, but their virtual lips would also sync to their VoIP conversation in real time.

Now we're getting someplace. After a bit more research into this idea, we came across an SDK which allows just that: real-time syncing of streaming audio to mouth animations based on phonemes. Not only can the avatar look photorealistic, it can actually move its mouth properly when you speak. The only downside so far is a delay of about 10ms for audio processing, which in the grand scheme of things isn't bad at all.
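The SDK itself isn't named here, but the core idea behind phoneme-driven lip sync is easy to sketch: many phonemes share the same mouth shape, so they collapse into a small set of "visemes" that drive the rigged animation points. The phoneme labels below are illustrative ARPAbet-style examples, not the SDK's actual output:

```python
# Many phonemes map to the same viseme (mouth shape).
# This table is a small illustrative subset, not a complete mapping.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "AH": "open",
    "B": "closed", "M": "closed", "P": "closed",
    "F": "lip_bite", "V": "lip_bite",
    "OW": "round", "UW": "round",
    "sil": "rest",
}

def visemes_for_stream(phonemes):
    """Map a stream of recognized phonemes to mouth-shape keyframes,
    dropping consecutive duplicates so the mouth only animates on change."""
    frames = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "rest")
        if not frames or frames[-1] != v:
            frames.append(v)
    return frames
```

In a real pipeline the recognizer would emit these phonemes from the VoIP audio buffer a few milliseconds at a time, which is where the ~10ms processing delay comes from.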

As for the face generation algorithm, on a decent computer with a good graphics card the process should take roughly 2-5 minutes to create the custom face and head. On slower computers (or computers with bad OpenGL drivers) the process can take considerably longer, into the 10-20 minute range.

While the generation process may be slower on outdated computers, it still works the same way with the same results; it's simply a matter of whether the user wishes to wait for the customization process to finish. There is a definite upside to all of this: once the face is customized to the photo, the file generated to recall the data is around 40-50 KB in size, and the mesh-morphing calculations never need to be run again (unless the user imports a new photo).
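To see why the recall file is so small, consider what it actually has to store: not the mesh itself, just the morph parameters that reproduce it from the generic head. A hypothetical sketch (the real file format is not described in the post) of packing those parameters as 32-bit floats:

```python
import struct

def pack_face_profile(morph_weights):
    """Pack morph parameters as a count followed by 32-bit floats.
    10,000 parameters comes to ~40 KB, in line with the 40-50 KB figure."""
    return struct.pack("<I", len(morph_weights)) + \
           struct.pack("<%df" % len(morph_weights), *morph_weights)

def unpack_face_profile(blob):
    """Recover the morph parameters from a packed profile."""
    (n,) = struct.unpack_from("<I", blob, 0)
    return list(struct.unpack_from("<%df" % n, blob, 4))
```

Loading the avatar then means applying these stored weights directly, skipping the expensive photo analysis entirely.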

Then we add yet another feature we've been working on to this chain: the Shader Skin process, by which a procedurally generated skin texture is applied to the mesh as a base layer, then the face map is applied as the top layer, with its opacity and color adjusted to closely approximate the skin beneath and create a seamless transition.
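Per pixel, that layering is straightforward alpha compositing with a color adjustment. A minimal sketch, assuming RGB values in the 0..1 range (the actual shader math isn't spelled out in the post):

```python
def blend_skin_layers(base_px, face_px, opacity, tint=(1.0, 1.0, 1.0)):
    """Composite the photo-derived face map over the procedural skin.

    base_px / face_px -- (r, g, b) tuples in 0..1
    opacity           -- how strongly the face map covers the base layer
    tint              -- per-channel color adjustment applied to the face map
                         so its tone approaches the procedural skin
    """
    return tuple(
        b * (1.0 - opacity) + f * t * opacity
        for b, f, t in zip(base_px, face_px, tint)
    )
```

Fading `opacity` toward zero at the edges of the face map is what produces the seamless transition into the procedural skin.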

As for hair, again we're looking at shaders for high-end computers, or premodeled hair geometry for slower systems. nVidia has been doing some interesting research into realistic, PhysX-enabled hair in real time, so we'll definitely be keeping an eye on that.

Let's take a moment to recount our recipe for awesome -

1. Import a photo, which is automatically analyzed to create a matching 3D mesh for the avatar
2. Apply real-time LipSync technology to a VoIP stream
3. The facial animations of the model are already rigged and ready for use in the LipSync scenario
4. Real-time shader hair with PhysX
5. Procedural skin generation to blend seamlessly between the face and the rest of the body

Since the Digital DNA profiles of people are very small files, comparing a newly generated profile against one already on file (such as Robert Downey Jr.'s) would be fairly easy. This would let us make sure that people importing a photo aren't importing a known celebrity in the database who has requested IP Protection from Andromeda (part of the VIP Program).
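Because two similar faces produce similar morph parameters, the check can be as simple as a distance test between parameter vectors. A hypothetical sketch (the names and the threshold are illustrative placeholders, not the actual Andromeda system):

```python
import math

def profile_distance(a, b):
    """Euclidean distance between two morph-parameter vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches_protected_profile(candidate, protected_db, threshold=0.1):
    """Return the name of a protected profile the candidate is suspiciously
    close to, or None. The threshold would need tuning in practice."""
    for name, profile in protected_db.items():
        if profile_distance(candidate, profile) < threshold:
            return name
    return None
```

With profiles this small, scanning even a large protected-celebrity database this way is cheap.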

For the time being, this is definitely an avenue of research which we wish to continue as we put this virtual universe together.

- The Management

