The metaverse won’t grow until we wear our own faces there

Paul Raven @ 02-11-2010

Interesting think-piece from Wagner James Au of New World Notes; he’s wondering if the drop-off in interest in virtual worlds is driven by the very human need to see the real face of the person you’re interacting with. The riff originates in the observation that folk in Halloween costumes which hid their faces experienced less engagement and roleplay from others than those in costumes where the face was left uncovered:

Without the ability to peek at the person behind the costume, people were largely leery, and standoffish. Many of these face-obscuring costumes were incredibly creative and believable, which you might think would encourage more roleplay. But for the most part, if they couldn’t get a rough idea of the person inside the outfit, people would hold back.

I think we’re seeing a similar effect with virtual worlds, as compared to social games. Most of the biggest social games, like FarmVille, have customized avatars, but the avatar is connected to a real identity, and perhaps even more important, a real face. In effect, social game avatars act like Halloween costumes, where you can see the person inside the outfit. Most avatars in virtual worlds, by contrast, resemble a full body costume where the face is largely or totally obscured. This is probably a major reason why they’ve failed to gain mass adoption. In effect, most of the population is looking at virtual world avatars the same way people at Halloween parties look at costumes that have hidden faces — with interest and curiosity, maybe, but also with some apprehension or unease.

If I’m right, one good way to grow virtual worlds is to make avatars more like casual Halloween costumes, in which you’re able to know a little about the person controlling it. That doesn’t necessarily mean linking the avatar to the owner’s Facebook profile. (In fact I’d suggest linking avatar profiles to dating sites, like OKCupid, would be more productive than Facebook.) Halloween isn’t popular because people want to actually be Batman or Sarah Palin or even Pedobear — they want to express a part of their personality in a fun way, in a fun social context where others are doing the same. And above all, have this roleplay connect to the rest of their lives.

It’s a pretty loose thesis at this point, but it chimes with my own experience of metaverse realities: the anonymity and/or immersive never-out-of-character roleplaying that so engages the core demographics of such spaces is actively repellent to others.

I suspect business-sphere interest and investment in metaverse tech will be the necessary developmental catalyst for the sort of transparency Au is suggesting (a sort of videoconferencing on steroids, which might get popular very fast once oil prices start climbing again and flying overseas for meetings becomes an unsustainable overhead). But I also suspect that the heaviest metaverse users will always be those who find the wearing of masks to be a liberation from reality rather than a disconnect from it.


Little lost robot

Paul Raven @ 19-05-2009

Robots have been mobile for decades, but they’ve only ever been able to go places for which they had a map or set of directions stored. That’s all changed thanks to a team of roboticists from Munich, who’ve built the first robot that can be unleashed into unfamiliar territory without a map. How does it complete its journey? It asks for directions, of course:

ACE uses cameras and software to detect humans nearby, based on their motion and upright posture. As it closes in on a likely helper, ACE’s “head” – bearing a touchscreen and a second screen displaying an animated mouth – turns to face the chosen person.

A speaker working in sync with the animated mouth is used to get the person’s attention and to ask them to touch the screen if they want to help. Willing guides are then asked to point the robot in the correct direction, with the response being analysed by posture recognition software. Direction set, ACE says “thank you” before trundling off.

Pointing, rather than telling the robot where to go, avoids confusion caused by the fact that the robot and the facing pedestrian each have a different sense of left and right.

Although it interacted with 38 people over a period of nearly five hours, ACE did eventually reach its destination. In fact, the team report that the robot was making very good progress until it reached a busy pedestrian area where its own popularity became a problem.
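The frame-of-reference point in the quoted passage is worth unpacking: a spoken “left” is defined in the pedestrian’s own frame, which the robot can only use if it also estimates which way the pedestrian is facing, whereas a pointing gesture is already a direction in the frame the robot’s cameras perceive. A minimal Python sketch of that distinction (purely illustrative — not the Munich team’s actual code, and the function names are my own invention):

```python
import math

def spoken_to_world(pedestrian_heading, word):
    """Convert a spoken direction to a world-frame bearing in radians.
    This only works if the robot knows the pedestrian's heading,
    because "left" is defined in THEIR frame, not the robot's."""
    offsets = {"left": math.pi / 2, "right": -math.pi / 2, "ahead": 0.0}
    return (pedestrian_heading + offsets[word]) % (2 * math.pi)

def pointing_to_world(dx, dy):
    """A pointing gesture observed by the robot is already a vector
    in the shared frame: just read the bearing straight off it."""
    return math.atan2(dy, dx) % (2 * math.pi)

# The robot faces bearing 0; the pedestrian faces it, heading pi.
# The pedestrian's "left" is therefore the robot's right (3*pi/2)...
spoken = spoken_to_world(math.pi, "left")
# ...while a point toward (0, -1) yields the same bearing with no
# knowledge of the pedestrian's heading required at all.
pointed = pointing_to_world(0.0, -1.0)
```

The gesture route drops a whole estimation problem (the pedestrian’s heading) from the pipeline, which is presumably why the team preferred posture recognition over speech.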

The current rarity of mobile robots in public spaces is obviously a big factor here; in a few more decades, we may barge past lost robots on the pavement as quickly and guiltily as we do homeless people or street-drunks.

The principle on display here is robot-human interaction as a way of gathering environmental data to complete a task or journey, which is all well and good, but this is a proof of concept more than anything else. If all you needed was a robot that could navigate an unfamiliar cityscape, it’d be far easier to kit it out with good visual sensors and a GPS unit.

Hell knows this would be useless for military applications; if your super-killbot had to stop at every enemy checkpoint to ask the way to headquarters, I dare say the best place it would end up would be a long long way from anything at all… [story via regular commenter Evil Rocks; apologies to Paul McAuley for the headline]