Consumer-grade virtual reality is still in its infancy. Anyone who has spent more than a few minutes trying out one of the solutions on the market can likely tell you a story about a time when something ruined their immersion. Momentary lapses in motion tracking can make a person feel ill. Maybe they played a game where the graphical fidelity was too low to feel convincingly “real.” Perhaps their story involves their avatar not having arms, or arms that didn’t line up with their real body.
Arms in VR became something of a hot topic recently with the release of the trailer for Half-Life: Alyx, Valve’s upcoming VR-exclusive return to the series. PC Gamer published two opinion pieces about its lack of first-person arms—one saying players should have arms, and the other defending Valve’s decision not to include them. On Twitter, Reddit, and across various forums, these opposing takes inspired some lively debate: should VR start prioritizing arms and full first-person bodies?
Given that relatively few people have VR in their homes and even fewer have tried developing experiences for it, I figured I’d reach out to some experts on the matter. I wanted to get their views on why first-person arms and other body parts are tricky to do in VR, and when it makes sense to include them. Here’s what I learned:
Invisible Arms Versus Too-Visible Arms
Will Smith, founder and CEO of FOO VR, says that in his experience “no one notices whether they have arms or not” after the first few minutes in VR. This is a fairly common observation and argument for not including first-person arms—one that touches on two pillars of what constitutes VR immersion: presence and embodiment. Presence is the feeling of being inside the virtual environment, while embodiment refers to the feeling of having your body represented in that space. While there is considerable interplay between the two, depending on the aims of the VR game or application in question, it might make sense to privilege presence over embodiment or vice versa.
FOO’s VR animation system, which is used to make TV-ready content, employs machine learning and inverse kinematics (IK) techniques to position joints as a means “to create believable movement, not necessarily to capture the performer’s movement accurately.” To put it another way, when the voice actor of Carl from Aqua Teen Hunger Force put on a VR headset to reprise the role in a show powered by FOO’s animation tech, the point wasn’t for him to feel like he was there or that he actually was Carl. Rather, performance was the aim, not presence or embodiment.
Smith and company have found that seeing first-person limbs isn’t necessary for the live performers driving that animation. “After doing hundreds (or maybe thousands) of VR demos, we found that a significant number of people find the sensation of their in-game and real-world limbs not lining up perfectly extremely discomfiting,” says Smith.
The practical gains from not rendering first-person limbs go beyond non-game applications like FOO’s. “The math that we do to animate limbs is pretty hefty for mobile VR (like the [Oculus] Quest), so the other big benefit of hands-only VR is that you can port from desktop to mobile easier,” Smith explains. “Also, you aren’t wasting any of the valuable viewport on arms.”
Kaspar Hald, an indie VR developer working on a game called Project: Warped, offers up more examples of how forgoing first-person arms can be thought of as a purposeful design choice rather than an omission (and no, his game doesn’t include arms):
One advantage of floating hands is that [the] player knows what part of their body is relevant to the game world and the gameplay. For example, you know you cannot be hit on your arms if they are not shown. Another advantage may be that having your separated hands allows you to accept and use them for more abstract purposes. For example, the game Compound by Bevan McKechnie uses floating hands and turning your wrist as if looking at a wristwatch activates a level map. The size of the map would likely have been an issue if it had a chance to overlap with a rendered arm.
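The wrist-check gesture Hald mentions from Compound can be approximated with a simple orientation test. The sketch below is hypothetical (Compound’s actual implementation isn’t public) and assumes the controller exposes a unit vector pointing out of the player’s inner wrist, and the headset a unit view-direction vector:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def watch_pose_active(wrist_normal, head_forward, threshold=0.8):
    """Return True when the inside of the wrist faces the player's eyes.

    wrist_normal: unit vector out of the player's inner wrist/palm.
    head_forward: unit vector along the headset's view direction.
    The pose "reads" when the two vectors point roughly at each other,
    i.e. their dot product is strongly negative. The threshold controls
    how precisely the player has to aim the gesture.
    """
    return dot(wrist_normal, head_forward) < -threshold
```

In practice a game would also debounce this check over a few frames so the map doesn’t flicker on and off at the threshold boundary.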
Since the animation calculations can be intensive and the field-of-view on VR headsets is still relatively limited, one could argue that arms should only be included if there’s a purpose in mind beyond embodiment.
Inverse Kinematics, Your Body, And You
Markus Steinberger and Mathias Parger of the Graz University of Technology were part of a research project that arrived at a fairly impressive approach for first-person arms in VR. Without any tracking data for the arms themselves, it does a solid job of lining up its in-game arms with the player’s real ones:
Using only tracking for the head and two hands (what’s commonly available from a VR headset and two controllers), Steinberger and Parger’s model uses inverse kinematics to guess at where the player’s arms are. IK, generally, is a method for procedurally calculating angles and positions for joints in a chain or skeleton. In this case, with the tracked positions of the head and hands being known, the IK model solves for plausible positions of the player’s neck, shoulders, and elbows.
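To make the shoulder-to-hand part of that concrete, here’s a minimal two-bone IK solve using the law of cosines. This is a generic textbook technique, not the Graz team’s model: given a shoulder position, a hand target, the two bone lengths, and a “pole” hint for which way the elbow should bend, it returns an elbow position that preserves both bone lengths:

```python
import math

def two_bone_ik(shoulder, hand, upper_len, fore_len, pole):
    """Place the elbow of a two-bone chain via the law of cosines.

    shoulder, hand, pole: 3D points as tuples. pole is a hint point the
    elbow should bend towards (e.g. below and behind the shoulder).
    Returns the elbow position as a tuple.
    """
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
    def scale(v, s): return (v[0] * s, v[1] * s, v[2] * s)
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def norm(v):
        length = math.sqrt(dot(v, v))
        return scale(v, 1.0 / length) if length > 1e-9 else (0.0, 0.0, 0.0)

    to_hand = sub(hand, shoulder)
    d = math.sqrt(dot(to_hand, to_hand))
    # Clamp so the target stays reachable (never beyond full extension).
    d = min(max(d, abs(upper_len - fore_len) + 1e-6), upper_len + fore_len - 1e-6)
    # Law of cosines: how far along the shoulder-hand axis the elbow's
    # projection lies, and how far it lifts off that axis.
    proj = (upper_len ** 2 - fore_len ** 2 + d ** 2) / (2.0 * d)
    height = math.sqrt(max(upper_len ** 2 - proj ** 2, 0.0))
    axis = norm(to_hand)
    # Bend direction: component of the pole hint perpendicular to the axis.
    # (If the pole lies on the axis, the elbow degenerates onto the axis.)
    to_pole = sub(pole, shoulder)
    perp = norm(sub(to_pole, scale(axis, dot(to_pole, axis))))
    return add(shoulder, add(scale(axis, proj), scale(perp, height)))
```

The hard part, as the rest of this section gets into, isn’t this geometry—it’s choosing a pole hint that matches where the player’s real elbow actually is.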
Because it focuses its IK system only on the upper body and carefully tunes the constraints, the Graz team’s model outperforms other first-person arm solutions in VR. If you look down in their demo, you won’t see your torso and legs, but that means the IK system won’t be further limited by also having to solve for those untracked body parts. Over email, Steinberger and Parger explained the differences between their approach and those of Final IK and Stereo Arts’ FullBody IK—two IK solutions that aren’t specialized for first-person VR arms:
Final IK only turns your upper body when your arms are far off to one side, because it tries to keep the feet planted and the body looking towards the same direction as your feet. Our system has very responsive shoulders which are not influenced by the lower body. Probably a more important difference is the way we move the elbow based on the hand’s position relative to the shoulder. Compared to other methods, we use a more complex system to estimate which direction the elbow is pointing at.
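The Graz team hasn’t spelled out their elbow-direction estimate here, but a crude stand-in shows the general shape of the idea: derive the elbow’s bend direction (the pole hint for an IK solve) from where the hand sits relative to the shoulder. All constants and the blend below are illustrative, not their model:

```python
def elbow_pole_hint(shoulder, hand, right_arm=True):
    """Hypothetical elbow-direction heuristic (NOT the Graz model).

    With the hand at rest near the hip, a real elbow mostly hangs
    downward; as the hand rises toward and above the shoulder, the
    elbow tends to swing outward. This returns a point for the elbow
    to bend towards, blending between those two tendencies.
    """
    side = 1.0 if right_arm else -1.0
    dy = hand[1] - shoulder[1]              # vertical offset of the hand
    # 0.0 when the hand is at/below shoulder height, 1.0 once it is
    # ~0.3 m above it (an arbitrary illustrative ramp).
    lift = max(min(dy / 0.3, 1.0), 0.0)
    return (shoulder[0] + side * (0.2 + 0.6 * lift),  # outward with lift
            shoulder[1] - (1.0 - lift) * 0.8,          # downward at rest
            shoulder[2])
```

A real system layers many more such cues (hand roll, distance across the body, motion history), which is why the researchers describe theirs as “a more complex system.”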
In testing, the Graz team also compared a method for full arm tracking against their IK method and found that players preferred the IK, even though it doesn’t present the real position of the arms. The Graz team also found that players performed better at an archery game either with floating hands or with the upper-body IK solution than they did with arm tracking. This may be due in part to latency and errors in the tracking, which we can assume would improve as the tech improves. “As long as the participants are busy, they do not even really notice that their arms are not exactly in the same pose,” say Steinberger and Parger. “On the other hand, if your hand is always a few frames in the past, the participants notice that something is wrong with their avatar and it becomes difficult to steer.”
Animating body parts in VR matters beyond your first-person view. “In slower applications, like social apps with avatars, the limitations of the IK solver become more visible,” Steinberger and Parger explain. The developers at Stress Level Zero also encountered this issue on their multiplayer VR title Hover Junkers, and made a video on how adding shoulders to their Final IK animation solution improved both the realism of the player model movements and non-verbal player communication.
Elbows Are The Real Trick
VR experiences that have gameplay interactions specific to the upper arms are still few and far between; Steinberger and Parger say they would have liked to see a first-person body in Superhot VR, if only for the sake of being able to strike enemies with your elbow. As cool as it might be to shatter an assailant in Superhot with your elbow, though, the Graz team found it’s the most difficult joint to deal with:
While the shoulders are also difficult to predict, they are not visible to you in VR most of the time. The forearms and elbows are nearly always on your screen and the elbow has a large freedom to move. This becomes most apparent when you are only moving your elbow but not your hand, and the virtual elbow stays in the same pose. Without additional body tracking hardware, this problem cannot be solved by any IK or simulation software.
If you look at the beginning of the Graz team’s video demonstration, you’ll see that even their sophisticated upper-body-only solution often puts the elbows a few degrees off from the tracked arm data. Smith agrees with Steinberger and Parger: “Nothing out there can accurately place elbows that I’ve used,” he says.
In his piece arguing that Half-Life: Alyx would be better off with arms, PC Gamer’s Christopher Livingston makes note of the elbow problem:
Granted, arms can occasionally be weird and unconvincing in VR too, as anyone knows who has played a full-bodied game in VR and put their controllers down, or held them in the same hand, or made some sort of movement that briefly confused their virtual elbows. Arms are probably tricky to do well in VR.
With a nod to Half-Life: Alyx’s four-year development cycle, Livingston poses a rhetorical question: “If anyone can crack the elbow code, shouldn’t it be Valve?” From what I’ve seen, the Graz team has gotten as close as anyone else (admittedly, with far fewer resources at their disposal than Valve might have), and their stance is plain and simple: you can’t solve for the elbows with current tracking solutions, only approximate them.
If you want perfect, one-to-one arms in VR, you’ll need additional tracking hardware—and if VR is to gain a stronger foothold, either as a platform for gaming or as a more all-encompassing medium, all of that hardware needs to become more widely accessible. Perhaps one day the floating-appendage experiences of this current generation of VR will be seen as outdated, but for now there are legitimate design and technological reasons for not including first-person limbs.