Glossary

List of Common Interaction Metaphors

Virtual Hand

Description :- SELECTION/MANIPULATION technique.

One-to-one mapping between the physical and virtual hand.

Inputs :- 

1. Tracked hand position

2. Discrete input (button) to indicate selection. 

Notes :- 

Disadvantage is that the user must be close to an object to select/manipulate it, but it is a very natural interaction metaphor.

Possible Events generated:-
Implementation :- 

SELECTION - Simple traversal of scene graph to find collision between object and tracked hand.

MANIPULATION - Attach the object to the hand in the scene graph, so it inherits the hand's transformations, then on release reattach the object to the world.
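A minimal Python sketch of the reparenting step. The Node class, its method names and the use of numpy 4x4 matrices are illustrative assumptions, not taken from any particular scene-graph toolkit.

import numpy as np

class Node:
    # Hypothetical scene-graph node: a local 4x4 transform plus a parent link.
    def __init__(self, local=None, parent=None):
        self.local = np.eye(4) if local is None else local
        self.parent = None
        self.children = []
        if parent is not None:
            parent.add(self)

    def add(self, child):
        if child.parent is not None:
            child.parent.children.remove(child)
        child.parent = self
        self.children.append(child)

    def world(self):
        # Accumulate local transforms from this node up to the root.
        m = self.local
        p = self.parent
        while p is not None:
            m = p.local @ m
            p = p.parent
        return m

def reparent_keep_world(obj, new_parent):
    # Move obj under new_parent without changing its world pose.
    world = obj.world()
    new_parent.add(obj)
    obj.local = np.linalg.inv(new_parent.world()) @ world

root = Node()
hand = Node(parent=root)
obj = Node(parent=root)
reparent_keep_world(obj, hand)   # selection: object now inherits the hand's motion
reparent_keep_world(obj, root)   # release: object is reattached to the world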

 

QuickTime Video of Virtual Hand User In CAVElike IPT (6mb)

Ray Casting

Description :- SELECTION/MANIPULATION technique.

Ray cast from tracked hand, nearest intersected object is potentially selected.

Inputs :- 

1. Tracked hand position and direction vector.

2. Discrete input (button) to indicate selection.

Notes :- 

Proven to perform well; selection only needs the user to control 2 DOF, not 3.

Possible Events generated:-
Implementation :- 

SELECTION - Simple traversal of scene graph to find collision between object and ray.

MANIPULATION - Attach the object to the hand's ray in the scene graph, at the distance it was at the time of selection, so it inherits the hand's transformations, then on release reattach the object to the world.
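A minimal Python sketch of the selection test, approximating each object by a bounding sphere (the object list, names and numpy usage are illustrative, not from any particular toolkit); manipulation then reuses the same reparenting idea as for the virtual hand, with the object held at a fixed distance along the ray.

import numpy as np

def nearest_hit(origin, direction, objects):
    # objects: list of (name, centre, radius) bounding spheres standing in
    # for a traversal of the scene graph.  Returns the nearest hit object
    # and its distance along the ray, or (None, inf) if nothing is hit.
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for name, centre, radius in objects:
        oc = centre - origin
        t = float(np.dot(oc, direction))      # distance to closest approach
        if t < 0:
            continue                          # object is behind the hand
        d2 = float(np.dot(oc, oc)) - t * t    # squared ray-to-centre distance
        if d2 <= radius * radius and t < best_t:
            best, best_t = name, t
    return best, best_t

objects = [("door", np.array([0.0, 0.0, 5.0]), 1.0),
           ("lamp", np.array([0.2, 0.0, 2.0]), 0.3)]
print(nearest_hit(np.zeros(3), np.array([0.0, 0.0, 1.0]), objects))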

 

QuickTime Video of RayCasting User In CAVElike IPT (1.4mb)

Occlusion Selection "Sticky Finger"

Description :- SELECTION technique.

Ray cast from eye through hand, nearest intersected object is potentially selected. The effect is that you place your hand over the object on the image plane to select it.

Inputs :- 

1. Tracked hand position.

2. Tracked Head Position (eye)

3. Discrete input (button) to indicate selection.

Notes :- 
Possible Events generated:-
Implementation :- 
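A minimal Python sketch of the only new step: building the selection ray from the two tracked points (names and numpy usage are illustrative). The resulting ray can then be fed to the same nearest-intersection test as ordinary ray casting.

import numpy as np

def occlusion_ray(eye, hand):
    # Cast from the eye through the hand, so whatever the hand visually
    # covers on the image plane lies along this ray.
    direction = hand - eye
    return eye, direction / np.linalg.norm(direction)

origin, direction = occlusion_ray(np.array([0.0, 1.6, 0.0]),
                                  np.array([0.2, 1.3, 0.5]))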

Go-Go arm extension

Description :- SELECTION/MANIPULATION technique.

Like virtual hand, but the user's reach is greatly extended by a non-linear mapping between real hand-torso distance and virtual hand-torso distance. For close objects the mapping is one-to-one, and it becomes non-linear after some threshold distance is crossed.

Inputs :- 
Notes :- 
Possible Events generated:-
Implementation :- 
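A minimal Python sketch of one possible mapping, following the commonly cited Go-Go form (linear up to a threshold, then quadratic growth); the threshold and gain values here are arbitrary tuning parameters, not canonical.

import numpy as np

def go_go_hand(torso, real_hand, threshold=0.5, gain=6.0):
    # Non-linear arm extension: one-to-one within `threshold` metres of the
    # torso, then the virtual arm grows quadratically with the extra reach.
    offset = real_hand - torso
    r_real = np.linalg.norm(offset)
    if r_real < threshold:
        return real_hand                      # one-to-one region
    r_virtual = r_real + gain * (r_real - threshold) ** 2
    return torso + offset * (r_virtual / r_real)

print(go_go_hand(np.zeros(3), np.array([0.0, 0.0, 0.4])))   # unchanged
print(go_go_hand(np.zeros(3), np.array([0.0, 0.0, 0.8])))   # extended reach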

HOMER (Hand-centered Object Manipulation Extending Ray-casting)

Description :- MANIPULATION technique.

Uses ray casting to select, then moves the virtual hand to the object for manipulation. Any subsequent change in the orientation of the tracked hand changes the orientation of the object (like virtual hand), but any change in the position of the hand moves the object according to a linear mapping between physical hand-torso distance and virtual hand-torso distance.

Inputs :- 

1. Tracked hand position and orientation.

2. Tracked head or torso position.

3. Discrete input (button) to indicate manipulation start stop.

Notes :- 
Possible Events generated :-
Implementation :- 
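A minimal Python sketch of the position part (orientation is simply copied from the tracked hand, as in virtual hand); the class name and numpy usage are illustrative assumptions.

import numpy as np

class HomerGrab:
    def __init__(self, torso, hand, obj_pos):
        # Scale factor fixed at grab time so the virtual hand starts on the object.
        self.scale = np.linalg.norm(obj_pos - torso) / np.linalg.norm(hand - torso)

    def virtual_hand(self, torso, hand):
        # Each frame: scale the physical hand-torso offset to get the
        # virtual hand (and hence grabbed object) position.
        return torso + self.scale * (hand - torso)

torso = np.array([0.0, 1.2, 0.0])
grab = HomerGrab(torso, hand=np.array([0.0, 1.2, 0.6]),
                 obj_pos=np.array([0.0, 1.2, 6.0]))
print(grab.virtual_hand(torso, np.array([0.0, 1.2, 0.7])))   # -> z = 7.0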

Scaled World Grab

Description :- MANIPULATION technique.

On selection user is scaled (or equivalently the world is scaled) so that the virtual hand touches the selected object. In a mono system and/or when the user doesn’t move, the user doesn’t notice the scaling, i.e. the image remains the same before and after scaling. However when the user moves he realises he is a giant (or a dwarf?).

Inputs :- 

1. Tracked hand position and orientation.

2. Tracked head position.

3. Discrete input (button) to indicate manipulation start stop.

Notes :- 
Possible Events generated :-
Implementation :- 

Simply scale the world about the eye by the ratio:

(eye-hand dist) / (eye-object dist).
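A minimal Python sketch: the scaling must be centred on the eye for the image to stay the same, so build a scale-about-a-point matrix from that ratio (the numpy 4x4 matrices and function names are illustrative).

import numpy as np

def scale_about_point(scale, centre):
    # 4x4 matrix that scales the world uniformly about `centre` (the eye).
    t_to = np.eye(4);   t_to[:3, 3] = -centre
    t_back = np.eye(4); t_back[:3, 3] = centre
    return t_back @ np.diag([scale, scale, scale, 1.0]) @ t_to

def scaled_world_grab(eye, hand, obj_pos):
    # Scale so that the selected object ends up at the hand's distance from the eye.
    scale = np.linalg.norm(hand - eye) / np.linalg.norm(obj_pos - eye)
    return scale_about_point(scale, eye)

m = scaled_world_grab(eye=np.array([0.0, 1.6, 0.0]),
                      hand=np.array([0.0, 1.4, 0.5]),
                      obj_pos=np.array([0.0, 1.0, 8.0]))   # apply m to the world root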

 

World In Miniature (WIM)

Description :- SELECTION/MANIPULATION/TRAVEL technique.

User selects and manipulates objects in a miniature handheld copy of the world. Because the WIM is always near the user, we can use the simple virtual hand for selection and manipulation. Can also be used for navigation by including a representation of the user in the WIM and moving this about.

Inputs :- 

1. Tracked hand position and orientation.

2. Discrete input (button) to turn WIM on/off.

3. Discrete input (button) to indicate manipulation/selection start stop.

Notes :- 
Possible Events generated :-
Implementation :- 

You need a copy of the scene graph that has the functionality of the normal full-scale environment.
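A minimal Python sketch of keeping the two copies consistent, assuming the WIM is simply the world scene graph duplicated under a hand-held node with a uniform scale; the scale factor and function names are illustrative assumptions.

WIM_SCALE = 0.01   # 1:100 miniature; a tuning choice, not a canonical value

def world_to_proxy(p_world):
    # Where an object's proxy sits inside the hand-held miniature
    # (in the WIM node's local frame; the hand's transform places the
    # miniature itself in the world).
    return p_world * WIM_SCALE

def proxy_to_world(p_proxy):
    # Mirror a manipulated proxy back onto the full-scale object.
    return p_proxy / WIM_SCALE

# Each frame, for every (object, proxy) pair, whichever copy the user is
# currently manipulating drives the other, so the two scene graphs stay in sync.
print(proxy_to_world(world_to_proxy(12.5)))   # 12.5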

Pointing

Description :- TRAVEL technique.

User specifies direction of travel vector with some tracked device or prop, usually attached to the hand, like a wand or glove.

Inputs :- 

1. Direction vector (position not needed?).

2. Button or joystick to start/stop movement (if not using continuous automatic movement).

Notes :- 
Possible Events generated :-
Implementation :- 

Very simple, translate viewpoint by (direction vector * velocity)
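A minimal per-frame update in Python; the speed and frame time are illustrative parameters.

import numpy as np

def pointing_step(viewpoint, hand_direction, speed, dt):
    # Translate the viewpoint along the wand/hand direction at `speed` m/s.
    d = hand_direction / np.linalg.norm(hand_direction)
    return viewpoint + d * speed * dt

viewpoint = np.array([0.0, 1.6, 0.0])
viewpoint = pointing_step(viewpoint, np.array([0.0, 0.0, -1.0]),
                          speed=2.0, dt=1 / 60)   # call once per frame while the button is held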

Gaze-Directed steering (not eye tracking)

Description :- TRAVEL technique.

User's head is tracked and the direction the head is pointing determines the direction of travel.

Inputs :- 

1) Head direction vector (position not needed?).

2) Button to start/stop movement (if not using continuous automatic movement).

Notes :- 

Movement may be constrained to the ground plane or not. Disadvantage that the user can’t look around whilst moving.

Possible Events generated :-
Implementation :- 

Very simple, translate viewpoint by direction vector * velocity
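As with pointing, but with the ground-plane constraint from the notes; a minimal Python sketch assuming y is the up axis.

import numpy as np

def gaze_step(viewpoint, head_direction, speed, dt, ground_plane=True):
    # Per-frame step along the head direction; optionally drop the vertical
    # component so movement stays on the ground plane.
    d = np.array(head_direction, dtype=float)
    if ground_plane:
        d[1] = 0.0
    d /= np.linalg.norm(d)
    return viewpoint + d * speed * dt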

Map Based Travel

Description :- TRAVEL technique.

User moves a “user icon” on a map representation of the VE, then the system smoothly animates the user to the selected position. (The system can instead transport the user instantly, but this can cause disorientation; games often have a toggle so that new environments are traversed with smooth animation, while in familiar environments the user can just jump to the selected point.)

Inputs :- 

Needs the user to be able to manipulate objects on a map, so might have a ray pointer to select and move the user icon, but could also use any of the selection/manipulation techniques. For this reason the inputs depend on the choice of selection/manipulation metaphor.

Notes :- 
Possible Events generated :-  
Implementation :- 

Quite complex: need a map representation of the world, and then need to know the mapping between the user-icon position on the map and the virtual world (i.e. scale factor, origin of the world in the map, etc.). Also might need to calculate a path that avoids objects in the world.
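A minimal Python sketch of the map-to-world mapping and the smooth transition; the map origin, scale and the straight-line interpolation are illustrative assumptions (a path-planning step to avoid objects would replace the interpolation).

import numpy as np

def map_to_world(icon_pos_on_map, map_origin_world, map_scale):
    # Convert the 2D user-icon position on the map into a world position.
    # map_origin_world: world point that the map's (0, 0) represents.
    # map_scale:        world metres per map unit.
    x, z = icon_pos_on_map
    return map_origin_world + np.array([x, 0.0, z]) * map_scale

def animate(start, target, duration, t):
    # Linear interpolation of the viewpoint over `duration` seconds.
    a = min(max(t / duration, 0.0), 1.0)
    return (1.0 - a) * start + a * target

target = map_to_world((0.3, 0.7), np.array([-50.0, 0.0, -50.0]), map_scale=100.0)
# then each frame: viewpoint = animate(start, target, duration=3.0, t=elapsed)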

Target Selection

Description :- TRAVEL technique.

A bit like map-based travel: the user selects a (visible) object, then the system smoothly animates the user to the object's position.

Inputs :- 

Needs the user to be able to select objects in the world, so again any of the selection techniques can be used. For this reason the inputs depend on the choice of selection metaphor. Might just select targets from a list/menu.

Notes :- 
Possible Events generated :-  
Implementation :- 

Fairly simple once selection metaphor is implemented. Again might need to calculate an automatic path that avoids objects in world.

Grabbing the air

Description :- TRAVEL technique.

User stretches out arm and grabs, then any subsequent hand gestures move the world around the user. (like pulling on a rope, used in the game "Black and White")

Inputs :- 

1. Needs tracked hand (usually using pinch gloves).

2. Button to grab and release (or pinch with gloves).

Notes :- 

Can be physically tiring, because needs large arm/hand movements.

Possible Events generated :-  
Implementation :- 

Translate the world according to the vector from the initial grab point to the current hand position; do this every frame until the grab is released.
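A minimal per-frame Python sketch; the class name is illustrative.

import numpy as np

class AirGrab:
    def __init__(self, hand_at_grab):
        # Remember where the hand was when the grab started.
        self.anchor = np.array(hand_at_grab, dtype=float)

    def world_offset(self, hand_now):
        # Translation to apply to the world root this frame: the world
        # follows the hand, as if pulling on a rope.
        return hand_now - self.anchor

grab = AirGrab([0.3, 1.2, -0.5])
print(grab.world_offset(np.array([0.3, 1.2, -0.2])))   # reapplied every frame until release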

Liang's Cone

Description :- SELECTION/MANIPULATION technique.

Like ray casting, but instead of a ray a cone is used. The nearest object to the user which is inside the cone may be selected.

 

See "Geometric Modeling using 6 DOF input devices" Jiandong Liang, Mark Green.

Inputs :- 

1. Tracked hand position and direction vector.

2. Discrete input (button) to indicate selection.

Notes :- 

Was designed to allow easy pickup of very small objects, which might be difficult using ray casting. Can have trouble selecting partially occluded objects, as only the nearest object to the user in the cone may be selected.

Possible Events generated :-  
Implementation :- 

As with ray casting, but all the objects must be tested to see if they fall within a conic volume defined by the position and direction of the user's wand. The nearest object in the cone to the user is potentially selected.
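A minimal Python sketch of the cone test, approximating each object by its centre point (a bounding-volume test would replace this in a real scene graph); the half-angle is an illustrative parameter.

import numpy as np

def nearest_in_cone(apex, direction, objects, half_angle_deg=5.0):
    # objects: list of (name, centre).  Returns the nearest object whose
    # centre lies inside the cone, or None.
    direction = direction / np.linalg.norm(direction)
    cos_limit = np.cos(np.radians(half_angle_deg))
    best, best_d = None, np.inf
    for name, centre in objects:
        to_obj = centre - apex
        d = np.linalg.norm(to_obj)
        if d > 0 and np.dot(to_obj, direction) / d >= cos_limit and d < best_d:
            best, best_d = name, d
    return best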

Steed's Cone

Description :- SELECTION/MANIPULATION technique.

A variant of Liang's cone. When the button is pressed and held, all objects within the current cone are highlighted, forming a potential set of selectable objects P. The user then moves the cone, and any objects in P which fall outside the cone are removed from P. Once the user has a single object left in P it may be selected by releasing the button. Release of the button before a single object remains in P means no object is selected.

Inputs :- 

1. Tracked hand position and direction vector.

2. Discrete input (button) to indicate selection.

Notes :- 

May have an advantage over Liang's cone in that it can select occluded objects.

Possible Events generated :-  
Implementation :- 

As with Liang's cone, but all objects in the cone are stored, forming a set of potentially selected objects.
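A minimal Python sketch of the per-frame refinement, reusing the same point-in-cone test as Liang's cone; names and parameters are illustrative.

import numpy as np

def inside_cone(apex, direction, centre, half_angle_deg=5.0):
    to_obj = centre - apex
    d = np.linalg.norm(to_obj)
    return d > 0 and np.dot(to_obj, direction) / d >= np.cos(np.radians(half_angle_deg))

def refine(candidates, apex, direction, positions):
    # One frame while the button is held: keep only the candidates still
    # inside the cone.  positions maps object name -> centre.
    direction = direction / np.linalg.norm(direction)
    return {n for n in candidates if inside_cone(apex, direction, positions[n])}

# On button press:  candidates = set of every object currently in the cone.
# Each frame after: candidates = refine(candidates, apex, direction, positions).
# On release:       the selection succeeds only if exactly one candidate remains.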

 

 

Common Virtual Reality Terminology

IPT

Immersive Projection Technology

Such as CAVE and Reactor multi-walled displays

 

QuickTime Video of Virtual Hand User In CAVElike IPT (6mb)

HMD

Head Mounted Display

Such as ???

QuickTime Video of HMD user

CVE

Collaborative Virtual Environment