
Navigating Virtual Reality Worlds with the Leap Motion Controller


FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO

Navigating Virtual Reality Worlds with Leap Motion Controller

Rui Miguel de Paiva Batista

Mestrado Integrado em Engenharia Informática e Computação

Supervisor: Prof. Rui Pedro Amaral Rodrigues

Co-Supervisor: Prof. Jorge Carlos Santos Cardoso


Navigating Virtual Reality Worlds with Leap Motion Controller

Rui Miguel de Paiva Batista

Mestrado Integrado em Engenharia Informática e Computação

Approved in oral examination by the committee:

Chair: Nuno Honório Rodrigues Flores (Professor)

External Examiner: Pedro Miguel do Vale Moreira (Professor)

Supervisor: Rui Pedro Amaral Rodrigues (Professor)


Resumo

As visitas comerciais a imóveis têm como objetivo dar mais detalhes a um potencial comprador sobre a habitação. Este processo não é eficiente. O comprador tem que se deslocar fisicamente ao local da habitação, o que custa tanto dinheiro como tempo. De forma a tentar mitigar estes fatores começam a aparecer no mercado imobiliário as visitas virtuais. Estas visitas colmatam as falhas mencionadas, pois o comprador apenas tem que se deslocar a um ponto de acesso como o seu computador pessoal. No entanto, estas visitas virtuais também têm as suas desvantagens. Embora os imóveis virtuais possam ser criados com um grande detalhe, nas visitas virtuais os potenciais compradores não têm um sentimento de presença. Normalmente o comprador apenas pode ver fotos/vídeos de algumas divisões ou ter uma vista superficial semelhante a uma planta. A realidade virtual pode trazer ao mercado imobiliário o sentimento de presença. Com a utilização de periféricos como o "Oculus Rift", o utilizador consegue ser colocado no ambiente virtual numa perspetiva de primeira pessoa. No entanto isto também traz alguns problemas a nível da navegação. Esta dissertação tem como objetivo encontrar e avaliar formas de navegação sustentáveis para mundos virtuais a 3 dimensões usando vários dispositivos (como “Leap Motion”) e compará-las com alguns métodos de navegação existentes, tais como o comando. No âmbito deste trabalho serão usados os dispositivos “Leap Motion, o “Oculus Rift”, e um motor gráfico (e.g. Unity 3D ou Unreal Engine).

O “Leap Motion” é um dispositivo recente para a interação com mundos virtuais. Este consegue detetar a orientação, posição e curvatura das mãos de um utilizador. Em conjunto com o “Oculus Rift”, um dispositivo que permite através de visão estereoscópica oferecer ao utilizador uma perspectiva de primeira pessoa numa cena tridimensional, permite a interação entre o utilizador e um mundo virtual. Como o âmbito principal será o estudo de formas de navegação em mundos tridimensionais, serão tidos em conta alguns dos problemas típicos nos ambientes imersivos, como por exemplo o cansaço sentido durante a navegação. Será usado um motor gráfico de forma a criar a cena tridimensional. O principal caso de utilização será o de uma pessoa que necessita de observar um imóvel e não tem disponibilidade de o visitar fisicamente. Com o “Oculus Rift” será possível colocar o utilizador numa perspetiva de primeira pessoa a visitar virtualmente o imóvel que será criado através dos motores gráficos supracitados. Será no entanto necessário que o utilizador se desloque pelo imóvel, usando para este efeito o “Leap Motion” e outros dispositivos. O caso de utilização principal tem alguns requisitos inerentes, tais como o facto de os utilizadores finais não possuírem experiência com algumas das tecnologias envolvidas, pelo que a curva de aprendizagem para a adaptação terá de ser baixa. Alguns aspetos normalmente associados às primeiras utilizações do “Oculus Rift” serão tidos em conta como, por exemplo, o enjoo ou, no caso do “Leap Motion”, o cansaço após algum tempo de utilização. Como o objetivo desta dissertação é uma avaliação de algo subjetivo, serão efetuados testes de utilização. Estes testes terão associados alguns questionários já existentes como o "Simulation Sickness Questionnaire", que servem como ferramentas de comparação entre simuladores de realidade virtual.

Palavras chave: HCI design and evaluation methods; Interaction paradigms; Interaction devices; Interaction techniques; Interaction design; Accessibility;


Abstract

Real estate tours are a process in which a potential buyer visits a house to learn more details about it. This is an inefficient process at several levels. The buyer needs to travel physically to the house, which brings monetary costs and is a time consuming activity. To mitigate these factors, the market is starting to offer virtual real estate tours. These have several benefits, the main advantage being that the user only has to go to an access point such as a personal computer. Virtual tours also have their disadvantages. The houses built in virtual environments, although detailed, cannot give the potential buyer a feeling of presence. Normally the buyer can only look at pictures of rooms or see the virtual house in a broad perspective, like a blueprint.

Virtual reality can bring the feeling of presence to the real estate market. Using devices like the Oculus Rift, users can be placed in a virtual reality property with a first person perspective. This brings some new challenges, like navigation.

The main goal of the dissertation Navigating Virtual Reality Worlds with the Leap Motion Controller is to find and evaluate several navigation methods to control a virtual character inside a virtual simulator. Recent devices, such as the Leap Motion, will be compared with more traditional devices, like the gamepad. To accomplish the goals set in this dissertation, tools like a game engine (Unity3D or Unreal Engine) will be used. The Leap Motion is a recent device for interacting with virtual worlds: it can detect in real time the position, direction and curvature of the user's hands. It is combined with the Oculus Rift, a device that, through stereoscopic vision, offers the user a first person perspective, allowing the user to interact with the virtual world with a greater level of immersion. With the main goal set, some frequent problems of immersive simulators will be considered (for example, the user's fatigue). To help create the virtual world, a game engine will be used. A survey of the main game engines will be made and one of them will be chosen to create the final application.

The main use case scenario will be a real estate virtual tour. A user that cannot travel to an apartment can, through the application, take a virtual tour of it from anywhere in the world. With the first person perspective given by the Oculus Rift, the user can have an immersive tour and a realistic view of the apartment. However, it will be necessary to navigate through the property. Using a device like the Leap Motion, the user can interact with the world without losing the immersion factor.

The main use case scenario has some requirements that must be taken into account. As the target audience is everyone over eighteen years old, the learning curve of the techniques must be low and the devices must be simple to handle. Also, effects related to the use of the Oculus Rift, like motion sickness and fatigue, will be taken into consideration and evaluated.

To evaluate the several techniques, user tests will be made, supported by existing instruments such as the Simulation Sickness Questionnaire (SSQ). These act as comparison tools for testing virtual reality simulators.

Key Words: HCI design and evaluation methods; Interaction paradigms; Interaction devices; Interaction techniques; Interaction design; Accessibility;


Acknowledgments

I would like to thank all the people that helped me during this dissertation. For the guidance given and all the helpful discussions during the semester, Professor Jorge C. S. Cardoso was of the most valuable help. All his patience, insight and knowledge were essential in the development of this project. I thank all the members of the School of Arts, who gave me the possibility to work in a great environment. A special thanks to Tiago, for the help he gave me locating all possible physical resources for the development and testing at the School of Arts. I would also like to thank my supervisor, Professor Rui Pedro Amaral Rodrigues, for helping me with the document, for all his availability and for providing all the resources for the experiment at FEUP. Another special thanks to Eng. António Sérgio Ferreira, for his "strong words" in this last part of the semester. Thanks also to my girlfriend for all the patience she had in times of stress, always helping me see a brighter future. I would also like to thank all the test subjects that volunteered for my experiment.


“A bruise is a lesson... and each lesson makes us better.”


Contents

1 Introduction
    1.1 Context
    1.2 Problem Definition
    1.3 Tasks and Objectives
    1.4 Dissertation Structure

2 Literature Review
    2.1 Navigation Techniques and Virtual World Interactions
        2.1.1 Metaphors
    2.2 Physical Devices
        2.2.1 Head-Mounted Displays
        2.2.2 Input Devices
    2.3 Evaluation and Metrics
    2.4 Summary

3 Proposed Methodology and Architecture
    3.1 Methodology
        3.1.1 Applications Review
        3.1.2 Techniques Review
        3.1.3 Game Engine Review
    3.2 Architecture

4 Iterative design and implementation of navigation metaphors
    4.1 Techniques Iterations
        4.1.1 Gamepad
        4.1.2 Airplane
        4.1.3 Point to Point
    4.2 Final Iterations and Implementation
        4.2.1 Airplane
        4.2.2 Gamepad
        4.2.3 Point to Point
    4.3 Summary

5 Experimental Procedure
    5.1 Experiment Outline
    5.2 Scenario
    5.3 Experiment
        5.3.1 Procedure
        5.3.2 Physical Setup
        5.3.3 Data Retrieved

6 Results
    6.1 Data Log Analysis
        6.1.1 Paths
    6.2 Questionnaires
        6.2.1 Individual Device Questionnaire
        6.2.2 Simulation Sickness Questionnaire
        6.2.3 Device Preference Questionnaire
    6.3 Summary

7 Conclusions and Future work
    7.1 Future Work

References

A Questionnaires
    A.1 Individual Device Questionnaire
    A.2 Simulation Sickness Questionnaire


List of Figures

2.1 The jumper metaphor
2.2 Sword of Damocles, considered one of the first HMDs
2.3 The Oculus Rift Development Kit 2
2.4 Leap Motion and its representation of the user's hands
3.1 Architecture diagram, an overall view of the system
4.1 Gameobject hierarchy on Unity3D
4.2 Movement gestures on the airplane technique
4.3 Directions that the user could move in the gamepad
4.4 Cylinder from the Point to Point technique
4.5 Top perspective from a house with and without the navigation mesh
5.1 Map from the second part of the experiment
5.2 Map from the third part of the experiment
6.1 Second part paths (path following) by technique
6.2 Third part (search task) paths by technique
6.3 Total time of the experience for each technique for the second part of the experience
6.4 Total time of the experience for each technique for the third part of the experience
6.5 IDQ - Results for question 1
6.6 IDQ - Results for question 2
6.7 IDQ - Results for question 3
6.8 IDQ - Results for question 4
6.9 IDQ - Results for question 5
6.10 IDQ - Results for question 6
6.11 IDQ - Results for question 7
6.12 IDQ - Results for question 8
6.13 IDQ - Results for question 9
6.14 IDQ - Results for question 10
6.15 IDQ - Results for question 11
6.16 IDQ - Results for question 12
6.17 Raw scores of the SSQ from the Game Pad Technique
6.18 Raw scores of the SSQ from the Airplane Technique
6.19 Raw scores of the SSQ from the Point to Point Technique
6.20 Questionnaire answers on less preferred device


List of Tables


Abbreviations

SSQ Simulation Sickness Questionnaire
HCI Human-Computer Interaction
GPU Graphics Processing Unit
HMD Head-Mounted Device
FOV Field Of View
DK1 Development Kit 1
DK2 Development Kit 2
DOF Degrees Of Freedom
API Application Program Interface
P2P Point to Point
NPC Non Playable Characters
LM Leap Motion
FPS Frames per Second
IDQ Individual Device Questionnaire
CSV Comma-separated Value


Chapter 1

Introduction

Navigation in three-dimensional worlds is a subject that has been studied at great length in the past. The continuous evolution of computational power can propel the number of virtual reality applications in the market. Navigation is very important in virtual worlds because, most of the time, if a user needs to perform a certain task he will most likely need to position himself at a target location. Virtual worlds are not a new concept in the minds of users, mainly due to gaming, but the emulation of virtual worlds has many use cases. For example, they can simulate a critical real world situation without the inherent peril. One of the biggest use cases for virtual worlds is pilot training simulation in the aviation industry, testing and evaluating the user without the risk of crashing a real plane. In order to interact with a virtual reality world the user must give inputs to the system. The user's intentions can reach the system in numerous ways, and several peripherals can be used.

One of the most relevant goals in virtual reality worlds is the level of immersion felt by the user. With the advancements made in GPU (graphics processing unit) technologies, developers can create worlds with an increasing level of detail, allowing users to get a better feeling of immersion. Adding the Leap Motion as the input peripheral can give the user a higher sensation of immersion by representing his own hands in real time in the virtual scene. Some techniques proposed by other authors are going to be classified, studied and evaluated, so that the most promising ones can be implemented and tested. The results of these tests will be analyzed and conclusions drawn.

1.1 Context

The evolution of GPUs (graphical processing units) can propel the virtual reality market. With each new generation of graphics cards, developers can achieve virtual worlds that are visually very similar to reality. This allows for a greater sense of immersion. When trying to reach greater levels of immersion, developers tend to use HMDs (Head-Mounted Devices). These devices, like the Oculus Rift, give a first person view to the user. However, to interact with the virtual scenario with traditional devices the user usually has to take the HMD off to find these controllers, because he cannot see his hands. This breaks the feeling of immersion. The Leap Motion controller is a relatively new device that can, in real time, track and represent the user's hands, favoring proprioception (awareness of the position of one's body) and adding to the immersion previously mentioned. The use of HMDs can give the user a first person perspective of the virtual world and, in conjunction with the Leap Motion and a realistic game engine, developers can create a virtual world that conveys a great sense of immersion. In order for a user to interact with objects and explore the virtual world without losing immersion, a good method of navigation has to be found. This dissertation aims to find, implement and evaluate some techniques already presented by other authors in order to find which best fits our scenario. This dissertation was proposed by Professor Jorge Cardoso, a researcher at CITAR (Research Center for Science and Technology of the Arts) of the School of Arts of Universidade Católica Portuguesa, a research center recognized by FCT (Foundation for Science and Technology).

1.2 Problem Definition

In virtual reality scenarios, to perform a task the user's avatar usually has to navigate to a target location. Navigation in virtual reality worlds is well documented and several methods for navigation have been proposed [DRC05][BKH98].

The methods mentioned usually assume peripherals like a gamepad and a wide screen. Introducing the Leap Motion combined with an HMD like the Oculus Rift brings a new paradigm that tries to give the user a greater sense of reality and immersion. This dissertation aims to provide a study of different techniques for virtual reality navigation using an HMD (head-mounted device) and to develop the most promising ones. After the implementation of these techniques, user testing will be required to evaluate the differences between them, trying to reach the best one for the new paradigm by analyzing the results.

1.3 Tasks and Objectives

This dissertation's main objective is to study and compare some of the existing navigation techniques in three-dimensional virtual worlds, using devices like the Leap Motion as input and the Oculus Rift as a display. For that main goal to be reached, there are several secondary tasks which must be achieved:

1. Study of various existing navigation techniques: A survey and study of existing techniques, and of methods to evaluate them, must be performed in order to have the means to classify and compare the different techniques proposed.

2. Study of several peripherals' APIs and their integration with each other: In this dissertation the use of several peripherals is required. To ease the transition from theory to practical implementation, the study of the several devices' APIs is required. To help the development, a game engine will be used to create the virtual scenario. A study of the several game engines is also required for the goals set.

3. Development of the techniques previously studied: After a careful analysis of the relevant methods and evaluation metrics, the selected methods will be implemented in order to allow an empirical evaluation and to extract the features important for this dissertation.

4. User testing: Different tests with users will be made with the selected techniques. To draw conclusions about the techniques, an experiment will be planned and executed. This experiment will have several test subjects performing tasks in order to retrieve data. This data will be analyzed and conclusions drawn.

1.4 Dissertation Structure

This dissertation contains 6 more chapters. In Chapter 2 a bibliographic review is made, where some physical devices are presented, the most relevant metaphors are explained, and some evaluation and categorization methods proposed by several authors are presented.

In Chapter 3, the methodology is presented and the architecture is explained. In Chapter 4, the design process and the implementation of the techniques are explained, including the multiple iterations that each technique went through during the development phase. In Chapter 5, the experimental procedure and the metrics to be gathered are explained in detail. The results from the experimental procedure and their analysis are presented in Chapter 6. In Chapter 7, an overview of the dissertation is presented, along with what was achieved and future work.


Chapter 2

Literature Review

In this chapter the bibliographic content found is reviewed in light of the goals and tasks proposed. The bibliographic review starts with the navigation techniques and some examples of virtual world interactions in Section 2.1. The physical devices that can be used with the techniques are reviewed in Section 2.2. A review of existing methods of evaluation and metrics is presented in Section 2.3. The chapter is concluded with a summary of all the sections, as well as the planning of the dissertation, in Section 2.4.

2.1 Navigation Techniques and Virtual World Interactions

Several techniques for navigating virtual worlds have evolved over time. In order to catalog and organize them into categories, metaphors and taxonomies were created. In this section some metaphors will be analyzed and taxonomies will be presented, as well as several metrics to evaluate them.

2.1.1 Metaphors

In order to organize the navigation techniques, some authors [DRC05] created their own taxonomies, mostly based on metaphors. They explain that a metaphor is a concept that mirrors a concept the user already knows; with that in mind, the user can draw on his own experience and apply it to the new concept that the metaphor tries to explain. These metaphors come from the experience of manipulating a camera in a virtual scene, because in most of them we can see the camera as an avatar that represents the user, and what the camera presents is also the user's field of view. Several taxonomies have been adopted by several authors. DeBoeck [DRC05] divides them into two main categories: Direct Camera Control and Indirect Camera Control. In the first, the user directly controls his position and orientation with an input device. In indirect camera control, on the other hand, the user can only select the region of interest, and the system has the responsibility of moving the user toward the place he indicated. Within direct camera control, [DRC05] also separates techniques into user centric and object centric. The first, user centric, includes the metaphors better suited to world exploration, while object centric is more focused on exploring single objects in a scene. Another taxonomy, presented by [BKH98], focuses on separating the techniques into the components that compose locomotion: the first component is named direction/target selection, and consists of having the user pick the area of interest where he wants to navigate. The second is velocity/acceleration selection, where the user controls his velocity or acceleration. The third category differentiates the types of inputs the user can give to the system. Some metaphors described by the two authors follow:

2.1.1.1 Eyeball in Hand

Also known as camera in hand, in this metaphor the user has a perspective equivalent to holding the camera in his own hand. He sees everything in a first person perspective and all inputs are translated into direct geometric transformations of the camera. DeBoeck places this category in his taxonomy as user centric direct camera control.

2.1.1.2 World in Hand

In this metaphor [TDD04] the camera is placed in a location and does not move. Also in a first person perspective, the user's inputs are translated into geometric transformations of the world, allowing the user to navigate through it.

2.1.1.3 Flying Vehicle

Also known as airplane metaphor [WO90], the user can treat the camera as an airplane, giving him liberty to change the speed of his translations and/or rotations.

2.1.1.4 Walking

This metaphor does not allow the user to control the distance of the camera to the ground; it is fixed at a constant height. This metaphor is very similar to the Flying Vehicle, except that it has fewer degrees of freedom.

2.1.1.5 Gestures

This metaphor, also known as pointing techniques and explained in [TDD04], comes from experiments in which the author found that test subjects intuitively tried to use a pointing gesture to tell the system where they wanted to go. This metaphor also includes users pointing at the desired location with any kind of device, like the Leap Motion, a remote, or even a stylus.


2.1.1.6 Gaze-Directed

In this metaphor users can navigate through the virtual world by picking a region of interest and having the system automatically navigate them to the desired place. This metaphor distinguishes itself from the Gestures metaphor in the picking phase. Instead of picking the region of interest with a gesture or a stylus, the user picks it with his gaze, by looking at the region. This technique can be used with a camera tracking the eyes of the user, or with a head-mounted display that can calculate where the user is focusing his gaze.

2.1.1.7 Discrete

In this metaphor the user cannot use a continuous method for navigating. The user can select his target location in a list or a menu presented to him. After the selection the system is in charge of moving the user.

2.1.1.8 Jumper

This metaphor, presented in [BBS11], tries to combine real world and virtual world interactions. It was implemented with the goal of exploring objects in close proximity, and also of navigating larger worlds. As seen in figure 2.1, the user can select the target location, making the system produce a jump that lands the avatar on the spot previously chosen. The difference between this metaphor and the Discrete one is that the picking is made in real time, by selecting a place directly in the environment instead of through a textual interface. The navigation system also usually provides visual feedback of the navigation, a small animation, to the user.

Figure 2.1: The jumper metaphor: (a) how the target selection is presented; (b) the animation from the jump; (c) The target location after the jump

2.1.1.9 Possession

In this metaphor the user is in a first person view and, with a peripheral device, points to the target destination. The system then shows a representation of the user's future position, allowing the user to correct his positioning if he so desires. One evolution of this metaphor is where the user can also decide the orientation of the representation, giving him more control over his navigation.


2.2 Physical Devices

This section aims to present the physical devices that were considered relevant to this dissertation. These peripherals are usually the contact point between the user and the system, either to give inputs or to give feedback to the user. The input peripherals are devices like the gamepad or the keyboard. These allow the user to give commands to the system to execute the desired actions. The devices that give visual feedback to the user are HMDs (Head-Mounted Displays). These allow the user to see the virtual world in a first person perspective.

2.2.1 Head-Mounted Displays

Head-Mounted Displays are devices that allow a user to see through them. They can either augment reality or give the user the sensation of being in a virtual world. The first HMD, the Sword of Damocles (figure 2.2), was created in 1968 by Ivan Sutherland and Bob Sproull [Sut68]. Their concept of a three-dimensional head-mounted display is the same we still use today in well known devices like the Oculus Rift. The main idea is to create a three-dimensional perspective from two two-dimensional images of the same scene, with slightly different perspectives, creating the illusion that the user is in a three-dimensional world. The main goal for Sutherland and Sproull was to create a device that could emulate real life, so that when the user moved his head the perspectives in the HMD moved as well. Since 1968, HMDs have changed and evolved. The miniaturization of technology helped in that sense: by having more computational power in smaller devices, different types of HMDs emerged, allowing different types of use. Google's HMD (Google Glass), for example, can produce virtual content that is superimposed on reality, augmenting it. These types of devices create an overlay of virtual elements on the user's real FOV (field of view).

The Oculus Rift, on the other hand, is a fully closed HMD: it does not display anything from the real world (without other devices), but allows the user to be in a completely new world, in a virtual environment. This type of HMD completely fills the user's FOV, so images from the real world can only be seen if captured and relayed to the HMD. This dissertation will focus on this last type of HMD because it better suits the main focus of the dissertation, which is navigating virtual reality worlds with some immersion. A list of HMDs that could be used in this dissertation, and their differences, follows.

Figure 2.2: Sword of Damocles, considered one of the first HMDs

2.2.1.1 Oculus Rift

The Oculus Rift [Ocu] is a binocular head-mounted display, shown in figure 2.3, and undoubtedly one of the most famous. The company Oculus VR started by releasing the Development Kit 1, a head-mounted display for developers to start getting in touch with the platform. In July 2014 the company released the Development Kit 2, the current version known to developers, which consists of two high resolution displays, 2160x1200 (1080x1200 for each eye), with a refresh rate of 90 Hz. The Oculus Rift is a fully closed HMD: two displays present to each eye a different perspective of the same image, creating the illusion to the user that the objects in the image are in a three-dimensional space. The DK2 also has a tracking system, using the accelerometer and a small camera (included in the kit), that allows developers to know the position and orientation of the user's head with 6 degrees of freedom (DOF). Another great advantage of the DK2 is that some game engines, like Unity3D, already have APIs (application programming interfaces) for integration with the Oculus, which allows the developer to create applications with ease.

Although Oculus VR still has not released a commercial version for non-developer users, the company has announced that there will be one in March 2016.

Figure 2.3: The Oculus Rift development kit 2

2.2.1.2 HTC Re Vive

The HTC Re Vive [HTC] is a binocular head-mounted display developed by HTC in conjunction with Valve [Valb]. Like the Oculus Rift, it is still in development, with only the developer version available for purchase. The Re Vive has two displays with 1080x1200 pixels per eye and a refresh rate of 90 Hz. Tracking with the Re Vive is performed with a gyroscope and an accelerometer. Some base stations are included in the kit. With these, developers can track not only the position of the head, as in the Oculus, but also the movement of the user in small rooms. The room must be approximately 4.5 square meters, and the base stations must be placed and configured in the room beforehand. This represents a big advantage of the provided system: developers can easily track the movement of the user.

2.2.1.3 Samsung Gear VR

The Samsung Gear VR [Sama] is a head-mounted display to be used with the Samsung Note 4 [Samb]. The Gear VR works as a support for the mobile phone, which presents the content to be displayed, so the Gear VR is limited by the phone's specifications and screen size. The Samsung device has a 5.7 inch screen with a resolution of 2560x1440 for both eyes. Besides the support, the Gear VR also has proximity sensors, a gyroscope and an accelerometer, which allow the user to watch the content through the two lenses.

2.2.2 Input Devices

This section aims to analyze devices whose purpose is to allow the user to control the system. The user can interact with these devices in different ways: he can physically push buttons or move a joystick, or he can have his hands tracked by a camera, with his gestures captured and recognized to control the system.

2.2.2.1 Game controller

This type of device is one of the best known to users in virtual reality navigation, with its benefits proven by several years of use in games. The gamepad, for example, is one of the most common. Mostly used to play games, the user uses this device to navigate the virtual world and perform the game objectives. Another, more specific, device is the joystick. Instead of the eight discrete directions of a directional pad (up, down, left, right and their diagonals), the joystick allows a more precise navigation through the world, in all those directions and the ones in between. The evolution of the gamepad led to the incorporation of joysticks into it, giving the user more liberty to choose his favorite method of navigation. In its later evolution the gamepad became one of the most used devices for navigation, especially due to the years of experience that games brought to users.

2.2.2.2 Keyboard/mouse

Another way to control an avatar in virtual reality worlds is using a keyboard and mouse. These have also proved to be efficient in gaming. The keyboard is commonly used for translation movements like forward and backward, while the mouse is responsible for the rotation of the camera. A few implementations of virtual reality simulations are made only with the mouse, making the camera move at a constant speed throughout the simulation.


2.2.2.3 Leap Motion

The Leap Motion controller was created by the Leap Motion company in 2010. It can track the user's hands and gestures in real time and give the user visual feedback of his hands. The Leap Motion is a device that can detect hands within one meter in front of it. The device has two infrared monochromatic cameras and three LEDs. The LEDs produce infrared light, and the reflection of objects in that light is captured by the cameras and interpreted by the computer connected to the device. The Leap Motion is connected to a computer, which allows a three-dimensional representation of the reflected object on the computer's screen. An API was developed for application development, allowing several programming languages to be used (for example C++ or JavaScript). It can also connect to several game development engines such as Unity3D [Uni] and Unreal Engine [Gamb].

The device is approximately 7.6cm x 3cm x 1.3cm (length x width x height), making it a small device in comparison with its competitors, namely the Kinect [Mic], and it also has a reduced price in comparison (around 100 euros). The controller also excels in precision: some studies have reported that the Leap Motion has an error margin of a "hundredth of a millimeter" [WBRF13], although this measure was achieved with a robotic arm. The Leap Motion controller is connected with a USB cable, which allows PC and Mac usage. There is also an online store called "Airspace" where the user can download several applications to explore the controller. Some applications have specific requirements or limitations, such as the mandatory use of the Oculus Rift, or only working on PC or Mac. The device can track 10 fingers individually, the palm of the hand and the arm of the current user. It can detect their position, rotation and orientation, and track them in real time with great precision. The cameras are able to capture up to 290 frames per second, against the 30 frames provided by the Kinect. The controller already comes with some predetermined gestures implemented, like the "swipe gesture" and the "key tap gesture". The framework allows the implementation of custom-made gestures and their association with specific actions.

Some problems reported with the usage of the device are related to limit situations in which the mechanism loses some of its precision. As the Leap Motion uses cameras to track the user's hands, it can only detect features of objects in front of the device, so if the user's hands overlap each other the cameras cannot detect the occluded hand, and its representation becomes unstable and incorrect. The same problems occur when the hands are at the boundaries of the Leap Motion's field of view. Another well documented problem is the sensitivity to lighting conditions: in a very bright room, the infrared cameras can have problems detecting the user's hands, even though the device already has a compensation mechanism for this kind of situation.

Figure 2.4: Leap Motion and its representation of the user's hands

2.3 Evaluation and Metrics

To evaluate the previously mentioned techniques, the taxonomy presented in [BKH98] will be used. In this article, the authors argue that in order to better understand their effect we should present the techniques in several categories. The authors state that a higher level of understanding can be achieved with this approach. The taxonomy splits techniques into 3 main branches: "Direction/target selection", "Velocity/Acceleration Selection", and "Input conditions". With this taxonomy we can easily classify and organize our own and other techniques. Besides the categorization, the authors of the same article [BKH98] also present a list of quality factors, which are "measurable characteristics of the performance of a technique", allowing a qualitative evaluation of the traveling techniques. The metrics presented are the following: speed, accuracy, spatial awareness, ease of learning, ease of use, information gathering, presence, and user comfort.

2.4 Summary

This bibliographic review shows that there are numerous techniques proposed, although only a few are relevant to this work. The use of different peripherals for navigation will have to be carefully analyzed, because it can require implementation changes which, in turn, can lead to results different from those expected. The performance measures found in this section are all relevant for this work, but some, like presence and information gathering, will not be its focus. Besides these characteristics, some questionnaires will be presented in the experimental session to gather information about sickness and fatigue felt with each technique. These questionnaires are usually used to compare virtual simulators.

As user testing is proposed, the implementation period will need to end quite early, and since the study and comprehension of the several APIs and their integration requires a bigger time slot than expected at the beginning, there is little to no margin for error. The testing will also have to be planned very carefully because of that margin of error, especially while creating the virtual reality worlds.


Chapter 3

Proposed Methodology and Architecture

This chapter aims to explore the initial steps of the proposed work. The main decisions made regarding the development and testing phase will be detailed.

In Section 3.1 the methodology behind the work is presented, along with the origin of the techniques and the choice of game engine.

In Section 3.2 the architecture of the application is explained in detail, showing the several modules for the different techniques and how they interact with the chosen peripherals.

3.1 Methodology

The goal of this section is to explain the methodology behind the implemented work, by exploring the reasoning behind the selection of techniques.

The work started with an in-depth study of the metaphors found in the bibliographic review. This study helped make a first selection of techniques chosen for development. The main stores from Leap Motion and Oculus Rift were targeted and a review of the applications found was made (as presented in Section 3.1.1). Before the development phase, a review of the main existing game engines was also made. In Section 3.1.3 the reasons for the choice of game engine are explained and a review is also presented. After selecting the development platform and the techniques to implement, the development phase started with a preliminary implementation of the techniques and their testing (alpha release). This phase was important to refine the techniques with the feedback given by users in the alpha tests. The user feedback allowed a better understanding of the techniques and helped improve them.


3.1.1 Applications Review

Besides the several metaphors mentioned before, several techniques were explored to understand the next steps needed for this study. The main search parameters were the simplicity of use of the techniques and the mitigation of motion sickness. For this study, an extensive search of the Leap Motion store as well as the Oculus Rift store was conducted. As both stores are live, several applications are uploaded to them daily. To circumvent this problem, only the applications available at the date of the study and deemed relevant were considered. All the applications submitted after that date were discarded.

The snapshot was taken on September 18th, 2015, and around 120 applications were chosen to be part of the study. Most of them were paid apps, so the only way to evaluate them was to find some use case videos. Most of the time it was possible to find the video on the store page referring to the game, but some applications did not provide one, so some extra research was needed (Google/YouTube). The selection criterion for the applications was that they had to have some kind of character that navigated. Applications where the avatar moved only in two dimensions were also considered. The main focus of this study was the locomotion of the character and its interaction with the surroundings. Although several applications matching the criteria were found, they can be summarized in three types of locomotion:

• Infinite Scroller:

In this type of game, the movement is made automatically, normally in two dimensions. The user only needs to control the direction of the movement, as the system automatically propels him forward/upward. This kind of application does not have a winning scenario: the main goal is to beat the high score, gathering power-ups that help with the progression. The locomotion, as mentioned before, is produced with the help of the system. The user only has to control the direction with his own hands, in the case of the Leap Motion, and/or with the Oculus Rift, depending on the game.

In the games using only the Leap Motion, the device is placed on the table in front of the screen (table mount), giving the user the possibility to place his elbows on the table while controlling the application. Normally, in this type of game, the direction of the movement is directly mapped to the user's hands, giving him control of the character by simply moving his hand left/right and/or up/down. In some rare cases the user can also control the velocity of the automatic movement.

• Discrete Navigation:

In this category, all the applications without continuous movement were included. In these games the user has to select the target destination and the system transports him to it. Various modes of navigation were found: some use teleporting, others use a small animation to give visual feedback. In the game Deify [wg], for example, the player is in a first person perspective and can rotate his character using the Oculus Rift. With the Leap Motion controller the user can point to some pre-defined waypoints and, after a visual animation (of some fixed duration), move from waypoint to waypoint. In another application, Aboard the Looking Glass [Hof], a more restricted navigation mode was applied: the user could also rotate the camera in a first person perspective with the Oculus Rift, but could only select one fixed waypoint. The main difference was that he could only navigate to it after solving a small puzzle.

• Free Navigation:

The applications that fall into this category can be very diverse. In these games, the user has full control of the character's movement. The games using the Leap Motion gather information from the user's hands to control the character. In the Leap Motion applications the device is also table mounted, giving the user the opportunity to rest his arms by placing them on the table. In these games (Leap Motion) the user could control most of the degrees of freedom, and a great variety of perspectives was found. For example, in Orc's Arena [Stu] the user found himself in a third person view where it was not possible to move the character up and down along the Y axis, turning it into a walking metaphor; the user could also move at different speeds. Some applications, like the game Solar Warfare [Teg], gave complete freedom of movement. The user was in control of an airplane (also in a third person view), and with his hands he could control the velocity, pitch, roll, yaw, and movement in any axis.

In games using the Oculus Rift, the user controls the character with some kind of helping device for locomotion, normally a gamepad or a keyboard and mouse. In most of these games the user could control the rotation with the mouse and the translation movements with the keyboard.

3.1.2 Techniques Review

From the techniques found, four were chosen for testing. The first technique selected was named Point to Point (P2P). It is a technique based on the discrete navigation type, where the user takes advantage of the Oculus Rift and any other kind of device that can give inputs; in this case a wireless mouse was used. In a first person perspective, the Oculus Rift accelerometer is used to rotate the character.

When the user looks at an area where he can navigate, a small cylinder appears centered on the point of view of the Oculus, indicating a possible navigation place. The cylinder is placed using ray casting: the system calculates in real time a straight line starting from the center of the user's field of view with a predetermined range. When this line comes into contact with a mesh where the avatar can walk, the cylinder is moved to the end of the cast ray. When the user clicks the mouse, the system calculates the best path to navigate, using the A* algorithm.
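As a rough illustration of this behaviour, the following is a minimal sketch using Unity's standard Physics raycast and NavMeshAgent APIs; the class and field names (PointToPointSketch, headCamera, marker, and so on) are hypothetical and not taken from the dissertation's actual code, and the NavMeshAgent namespace varies between Unity versions.

using UnityEngine;
using UnityEngine.AI; // NavMeshAgent (lives directly in UnityEngine in older Unity versions)

// Hypothetical sketch of the Point to Point technique: a ray is cast from the centre of
// the headset's view, a cylinder marks the hit point, and a mouse click sends the agent there.
public class PointToPointSketch : MonoBehaviour
{
    public Camera headCamera;      // camera driven by the Oculus Rift orientation
    public Transform marker;       // the small cylinder shown to the user
    public NavMeshAgent agent;     // agent moving on the baked navigation mesh
    public float range = 20f;      // maximum picking distance

    void Update()
    {
        // Straight line starting at the centre of the user's field of view.
        Ray ray = new Ray(headCamera.transform.position, headCamera.transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, range))
        {
            marker.position = hit.point;            // show where the user would land
            if (Input.GetMouseButtonDown(0))        // click on the wireless mouse
            {
                agent.SetDestination(hit.point);    // path is computed on the baked NavMesh
            }
        }
    }
}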

One of the chosen techniques used the Leap Motion controller to perform the navigation. It was named Airplane because the gestures used in the navigation were similar to an airplane banking. In this technique the user is in a first person perspective and can rotate the camera that represents his eyes by rotating the Oculus Rift. In this case the Leap Motion controller is placed in a special support [Mot] made for the Oculus Rift. In order to navigate, the user needs to extend his hands in front of the Leap Motion, open them and spread his fingers. With both hands open, the character moves in the direction the Oculus is pointing; with both hands closed it moves backwards, having the front of the Oculus as the reference point. The left hand also controls the movement speed, with each finger corresponding to one level of speed: one extended finger corresponds to the slowest speed and five extended fingers to the maximum speed. By taking both hands out of the field of view of the Leap Motion the user can stop the movement. This can also be achieved by having one hand closed and the other one open. The system was designed this way to try to compensate for fatigue in the hands. When the first alpha tests were made, it was pointed out that this was a very fatiguing technique: the head mount made the user lift his arms whenever he wanted to move. To try to mitigate this problem two measures were adopted. The first was meant to let the user rest: by taking both hands out of the scene, the movement stops. The second was developed to help minimize the fatigue during movement: the system remembers the status of the right hand when it is out of the reach of the Leap, so it is possible to lay down the right hand and control the movement with the left one, with the exception of moving backwards.
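As a rough, hedged sketch of how this gesture mapping might look in code (assuming the Leap Motion SDK v2 C# bindings; the names AirplaneSketch, head and maxSpeed are illustrative, and the rest-the-right-hand refinement described above is omitted):

using UnityEngine;
using Leap;

// Hypothetical sketch of the Airplane technique: both hands open moves the character
// forward along the Oculus facing direction, both hands closed moves it backwards,
// and the number of extended fingers on the left hand selects the speed level.
public class AirplaneSketch : MonoBehaviour
{
    public Transform head;               // transform driven by the Oculus Rift orientation
    public float maxSpeed = 3f;          // speed with five fingers extended
    private Controller leap = new Controller();

    void Update()
    {
        Frame frame = leap.Frame();
        Hand left = null, right = null;
        foreach (Hand h in frame.Hands)
        {
            if (h.IsLeft) left = h; else right = h;
        }

        // Hands out of view: stop, letting the user rest.
        if (left == null || right == null) return;

        bool leftOpen = left.GrabStrength < 0.5f;
        bool rightOpen = right.GrabStrength < 0.5f;

        if (leftOpen && rightOpen)
        {
            // One extended finger = slowest level, five = maximum speed.
            float speed = maxSpeed * left.Fingers.Extended().Count / 5f;
            transform.Translate(head.forward * speed * Time.deltaTime, Space.World);
        }
        else if (!leftOpen && !rightOpen)
        {
            // Both hands closed: move backwards (the fixed slow speed is an assumption).
            transform.Translate(-head.forward * maxSpeed * 0.2f * Time.deltaTime, Space.World);
        }
        // One hand open and the other closed: no movement.
    }
}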

Another technique was found in [JNSS]; it is a method for the PlayStation Move [Son] that uses two Move remotes to emulate the user's hands. The system is very simple: the user has to crawl with his virtual hands through the map in order to navigate the virtual reality world. This method was adapted for the Leap Motion but, instead of having the virtual hands touch the floor to grab it, the user had to close his hands to inform the system that he was ready to move. Then he needed to propel himself forward or backward by pulling his arm backward or pushing it forward, respectively. This technique proved useful because the user has to give a specific input to inform the system that he wants to move, thus freeing his hands to interact with the rest of the environment. On the other hand, in the first alpha tests it proved to be a very fatiguing system. The Leap Motion would also sometimes fail to recognize the state of the hands, giving a very unsatisfying experience to the user.
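A minimal sketch of the core of this pulling interaction, again assuming the Leap SDK v2 API (PalmVelocity is reported in millimetres per second); the scaling factor, the axis used and the names below are assumptions for illustration only:

using UnityEngine;
using Leap;

// Hypothetical sketch: while a hand is closed, pulling it towards the body propels the
// avatar forward, and pushing it away moves the avatar backward.
public class PullToMoveSketch : MonoBehaviour
{
    public Transform head;           // transform driven by the Oculus Rift orientation
    public float gain = 0.001f;      // converts mm/s from the Leap into m/s (assumption)
    private Controller leap = new Controller();

    void Update()
    {
        foreach (Hand hand in leap.Frame().Hands)
        {
            if (hand.GrabStrength < 0.9f) continue;   // only closed hands "grab" the world

            // Component of the palm velocity along the device's z axis; with a head-mounted
            // Leap this roughly corresponds to pulling towards or pushing away from the user,
            // although the exact axis and sign depend on the mounting (assumption).
            float pull = hand.PalmVelocity.z;
            transform.Translate(head.forward * pull * gain * Time.deltaTime, Space.World);
        }
    }
}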

The technique with the gamepad was the baseline for comparison with the other two techniques. Using games like Super Mario [Nin] or Counter Strike [Vala] as a reference, it was necessary to adapt the navigation to the Oculus Rift. The user sees the world in a first person perspective and can rotate his character/view using the accelerometer in the Oculus. The gamepad technique, with a very simple design, uses only the left joystick for movement. By tilting it forward the user moves in the same direction the Oculus is facing, and backwards works in the same way. This technique also allows strafe movements to the left or right, as well as diagonal movements, always having as reference point the front of the Oculus Rift, the same direction the user is pointing to.


3.1.3 Game Engine Review

To create the virtual reality world, several game engines were studied in order to select the most adequate one for the task at hand.

CryEngine

The CryEngine [Crya] is a very powerful engine that can render very realistic models. Its creator, the German company Crytek [Cryb], already has a proven history of success with the release of various games like Far Cry [Ubi] and Sniper Ghost Warrior 2 [Int]. Several versions of the engine have been released, the latest being the fourth, released in 2013.

Unity 3D

Unity3D [Uni] is a game engine created in 2005 by Unity Technologies. Since then, five versions of Unity have been released, with virtual reality directly supported and the latest version giving direct support to the Leap Motion. The Unity engine can produce games for several platforms, and has both a free and a paid version. The Leap Motion is directly supported by the Unity game engine and has a great set of tutorials and very good documentation.

Unreal Engine

Unreal Engine [Gamb] is a game engine developed by Epic Games [Gama]. The first version was released in 1998 and since then four versions have been developed. With games that demonstrate the engine's quality, like Unreal Tournament [Gamc] or Deus Ex [Eni], Epic Games has proven to have a great engine. Support for the Leap Motion, although it existed when the research was made, was lacking in documentation, and the hand models offered were not realistic enough (the hands were made out of small triangular prisms).

3.2 Architecture

In this section a broad view of the system is presented and explained. The architecture shown in figure 3.1 is based on the Unity game engine. The system can be divided into three parts, one for the device used by each technique. The gamepad technique has a very simple architecture. The device, connected by USB, communicates with the computer's operating system, which interacts with the Unity game engine. With the information from the operating system, Unity then reports to a custom made C# script for this technique. This script, named Gamepad.cs, listens for the inputs from the gamepad and tells the game engine what actions should be performed. The game engine, after applying these actions, renders the image to the screen and to the Oculus Rift at the same time.
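A minimal sketch of what such a script might look like, assuming Unity's default input axes are mapped to the gamepad's left joystick; the class below is illustrative and not the dissertation's actual Gamepad.cs:

using UnityEngine;

// Hypothetical sketch of the gamepad technique: the left joystick moves the character
// relative to the direction the Oculus Rift is facing, kept on the horizontal plane.
public class GamepadSketch : MonoBehaviour
{
    public Transform head;       // camera transform driven by the Oculus Rift
    public float speed = 2f;     // walking speed

    void Update()
    {
        // Unity maps the left joystick to the "Horizontal" and "Vertical" axes by default.
        float strafe = Input.GetAxis("Horizontal");
        float forward = Input.GetAxis("Vertical");

        Vector3 direction = head.forward * forward + head.right * strafe;
        direction.y = 0f;    // the user walks; vertical movement is not allowed

        transform.Translate(Vector3.ClampMagnitude(direction, 1f) * speed * Time.deltaTime,
                            Space.World);
    }
}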

In the Point to Point technique the user controls the navigation using the mouse and the Oculus Rift. The user operates the Oculus Rift to select the place he wants to move to, and when he clicks the mouse the navigation system transports him to the selected place. The architecture starts with an input (in this case from the mouse), which is propagated through the operating system, informing the Unity game engine that the mouse was pressed. A custom made C# script waits inside Unity for the mouse input. When the game engine relays that the mouse was pressed, the script moves the character to the target location. The navigation method, called navigation mesh agent, is provided by the game engine (Unity3D). This system needs some configuration: Unity needs to know details about the agent to calculate where it can and cannot go. For example, the agent in this application had to be shorter than the height of the doors in order to pass through them. Unity also needs to know the minimum and maximum step that the agent can climb (useful, for example, when climbing stairs). Several other parameters can be adjusted using this tool, like the maximum travel velocity and the maximum acceleration. After configuring the agent, the developer needs to select which parts of the scenario are walkable. Then the system calculates where the agent can and cannot navigate (this process is called baking).
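For illustration, a sketch of how the agent described above might be configured from code; in practice these values are usually set in the Unity inspector, the numbers are assumptions rather than the dissertation's actual settings, and the step height belongs to the bake settings rather than the agent component.

using UnityEngine;
using UnityEngine.AI; // NavMeshAgent (lives directly in UnityEngine in older Unity versions)

// Hypothetical configuration of the navigation mesh agent used by the Point to Point technique.
public class AgentSetupSketch : MonoBehaviour
{
    void Start()
    {
        NavMeshAgent agent = GetComponent<NavMeshAgent>();
        agent.height = 1.7f;        // must be lower than the doors so the avatar can pass through
        agent.speed = 1.5f;         // maximum travel velocity
        agent.acceleration = 4f;    // maximum acceleration
        // The walkable area itself is defined by marking the scenario geometry as
        // "Navigation Static" and baking the NavMesh in the Unity editor.
    }
}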

The Airplane technique is the most complex one, as the device (the Leap Motion) cannot communicate with the Unity game engine directly like the other devices. The communication starts with the Leap Motion device capturing the user's hands and relaying the information to the operating system. The operating system then communicates with the game engine, which uses the Leap Motion API (a separate package installed previously). A C# script named Airplane.cs was created in Unity. This script uses the API to gather information about the user's hands. With this information, the script knows what the user intends to do and applies these actions to the virtual character. An example of the code used to recognize whether one of the user's hands is closed follows:

// GrabStrength goes from 0 (open hand) to 1 (closed fist).
if (leapHand.GrabStrength == 1)
{
    fullStop = true;
}

If the user has his hand closed, then the character stops. The game engine, with the help of Leap Motion’s API, represents the hands of the user in the scenario, giving visual feedback to the user. The scene is then sent to the Oculus Rift API, where the cameras are properly adjusted to give the user a three-dimensional perspective.

The scene is a very important part of the environment. Although immersion will not be evaluated directly, it is still a very big factor for the end user. For example, the scenario can help distract the user from the motion sickness he might be feeling. The scenario was created with the help of Unity's asset store and of a company that produces virtual models of houses. To navigate the scene, a virtual character was needed. To accomplish this task, a special rig was made, consisting of the asset from the Oculus Rift API with the Leap Motion asset embedded. This asset keeps the camera and the representation of the hands always at the same distance (a parameter setting). The asset has several levels of hierarchy: when a transformation is applied to the root of the asset, Unity3D applies it to all its children.
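As a small illustration of this last point (the names below are hypothetical), moving the root of the rig is enough to carry the Oculus camera and the Leap Motion hand models with it, because child transforms follow their parent:

using UnityEngine;

// Hypothetical sketch: translating the rig root moves the camera and the hand
// representations together, since they are children of the same transform.
public class RigMoveSketch : MonoBehaviour
{
    public Transform rigRoot;    // parent of the Oculus camera and the Leap hand models

    public void Step(Vector3 direction, float speed)
    {
        rigRoot.Translate(direction * speed * Time.deltaTime, Space.World);
    }
}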


Chapter 4

Iterative design and implementation of navigation metaphors

In this chapter the design of the techniques is presented in greater detail, namely how the techniques evolved after the alpha tests made to gather user feedback. The implementation details of all the techniques are discussed, and some illustrations of the user's interaction with the techniques are also shown.

4.1 Techniques Iterations

From the beginning, this dissertation was planned to include a final experiment to evaluate and compare the different techniques found. The main goal of the experiment is to find which technique is best for a possible buyer to navigate a virtual tour of an apartment. The target audience is the population older than eighteen years old. One of the main concerns during the development phase was to make the navigation as simple as possible, avoiding at all costs the motion sickness effect caused by the Oculus Rift. In order to minimize these effects, several measures were developed and then tested to refine the techniques. The technique improvement process was iterative: each technique was developed and placed in a testing phase (alpha release) where some users could experiment with it. After the experiment, impressions from the test subjects were collected and helped improve the techniques.

4.1.1 Gamepad

This technique was based on games that use the gamepad to navigate three-dimensional worlds. The user wears the Oculus Rift to look around in the virtual scene and uses the gamepad to navigate. The first iteration of this technique used the Oculus Rift together with both joysticks and a button. The left joystick controlled the translation movements of the character: pushing it upwards made the character walk forward in the direction he was facing, and the user could also go left, right or backward by moving the joystick in the corresponding direction. The right joystick controlled the rotation of the character only around the vertical and horizontal axes, not allowing the user to roll. This iteration soon started to show that the rotations with the right joystick induced motion sickness. It was also unnatural to have two devices (the Oculus Rift and the gamepad) controlling the rotations. The next iteration tried to move some functions from the left to the right joystick: the forward and backward movement stayed on the left joystick, but the horizontal and diagonal strafe was shifted to the right joystick, removing the rotation feature. This solution was implemented because some of the alpha testers found it very unnatural not to use the right hand or joystick. But this solution failed, as it was even more unnatural to make forward and backward movements with one joystick and strafe with the other. The final iteration of this technique was then developed using only the left joystick, which controlled the character in eight directions: forward, backward, left, right and the diagonal movements in between.

4.1.2 Airplane

Based on the airplane metaphor, this technique was developed with the Leap Motion in mind. As the device is relatively new to users, the implementation focused on simplicity. The first iteration focused on the interaction with the Leap Motion API and did not yet require the Oculus Rift. The Leap device was placed on the table in front of the computer screen, and the user could see his own movements through it. The user could only move forward, by extending his hands in front of his head. Using this technique the misreadings of the controller were minimized, since the subject only had to place his hands in the field of view of the Leap Motion. The final result of this iteration was that the user could only go forward, without turning. In order to solve this problem, the system was designed to recognize movements of the right hand: when the user swiveled it left or right, the character would rotate in the same direction.

The next iteration focused on finding the synergy between the Oculus Rift and the Leap Motion. After some consideration, it was clear that the Leap Motion could not stay on a table mount: when the user moved his head or body, his hands still had to stay inside the fixed field of view of the device, which was a very uncomfortable position for the user. The first approach was to give the user the ability to rotate through the movement of his right hand. When the alpha users started testing it, they felt that it induced more motion sickness, and they also tended to avoid moving the right hand. To aggravate this, sporadic misreadings of the Leap Motion made it almost impossible to operate this technique. To solve this problem, the Leap Motion was placed on the Oculus head mount [Mot]. The change of stance of the Leap Motion meant that the axis system used with the table mount was different from the one of the head mount. The new iteration still had the rotation associated with the right hand, but with the change of stance the user had to tilt his hand, instead of making a swivel gesture, in order to rotate left and right. This approach still induced high levels of motion sickness in the alpha testers and had to be removed. The next iteration removed the rotation from the right hand: the character's rotation was only possible through the Oculus Rift accelerometer. This method proved to be stable, and the user could navigate through an open environment with ease. However, when tested in a closed environment (the inside of an unfurnished house), the user had some trouble with the fine control of his character. Furthermore, the repeated action of lowering his arms to stop the movement proved to be very fatiguing, and the travel speed was too high to control the character with precision. This problem was solved by using the left hand as the speed controller. Through the Leap Motion API, the system recognizes the number of extended fingers of the left hand and calculates the speed of travel. This speed has a range of five levels, in which one extended finger is the slowest possible and five fingers the fastest. This iteration was stable and worked very well; the only drawback was the fatigue it caused. In an attempt to minimize it, a small change was made: the system would save the state of the right hand when it dropped out of the field of view. With this final modification the user could rest his right hand, controlling everything with the left one (with the exception of the backwards movement).

4.1.3 Point to Point

This technique was based on a video [Vir]. In the footage the user wears an HMD to navigate in the virtual world. To move the avatar, the user has a cross in the center of his field of view; with that cross he can select, in a textual menu, where he wants to navigate. After studying the possible ways to develop a method similar to the one in the video, it was decided that the type of movement was going to be discrete. The first step was picking the interest area. A method similar to the one in the video was developed, by placing several spheres distributed throughout the map. Unity would then cast an invisible ray (ray casting) starting at the center of the user's FOV with a finite length. If the ray intersected some object before reaching its full length, an event was triggered that identified the object. If the object was one of the spheres previously mentioned, the user could start the movement by giving an input (in this iteration, pressing a key on the keyboard) and be automatically transported to the selected point. This technique was stable enough for the alpha testing, and it produced positive results. However, in the context of this dissertation, placing so many keypoints in a closed scenario like the inside of a house would be unwise. A balance had to be found: too many keypoints would clutter the view, while fewer keypoints would drastically reduce the number of possible positions. To solve this problem the keypoints were removed and another method adopted, based on Google Maps, where the user selects the place he wants to navigate to and the system (with a small animation) takes him towards that place. Using this idea, a small cylinder was placed in the scene. The cylinder worked as a placeholder and as visual feedback to the user, indicating the next possible position the character could navigate to. Using the ray casting method mentioned before, the user could point his head towards the place he wanted to go, and the small cylinder would appear at the other end of that ray. After pressing a key, the system transported him automatically to the place where the cylinder was. The user could repeat this action to navigate freely in the three-dimensional scene. Although this version was stable, some alpha testers stated that they felt a bit disoriented by the sudden displacement. To deal with this, it was decided that visual feedback would be given when the character changed his position. After some investigation and exploration of algorithms, the decision was to use Unity's navigation mesh agent. This is a technique usually found in games to make the non player characters (NPCs) move without any type of input. The navigation mesh agent needs to know the possible locations it can move to. Unity provides a Bake feature that can be used to calculate the walkable meshes in the scene, as long as the developer provides some information about the desired agent (for example, height and width). With the agent configured, the user becomes the agent and can give inputs to move it. The system transports him avoiding obstacles (non-walkable places) and calculating the shortest possible path, using the A* algorithm.

4.2 Final Iterations and Implementation

In this section the final iteration of each technique will be presented. The level of detail in this section should allow for a better understanding of the implementation behind each technique.

In the development phase it was necessary to have an object in Unity3D to which the navigation scripts could be applied. This object consisted of several gameObjects with diverse properties organized in a hierarchy, and it represents the user's avatar in the virtual environment. As seen in figure 4.1, at the top of the hierarchy was the Rig; this gameObject was responsible for the log files recorded in each experiment, and it was also where the transformations from the scripts were applied. The game engine's hierarchy system allows the developer to apply geometrical transformations; transformations such as translation, rotation and scaling applied to the top gameObject are inherited by its children.

Figure 4.1: Gameobject hierarchy on Unity3D

The LMHeadMountedRig is a gameObject from the LM's API that allows the developer to place the camera and the representation of the hands at a distance that feels natural to the user.
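To illustrate how the hierarchy is used, the following is a minimal sketch (not the project's actual script) of a translation applied to the root of the rig; because of Unity's hierarchy system, the camera and the hand representations follow automatically. The field names and speed value are assumptions.

using UnityEngine;

// Sketch: a translation applied to the Rig root is inherited by its children
// (LMHeadMountedRig, camera and hand models), so the avatar moves as one unit.
public class RigMover : MonoBehaviour
{
    public Transform rigRoot;  // the top-level "Rig" gameObject
    public float speed = 2.0f; // metres per second (assumed value)

    void Update()
    {
        // Move only the root; the children keep their relative offsets.
        rigRoot.Translate(Vector3.forward * speed * Time.deltaTime, Space.Self);
    }
}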

4.2.1 Airplane

In the final iteration of the airplane technique the user could navigate throughout the virtual environment using only his bare hands, with the Leap Motion tracking the user's hands and representing them. To move, the user needed to have his hands extended forward with his fingers spread, as shown in figure 4.2a. The user could also move backward by closing both hands at the same time, as shown in figure 4.2b. In this technique the user could also choose the movement speed by closing his fingers. As seen in figure 4.2c, the user has two fingers held in the upright position, allowing him to move at the second of the defined speed levels. The system was designed to have five levels of speed, each level corresponding to the number of upright fingers: with one finger up the user moves at the slowest speed and with five fingers at the maximum speed. The maximum speed of this technique was equal to the speed of the other techniques, which could not vary in speed. In the alpha tests the airplane technique proved to be very fatiguing. In an attempt to mitigate that factor, an alternative version of the technique was developed: the system would memorize the position of the right hand when it was taken out of the scene. This allowed the user, when moving forward, to rest his right hand and control the movement using only his left hand, as shown in figure 4.2d.

Figure 4.2: Movement gestures on the airplane technique: (a) forward movement, (b) backward movement, (c) speed selection, (d) one hand movement

The airplane technique was based on the airplane metaphor found in the bibliographic review. Its adaptation to the Oculus Rift was made in order to give the user a first person perspective. The technique was developed in a C# script using the Update() method, a method that is called every frame, so the position of the user's hands was updated at the same rate. For each frame the game engine knew the state of both hands and applied the transformations needed to make the rig (avatar) navigate through the virtual environment.
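The snippet below is a minimal sketch of what such an update loop might look like. It is not the original Airplane.cs; the class, field names and thresholds are assumptions, and the gesture handling is simplified to the behaviour described above (forward movement with the speed given by the number of extended fingers of the left hand, backward movement when both hands are closed).

using UnityEngine;
using Leap;
using Leap.Unity;

// Assumed reconstruction of the airplane-style movement loop, not the original script.
public class AirplaneSketch : MonoBehaviour
{
    public LeapProvider provider; // supplies tracking frames from the Leap Motion
    public Transform rig;         // root of the avatar hierarchy
    public float maxSpeed = 2.0f; // speed with five fingers extended (assumed value)

    void Update()
    {
        Frame frame = provider.CurrentFrame;
        Hand left = null, right = null;
        foreach (Hand h in frame.Hands)
        {
            if (h.IsLeft) left = h; else right = h;
        }
        if (left == null) return; // no left hand in view, no movement

        // Both hands closed: move the avatar backwards.
        if (left.GrabStrength == 1 && right != null && right.GrabStrength == 1)
        {
            rig.Translate(Vector3.back * maxSpeed * Time.deltaTime, Space.Self);
            return;
        }

        // Otherwise the number of extended fingers of the left hand selects
        // one of five forward speed levels.
        int extended = 0;
        foreach (Finger f in left.Fingers)
        {
            if (f.IsExtended) extended++;
        }
        float speed = maxSpeed * extended / 5.0f;
        rig.Translate(Vector3.forward * speed * Time.deltaTime, Space.Self);
    }
}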

4.2.2 Gamepad

This technique was based on games already existing in the market. The final iteration of this technique only used the left joystick, as shown in figure 4.3. The user could move forward, backward and to the sides, as seen in red in the figure. It was also possible to move along the diagonals between the directions mentioned; these were added to give the user a more natural navigation experience.

Figure 4.3: Directions that the user could move with the gamepad

The script made for this technique also used the Update() method, which is called every frame of the application. At each frame the system read the position of the joystick and moved the avatar in the user's desired direction.
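A minimal sketch of such an update loop follows, assuming Unity's default "Horizontal" and "Vertical" input axes are mapped to the gamepad's left joystick; it is an illustration, not the original script.

using UnityEngine;

// Sketch of the gamepad movement loop (assumed reconstruction). The left
// joystick is read through Unity's input axes and the avatar is translated
// in the corresponding direction each frame.
public class GamepadSketch : MonoBehaviour
{
    public Transform rig;      // root of the avatar hierarchy
    public float speed = 2.0f; // travel speed (assumed value)

    void Update()
    {
        float x = Input.GetAxis("Horizontal"); // left/right strafe
        float z = Input.GetAxis("Vertical");   // forward/backward
        Vector3 direction = new Vector3(x, 0f, z);
        if (direction.sqrMagnitude > 1f) direction.Normalize();
        rig.Translate(direction * speed * Time.deltaTime, Space.Self);
    }
}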

4.2.3 Point to Point

In the final iteration of this technique the user used the mouse to tell the system where he wanted to place his avatar. Every time the user looked, through the Oculus Rift, at a region where navigation was possible, the system gave visual feedback that it was possible to move there: a small white cylinder was placed at the target region. The technique can be divided into two main phases: the picking phase, where the user picks the location, and the navigation phase, where the system places the avatar at the chosen location.

The picking phase uses Unity3D's ray casting function. The system traces an invisible ray from the center of the user's field of view until it reaches its maximum length or intersects a relevant gameObject. A gameObject is considered relevant in this context if the user can navigate to it, for example the floor or the ground of the houses. When the ray intersects one of the relevant objects, the system knows the exact position of the intersection and moves a white cylinder to that position, as seen in figure 4.4. The user can move the ray by moving his head and thereby select the target region. The navigation phase starts when the user presses the mouse's left button. It activates the navigation mode in the game engine, which was implemented using Unity3D's navigation mesh agent. This method is usually applied in games where the NPCs (Non Playable Characters) need to move in specific regions or patterns. The NavMesh agent consists of an agent in the virtual world that can navigate through it. When creating the agent, the developer has to specify its characteristics: the height and width of the agent, as well as the movement speed and the maximum step height. This allows the game engine to calculate where the agent can or cannot walk. For example, Unity must know the height of both the agent and the door; with this information, the game engine is able to know if the agent can fit through the door. The step height was important for the system to know if the agent could climb the steps at the entrances of the houses. After creating the
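The sketch below illustrates how the two phases fit together. It is an assumed reconstruction rather than the original script, and the field names, the picking distance and the layer mask are illustrative.

using UnityEngine;
using UnityEngine.AI;

// Sketch of the point to point technique (assumed reconstruction). A ray is
// cast from the center of the user's view; if it hits a walkable surface the
// marker cylinder is moved there, and a left mouse click sends the NavMesh
// agent to that position along the shortest obstacle-free path.
public class PointToPointSketch : MonoBehaviour
{
    public Camera headCamera;        // camera driven by the Oculus Rift
    public Transform markerCylinder; // white cylinder used as visual feedback
    public NavMeshAgent agent;       // agent attached to the avatar rig
    public float maxDistance = 20f;  // maximum picking distance (assumed)
    public LayerMask walkableLayers; // floors and other navigable surfaces

    void Update()
    {
        // Picking phase: invisible ray from the center of the field of view.
        Ray ray = new Ray(headCamera.transform.position, headCamera.transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, maxDistance, walkableLayers))
        {
            markerCylinder.gameObject.SetActive(true);
            markerCylinder.position = hit.point;

            // Navigation phase: on a left click the agent plans the shortest
            // path over the baked NavMesh and moves the avatar there.
            if (Input.GetMouseButtonDown(0))
            {
                agent.SetDestination(hit.point);
            }
        }
        else
        {
            markerCylinder.gameObject.SetActive(false);
        }
    }
}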
