


Figure 5.8: IVRUX interface. A) Pie charts representing intervals of time where a participant is looking at target spheres; B) User selection scroll view; C) 360º panorama; D) Intervals of time where a participant is looking at target spheres; E) Story Events; F) Environment Audio; G) Character Audio; H) Character Movement; I) Character Animation; J) Scrubbable Timeline

• This data was organized in a scrubbable timeline (see J in fig. 5.8), where we are able to monitor key events of five types: story events, character animation, character position (according to predefined waypoints in the scene), character dialogue and environment audio.

• User behavior can be observed in three modes: a single-camera mode, a 360º panorama mode (see C in fig. 5.8) and a stereoscopic mode that simulates a VR HMD.

• The prototype replicates the story's 3D environment and represents the user's head tracking (field of view) as a semi-transparent circle labeled with the participant's identification number. Moreover, a line connecting past and present head-tracking data for each participant allows us to understand the participant's head motion over time.

• The scrubbable story timeline (see J in fig. 5.8) presents the logged events and audio events.

• Semi-transparent colored spheres are also shown (see C in fig. 5.8). One (in yellow) represents the Point of Interest (POI) in the story, simulating the "Director's cut". The two others (in orange and red) represent the locations of the two characters.

• A scrollable panel (see B in fig. 5.8) allows the user to choose which participant session to analyze; upon selection, three pie charts (see A in fig. 5.8) are shown, each indicating the ratio of time that the participant spent looking at one of the target colored spheres. Additionally, the timeline is updated to represent the intervals of time where the participant is looking at each target (see D in fig. 5.8); a minimal sketch of this computation is given after this list.
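As an illustrative aside, the sketch below shows one way the per-sphere dwell ratios behind the pie charts and the per-target timeline intervals could be computed from logged head-tracking samples. The log layout, field names and target labels are assumptions made for this example, not the actual IVRUX implementation.

```python
from collections import defaultdict

# Hypothetical input: a session's head-tracking log reduced to a list of
# (timestamp_seconds, gazed_target) samples, where gazed_target is the sphere
# currently hit by the participant's gaze ray ("poi", "character_1",
# "character_2") or None when no target sphere is being looked at.

def dwell_ratios(samples):
    """Fraction of session time spent gazing at each target sphere (pie charts)."""
    totals = defaultdict(float)
    for (t0, target), (t1, _) in zip(samples, samples[1:]):
        if target is not None:
            totals[target] += t1 - t0  # credit the interval to the gazed target
    session_length = samples[-1][0] - samples[0][0]
    return {target: time / session_length for target, time in totals.items()}

def gaze_intervals(samples, target):
    """Merge consecutive samples on `target` into (start, end) timeline intervals."""
    intervals, start = [], None
    for t, gazed in samples:
        if gazed == target and start is None:
            start = t
        elif gazed != target and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, samples[-1][0]))
    return intervals
```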

An evaluation of this IVRUX prototype is described in appendix section B.3. In summary, 32 participants experienced the first iteration of The Old Pharmacy scene. In addition to quantitative and qualitative measures, the trajectories of participants, based on data captured during viewing, were analyzed using the IVRUX prototype. We were able to identify shortcomings of The Old Pharmacy narrative, such as camera orientation, story pacing and lighting design issues, which were addressed in later iterations of The Old Pharmacy. Another important factor that resurfaced from the observation of these trajectories was the weight given by participants to diegetic sounds in guiding their narrative experience, from following dialogues between characters to finding the sources of sound events.

5.2.2 Second iteration of IVRUX

IVRUX was reimplemented to be compatible with the second iteration of The Old Pharmacy (see section 5.1.2). The prototype, built using Unity 5¹⁴ and running on an NVIDIA Shield K1 tablet, allowed for loading of XML files from an FTP server:

• This iteration was initially designed as a way for the experimenter to review the experience with the participant, but it was never included in a study protocol.

• Unlike the previous version, this version did not allow aggregation of several participants' experiences due to the time differences between participants. These time differences are caused by different interaction behaviors when accomplishing a task. Additionally, the tablet as a platform limited both the possible interactions and the space available for visualizing the recorded data.

• Unlike the previous version with three observation modes, this iteration of IVRUX only allows for a single camera, corresponding to the position and orientation from which the participant experienced the virtual world.

• A scrubbable timeline on top (see fig. 5.9(b)) presents key story and audio events for monitoring.

• Objects in the virtual environment were color-coded according to the interaction happening in the timeline and their distance from the participant.

¹⁴ https://unity3d.com/pt/unity/whats-new/unity-5.0


• Touch events were color-coded depending on whether they fell inside or outside the joystick areas. The opacity of these touches was also mapped to their state, increasing through "canceled", "began", "moved", "stationary" and "ended"; clicking an object corresponds to a touch "ending" and is therefore the most visible (a sketch of this mapping follows this list).
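As an illustrative sketch of this coding scheme (the RGB and opacity values below are assumptions; the prototype's exact values are not specified here):

```python
# Opacity per touch state, increasing from "canceled" to "ended"
# (values are illustrative assumptions, not the prototype's actual alphas).
TOUCH_OPACITY = {
    "canceled": 0.2,
    "began": 0.4,
    "moved": 0.6,
    "stationary": 0.8,
    "ended": 1.0,  # a click ends a touch, so it renders most visibly
}

# Hue encodes whether the touch fell inside a joystick area;
# both colors are placeholders.
JOYSTICK_RGB = (0.2, 0.6, 1.0)
OUTSIDE_RGB = (1.0, 0.4, 0.2)

def touch_color(state, inside_joystick):
    """RGBA for a rendered touch marker: color encodes joystick membership,
    alpha encodes the touch state."""
    r, g, b = JOYSTICK_RGB if inside_joystick else OUTSIDE_RGB
    return (r, g, b, TOUCH_OPACITY[state])
```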

Ultimately, this prototype was not used to analyze any study data, as it would have required individual analysis of a large set of sessions.

5.2.3 Third iteration of IVRUX

The third iteration of IVRUX was conceptualized as a proof-of-principle prototype, addressing aspects explored in the first and second iterations. Although this iteration was never made into a functional prototype, its design was helpful for future work (in chapters 6-7), as it focused on a more structured approach to data analysis.

Another reason it never evolved to a functional prototype stage was the development cost. For example, MRAT, a Mixed Reality Analytics Toolkit of similar scope to IVRUX, had a development "period of 18 months" [Neb+20].

This iteration of IVRUX was intended to be used in an academic context, offering a support system to facilitate communication between creators and/or researchers and participants, and therefore streamline the process of evaluating UX in VR, both for 360º video and for model-based VR. Following software design practices, several artifacts were created¹⁵. High fidelity wireframes in figure 5.10 show the focus on the analysis of captured data through an online dashboard:

• In the modular analytics dashboard (see fig. 5.10(a)), the researcher could choose the scope of participants (segmentation of participants, B in fig. 5.10(a)), the media representation (first-person view or unwrapped video controlled by a scrubbable timeline, A in fig. 5.10(a)), temporal visualizations (a heatmap overlaid on the video, A in fig. 5.10(a); scanpaths, C in fig. 5.10(a); timeline, E in fig. 5.10(a)) and non-temporal visualizations (single heatmaps or graphs on core metrics; D in fig. 5.10(a)).

• Outside this visualization, a video annotation interface (see fig. 5.10(b)) would allow the practitioner to manually and/or automatically annotate the video according to their needs, using spatial and temporal Points of Interest (POIs), grouped by layers, allowing for a more detailed and complex description of narrative structure in VR (e.g., a narrative with multiple branches and POIs). This annotation could then be used in analytics to give more quantitative data to the process (E in fig. 5.10(a)). For example, by creating a layer "Main POI" and adding to it a POI that changes in space and time along with the video, the analytics interface would then be able to provide information such as the percentage of viewers following the main POI at a specific time; a minimal sketch of this computation follows.
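As an illustrative sketch, assuming hypothetical query interfaces over the captured logs and the annotation layer, and an arbitrary 30º angular threshold, such a metric could be computed by comparing each participant's logged gaze direction against the direction towards the annotated POI:

```python
import math

def following_percentage(gaze_directions, poi_direction, t, max_angle_deg=30.0):
    """Percentage of participants whose gaze at time `t` is within
    `max_angle_deg` of the annotated main POI.

    `gaze_directions` maps participant ids to functions returning that
    participant's unit gaze vector (x, y, z) at time t; `poi_direction(t)`
    returns the unit vector from the viewpoint towards the POI. Both are
    hypothetical interfaces over the captured logs and the annotation layer.
    """
    # Two unit vectors are within the angular threshold iff their dot
    # product is at least cos(max_angle_deg).
    threshold = math.cos(math.radians(max_angle_deg))
    following = sum(
        1 for gaze in gaze_directions.values()
        if sum(a * b for a, b in zip(gaze(t), poi_direction(t))) >= threshold
    )
    return 100.0 * following / len(gaze_directions)
```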

¹⁵ These artifacts included user stories, stakeholder descriptions, use case diagrams, use case descriptions, functional requirements, non-functional requirements, conceptual class diagrams, high-level system architecture, low fidelity wireframes and high fidelity wireframes.

(a) Participant walking (using touch on the left joystick) towards a selectable object

(b) Participant waiting while an object is currently chosen; the timeline on top shows story and audio events

Figure 5.9: IVRUX reimplementation for the second iteration of FoL, running on an NVIDIA Shield K1 tablet


(a) High fidelity wireframe for the IVRUX visualization interface. A) Media; B) Participant selection; C) Scanpaths; D) Non-temporal visualizations; E) Temporal visualizations

(b) High fidelity wireframes for the IVRUX annotation interface

Figure 5.10: High fidelity wireframes for a reconceptualized third iteration of IVRUX
