
Poster: Creating a User-Specific Perspective View for Mobile Mixed Reality Systems
ABSTRACT
We propose a method for creating a user-specific perspective view for mobile mixed reality (MR) systems that provides three-dimensional perception based on pseudo motion parallax. In general, the most common interface for a hand-held AR/MR system is the video-see-through style with a device-perspective view. In contrast, our method provides a user-perspective view, which displays MR images according to the user's viewpoint and the position of the display. Namely, our method trims the scene behind the display and superimposes CG images in consideration of the user's point of view. This enables the user to perceive a stereoscopic effect on a non-3D display based on pseudo motion parallax. We implemented a prototype and tested our proposed method on smartphones.
Keywords: Mixed reality, user-specific perspective view,
smartphone, motion parallax, head tracking.
Index Terms: H.5.1 [Information Interface and Presentation (e.g., HCI)]: Multimedia Information Systems—Artificial, augmented, and virtual realities.
1 INTRODUCTION
In recent years, hand-held AR/MR applications have become mainstream in the field of AR/MR. However, most of them display the AR/MR scene from the viewpoint of the camera on the hand-held device [1]. Ideally, a hand-held AR/MR system would display the scene from the user's viewpoint, as if the hand-held device disappeared and CG images were superimposed with perspective correctness (Figure 1). Geometrically correct user-perspective rendering for AR/MR systems could become a powerful depth cue for understanding the three-dimensional structure of the AR/MR world because it provides motion parallax. Hill et al. proposed a method for creating the augmented scene from the user's point of view [2]. However, they implemented their system on a single workstation equipped with a GPU and a tablet display. Baričević et al. have also created a proof-of-concept prototype using a Kinect sensor, a Wiimote, and a workstation [1]. In this poster, we implement a prototype on smartphones without any additional sensors.
2 CREATING A USER-SPECIFIC PERSPECTIVE VIEW
2.1 Overview
To create a user-specific perspective view, three challenges must be solved:
(a) How to estimate the user's head (eye) position
(b) How to estimate the six-degrees-of-freedom (6-DOF) display pose (position and orientation) in the world
(c) How to create a model of the real world
In this research, we tackle challenges (a) and (b) to realize a user-specific perspective view on smartphones, and we assume that the depth of the real world is always constant. In other words, we assume that the background is a planar surface.
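Under the planar-background assumption, the user-perspective "trim" reduces to a simple ray projection: cast a ray from the eye through each display corner and intersect it with the background plane; the resulting quad is the region of the background the display should show. The sketch below illustrates this geometry only; it is not the authors' implementation, and the coordinate conventions and the `trim_region` helper are our own (display in the plane z = 0, background at z = depth, eye at negative z, units in centimeters).

```python
def trim_region(eye, display_corners, depth):
    """Intersect rays from the eye through each display corner with the
    background plane z = depth; the resulting quad is the background
    region visible 'through' the display from the user's viewpoint."""
    ex, ey, ez = eye
    region = []
    for cx, cy, cz in display_corners:
        # Ray direction from the eye to this display corner.
        dx, dy, dz = cx - ex, cy - ey, cz - ez
        # Parameter t where the ray meets the plane z = depth.
        t = (depth - ez) / dz
        region.append((ex + t * dx, ey + t * dy, depth))
    return region

# A 10 cm x 6 cm display in the plane z = 0, background plane 50 cm
# behind it, eye 30 cm in front of the display.
corners = [(-5, -3, 0), (5, -3, 0), (5, 3, 0), (-5, 3, 0)]
quad_center = trim_region((0, 0, -30), corners, 50)  # head centered
quad_right = trim_region((5, 0, -30), corners, 50)   # head moved right
```

Moving the head to the right shifts the visible quad to the left on the background plane, which is exactly the pseudo motion parallax the proposed method exploits on a non-3D display.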
Above all, challenge (c) requires high-quality reconstruction of geometry. This kind of 3D reconstruction using active sensors and cameras is well studied in the fields of computer graphics and computer vision. Camera-based methods need a great deal of computational power to reconstruct geometry. One example of a method using active sensors is KinectFusion, proposed by Izadi et al. [3]. However, this method requires additional sensors. Unfortunately, current smartphones are not equipped with active sensors, such as the Kinect's depth sensor, and do not have sufficient computational power. For these reasons, we did not tackle challenge (c).