29-09-2016, 04:18 PM
ABSTRACT
Virtual Reality (VR), sometimes referred to as immersive multimedia, is a computer-simulated environment that can simulate physical presence in places in the real world or in imagined worlds. Virtual reality can recreate sensory experiences, including virtual taste, sight, smell, sound, and touch. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, and some simulations include additional sensory information, such as sound delivered through speakers or headphones aimed at the viewer. Some advanced haptic systems now include tactile feedback. Virtual projection can turn almost any surface into a dynamic video display. A projection keyboard is a form of input device in which the image of a virtual key is projected onto a surface; it involves the use of a laser, interference, diffraction, light-intensity recording, and suitable illumination of the recording. In the proposed system, virtual images are projected by a light source onto the surface where the user needs them. When a key is pressed, the scene is captured by a wireless camera; the image is then processed in MATLAB and the corresponding task is performed. This is known as a "VIRTUAL REALITY BASED CONTROLLED SYSTEM".
INTRODUCTION
IMAGE PROCESSING
Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.
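As a concrete illustration of treating an image as a two-dimensional signal and applying a standard signal-processing technique, here is a minimal Python sketch (our example, not part of the original text) that smooths a tiny synthetic image with a box (mean) filter:

```python
import numpy as np

def mean_filter(img, k=3):
    """Smooth a 2-D image with a k x k box (mean) filter,
    treating the image as a two-dimensional signal."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate border pixels
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

# A tiny synthetic "image": one bright dot on a dark background.
img = np.zeros((5, 5))
img[2, 2] = 9.0
smooth = mean_filter(img)
print(smooth[2, 2])  # the dot's energy is spread over its 3x3 neighbourhood
```

The output here is again an image, matching the definition above; replacing the mean with an edge operator would instead yield a set of characteristics derived from the image.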
Image processing usually refers to digital image processing, but optical and analog image processing also are possible. This article is about general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.
Closely related to image processing are computer graphics and computer vision. In computer graphics, images are manually made from physical models of objects, environments, and lighting, instead of being acquired (via imaging devices such as cameras) from natural scenes, as in most animated movies. Computer vision, on the other hand, is often considered high-level image processing out of which a machine/computer/software intends to decipher the physical contents of an image or a sequence of images (e.g., videos or 3D full-body magnetic resonance scans).
In modern sciences and technologies, images also gain much broader scopes due to the ever growing importance of scientific visualization (of often large-scale complex scientific/experimental data). Examples include microarray data in genetic research, or real-time multi-asset portfolio trading in finance.
INTERACTIVE SYSTEM
An interactive whiteboard (IWB), is a large interactive display that connects to a computer. A projector projects the computer's desktop onto the board's surface where users control the computer using a pen, finger, stylus, or other device. The board is typically mounted to a wall or floor stand. They are used in a variety of settings, including classrooms at all levels of education, in corporate board rooms and work groups, in training rooms for professional sports coaching, in broadcasting studios, and others.
OPERATION
An interactive whiteboard (IWB) device is connected to a computer via USB or a serial port cable, or else wirelessly via Bluetooth or a 2.4 GHz wireless link. In the latter case, WEP and WPA/PSK security is available. A device driver is usually installed on the attached computer so that the interactive whiteboard can act as a Human Input Device (HID), like a mouse. The computer's video output is connected to a digital projector so that images may be projected on the interactive whiteboard surface.
The user then calibrates the whiteboard image using a pointer as necessary. After this, the pointer or other device may be used to activate programs, buttons, and menus from the whiteboard itself, just as one would ordinarily do with a mouse. If text input is required, the user can invoke an on-screen keyboard or, if the whiteboard provides for this, utilize handwriting recognition. This makes it unnecessary to go to the computer keyboard to enter text.
Thus, an IWB emulates both a mouse and a keyboard. The user can conduct a presentation or a class almost exclusively from the whiteboard. In addition, most IWBs are supplied with software that provides tools and features specifically designed to maximize interaction opportunities. These generally include the ability to create virtual versions of paper flipcharts, pen and highlighter options, and possibly even virtual rulers, protractors, and compasses—instruments that would be used in traditional classroom teaching.
RELATED WORKS
Shadow Puppets: Supporting Collocated Interaction with Mobile Projector Phones Using Hand Shadows, Lisa G. Cowan, Kevin A. Li
Pico projectors attached to mobile phones allow users to view phone content using a large display. However, to provide input to projector phones, users have to look at the device, diverting their attention from the projected image. Additionally, other collocated users have no way of interacting with the device. Sharing information displayed on a mobile device's small screen with collocated people can be difficult. Pico projectors make it easier for mobile phone users to share visual information with those around them using a projected image, which can be much larger than the device's screen. However, current commodity projector phones only support input via the handset’s user interface. As a result, users must look at the handset to interact with the phone's buttons or touch screen, dividing attention between the handset and the projected display. This context switching can distract presenters and viewers from ongoing conversations taking place around the projected display. Additionally, viewers may find it difficult to interpret what the presenter is doing as he interacts with the handset, and they have no way of interacting with the system themselves.
High-Accuracy Stereo Depth Maps Using Structured Light, Daniel Scharstein, Richard Szeliski
Recent progress in stereo algorithm performance is quickly outpacing the ability of existing stereo data sets to discriminate among the best-performing algorithms, motivating the need for more challenging scenes with accurate ground-truth information. This paper describes a method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information using structured light. Unlike traditional range-sensing approaches, our method does not require the calibration of the light sources and yields registered disparity maps between all pairs of cameras and illumination projectors. We present new stereo data sets acquired with our method and demonstrate their suitability for stereo algorithm evaluation. We use structured light to uniquely label each pixel in a set of acquired images, so that correspondence becomes (mostly) trivial, and dense pixel-accurate correspondences can be automatically produced to act as ground-truth data. Structured-light techniques rely on projecting one or more special light patterns onto a scene, usually in order to directly acquire a range map of the scene, typically using a single camera and a single projector. Random light patterns have sometimes been used to provide artificial texture to stereo-based range sensing systems. Another approach is to register range data with stereo image pairs, but the range data is usually of lower resolution than the images, and the fields of view may not correspond exactly, leading to areas of the image for which no range data is available.
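The core idea of uniquely labeling pixels with structured light can be sketched with binary Gray-code stripe patterns. The following Python toy (our illustration; the paper's actual patterns and resolutions differ) projects a few stripe patterns and shows how a camera pixel's observed on/off sequence identifies the projector column it corresponds to:

```python
WIDTH = 16   # toy projector resolution (assumed for illustration)
BITS = 4     # number of stripe patterns: ceil(log2(WIDTH))

def gray(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def decode(bits):
    """Recover the projector column from the observed on/off sequence."""
    g = 0
    for b in bits:  # most significant bit first
        g = (g << 1) | b
    n = g           # invert the Gray code by cascading XORs
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

# "Project" BITS stripe patterns: pattern k shows bit k of each column's Gray code.
patterns = [[(gray(col) >> (BITS - 1 - k)) & 1 for col in range(WIDTH)]
            for k in range(BITS)]

# A camera pixel imaging projector column 11 observes this bit sequence:
observed = [patterns[k][11] for k in range(BITS)]
print(decode(observed))  # -> 11: correspondence becomes trivial
```

Gray codes are a common choice here because adjacent columns differ in exactly one pattern, which limits the damage from a single misread stripe boundary.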
Skinput: Appropriating the Body as an Input Surface, Chris Harrison, Desney Tan, and Dan Morris
Skinput is a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception – our sense of how our body is configured in three-dimensional space – allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area. This approach provides an always-available, naturally portable, on-body finger input system. We assess the capabilities, accuracy, and limitations of our technique through a two-part, twenty-participant user study. To further illustrate the utility of our approach, we conclude with several proof-of-concept applications we developed.
Bonfire: A Nomadic System for Hybrid Laptop-Tabletop Interaction, S. K. Kane, D. Avrahami, J. O. Wobbrock, B. Harrison, A. D. Rea, M. Philipose, and A. LaMarca.
Bonfire is a self-contained mobile computing system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each micro-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. Thus, Bonfire is neither a pure laptop system nor a pure tabletop system, but an integration of the two into one new nomadic computing platform. This integration (1) enables observing the periphery and responding appropriately, e.g., to the casual placement of objects within its field of view, (2) enables integration between physical and digital objects via computer vision, (3) provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures, and (4) enlarges the input/output space to enrich existing applications. Bonfire attempts to combine the advantages of the laptop's screen, keyboard, and computing power with the natural input, extended space, and object-awareness of projected and perceived peripheral displays. Bonfire provides a large interaction space without significantly increasing the laptop's size and weight, as with recent multi-display laptops. We describe Bonfire's architecture and offer scenarios that highlight Bonfire's advantages. We also include lessons learned and insights for further development and use.
PlayAnywhere: A Compact Interactive Tabletop Projection-Vision System, Andrew D. Wilson
PlayAnywhere is a front-projected computer vision-based interactive table system that uses a new commercially available projection technology to obtain a compact, self-contained form factor. PlayAnywhere's configuration addresses installation, calibration, and portability issues that are typical of most vision-based table systems, and is thereby particularly well suited to consumer applications. PlayAnywhere also makes a number of contributions related to image processing techniques for front-projected vision-based table systems, including a shadow-based touch detection algorithm, a fast, simple visual bar code scheme tailored to projection-vision table systems, the ability to continuously track sheets of paper, and an optical flow-based algorithm for the manipulation of onscreen objects that does not rely on fragile tracking algorithms.
EXISTING SYSTEM
USE OF MULTIPLE CAMERAS
In the existing system, multiple cameras are used to obtain the relative position between the fingertip and the projected surface. A pico-projector can be used to significantly increase the limited screen size of mobile devices. With the development of projection technology, we believe that embedded projectors in mobile phones will become very common, and people will enjoy displaying digital content on everyday surfaces. Meanwhile, interactions (e.g., touch, gesture) on the projected display are thought to be appealing. To achieve touch interaction, the biggest challenge lies in determining whether the fingers touch the projected surface or not.
PROPOSED SYSTEM
One projector and one camera make up a 3-D measurement system. In this field, structured light, which achieves 3-D reconstruction by analyzing a feedback image of a certain pattern projected on the object, is one of the most promising techniques. However, the computational complexity of 3-D reconstruction is high, which greatly influences the real-time capability of the system. Therefore, we propose a novel approach that exploits the distortion of the projected buttons caused by the fingers to detect touch operations on the screen. For example, if a button is clicked by a finger, the shape of the button changes in the camera's image plane (CIP). Furthermore, we explore a model of the button deformation caused by the finger, which shows that there is a positive relation between the button's distortion and the finger's height above the projected surface. The touch information of the finger can then be extracted from the button's distortion. Instead of tracking the hand's 2-D position, which is itself recognized as a challenging task in computer vision, we focus on detecting the deformation of the buttons to determine touch actions on the projected surface.
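The thresholding logic behind this idea can be sketched abstractly. In the following Python toy (our illustration; the edge samples, numbers, and threshold are all assumptions, not taken from the system), the button outline expected from the projector-to-camera mapping is compared against the outline observed in the camera image, and the resulting distortion score is used as a proxy for the finger's height:

```python
import numpy as np

def distortion(expected_edge, observed_edge):
    """Mean displacement (pixels) between the button edge predicted in the
    camera's image plane and the edge actually observed there."""
    return float(np.mean(np.linalg.norm(observed_edge - expected_edge, axis=1)))

def is_touch(score, threshold=1.0):
    """Per the model above, distortion grows with the finger's height, so a
    small score (given that a finger is already present in the button region)
    indicates the finger has reached the surface. Threshold is illustrative."""
    return score < threshold

# Toy data: a square button outline sampled at its four corners.
expected = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
hovering = expected + 4.0   # finger well above the surface: large distortion
touching = expected + 0.3   # finger on the surface: edges barely displaced

print(is_touch(distortion(expected, hovering)))  # False
print(is_touch(distortion(expected, touching)))  # True
```

A real implementation would of course extract the observed edge from the camera frame rather than receive it as an array, but the decision step reduces to this comparison.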
SOFTWARE DESCRIPTION
MATLAB
INTRODUCTION
MATLAB, which stands for MATrix LABoratory, is a software package developed by MathWorks, Inc. to facilitate numerical computations as well as some symbolic manipulation. The collection of programs (primarily in Fortran) that eventually became MATLAB was developed in the late 1970s by Cleve Moler, who used them in a numerical analysis course he was teaching at the University of New Mexico. Jack Little and Steve Bangert later reprogrammed these routines in C and added M-files, toolboxes, and more powerful graphics (original versions created plots by printing asterisks on the screen). Moler, Little, and Bangert founded MathWorks in California in 1984.
What is MATLAB?
• MATLAB (“MATrix LABoratory”) is a tool for numerical computation and visualization. The basic data element is a matrix, so if you need a program that manipulates array-based data it is generally fast to write and run in MATLAB (unless you have very large arrays or lots of computations, in which case you’re better off using C or Fortran).
• High level language for technical computing
• Stands for MATrix LABoratory
• Everything is a matrix - easy to do linear algebra
Using MATLAB
The best way to learn to use MATLAB is to sit down and try to use it. This handout contains a few examples of basic MATLAB operations, but after you've gone through this tutorial you will probably want to learn more. Check out the "Other Resources" listed at the end of this handout.
The Beginning
When you start MATLAB, the command prompt ">>" appears. You will tell MATLAB what to do by typing commands at the prompt.
Creating matrices
The basic data element in MATLAB is a matrix. A scalar in MATLAB is a 1x1 matrix, and a vector is a 1xn (or nx1) matrix.
Advanced operations
There's a lot more that you can do with MATLAB than is listed in this handout. Check out the MATLAB help or one of the "Other Resources" if you want to learn more about the following more advanced tools:
• Numerical integration (quad)
• Discrete Fourier transform (fft, ifft)
• Statistics (mean, median, std, var)
• Curve fitting (cftool)
• Signal processing (sptool)
• Numerical integration of systems of ODEs (ode45)
M-files and functions
If you are doing a computation of any significant length in MATLAB, you will probably want to make an m-file. Anything that you would type at the command prompt you can put in the m-file (for example, “script.m”) and then run it all at once (by typing the name of the m-file, e.g. “script”, at the command prompt). You can even add comments to your m-file, by putting a “%” at the beginning of a comment line.
File I/O
MATLAB allows you to save matrices and read them in later. The simplest way to do this is using the commands “save” and “load”. Typing in “save A” saves matrix A to a file called A.mat. If you want to read in matrix A later, just type “load A”. You can also use the load command to read in ASCII files, as long as they are formatted correctly. Formatted correctly means that the number of columns in each line is the same and the columns are delimited with a space. Suppose you have a file called “datafile.dat” that contains the following lines:
12.5 6 9
1 3.5 125
2 4 0
You can put multiple individual plots in the same figure window using the "subplot" command. Type "help subplot" for more information. Once you're done with your plot, you'll probably want to label the axes:
>> xlabel(‘x’)
>> ylabel(‘y’)
You can also give it a title:
>> title(‘My plot’)
Now, let’s do a 3-D example. First, generate some sample data:
>> z = peaks;
The "peaks" command generates a sample function. The default is a 49x49 matrix, but you can specify a different size. "peaks" can be very useful if you want to test your plotting script or just play around with making plots.
We can make a 3-D shaded surface plot using the “surf” command:
>> surf(z)
OUTLINE:
• Introduction and where to get MATLAB
• Data structure: matrices, vectors and operations
• Basic line plots
• File I/O.
The MATLAB System:
• Development Environment
• Mathematical Function Library
• MATLAB language
• Application Program Interface (API).
Creating Matrices:
• zeros(m, n): matrix with all zeros
• ones(m, n): matrix with all ones.
• eye(m, n): the identity matrix
• rand(m, n): matrix of uniformly distributed random numbers
• randn(m, n): matrix of normally distributed random numbers
• magic(m): square matrix whose rows, columns, and diagonals sum to the same value.
• pascal(m) : Pascal matrix.
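For readers following along without MATLAB, most of these constructors have direct NumPy analogues. This Python sketch (our addition, not part of the original tutorial) shows the correspondence:

```python
import numpy as np

Z = np.zeros((2, 3))       # zeros(2, 3): matrix of all zeros
O = np.ones((2, 3))        # ones(2, 3): matrix of all ones
I = np.eye(3)              # eye(3): the identity matrix
R = np.random.rand(2, 3)   # rand(2, 3): uniform random numbers on [0, 1)
N = np.random.randn(2, 3)  # randn(2, 3): standard normal random numbers

print(I.trace())  # 3.0 - ones on the diagonal, zeros elsewhere
```

(`magic` and `pascal` have no one-line NumPy equivalents; SciPy offers `scipy.linalg.pascal` for the latter.)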
Basic Mathematical Operations
Addition: >> C = A + B
Subtraction: >> D = A - B
Multiplication: >> E = A * B (matrix multiplication)
>> E = A .* B (element-wise multiplication)
Division: left division and right division
>> F = A ./ B (element-wise right division)
>> F = A / B (A * inverse of B)
>> F = A .\ B (element-wise left division, i.e. B ./ A)
>> F = A \ B (inverse of A * B)
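The distinction between matrix operators and their element-wise (dotted) forms carries over to Python/NumPy; this sketch (our addition, for readers without MATLAB) shows the correspondence:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

print(A @ B)                  # matrix multiplication: MATLAB's A * B
print(A * B)                  # element-wise product:  MATLAB's A .* B
print(A / B)                  # element-wise division: MATLAB's A ./ B
print(np.linalg.solve(A, B))  # MATLAB's A \ B, i.e. inv(A) * B
print(B @ np.linalg.inv(A))   # MATLAB's B / A, i.e. B * inv(A)
```

Note that `np.linalg.solve` is preferred over forming the explicit inverse, for the same numerical reasons MATLAB recommends `\` over `inv`.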
MATLAB consists of:
The MATLAB language
• a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features.
The MATLAB working environment
• the set of tools and facilities that you work with as the MATLAB user or programmer, including tools for developing, managing, debugging, and profiling MATLAB applications.
Handle Graphics
• The MATLAB graphics system. It includes high-level commands for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics.
The MATLAB function library.
• a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms, as well as special image-processing functions.
The MATLAB Application Program Interface (API)
• a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files.
Starting and Quitting MATLAB
To start MATLAB click on the MATLAB icon or type in MATLAB, followed by pressing the enter or return key at the system prompt. The screen will produce the MATLAB prompt >> (or EDU >>), which indicates that MATLAB is waiting for a command to be entered.
• In order to quit MATLAB, type quit or exit after the prompt, followed by pressing the enter or return key.
Display Windows
MATLAB has three display windows. They are
1. A Command Window, which is used to enter commands and data.
2. A Graphics Window which is used to display plots and graphs.
3. An Edit Window, which is used to create and modify M-files. M-files are files that contain a program or script of MATLAB commands.
Entering Commands
Every command has to be followed by a carriage return <cr> (enter key) in order that the command can be executed. MATLAB commands are case sensitive and lower case letters are used throughout.
To execute an M-file (such as Project_1.m), simply enter the name of the file without its extension (as in Project_1).
MATLAB Expo
In order to see some of the MATLAB capabilities, enter the demo command. This will initiate the MATLAB EXPO. MATLAB EXPO is a graphical demonstration environment that shows some of the different types of operations which can be conducted with MATLAB.
Abort
In order to abort a command in MATLAB, hold down the control key and press c to generate a local abort with MATLAB.
The Semicolon (;)
If a semicolon (;) is typed at the end of a command, the output of the command is not displayed.
Typing %
When the percent symbol (%) is typed at the beginning of a line, the line is designated as a comment. When the enter key is pressed, the line is not executed.
The clc Command
Typing the clc command and pressing enter clears the command window. Once the clc command is executed, a clear window is displayed.
Creating and Saving a Script File
Any text editor can be used to create script files. In MATLAB, script files are created and edited in the Editor/Debugger Window, which can be opened from the Command Window by selecting File, New, and then M-file. Once the window is open, the commands of the script file are typed line by line. The commands can also be typed in any text editor or word processor program and then copied and pasted into the Editor/Debugger Window. The second type of M-file is the function file. A function file enables the user to extend the basic library by adding one's own computational procedures. Function M-files are expected to return one or more results. Script files and function files may include references to other MATLAB toolbox routines.
MATLAB function file begins with a header statement of the form:
• function [result1, result2, ...] = name(argument list)
Running a Script File
A script file can be executed either by typing its name in the Command Window and then pressing the Enter key, or directly from the Editor Window by clicking on the Run icon. The file is assumed to be in the current directory or in the search path.
Input to a Script File
There are three ways of assigning a value to a variable in a script file.
1. The variable is defined and assigned value in the script file.
2. The variable is defined and assigned value in the Command Window.
3. The variable is defined in the script file, but a specified value is entered in the Command Window.
CONCLUSION
IPS, an interactive projective system composed merely of a projector and a mono-camera, was proposed. The system supports touch interaction on a flat surface. To achieve this goal, we explored the finger's influence on the button's distortion and built a model to describe it. We found a significant positive correlation between the button's distortion and the height of a bare finger. A novel, fast, and robust approach was then proposed to detect touch actions on the surface. It is performed in three stages: 1) mapping by homography and extracting the region of interest (ROI), 2) distortion detection, and 3) touch judgment. The button-distortion detection, which is similar to Canny edge detection, is robust to shadows and to the finger's own edge, because the detected edge direction is compared with the button edge's direction. Additionally, the touch-detection algorithm is processed only on the ROI, so the computational complexity is low, which ensures the real-time performance of the touch detection.
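Stage 1 of this pipeline, mapping button coordinates between the projector plane and the camera's image plane by a homography, can be sketched as follows (a Python illustration with a toy translation-only homography assumed for clarity; a calibrated system would estimate H from point correspondences):

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) projector-plane points into the camera's image plane.
    Points are lifted to homogeneous coordinates, transformed by H,
    then divided through by the third coordinate."""
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Toy homography: identity plus a translation of (5, 2) pixels (assumed).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

button_corners = np.array([[0.0, 0.0], [10.0, 10.0]])
mapped = apply_homography(H, button_corners)
print(mapped)  # maps (0,0) -> (5,2) and (10,10) -> (15,12)
```

Once the button's expected outline is mapped into the camera frame like this, stages 2 and 3 compare it against the observed outline and apply the touch-judgment threshold described above.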