Interactivity With A Virtual 3D Environment

What does virtual reality actually mean? Virtual Reality can broadly be described as synthetic stimulation of the senses. Contrary to the popular belief propagated by most media, VR extends far beyond the bounds of 3D computer-generated environments. The concept is much wider, stretching from its most primitive forms, the radio or television, to the realm of bio-mechanics, which involves artificial sensations of touch, smell and taste. The computer is but a tool, even if a very important one, in this field. Virtual Reality as a concept focuses on the creation of environments and the immersion of subjects in them.

The type and complexity of the environments depend on the equipment and, of course, the purpose. The Virtual Environment (VE) may be incomplete, for example in a cinema hall, where a user only uses his eyes and ears and no other senses. So far a VE complete in all respects has not been developed, but for 'Star Trek - The Next Generation' enthusiasts an excellent example is the 'holodeck', a room in the starship where life-like environments (including smell and taste) are generated for the entertainment of the crew, who stay away from Earth for extended periods of time.

Why Virtual Reality?
The problem: As mechanisation and specialisation rear their ugly heads, the need for an aesthetic facade was felt. This is where architects were called upon to create friendly work environments among hostile-looking mechanical devices.

The advent of automation has brought more and more computer operators into the workplace. This means more monitors and keyboards. These are not necessarily the best interfaces for a worker, as medical experts have shown time and again. This is where VR was considered as an option.
The solvers: Architects have always created such environments out of bricks and cement, and interiors using lights and sometimes sounds, and so were the natural choice as designers of such interfaces.

The idea was to create interfaces friendlier than the standard array of buttons and, of course, more intuitive. This led to the interface becoming more and more like real life, with sounds and images or icons to indicate events or messages.
The solution: By and by the interfaces began to be used as a tool for creation and also as a means of expression. Computer graphics emerged as a totally new medium for art.

It slowly became an interactive medium, giving feedback to the user according to his actions. The range of human senses with which these interfaces interacted was not limited to sight or hearing. In fact, one of the first examples of a virtual interface is the joystick of a civilian aircraft, which simulated scaled-down wing and fuselage vibrations without actually transferring the mechanical energy.

The Virtual Environment
It was during the late 80's that high-speed computers came into the picture, giving enough power to convert mathematical data into images in real time. Initially computers were only used for viewing data in easily readable forms such as 3D graphs. It was only after the entertainment industry started using 3D virtual environments for first-person-view games like Wolfenstein that their potential as a training and visualisation tool was realised.

At about the same time, research was being done on training pilots and astronauts with the help of such computer-simulated environments. During this research it was realised that there was immense potential for designing mechanical equipment in such an environment. Where else on Earth can you get Moon-like atmospheric and gravity conditions to test a new moon vehicle design? The environment could also be given controllable parameters, which gave man a chance to play God.

Forms of Virtual Reality
Most of virtual reality's essentials can be described in threes. There are three general forms of VR, for example.

There are also three types of VR application, three levels of necessary VR software, and three general types of VR hardware peripherals.
Through-the-window: The most common form of virtual reality, called through-the-window VR, is already well known to the general public through its widespread use in arcade games and motion-based seat theatres. Through-the-window VR allows a participant to look into a virtual world from a seat in the real world. The "window" the user looks through may be as small as a home computer monitor, or as large as a two-storey movie screen. Motion-based seat theatres, the most common manifestation of through-the-window VR, allow for no true interactivity. The user is simply flown through a 2D-film-based world, usually at high speed on a bumpy ride, without being given any chance to change the itinerary or to interact with objects in the world.

Through-the-window arcade games, however, are based on computer-generated images, not film, and thus are often more effective, usually allowing both 3D effects and some interactivity. In a through-the-window theatre experience, the participant views scenes on the screen while the seat lurches and shudders in response to the images portrayed: roller-coasters, swan-dives off buildings, and cliff-edge dune-buggy rides. The images are almost always "real" images (that is, photographed with a motion picture camera) rather than "virtual" images (created in software). And any participant who looks away from the screen during the experience "falls out of the world" and back into the reality of the theatre. But the sensations of speed and rapid movement while looking at that world can be convincing.
Immersive: Immersive VR, on the other hand, is done with a head-mounted display and emphasises interactivity with a virtual (software-derived, not filmed) 3D environment.

The head mount allows the participant to enter and become immersed in the virtual world. The principal difference between immersive and other forms of VR is that the user in an immersive system can turn around, look behind, and see something in the virtual world - swimming fish, exploding volcanoes, angry wasps, or the back door - not just the back of a theatre seat. An immersive virtual world is genuinely three-dimensional and inclusive. Usually, immersive worlds are also interactive: the participant decides where to travel. This freedom may even extend to travelling outside models built by the VR world developer, even allowing the user to fly through the ceiling and collide with objects.

The objects may in turn respond to the viewer's movements. For example, on opening a south-side window the viewer may let the wind into the room, upsetting something. The head-mounted display has two components that make the effect of immersion possible. One is the display itself, typically LCDs mounted in front of collimating lenses that straighten out the rays from the LCD, making them appear to emanate from optical infinity rather than 4 inches from the eyes.

The other element is a tracking device that records the movement of the user and sends the coordinates it collects to the computer. Those coordinates tell the software rendering the images on the LCD where the user is looking. Without the tracker the user's viewpoint would not change with head movement and the effect of immersion would not be convincing.
Second-person: The third type of VR, second-person VR, uses a camera to capture the image of the participant and insert it into the virtual world. Users then watch their own images on a television or movie screen interacting with objects in the virtual world. One popular game in second-person VR features virtual hockey rinks with virtual skaters, and pucks that are deflected by a real, keyed-in goalie.
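As a rough illustration of the tracker-to-viewpoint loop described above, the sketch below turns head-orientation readings into a viewing direction for the renderer. The tracker and renderer objects and their methods are hypothetical stand-ins, not the API of any particular VR system.

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Convert head yaw/pitch (degrees) into a unit gaze vector."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),   # x: left/right
            math.sin(pitch),                   # y: up/down
            math.cos(pitch) * math.cos(yaw))   # z: forward

def immersive_loop(tracker, renderer):
    """Each frame: read the head tracker, re-aim the virtual camera, redraw."""
    while renderer.running:
        yaw, pitch = tracker.read_orientation()            # hypothetical tracker API
        renderer.set_camera(direction=view_direction(yaw, pitch))
        renderer.draw_frame()                              # the image follows the head
```

Without the `tracker.read_orientation()` step the camera would never move, which is exactly the "viewpoint does not change with head movement" failure the text describes.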

In most second-person systems the insertion of the participant into the virtual world is done by chroma-keying. This is quite similar to what TV viewers see when they watch the weatherman in front of weather maps.

VR Application Types
Perambulation: Perambulation involves walking or flying through some type of model. This may take the form of rolling through the CAD rendering of a hospital in a virtual wheelchair, checking for architectural barriers to access. Or it may mean meandering down the human oesophagus at the end of a virtual endoscope.

In perambulation the user is generally observing aspects of the virtual world, and interactivity is limited. Interactivity may focus on moving objects around in a virtual space (repositioning furniture in a virtual house), or removing objects from a scene, but for the most part perambulation applications centre on observation rather than manipulation of the virtual world.
Synthetic experience: Synthetic experience, on the other hand, involves training of muscle memory or sight-and-response co-ordination. Synthetic experience applications such as virtual surgery or power-plant control-room operations allow participants to safely and cheaply practise skills that are dangerous or expensive to develop in the real world. Such applications allow the user to experiment on and get feedback from different objects in the virtual world.

Sim City, a game, allows a person to act as a city planner, his actions deciding the fate of the city as it grows. Here real-life situations are also simulated, like earthquakes, floods, accidents and strikes by dissatisfied workers. A participant in such a world learns how to perform actions, usually with his hands, by practising them (not just observing them), in the same way as a pianist learns to play the piano. The actions performed in synthetic experience usually relate one action to a certain result and are practised over and over again until they become second nature. Efforts are underway to train architects in developing buildings that integrate the structural system from the initial design stage, using direct feedback mechanisms that tell the architect about the stability of the structure as and when it is created.
Realisation / Reification: The third type of application, realisation or reification, allows users to see and graphically manipulate profuse, context-dependent data.

Reification means making a thing out of an idea, and realisation, an extension of scientific visualisation and visual languages, refers to the representation of complex data in a graphical fashion. Such a representation, unlike most scientific visualisation or visual language icons, is usually both three-dimensional and interactive. The chief uses of realisation are in industries that process prodigious quantities of data in real time. Consider the example of a campus design whose VR model has been created for people to experience.

A large number of subjects are made to take a tour of the campus and their movements tracked. Data such as preferred vehicles could also be recorded. A final graph of the most densely and most sparsely travelled paths could then decide the road widths and possible green areas.
Another form of reification is called Augmented Reality. This typically consists of a headset through which the viewer can see the real world, but with the virtual world superimposed on it. This technology was first used in the HUD (head-up display) of fighter planes, where flight data was superimposed on the windscreen.

Augmented reality has found a place in many manufacturing units, where it is used to check or correct manufacturing flaws in machine parts. An outline of a perfectly shaped machine part is superimposed on the actual part and shows up the flaws in it. The outline changes along with head movements around the machine part.

As a design tool in Architecture?

VR for architects started as a visualisation tool. It was soon realised that visualisation on its own had novelty value and no real utility; the power of this system lies in real-world simulation. Some uses of this system are described in the following paragraphs.
Behavioural Models: With the help of behavioural scientists, computer models are generated with people of different kinds and behaviours inserted into building models. After the model is complete, different types of situations, like fire or earthquake, are created and the movement patterns of the people are studied.
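As a rough illustration, the sketch below generates movement patterns for a handful of occupants who each head for the nearest exit at their own walking speed. The room, occupants, speeds and exit positions are invented for illustration and stand in for the far richer behavioural models the text describes.

```python
import math

# Occupants: (name, x, y, walking speed in metres per step)
occupants = [("adult-1", 2.0, 3.0, 1.2), ("child-1", 5.0, 4.0, 0.8),
             ("adult-2", 8.0, 1.0, 1.1)]
exits = [(0.0, 0.0), (10.0, 5.0)]   # door positions in the room

def step_towards_exit(x, y, speed):
    """Move one step towards the nearest exit; return new position and done flag."""
    ex, ey = min(exits, key=lambda e: math.hypot(e[0] - x, e[1] - y))
    dist = math.hypot(ex - x, ey - y)
    if dist <= speed:                       # reaches the door this step
        return ex, ey, True
    return x + speed * (ex - x) / dist, y + speed * (ey - y) / dist, False

# Record every occupant's path until all are out; these paths are the
# "movement patterns" an architect would study for exits and doorway widths.
paths = {name: [(x, y)] for name, x, y, _ in occupants}
state = {name: (x, y, False) for name, x, y, _ in occupants}
speeds = {name: s for name, _, _, s in occupants}
while not all(done for _, _, done in state.values()):
    for name, (x, y, done) in state.items():
        if not done:
            state[name] = step_towards_exit(x, y, speeds[name])
            paths[name].append(state[name][:2])
```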

In one such computer model of a school, for example, it was found that in an emergency children tend to be more organised than adults, each following the child in front, and that most doorways should be designed to allow an adult carrying a child to pass easily.
Handicapped Space Design Method: A very interesting design methodology has evolved using VR as a tool. It is based on Erno Rubik's cube, which is an example of a limited or handicapped space, where one thinks only in terms of rotation without translation. In a similar way the designer is immersed in a space where real-world limits apply: site boundaries, lack of sunlight, soil conditions, even building bylaws and, of course, gravity. This method uses an immersive VR system to make the designer the god of this VE.

Once inside the VE he can say 'let there be a wall' and there will be a wall. He can put in windows and stretch them in real time to suit the illumination needs, and whenever required can view his creation from any angle he desires. For example, while placing a beam in position, the designer may see figures pertaining to the beam, such as stress values and the estimated cost of the element depending on its size and shape, and be warned if the beam is too thin to carry the load. So far this method has not been widely used, because it requires a lot of practice and it is difficult to get rid of the habit of pencil and paper.
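A minimal sketch of that kind of live feedback, assuming a simply supported rectangular beam under a uniform load. The span, dimensions, load, allowable stress and cost rate are invented values that a real system would take from the structural model behind the VE.

```python
def beam_feedback(span_m, width_m, depth_m, load_kn_per_m,
                  allowable_mpa=10.0, cost_per_m3=9000.0):
    """Return live feedback for a simply supported rectangular beam."""
    # Maximum bending moment for a uniformly distributed load: w*L^2/8 (kN*m)
    moment_knm = load_kn_per_m * span_m ** 2 / 8.0
    # Section modulus of a rectangle: b*h^2/6 (m^3)
    section_modulus = width_m * depth_m ** 2 / 6.0
    # Bending stress: kN*m / m^3 gives kPa, so divide by 1000 for MPa
    stress_mpa = moment_knm / section_modulus / 1000.0
    cost = span_m * width_m * depth_m * cost_per_m3
    return {
        "stress_mpa": round(stress_mpa, 2),
        "estimated_cost": round(cost, 2),
        "warning": "beam too thin for this load" if stress_mpa > allowable_mpa else None,
    }

# Example: a 4 m beam, 230 mm x 300 mm, carrying 12 kN/m
print(beam_feedback(4.0, 0.23, 0.30, 12.0))
```

In the VE these numbers would simply update as the designer stretches the beam, instead of being printed.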

The method has, however, found applications in the design of marine structures, space stations and probable lunar or Martian bases, where the conditions or world limits are different from those on Earth.
Co-operative Design: Another very important use for this tool is co-operative design. One such example is Biosphere II, an 8-year project to study the interdependence of different natural habitats on the Earth. A group of scientists was enclosed for this whole period inside a large structure with areas designated for different types of regions on Earth, like deserts and tropical forests, in order to study them.

In this project architects, structural engineers, environmental scientists, air-conditioning experts and others experienced the same virtual space during the design process and gave their inputs at each and every stage of the design. This type of designing enables all the participants to view and respond to the same stimulus according to their training and knowledge. A parallel design done in the conventional way would involve extensive drawings that would be checked and revised by each scientist or engineer and would go through a number of iterations before final approval. It would also require each of these experts to understand the drawings and conventions.

What does VR mean to an architect?
In architecture, most VR work has focused on edifice prototyping and testing.

Edifices range from rooms in private residences, to hospitals, to vast chemical plant facilities. This approach allows the user to construct a virtual building first and then test it for compliance with various regulations or for the comfort and well-being of future occupants. Recent work in this area has concentrated on linking the construction of a virtual building to a spreadsheet, so that design decisions can be visualised and their financial outcomes viewed simultaneously. Viewers can then make changes to a building in real time to satisfy design-versus-finance constraints.
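A minimal sketch of such a design-to-cost linkage, assuming an invented list of building elements, quantities and unit rates; editing an element in the "model" immediately recomputes the spreadsheet totals against a budget.

```python
# Hypothetical unit rates (currency per unit) and element quantities
unit_rates = {"wall_m2": 45.0, "window_each": 220.0, "roof_m2": 60.0}
design = {"wall_m2": 180.0, "window_each": 12, "roof_m2": 95.0}

def cost_sheet(design, rates):
    """Recompute the cost spreadsheet for the current state of the design."""
    rows = {item: qty * rates[item] for item, qty in design.items()}
    rows["TOTAL"] = sum(rows.values())
    return rows

budget = 18000.0
print(cost_sheet(design, unit_rates))           # costs for the initial design

design["window_each"] += 4                      # the viewer stretches in four more windows
sheet = cost_sheet(design, unit_rates)
print(sheet, "within budget" if sheet["TOTAL"] <= budget else "over budget")
```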

Viewers can then check whether the building looks the way they want it to, and study the resulting spreadsheet created by the system to see whether the costs are in line with the budget. The system may also generate reports on the environmental impact of design decisions.

How do they do it?
Until now Virtual Reality has been the domain of high-speed, multiprocessor computers which cost the earth, but with desktop power doubling every year it is coming well within reach of the common man. The hardware usually ranges from VR headsets with different views for each eye, to data-gloves, to 3D tracker balls. The software uses three basic levels of techniques, depending on the kind of use and the hardware available.

Primitive Geometry: The first level is primitive geometry, with objects built from basic 3D primitives such as spheres and cubes. These may be variously shaded, lit, coloured, positioned and animated to provide a world that appears to be filled with recognisable objects.
Polygon: A more detailed level below primitive geometry is the polygon. Think of polygons as the faces of a cube or the sides of a pyramid.

Onto these polygons can be mapped textures (scanned images such as photographs, zebra stripes or brick patterns) that add to the photorealism of the final world.
Voxels: At the lowest and most processor-intensive level are voxels, three-dimensional pixels. This is the level at which medical imaging is done. Because the amount of detail it can convey is huge, and consequently its processing demand prodigious, voxel-level work in VR is, at least so far, not readily attainable.
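A rough back-of-the-envelope sketch of why voxel data is so demanding, assuming one byte per voxel, a few illustrative volume sizes and an arbitrary 30 frames per second:

```python
# How much raw data a modest voxel volume represents, at one byte per voxel
for side in (128, 256, 512):
    voxels = side ** 3
    megabytes = voxels / 1_000_000
    print(f"{side}^3 volume: {voxels:,} voxels ~ {megabytes:,.0f} MB")

# Redrawing such a volume 30 times a second means touching billions of
# voxels per second, which is why voxel-level VR needs so much horsepower.
print(f"512^3 at 30 fps: {512 ** 3 * 30:,} voxel visits per second")
```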

But as this is nothing more than a performance limitation, requiring no fundamental breakthrough in technology, only more horsepower, it is likely to dissolve in the next few years.

What The Future May Hold
Imagine a project for a technical lab being built for space research. Many kinds of technical people will be involved in such a project. Conventionally each one would be required to understand the others' technical jargon, and it is of course the duty of the architect to co-ordinate them all. But working in the virtual environment everyone's jargon is converted into solid (well, almost) tangibles which all can perceive and experiment with.

After the design process is over, the construction process will start. Since the virtual model is already finalised, the job can be handled by industrial robots, much like a plotter plotting a drawing on paper after the data has been fed into the computer. Supervision of this process, on the other hand, will become very easy with the use of augmented-reality kits, which will give the viewer a superimposed view of the as-is and as-should-be states of the project. By linking the virtual-world data to conventional spreadsheets or project-management methods one may even draw up an accurate project schedule.

The Real World Is Messy
In all of the commercial areas beyond entertainment, VR developers have discovered that pure VR applications seem to be relatively few.

As a result, most commercial work in this field now involves either co-ordination with, or direct integration into, other technologies. Behavioural animation (the insertion of lifelike behaviour into creatures in the virtual world), for example, may require half a dozen technologies and techniques outside of VR. In such a working environment VR becomes one technology among many. And it is likely that, in time, as VR entertainment develops, these same concerns will be important to game builders as well, as they broaden their scope from just 3D graphics in a helmet to genuinely enlivened universes.

The future of virtual reality points in that direction: the commingling of other technologies with VR. For example, as optic fibre replaces copper and bandwidth limitations recede, long-distance networked VR is becoming a reality. A doctor in Seattle may conduct an operation in Delhi and could simply point to the tumour to be cut. As CPU power increases, detailed voxel-level images are being experimented with. The GUI is being replaced by a more intuitive and natural VR UI, and desktop metaphors are being replaced by "landscape" and "interactive-data-taffy" interfaces. As Artificial Life begins to play a role behind the scenes in behaviourally animating virtual objects, virtual worlds will begin to take on the richness and the complexity of the real world.

And as all these advancements accelerate, it will become possible for participants in virtual experiences to feel for the first time that they are natives in the virtual world, not just tourists.