Immersion, Interactivity, and Museum Exhibits

INTRODUCTION

 

GLAM institutions (Galleries, Libraries, Archives, and Museums) are increasingly turning to technological solutions to preserve, exhibit, and disseminate cultural heritage.  Because of the importance of sensory stimuli in presenting the complex and multifaceted objects and representations of cultural heritage, new approaches and techniques, enabled by digital technology, are being employed to provide engaging experiences for both scholars and the general public.  The need for these new approaches is particularly urgent due to the rapidly increasing volume and heterogeneity of born-digital media.  The collection of techniques known as extended reality, or XR, which includes virtual reality and augmented reality, is one such approach to enhancing sensory experiences in the presentation, study, and analysis of the ideas and objects that are the focal points of GLAM institutions.

 

 

XR – Virtual Reality, Augmented Reality, Extended Reality, and Mixed Reality

 

The term extended reality, or XR, refers to any combination of real (physical) and virtual (computer-generated) environments, as well as the human-computer interactions (HCI) associated with these environments.  Although the “X” in XR is commonly understood to denote “extended”, it also serves as a placeholder for any of the following designations: VR (virtual reality), AR (augmented reality), and MR (mixed reality).

 

Virtual reality (VR) denotes computer simulations of reality that exist completely in silico (“in silicon”); that is, they exist only as simulations, with no realization in physical reality, although HCI is supported.  Computer games and flight simulators can be considered forms of virtual reality.  VR is delivered primarily through computer monitors, as well as larger-scale immersive environments, specialized 3D glasses, low-cost, high field-of-view head-mounted displays (HMDs), and other peripherals, such as hand-held devices and different types of headsets.

 

Augmented reality (AR), like VR, is an interactive experience of a real-world environment that incorporates computer simulations, but here the simulations enhance real, physical objects.  The virtual simulations can therefore be considered perceptual information that is “overlaid” onto material reality.  This computer-generated overlay can add to the real environment, or it can hide or mask features or objects in that environment.  Instead of “replacing” material reality, as is the case with VR, AR supplements the user’s view of it.  AR supports multiple sensory modalities, including visual, audio, haptic (tactile feedback), and even, in some cases, olfactory (utilizing the sense of smell).  AR is therefore a combination of real and virtual worlds, and requires real-time human-computer interaction, as well as accurate 3D alignment of the computer-simulated and real objects.  Consequently, AR occurs in the real, physical world.

 

An important application of AR is computer-assisted therapy and surgery, where 3D models of an organ, tissue, or other anatomy, generated from magnetic resonance imaging (MRI) or computed tomography (CT), are overlaid onto a surgeon’s view during a therapeutic intervention.  Digital (virtual) content – imagery, audio, 3D objects, web links, an external application, etc. – is combined with material objects or with locations, and is rendered to the user through smartphones, headsets, or specialized input/output devices.  The content presented to the user is selected by matching the user’s location, obtained from the global positioning system (GPS), or by image recognition.

 

With AR, the designer creates digital content that is meant to be paired with an object or location.  This content is accessible through a smartphone, headset, or other device and is triggered by either image recognition through the device camera or a GPS location match.  The application that matches trigger with referent will display content that might include an image, a sound file, a 3D object, a video, a web link, or a link to another AR track.  AR can also trigger external applications, such as the device phone or Twitter.
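As a minimal illustration of the location-matching half of this triggering mechanism, the following Python sketch checks a device’s GPS reading against a small table of trigger locations; the place names, coordinates, trigger radius, and content fields are invented for illustration and do not correspond to any particular AR platform.

# Hypothetical sketch: matching a device's GPS reading against AR trigger locations.
# The POI names, coordinates, radius, and content fields are illustrative assumptions.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Each trigger pairs a location with the digital content it should display.
triggers = [
    {"name": "Old Town Hall", "lat": 45.3168, "lon": -79.2169,
     "content": {"image": "hall_1905.jpg", "audio": "hall_history.mp3"}},
    {"name": "Harbour Mural", "lat": 45.3201, "lon": -79.2220,
     "content": {"model": "mural.dae", "web": "https://example.org/mural"}},
]

def match_trigger(device_lat, device_lon, radius_m=25.0):
    """Return the name and content of the first trigger within radius_m of the device."""
    for t in triggers:
        if haversine_m(device_lat, device_lon, t["lat"], t["lon"]) <= radius_m:
            return t["name"], t["content"]
    return None

print(match_trigger(45.3169, -79.2170))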

 

Mixed reality (MR), like AR, is a hybridization of virtual and real worlds.  However, in MR, the merging of the virtual and real environments produces a new environment that can occur either in the physical or the virtual world.  Because they are experienced in real time and space that are supplemented with computer-generated simulations, AR and MR are sometimes known as hybrid reality systems (Szabo, 2018).

 

Applying the tools and techniques of XR is relatively new to the digital humanities, as is evaluating scholarship at the intersection of these fields.  For some scholars, the immersion, realism, and objectivity that XR provides can be seen as being in competition with the traditional humanities, which privilege ambiguity, uncertainty, incompleteness, and argumentation.  Nonetheless, XR potentially offers a viable alternative approach to digital humanities scholarship, as well as an object of critical study in itself (Szabo, 2021).

 

One of the benefits of virtual reality is that it provides presence, described as the sense or perception of being physically present in a virtual, computer-simulated “world”.  This “being there” aspect of VR is especially important in digital heritage settings, where the participant is ideally transported virtually into the past.  Although visualization has always been recognized as an integral component of the digital heritage experience, the role of VR is less clear-cut (Kateros et al., 2015; Foni et al., 2010).  VR can be complemented by gamification, in which elements of gameplay and interaction are added to non-game systems, giving users an enriched experience of digital culture (Kateros et al., 2015), even on mobile devices such as cell phones.  Gamification is particularly relevant to intangible heritage.  In contrast to tangible heritage, which consists of static 3D objects such as buildings and other structures that can be modeled with graphics software, intangible heritage consists of simulations, such as animated virtual characters re-enacting plays or performing other cultural customs (Kateros et al., 2015).

 

3D digital techniques were employed for the reconstruction of the Ancient Agora of Thessaloniki, built in the second century AD as the site of the city’s Roman forum, and of the Great Palace of Knossos, a central political site of the ancient Minoan civilization, built between 1700 and 1400 BCE.

 

Oculus Rift HMD hardware and the associated software development kits (SDKs) supported custom software development.  The Unity3D game engine and a custom GPU-based computer graphics framework were employed for rendering.

 

Content creation for the digital heritage sites – that is, the generation of the digital assets used in the VR system – follows a specific process, or pipeline.  In the case of the Agora, a sketch by archaeologists served as the basis for the construction of a 3D model with 3D modeling software (Google SketchUp).  3D modeling software generates a surface representation of a 3D object consisting of spatial coordinates.  Used in a wide variety of applications, such software also facilitates the generation of 3D geometry from 2D representations, as was done in this case.  The resulting geometry was exported to the COLLADA (COLLAborative Design Activity) format.  COLLADA is an interchange format for interactive 3D systems, representing digital assets as XML files so that different graphics applications can use them.  The 3D models were then incorporated into a VR system employing gamification, and geometric algebra methods were employed for smooth animation of scenes with virtual 3D characters.  Mathematical details of the technique are beyond the scope of the current discussion; interested readers are referred to Papagiannakis et al. (2014).
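Because COLLADA assets are XML files, they can be inspected with standard XML tooling.  The following Python sketch, which assumes a hypothetical exported file named agora_model.dae, simply lists each geometry in the file and the number of floating-point values in its coordinate arrays; it is an illustration of the format’s structure, not part of the pipeline described above.

# Hypothetical sketch: inspecting an exported COLLADA (.dae) asset with Python's
# standard library.  The file name "agora_model.dae" is an assumption.
import xml.etree.ElementTree as ET

NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}  # COLLADA 1.4 namespace

tree = ET.parse("agora_model.dae")
root = tree.getroot()

# Each <geometry> holds a mesh; its <float_array> elements hold raw coordinate data.
for geom in root.findall(".//c:library_geometries/c:geometry", NS):
    name = geom.get("name") or geom.get("id")
    n_floats = sum(int(arr.get("count", "0"))
                   for arr in geom.findall(".//c:float_array", NS))
    print(f"{name}: {n_floats} floating-point values in its arrays")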

 

For these XR systems to be useful and effective, and to provide enriching, engaging experiences, user testing is essential.  Testing users’ comfort with these systems, including with HMDs, can also help mitigate problems caused by simulated environments, such as motion sickness, disorientation, and eyestrain.  Because XR systems are relatively novel, conventions by which they can be evaluated are not yet uniformly established.  Testing of the XR content itself is particularly important: improperly designed content not only detracts from the intended user experience, but can make the experience an uncomfortable one (Kateros et al., 2015).

 

Particularly relevant to the digital humanities is the immersive element that XR entails.  Immersion refers to the perception of being physically present in an environment that is in fact virtual, or computer-generated.  Immersive environments are hybrid computer/physical systems that “surround” the user with computer-generated sensory stimuli, including visual images and 3D models, as well as audio, tactile, and (sometimes) olfactory stimuli.  The immersive environment is a physical space into which the user is placed and into which the computer-generated sensory stimuli are rendered, or presented, to the user.  The environment is controlled by one or more computer systems, which generate the stimuli and render them to the physical component of the environment.

 

Immersive environments have many potential applications in the digital humanities.  For instance, as described in other sections of this course, “Big Data” analysis and visualization is increasingly important for research in the digital humanities.  As explained below, cultural content and digital heritage, consisting of large-scale textual data, audio-visual content, and a wide variety of large digital archives and databases, are sources of both structured data (which can be represented in a tabular format) and unstructured data, all of which contribute to this vast data volume and heterogeneity.  Because of their volume and variety, and therefore their complexity, gaining insights into or detecting patterns in these data can be challenging.  New techniques and data-reduction methods, combined with interactive visual analytics, are addressing these challenges, and an intuitive way to analyze such data is to interact with them while being “inside”, or “immersed in”, them.  Consequently, Big Data visualization for the digital humanities is one of the main applications of immersive technologies and techniques (Lugmayr & Teras, 2015).
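As a simplified illustration of the data-reduction step mentioned above, the following Python sketch projects a hypothetical high-dimensional table of collection features down to two dimensions with principal component analysis (PCA), the kind of low-dimensional layout that an immersive display could then render spatially; the data are randomly generated, and the scikit-learn library is assumed to be available.

# Hypothetical sketch: reducing high-dimensional collection data to 2-D for display.
# The data are randomly generated; scikit-learn's PCA performs the reduction.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_items, n_features = 500, 40          # e.g., 500 objects described by 40 numeric features
features = rng.normal(size=(n_items, n_features))

pca = PCA(n_components=2)
coords = pca.fit_transform(features)   # one (x, y) position per object

print("explained variance ratio:", pca.explained_variance_ratio_)
print("first object's 2-D position:", coords[0])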

 

Serious games, or simulations of real-world procedures formulated as games, are distinct from educational games.  Such games are generally not given much consideration in the digital humanities, but their design and interaction mechanisms, especially in the context of immersive environments, are relevant for designing user experiences in humanities domains.  For example, large cultural structures, such as buildings and other important historical sites, can effectively be studied through environments that allow natural spatial movement and observation from multiple viewpoints.  Consequently, adopting and adapting insights from the design and development of serious games can benefit digital humanities research and projects (Lugmayr & Teras, 2015).

 

Among these applications, the most prominent are related to cultural contexts, such as museums, art galleries, and performances.  An early example, FogScreen, developed in the early years of the new millennium, is a 3D walk-through immersive environment that enables high-resolution images, 3D geometry, and 3D models to be projected in space, so that images appear to float freely.  FogScreen technology was used to develop an interactive children’s game.  A laser scanner added interactivity: the position and movement of users’ hands were captured by the scanner and computationally converted into mouse-like events that were processed using standard user-interface procedures in software, specifically a game engine.  In other words, the mechanisms that players ordinarily use to interact with games, for example through a game controller or mouse, were emulated with more natural human movements in the FogScreen system.  In a standard computer game, the display and/or audio and/or tactile feedback is rendered, or presented to the player, after this interaction; in this case, the rendering is output to the FogScreen instead of to a typical output device (Lugmayr & Piirto, 2006).

CAVE environments (CAVE is a recursive acronym for Cave Automatic Virtual Environment) are popular implementations of immersive 3D virtual reality.  Such an environment provides a compelling sense of being surrounded by a fictional “world”.  Originally developed for scientific visualization, flight simulation, and surgical training, these immersive environments can potentially be used to provide simulated performances or exhibitions.  As another example, an immersive system named HIVE (Hub for Immersive Visualisation and e-Research) was developed at Curtin University in Australia (Lugmayr & Teras, 2015).  Although the system is used for scientific visualization and for the health sciences, it has also been used in many humanities projects, including interactive displays of heritage sites, historical panoramic views of Perth and Fremantle, Australia, multimedia for studying Australian prisoners of war in Japan during World War II, and an interactive virtual environment consisting of large-scale 3D displays for visualizing cultural big data.
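Interaction in immersive displays of this kind, from FogScreen to CAVE-style environments, often comes down to converting tracked body positions into conventional interface events, as in the laser-scanner example above.  The following Python sketch shows one schematic way such a conversion could work; the tracked area, screen resolution, and dwell-to-click rule are assumptions for illustration and do not represent the actual FogScreen implementation.

# Hypothetical sketch: mapping a tracked hand position to a mouse-like event.
# The tracked area, screen size, and "dwell to click" rule are illustrative assumptions.
SCREEN_W, SCREEN_H = 1920, 1080            # resolution of the projected display (assumed)
TRACK_W, TRACK_H = 2.0, 1.5                # tracked area in metres (assumed)

def hand_to_pointer(x_m, y_m):
    """Convert a hand position in metres to pixel coordinates on the display."""
    px = int(max(0.0, min(1.0, x_m / TRACK_W)) * (SCREEN_W - 1))
    py = int(max(0.0, min(1.0, y_m / TRACK_H)) * (SCREEN_H - 1))
    return px, py

def to_event(prev_pos, new_pos, dwell_s):
    """Emit a 'move' event, or a 'click' if the hand dwells in one spot."""
    if prev_pos == new_pos and dwell_s > 0.8:      # dwell threshold in seconds (assumed)
        return {"type": "click", "pos": new_pos}
    return {"type": "move", "pos": new_pos}

pos = hand_to_pointer(1.0, 0.75)
print(to_event(pos, pos, dwell_s=1.2))    # {'type': 'click', 'pos': (959, 539)}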

 

From these examples, it is seen that immersive technologies can potentially contribute to and enrich the digital humanities in a variety of ways.  Through sensor tracking and motion capture, they can provide a more compelling experience of virtual and augmented spaces.  These experiences are particularly relevant in augmented reality environments, which combine the physical and virtual worlds.  Immersion provides socially rich and embodied experiences that allow a more natural interaction with virtual environments (Lugmayr & Teras, 2015).

 

Finally, with these virtual and augmented spaces, new cross- and interdisciplinary knowledge is gained, from which all domains benefit.  New humanities knowledge is created through the new forms of interaction with digital media that are afforded by immersion and the natural interactivity provided by immersive environments.  Ultimately, the goal is improved understanding, insight, and meaning-making (Lugmayr & Teras, 2015).

 

From the preceding discussion, it should be clear that immersive technologies, especially augmented reality, have particular significance for cultural heritage.  Cultural heritage is explained by the United Nations Educational, Scientific and Cultural Organization, or UNESCO, as follows:

 

“The idea of cultural heritage is a familiar one: those sites, objects and intangible things that have cultural, historical, aesthetic, archaeological, scientific, ethnological or anthropological value to groups and individuals. The concept of natural heritage is also very familiar: physical, biological, and geological features; habitats of plants or animal species and areas of value on scientific or aesthetic grounds or from the point of view of conservation”.

 

Representing and preserving this cultural heritage digitally leads to the concept of digital heritage, which UNESCO defines as follows:

 

“Digital heritage is made up of computer-based materials of enduring value that should be kept for future generations. Digital heritage emanates from different communities, industries, sectors and regions. Not all digital materials are of enduring value, but those that are require active preservation approaches if continuity of digital heritage is to be maintained”.

 

Especially important for digital heritage in particular, and for the digital humanities in general, is that augmented reality environments facilitate apprehension, combining the abstract and the visceral.  For digital heritage, the hybrid experiences provided by this technology add nuance and immediacy to historical investigations.  In fact, AR has been recognized by the Wall Street Journal as an increasingly common computer-based technology in museums of art and cultural heritage (Szabo, 2018).  The apprehensive experience facilitated through AR is related to experiential learning and the gaining of experiential knowledge.  In contrast, comprehension is related to the acquisition of abstract knowledge, obtained through observation and with a measure of distance from the phenomena being studied (Szabo, 2018).

 

AR provides documentary annotation.  Such annotation may consist of audio, photographs or other imagery of an historic site or cultural artefact, or, in the case of architecture, a 3D model.  It is a form of one-to-one correspondence between virtual and material objects (Szabo, 2018).  The Museum Without Walls concept is an example of how this type of annotation can be employed.  The not-for-profit CultureNOW organization describes the Museum Without Walls as “celebrating our vast cultural environment as a gallery that exists beyond museum walls through cultural tourism and arts education” (Szabo, 2018).  In such settings, visual and auditory experiences complement each other, while intuitive information visualizations and other contextual data can be generated in real time to enrich the experience.  Users can also interact with and manipulate this digital content, and can therefore actively engage in interpretation.
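One simple way to picture this one-to-one correspondence is as a record that ties an identifier for a material object or site to its virtual annotations.  The following Python sketch is a generic illustration; the field names and example values are invented and do not reflect the data model of any real annotation platform.

# Hypothetical sketch: a record linking one material object or site to its annotations.
# Field names and example values are illustrative; they do not reflect any real system.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    site_id: str                     # identifier of the material object or location
    title: str
    audio: list[str] = field(default_factory=list)
    images: list[str] = field(default_factory=list)
    model_3d: str | None = None      # e.g., a COLLADA or glTF file for architecture

annotation = Annotation(
    site_id="fountain-plaza-01",
    title="Centennial Fountain",
    audio=["fountain_history.mp3"],
    images=["fountain_1923.jpg", "fountain_today.jpg"],
    model_3d="fountain.dae",
)
print(annotation.title, "->", len(annotation.images), "images")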

 

AR also provides interpretive intervention as another form of annotation to complement the sensory experience provided by the digital media.  These interventions may include games, especially alternate reality games (ARGs), in which users actively participate.  For cultural heritage, location-based ARGs combine historical and social elements to create engaging user experiences.  Another type of interpretive intervention uses synesthesia effects, in which perceptual stimuli result in a person experiencing a secondary, unrelated sensation (Szabo, 2018).  The sensory stimulation of one cognitive pathway may produce an experience resulting from the activation of a second cognitive pathway.  A typical example is grapheme-colour synesthesia, in which some alphanumeric characters (numerals and letters of the alphabet) are perceived as possessing an inherent colour.  Another example is chromesthesia, in which specific sounds trigger the perception of certain colours.  The twentieth-century French Modernist composer Olivier Messiaen (1908–1992) had a form of synesthesia in which he reported “seeing” colours triggered by certain sounds and harmonies, or by reading them on a music staff (Szabo, 2018).  Messiaen was inspired in many of his compositions by his experiences of colour, and attributed aspects of his harmonic innovations to this hybrid sound-colour perception.

 

In terms of technology, location-based AR, when used for cultural heritage experiences, makes extensive use of maps.  Consequently, geographic information systems (GIS) – and, frequently, the global positioning system (GPS) – are integral components of AR systems and are used for creating the virtual content that is mapped onto material reality.  GIS allows space to be discretized into 2D x- and y-coordinates on the basis of maps, and, through sophisticated geometric algorithms, can represent the 3D markers, regions, and zones with which GPS operates.  With GIS, users can interact with geographic, political, and historical maps to create various spaces, which are then processed and generated in silico and output to the AR system.  Layered maps, or maps placed on top of other maps to add more information, become the focal point for annotation.  Important geographic features and locations are already represented in the GIS as points of interest, similar to those found in guidebooks or tourist maps.  These points are abstract in themselves, but, in combination with historical maps, they allow points on those maps to be associated with contemporary locations, a process known as georectification.  Hence, history and cultural heritage are experienced both spatially and temporally (Szabo, 2018).
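Georectification can be approximated, in its simplest form, by an affine transformation estimated from control points: pairs of pixel coordinates on a scanned historical map and their known geographic coordinates.  The following Python sketch estimates such a transformation with NumPy least squares; the control-point values are invented, and production GIS tools use more sophisticated transformation models.

# Hypothetical sketch: estimating an affine georectification from control points.
# The control-point values are invented; real workflows use GIS software and more
# robust models (polynomial, thin-plate spline, etc.).
import numpy as np

# (pixel_x, pixel_y) on the scanned historical map  ->  (longitude, latitude)
control_points = [
    ((120.0,  80.0), (12.335, 45.438)),
    ((860.0,  95.0), (12.347, 45.437)),
    ((150.0, 640.0), (12.336, 45.430)),
    ((900.0, 655.0), (12.348, 45.429)),
]

# Solve for a, b, c (and d, e, f) in:  lon = a*x + b*y + c,  lat = d*x + e*y + f
A = np.array([[x, y, 1.0] for (x, y), _ in control_points])
lon = np.array([g[0] for _, g in control_points])
lat = np.array([g[1] for _, g in control_points])
coef_lon, *_ = np.linalg.lstsq(A, lon, rcond=None)
coef_lat, *_ = np.linalg.lstsq(A, lat, rcond=None)

def pixel_to_geo(x, y):
    """Map a pixel on the scanned map to an approximate (lon, lat)."""
    return (coef_lon @ [x, y, 1.0], coef_lat @ [x, y, 1.0])

print(pixel_to_geo(500.0, 360.0))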

 

However, AR (and XR in general) and immersive experiences also need to be viewed critically.  The concern is that the enriched engagement offered by hybridized material and digital reality could minimize the importance of the histories and lives of the people who inhabit the real, non-virtual places being represented.  The apprehensive and comprehensive experiences afforded by AR could be seen as imposing a “fiction” upon a real place, and designers of applications and content for digital heritage must be cognizant of the degree to which these fictions are imposed upon that place.  Suggestions for addressing these concerns include counter-mapping, mapping that is not exclusively focused on geography (specifically, geometry) but that also incorporates the temporal dimension, ideas, intellectual activities, and community values.  As pointed out by Victoria Szabo, a leading scholar in XR for cultural heritage, “[t]ensions between what was designed and what is experienced could be important moments for an AR intervention” (Szabo, 2018).

 

Information about pedagogical initiatives and research questions concerning the deployment of XR in the digital humanities can be found on the Institute for Virtual and Augmented Reality for the Digital Humanities (VARDHI) website.  VARDHI was supported by a National Endowment for the Humanities (NEH) grant to Duke University in the United States.  A summer institute at Duke was set up to focus on XR in digital humanities research, teaching, and outreach.

 

[Work Cited]

License


Digital Humanities Tools and Techniques II Copyright © 2022 by Mark Wachowiak, Ph.D. is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.
