What is immersive storytelling?
Immersive storytelling transports interactors to different times, places, and alternate realities. With the help of emerging technology, immersive storytelling is also revolutionizing how we experience and share culture. From the simple to the complex, the course will cover a mix of digital and analog forms of storytelling. While there’s little definitive terminology, it’s still important to be familiar with common terms and their meanings. Here are some key definitions to get you started on your journey. Please consider how these can be applied to your own practice. This website is a work in progress. We hope you will contribute examples of your own work in the course.
Overarching Terms for Electronic Forms of Immersion
- Virtual Reality (VR): An immersive, virtual environment that surrounds an interactor and fully blocks the physical world from sight. It’s viewed through a cardboard viewer or an HMD (head-mounted display). VR has come to mean both 360° films and responsive, computer-generated virtual environments; some of these are interactive, others are not.
- Augmented Reality (AR): A virtual layer superimposed over a real-world environment, creating a hybrid view. AR is often viewed through a mobile or tablet device. A common example is a Snapchat filter. It’s a digital skin that maps to an interactor’s physical facial features.
- Mixed Reality (MR): Similar to AR, MR is a virtual layer overlaid on a real-world environment. MR, however, is viewed through glasses or headsets that simultaneously allow the viewer to see the real world and the virtual one.
- Extended Reality (XR): A broader term that encompasses the emerging digital interfaces that can either fully immerse the interactor in a simulation or mix the physical world with digital layers.
- Interactive: Experiences that require an interactor's engagement, often through a controller of some kind. Interactivity is controlled with systems of rules—these can range from simple to complex. A simple interactive system is a single tap or click to reveal information, whereas complex interactions can be multilayered, like a branching narrative. Stories and databases can be interactive.
- Responsive: An experience that adapts to an interactor or a changing set of conditions. For example, in a physical environment, a camera (with computer-vision software) could “read” and respond to an interactor's gestures or movement. In a digital experience, like a website, this can be a window adapting to fit a screen size (i.e., differentiating between a desktop, tablet, or mobile device).
- Generative: For generative experiences, a programmer designs an algorithmically controlled system that may respond to a variety of multisensory inputs (i.e. movement, touch, light, sound, etc.). Combined with the interactor’s response, the software generates an experience. The result is a collaboration between the interactor and machine.
- Participatory: Participatory experiences ask for an interactor's contribution—an idea, response, feedback, or something personal.
- Simulation: Often called the “third way” of acquiring knowledge, simulation allows interactors to gain understanding through a model that communicates a different experience, without the interactor actually living it. While virtual reality is a good example of simulated environments, not all simulations require technology. In fact, simulations have existed for more than a century: analog examples include cycloramas and stereoscopes.
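The generative idea above can be sketched in a few lines of code. This is an illustrative toy, not any particular tool: the rules are the programmer's algorithmic system, and hypothetical "movement" and "light" sensor readings stand in for the interactor's multisensory input.

```python
import random

def generate_pattern(movement, light, seed=None):
    """Map two hypothetical sensor readings (each 0.0-1.0) to a text pattern.

    The rules below are the algorithmically controlled system; the
    interactor's movement and the room's light level steer the output.
    """
    rng = random.Random(seed)
    length = 5 + int(movement * 10)                       # more movement -> longer pattern
    palette = ["#", "*"] if light < 0.5 else ["o", "."]   # dim vs. bright palette
    return "".join(rng.choice(palette) for _ in range(length))

# Two interactors behaving differently generate different results.
print(generate_pattern(movement=0.9, light=0.2, seed=1))
print(generate_pattern(movement=0.1, light=0.8, seed=1))
```

The result illustrates the collaboration described above: the machine supplies the rules and randomness, while the interactor's input shapes what is generated.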
Glossary of Terms
- Artificial Intelligence (AI): The field of creating intelligent machines. AI uses algorithms and statistical models that allow computers to simulate knowledge building: computer systems learn by identifying patterns in training datasets and applying them to new data. Machine learning is a subset of AI, though the two terms are often used interchangeably.
- Bots: A type of software application that runs automated tasks on the internet. The most common example is a web crawler (which Google uses to index the web for search).
- Facial recognition: The use of artificial intelligence and image processing to identify people. By comparing faces to existing datasets, these systems learn to recognize patterns of facial contours and identify the unique features of a specific person.
- Game Design: Using game mechanics, storytelling, code, worldbuilding, and aesthetic techniques to create analog or digital games and worlds. Game design’s core principles are now widely applied to other digital interactions, a practice known as “gamification.”
- Motion capture: Recording the frame-by-frame movement of humans or animals, which is then applied to computer-generated characters to create realistic, simulated animation.
- Multisensorial: A catch-all term that, at its most basic, means an experience relies on more than one sense. While it can describe films (most technically rely on sight and hearing), multisensorial is really tied to immersive experiences: the environment isn’t a single screen or canvas but surrounds the viewer, whether in VR (virtual space) or a projected digital space (physical environment). These experiences often rely on layered sound.
- Haptic devices: Gloves, hand-held controllers, or suits that provide users with touch feedback through vibration. These are commonly used in VR games to immerse users and make the environments feel more realistic.
- Non-linear Storytelling: This approach relies on telling stories out of chronological order. The technique can take many forms. On a basic level, this could be a film that utilizes flashbacks or parallel stories to progress its narrative.
- Branching narratives: Branching narratives are interactive stories, where users can choose a direction at choice points. In other words, it’s a choose-your-own-adventure. The user encounters divergent paths and selects a way forward. These narratives diverge, but also may converge around key points or endings.
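A branching narrative can be sketched as a simple graph of scenes. The sketch below is illustrative only (the story, node names, and structure are made up): each node holds some text plus the choices that lead onward, so paths diverge at choice points and can converge again on a shared node.

```python
# A branching narrative as a graph: nodes diverge at choice points and
# converge again on the shared "reunion" node. All names are illustrative.
STORY = {
    "start":   {"text": "You reach a fork in the forest path.",
                "choices": {"go left": "river", "go right": "cave"}},
    "river":   {"text": "You follow the river downstream.",
                "choices": {"cross the bridge": "reunion"}},
    "cave":    {"text": "You light a torch and enter the cave.",
                "choices": {"follow the tunnel": "reunion"}},
    "reunion": {"text": "Both paths converge at the old mill. The end.",
                "choices": {}},
}

def play(node="start"):
    """Walk the story interactively until a node with no choices is reached."""
    while True:
        scene = STORY[node]
        print(scene["text"])
        if not scene["choices"]:
            return node
        for option in scene["choices"]:
            print(" -", option)
        pick = input("> ").strip()
        node = scene["choices"].get(pick, node)  # ignore unrecognized input
```

The same structure scales up: interactive fiction tools and game engines store branching stories as graphs like this, just with far more nodes.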
- Physical Computing: Electronic circuit design that controls interactive systems. Instead of starting with a digital screen or interface, physical computing begins by exploring how humans express themselves physically. This approach often relies on physical objects people can interact with; these objects are usually connected to programmed electronics, like microcontrollers and sensors, making the systems responsive to the interactor. Common tools include Arduino and Raspberry Pi.
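At its core, a physical-computing piece is a loop: read a sensor, then map the reading to a response. On real hardware the reading would come from an Arduino or Raspberry Pi pin; in this sketch, `read_distance_cm` is a hypothetical stand-in for a distance sensor, and the responses are made-up examples of what an installation might trigger.

```python
def read_distance_cm(sample):
    """Stand-in for a real distance sensor (e.g., ultrasonic); returns centimeters.

    On actual hardware this would poll a microcontroller pin instead.
    """
    return sample

def respond(distance_cm):
    """Map an interactor's proximity to a response the installation triggers."""
    if distance_cm < 50:
        return "play close-up audio"
    if distance_cm < 200:
        return "brighten projection"
    return "idle"

# Simulate an interactor walking toward the piece.
for sample in (300, 150, 30):
    distance = read_distance_cm(sample)
    print(distance, "->", respond(distance))
```

Swapping the stand-in function for a real sensor read is the only change needed to move this loop onto hardware; the mapping from input to response is the part the designer authors.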
- Projection mapping: Using high-resolution projectors to project onto the surface of an object (often an unusual one, not just a white wall). Projection mapping is a powerful tool for spatially augmenting an environment and immersing viewers in a story; large-scale projection is commonly key to this immersion. Some installations are interactive. Others rely on physical computing, meaning the projections respond to physical objects that the interactor can use to trigger new paths.
- Three-Dimensional Model Capture:
- Photogrammetry: The process of creating a 3D digital model of an object. The object is photographed from many different angles and locations with a regular camera; software then detects overlapping patterns across the images to build a 3D reconstruction of the object.
- Point-cloud Scanning (e.g., FARO): A non-contact, non-destructive technology that digitally captures the shape of physical objects using a line of laser light. These scanners create “point clouds” of data from the surface of an object, an accurate way to capture a physical object’s size and shape as a digital 3D representation.
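A point cloud is ultimately just a list of (x, y, z) samples from an object's surface. As a minimal sketch (the points below are made-up sample data, not scanner output), even without specialized software you can recover basic size information, such as an axis-aligned bounding box:

```python
# A tiny made-up point cloud: (x, y, z) samples from an object's surface,
# in meters. Real scans contain millions of such points.
points = [
    (0.0, 0.0, 0.0),
    (1.2, 0.4, 0.0),
    (0.5, 2.0, 0.3),
    (1.0, 1.5, 0.9),
]

def bounding_box(cloud):
    """Return (width, depth, height) of the smallest axis-aligned box
    enclosing every point in the cloud."""
    xs, ys, zs = zip(*cloud)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

print(bounding_box(points))  # the scanned object's overall dimensions
```

Dedicated tools go much further (meshing, texturing, registration of multiple scans), but the underlying data is this simple.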
- Wearable technologies: A catch-all term for devices worn by an interactor that enhance their abilities. A common example is the digital watch, which features sensors to monitor behavior and health. Another is the head-mounted device (HMD), like VR headsets and mixed-reality viewers; the lo-fi version, the cardboard viewer, is also popular for experiencing 360° films.
- Head-Mounted Displays, Current Most Popular Technology (2020):
Untethered (wireless) with Mobile Phone:
- Google Cardboard (NY Times release Nov. 2015)
- Gear VR - Oculus and Samsung (2014)
- Google Daydream (2016)
Untethered, stand-alone systems:
- Oculus Go (2018)
- Oculus Quest (2019)
- NReal glasses (2020, still forthcoming: https://www.nreal.ai/)
Tethered (cord attached to computer):
- Oculus Rift (2016) and Oculus Rift S (2019)
- HTC Vive (2016) and Vive Pro (2018)
- HTC Vive Cosmos (2019)
- Valve Index (2019)
- Game Engines: Software platforms used to create interactive, digital experiences.
Popular Game Engines/VR Companies: