Glossary of Terms

  • Artificial Intelligence (AI)
  • Bots
  • Facial recognition
  • Game Design
  • Motion capture
  • Multisensorial
  • Haptic devices
  • Non-linear Storytelling
  • Branching narratives
  • Physical Computing
  • Projection mapping
  • Three-Dimensional Model Capture
  • Photogrammetry
  • Point-cloud Scanning
  • Wearable technologies
  • Game Engines
  • Artificial Intelligence (AI): the field of creating intelligent machines. Machine learning is a subset of AI, and the two terms are often used interchangeably. AI is the use of algorithms and statistical methods that allow computers to simulate knowledge building: computer systems learn by identifying patterns in training datasets and applying them to new data (a minimal code sketch follows this entry’s sub-terms).
    • Bots: a type of software application that runs automated processes over the internet. The most common example of a bot is a web crawler, the kind of program Google uses to index the web for search (a minimal crawler sketch appears below).
    • Facial recognition: the use of artificial intelligence and image processing to identify people. By comparing faces to existing datasets, the system learns to recognize patterns of facial contouring and identifies the unique features of a specific human subject.
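
As a concrete illustration of the pattern-learning loop described under Artificial Intelligence, here is a minimal sketch assuming the scikit-learn library; the toy measurements and labels are invented purely for illustration.

```python
# A minimal machine-learning sketch: learn a pattern from labeled
# training data, then apply it to new, unseen data.
# Assumes scikit-learn is installed; the toy data is invented.
from sklearn.tree import DecisionTreeClassifier

# Training dataset: [height_cm, weight_kg] examples labeled "cat" or "dog".
X_train = [[23, 4.2], [25, 5.0], [60, 25.0], [55, 22.0]]
y_train = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # identify patterns in the training set

print(model.predict([[24, 4.5]]))  # apply those patterns to new data -> ['cat']
```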
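The web crawler mentioned under Bots can be sketched with Python’s standard library alone: fetch a page, collect its links, and visit them in turn. The seed URL below is a placeholder.

```python
# A minimal web-crawler sketch (the kind of bot described above):
# fetch a page, pull out its links, and visit them in turn.
# Uses only the Python standard library; the seed URL is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collects the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=5):
    to_visit, seen = [seed], set()
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        parser = LinkParser()
        parser.feed(html)
        # Queue each discovered link, resolved against the current page.
        to_visit.extend(urljoin(url, link) for link in parser.links)
    return seen

# crawl("https://example.com")  # placeholder seed URL
```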
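Full facial-recognition systems first have to locate faces in an image. The sketch below shows only that detection step, assuming the opencv-python package and its bundled Haar cascade; the image path is a placeholder, and matching a detected face against a dataset of known people would be the next stage.

```python
# Face *detection*, the first stage of a facial-recognition pipeline.
# Assumes the opencv-python package; "people.jpg" is a placeholder path.
import cv2

# Haar cascade trained on frontal faces, shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("people.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, width, height) box around a face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")

# A recognition step would now compare each detected face region against
# a dataset of known faces to identify the specific person.
```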
  • Game Design: using game mechanics, storytelling, code, worldbuilding, and aesthetic techniques to create analog or digital games and worlds. Game design’s core principles are now widely applied to other digital interactions; this is called “gamification.”
  • Motion capture: recording the frame-by-frame movement of humans or animals, which is then applied to computer-generated characters to create a realistic simulated animation (see the data sketch below).
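
One simplified way to picture motion-capture data is as a sequence of frames, each recording the 3D position of every tracked joint, which an animation system then applies to a rigged character. The sketch below is a hypothetical data layout, not any real mocap file format.

```python
# Simplified sketch of motion-capture data: each frame stores the 3D
# position of every tracked joint; the sequence is replayed on a rig.
# Joint names and numbers are hypothetical, not a real file format.
from dataclasses import dataclass

@dataclass
class Frame:
    time: float   # seconds from the start of the take
    joints: dict  # joint name -> (x, y, z) position in meters

take = [
    Frame(0.000, {"hip": (0.0, 1.0, 0.0), "left_knee": (0.1, 0.5, 0.0)}),
    Frame(0.033, {"hip": (0.0, 1.0, 0.1), "left_knee": (0.1, 0.5, 0.1)}),
]

def apply_to_character(frame, character):
    """Copy each captured joint position onto the matching part of a rig."""
    for joint, position in frame.joints.items():
        character[joint] = position  # a real engine would drive bone rotations

rig = {}
for frame in take:
    apply_to_character(frame, rig)  # replay the capture, frame by frame
```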
  • Multisensorial: a catch-all term that, on a basic level, means an experience relies on more than one sense. While it can be used to describe films (technically, most rely on sight and hearing), multisensorial is most closely tied to immersive experiences, where the environment is not a single screen or canvas but surrounds the viewer, whether through VR (a virtual space) or a projected digital space (a physical environment); there are many directions it can take. These experiences often rely on layered sound.
  • Haptic devices: Gloves, hand-held controllers, or suits that provide users with touch feedback through vibration. These are commonly used in VR games to immerse users and make the environments feel more realistic. 
  • Non-linear Storytelling: This approach relies on telling stories out of chronological order. The technique can take many forms. On a basic level, this could be a film that utilizes flashbacks or parallel stories to progress its narrative. 
  • Branching narratives: Branching narratives are interactive stories, where users can choose a direction at choice points. In other words, it’s a choose-your-own-adventure. The user encounters divergent paths and selects a way forward. These narratives diverge, but also may converge around key points or endings. 
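
A branching narrative is commonly stored as a graph of scenes in which each choice points to the next scene and separate paths can converge on a shared ending. The sketch below uses a plain dictionary; the scene names and story text are invented.

```python
# A tiny branching-narrative graph: each scene has text and choices,
# and different paths can converge on the same ending.
# Scene names and story text are invented for illustration.
story = {
    "start":  {"text": "You reach a fork in the road.",
               "choices": {"go left": "forest", "go right": "river"}},
    "forest": {"text": "The trees close in around you.",
               "choices": {"press on": "ending"}},
    "river":  {"text": "You follow the water downstream.",
               "choices": {"press on": "ending"}},   # paths converge here
    "ending": {"text": "Both roads lead to the same village.", "choices": {}},
}

def play(scene="start"):
    while True:
        node = story[scene]
        print(node["text"])
        if not node["choices"]:          # no choices left: an ending
            break
        for option in node["choices"]:
            print(" -", option)
        pick = input("> ")
        scene = node["choices"].get(pick, scene)  # stay put on a bad entry

# play()  # uncomment to run interactively
```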
  • Physical Computing: electronic circuit design that controls interactive systems. Instead of starting with a digital screen or interface, physical computing begins by exploring how humans express themselves physically. This approach often relies on physical objects people can interact with; these objects are connected to programmed electronics, such as microcontrollers and sensors, so the system can respond to the interactor. Common tools include Arduino and Raspberry Pi.
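
As a small sketch of that object-plus-microcontroller idea: on a Raspberry Pi, the gpiozero library (an assumed dependency) can tie a physical button to an LED in a few lines. The GPIO pin numbers are placeholders that depend on the actual wiring.

```python
# Physical-computing sketch for a Raspberry Pi: pressing a physical button
# turns an LED on, and releasing it turns the LED off.
# Assumes the gpiozero library; GPIO pin numbers depend on your wiring.
from gpiozero import LED, Button
from signal import pause

led = LED(17)        # LED wired to GPIO pin 17 (placeholder)
button = Button(2)   # push button wired to GPIO pin 2 (placeholder)

button.when_pressed = led.on    # the physical action drives the output
button.when_released = led.off

pause()              # keep the program running, waiting for interaction
```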
  • Projection mapping: using high-resolution projectors to project imagery onto the surface of an object (often an irregular one, not just a white wall). Projection mapping is a powerful tool to spatially augment an environment and immerse viewers in a story, and large-scale projection is commonly a key aspect of this immersion. Some installations are interactive; some of these rely on physical computing, meaning the projections respond to physical objects that the interactor can manipulate to trigger new paths.
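
Geometrically, the core move is a corner-pin warp: the content frame is distorted so its corners land on the corners of the physical surface as seen from the projector. The sketch below shows that step, assuming the opencv-python and numpy packages; the corner coordinates are placeholders that would normally come from measuring or calibrating a real setup.

```python
# Corner-pin warp, the basic geometric step behind projection mapping:
# distort source content so its corners land on a surface's corners.
# Assumes opencv-python and numpy; corner coordinates are placeholders.
import cv2
import numpy as np

# A stand-in content frame (a real setup would render video or graphics).
h, w = 480, 640
content = np.zeros((h, w, 3), dtype=np.uint8)
cv2.rectangle(content, (20, 20), (w - 20, h - 20), (255, 255, 255), 8)
cv2.putText(content, "MAPPED", (180, 260),
            cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 4)

# Corners of the source image: top-left, top-right, bottom-right, bottom-left.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Where those corners should appear in the projector's output so the image
# lines up with the physical surface (placeholder, hand-measured values).
dst = np.float32([[120, 80], [900, 60], [950, 700], [100, 680]])

# Compute the homography and warp the content into projector space.
H = cv2.getPerspectiveTransform(src, dst)
projector_frame = cv2.warpPerspective(content, H, (1280, 720))
cv2.imwrite("projector_frame.png", projector_frame)
```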
  • Three-Dimensional Model Capture: techniques for turning physical objects into digital 3D models.
    • Photogrammetry: the process of creating a 3D digital model of an object. After capturing the object from many different angles and positions with a regular camera, software detects shared features across the photos and uses them to build a 3D reconstruction of the object (a feature-matching sketch follows this list).
    • Point-cloud Scanning (FARO): a non-contact, non-destructive technology that digitally captures the shape of physical objects using a line of laser light. These scanners create “point clouds” of data from the surface of an object. They are an accurate way to capture a physical object’s size and shape as a digital 3D representation.
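
The “detects shared features” step in photogrammetry amounts to finding the same visual features in overlapping photos; a full pipeline then triangulates those matches across many views to recover 3D geometry. The sketch below covers only the feature-matching step, assuming the opencv-python package; the photo paths are placeholders.

```python
# Feature matching between two overlapping photos, the pattern-detection
# step that photogrammetry builds on; full pipelines triangulate these
# matches across many photos to reconstruct 3D geometry.
# Assumes opencv-python; the image paths are placeholders.
import cv2

img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()                 # fast keypoint detector/descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two photos and sort by match quality.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} features seen in both photos")
```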
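A point cloud is, at bottom, a long list of (x, y, z) surface samples, which is why an object’s size and shape come straight out of the data. Here is a tiny sketch assuming numpy, with a handful of made-up points; real scans contain millions.

```python
# A point cloud is an N x 3 array of surface samples; measurements such
# as an object's bounding dimensions come directly from the coordinates.
# Assumes numpy; these points are made up (real scans have millions).
import numpy as np

points = np.array([
    [0.02, 0.10, 0.00],
    [0.05, 0.12, 0.01],
    [0.04, 0.31, 0.02],
    [0.01, 0.29, 0.00],
])  # x, y, z in meters

# Size of the axis-aligned box that encloses the scanned surface.
dimensions = points.max(axis=0) - points.min(axis=0)
print("Bounding size (w, h, d):", dimensions)
```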
  • Wearable technologies: the catch-all term for devices worn by an interactor that enhance their abilities. A common example is the digital watch, which features sensors that monitor behavior and health. Another common example is the head-mounted display (HMD), such as VR headsets and mixed-reality viewers. The lo-fi version, the cardboard viewer, is also popular for experiencing 360° films.
    • Head-Mounted Displays, the most popular technology as of 2020:

Untethered (wireless) with Mobile Phone:

  1. Google Cardboard (NY Times release Nov. 2015)
  2. Gear VR - Oculus and Samsung (2014)
  3. Google Daydream (2016)

Untethered (stand-alone systems):

  1. Oculus Go (2018)
  2. Vive Cosmos (2019)
  3. Oculus Quest (2019)
  4. NReal Glasses (2020, still rolling out: https://www.nreal.ai/)

Tethered (cord attached to computer):

  1. Oculus Rift (2016) and Oculus Rift S (2019)
  2. HTC Vive (2016) and HTC Vive Pro (2018)
  3. Valve Index and Index Controllers (2019)
  • Game Engines: software platforms used to create interactive digital experiences.

Popular Game Engines/VR Companies:

Unity (2D/3D)
Unreal Engine
GameMaker Studio
RGBD Toolkit