Spatial computing
Spatial computing refers to 3D human–computer interaction techniques that users perceive as taking place in the real world, in and around their bodies and physical environments, rather than being confined to, and perceptually behind, computer screens or purely virtual worlds. This concept inverts the long-standing practice of teaching people to interact with computers in digital environments; instead, computers are taught to understand and interact with people more naturally in the human world. The concept overlaps with and encompasses others, including extended reality, augmented reality, mixed reality, natural user interface, contextual computing, affective computing, and ubiquitous computing. Usage of these labels across the adjacent technologies remains imprecise.
Spatial computing devices include sensors—such as RGB cameras, depth cameras, 3D trackers, and inertial measurement units—to sense and track nearby human bodies (including hands, arms, eyes, legs, and mouths) during ordinary interactions between people and computers in 3D space. They further use computer vision to understand real-world scenes such as rooms, streets, or stores: reading labels, recognizing objects, building 3D maps, and more. They often also use extended reality and mixed reality to superimpose virtual 3D graphics and virtual 3D audio onto the human visual and auditory systems, providing information more naturally and contextually than traditional 2D screens can.
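A basic operation underlying the superimposition described above is re-expressing world-anchored virtual content in the tracked device's coordinate frame every frame, so the content appears fixed in the room as the user moves. The sketch below (hypothetical function names, NumPy, and an assumed right-handed convention with −z as "forward") illustrates the idea with a simple rigid-body transform; real XR runtimes expose equivalent pose queries rather than raw matrices.

```python
import numpy as np

def pose_matrix(position, yaw_rad):
    """Build a 4x4 world-from-device rigid transform from a position
    and a rotation (yaw) about the vertical y axis."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = np.array([[ c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]])
    T[:3, 3] = position
    return T

def world_to_device(world_from_device, point_world):
    """Re-express a world-anchored 3D point in the device's own frame,
    as a tracking runtime would do each frame before rendering."""
    device_from_world = np.linalg.inv(world_from_device)
    p = np.append(point_world, 1.0)  # homogeneous coordinates
    return (device_from_world @ p)[:3]

# A virtual label anchored 2 m in front of the world origin.
anchor_world = np.array([0.0, 0.0, -2.0])

# Headset pose reported by the tracker: 1 m to the right of the
# origin, turned 90 degrees to the left.
head = pose_matrix([1.0, 0.0, 0.0], np.pi / 2)

# From this pose, the anchor appears 1 m ahead (-z) and 2 m to the
# device's right (+x), so the renderer draws it there.
print(world_to_device(head, anchor_world))
```

The same math runs in the opposite direction for input: a hand position sensed in the device frame is multiplied by `world_from_device` to anchor it in world coordinates.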
Spatial computing often refers to personal computing devices such as headsets and headphones, but other human–computer interfaces that leverage real-time spatial positioning for display, such as projection mapping or cave automatic virtual environment (CAVE) displays, can also be considered spatial computing when they take real-time spatial input from the participants.