Early thoughts on ‘spatial computing’ through the lens of geography education

Kenneth Y T Lim, National Institute of Education, Singapore; Bryan Z W Kuok, Independent scholar; Ahmed H Hilmy, National Institute of Education, Singapore


By their very nature, virtual environments and immersive worlds suggest affordances for learning that geography educators are particularly well placed to speak to, not least the potential for dynamic, embodied, and multisensory learning experiences to sit alongside field studies in the physical world. Such environments and their associated technologies are not new, having been marketed to consumers since at least the mid-2000s, and earlier if panoramic photographs are included.

Virtual environments are, however, enjoying a recent resurgence of interest (see, for example, Zhao et al., 2021), not only because of the recent pandemic but also because of the introduction of the mixed reality headset from Apple in February 2024. In its marketing rhetoric, the company is advancing the paradigm of what it terms ‘spatial computing’.

In this essay, we share our early thoughts on the extent to which the spatiality of ‘spatial computing’ is a gimmick or something that might potentially whet the appetite of geography educators and our associated research community.

The surfacing of geographical intuitions

In the course of a typical school day, members of the school community – staff and students alike – traverse the campus several times, sometimes exposed directly to the elements and at other times in the shade. The paths we take as we traverse our campuses reflect our tacit responses to such exposure (Lim et al., 2024). Through our own bodily experience, we therefore develop over time a textured map of our respective school campuses, which in turn influences our decision-making in subconscious ways.

Perhaps a signifier of the skilful teacher is the ability to design opportunities for such tacit ‘geographical’ knowledge to be made more explicit, so that students can connect their everyday embodied lived experiences in authentic ways with the formal codified domain knowledge of the classroom.

The moves we choose to make in our learning environments are constrained by a number of factors, one of which is the effective management of multiple digital devices during our lessons. We do not say this as shorthand for blind advocacy of affording each student access to their own device at all times – while there may be some advantages to this, the model has concomitant implications for classroom management. However, we see tremendous opportunities in digital tools that can serve as creative canvases for students to express their often nascent understandings of geographical concepts. For example, students could manipulate terrain in an immersive environment to depict features such as a river delta. These digital artefacts can then serve as focal points for teacher-facilitated classroom discussions, helping students connect geographical concepts with their own lived experiences.

On Collaborative Observation

In 2015, we published a paper (Cho & Lim, 2015) in the British Journal of Educational Technology (BJET) in which we advanced a pedagogical strategy we termed Collaborative Observation, as part of our work on the Six Learnings curriculum design framework (Lim, 2009). In that paper, we addressed the problem of how teachers could effectively manage and scaffold the learning experiences of pupils in large classes (typically forty pupils per class), particularly when the learners are operating as avatars in an immersive environment. We compared three conditions: learners in a 1:1 ratio with a computer, learners in a 1:40 ratio sharing the use of a single computer, and traditional didactic instruction. Of these, we advanced the case for the second condition, in which forty learners share a single computer, as Collaborative Observation.

Learner-Generated Augmentation

In 2020, we followed up with a second paper in BJET, this time describing the construct of what we term Learner-Generated Augmentation (Lim & Lim, 2020). The latter describes activities in which learners use Augmented Reality (AR) tools to annotate their local environments, giving teachers better insight into which aspects of their surroundings students find significant and meaningful. In this context, ‘augmentation’ refers to the addition of digital information onto the physical environment through AR technology. Through this process, students can express their emerging understandings of a topic by linking digital content to personally meaningful elements in their physical environment.

For example, a student learning about local history might choose to digitally annotate a site within the neighbourhood with historical information, while the teacher may have chosen to augment the town hall instead. More often than not, the elements in their environments which novices choose to annotate differ from those which the teacher (as domain expert) might choose. In the hands of a skilled teacher, such differences represent rich opportunities for discussion and mutual learning. Learner-Generated Augmentation acknowledges where the learners are coming from, helps make their otherwise tacit conceptions more visible to the teacher, and has applications in the sciences as well as in the humanities. For instance, in a geography lesson about their local community, students could be tasked with creating augmented reality annotations on a map to highlight landmarks, infrastructure, or environmental features that are personally significant to them. This would surface the students’ mental models of their neighbourhood to the teacher. The resulting student-created AR content could then serve as boundary objects for class discussion and collaborative knowledge building.

Spatial computing and its implications for geography education

The two papers published in 2015 and 2020 explored pedagogical strategies founded on distinct premises and contexts. Collaborative Observation, as described in the 2015 paper, involved multiple learners observing an expert (the teacher) perform a task in a virtual world, then collaboratively discussing and solving related problems. In contrast, the Learner-Generated Augmentation approach introduced in the 2020 paper tasked learners themselves with creating augmented reality artefacts to represent their emerging understanding of a topic, situated in personally meaningful real-world contexts.

While these two approaches may seem conceptually oppositional in terms of who generates the virtual / augmented content (expert vs learner) and the technology used (virtual world vs AR), the affordances of the Apple Vision Pro allow for a convergence of these premises and contexts. The device’s advanced AR capabilities enable both expert-led demonstrations akin to Collaborative Observation and learner-driven creation as in Learner-Generated Augmentation, all within the learner’s immediate environment. This fusion is enabled by what Apple refers to as the paradigm of ‘spatial computing’.

‘Spatial computing’ refers to the ability of devices like the Vision Pro to understand and interact with the user’s surrounding physical space, blending digital content seamlessly with the real world. It leverages technologies such as advanced computer vision, real-time 3D mapping, and gesture and eye tracking to create immersive mixed reality experiences anchored to the user’s environment.

‘Spatial computing’ technologies like the Vision Pro foreground the role of the body in meaning-making and creative expression. By allowing learners to engage with digital content overlaid on their physical surroundings, these devices facilitate an embodied, multisensory approach to learning that bridges the physical and psychological dimensions. Learners can leverage natural interactions and familiar environmental cues to construct personally relevant understandings, moving fluidly between consuming and producing knowledge artefacts in a shared hybrid space.

As a wearable, the Vision Pro lends itself naturally to the notion of embodiment: not only are the learners’ auditory and visual sensory inputs augmented by the affordances of whatever apps they are using, but the apps themselves have a certain degree of geospatial permanence within the augmented world of the learner. Competing technologies include the Meta Quest 3, which is primarily focused on immersive VR experiences, and Microsoft’s HoloLens, which appears to have more limited AR capabilities. While these competing headsets tend to be optimised for either VR consumption or basic AR annotations, Apple’s ‘spatial computing’ paradigm appears to better support both expert-guided collaborative experiences and open-ended learner creation within a unified device.

Given the cost of the Vision Pro at the time of writing, it will be some years before schools can afford the luxury of a 1:1 ratio of the Vision Pro (or its successors or future competitors) to pupils. Yet this is not to discount the potential of the Vision Pro today for socially constructed meaning-making in field-based activities in both physical and virtual sites. It is, for example, perfectly possible to imagine a field-based lesson in which learners annotate their local environments as they explore their neighbourhoods, leaving digital notes (such as text and sketches) at locations and sites that they themselves consider significant. In the context of a lesson unit, learners could be tasked to cast or record their screens as they explore and annotate their environments, for subsequent post-activity discussion in small groups or as a class, facilitated by the teacher.

Concluding remarks

Geographers have a unique appreciation of space as a shared and contested construct, while at the same time understanding that space and place are deeply personal and tacit. One of the earliest attempts to tease out these tensions and relationships through digital means was Moed’s (2002) project ‘Annotate space: interpretation and storytelling on location’. That project pre-dated smartphones, using early mobile phones in urban environments to document social constructions of space. In the two decades that have since passed, a new generation of geographers has the digital wherewithal to annotate space in new and exciting ways. How we as a community of educators interpret these affordances in geography education is a story yet to be written.


Cho, Y. H., and K. Y. T. Lim (2015). “Effectiveness of Collaborative Learning with 3D Virtual Worlds” in British Journal of Educational Technology, 48(1), pp. 202–211.

Lim, I. J. E., Low, A. L. Y., and K. Y. T. Lim (2024). “Optimising learning environments: a microclimate study of a school campus in Singapore using an integrated environment modeller simulation tool (IEMsim)” in Chova, L. G., Martinez, C. G., and Lees, J. (Eds.), Proceedings of the 18th annual International Technology, Education and Development Conference.

Lim, K. Y. T. (2009). “The Six Learnings of Second Life: A Framework for Designing Curricular Interventions In-world” in Journal of Virtual Worlds Research, 2(1), pp. 4–11.

Lim, K. Y. T., and R. Lim (2020). “Semiotics, memory and Augmented Reality: History education with Learner-Generated Augmentation” in British Journal of Educational Technology, special section on “Beyond observation and interaction: Augmented Reality through the lens of constructivism and constructionism”, 51(3), pp. 673–691.

Moed, A. (2002). “Annotate space: interpretation and storytelling on location”. Interactive Telecommunications Program, New York University.

Zhao, J., Wallgrün, J. O., Sajjadi, P., La Femina, P., Lim, K. Y. T., Springer, J., and A. Klippel (2021). “Longitudinal effects in the effectiveness of educational virtual field trips” in Journal of Educational Computing Research, 60(4), pp. 1008–1034.
