Over the past several decades, higher education institutions have had to keep pace with rapidly changing technologies, and that pace has only accelerated in recent years. I see the next wave of advances coming from the worlds of Virtual and Augmented Reality (VR and AR) and Artificial Intelligence (AI). As pedagogies continue to evolve, technology increasingly becomes an early influence as institutions evaluate their programs, faculty, students, and even spaces. As designers, we at SmithGroupJJR continually strive to stay apprised of advances in educational technology, including VR/AR and AI, which have recently begun making inroads into educational environments.
One of the ways in which we are already seeing AR in use is as an improvement on typical anatomy curricula in health sciences education. Several companies use tablets or mobile devices to scan posters and other two-dimensional physical materials, generating an AR digital learning experience. Project Esper is working toward this goal with “mixed reality” learning tools.
Rather than continuing to rely on gross anatomy labs, schools will soon have the opportunity to completely rethink how they develop collaborative digital environments. Students and faculty will be able to work together with AR and other technologies in flexible spaces to learn skills that once required very specific, highly technical rooms. The possibilities for cross-collaboration and Interprofessional Education could expand as these technologies allow interaction across departments or even universities. In our experience, many institutions struggle to find the resources to create dedicated collaborative spaces. With the advent of new technologies and a reduced need for highly technical, costly facilities, these challenges can be minimized, creating more spatial flexibility over time.
At all levels of education, large corporations such as Google are starting to revolutionize the way we think about teaching. Google released its Cardboard tool for VR on mobile phones several years ago, and it, along with other developers, continues to create new ways to leverage this inexpensive, accessible technology. Google Expeditions was announced earlier this year as a way for students of all ages to work with VR environments throughout the learning process. Much of the focus of this technology has been on K-12 education, but I envision many other applications as well. As an art history minor in college, I never saw Michelangelo's sculpture of David or the prehistoric cave paintings in Lascaux, France, created nearly 20,000 years ago. The ability to experience, even virtually, the Galleria dell'Accademia in Florence, or to visit a historic site of art or archaeology, would have made a huge difference to me, and I suspect to many other students as well. I'm confident that the majority of higher education programs would benefit from leveraging VR technology not only to change pedagogical approaches, but also to make the learning experience more engaging and valuable for students.
As with many other trends and technologies, the art community has been at the forefront of pushing the boundaries of AR and VR. At Emerson College, art students have been investigating both AR and VR, as well as blends of these and more traditional media, to further their artistic visions. Moving forward, the opportunities to pursue the wide range of applications of both AR and VR are nearly endless for students and faculty alike.
Artificial Intelligence differs from both VR and AR simply due to the complexity of the systems required to support it, for the time being anyway. At its most basic, AI is software that learns or adapts as you use it. The idea of AI instructors has made its way into both K-12 and higher education. Georgia Tech was one of the first institutions to fully implement an AI element in its Computer Science program. Built on IBM's Watson platform and dubbed Jill Watson, the virtual assistant answers student forum questions at a rate a human instructor would be hard-pressed to match, saving time and energy.
Software tools for learning and tutoring are also incorporating more limited aspects of AI to create smarter learning tools. Even Amazon's Alexa is looking to take on the role of teaching assistant. With the rise in popularity of Massive Open Online Courses (MOOCs), tools such as AI may allow for increased accessibility of higher education over time by reducing operating costs. They could even allow faculty to focus more on research rather than broad teaching requirements, increasing efficiencies in institutional knowledge. Considering spatial impacts, formal and didactic pedagogies could change with this technology, with massive and long-lasting implications for academic institution planning. All of this raises the question of how faculty, teaching assistants, and instructors fit into the educational picture in 5, 10, or 20 years, and how the spatial and physical environments of learning may change over the same period.
For now, I believe the complex human relationships formed between faculty and students remain critical to successful educational programs, and these technologies can serve to enhance the experience of everyone involved. The use of all of these technologies as they evolve has exciting implications for many aspects of the higher education experience, which I plan to continue evaluating through our projects. My hope is that as these technologies develop and gain traction, we will have the ability to work with colleges and universities to incorporate more flexibility into their facilities and further enhance the development of new educational paradigms.