
Knowing there is a human on the other side of the screen who cares about your learning and is there to support you is like a vitamin to an online student. That connection with a teacher nourishes a student's engagement, motivation, and performance, and when one is learning in physical isolation from their instructor and peers, it's even more important.
As social creatures, we scan our environment for "kindness cues" of social inclusion to determine whether we are safe (both physically and psychologically). This same scanning occurs in online classes, and all too often the non-verbal and verbal cues our brains are wired to seek out (a smile, a warm vocal intonation) are missing, resulting in ambiguity. That ambiguity keeps our brains on high alert, consuming the same cognitive bandwidth we need for learning.
Welcoming and imperfect human presence mitigates psychological threat and belongingness uncertainty, phenomena that prevent students, particularly those with minoritized identities, from achieving their full potential. Yes, there's much more to effective, equitable online teaching, but research conducted on community college students, the most diverse population in higher ed, has shown that having such a connection with an online instructor is the exception rather than the norm. And the research I conducted with our project team, led by Di Xu at UC Irvine, validated the positive influence of humanized online teaching on student belonging and a reduction in racial equity gaps in online undergraduate STEM courses at community colleges and access-oriented universities.
I’ve been curious about the development of synthetic media and how it might play a role in online teaching. Synthetic media is content (image, video, or text) that is generated by AI. The term is often negatively associated with deepfakes. And this tech is making its way into the online teaching landscape.
This spring, I am facilitating a community of practice composed of a dozen faculty and staff from community colleges across California to kick the tires on synthetic media tools and explore whether, and how, they may have a place in one's teaching toolkit. Below is a video I generated with my own digital double (created with HeyGen) as a promotion for the community of practice. After creating my digital double (or "custom avatar," as it's called in HeyGen) from a 3-minute training video of me speaking in my backyard wearing a denim jacket and orange shirt, I typed a script into HeyGen, and this is the video it produced.
As you watch it, what do you notice? How do you feel? How would it resonate with different students?
Our CoP cohort has had some very compelling conversations about synthetic media, delving into other features and capabilities as well, like the ability to translate videos into different languages with impressive lip syncing, or to upload a doc/PDF and have it generate an avatar delivering an AI-generated summary. As we all move forward into the AI era, it's vital that we lean in and have conversations about these things, even the things that creep us out. Because how we use technology changes us. Pull up a chair, folks. You are needed at this table.
Curious to hear about what we've been learning (and feeling)? And whether we see a place for this technology in our practice? On Thursday, April 10th from 10:00-11:00 am PT / 1:00-2:00 pm ET, I'll be hosting a conversation with four of our CoP participants. Care to join us in the uncanny valley? Register for free here. The archive of the session will be hosted on the CVC@ONE YouTube channel.