How does using AI affect our relationships with each other?
Let's think about each other as humans when we're cranking out our AI-generated emails.
Since the dawn of the generative AI in education era back in late 2022, everyone and their cousin seems to have developed some kind of AI literacy framework. Some have been created by organizations (like this one from Educause), some by individuals (I like Maha Bali’s critical AI literacy framework), some by higher ed institutions (they are legion, but here’s one from Stanford), and some by AI companies and their boosters. Most of them include roughly the same essential elements, with attention to knowledge and proficiency, plus elements to address those pesky limitations like “ethics” and “bias,” which are often framed as surmountable and resolvable. The endpoint, many of these models suggest, is something called “ethical AI use,” which feels to me about as achievable as ethical consumption under capitalism.
Listen, I don't mean to sound quite so snarky as all of this. Frameworks can help people understand how to approach technology, and being a critic is easier than coming up with something new. Rather than propose my own framework in an oversaturated field, though, I wanted to call attention to something I think is missing from the way we approach learning and decision-making about generative AI.
The missing relationships in AI
I'm going to focus here on text-based generative AI: chatbots like ChatGPT (“ChattieG” to some, which still feels about right to me), Gemini, Claude, Copilot, and others. My path into this conversation winds through my disciplinary background (sociology), scholarship on communities, and the edges of my work in social psychology and a wee bit in symbolic interactionism and computer-mediated communication. I think a lot about how education does or doesn't foster community, beyond individual interactions between students and educators and students and each other. I’ve talked about community in the classroom (in this podcast, for example), taught about it in my own classes and in workshops, and, someday, maybe I’ll write something more concrete than a vague title I have: “community is not an afterthought to belonging.” Basically, community matters for education, and relationships are essential to creating community.
In brief, what I'm saying here is that our approach to AI in education is missing one of the essential purposes of writing: to communicate, and to do so in relationship, with other humans. What does it mean when AI mediates those relationships? How does AI use affect our relationships, in other words? I think this is a question we should be folding into our AI frameworks, especially as we’re considering the role of these tools in building, or eroding, trust (I’m still working on that essay! Stay tuned!).
One quick caveat: I think we need to be careful not to romanticize relationships, in the same way that we shouldn’t romanticize community.1 Some relationships are terrible, exploitative, abusive, or even just mid. What I’m focusing on here are the relationships we want to or have to preserve: relationships with the people we love, people with whom we have obligations due to work or other social expectations, people whose lives we impact as part of being humans on a shared planet. It’s not a super bright line between “good” and “bad” relationships, if you get to thinking about it, but I want to make clear that we are not obligated to think deeply about AI in relationships that hurt us.
When is AI use good for our human relationships?
I'm going to start with the question I find harder to answer. On the spectrum between “AI is, on balance, good for the world” and “AI is, on balance, bad for the world,” I lean pretty hard toward the latter, and so, many of my uses of AI (which are indeed many, due to my day job) come with caveats. I do want to honestly entertain arguments I disagree with, at least in this context, so I’ll share a couple of examples of AI use in my own life that I think maybe, perhaps, a little bit, have the potential to enhance our human relationships.
One early summer afternoon, I sat with my kiddo on our back porch, yakking about who knows what all. I can’t recall what prompted this idea, but I pulled up the free version of ChatGPT on my phone and wrote a prompt to generate a mystery story that we could then incrementally solve together. My kiddo really enjoyed that activity and has asked me to do the same thing several times since then (I haven’t, for harm reduction reasons I’ll get to below). This activity was genuinely fun, and I’d say that, in a small way, it enhanced our relationship by bringing us together around a shared activity that required both of us to use our creative brains to solve a problem.
In an educational setting, I’ve taught a lot of people about generative AI and chatbots specifically: how they work, what they can and can’t do, what a hallucination looks like in real time, how “bias” aka inequality is recreated in writing, in images, in the structure of text, and why AI “research” tools cannot replicate the human endeavor of research, among many other things. I’ve asked participants for examples, we try them out together, and, sometimes, I’m wrong. AI use in these contexts, these relationships, facilitates understanding and is used transparently, with the consent of those in the space.
What are the common elements of positive AI use in relationships? These are just two examples, but a rough brainstorm to get us started includes: transparency, collaboration, responsibility, deepening human connection, creativity, and perhaps some attention to equity (yes, even in a mystery story!). I don’t think just injecting AI into a relationship achieves these goals, necessarily, or that this initial list is definitive. But I think it helps us get started considering how AI usage might affect our relationships with others.
Let’s shift to considering the many examples of ways AI negatively impacts our relationships, where my Oscar/Clare the AI Grouch heart truly lies.
When is AI use bad for our human relationships?
The examples of AI use fracturing human relationships are many, so it's really not difficult to think of a few. Can you think of any right now?
In an educational setting, students using AI as a substitute for their own work seems like one glaring example. Another example is educators using AI to shortcut the work of feedback and teaching without students’ knowledge. AI in these cases is a tool of efficiency, not relationship, and while social norms may shift to eventually make space for this kind of non-consensual AI usage, I don’t think we’re there yet, and to be blunt, I hope we don’t get there. And just because I can feel in my soul a knee-jerk reaction from AI boosters, let me just say that the purpose of these AI tools may well be primarily efficiency, and we are all overworked and need an infusion of efficiency in what we do - yes, that’s true. And, at the same time, there are effects on our relationships that are worth considering in real time. In the same way that massive technological tools like social media have reshaped our interpersonal relationships, AI is in the process of doing that, too, and I’m suggesting that we give those effects a whiff of a thought while the soup is coming up to a boil. We’re in the soup right now, friends, bobbing around with our little mushrooms together.
Beyond educational spaces, there are lots of other ways AI usage can negatively affect our interpersonal relationships. One of the classic examples I often hear shared in administrative contexts is using AI for email efficiency. Have AI write all of your emails! And, man, I get it. Who likes writing emails anyway? I do sometimes have AI help me with a turn of phrase or two. I have a lot more to say about email as a relationship-building (or busting) technology, but I’ll share one quick example of an AI-generated email I received that made me wince. Some time ago, I received an email requesting a reference that had clearly been copied and pasted from a chatbot. You know the signs: different font and color plunked in the middle of the text, not the voice of the author, and overly formal, vague, and corporate-sounding. Was it the clumsiness of it or the use of AI at all that felt like it introduced a fracture into our relationship? Maybe a little bit of both, truthfully. And maybe nobody else cares whether the emails they receive are written by AI. Maybe crafting a thoughtful email, even a brief one, is the realm of the privileged now, or maybe it always was. But if you received an email like that, would you have winced, too? Be honest.
A few other wince-inducing uses of AI come to mind: there was that “Dear Sydney” Google Gemini ad that created a stir, the hard pivot to AI by media companies and the attendant loss of trust, and the many examples of legal relationships damaged by fake, chatbot-generated (aka “hallucinated”) legal cases. Some of these examples are just pure clumsiness, but I’m not sure that expertise with AI fully addresses the relationship question, either.
Continuing with our brainstorming exercise, some of the common themes I see in these cases include: emphasis on individual efficiency, profitability, deceit, and flattening or eliminating human voice.
I haven't even tackled what happens when AI becomes a substitute for human relationships (professional relationships, friendships, partnerships, and others), and you don't need to look far to find examples of how terrible this can be. There are now many examples of the damage AI “companions” can do to individuals, and the problem is emerging so swiftly among young people that the American Psychological Association has a (pretty good) health advisory on the topic.
I also have a hard time wagging my finger at people who use AI as a way to address real problems where a human relationship should exist but, for a variety of reasons related to capitalism and inequality, does not. Who hasn’t gone to AI to ask about a healthcare problem because doctors don’t take their reports of pain seriously (just hypothetically)? Even just basic access to some human relationships, like competent healthcare providers, is a privilege these days, too, and I worry that the emergence of AI will only exacerbate these inequities in access to humans.
What to do?
When we use AI, I'm suggesting that, alongside our concerns about ethics and bias (or what I like to call the reproduction of inequality, because it has a less individual vector), we ask what AI use will do to the relationship we're engaging in when we use it to write, or revise, or tell a story, or write an email, or chat with our students. What do we know about the people we're in relationship with, and how would they feel about us using AI? Does it enhance our connections with each other, our experience of community, or detract from them? Maybe the answer will sometimes be, meh, they wouldn’t care. But I think at least asking these questions might give us a chance at preserving what we value in human communication, and in community, by extension. I don’t think these questions will ever be resolved, as the meanings and experiences of community change over time, and, to me, resolution isn’t the point. There is no endpoint to “relationship-informed AI use,” just as there is no endpoint to “ethical AI use.” I think we can ask ourselves these questions, and keep asking them, and ask them in our classrooms, and in our meetings, and with our loved ones, too.
Clare’s Customary Caveats (™)
Even though I’ve hinted at this above, I haven't touched relationships beyond the interpersonal, although they’re implicit in all of this thinking. I remember being a wee undergrad many years ago and really coming to understand the concept of globalization and its material effects on my life. It meant that I had invisible relationships with many, many more people than I knew, as, for example, the very shirt on my back was created by humans in a factory half a world away, transported by other humans across land and water, moved into and out of vehicles, arranged in stores (back when most readymade clothing was still purchased in stores), sold to me, all by other humans. And, of course, this whole system was, and remains, rife with inequality, and what responsibility did I have, as someone with a measure of resources, to be accountable to all of those people?

It was a lot for me to keep in my head at the time, but the same thing is true of AI. Every time we write our mystery story-generating prompt, we’re leaning on an implicit set of relationships with people who have experienced horrible things2 as part of their role in the AI ecosystem we now inhabit, as well as people who benefit mightily from the human resources of our ideas. As they say where I live now, uffda. I admit that I really don’t know how to work through what AI use means in relationship with all of these people, let alone the rest of our planet. My point here is that it’s really hard to keep this web of relationships in mind as we’re hammering out our request to synthesize this, or shorten that and make it sound more “professional.” But that doesn’t mean those relationships don’t exist.

The cruelest responses to this reality include things like “but everything else we do has similar effects” etc. etc. ad infinitum, but the second, quiet part of this response seems to be “so let’s keep doing it! Whee!” and that doesn’t feel great. I think I prefer a harm reduction approach to AI, which is something I also hope to write more about in the coming weeks.
1. I often come back to Miranda Joseph’s 2002 book, Against the Romance of Community, as an exemplar of this kind of critique of community.
2. This is an older article from Time, at least “older” in the landscape of AI development, but I think it’s still a pretty good explication of the kinds of labor involved in keeping our li’l recipe-generating, funny-image-making, email-revising conversational machines up and running. Huge content warning for literally the worst of the internet. Perrigo (Jan. 18, 2023), “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” Time.