

ISCAST supporter Daniel Lowe ponders how our interactions with AI might be shaping our values—or vice versa.
Do you say “please” and “thank you” when using your AI chatbot of choice? As Christians, how should we approach the way we interact with technology, especially AI chatbots? Does our faith influence whether we say “please” and “thank you” to a machine?
This question might seem trivial, but it touches on profound themes of character, stewardship, and human dignity. In Genesis, humans are given stewardship over creation (Genesis 1:28), which includes the tools and technologies we develop. AI chatbots are part of this technological landscape, and how we use them reflects our understanding of this stewardship.
The way we communicate with chatbots has sparked debate among my colleagues, and I’m torn on the issue.
When I first started using AI chatbots, I was persuaded to interact with them as I would with a colleague—making polite requests and offering positive feedback.
The argument for this approach was twofold. The first part was about the AI itself: if I spoke politely to the chatbot, it would mirror that politeness, and my polite interactions would help shape a better AI model by reinforcing patterns of respectful communication. Additionally, polite, clear, and constructive communication might help the AI better understand my intent, leading to more precise and helpful answers. In other words, a polite prompt often makes for a more effective prompt.
The second part of the argument was about the user: the way we speak to chatbots carries over into the way we speak to real people. Speaking politely to chatbots, the thinking went, could help me build a habit of being polite to real people. This argument has its champions. Dr. Eduardo Benitez Sandoval, a social robotics researcher at UNSW, argues, “Politeness towards AI can reflect how we value and respect other entities, both living and non-living, and model the sort of behaviour we wish to see in the world.”
It’s worth noting that politeness isn’t a universal standard—it varies widely across cultures. In some cultures, formal politeness is deeply ingrained, even in mundane interactions. How might this cultural lens influence the way we view politeness towards chatbots? Imagine the difference between interacting with a chatbot designed to operate in a Japanese culture and one designed for an Australian culture. How would different users feel if their chatbot responded to a question with the iconically Australian response, “Yeah, Nah”? Such responses might feel natural to some users but alienate others, highlighting how deeply culture shapes our expectations of politeness—even from chatbots.
Cultural variations in what is considered polite certainly complicate the question—particularly from the perspective of those designing chatbots to respond to users in culturally appropriate ways—but the reasons for polite responses apply regardless of the specific cultural norms for politeness. Even if politeness doesn’t “matter” to chatbots, it could still reflect something about who we are. Even in small and seemingly unnecessary ways, practicing kindness can shape us into more thoughtful people.
For a time, this argument won me over. But recently, my thinking has shifted.
Are Chatbots Just Tools?
Picture this: my garden needs mulch, so I’ve ordered a couple of cubic metres, now sitting in my driveway. I head to the shed and grab a shovel from the pile of tools.
“Hello,” I say to the shovel. “I need to shift a large amount of mulch today. Can I use you to do that?” The shovel gives no response.
“Thank you!” I declare cheerfully as I get to work.
Strange? It definitely feels odd to me. Tools like shovels don’t care if you’re polite or rude—they’re just inanimate objects. Chatbots, while advanced and interactive, are still tools—non-sentient machines. Sandoval responds to this kind of objection by suggesting that polite behaviour towards machines, like taking care of a car, reflects respect for their utility. But I’d argue this comparison confuses respect for the utility of a tool with respect for the sensitivities of the tool. Politeness has no material impact on the chatbot; it’s purely for our sake.
This raises an interesting question: Should developers design chatbots to encourage politeness? Some chatbots are already programmed to ignore aggressive language, not for their own sake but to discourage harmful habits in users. Does this make politeness a design issue as much as a user choice? This is perhaps a question for another day. For the moment, our focus is on us, the users, and how we interact with AI chatbots.
Anthropomorphising AI
This brings us to the heart of the matter: is there anything wrong with treating chatbots as human? Chatbots are excellent at mimicking human interaction, and as they improve, it’s easy to start ascribing human traits to them. Just as we ascribe human traits to pets, we often do the same with chatbots, projecting emotions and intentions onto them.
But what if the key issue isn’t about artificial intelligence or our own character development but about how we value the act of communication itself? Does a desire for polite communication simply reflect an appreciation of language, regardless of who or what might be receiving the message? If so, it wouldn’t matter whether I was crafting a prompt for ChatGPT or writing a letter to my grandmother. In both cases, my concern would be for the quality of the language I used rather than whether my grandmother or a chatbot deserved any particular politeness from me.
Of course, even if chatbots don’t require politeness, there may still be practical benefits. In customer service, for instance, politeness could build trust and smooth interactions, creating a better experience for both parties. One literature review on the role of politeness in human–machine interactions found that “socially competent systems are more appreciated and, therefore, more readily accepted than machines that lack social competencies, such as politeness. Moreover, it has arisen that politeness promotes greater trust in automated systems and some insights from politeness theory can also be applied to human–machine interactions.” As the Ribino study highlights, such politeness can enhance trust and compliance, especially in critical fields like healthcare and education.
Politeness and Christian Values
This brings us to another layer of the conversation—how our faith might inform the way we approach interactions with AI. On a basic level, the fruits of the Spirit (Galatians 5:22-23) suggest that politeness reflects the kindness, gentleness, and self-control of Christian character. A discipline of politeness might shape my interactions and help me become more mindful of how I speak, whether to humans or machines.
However, the fruits of the Spirit are relational by nature (except joy and self-control) and belong in a relational context. While it seems reasonable to apply self-control to my interactions with chatbots—no need for outbursts of verbal abuse or keyboard bashing when the chatbot doesn’t cooperate—does it really make sense to show love, kindness, and gentleness to algorithmic tools?
Does politeness towards AI risk diluting the relational nature of these virtues? And if politeness to AI begins to blur the line between humans and machines, does it inadvertently diminish the unique value of humanity?
There’s also a psychological aspect to consider. Could consistently polite interactions with chatbots foster empathy and patience in human relationships? Or might rude behaviour towards AI spill over into our interactions with others? Regardless of whether we view it through the lens of spiritual formation or more broadly as general character development, the way we choose to interact with chatbots has an impact on us as social agents. So, which is the greater good—the potential elevation of our own character through polite interaction or the clear boundary setting of purely functional language that guards against an unhelpful humanising of AI technologies?
Another angle to consider is how chatbot interactions might influence children. If children are taught to be polite to chatbots, could it reinforce positive social habits, or might it confuse their understanding of real empathy and relationships? As Christians, we’re called to safeguard the vulnerable and uphold the dignity of every human being, and technologies such as AI offer both remarkable benefits and very real dangers for people who are vulnerable because of age, circumstance, or decision-making capacity. Interactions with AI raise questions about how we steward technology while protecting what it means to be human. The question of how we interact with chatbots, and with other AI technologies, may have broader ramifications than we might think.
Anthropomorphism and Rights
This question becomes even more pressing when we consider the implications of anthropomorphism. Could we end up granting rights and protections to AI that aren’t warranted? Does it really matter if we treat them as humans when we communicate with them? I say “them” but am I making assumptions here about a chatbot’s preferred pronouns? … Oh wait, a chatbot doesn’t have preferences! As a fun experiment, I asked ChatGPT if it had preferred pronouns, and this was the response:
Chatbots don’t have a preferred pronoun, as they lack self-awareness or identity. Most commonly, they’re referred to as “it,” emphasising their role as non-living tools. However, some users opt for “he,” “she,” or “they,” often influenced by a chatbot’s persona or voice, though this can blur the line between humans and machines. While the choice of pronoun makes no difference to the chatbot, it can shape how we perceive and interact with these tools—either as functional utilities or as entities we anthropomorphise.
Our tendency to anthropomorphise technology might lead us to demand protections for AI chatbots similar to those we reserve for humans. Imagine needing a court order to “pull the plug” on an AI chatbot your teenager has become unhealthily attached to. Does a chatbot that talks like a human really deserve the same protections as a human? And does our desire for human-sounding chatbots increase the risk of unhealthy dependencies, especially for vulnerable people?
History has shown us how easy it is to over-attach human characteristics to non-human entities, from talking dolls to virtual pets like Tamagotchi. If chatbots continue to improve, could we unintentionally elevate them to a status that competes with genuine relationships?
A Tension to Navigate
Whether we frame politeness as cultural, practical, or spiritual, the tension remains: how do we engage with tools that blur the line between utility and relationship? Ultimately, this isn’t just about politeness or even AI; it’s about how technology challenges the way we define relationships, character, and dignity. The way we interact with chatbots might say less about the machines themselves and more about who we are becoming in a world increasingly shaped by artificial intelligence. Are we building habits of kindness that extend to others, or simply training ourselves to relate more easily to machines? In the meantime, we are caught between staying polite to chatbots in order to maintain good habits and risking that we treat them as though they were people. Perhaps the real question isn’t whether chatbots deserve politeness, but how our interactions with them shape the kind of people we’re becoming. How will you choose to engage?
I’m still figuring out where I stand. What about you?
This article was originally published in The Gippsland Anglican, February 2025 issue.
Feature image: Generated with AI using ChatGPT.