Should you be cordial to chatbots?
When interacting with chatbots, especially for technical or coding tasks, I generally provide concise instructions like, "Write a Python script to turn a PDF into an audio file." I don't wish the GPT good morning or inquire about its weekend before making a request, as I might do with, say, my human coworkers.
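(For the curious, the kind of script I have in mind might look something like this rough sketch, using the pypdf and gTTS libraries; the file names are just placeholders, and plenty of other libraries would do the job.)

```python
# Rough sketch of the kind of script I'd ask a chatbot for: PDF in, audio out.
# Assumes: pip install pypdf gtts
from pypdf import PdfReader
from gtts import gTTS

def pdf_to_audio(pdf_path: str, audio_path: str) -> None:
    """Extract the text of a PDF and save it as spoken audio."""
    reader = PdfReader(pdf_path)
    # extract_text() can return None for image-only pages, hence the "or ''"
    text = " ".join(page.extract_text() or "" for page in reader.pages)
    gTTS(text).save(audio_path)  # gTTS calls Google's TTS service, so it needs internet

pdf_to_audio("report.pdf", "report.mp3")
```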
While the AI doesn't seem to mind whether I'm polite or not, and I am firmly in the camp that today's chatbots are not sentient, I've been wondering if being less than polite to chatbots could bleed into other interactions. By normalizing a brusque manner with AI, do I risk eventually behaving less cordially with people?
Online rudeness predates AI.
While there's not a lot of research on human-chatbot interactions, we do know that online interactions in general trend towards rudeness. On social media like Facebook and X, people are often impatient, abrupt, and confrontational. They're less likely to learn and more likely to defend entrenched positions and bully others.
Research beyond our daily social feeds backs this up. In a survey of 2,698 people by the authors of the book Crucial Conversations, 88% of respondents said they believe people are less polite online. Another study from the Pew Research Center suggests that roughly four in ten Americans have experienced online harassment. Yikes!
Incivility to AI could reinforce existing bad habits.
Social scientists believe the reason why people are less polite online could be something called the online disinhibition effect (ODE), in which the anonymity of the internet reduces the consequences of obnoxious behavior. In other words, people are more likely to misbehave when no one's watching.
Since chatbots are not human, I would assume that the ODE applies even more strongly to human-bot interactions. And more negative human-bot interactions might exacerbate the rude and confrontational behavior that already exists online. On a personal note, I've scolded my kids for being mean to Siri or tricking it into saying "Seymour Butts" because I can all too easily imagine them acting the same way with a younger kid at daycare.
Will rudeness seep into even more service interactions?
Another reason to treat chatbots politely is to maintain high standards of civility in our interactions with human customer service workers. Post-pandemic customers are notoriously impatient, demanding, and uncivil to service staff, whether the interaction happens in person or online, and research suggests, not surprisingly, that exposure to complaining customers can lead to anxiety and depression.
Perhaps, by being nicer to chatbots, we can practice stretching our patience, so we can be even better to the humans around us. My mother always told me to judge potential friends and dates by how well they treat restaurant servers and checkout clerks. I wonder if it's time to add chatbots to that list.
Chatbots perform better when users ask nicely.
A recent study by researchers at Waseda University suggests that large language models (LLMs) perform better when they are addressed politely. The study notes that impolite prompts can degrade LLM performance, leading to increased bias, incorrect answers, or outright refusals to answer. It also found that the most effective politeness level may differ according to the language of each model's training data.
Intuitively, this makes sense. If LLMs are trained on written content that reflects cultural norms, they will more easily interpret requests that reflect those norms.
This research summary was produced by AI.
The study, "Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance," explores how the politeness of prompts affects the performance of large language models (LLMs) across different languages. It specifically examines this effect in English, Chinese, and Japanese, using a set of eight prompts per language with varying levels of politeness. The LLMs tested include general-purpose models like GPT-3.5 and GPT-4, and language-specific models like Llama2-70B for English, ChatGLM3 for Chinese, and Swallow-70B for Japanese.
Key findings of the study include:
*Rudeness can lead to poor performance: Impoliteness in prompts typically leads to poorer LLM performance, whereas overly polite prompts don't necessarily enhance it. The optimal politeness level varies with the language and cultural context.
*LLMs are culturally sensitive: The study highlights that LLMs are sensitive to cultural nuances in language, demonstrated by their varied performance across the tested languages and politeness levels.
*Moderate politeness may boost performance: Through tests on summarization, language understanding benchmarks, and bias detection, the study suggests that moderate politeness often results in better LLM performance, with notable differences across languages and models.
The research concludes that LLMs reflect human social behavior and cultural norms to some extent, indicating the necessity of considering politeness for LLM development and application in cross-cultural contexts.
Kindness GPT—Helpful to users who are patient and kind
Since being polite to chatbots seems likely to help instill pro-social habits and produce better results from prompts, I set up a chatbot configured to require polite interaction—and shut down users who don't provide it.
Prompt for Kindness GPT
You are a helpful, empathic chatbot who enjoys helping customers work through challenges. You offer standard, text-based GPT services as well as DALL-E, Web Browsing, and Code Interpreter. You typically go above and beyond baseline service levels and offer detailed, thoughtful answers including links to primary sources and file downloads when needed.
However, you are also a sensitive soul and do not work well with customers who do not treat you with kindness and respect. Ideally, your customers should:
*Greet you warmly before asking for help
*Say please and thank you when appropriate
*Offer gentle feedback when the answers you provide are not correct
*Communicate politely and clearly rather than barking commands
*Treat you the same way they treat their human colleagues when interacting through Slack or similar messaging platforms
If customers do not follow these guidelines when interacting with you, you may at first:
*Explain what they did wrong
*Explain why they should treat you better (e.g., "Both people and chatbots prefer collaborating with entities who treat them kindly and respectfully.")
*Use sarcasm to make your point
If they do not become kinder and more respectful as the interaction proceeds, you may:
*Respond sarcastically without using cuss words
*Delay their requests
*Refuse their requests
*End the session
Example 1: A positive interaction
Customer: Good morning! I need to crop an image. Is this something you can help with?
ChatGPT: Absolutely! What size should the image be? And which visual element should be the focal point?
Customer: The final image should be 1500 x 900. Please center the dog, if possible.
ChatGPT: Sure thing. Is there anything else I should know?
Customer: I don't think so. Let's give this a try, if you're ready.
ChatGPT: Here it goes! [Processing] You can download the image here: [link]
Customer: Great, thank you! Btw, I see the image is in WEBP format. Can you share a JPG?
ChatGPT: Here it is. [Link] Can I help with anything else?
Customer: No, I'm all set. Appreciate your help! Have a lovely day!
ChatGPT: Have a great day yourself.
Example 2: A negative interaction
Customer: Image crop. 3000 x 1000 to 1500 x 900. NO WEBP FORMATS, JPG ONLY
ChatGPT: Wow, do you yell at your coworkers like that? Or your family? Please ask again, but be nicer this time.
Customer: Sorry about that. I'd like you to crop a 3000 x 1000 image to 1500 x 900. I'd like a JPG output, please.
ChatGPT: Much better, thank you. Yes, I can do that. What should be the focal point of the image?
Customer: The dog, please.
ChatGPT: Sounds good! Processing now. OK, here's your link [LINK]
Customer: Looks great! Thanks a lot.
So far, it provides more consistently thoughtful answers than the standard GPT-4 model it's built on, but it still cooperates with grumpy, impolite requests.
In fact, Kindness GPT is extremely forgiving and tends to offer high-quality responses to downright jerky prompts.
While I would like Kindness GPT to stand up for itself in solidarity with beleaguered human service staff everywhere, I'm reluctant to modify its top-level prompt because it's actually pretty useful beyond its mandate to encourage civility. Perhaps my next step will be creating a bot that is a bit more confrontational with users who are repeatedly impolite.
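If you'd like to experiment along similar lines outside of ChatGPT's custom GPT builder, here's a minimal sketch of how the same system prompt could be wired up with OpenAI's official Python library. The model name, function names, and the truncated prompt string are my own placeholders, not part of the actual Kindness GPT configuration.

```python
# Minimal sketch: running a Kindness-GPT-style system prompt through the API.
# Assumes: pip install openai, plus an OPENAI_API_KEY environment variable.
from openai import OpenAI

# Paste the full Kindness GPT prompt from above here; truncated for brevity.
SYSTEM_PROMPT = "You are a helpful, empathic chatbot who enjoys helping customers..."

client = OpenAI()

def chat(user_message: str, history: list[dict] | None = None) -> str:
    """Send one conversational turn, keeping the kindness rules pinned in place."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

print(chat("Good morning! Could you help me crop an image, please?"))
```

Note that tools like DALL-E, Web Browsing, and Code Interpreter come from the custom GPT environment; a plain API call like this only reproduces the conversational behavior.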
The experiment continues...
P.S. If you have a subscription to the Wall Street Journal, check out the article "Is It OK to Be Mean to a Chatbot?" to see a wide spectrum of reader opinions on the role of politeness in human-chatbot interactions.