This is a weird time in history to be a writer. Professionally, I've been working to identify the most helpful (and most dangerous) AI writing workflows for my tech industry clients. At the same time, I've been seeing a lot of headlines about how writers will be obsolete and how most office jobs won't be far behind. The news Google is suggesting for me isn't exactly reassuring, either.
A random screenshot of my Google news feed. Scary!
Perhaps not surprisingly, AI is all over my LinkedIn feed. Posts that are strongly pro- and anti-AI both seem to get a lot of traction. My own most popular post is basically a lengthy complaint about the ongoing need for both AI writing detection and "copy humanizer" models.
But research across multiple domains suggests that emotional, polarizing content performs well, whether or not it reflects how most people actually feel. What looks like an existential battle between AI boosters and detractors could just be the algorithm's bias for engagement.
All this left me wondering: what do the vast majority of professionals, who aren't selling AI tools or consulting services, think and feel about AI?
EVERYONE IS WONDERING ABOUT AI
According to Google's Keyword Planner, there are 1,500,000 search queries every month for "artificial intelligence" and another 1,500,000 for "artificial general intelligence."
💡 Use Reddit Data for a "Vibes Check"
To answer this question, I turned to Reddit, which is relatively anonymous and actively discourages sales and personal brand building. My idea was to collect and then analyze a statistically valid sample of posts in marketing and creative subreddits. Because my data science background is limited to Google's machine learning crash course, I used ChatGPT and Claude to help me write the code necessary to gather the posts and suggest models to make sense of them.
Ultimately, I ended up scraping and analyzing 2,082 Reddit posts mentioning AI across professional communities like r/marketing, r/copywriting, r/entrepreneur, and r/technology. Spoiler alert: The results surprised me and revealed patterns that don't fit neatly into either the "AI will save us" or "AI will destroy us" narratives.
The first part of this post details how I created a dataset of 2,082 Reddit posts and analyzed it. If you'd rather see the results right away, skip ahead to the "What I Found" section.
How I Chose the Right Subreddits
For this project, I focused on Reddit communities where professionals actively discuss AI primarily in the context of their work and creative pursuits. I specifically limited the dataset to posts from the past six months to capture how feelings have evolved in response to recent developments like big company announcements, new model releases, and AI's large-scale introduction into the workplace.
Rather than casting a wide net across all of Reddit, I selected subreddits that each offer a distinct, professionally skewed perspective, such as founders testing AI in startups, engineers evaluating tooling, and marketers and writers adapting their workflows. Some of these communities are small but specialized; others are large and fast-moving.
The table below outlines why each subreddit was chosen:
| Subreddit | Posts | Why it matters to the AI-sentiment conversation |
| --- | --- | --- |
| r/ChatGPT | 200 | A front-line diary of user experience with OpenAI's flagship product. Threads range from workplace automation to "is this replacing my job?" anxieties, giving direct evidence of enthusiasm and concern. |
| r/ClaudeAI | 200 | The Anthropic user base skews tech-savvy and privacy-minded; posts often compare Claude with ChatGPT, revealing nuanced professional preferences (e.g., "safer for client data, but weaker coding help"). |
| r/OpenAI | 200 | A meta-forum about policy changes, API pricing, and corporate moves. Professionals post here when shifts (rate limits, new models) affect product roadmaps or budgets, so sentiment swings with each announcement. |
| r/Entrepreneur | 200 | Startup founders discuss AI as an opportunity (automating ops, AI SaaS ideas) or threat (defensible moats shrinking). The blend of optimism and fear is useful for gauging business-owner sentiment. |
| r/technology | 200 | Mainstream tech news with a broad professional audience; AI headlines dominate, and comment threads capture initial gut reactions—often skepticism about hype versus substantive breakthroughs. |
| r/artificial | 200 | Long-form discussion of AI ethics, research, and existential risk. Professionals in policy, academia, and engineering debate practical vs. philosophical concerns, offering context for more specialized anxiety. |
| r/smallbusiness | 193 | Like r/entrepreneur but more grounded in day-to-day operations. AI sentiment here mixes hands-on adoption (e.g., automating quotes) with concerns about client perception and cost-benefit tradeoffs. |
| r/marketing + r/digital_marketing | 233 | Marketers explore AI's impact on SEO, PPC, and content strategy. Threads reflect both excitement over time savings and concern about voice, quality, and Google compliance. |
| r/productivity + r/UXDesign | 205 | Focuses on real-world integration of AI into workflows and tooling. UX discussions often highlight user friction with AI interfaces, while productivity threads showcase automation and work hacks. |
| r/copywriting + r/FreelanceWriters | | Creative professionals discuss how AI is changing pricing, expectations, and client deliverables. Posts balance time-saving wins with anxiety about value erosion and originality. |
| r/dataisbeautiful + r/worldbuilding + r/QualityAssurance | | These smaller, specialized subs add valuable perspective: from visualizing AI results to worldbuilding with generative tools to testing AI reliability and edge-case failures. |
How I Gathered the Data (Without a Data Science Degree)
Once I'd identified key subreddits, the next step was to collect the posts. I started by asking ChatGPT how to scrape Reddit (post content, timestamps, and URLs included) and to summarize Reddit’s API rules and best practices. After cross-referencing with Reddit's docs, I was ready to go.
I used the PRAW library (Python Reddit API Wrapper) to build a script that searched for posts across relevant subreddits using keywords like “ChatGPT,” “Claude,” and “generative AI.” The script collected each post’s title, content, date, and some engagement metrics like upvotes and comment count. I limited the scope to six months of posts, yielding 2,082 high-signal discussions.
The whole thing took about five minutes, a great reminder of how powerful AI-assisted workflows can be, even for non-experts.
import praw
import pandas as pd
from datetime import datetime, timedelta, UTC  # UTC requires Python 3.11+

# ✅ Your Reddit credentials
reddit = praw.Reddit(
    client_id="YOUR_ID_HERE",
    client_secret="YOUR_SECRET_HERE",
    user_agent="YOUR_AGENT_NAME_HERE"
)

# 🔍 Search setup
subreddits = [
    "marketing", "copywriting", "advertising", "FreelanceWriters",
    "Entrepreneur", "technology", "artificial", "digital_marketing",
    "smallbusiness", "dataisbeautiful", "productivity", "UXDesign",
    "ChatGPT", "OpenAI", "ClaudeAI", "QualityAssurance", "worldbuilding"
]

query = (
    '"artificial intelligence" OR ChatGPT OR GPT-4 OR Gemini OR Claude '
    'OR Midjourney OR DALL-E OR "generative AI" OR "AI tools"'
)

limit_per_subreddit = 200  # Adjust as needed

# Time filter: last 6 months
after_timestamp = int((datetime.now(UTC) - timedelta(days=180)).timestamp())

# Scrape posts
posts = []
for subreddit_name in subreddits:
    print(f"\n🔍 Searching r/{subreddit_name}...")
    subreddit = reddit.subreddit(subreddit_name)
    # Search newest-first, then keep only posts inside the 6-month window
    for post in subreddit.search(query, sort="new",
                                 limit=limit_per_subreddit, time_filter="all"):
        if post.created_utc >= after_timestamp:
            posts.append({
                "id": post.id,
                "subreddit": subreddit_name,
                "title": post.title,
                "text": post.selftext,
                "created": datetime.fromtimestamp(post.created_utc),
                "score": post.score,
                "num_comments": post.num_comments,
                "author": str(post.author),
                "url": post.url
            })

print(f"\n✅ Collected {len(posts)} posts.")
df = pd.DataFrame(posts)
df.to_csv("praw_ai_sentiment_posts.csv", index=False)
print("📝 Saved as praw_ai_sentiment_posts.csv in this folder.")
A Quick “Vibes Check” with ChatGPT
From a time and efficiency standpoint, manually reviewing all 2,082 posts was out of the question. So I started with ChatGPT's 4o model for a first-pass sentiment analysis, uploading the CSV file along with a detailed prompt asking it to classify each post as positive, negative, or neutral and to note which emotional keywords (e.g., curiosity, frustration, excitement) appeared most frequently.
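If you'd rather script this step than upload a CSV to the ChatGPT interface, a minimal sketch using the OpenAI API might look like the following. The prompt is an illustrative condensation rather than my exact wording, and the model name and output handling are assumptions you'd adapt to your own setup.

import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
df = pd.read_csv("praw_ai_sentiment_posts.csv")

# Illustrative prompt, not the exact one I used
PROMPT = (
    "Classify the sentiment of this Reddit post as positive, negative, "
    "or neutral, then list any emotional keywords you notice "
    "(e.g., curiosity, frustration, excitement).\n\nPost: {post}"
)

# Run a small sample first to sanity-check the prompt before spending
# tokens on all 2,082 posts
sample = (df["title"].fillna("") + " " + df["text"].fillna("")).head(5)
for text in sample:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(post=text[:4000])}],
    )
    print(response.choices[0].message.content)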
ChatGPT also used keyword analysis to guess the emotional tone of each post.
A word cloud based on text from 2,082 Reddit posts
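As a side note, a word cloud like the one above is easy to reproduce from the same dataset with the open-source wordcloud library. Here's a minimal sketch, assuming the CSV produced by the scraping script; the extra stopwords are my own choices.

import pandas as pd
from wordcloud import WordCloud, STOPWORDS

df = pd.read_csv("praw_ai_sentiment_posts.csv")
text = " ".join(df["title"].fillna("") + " " + df["text"].fillna(""))

# Filter common filler words plus a few dataset-specific ones
stopwords = STOPWORDS | {"https", "www", "com"}
cloud = WordCloud(width=1200, height=600, background_color="white",
                  stopwords=stopwords).generate(text)
cloud.to_file("ai_sentiment_wordcloud.png")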
A Second Opinion from Better Models
Before taking the results at face value, I manually reviewed the first 100 or so posts, and I’m glad I did. While ChatGPT’s keyword analysis picked up some interesting findings, like a large number of people building and testing AI tools, it also misread sarcasm and often mislabeled more nuanced posts as “neutral.”
To get more accurate results, I re-ran the analysis using two Hugging Face models:
cardiffnlp/twitter-roberta-base-sentiment — a RoBERTa model trained on tweets to classify posts as positive, neutral, or negative. It's more attuned to informal, emotionally charged text.
SamLowe/roberta-base-go_emotions — a multi-label emotion classifier based on the GoEmotions dataset, which tags up to 28 emotions per post (including admiration, nervousness, realization, remorse, and confusion). It’s particularly effective at teasing out emotional content in longer-form text.
These models, which are generally 75-85% accurate depending on the dataset, did better. Sarcasm was recognized more frequently, mixed emotions were surfaced more clearly, and emotional overlap (e.g., curiosity + anxiety) was handled more realistically.
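For reference, here's a minimal sketch of the two-model pass using Hugging Face's transformers pipelines. The file and column names follow the scraping script above; the batch size and the decision to keep only each post's top emotion tag are simplifications of the actual workflow.

import pandas as pd
from transformers import pipeline

df = pd.read_csv("praw_ai_sentiment_posts.csv")
texts = (df["title"].fillna("") + " " + df["text"].fillna("")).tolist()

# Sentiment pass: this model reports LABEL_0/1/2, which the model card
# maps to negative/neutral/positive
sentiment = pipeline("sentiment-analysis",
                     model="cardiffnlp/twitter-roberta-base-sentiment")
sent_results = sentiment(texts, truncation=True, batch_size=32)

# Emotion pass: top_k=None returns scores for all 28 GoEmotions labels,
# sorted from most to least likely
emotions = pipeline("text-classification",
                    model="SamLowe/roberta-base-go_emotions",
                    top_k=None)
emo_results = emotions(texts, truncation=True, batch_size=32)

label_map = {"LABEL_0": "negative", "LABEL_1": "neutral", "LABEL_2": "positive"}
df["sentiment"] = [label_map.get(r["label"], r["label"]) for r in sent_results]
df["top_emotion"] = [r[0]["label"] for r in emo_results]  # highest-scoring tag
df.to_csv("posts_with_sentiment.csv", index=False)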
What I Found: We're Sure We're Feeling Unsure
Overall results suggested that people are not extremely pro- or anti-AI despite the prevalence of emotionally charged news and AI-related engagement bait. But that doesn't mean everything's OK. While the sentiment model showed about half of posts falling into the neutral category, the emotion model revealed a persistent undercurrent of uncertainty as today's dominant vibe.
Sentiment Results: Reserving Judgment
I ran CardiffNLP's RoBERTa sentiment classifier first.
Overall Sentiment Distribution
Neutral: 50.3%
Negative: 25.0%
Positive: 24.8%
That heavy tilt toward neutral suggests a lot of professionals are reserving judgment. Many posts describe AI tools or workflows without strong emotion, or include both praise and concern in the same thread. Also, the relatively large percentage of negative posts suggests some level of confusion, frustration, or fear.
Emotion Classifier Results: Curiosity, Uncertainty, and Concern
To dig deeper, I used the SamLowe/roberta-base-go_emotions model, which tags posts with up to 28 emotional labels. These include obvious ones like fear or joy, but also subtler ones like realization, confusion, or admiration.
To make the data easier to interpret, I grouped the emotions into four broad categories: positive, negative, neutral, and uncertain.
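Expressed as code, the grouping is a simple lookup. The label-to-bucket assignment below is my reconstruction rather than a canonical mapping; the one deliberate choice, discussed below, was counting confusion as uncertain rather than negative.

import pandas as pd

# Reconstructed four-bucket grouping of the 28 GoEmotions labels
EMOTION_BUCKETS = {
    "positive": {"admiration", "amusement", "approval", "caring", "excitement",
                 "gratitude", "joy", "love", "optimism", "pride", "relief"},
    "negative": {"anger", "annoyance", "disappointment", "disapproval", "disgust",
                 "embarrassment", "fear", "grief", "remorse", "sadness"},
    "uncertain": {"confusion", "curiosity", "desire", "nervousness",
                  "realization", "surprise"},
    "neutral": {"neutral"},
}

def bucket(label: str) -> str:
    for name, labels in EMOTION_BUCKETS.items():
        if label in labels:
            return name
    return "neutral"

# Tally the bucket distribution across all posts
df = pd.read_csv("posts_with_sentiment.csv")
print(df["top_emotion"].map(bucket).value_counts(normalize=True) * 100)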
This analysis told a somewhat different story than the sentiment chart alone.
Emotional Distribution Across All Posts
Uncertain: 32.5%
Neutral: 32.0%
Positive: 27.2%
Negative: 8.3%
Uncertainty was the most common emotional theme, showing up in over 30% of posts. That includes people exploring new tools, asking questions, or reflecting on how AI might affect their work.
Meanwhile, only 8% of posts reflected fear, sadness, or frustration. This result appears to contradict the sentiment analysis, which identified 25% of posts as having a negative sentiment. Some of the difference may be explained by my decision to include confusion as an uncertain emotion rather than a negative one. Overall, these results suggest that while concerns exist, outright pessimism is not dominant.
Top Emotions by Count
Here are the most frequently detected emotions (across both top and secondary tags):
Curiosity: The most dominant emotion, typically tied to questions about how AI might impact industries and jobs, and to posts about building or testing new tools.
Neutral: Frequently detected in posts that share information or general observations without a strong emotional tone.
Gratitude: Often linked to time-saving features, productivity gains, or breakthrough results from AI tools.
Approval & Admiration: Common in positive reviews of Claude, GPT-4, and niche tools; expresses respect or endorsement.
Confusion & Realization: Arise in posts about changing market conditions, misunderstood tool behavior, or moments when users gain new insight into model limitations.
Surprise: Triggered by unexpected output, hidden capabilities, or tool limitations.
Fear & Sadness: Often tied to ethical concerns, job security, or emotional fatigue related to AI's societal impact.
Emotion by Community
While curiosity was the most common emotion across nearly all communities, secondary emotional tones varied by context and focus:
r/marketing and r/entrepreneur
Skewed slightly more positive and optimistic. Many users shared experiments with AI for growth, productivity, or client results, particularly in content creation, automation, and lead gen. Posts often expressed admiration for tools like Claude and GPT, tempered by concern over brand voice or quality.
r/ClaudeAI and r/technology
Showed higher levels of surprise and confusion, often in response to unexpected model behavior, bugs, or ambiguous capabilities. Some posts also expressed fear around long-term implications.
r/smallbusiness
More grounded and emotionally mixed. Users frequently toggled between gratitude (for time saved) and concern (about AI's impact on customer relationships, service quality, or cost). There's less hype here and more focus on practical, risk-aware adoption.
r/copywriting and r/FreelanceWriters
Reflected a blend of curiosity, admiration, and frustration. Writers used AI for ideation and speed, but many voiced concern over declining rates, client expectations, and the erosion of perceived value. Posts often included direct examples of tools being helpful and falling short.
How Professionals Use AI Today: Testing, Tinkering, and Therapy
If mainstream media coverage—and corporate press releases—suggest that AI is being rapidly integrated across organizations with clear workflows and guidelines, Reddit tells a different story. The data reveals a more organic, exploratory phase that's happening at the individual level first.
200+ posts explicitly mentioned trying, testing, or experimenting with AI tools—not as part of a company-wide implementation, but as personal initiatives driven by professional curiosity.
In fact, most experiments had nothing to do with team efficiency or production at scale. Instead, they focused on individuals testing what is possible with AI and sometimes finding unexpected applications. This pattern suggests that contrary to top-down implementation narratives, AI adoption is happening through grassroots experimentation by professionals, each finding their own path through trial and error.
The diversity of these experiments also contradicts the notion that AI will simply automate existing workflows. Instead, we're seeing new use cases emerge organically, often in areas that weren't initially targeted by AI developers.
01. AI for Brainstorming: The Perfect Audience
A broad spectrum of professionals, including everyone from creative writers to entrepreneurs, are using AI as a non-judgmental brainstorming partner. Instead of bouncing a potentially crazy (or wildly innovative) idea off skeptical friends or colleagues, they're asking AI, which may feel like a lower-stakes interaction.
❝
I've found it very helpful as a brainstorming partner. It's...helped me clarify and refine my own ideas and has given me new directions on projects I may not have thought of otherwise. And I find even just sharing progress updates with it helps keep me motivated since I don't really have anyone that understands my projects that I can share updates with.
— r/ChatGPT
This sentiment—finding an always-on audience that immediately 'gets it'—even extends into technical fields. Quality assurance professionals report using AI "as a learning and brainstorming tool," while cybersecurity specialists use it to articulate complex vulnerabilities: "When I'm struggling to think of how to explain why a vulnerability is bad, AI can give me some talking points to research."
What's interesting here is that AI isn't being asked to create. Instead, we want it to listen, respond, and help refine human ideas. In this case, the technology serves as an infinitely patient collaborator that neither criticizes nor tires of the conversation.
💡
AI may not provide a great "reality check"
The idea of AI as a non-judgmental collaborator appears repeatedly across the dataset. However, it's important to note that AI may fail to provide constructive criticism or even encourage users to proceed with obviously terrible ideas.
02. AI for Therapy: The Caring Confidant
Perhaps the most unexpected use case that emerged from the data is AI as emotional support. Multiple users described turning to AI models for motivation, self-reflection, and even therapy-adjacent interactions.
❝
I have my ideal vision of the end of the year written out alongside what I do not want to happen, and I feed that to ChatGPT or Claude AI... Then I ask it to create motivational messages for me based on my visions and designed to light a fire under me.
Others were more explicit about the therapeutic value: "ChatGPT will be a free therapist who knows you better than anyone if you talk to it same way you talk with yourself sometimes." (While AI used in this context can be comforting, it's worrying if people with serious mental health conditions turn to chatbots instead of a qualified professional.)
Another user noted how AI's lack of human biases created space for new perspectives: "Talking to something that doesn't do human drama or get caught up in the usual stuff has been a real eye-opener. It's made me question a lot of things I took for granted and helped me see things from a new angle."
Once again, we're seeing an application of AI that has nothing to do with automation or efficiency. What makes this particularly interesting is how it contradicts the common narrative that we're using generative AI mostly for creative and knowledge work. Instead, we're seeing professionals use AI primarily as a thought partner, a sounding board, and even an emotional support tool.
03. The Ethics Frontier: Interview Tools and Real-Time Coaching
At the edges of professional experimentation lie more ethically questionable applications. Several users reported detailed evaluations of interview assistance tools designed to help job seekers answer questions in real time during actual interviews.
Ethical gray zones
One detailed comparison of two "interview copilot" tools revealed sophisticated features: "During mock interviews, [Tool #1] excels with a persistent browser overlay that provides discreet STAR-based prompts... [Tool #2] gives impressive diagnostic feedback: a report on interview type, domain, and duration, plus analytics for relevance, accuracy, and clarity."
Another user was explicit about using these tools to gain an unfair advantage: "I used [Tool #1] for 2 Zoom interviews and 1 Skype interview. Their responses have no delays and are very seamless, much better than my experience with other interview copilot tools. I got the offer after using [Tool #1] and didn't get caught."
These posts included detailed pricing breakdowns and practical comparisons, showing the same methodical approach as other experiments but directed toward potentially deceptive applications. This raises important questions about how AI might undermine traditional hiring practices.
⚠️ The Rabbit Holes Are Real: When AI Creates False Confidence
Not every experiment ends in success. Some of the most revealing posts documented frustration and wasted time—the dark side of AI-assisted work that rarely makes it into promotional materials.
❝
I wasted up to 2 hours of my day trying to use ChatGPT to help me interpret error messages in the Python console despite me not understanding a thing about that computer language... I persisted. I got angry. I changed lines of code, got excited when I saw the error didn't pop up, and then got my iddy biddy heart broke when a new, different error popped up.
I am typing this in utter frustration as I have been attempting to fix this issue for the past 2 hours with the help of ChatGPT, yet without success... Since then I have basically been hard-pressing CGPT for answers... and tried everything it suggested. Nothing.
These accounts might seem humorous, but they reveal something about human-AI interaction that isn't captured in technical documentation or capability lists: the frustration of almost-but-not-quite understanding. AI tools can create a false sense of competence, making users feel they're on the verge of solving problems outside their expertise, only to hit limitations they're ill-equipped to overcome. That persistent belief that success is just one more prompt away, and the emotional investment that comes with it, appears frequently in the data.
Patterns in AI Experimentation: Six General Approaches
While every individual experiments with AI in their own unique way, some patterns or "types" of AI users emerged during analysis and my manual review of posts. These are not rigid categories—most individuals shift between modes, for instance from Tinkerer to Project Manager, depending on the task at hand.
TNK
The Tinkerer
"I wonder what will happen if I try this."
Tinkerers want to see how much they can do with AI and may even try to break it. They tend to try out AI on side projects, developing small wrappers or custom prompts, typically without any specific business outcome in mind. They relish the process of discovery and tend to be the first to share back to their communities if they discover something interesting or unexpected.
What distinguishes tinkerers from other experimenters is that they are at home with open-ended investigation. They are prone to share their findings with communities, opening discussions with phrases like "I was messing around with ChatGPT and saw something cool." Their approach, while lacking in structure, has a way of unveiling new uses of AI that more goal-oriented users might not notice.
PM
The Project Manager
"How can this help my workflow?"
Project managers approach AI with systems thinking. They methodically look for ways to integrate AI into existing processes, often layering multiple tools (Notion + ChatGPT + Zapier) to build semi-automated workflows. Their experiments center on content operations, idea generation, and finding repeatable patterns that can be standardized.
Reddit posts from project managers usually discuss workflow and process and typically include comparative reviews of different tools for achieving a specific purpose. Their experimentation is controlled and outcome-driven.
SKP
The Skeptic
"Is this really better—or just faster?"
Skeptics engage with AI tools but maintain a safe distance. They're particularly on the lookout for hallucinations, tone issues, and factual errors. Their posts are more likely to focus on what AI gets wrong than on its capabilities, and they frequently include phrases like "I'm not sold on this yet" or "Has anyone else noticed this issue?"
What makes skeptics valuable to the broader conversation is their commitment to quality and precision. Their tests are designed for verification and validation, putting AI-generated content up against human benchmarks, rigorously fact-checking, and verifying citations.
ASP
The Aspiring Pro
"I'm trying to build a career—and AI might help or hurt."
Usually early-career or switching fields, aspiring professionals view AI through the lens of career development. Their content is a mix of hope and concern over how AI will affect their future careers. They use AI to accelerate learning, build portfolios, create mock projects, and generally try to "catch up" for competitive fields.
Their questions tend to be about relevance and timing, e.g., "Am I too late to get into this industry?" or "What do professionals even do anymore now that AI can do this?" Their Reddit posts are more vulnerable than those of other cohorts, openly discussing fears and uncertainties about career prospects and AI's impact on their futures.
TS
The Time-Saver
"Can AI do this boring thing?"
Time-savers search high and low for repetitive work that AI can do in their place. Unlike project managers, who focus on high-level workflows, time-savers focus on technical execution, such as building bots, connecting APIs, and scripting to replace manual labor. They usually look for ways to automate administrative tasks like summarizing meeting notes, pre-drafting emails, or coding.
Although they may not think of themselves as creatives, they often solve problems in creative ways. Their Reddit posts frequently include step-by-step technical details, code snippets, and useful tips.
ED
The Editor
"How do I make it sound like me?"
Editors worry about the tone, style, and creativity of written content. They willingly use AI to generate content drafts but end up spending much of their time editing for voice and authenticity. Their experiments and failures are directed at prompt engineering, style guides, and persona development—anything that solves the generic "AI voice" problem.
Their Reddit posts often center on the challenge of sounding "too AI-ish" and how to maintain authenticity in their communication. They're usually occupied with adjusting prompts for brand voice, studying writing patterns, and developing systems for keeping AI-assisted content consistent.
Key Takeaway
The majority of us dabbling in AI don't fit into one category. We might apply tinkerer enthusiasm to a new personal project while maintaining skeptical diligence for client work.
Trust But Verify: The Cautious Implementation Mindset
Despite the large numbers of professionals actively experimenting with AI tools and even expressing enthusiasm, there's a persistent undercurrent of caution.
Post Sentiment Distribution
516 positive posts: focused on productivity, speed, and structure
1,045 neutral posts: curious or uncertain, asking questions, sharing workarounds
521 negative posts: explicitly critical or frustrated
Even in positive posts, almost no one expressed complete trust in AI outputs. The prevailing attitude is "useful but needs a lot of supervision." This is a far cry from the more optimistic messaging we've seen from AI-first companies and consultancies.
The Trust Gap: Why We Still Double-Check Everything
The most common concern across professional communities is that AI is confidently wrong in ways that could damage professional credibility. For example, one Redditor warned colleagues about the risks of using AI to fake expertise in technical fields:
❝
ChatGPT gets things wrong a lot in fields like science, engineering, accounting, or architecture... If you're just throwing keywords into GPT and hoping it'll make you sound smart, people will notice. Experts who've been around for 10, 20 or 30+ years will call you out, and it'll backfire. You can't fake expertise, especially in fields like science, engineering, or architecture.
Indeed, organizations are increasingly adjusting their AI use policies and recommendations to recognize the fact that AI often masks errors with an authoritative tone of voice. Requiring a final human review before publication is one of the most common new standards.
The AI Voice Problem
Another subtle reason for mistrust of AI-generated work? Professionals are increasingly able to identify AI-generated text based on telltale patterns and phrases. Hundreds of posts across social media of all kinds spell out the signs of "AI voice." Unfortunately, this has also led to flame wars in which writers accuse each other of using AI, as well as the proliferation of both AI copy detection and copy "humanizing" models.
❝
Some that I notice pretty often: '...embark on your x journey...' 'X is not only a y, it is a z.' 'X are more than a y, they are a z' 'They're an x embrace from y'
One More Thing: We're Not Replacing Human Creativity
Even in the copywriting space, which is seen as especially vulnerable to AI disruption, most writers do not see AI as competition. Instead, they view it as a tool and often observe how editing AI copy drafts can take almost as much time as writing from scratch.
❝
I believe copywriters won't be out of a job just yet. Here's why. GPT or any Generative AI text tends to follow the same pattern. You have to do at least 60% yourself to make it sound human.
Some posts also suggest that organizations are beginning to see the limitations of AI as a copywriter.
❝
I used to work as a copywriter for this company. The CEO decided to replace me with beginner copywriters and AI-generated content to save costs. He was convinced that AI tools like ChatGPT could handle everything, from blog posts to social media, without human input...
Two weeks later, HR reached out to me. Apparently, the CEO realized his mistake and wanted me back...
And a few writers are finding that AI has improved their market position by increasing their productivity:
❝
I've been doubling down on copywriting ever since AI pounced onto the scene. It can write, but as I use it every day as a tool, I've long since seen the things that my clients are just now discovering, which is that: AI makes good copywriters work faster, so they can get more work, leaving less work for the average copywriter.
All this suggests professionals want AI to help, but not replace. They'll feed in the topic. They'll use the outline. They'll even keep a paragraph or two. But the creative magic still happens in their own human voice.
Conclusion: Stepping Carefully into the Future
After scanning more than 2,000 Reddit posts and having them analyzed and classified by multiple machine-learning models, I'm mostly relieved. Overall, marketers, creatives, and even more technical folk don't seem to be panicking or rushing to replace people with AI. Instead, there's an appreciation for the promise of AI technologies combined with a large helping of rational concern.
The bottom line? What people are feeling about AI isn't fear or blind optimism. It's cautious curiosity. And the relationship most professionals have with AI tools right now is closer to "helpful research assistant with occasional errors" than either "existential threat" or "game-changing revolution."
While this isn't the most exciting thing I could discover, and (sigh) it probably won't drive a ton of engagement, it is perhaps the most reassuring. Overall, we're a lot more sensible than the headlines would have you believe. And that's good news, both for the future of humanity and the continued development of AI.
Learn more about custom datasets
This article explains in more detail how to create your own custom datasets.
My GitHub page includes code used to perform the analysis featured in this article.
ABOUT THE AUTHOR
Karen Spinner is a B2B content strategist and founder of Good Content. She helps tech companies create research-backed content through a human-led, AI-enhanced creative process. When existing data doesn't tell the full story, she uses machine learning combined with custom datasets to uncover insights that spark meaningful conversations.
Outside of work, Karen is curious about how AI is changing professional and personal experiences in ways that go beyond typical business case studies.