ChatGPT AI and PR–Useful Tool or Dangerous Threat?

The new year started with a flurry of discussion about ChatGPT, both in the public at large and in the community of public relations professionals and professors.

ChatGPT is a generative language model drawing on a fixed body of training data, not an internet search tool. GPT stands for generative pre-trained transformer. The tool was created by OpenAI and is free to use after registration. In the demonstrations and comments I’ve read and seen, it is being met with both excitement and horror.

That mixed reaction is because ChatGPT can write like a human.

I’ll share a few reactions I’ve read this week and my own thoughts here.

An article in PR Week discussed the practical implications of ChatGPT with healthy skepticism. The article’s headline summed up the overall reaction: “AI can do PR, but should it?” One point the author makes is that CEOs and CCOs should not “leave sensitive information to a robot.” Indeed, certain information and tasks would never be handed to a junior staffer or intern either.

Meanwhile, a book already on the market offers measured guidance on adding ChatGPT to the arsenal of ‘COM Tech’ tools. Authors Richard Bowman and David Boyle take a positive view in “Prompt: A practical guide to AI-powered brand growth with ChatGPT.”

Among educators, there is natural concern about cheating, plagiarism and lazy scholarship. One K-12 school has already banned ChatGPT. At the college level, Ryan Watkins, a professor of Education Technology Leadership at George Washington University, wrote a helpful blog post about updating syllabi for ChatGPT. As a PR professor who teaches writing courses, among others, I like Watkins’ idea of an assignment in which students take actual ChatGPT output and edit and improve it. If you can’t beat it, join it. This way students can’t cheat; they have to acknowledge the limitations of ChatGPT and use their own human thinking to make the writing less perfunctory and informational and more creative and persuasive.

I also attended a webinar this week sponsored by Packback, a company whose tool teaches students how to write by giving instant, line-by-line feedback. This spring the company is launching a tool to help educators detect ChatGPT and other AI-generated content. I imagine it will work much like SafeAssign, which professors use in Blackboard and other online course systems to detect plagiarism. An educator on the webinar panel pointed out some limitations of ChatGPT:

  • It does not cite sources
  • It cannot be very specific
  • It cannot speculate or predict the future
  • It refused to write a five-page paper, cutting off after 665 words

Even though scholarly work has a long publication lead time, there is already an academic journal article on the subject. John Pavlik authored “Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education” in the current issue of Journalism and Mass Communication Educator. Pavlik raises some good questions, such as whether computers can really be creative, along with the ethical issue of accountability. What I liked most is the verbatim Q&A output of ChatGPT’s responses to a series of questions. The best part is when ChatGPT itself cautions that educators should teach students about its legal and ethical limitations.

In addition to all of the above, here are my initial thoughts about ChatGPT for professionals and professors:

  • As with Google, you don’t know what you don’t know
  • Because ChatGPT draws on a pre-selected body of information, that selection was made by human agents and can therefore be subjective, incomplete or biased
  • ChatGPT will be like other AI, such as chatbots: efficient for some tasks but frustrating for the end user
  • I wonder whether the writing is purely informational or can also be persuasive
  • Can its tone be adjusted to ensure a reader-focused perspective, reflect dialogue, and demonstrate responsiveness and emotion?
  • Can ChatGPT’s writing segment audiences and appeal to specific publics, or is all of its writing generic?
  • Is it ethical for organizations to use AI-generated content that poses as a person?
  • Is there legal liability for AI-generated ‘misinformation’?
  • How will publics receive such content? Will reputation be harmed by perceptions of inauthenticity?
  • Has anyone considered that AI content may be met with AI responses, so that corporate and public machines talk to each other with no human interaction at all?

With regard to new technology, I’ve always stood between the luddites and the lemmings, neither an early adopter nor a holdout. ChatGPT will likely be added to a toolbox of communication technologies. But a human agent will still be required to know when and how to use each tool.
