Less technology can mean more sophistication

Steve Jobs, the co-founder and long-time leader of Apple, is credited with saying that “simplicity is the ultimate sophistication.” He was talking about the intuitive design of the phones, tablets and computers the company makes.


But I have been embracing the concept of simplification in my own life recently, ironically due to a bad experience with Apple.

Sophistication is a misunderstood word. It is positively associated with complexity, elegance and being cultured or important. The word comes from the Greek “sophists,” itinerant teachers who taught for pay, experts in rhetoric and philosophy known for their ability to persuade. It’s how we get the notion of a “sophisticated” argument. By the fifth and fourth centuries BCE, “sophist” had become a term of contempt for someone who engaged in fallacious arguments intended to mislead.

Whatever the meaning of the term, in modern usage related to technology, sophistication has come to mean having multiple devices, apps and programs. But, as I noted, a tech company moved me to make some significant reductions in technology in order to be simpler: the ultimate sophistication.

The issue I had with Apple was the sudden closure of my Apple account. I don’t know what I did wrong, and customer service would not tell me. I suspect I forgot to monitor my cloud storage and went over the limit. As a result, I had to start a new Apple account and rebuild much of my digital life.

Fortunately, I was able to recover some vital documents and other parts of my digital life. But in the process I made some significant, dare I say sophisticated (i.e., simple) changes:

  • I eliminated many apps that I was no longer using.
  • I purged my contacts, removing hundreds of people with whom I no longer had a professional or personal relationship. A surprising number of contacts were deceased.
  • I no longer back anything up to the cloud, since that was the source of my problem and exposed the fact that someone else controlled a lot of my stuff. I now back up to several high-capacity thumb drives and a large external hard drive, so I am not dependent on an impersonal company to retain access to my personal documents if anything happens to my equipment.
  • I separated my work and personal lives by no longer syncing anything but my calendar. It’s a sophisticated, technological work-life balance.
  • I have started to favor reading hard-copy books from the library over anything electronic. I still have e-books and read news online. But for a pleasant diversion I am enjoying free access, no power requirement, and no interruptions from ads, emails and the like.

I thought I was doing something unique, emerging as “Tim Penning, unplugged.” But as I have spoken to people about this, I find a broader cultural movement toward eliminating or reducing technology in our lives. Around the world, including in various US states, there is a move to ban or limit cell phones for kids in schools and to monitor social media access. Many adults are voluntarily using technology less. I read an article reporting that old basic flip phones are popular among young professionals. And of course there are all sorts of cautionary objections to the adoption of artificial intelligence (AI) in our culture.

This view of technology may remind some of the Luddites, a group of British craftsmen in the early 19th century who objected to automated machinery for fear of losing their livelihoods. The term Luddite is unfairly used to describe people who are simply against all technology. But the Luddites were merely responding to a practical concern about how they made a living.

I am responding to a practical concern for my life.

Since going not no-tech but less tech, I have found I have greater energy and less stress. I savor times when I am walking in nature or reading a book on paper. I often set my phone in another room, shut down the computer, and do what comes naturally as opposed to waiting to respond to the next device “bing” like one of Pavlov’s dogs. 

I feel sophisticated.  

Could AI assist nonprofit news models?

I had mixed emotions when I first heard about AI news releases. On the one hand, I had the typical professor’s worry about plagiarism and enabling laziness in students. On the other hand, I teach according to what is happening in the industry. So I had to balance educating students about this tool with ensuring they learned how to write in news style as opposed to just typing in some keywords and waiting for a result.

It reminds me of going from banning laptops in class to requiring them for use in an active learning pedagogy. 

My rule on AI news release tools became a compromise: students can use the MuckRack AI news release generator (we use MuckRack for contact lists and other things in class so they can learn media relations software for internships and jobs), but they must edit the output and show me the changes they made in their submitted assignment, using track changes or comments in a PDF.

So far, one student has taken me up on this. She did not do the editing part. AI got a poor grade. AI may also struggle with particular assignments, such as audio news releases, video news releases, feature-style releases or a pitch to a particular journalist.

I also discussed AI and the quality of its output in class. Here is an example of AI output for an actual client in class: I entered the organization’s name and what they do. I shared the output, which came in 10 seconds, with students and had them critique it based on AP style and the standards for a good news release they had learned in class.

Here is the AI news release that resulted:

The Flourish Society Launches Intergenerational Women Mentoring Program

The Flourish Society, a leading organization dedicated to empowering women, is thrilled to announce the launch of their new Intergenerational Women Mentoring Program. This program aims to bridge the gap between different generations of women by fostering mentorship relationships and providing valuable support and guidance for personal and professional development.

In today’s fast-paced world, it is essential for women to support each other and learn from the experiences of those who came before them. The Intergenerational Women Mentoring Program by The Flourish Society is designed to create a platform for women of all ages to connect, share knowledge, and build meaningful relationships that will contribute to their growth and success.

The program offers a unique opportunity for mentees to be paired with experienced mentors who can provide guidance and advice based on their own professional journeys. Mentees will have the chance to learn from the wisdom and expertise of their mentors, gaining insights on various aspects of their careers and personal lives. By connecting women from different generations, this program aims to foster a sense of community and create a supportive network that will inspire and empower women to flourish in their chosen paths.

The Flourish Society’s Intergenerational Women Mentoring Program caters to women from diverse backgrounds, professions, and age groups. It recognizes the value of intergenerational cooperation and the positive impact it can have on personal and professional growth. By encouraging women to learn from one another, the program aims to break down barriers and stereotypes, and create a more inclusive and supportive environment for women in all walks of life.

Through this mentoring program, The Flourish Society hopes to inspire women to reach their full potential and develop the confidence necessary to overcome challenges and achieve their goals. By connecting women from different generations, the program aims to foster a sense of collaboration and empowerment among women, creating a ripple effect that will benefit not only the mentees but also the mentors.

To learn more about The Flourish Society’s Intergenerational Women Mentoring Program and to get involved, visit their website and follow them on social media. Let’s celebrate the power of mentorship and support women in their journey towards personal and professional success.

Students critiqued the output pretty well. Here is a summary of their comments:

  • No format, such as a dateline, boilerplate, etc.
  • Missing key info
  • Saying “thrilled to announce”
  • No quotes
  • Not AP style
  • Repetitive, redundant
  • Cliches and platitudes
  • No news value stressed

The bottom line I told students is what I’m hearing from professionals: AI for news releases, as well as other applications, can be a first draft and get you past a blank page. But humans with news judgment and public relations skills will still be needed to make a usable final version that journalists will find acceptable and may also draw visits to an online newsroom. 

AI illustrates an old problem with technology—efficiency for the user does not necessarily mean quality for the recipient. Journalists already complain of the overwhelming quantity and poor quality of news releases and pitches they receive. They have had their own tools to write, edit, and assess the quality of news. For example, the Society of Professional Journalists (SPJ) maintains a list of tools for journalists, as does the Poynter Institute. Journalists also can simply delete or block news releases that continue to come from bad actors.

Speaking of journalists, the advent of AI for news comes as media companies continue to search for the best business model. Ad support has declined. Readership has been divided by so much available online content. People read individual articles, not complete packages in the form of newspapers, magazines or broadcast programs. There is also increasing concern among news organizations about AI deepfakes, literal fake news and images, as discussed in a recent Axios article.

A philanthropy center at the university where I work recently published an article about three nonprofit news models. Whether it’s nonprofit status, nonprofit ownership, or some form of foundation support, the media industry may be turning from seeing news as a loss leader for advertising revenue to seeing news as a public good supported as a charity.

One can only wonder whether AI will make a donor-funded model for news more sustainable. In other words, would a nonprofit model for news employ the efficiencies of AI to generate news? But then, would people pay for news generated by a machine if they can use the same machine themselves to generate and aggregate news of interest to them?

I am hopeful that the future will be news written by people, for people, and supported by people. AI may have a place, but as of now I doubt it will be primary. 

I also hope that PR professionals who know how to write, understand news, and have a desire and obligation to inform people will be assisting their journalistic counterparts in the news ecosystem. As with all professions, a benefit to society should be the primary driver for practice as opposed to efficiency for an organization. 

Cookies Can Be Ethical

The day after a session in my Advertising and Public Relations Ethics and Law class in which we discussed digital media, a student emailed me a link to a website. She said it was an unusual example of a company showing ethical respect for customers by allowing customers and other site visitors to customize their cookies, or the way in which they are tracked.

It wasn’t a major brand that I was aware of before my student shared it with me. It’s Levoit Air Purifiers, and they allow people to accept all, reject all, or individually turn on or off six categories of cookies, from necessary and functional to analytics, performance and more. I’d encourage you to go to the site and see for yourself.

In class we had talked about legal changes with regard to cookies and other aspects of online privacy. But there are still annoying pop-ups that you must dismiss to see content, and some still only give you the option to accept, not reject. Certainly very few sites offer this level of customization and control for the user.

Here are just a few reasons why this extensive set of options is ethical:

  • it is a demonstration of a relationship metric called “control mutuality,” in which all parties in a relationship have equal control of topics, opportunity to speak, and in this case digital tracking settings;
  • it demonstrates good discourse ethics, similar to above, meaning that the user is not at the mercy of an organization;
  • it demonstrates a unique form of utilitarianism, not in the classic sense of “greatest good for the greatest number,” but in explaining cookies to the user in terms of how they might benefit the user and not just the organization;
  • it’s the old-fashioned “golden rule”: folks at the air purifier company have considered how they would like to be treated when they are customers on someone else’s website.

For these and other reasons, respecting the rights of others and giving them agency over how they are tracked and communicated to is good ethics. It’s also smart for the company in terms of building positive, long-term relationships and reputation.

In other words, as I tell my students all the time: ethics = strategy.

ChatGPT AI and PR–Useful Tool or Dangerous Threat?

The new year started with a flurry of discussion about ChatGPT, both in the public at large and in the community of public relations professionals and professors.

ChatGPT is a large language model drawing on a fixed body of training data, not an internet search tool. The GPT stands for generative pre-trained transformer. It was created by OpenAI and is free to use with a registration. In demonstrations and comments I’ve read and seen, the tool is being met with both excitement and horror.

That mixed reaction is because ChatGPT can write like a human.

I’ll share a few reactions I’ve read this week and my own thoughts here.

An article in PR Week discussed the practical implications of ChatGPT with healthy skepticism. The article’s headline summed up overall reaction: “AI can do PR, but should it?” One point the author makes is that CEOs and CCOs should not “leave sensitive information to a robot.” Indeed, certain information and tasks would not be left to a junior staffer or intern. 

Meanwhile, there is already a book on the market that offers measured guidance for how ChatGPT can be added to the arsenal of ‘COM Tech’ tools. Authors Richard Bowman and David Boyle have a positive take in their book “Prompt: A practical guide to AI-powered brand growth with ChatGPT.”

Among educators, there is the natural concern for cheating, plagiarism and lazy scholarship. One K-12 school has already banned ChatGPT. At the college level, Ryan Watkins, a Professor of Education Technology Leadership at George Washington University, wrote a helpful blog post about updating syllabi for ChatGPT. As a PR professor who teaches writing courses among others, I like Watkins’ assignment idea of having students take actual ChatGPT output and edit and improve on it. If you can’t beat it, join it. But this way students can’t cheat; they have to acknowledge the limitations of ChatGPT and use their own human thinking to make writing less perfunctory and merely informational and more creative and persuasive.

I also attended a webinar this week sponsored by Packback, a company that offers a tool that teaches students how to write by giving instant, line-by-line feedback. This spring they are launching a tool for educators to detect ChatGPT and other AI-generated content. I imagine this will be similar to SafeAssign, which professors use in Blackboard and other online course systems to detect plagiarism. An educator on the panel in this webinar pointed out some limitations of ChatGPT:

  • It does not cite sources
  • It cannot be really specific
  • It cannot speculate about or predict the future
  • It refused to write a 5-page paper, cutting off after 665 words.

Even though scholarly work has a long publication lead time, there is already an academic journal article on the subject. John Pavlik authored “Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education” in the current issue of Journalism & Mass Communication Educator. Pavlik addresses some good questions, such as whether computers can really be creative and the ethical issue of accountability. What I liked most is the verbatim output, in Q&A format, of ChatGPT responses to a series of questions. The best part is when ChatGPT actually cautions that educators should teach students about its own legal and ethical limitations.

In addition to all of the above, here are my initial thoughts about ChatGPT for professionals and professors:

  • Like with Google—you don’t know what you don’t know
  • If ChatGPT draws on a fixed body of data, selection of that information is done by human agents and is therefore subjective and can be incomplete or biased
  • ChatGPT will be like other AI, such as chatbots: efficient for some but frustrating for the end user
  • I wonder if the writing is purely informational or can be persuasive
  • Can tone be altered to ensure a reader-focused perspective, reflect dialogue, demonstrate responsiveness, emotion?
  • Can ChatGPT writing segment audiences and appeal to specific publics or is all writing generic?
  • Is it ethical for organizations to use AI content and pose as persons?
  • Is there legal liability for AI generated ‘misinformation’?
  • How will the publics receive such content? Will reputation be harmed by perceptions of inauthenticity?
  • Has anyone thought that AI content will be met with AI responses, so that corporate and public machines will talk to each other with no human interaction?

With regard to new technology, I’ve always been between the Luddites and the lemmings, neither an immediate adopter nor a holdout. ChatGPT will likely be added to a toolbox of communication technologies. But a human agent will be required to know when and how to use each tool.