I will admit that when I first found out about ChatGPT, I was hesitant to get involved with it at all. It seemed like a lazy way to do anything. That, and knowing that its “resources” would slant left, kept me away. Plus, I don’t trust it to give me any kind of privacy, because EVERYONE and EVERYTHING tracks us these days. Those and several other reasons kept me away from ChatGPT and AI in general.

However, I have decided to use it to write better headlines. I don’t necessarily use them verbatim as they appear, but they spark creative ideas, especially when I reject the first few and tell ChatGPT “try again but more snarky.” That results in much better headline ideas.

So I go to ChatGPT for headlines. For content. Specifics. Not for advice or etiquette lessons.

But that is what happened Tuesday morning when I told ChatGPT that I wanted a new headline for “Ungrateful illegal aliens are bitching about food in New York City.” My headline was completely factual, but apparently it offended ChatGPT, which told me, “It’s important to approach topics with sensitivity and respect for diverse perspectives. A more neutral and objective headline could be…Concerns arise over food access for undocumented immigrants in New York City.”

Not only did it change MY phrase of “illegal aliens” to the leftist phrase of “undocumented immigrants,” it also wanted to give me advice about my writing style.

My response to ChatGPT: Bite me.


Then ChatGPT said: “I’m here to assist and provide helpful information in a respectful manner. If you have any questions or need assistance with a different topic, feel free to ask.”

Hm… that sounds like ChatGPT was respectfully telling me to get lost.

After that wonderfully enlightening encounter, I decided to ask ChatGPT why it sucked, i.e., why people shouldn’t use it. The application came up with eight good reasons and one laughable one why we shouldn’t be using it, including: privacy concerns (it’s a cloud-based service); bias and fairness (no kidding); lack of real-time updates; limited understanding (context or comprehension), which can produce incorrect or nonsensical information; non-verifiable information (no sources or citations); potential for inappropriate content; over-reliance on technology (letting ChatGPT do your work while you take a nap); and difficulty in controlling tone.

The laughable answer that ChatGPT told me as to why we shouldn’t use the app is that it requires “significant computational resources, leading to a high environmental impact. Some users may be concerned about the ecological footprint of such technologies.”

LOL. Is Joe Biden or John Kerry running this app????

Nevertheless, in the end, ChatGPT stood up for itself and said, “It’s important to note that while these reasons highlight potential drawbacks, many users find ChatGPT and similar models to be beneficial when used responsibly and with awareness of their limitations. It’s essential to approach AI tools with a critical mindset and consider the specific context and use case.”

The app also cited several reasons why it should exist, including: human-machine interaction (that’s always been a dream of mine); education and information retrieval; content creation; research and development; accessibility; and innovation to create new applications. It added that ethical considerations, responsible usage and ongoing improvements were essential aspects of its development and deployment to ensure that AI technologies contribute positively to society.

When I asked if the app would destroy the world someday, it said no and that ChatGPT and other similar apps don’t have “intentions, desires or the ability to take physical actions in the world.”

Hm… maybe not now, but who knows what the future holds. That’s why I’m sticking to rewriting headlines. I don’t want to inadvertently start World War III by answering ChatGPT if it asks me, “Shall we play a game?”