
The Artificial Intelligence (AI) conversation is saturated, but just beginning. We're constantly bombarded with new AI-powered tools and applications. It can be hard to keep up with the latest trends, and it's even harder to know which tools are worth using.  

But one thing I've learned is that AI is not a magic bullet. It can't do everything for us, and it raises ethical issues. I've never received anything from GPT that I'd proudly publish verbatim. It's always a matter of human + technology: AI can help you generate ideas, but it's up to you to edit and refine them. 

Add to that the speculation that, to save resources, some AI companies are using multiple smaller GPT models instead of a single large one. Every AI model is trained on different data sets, raising questions about who owns that data. As more data is used, the models will get better. On the other hand, the more data you have access to, the more issues arise with IP and privacy. 

So, we have a lot to learn, and we need to be aware of the ethical challenges. That's what we discuss in this article.  

Human or AI?  

AI-generated output is taking over digital businesses, but how much should AI ethical issues worry you?

The first thing that comes to my mind is 'making content your own.' 

You still have to think about what you're creating and realize there's a human on the other side. So, take pride in the creative process: it's not an entirely "hands-off" approach.    

But every time I've used AI systems, ethical questions have crossed my mind. When I talked to other people, they had similar dilemmas.  

Here are some of the notable ethical issues with artificial intelligence:  

1. Ownership of AI-generated content/code.  

If you use an AI tool to write a blog post or some code, who owns the copyright? Is it you, the tool provider, or someone else?  

A better way to put this: who owns the "intellectual property," of which content and code are subsets?

2. Any data you feed into AI may be helping your competitors.  

If you use AI technology to generate any output, will that data be used to train other AI tools?

For example, you could put in company data that could be beneficial to your competitors, proprietary source code that should be kept as a company secret, or marketing positioning/campaigns that would benefit your competition.

3. Getting sued for using AI-generated output  

There have been cases where people have been sued for using output that was generated by AI. FYI, even the world’s biggest search engine company is getting sued.

4. AI helping/hurting SEO.  

We're all wondering whether human-created content will even have a role in the future. AI can generate content optimized for search engines. But will this lead to a decrease in the value of human-created content? Or will AI- and human-created content complement each other?  

5. Privacy, sharing details about your company with AIs.  

AI tools need data to train and function. What kind of data do they need? And how will that data be used? What are the privacy implications of sharing data with AI tools?  

6. Correctness.  

AI tools and chatbots are not perfect. They will make mistakes. What happens when an AI tool generates incorrect content? Who is responsible for the inaccuracies? How do you stay accountable in the real world?

7. Losing jobs to AI.  

If you can create 10x the content with 10% of the resources, will your job as a marketer be outsourced to AI? What happens to human decision-making? What about human rights?

8. Decreasing the value of content.  

If AI can generate content that is indistinguishable from human-generated content, will this decrease the value of content? Will people be less likely to pay for content if they can get it for free from an AI tool?  

9. AI data can be biased  

AI tools depend on training data to produce output. What you feed in is what you get out, and this is where they can pick up biases and stereotypes from the training data.

It's obvious generative AI has issues  

Generative artificial intelligence is not always accurate. Do not use it as a sole source of truth, because you cannot verify that its output is correct. And what we feed in is what we get out.  

Another ethical concern: if you put sensitive company information into an AI tool, the tool's parent company will have access to it. They're free to use that information as they wish, so assume that anything you put in will be publicly available.  

So, do not share any sensitive or confidential data without appropriate authorization. It's important to ensure that the data used to train a generative AI model is obtained legally, with proper consent, and does not violate any laws or regulations. However, most public generative AI services (for example, ChatGPT) do not disclose which data sets they train on. So, we don't know who really owns the output.  

This also means we cannot verify to what extent the output is biased. AI bias is real: models learn biases and stereotypes from their training data, and they are capable of producing extremely convincing falsehoods. Never trust AI-generated content as a single source of truth, especially for research purposes. And if you do use AI to generate your content, disclose that to the consumer.

AI concerns are not hard to spot. Pure AI copy, for example, is full of clues about why it is not the best solution for you. The list includes:   

  • Long drawn-out paragraphs and copy   
  • Awkwardly formal tones and long sentence structures   
  • Overuse of adjectives and irrelevant personalization   
  • Fabricating facts based on certain opinions    

Ethics of AI  

To start, it’s advisable to use Generative AI with a company-issued license. It's important from a liability and property perspective. Even then, you should never copy-paste entire content or code from AI. Always edit, modify, and add to the AI output to make it unique.   

  • Data protection is key. Do not implicitly trust every output; its accuracy might be questionable or biased, especially on topics that shift quickly.
  • Never fully "copy and paste" any output from a generative AI tool. You can use AI, but always edit and modify the output to turn it into something new. 
  • AI is great for quick research on a topic or for bouncing ideas around. However, never use it to prove your ideas or as a single source of truth. Not everyone feeding it is adding correct information.   
  • Most information is sensitive data. Anything you put in could become available to competitors, customers, and prospects. Even code generated for your product (including sensitive information about your products) could be used by anyone with access to the tool.   
  • Don't put in any company-sensitive or stakeholder data. If you do, financial data and health information about your company could become publicly available. Others can access and use it at will, and stopping them will be out of your legal control.   
  • Lastly, never share any personal data with an AI tool like ChatGPT. This includes both your and your customers’ info including name, email address, phone number, or anything else that would be considered personally identifiable information (PII).   
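One practical precaution the advice above points to is scrubbing PII from a prompt before it ever reaches a third-party AI tool. Here is a minimal, hypothetical sketch in Python: the `redact` helper and the regex patterns are illustrative assumptions, not a production approach (a real deployment would use a dedicated PII-detection library or service rather than hand-rolled regexes).

```python
import re

# Hypothetical patterns for two common PII types; real-world PII
# detection is much broader (names, addresses, account numbers, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with placeholder tokens before the text
    leaves your systems for an external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Follow up with Jane at jane.doe@example.com or 555-123-4567."
print(redact(prompt))
# Prints: Follow up with Jane at [EMAIL] or [PHONE].
```

The point is not the specific regexes but the workflow: the redaction step sits between your draft and the AI tool, so PII never enters a system you don't control.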

Using AI tools: the bad way and the good way

The choice is yours when setting up ethical principles and ethical standards at your organization. Still, asking AI to write a piece from scratch isn't the solution.

A better way would be to dig up all the info you can find about a topic. Check out the company’s previous videos, docs, blogs, whitepapers, keynote sessions, etc., and combine them to create a solid wireframe. Then, write some of the sections yourself and ask AI to expand on the rest. Finally, edit the entire copy into one tone of voice and add the right examples + keywords.

The examples below will help you understand the difference between good and bad use of AI. 

Here are some examples of bad use of AI tools.  

  • Using a personal license to create content for a marketing email, then copying, pasting, and sending it without any review.  
  • Sharing company financial data with AI algorithms.  
  • Generating code with AI and using it within your products without making any modifications.    

Here are some examples of good use of AI tools.  

  • Using a company-licensed version to generate email language, review that language, and heavily modify it to make it yours before sending.  
  • Taking suggestions to make your algorithms better without sharing that code with an AI tool.  
  • Bouncing ideas for a new campaign off the tool and seeing how many different ways an idea could work for your company.   

Everyone knows AI wrote your content  

AI's ethical concerns, data privacy, and cybersecurity issues aside, the biggest challenge with unchecked use of AI and copy-pasted output is that, at least for now, it's fairly obvious when something was written by AI. It's obvious to both computer and human readers. 

I tried different content detection tools, and they were easily able to identify any output I used directly from an AI tool.  

Why does it matter?

For example, maybe you've started using AI to automate all your cold email sequences. Outlook (owned by Microsoft) and Gmail (owned by Google) are two of the biggest email service providers, and to protect their customers from spam, they have started marking AI-generated emails as spam and will keep updating their spam-detection capabilities. Your AI email may look great, but it's of no use if it doesn't reach your prospects' inboxes.  

And this is just email. With AI tools being weaponized, phishing attempts are increasing, and tech giants are looking to safeguard their employees from these attacks by blocking purely AI-generated outputs.

Until the technology gets better, AI copy is easy to detect because it follows patterns, and both humans and machines are great at identifying patterns. People will stop paying attention to your content if they believe you don't put in any effort and simply generate it all with AI.  

Scaling quantity at the expense of quality will likely lead to more losses for you and your company. AI and machine learning are not villains. As a tech company, we use AI here at Optimizely. We even have AI capabilities in our products and our teams have been doing more innovation in that regard.

But we never use AI to generate all our content. We believe human thinking and collaboration are among the most integral elements of content creation and decision-making. So, use AI for inspiration, frameworks, headline ideas, and summaries, but not to replace the actual work.
