A trend of my career is that I typically gravitate toward examining the downsides of a particular technology when everyone else is looking at the upsides. Cloud computing is the latest example: everyone focuses on the advantages while few consider the disadvantages. Thus, I am a cloud computing subject matter expert and, at the same time, a cloud computing skeptic – one who is well aware of cloud’s challenges.
It’s no secret that success lies in weighing any new technology’s advantages and disadvantages. Unfortunately, when the tech press puts the hype wind in a new technology’s sails, people in that emerging market become less receptive to hearing what’s wrong and want to hear only what’s right.
That’s how we end up using a technology for things it was never intended to do. In many cases, a bit of understanding at the beginning would have prevented failure at the end. Let’s encourage some healthy skepticism by looking at the latest technology that’s setting the world on fire: generative AI.
What’s wrong with generative AI systems and what’s right? The old, generic advice still applies: Consider everything from a balanced viewpoint.
On a related topic: What is Generative AI?
Also see: Generative AI Startups
Feeding Generative AI with Bad Data
Generative AI models require large volumes of data to learn and generate new content. Indeed, generative AI systems such as ChatGPT were trained on vast swaths of text from the Internet, which enables them to provide some very impressive answers.
It’s not that we couldn’t find the answer elsewhere; it’s that we can have the system respond in very specific ways. For example, “Write me a song about quantum computing used for blood testing systems.”
However, if the data used to train the model is biased or incomplete, it can negatively impact the generated output with inaccurate or even offensive results. This is the garbage in / garbage out argument, which is a core limitation of AI in general and generative AI in particular.
The more questions you ask a generative AI system – questions with known answers – the more you will see it get things wrong. And the more specific the questions, the more specific the flaws in the answers. This is not the system’s fault but the fault of incorrect data used to train it; its answers on related subject matter will therefore also be incorrect.
For instance, I asked ChatGPT to list all the books I’ve written. It listed one book twice and two books I did not write. A quick Google search would have produced more accurate results in that case. Therefore, I don’t use generative AI systems as a single source for research.
And: ChatGPT: Understanding the ChatGPT ChatBot
Lack of Control in Generative AI
Generative AI models are designed to be autonomous, meaning they operate independently and make their own decisions. As a result, it can be challenging to control the output or ensure that the generated content aligns with the intended purpose or values.
If you’ve used a generative AI system for any length of time, you’ve seen responses that are not what you wanted, or not in the format or context you wanted. This drives generative AI users crazy: they must phrase the core question correctly and define how the system should respond. Otherwise, you end up with answers that don’t help.
For more information, also see: Top AI Software
Generative AI and Intellectual Property
With the rise of generative AI, there are concerns about intellectual property and ownership of the generated content. Determining who owns the generated content can be difficult, especially if the model was trained on publicly available data.
While this is not plagiarism per se – the model isn’t copying another author’s text verbatim – it does mean you may be leveraging somebody else’s work without attribution, which raises both ethical and legal concerns.
A more significant worry is that information assembled by a generative AI system may trace back to other copyrighted works. That creates a risk of being called out legally or ethically – or both – which removes much of the fun of having a generative AI system do all the work for you.
For more information, also see: Top AI Startups
Ethical Concerns with Generative AI
Generative AI raises ethical concerns, particularly when it involves deepfakes and synthetic media where the technology can learn from or be used to create fake or misleading content. We can only surmise the potential impact on privacy, security, and trust.
While we often question the legality and ethics of any emerging technology, the exceptional power of generative AI leads many to ask, “Should we?” instead of “Can we?”
At issue is the negative effect generative AI could have on humans, which carries its own set of concerns. Another fear is that generative AI will be weaponized to commit crimes such as fraud and security breaches – threats that enterprises need to worry about and defend against.
For a simplistic example, new threats from generative AI might require an enterprise to spend $8 on enhanced security and detection mechanisms while gaining only $5 in additional value from using generative AI. For many businesses, the mere appearance of this specific technology in their enterprise is a net loss.
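The cost-benefit arithmetic above can be sketched in a few lines of code – a minimal illustration using the hypothetical $8 cost and $5 value figures from the example, not a real valuation model:

```python
def net_value(added_value: float, added_cost: float) -> float:
    """Return the net gain (negative means a net loss) of adopting a technology."""
    return added_value - added_cost

# Hypothetical figures from the example above: $5 gained, $8 in new security spend.
result = net_value(added_value=5.0, added_cost=8.0)
print(f"Net value: ${result:.2f}")  # prints "Net value: $-3.00" – a net loss
```

In practice, both inputs are hard to estimate; the point is simply that security and compliance costs belong on the same ledger as the technology’s benefits.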
On a related topic: The AI Market: An Overview
Generative AI: The Big Question
Should generative AI become a core enhancement to your IT systems? Or does the use of this evolving technology bring more bad than good? The answer to both questions is “maybe.”
We need to consider the good with the bad. Knowing the facts about generative AI and the requirements of our individual use cases will guide us onto the right path. New technology often progresses faster than we can keep up with its changes, and generative AI may surpass our ability to anticipate its consequences before we encounter them. Today more than ever, the future is now.
On a related topic: The Future of Artificial Intelligence