With over 100 million monthly active users, ChatGPT has become the fastest-growing consumer application in history. That explosive growth, along with the attention it has received in the media, gives generative AI a story not unlike social media’s: the public and lawmakers are concerned that they may lose control over its growth. In particular, the rise of generative AI has sparked conversations about security.
Lawmakers in Europe have already enacted laws to manage AI technology; Italy even temporarily banned ChatGPT. While AI can automate tasks, generate new ideas, and create exciting and entertaining content, governments, the public, and companies need to take precautions to ensure security. This article highlights specific security risks and offers ideas to organizations that want to reap the benefits of AI while maintaining the security of their organization, employees, and users.
The U.S. and AI Regulation
On April 11, the Biden administration, acting through the National Telecommunications and Information Administration, announced that it is seeking public input on potential accountability measures for AI systems as security and educational concerns grow. Meanwhile, the Center for Artificial Intelligence and Digital Policy, a tech ethics group, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, calling the model “biased, deceptive, and a risk to privacy and public safety.”
Security Risks With AI
Individuals and organizations that benefit from AI’s ability to increase productivity and efficiency may wonder which security risks pose a genuine threat. The following sections describe several security risks generative AI may present to organizations and how leaders can protect the information of their company, employees, and users.
More Advanced Phishing Attempts
Phishing attempts have long been easy to spot by their poor spelling and grammar, but platforms like ChatGPT can produce fluent, detailed messages that make scams sound legitimate, and attackers can pump them out at much higher volume.
Even though ChatGPT is programmed to refuse requests for malicious content, hackers can still trick the system with carefully worded prompts. Forbes writes, “Aside from text, ChatGPT can generate very usable code for convincing web landing pages, invoices for business email compromise (BEC) attempts and anything else hackers need it to generate.”
Forbes also writes, “AI is the problem, and AI is the solution.” For example, defensive AI tools can be trained to recognize legitimate email content and its context, automatically judging whether a message’s language, content, and style resemble those of past messages from legitimate senders. Such tools can also account for the time of day or month these emails typically arrive, the headers, bank account numbers, and customer IDs they include, and even the paths the emails take through the Internet. A simple version of this idea is sketched below.
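To make the idea concrete, here is a minimal sketch of the kind of text classifier described above, built with scikit-learn. The inline emails, labels, and feature choices are illustrative assumptions, not a production phishing filter; a real system would also fold in metadata such as send times, headers, and routing paths.

```python
# A minimal sketch of an AI phishing detector: learn the language of
# legitimate mail, then score how far a new message deviates from it.
# The tiny inline dataset is a stand-in for real labeled email archives.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (email body, label) where 1 = phishing.
emails = [
    ("Your April invoice is attached, as discussed on our call", 0),
    ("Quarterly report draft is ready for your review by Friday", 0),
    ("URGENT: verify your account now or it will be suspended", 1),
    ("Wire transfer needed immediately, reply with bank details", 1),
]
texts, labels = zip(*emails)

# TF-IDF captures typical vocabulary and style; logistic regression
# learns which word patterns separate legitimate mail from scams.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

suspect = "URGENT: confirm your bank account details to avoid suspension"
print(model.predict_proba([suspect])[0][1])  # estimated phishing probability
```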
Data Security Risks
To make its content as informed and realistic as possible, generative AI is trained on massive amounts of data, much of it collected without permission. This, of course, violates privacy, especially when users’ data is sensitive.
Nonconsensual data collection also affects creators, like visual artists whose art is being used to generate countless “original” images for users’ entertainment. “It’s the opposite of art,” says children’s book author and illustrator Rob Biddulph in the Guardian. “True art is about the creative process much more than it’s about the final piece. And simply pressing a button to generate an image is not a creative process.”
Organizations that want to protect their employees’ and consumers’ data should practice good data hygiene. This includes training AI models only on the data types that are actually necessary and retaining that data only as long as it takes to accomplish the specific goal at hand. A minimal sketch of both practices follows.
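As one illustration of what these two practices can look like in code, the sketch below minimizes and expires records before they reach a training pipeline. The column names and the 90-day window are hypothetical assumptions, not a recommended schema or policy.

```python
# A sketch of data hygiene for AI training data: keep only the columns
# the model needs, and drop records past their retention window.
from datetime import timedelta
import pandas as pd

RETENTION = timedelta(days=90)  # assumed retention policy
NEEDED_COLUMNS = ["ticket_text", "resolution", "collected_at"]  # hypothetical schema

def prepare_training_data(raw: pd.DataFrame) -> pd.DataFrame:
    """Minimize and expire records before they enter a training pipeline."""
    # Data minimization: names, emails, and account numbers in other
    # columns never reach the training set.
    df = raw[NEEDED_COLUMNS].copy()
    # Retention: keep only records young enough for the stated purpose.
    cutoff = pd.Timestamp.now(tz="UTC") - RETENTION
    return df[df["collected_at"] >= cutoff]

raw = pd.DataFrame({
    "customer_name": ["A. Jones"],  # dropped: not needed to train the model
    "ticket_text": ["Cannot log in after password reset"],
    "resolution": ["Reset the MFA token"],
    "collected_at": [pd.Timestamp.now(tz="UTC")],
})
print(prepare_training_data(raw))
```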
Misinformation and Bias
The massive amounts of data used to inform generative AI may be biased and contain misinformation, which means the output of generative AI may perpetuate or even worsen information biases in the media.
NewsGuard, a company that tracks online misinformation, recently conducted an experiment in which it fed ChatGPT conspiracy theories and false narratives. “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” said Gordon Crovitz, a co-chief executive of NewsGuard. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having AI agents contributing to disinformation.”
Organizations that want to prevent biased AI output should reduce algorithmic bias by ensuring their data sets are broad and inclusive. It is also essential to be aware that AI biases disproportionately affect women, people of color, and other marginalized groups such as the elderly and people with disabilities. One simple pre-training representation check is sketched below.
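As a hedged illustration of what “ensuring data sets are broad” can mean in practice, the sketch below compares each group’s share of a training set against a reference population before training begins. The group names, reference shares, and tolerance are illustrative assumptions.

```python
# A sketch of a pre-training representation check: flag demographic
# groups that are badly under- or over-represented in the training data.
import pandas as pd

# Hypothetical training records carrying a demographic attribute.
data = pd.DataFrame({
    "group": ["men"] * 70 + ["women"] * 25 + ["nonbinary"] * 5,
})

# Reference shares assumed for the population the model will serve.
expected = {"men": 0.49, "women": 0.49, "nonbinary": 0.02}

observed = data["group"].value_counts(normalize=True)
for group, target in expected.items():
    share = observed.get(group, 0.0)
    if abs(share - target) > 0.10:  # assumed tolerance before rebalancing
        print(f"{group}: {share:.0%} of data vs. {target:.0%} expected; consider rebalancing")
```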
Conclusion
The future of generative AI can still be bright, but more thought is required to reap its benefits while minimizing security risks. With the U.S. government taking the initiative in its search for accountability mechanisms for generative AI, we are moving closer to balancing technological innovation with security, safety, and accuracy.