- ChatGPT exploded in popularity. Many companies now use the AI language model developed by OpenAI to interact with their customers.
- Data breaches sound a warning. CX and marketing professionals need to take proactive steps to protect their customer information.
- Tips for keeping customer data safe. A variety of data experts share their take on avoiding ChatGPT security issues.
In an era where data breaches are increasingly common, each incident is a stark reminder that customer experience and marketing teams must remain vigilant and take proactive steps to protect their customer information, especially in the age of generative AI, where entering customer data into a chatbot is becoming the norm.
Lina G. Rugova, a business advisor at Atlas Start, LLC and president/founder of Emerge and Rise, a business incubator, said that while ChatGPT is a powerful tool that has revolutionized how freelancers, businesses, brands and nonprofits interact with customers, like any technology that handles sensitive data it carries a risk of data breaches.
“CX and marketing leaders should be concerned about possible data breaches and know steps to avoid them because there are always higher consequences: loss of customer trust, legal and regulatory implications, operational disruptions, damage to brand reputation and competitive disadvantage as customers may choose to do business with companies and brands that they perceive as having stronger data security protocols and practices. We have seen it time and time again,” she said.
CMSWire asked Rugova and a few other data experts to weigh in with their best tips for CX and marketing professionals to avoid cybersecurity threats within ChatGPT, strengthen ChatGPT security and minimize data breaches.
Breach in Trust: ChatGPT’s Security Reckoning
ChatGPT, the AI language model and chatbot developed by OpenAI, suffered its first data breach in March, exposing personal information belonging to some of its users: the bug allowed some users to see titles from another active user’s chat history. Well, that could be interesting ... and invasive. The breach was discovered and reported to OpenAI by a security researcher, who found a dataset containing user IDs, usernames and email addresses associated with ChatGPT accounts on a popular hacking forum.
OpenAI acknowledged the data breach and stated that it was caused by a vulnerability in one of its third-party software dependencies, which has since been patched.
On the heels of that, news outlets reported another alleged breach, this one at Samsung. The company had integrated OpenAI’s ChatGPT earlier in the month, and The Economist Korea reportedly uncovered three instances of confidential corporate information being leaked, some of it pertaining to the company’s semiconductor business.
Authorities took notice. On April 4, the Italian Data Protection Authority issued an “immediate temporary limitation on the processing of Italian users’ data by OpenAI,” effectively banning the app country-wide, citing “a data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service.”
Following Italy’s ban, Hong Kong’s privacy watchdog pledged to oversee and assess the potential risks of personal data leaks that may arise from the use of generative artificial intelligence applications like ChatGPT.
Related Article: ChatGPT Suffers First Data Breach, Exposes Personal Information
The Generative AI Balance: Usefulness vs. Security
Arturo 'Buanzo' Busleiman, an information security professional and the founder of Buanzo Consulting, said it’s important for companies to educate themselves and then focus on educating their employees about data privacy and safeguarding sensitive information.
“I believe the rapid advancements in AI technology have opened up a world of possibilities for new businesses. These cutting-edge solutions enable startups and established companies alike to streamline processes, enhance customer service and gain valuable insights into consumer behavior,” Busleiman said. “By leveraging the power of AI, businesses can drive innovation and maintain a competitive edge in today's ever-evolving market. However, it's essential to balance the adoption of AI technologies with proper data security measures, ensuring a sustainable and responsible approach to growth.”
According to Busleiman, achieving the right balance between accurately detecting sensitive data and maintaining the system's overall usefulness requires ongoing research, development and collaboration between AI providers and users.
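One way teams approach this balance in practice is to redact obvious sensitive patterns before a prompt ever leaves the company. The sketch below is a minimal illustration of that idea; the regex patterns and the `redact` helper are illustrative assumptions, not tooling recommended by anyone quoted here, and production systems should rely on dedicated PII-detection/DLP software rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real deployments need far more robust detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Mask likely PII so it never reaches an external model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-123-4567 about the refund."))
```

The trade-off Busleiman describes shows up directly in the pattern list: looser patterns catch more sensitive data but also redact harmless text, reducing the tool's usefulness.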
Rugova also emphasized the importance of taking the time to train employees and all users on data security best practices: ensure that everyone who works with ChatGPT data is trained on those practices, including how to recognize and report potential security breaches. This, she said, will help create a culture of security awareness and minimize the risk of human error.
“ChatGPT is a language model that uses machine learning algorithms to generate natural language responses to prompts. Generating responses may require access to sensitive data such as customer information, product data or proprietary information,” Rugova said. “If this data is not properly secured, it could be vulnerable to cybersecurity breaches. If CX and marketing leaders are not properly trained on data security best practices, they may inadvertently expose sensitive data to risk.”
Implement Security Protocols, Control Customer Data Access
In order to prevent ChatGPT data breaches, Rugova has a few suggestions:
- Implement robust data security protocols. Ensure that all data related to ChatGPT is protected; after all, anything free comes with privacy concerns. This should include firewalls, encryption of sensitive data and user authentication procedures to prevent unauthorized access.
- Limit access to sensitive data. Restrict access to only those individuals who require it for their job responsibilities. Without appropriate controls limiting access to ChatGPT and the data it uses, companies leave that data vulnerable to unauthorized access.
- Conduct regular security audits. Regularly audit your data security protocols and systems to identify potential vulnerabilities and address them before they can be exploited. This can include penetration testing, vulnerability scans, and other security assessments.
- Partner with a trusted technology provider. Work with a technology provider that can help manage and secure data, which can include outsourcing data management and security functions to a third party with expertise in data security and compliance.
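The access-limiting advice above can be enforced in code as well as in policy. A minimal sketch of a role-based gate in front of a ChatGPT integration; the role names and the `authorize` helper are hypothetical, not part of any real API:

```python
# Hypothetical role-based gate in front of a ChatGPT integration.
# Only these roles may include customer data in prompts; everyone
# else is limited to prompts containing no customer data.
ALLOWED_ROLES = {"cx_lead", "marketing_analyst"}

def authorize(user_role: str, contains_customer_data: bool) -> bool:
    """Return True if this user may submit this prompt."""
    if not contains_customer_data:
        return True  # general prompts are open to all employees
    return user_role in ALLOWED_ROLES

print(authorize("intern", contains_customer_data=True))
print(authorize("cx_lead", contains_customer_data=True))
```

In a real deployment the role lookup would come from the company's identity provider, and the gate would sit server-side so it cannot be bypassed from a browser.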
“In general, data breaches can occur due to a variety of factors, such as internal negligence or malicious activity, external hacking attempts, or software vulnerabilities,” Rugova said. “To prevent data breaches, it is vital to have a comprehensive understanding of the potential risks and to implement proactive measures to address them.”
Related Article: OpenAI Incorporates Web Search Into ChatGPT With Web Browser Plugin
Beware of Scam ChatGPT Apps, Set AI Usage Guidelines
Jenson Crawford, senior director of engineering Americas at Kaleyra, has two pieces of advice for safely navigating ChatGPT:
- Beware of scam apps and websites. There is currently NO official ChatGPT app; use ONLY the official webpage, Chat.OpenAI.com. Scammers are using the popularity of ChatGPT to trick people and steal personal information.
- Partner with trusted vendors. If you are using a chatbot or virtual assistant powered by ChatGPT, make sure it is from a trusted vendor, and ensure that the vendor follows security best practices and provides regular security updates.
Ahmed Banafa, a professor at San Jose State University’s College of Engineering who focuses on IoT, blockchain, cybersecurity and AI, also shared his five tips for protecting data on ChatGPT.
- Don’t enter specific plans or step-by-step details for new products or services into ChatGPT.
- Never ask ChatGPT for help solving problems or bugs in your new products or services.
- Don’t ask it to summarize confidential reports or PowerPoint presentations.
- Limit use of the tool to general knowledge.
- Set guidelines for everyone to follow, in all departments.
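Guidelines like these can also be checked mechanically before a prompt is ever submitted. A sketch of a pre-submission screen; the blocked keywords are illustrative assumptions, and any real deployment would maintain its own list:

```python
from typing import Optional

# Illustrative keyword screen enforcing a "general knowledge only" policy.
BLOCKED_TOPICS = ("confidential", "roadmap", "unreleased", "internal report")

def violates_guidelines(prompt: str) -> Optional[str]:
    """Return the first blocked keyword found in the prompt, or None if it passes."""
    lowered = prompt.lower()
    for keyword in BLOCKED_TOPICS:
        if keyword in lowered:
            return keyword
    return None

hit = violates_guidelines("Summarize our internal report on Q3 churn")
if hit:
    print(f"Blocked: prompt mentions '{hit}'")
```

A keyword screen is crude and easy to evade, so it complements rather than replaces the training and guidelines the experts describe.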
“The best way to protect data when using ChatGPT is not to have sensitive data analysis or audit using ChatGPT. The data you entered into ChatGPT is transmitted to external servers, and it’s not in your control,” Banafa said. “The bottom line, think of ChatGPT as a stranger who stopped by your office and you are discussing confidential information with him or her — no one will do that.”