Can an AI chatbot be held liable for an illegal wiretap? A case against Old Navy may provide an answer.
- In the Central District of California, a lawsuit has been filed against Old Navy, accusing its AI chatbot of engaging in unlawful wiretapping by capturing, recording, and saving conversations.
- The plaintiff's lawsuit claims that the chatbot "convincingly imitates a human and prompts consumers to disclose personal information."
- Numerous consumer privacy lawsuits, some unrelated to AI, have been filed in California against companies such as Home Depot, General Motors, Ford, and JCPenney, relying on a wiretapping law that dates to the 1960s.
The privacy of online shopping chat conversations is becoming a focus of court challenges as generative AI tools like ChatGPT become more powerful and capable of taking over the role of customer service agents.
The use of copyrighted material to train AI is already a source of legal disputes, and the adoption of gen AI-powered chatbots by companies has opened up a new legal issue regarding consumer privacy.
Can an AI be held liable for illegal wiretapping?
A lawsuit now playing out in court accuses Old Navy's brand chatbot of illegal wiretapping by recording, logging, and storing conversations. The suit, filed in the Central District of California, claims that the chatbot "convincingly impersonates an actual human who encourages consumers to share their personal information."
The plaintiff in the lawsuit claims that he communicated with what he thought was a human Old Navy customer service representative, but was unaware that the chatbot was recording and storing the entire conversation, including keystrokes, mouse clicks, and other data about how users navigate the site. Additionally, the suit alleges that Old Navy illegally shares consumer data with third parties without informing consumers or seeking their consent.
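The complaint's mention of keystrokes and mouse clicks refers to the kind of session-recording scripts many retail sites embed. As a rough illustration only, the TypeScript sketch below shows how a generic browser script could capture those events and ship them to an analytics endpoint; the event handling is standard DOM API, but the `SessionEvent` shape and the `/analytics/session` URL are invented for this example and are not taken from the case.

```typescript
// Hypothetical sketch of a browser-side session recorder, illustrating the
// kind of capture alleged in the complaint. Not Old Navy's or any vendor's code.

type SessionEvent = {
  kind: "keystroke" | "click";
  target: string;    // tag name of the element involved
  timestamp: number; // milliseconds since page load
};

const buffer: SessionEvent[] = [];

function record(kind: SessionEvent["kind"], target: string): void {
  buffer.push({ kind, target, timestamp: performance.now() });
}

// Every keystroke and mouse click on the page is observable this way.
document.addEventListener("keydown", (e) =>
  record("keystroke", (e.target as HTMLElement | null)?.tagName ?? "unknown")
);
document.addEventListener("click", (e) =>
  record("click", (e.target as HTMLElement | null)?.tagName ?? "unknown")
);

// Periodically ship the buffer to a (hypothetical) analytics endpoint.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon("/analytics/session", JSON.stringify(buffer));
  buffer.length = 0; // clear after sending
}, 5000);
```

Whether this kind of capture requires explicit consent is exactly the question the wiretapping claims put before the court.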
Old Navy, through its parent company Gap, declined to comment.
Numerous lawsuits have been filed in California against companies such as Old Navy, Home Depot, General Motors, Ford, and JCPenney, alleging illegal wiretapping of private online chat conversations, although not necessarily involving an AI-powered chatbot.
Whether Old Navy and other companies are recording conversations and sharing the data for AI training purposes is less sensational than the wiretapping charge itself, but it raises privacy questions about chatbots that will need answers before AI can be trusted as a personal assistant.
Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University, said one concern about these tools is how little is known about the data used to train them.
Researchers have found that AI-powered chatbots can be prompted into collecting personal data from users, according to Raicu. As AI is deployed across corporate enterprise systems, companies worry about what data is being fed into generative AI models and what guardrails are needed around their use, a new variation on the "firewall" issues that have long been central to technology compliance. Companies such as JPMorgan and Verizon have cited the risk of employees leaking trade secrets or other proprietary information that shouldn't be shared with large language models.
Lawsuits show US lagging on AI regulations
The Old Navy lawsuit highlights how far the U.S. lags behind Europe and Canada on regulating AI and online interactions: the suit rests on a wiretapping law from the 1960s written to address privacy violations over rotary phones.
While there is no federal-level regulation of online privacy, states have adopted their own rules. California has the most comprehensive laws, including the California Consumer Privacy Act, which is modeled after Europe's GDPR. Colorado, Connecticut, Utah, and Virginia also have consumer data privacy laws that give consumers the right to access and delete personal information and to opt out of the sale of personal information. Recently, eight more states, including Delaware, Florida, and Iowa, have followed suit. But the rules vary state by state, resulting in a patchwork that makes compliance difficult for companies doing business across the country.
Without federal online privacy legislation, companies can proceed without implementing privacy safeguards. According to Ari Lightman, professor of digital media at Carnegie Mellon University's Heinz College, generative AI, which uses natural language processing and analytics, is still imperfect, though the models improve over time as more people interact with them. Lightman added that it remains a gray area in terms of legislation.
Personal information opt-out and ‘delete data’ issues
The regulations provide varying levels of protection for consumers, but it's uncertain whether companies can actually erase information once it has been used to train a model, because a language model cannot simply unlearn its training data.
According to Raicu, the argument is that once the model has been trained using the data, it cannot be untrained.
The Delete Act, recently passed in California, allows residents to request the deletion of their personal data from all data brokers with a single request. It builds on the California Consumer Privacy Act, which grants residents the same rights but requires them to contact each of the state's roughly 500 registered data brokers individually. The California Privacy Protection Agency has until January 2026 to implement the streamlined deletion process.
The Italian Data Protection Authority temporarily disabled ChatGPT in the country and launched an investigation into the AI's suspected breach of privacy rules. The ban was lifted after OpenAI agreed to change its online notices and privacy policy.
Privacy disclosures and liability
Companies are increasingly turning to AI-powered chatbots as they offer round-the-clock availability, can improve the efficiency of human agents, and may ultimately prove more cost-effective than hiring people. Additionally, chatbots can be a more appealing option for consumers who want to avoid lengthy wait times to speak with a human representative.
Chet Wisniewski, director and global field CTO at cybersecurity firm Sophos, considers the Old Navy case to be "a bit superficial" because, regardless of the outcome, website operators will likely put up more banners to absolve themselves of any liability.
As chatbots become more skilled in conversation, it will become increasingly difficult to distinguish between human and computer interactions.
Data privacy is not necessarily a greater concern when interacting with chatbots than with humans or online forms, privacy experts say, and basic precautions still apply, such as not posting sensitive information like your birthdate. Consumers should also be aware that the data collected can be used to train AI. That may not matter much for simple transactions like returns or out-of-stock items, but as conversations turn to more personal topics, such as mental health or romance, the ethical considerations grow more complex.
"Although we lack norms for these things, they are already part of our society. The common thread in conversations is the need for disclosure, which is repeated because people forget," Raicu stated.