The dangers of employing gen AI like ChatGPT, Google Gemini, Microsoft Copilot, and Apple Intelligence in your personal life.

  • AI tools for consumers are becoming increasingly accessible and widespread, with options such as OpenAI's ChatGPT, Google Gemini, Microsoft Copilot, and Apple Intelligence.
  • It is crucial for consumers to understand the differing privacy policies of these tools and the opt-out options available to protect their data.
  • There is no advantage for consumers in allowing gen AI to be trained on their data, and potential risks are still being investigated.

Numerous individuals are captivated by generative AI and are using the novel technology for a range of personal and professional purposes.

But many ignore the potential privacy ramifications, which can be significant.

While AI tools such as OpenAI's ChatGPT, Google's Gemini, Microsoft Copilot, and Apple Intelligence are becoming increasingly popular among consumers, their privacy policies regarding the use and retention of user data differ. Many consumers are unaware of how their data is or could be utilized.

Different tools offer varying levels of control, which makes being an informed consumer important, according to Jodi Daniels, CEO and privacy consultant at Red Clover Advisors, which advises companies on privacy matters. There isn't a universal opt-out option across all tools, Daniels emphasized.

The increasing use of AI tools and their integration into daily life on personal computers and smartphones have made these questions even more relevant. For instance, Microsoft recently released its first Surface PCs with a dedicated Copilot button on the keyboard for easy access to the chatbot, fulfilling a promise made several months earlier. On the other hand, Apple recently unveiled its vision for AI, which involves several smaller models that run on its devices and chips. Apple executives emphasized the importance of privacy, which can be challenging with AI models.

There are various methods for safeguarding privacy in the era of advanced AI technology.

Ask AI the privacy questions it must be able to answer

Consumers should carefully review privacy policies before selecting a tool to understand how their information is used, what data is shared, and whether there are options to opt out or limit data usage. Is it possible to turn off data sharing? Can users control what data is collected and how long it is retained? Can data be deleted? Are opt-out settings difficult to find?

Privacy professionals say it is a red flag if you cannot readily find answers to these questions in a provider's privacy policies.

A tool that takes privacy seriously will tell you up front how it handles your data, Daniels said.

Consumers need to take ownership of the issue, Daniels emphasized, because companies differ in their values and in how they make money.

She cited Grammarly, an editing tool used by many consumers and businesses, as an example of a company that transparently explains on its website how data is used.

Keep sensitive data out of large language models

While some individuals are trusting when it comes to inputting sensitive data into generative AI models, Andrew Frost Moroz, founder of Aloha Browser, advises against doing so since the potential misuse of the data is unknown.

Many corporations are worried about employees using AI models for work, since workers may not realize how the information they enter is used for training. If a confidential document is entered, the AI model gains access to its contents, which can create compliance and confidentiality risks. To address this, many companies are approving only custom versions of gen AI tools that maintain a separation between proprietary information and large language models.
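The separation companies enforce can be approximated on a small scale by scrubbing obvious identifiers from text before it is ever sent to a third-party AI service. The sketch below is purely illustrative: the patterns are assumptions, not a complete safeguard, and real deployments rely on dedicated data-loss-prevention tooling rather than a handful of regular expressions.

```python
import re

# Illustrative redaction patterns -- NOT exhaustive. Real PII detection
# is far harder than a few regexes; this only shows the general idea.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # SSN-shaped numbers
]

def scrub(text: str) -> str:
    """Replace sensitive-looking substrings with placeholder tags
    before the text leaves your machine."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@example.com or 555-867-5309."))
# -> Contact [EMAIL] or [PHONE].
```

The design choice worth noting is that redaction happens locally, before any network call, so the AI provider never sees the original identifiers at all.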

It is crucial to exercise caution when using AI models for anything not intended to be shared publicly, according to Frost Moroz. Being aware of how you are using AI matters: using AI to summarize a personal legal document, for instance, is not advisable. Similarly, if you have an image of a document and ask AI to read the text so you can copy a particular paragraph, the model becomes aware of the document's contents, so consumers need to keep that in mind.

Use opt-outs offered by OpenAI, Google

Each AI tool has its own privacy policies and may offer opt-out options. For instance, Gemini enables users to set a retention period and erase specific data, in addition to other activity controls.

To opt out of having their data used for ChatGPT model training, users can navigate to the profile icon on the bottom-left of the page and select Data Controls under the Settings header. Then, they need to disable the feature that says "Improve the model for everyone." While this is disabled, new conversations won't be used to train ChatGPT's models, according to an FAQ on OpenAI's website.

According to Jacob Hoffman-Andrews, a senior staff technologist at Electronic Frontier Foundation, there are risks associated with allowing gen AI to train on consumer data, and there is no clear benefit for consumers in doing so.

If personal data is published on the web, consumers may be able to get it removed, and it will then disappear from search engines. But untraining AI models is a much harder problem: while there may be ways to mitigate the use of certain information, doing so reliably is not foolproof and remains an area of active research.

Opt in, such as with Microsoft Copilot, only for good reasons

Microsoft 365's Copilot feature assists users with tasks such as analytics, idea generation, and organization within Word, Excel, and PowerPoint.

Microsoft claims that it does not share consumer data with third parties without permission and does not use customer data to train Copilot or its AI features without consent.

Users can enable data sharing for Dynamics 365 Copilot and Power Platform Copilot AI features by signing in to the Power Platform admin center, selecting Settings > Tenant settings, and turning on the data-sharing toggle.

Opting in has its advantages, such as enhancing the effectiveness of existing features. However, privacy experts caution that this comes at the cost of losing control over how data is utilized, which is a significant concern for privacy.

Consumers who have opted in with Microsoft can withdraw their consent at any time by going to Tenant settings under Settings in the Power Platform admin center and turning off the "Data sharing for Dynamics 365 Copilot and Power Platform Copilot AI Features" toggle.

Set a short retention period for generative AI for search

While AI can be a useful tool for generating information and ideas, it's important to be mindful of privacy when using it. Setting a short retention period in gen AI tools and deleting chats after obtaining the desired information can help, though deleted data may still persist in server logs. Consumers should also take steps to reduce the risk of third parties gaining access to their accounts, and should review the privacy settings of the specific site being used.

by Cheryl Winokur Munk
