Your privacy settings and AI are forming a new bond, with Gmail and Word being the main players.

  • AI features have been quietly incorporated into long-standing programs such as Gmail, Microsoft Office, and Facebook.
  • Even if there is no malicious intent behind the AI integration, users should have an easier way to opt out of these features.
  • Microsoft says its connected experiences feature is not used to train AI models.

It's a good idea to prioritize basic cyber hygiene at the start of the year. While we're all familiar with the need to patch, update software, and change passwords, a growing concern is the integration of AI into programs that may compromise privacy.

Lynette Owens, vice president, global consumer education at cybersecurity company Trend Micro, stated that AI's integration into software and services has raised significant questions about privacy policies that existed before the AI era. Many programs we use today, including email, bookkeeping, productivity tools, social media, and streaming apps, may be governed by privacy policies that lack clarity on whether our personal data can be used to train AI models.

"The lack of consent for the use of personal information leaves us all vulnerable. It's time for apps, websites, and online services to examine their data collection practices, sharing partners, and methods, as well as assess whether it can be used to train AI models. A significant amount of catching up is required."

Where AI is already inside our daily online lives

The potential issues overlap across most of the programs and applications we use daily.

For years, AI has been integrated into various platforms' operations, even before it became a popular term.

Social media platforms such as Facebook and Instagram have long used AI for features like facial recognition in photos and personalized content feeds.

Owens advised that while these tools provide convenience, consumers should be aware of the potential privacy trade-offs, such as the amount of personal data being collected and how it is used to train AI systems. It is crucial for everyone to review privacy settings, understand what data is being shared, and regularly check for updates to terms of service.

Microsoft's connected experiences feature, which has been enabled by default since 2019 with an option to opt out, has recently come under scrutiny. Some press reports claimed that it is a new feature or that its settings had been changed, but Microsoft and cybersecurity experts say this is not the case. Even so, privacy experts worry that advances in AI could allow data and words in programs like Microsoft Word to be used in ways that existing privacy settings do not adequately cover.

As technology advances, the potential consequences of data usage may be significantly wider, even if privacy settings remain unchanged, according to Owens.

Microsoft says it does not use customer data from Microsoft 365 consumer and commercial applications to train foundational large language models, except where customers explicitly consent to having their data used for custom model development. The setting enables cloud-backed features such as real-time co-authoring, cloud storage, and tools like Editor in Word that provide spelling and grammar suggestions.

Default privacy settings are an issue

Ted Miracco, CEO of security software company Approov, said Microsoft's connected experiences offer increased productivity but also raise privacy concerns. Because the setting is on by default, individuals could be exposed to data collection without realizing it, and organizations should think carefully before leaving the feature enabled.

Miracco said Microsoft's assurance partially alleviates privacy concerns but does not fully address them.

According to Kaveh Vadat, founder of RiseOpp, an SEO marketing agency, perception can be its own problem.

"Enabling features automatically can be intrusive or manipulative to some users, as it shifts the responsibility to them to review and modify their privacy settings."

In an era of widespread distrust and suspicion towards AI, his perspective is that companies should be more transparent, not less.

To improve public perception, he said, Microsoft and other companies should make data sharing opt-in rather than on by default, and provide more detailed, non-technical explanations of how personal data is handled.

"Despite the safety of the technology, public perception is influenced by fears and assumptions, particularly in the AI era where users frequently feel powerless," he stated.


Jochem Hummel, an assistant professor of information systems and management at Warwick Business School in England, argues that default settings that facilitate sharing are advantageous for businesses but detrimental to consumer privacy.

Hummel said companies can improve their products and remain competitive by defaulting to more data sharing, but from a user's perspective, an opt-in model for data sharing would be the more ethical approach. As long as the extra features offered in exchange for data collection are not essential, users can decide which trade-off aligns more closely with their interests.

Hummel stated that there are advantages to the current balance between AI-powered tools and privacy, as he has observed in the work of his students. Students who have grown up with constant access to technology and social media are less concerned about privacy and are using these tools with great enthusiasm, according to Hummel. For instance, his students are producing higher-quality presentations than ever before, he said.

Managing the risks

While concerns about massive copying by LLMs have been exaggerated, AI's development raises privacy concerns, according to Kevin Smith, director of libraries at Colby College.

The privacy concerns about AI have been present for years, Smith said, but the rollout of AI trained on large language models has brought them to the forefront. Personal information is all about relationships, he said, so the real change to address is the risk that AI models could uncover data that was more secure in a more "static" system.

In many programs, the option to disable AI features is concealed in the settings. For example, with connected experiences, open a document, then click "file," then "account," then "privacy settings." From there, go to "manage settings" and scroll down to connected experiences. Click the box to turn it off. Microsoft cautions that doing so may limit certain experiences. On the other hand, keeping the setting on enables more communication, collaboration, and AI-powered suggestions.
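For anyone managing several Windows machines, the same controls can also be set through Office's policy registry values rather than by clicking through each menu. The Python sketch below is illustrative only: it assumes the per-user policy path and value names described in Microsoft's Office privacy policy documentation for Office 2016 and later ("16.0"), with 2 meaning disabled, so verify these against your own installation before relying on it.

```python
# disable_connected_experiences.py -- minimal sketch, Windows only.
# Assumes the Office 16.0 per-user policy registry values for connected
# experiences (names and meanings taken from Microsoft's Office privacy
# policy documentation); confirm they apply to your Office version.
import winreg

# Per-user Office policy hive; Office 2016/2019/Microsoft 365 all use "16.0".
POLICY_PATH = r"Software\Policies\Microsoft\office\16.0\common\privacy"

# Assumed meanings: 2 = disabled, 1 = enabled.
SETTINGS = {
    "disconnectedstate": 2,                    # all connected experiences
    "usercontentdisabled": 2,                  # experiences that analyze content
    "downloadcontentdisabled": 2,              # experiences that download content
    "controllerconnectedservicesenabled": 2,   # optional connected experiences
}

def disable_connected_experiences() -> None:
    # Create the policy key if it doesn't exist, then write each value.
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, POLICY_PATH) as key:
        for name, value in SETTINGS.items():
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
            print(f"set {name} = {value}")

if __name__ == "__main__":
    disable_connected_experiences()
```

After running it, restarting Word and rechecking the privacy settings menu is the simplest way to confirm the change took effect; deleting the values from that registry key restores the defaults.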

To disable smart features and personalization in Gmail, open the app, tap the menu, navigate to settings, select the desired account, scroll to the "general" section, and uncheck the boxes next to the smart features and personalization options.

Malwarebytes said in a blog post that disabling the Microsoft feature may mean losing some functionality when working on the same document with others in your organization. But if privacy is a concern and you rarely use the feature, it can be turned off under Privacy Settings. The blog post added that there is no evidence these connected experiences were used to train AI models.

Data privacy experts say companies should take responsibility for informing users up front about the full scope of features that are on by default, rather than relying on users to find and deactivate them.

"The problem lies in the unclear disclosures and lack of communication about what "connected" means and how deeply personal content is analyzed or stored, according to Chaar. For those unfamiliar with technology, it's like inviting an assistant into your home, only to discover they've been taking notes on your private conversations for a training manual."

The lack of robust systems prioritizing user consent and offering control leaves individuals vulnerable to having their data repurposed in ways they neither anticipate nor benefit from, as Chaar pointed out.

by Kevin Williams
