Employees are trusting AI with sensitive workplace information

  • Key Insight: Discover how routine employee AI queries create critical enterprise data-exposure blind spots.
  • What's at Stake: Regulatory penalties, costly breaches and reputational damage for firms across industries.
  • Forward Look: Prepare for AI governance mandates, revised data policies and mandatory onboarding training.
  • Source: Bullets generated by AI with editorial review

It only takes one question typed into ChatGPT about health plans, company policies or workplace documents to put sensitive data at risk.

Over one in four professionals have entered sensitive workplace details into a generative AI tool such as ChatGPT, Google Gemini and Microsoft Copilot to speed up daily tasks and clarify information they don't understand, according to a new survey from software company Smallpdf. The habit is paving the way for costly data breaches and serious compliance violations, and leaders will need to take a hands-on approach to protect their organizations' security.

"The speed of AI adoption has wildly outpaced policy and training," says Malte Schiebelmann, SVP of product at Smallpdf. "The reality is that many employees are unknowingly jeopardizing company security in the name of efficiency." 

Read more: Walmart's VP of benefits wants AI and empathy to go hand-in-hand

Thirty-eight percent of employees have entered personal information, while 28% have shared login credentials or passwords. Sixteen percent have even disclosed health or medical records in order to better understand their healthcare plans and benefits, the survey revealed. To make matters worse, nearly 20% of employees don't remove or anonymize any of the sensitive details before entering them into AI tools, and 24% falsely believe that AI prompts remain anonymous.

A lack of education is the primary driver behind most of this behavior, according to Schiebelmann. Seventy percent of employees have never received formal training on how to use AI tools safely, Smallpdf's survey revealed, with 44% saying their company doesn't even have an official policy on AI. As a result, despite not feeling confident in their ability to use AI without making any compromising mistakes, many employees resort to lying to their employers about their usage of these tools.

"The biggest mistake many leaders are making right now is assuming that employees

know better when they often don't," Schiebelmann says. "That disconnect creates a perfect storm where workers feel empowered to use AI, but aren't equipped to protect sensitive employee data." 

The price of ignorance is too high

While the exact cost of entering workplace information into AI tools varies greatly depending on what was disclosed, the average cost of a data breach in the U.S. is over $10 million, according to a report from IBM. To employees, the decisions that could eventually lead to large-scale financial fallout, such as asking a question about specific benefits, having a chatbot summarize important internal documents or simplifying company policies, can seem small at first, but the consequences are not.

Read more: These AI tools are helping benefit leaders do their jobs faster and more efficiently

"This [should be] a growing concern for teams," Schiebelmann says. "When employees input sensitive information into AI tools, that data can be stored, exposed, or used to train future outside models."

To set up the right checks and balances and protect organizations long-term, Schiebelmann advises leaders to embed AI literacy directly into the onboarding process, rewrite existing data policies to add AI rules and limitations, and offer ongoing employee training and resources that focus explicitly on the real-world scenarios workers encounter every day.

"It's critical that leaders stop treating AI use [and misuse] as an IT-only concern when it's just as much a culture issue," Schiebelmann says. "Leaders must prioritize guardrails that protect both employees and the company from these escalating risks in the AI era." 
