How this company is making it safer to use ChatGPT at work


The rise of powerful chatbots like ChatGPT has caused quite the uproar in organizations nationwide. As employees embrace these new tools to help them accomplish work tasks, employers are uneasy about company information potentially being shared with the AI.

New findings from career insights platform Fishbowl revealed that 43% of professional workers are using ChatGPT for work-related tasks. What's more, 68% of those employees are doing so without their manager's knowledge. Much of that secrecy could stem from the distrust many employers have recently voiced toward these programs, which is why virtual agent management platform Espressive has created a middle ground.

"ChatGPT can become a bit of a challenge for organizations when employees believe that the answers from the AI are the correct answers for that particular organization," says Pat Calhoun, Espressive's founder and CEO. "GPT doesn't know who you are as an individual; it doesn't know where you work. What we've done is supplement that question to make sure that it's relevant to the employee." 


In an effort to give businesses a safe and responsible option for when their employees inevitably use ChatGPT at work, Espressive launched Barista, an extension that acts as an intermediary between a user and the chatbot. The company began developing Barista in response not only to the demands of its clients, but to the willingness of larger organizations such as JPMorgan, Google and Twitter to adopt the chatbot into their tech stacks.

To get access to Barista, employers must first have their own license to use ChatGPT. Espressive then connects Barista to both company-issued devices and employees' personal devices, as well as to OpenAI's server. Once the connection has been established, employers can set whatever parameters they wish, such as specific information that is and isn't allowed, or questions that can't be searched.

"Where we want to solely rely on ChatGPT is when we don't know the answer," Calhoun says. "Once you submit information to something like ChatGPT it's out there for everyone. What we intended to do was make sure that everyone is using ChatGPT in a safe and responsible manner."


Without the right safeguards, ChatGPT could become more of a hindrance to organizations and employees than a tool, according to Calhoun. Previously, the only preventive measure available to organizations looking to avoid data breaches was to bar the chatbot altogether.

"We were hearing from customers that they were worried about two things related to ChatGPT: that people would get answers and just assume the answers were correct, and that people would start submitting either corporate confidential information or data," Calhoun says. "Barista combines internal knowledge of the company with information that GPT provides to make sure that we're delivering the best possible experience." 

The questions employees ask ChatGPT are often related to job details, Calhoun says. Because policies can vary greatly from company to company and state to state, certain responses from ChatGPT may be entirely inaccurate for a given employee. With Barista, if an employee works for a company based in California, for example, the extension tailors the response to fit that state's policies and regulations.


As for protecting a company's data privacy, the extension will also issue a pop-up that prevents an employee from submitting a question if it detects company code or information the employer flagged as confidential beforehand. Once data has been uploaded into GPT, that information can be used for training purposes everywhere and by anyone, according to Calhoun. AI programs like GPT can't simply learn how to code on their own, he notes, which means many of the abilities they acquire come from employees oversharing.

Even as many employers and experts voice concerns about the proliferation of programs like ChatGPT and Google's new chatbot Bard, the trend continues to gain traction. The best thing employers can do is not fight it, but invest in long-term protections that will make it profitable, Calhoun says.

"When we started Espressive six years ago, there were certainly a lot of questions," he says. "And what we're not seeing is anything worrisome like massive job losses as a result of these virtual agents or automation. Instead, people can finally start focusing on their jobs and adding more value to the organization than they used to." 
