The perils of ChatGPT in the workplace
The rise of ChatGPT and generative AI chatbots has ushered in a new era of efficiency for employees and organisations around the globe. But while much of the media attention around ChatGPT has focussed on its phenomenal capabilities, very little has highlighted the dangers of using it in the workplace.
As you will read in this article, many employees have already been threatened with dismissal for using ChatGPT in the workplace. That is because, for some employers, the benefits of the chatbot are outweighed by the dangers it poses: specifically, the risk of employees inadvertently sharing sensitive company information with ChatGPT.
This fear has seen some of Australia’s and the world’s biggest employers, including Amazon, Apple and the Commonwealth Bank, outright ban their staff from using ChatGPT. But is this fear of sharing confidential information well-founded? When you enter information into ChatGPT, what happens to it? Does it become the property of ChatGPT? We answer these questions and more in this article.
Why employers are increasingly banning workers from using ChatGPT
ChatGPT is a marvel of modern AI, capable of understanding and generating human-like text based on the input provided to it. Its potential to enhance productivity is immense: its ability to streamline customer interactions and offer on-demand information makes it a valuable tool in various business contexts. However, its power also raises concerns when it comes to safeguarding sensitive information within an organisation.
One of the key perils of using ChatGPT in the workplace is the potential for inadvertently sharing confidential company information. When employees utilise ChatGPT to generate responses or seek answers to queries, they might unknowingly share sensitive data with the AI model. This can include company strategies, trade secrets, financial data and even proprietary algorithms.
It is very easy to unknowingly share sensitive information on ChatGPT
You may not even realise that you have shared sensitive company information until it is too late. For instance, you may enter a price list or strategy document into ChatGPT, asking the chatbot to fix any typos or errors. Often, workers will do this offhandedly without thinking that they are sharing confidential information. But once you enter that information into the chatbot, it can remain on ChatGPT’s servers for a very long time.
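One practical mitigation, if your employer permits chatbot use at all, is to scrub obviously sensitive details from a document before pasting it into a chatbot. Below is a minimal Python sketch of that idea; the patterns and placeholder labels are illustrative assumptions only, and a real policy would need far more thorough rules than a few regular expressions.

```python
import re

# Illustrative patterns only (an assumption for this sketch); real documents
# contain many more kinds of sensitive detail than these three.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "price": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),
    "account_number": re.compile(r"\b\d{8,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    snippet = "Q3 price list: Widget A $1,299.00, contact sales@example.com"
    print(redact(snippet))
    # Q3 price list: Widget A [PRICE REDACTED], contact [EMAIL REDACTED]
```

Even with a filter like this, the safest assumption remains that anything you paste into a chatbot leaves your control.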
What happens to data shared on ChatGPT?
ChatGPT’s privacy policy is explicit that conversations are logged and may be shared with affiliated entities and with the AI trainers who work on the platform. Specifically, the privacy policy states:
“User Content: When you use our Services, we collect Personal Information that is included in the input, file uploads, or feedback that you provide to our Services.”
ChatGPT privacy policy
What this means is that any information an employee enters into ChatGPT is collected and stored on the servers of its parent company, OpenAI. The information is collected to help train the natural language processing capabilities of ChatGPT. The obvious danger is that employees can inadvertently share their employer’s confidential information. And how long that information will remain on OpenAI’s servers is up for debate.
In the ChatGPT privacy policy, OpenAI states that it retains information shared on the platform for “only as long as we need in order to provide our Service to you.” It also states that information will be kept on its servers for “other legitimate business purposes.”
Employer threatens employees with dismissal for sharing sensitive information on ChatGPT
While the queries, statements and other information you enter into ChatGPT are eventually deleted, this fact has not quelled employer fears. The risk of employees sharing confidential information via the chatbot is one that many companies simply are not prepared to entertain. As a result, some are threatening to dismiss employees who use ChatGPT at work. In April 2023, Korean electronics giant Samsung threatened to dismiss any of its employees found to be using AI chatbots like ChatGPT. The move came after three incidents of employees sharing sensitive confidential company information on ChatGPT in the span of 20 days.
In one incident, a Samsung employee input confidential source code from a malfunctioning semiconductor database into ChatGPT, seeking assistance in identifying a solution. In a separate case, another employee shared confidential code in an attempt to rectify issues with faulty equipment. In a third incident, an employee fed an entire meeting’s content into ChatGPT, asking it to generate meeting minutes. Adding to the complexity, these breaches occurred just three weeks after Samsung had rescinded a previous ban on employees using ChatGPT, a ban originally motivated by the very risks these incidents went on to realise.
Samsung bans its workers from using ChatGPT
The three incidents of sharing sensitive information prompted Samsung to ban its employees from using ChatGPT and other AI chatbots. In a communication addressed to its workforce, Samsung highlighted the “security risks presented by generative AI.” The company further stated that its employees are no longer permitted to access applications like ChatGPT, Google Bard and Bing Chat on company-owned computing devices, as well as on internal networks.
Samsung also warned its employees that once information is uploaded to third-party applications like ChatGPT, it is stored on external servers, rendering retrieval and deletion arduous tasks. The company also made clear that employees who use ChatGPT while at work could face dismissal.
Many prominent companies have banned ChatGPT
Samsung’s ban on its staff using ChatGPT and other AI chatbots saw it join a growing list of prominent companies to do the same. In January 2023, Amazon banned its employees from sharing confidential information with ChatGPT. This was after the company realised that some of the answers the chatbot was providing resembled private Amazon internal data.
In February 2023, US bank JPMorgan Chase placed restrictions on its workforce’s ability to use ChatGPT. The restrictions were in response to concerns that sensitive financial information could be shared by staff, which could have catastrophic regulatory consequences. A host of global banks did the same shortly thereafter, including Deutsche Bank, Citigroup, Goldman Sachs and the Commonwealth Bank of Australia.
In May 2023, Apple told its staff that using ChatGPT and AI assistants like GitHub’s Copilot was no longer allowed. The company was concerned that its employees could share sensitive data with one of these AI chatbots, some of which are financially backed by fierce rival Microsoft.
Can I be fairly dismissed for sharing sensitive information on ChatGPT?
A significant danger posed by the use of ChatGPT in the workplace is the violation of company policies and regulations. It is likely that, if they have not already, many Australian employers will ban their employees from using ChatGPT given the risk of sharing confidential information. If such a ban is written into your employer’s company policies, disobeying it could be reasonable grounds for disciplinary action or even dismissal.
But even if an employer does not have a specific policy banning ChatGPT, they will likely have strict protocols in place to protect sensitive information from leaving the premises. When employees unwittingly share confidential data with an AI like ChatGPT, they breach these policies, putting the company’s reputation and legal standing at risk. And such a violation of company policy could open up an employee to the risk of disciplinary action or dismissal.
It is therefore wise to exercise caution when using ChatGPT or other AI chatbots at work. It is important to remember that the information you enter may be stored on overseas servers for an indeterminate time. And perhaps almost as concerning, some of the answers you receive from ChatGPT may not even be accurate.
ChatGPT can get things wrong
The fact that ChatGPT can often generate inaccurate answers and information is hardly a secret. Like many of us, you have probably experienced its flaws on a number of occasions. And it is not as if ChatGPT is hiding the fact that it can occasionally get things wrong. After all, written on its interface is the following disclaimer: “ChatGPT may produce inaccurate information about people, places, or facts.”
There have also been reputable studies that dampen some of the glowing praise ChatGPT has received for its widely lauded capabilities. A recent study by researchers from US universities Stanford and Berkeley suggests that ChatGPT is becoming less accurate over time.
Study aimed to test ChatGPT’s accuracy
The study tested the accuracy and consistency of ChatGPT’s underlying GPT-3.5 and GPT-4 large language models (LLMs). Specifically, it measured ChatGPT’s propensity to “drift,” that is, its tendency to provide responses of differing quality and accuracy over time. It also tested the chatbot’s capability to follow instructions. The tests encompassed tasks such as solving mathematics problems, answering sensitive questions, engaging in visual reasoning and generating code.
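To make the idea of a “drift” test concrete, here is a minimal Python sketch of how one might compare two model snapshots on the study’s prime-number task using the OpenAI Python SDK. The snapshot names, prompt wording and tiny sample of numbers are assumptions for illustration, not the researchers’ actual protocol.

```python
# A rough sketch of a drift test: ask two model snapshots the same
# prime-number questions and compare their accuracy against ground truth.
# Requires the official OpenAI Python SDK (`pip install openai`) and an
# API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def is_prime(n: int) -> bool:
    """Ground truth via simple trial division."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def ask_is_prime(model: str, n: int) -> bool:
    """Ask the model a yes/no primality question and parse its answer."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"Is {n} a prime number? Answer only 'yes' or 'no'.",
        }],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

def accuracy(model: str, numbers: list[int]) -> float:
    """Fraction of questions the model answers correctly."""
    correct = sum(ask_is_prime(model, n) == is_prime(n) for n in numbers)
    return correct / len(numbers)

# A tiny illustrative sample; the snapshot names below are assumptions.
numbers = [101, 221, 1009, 7919, 8633]
for snapshot in ("gpt-4-0314", "gpt-4-0613"):
    print(snapshot, accuracy(snapshot, numbers))
```

A real evaluation would use a much larger question set and repeated runs, but the principle is the same: hold the questions fixed and see whether the answers change between snapshots.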
The results of the study were concerning
The results of the study provided a worrying confirmation about the inaccuracy that many have experienced while using ChatGPT.
“Overall… the behaviour of the ‘same’ LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLM quality,”
the research team stated.
For instance, in March 2023, GPT-4 exhibited an impressive 98 per cent accuracy rate in identifying prime numbers. However, when tested three months later in June, this accuracy plummeted to less than 3 per cent for the same task. In contrast, GPT-3.5, over the same period, actually improved its prime number identification performance. When it came to generating computer code, both GPT-3.5 and GPT-4 displayed a deterioration in their abilities between March and June. What makes this situation particularly perplexing is the absence of a clear explanation for the deterioration in response quality exhibited by ChatGPT.
ChatGPT defames Australian whistle-blower
The inaccuracy of ChatGPT has had very real consequences for whistleblower Brian Hood. More than a decade ago, he alerted the press to what turned out to be Australia’s biggest bribery scandal. But when you ask ChatGPT about his role in the scandal, the chatbot states that Mr Hood “was involved in the payment of bribes to officials in Indonesia and Malaysia.” ChatGPT also states that he was sent to gaol.
“I felt a bit numb. Because it was so incorrect, so wildly incorrect, that just staggered me. And then I got quite angry about it,” Mr Hood told The Sydney Morning Herald. He has since filed a defamation claim against ChatGPT’s owner, OpenAI.
Many workers have a false sense of security when using ChatGPT
More and more, workers are using ChatGPT and other AI chatbots to help them with their daily tasks, and the chatbots are becoming increasingly integrated into other apps. For instance, Microsoft has recently embedded an AI chatbot into its suite of office apps and its web browser. As AI chatbots become more familiar, many workers may overlook the importance of thoroughly vetting the information they share with them. This familiarity may breed a false sense of security, which could lead to the careless sharing of sensitive information, potentially with severe consequences.
When you are using ChatGPT or another AI chatbot at work, be careful about what you feed it. It is essentially a stranger with whom you are sharing some of your most private thoughts, information and data. You never know how long your information will remain on an overseas server or how it could be used by third parties.
Have you been unfairly dismissed?
Have you experienced unfair dismissal, discrimination or unfair treatment for exercising your workplace rights? Look no further than A Whole New Approach (we are not lawyers) for the support and guidance you need. Time is of the essence – you have just 21 days from your dismissal date to make a claim with the Fair Work Commission.
We are a seasoned team of workplace representatives with over 30 years of experience. We are dedicated to helping you navigate the unfair dismissal process and secure the compensation you rightfully deserve. Our track record speaks volumes – we’ve assisted thousands of employees in achieving justice. We know the Fair Work Commission inside out and are unwavering in our commitment to securing the best possible outcome for you.
Our commitment to your cause starts with a free initial consultation, where you can have all your questions answered without any obligation. What’s more, when we represent you, our fees operate on a contingency basis. This means you only pay if we win your case, ensuring you have nothing to lose and everything to gain. We work on a national basis, including Victoria, NSW and Queensland.
So, if you have suffered an unfair dismissal or faced adverse actions at work, do not delay. Reach out to A Whole New Approach today at 1800 333 666.
Articles similar to: The perils of ChatGPT in the workplace
4 strange dismissals that made global headlines
How much is my unfair dismissal case worth