Managing the risks of Artificial Intelligence in the workplace
07/08/2025
The risks of using artificial intelligence tools in the workplace extend beyond the misuse of organisations’ confidential information and intellectual property. This article looks at some other emerging risks that organisations ought to think about in relation to the adoption and use of AI systems by their employees.
Discourse regarding the use of Artificial Intelligence (AI) software in the workplace has typically focused on the confidentiality, intellectual property and privacy risks that may arise from sharing sensitive commercial information with generative AI chatbots (e.g. ChatGPT).
Concerns regarding these risks are well founded, and organisations should continue to insist on robust risk management measures (from both AI software providers and staff) to protect their own sensitive information and that of their customers.
This article looks at some other emerging risks that organisations ought to think about in relation to the adoption and use of AI systems by their employees. The first of these potential risks is a technical legal risk, while the second and third risks we explore below relate to the management of employees more generally.
For the purposes of this article, we use the term “AI” as a catch-all for any software program capable of generating content without significant human oversight or interaction: for example, a program that can automatically generate a transcript of a video call, or produce a memorandum based on a prompt from a human being.
- Could AI-generated transcripts or summaries of meetings be discoverable in litigation?
In a recent general protections claim in the Federal Circuit and Family Court of Australia, the applicant identified in her claim that Microsoft Teams had produced transcripts of meetings relating to the process by which her role was made redundant. The matter settled before the parties were required to prepare their evidence or undertake discovery. However, had the matter progressed, it is likely that any transcripts of those meetings generated by Microsoft Teams would have been discoverable.
Most organisations are aware that Microsoft is recommending the adoption of its AI-based “Copilot” feature as a way to improve staff productivity during and following meetings held using Microsoft Teams. What is less commonly understood is that, if Copilot is engaged during a meeting, it will transcribe the meeting by default. Copilot can be used without the transcription function, but users must manually adjust the Copilot settings to turn transcription off. Even then, Copilot still relies on some note-taking functions to generate summaries and action points at the end of a meeting.
Organisations may have long-standing, detailed policies and procedures designed to control traditional documents. Such policies and procedures have been implemented in part to avoid swathes of uncontrolled documents being created without appropriate decision-making or oversight. That issue is heightened by the expansion of civil penalties imposed by legislation. These considerations are not new, but the uncontrolled generation of documents such as transcripts by AI requires consideration of how to balance the productivity gains from AI against the need to control document production. As with any document, AI-generated meeting transcripts and summaries can be discoverable in litigation, and that exposure could extend to all aspects of an organisation’s operations. Legal professional privilege issues also arise where a document was created using AI and it is not clear whether it was created for one of the limited purposes to which legal professional privilege can attach.
As with any development in technology, employees are unlikely to be conscious of these risks and may permit AI tools to make records of discussions in an uncontrolled environment, and those records may be discoverable in litigation.
One suggestion has been to train and require employees to review and then amend or delete file notes made by AI. That is not itself an answer to the risks because:
a) all Australian jurisdictions impose criminal sanctions for the destruction or alteration of evidence that may be relevant to a judicial proceeding. For example, section 254 of the Crimes Act 1958 (Vic) makes it a criminal offence to destroy, conceal or render illegible any document that is reasonably likely to be required in evidence in a legal proceeding. In this context, the use of auto-delete functions on messaging apps (such as WhatsApp or Signal) may have criminal consequences;
b) any amendments to AI-generated transcripts or file notes may be traceable via the document’s metadata, which could lead to suggestions of evidence manipulation;
c) in busy commercial settings, it is unlikely that employees will review transcripts and file notes to check for content that is potentially legally damaging to their organisation;
d) if an organisation were to require its employees to do so, it may lead to a significant loss of productivity (and negate the benefits of the AI tool in the first place);
e) it may not be possible to permanently delete or alter all copies of an AI-generated transcript or file note, because copies may be stored on servers controlled by the companies that own the AI tools, and documents held on third-party servers could be subpoenaed in litigation.
Given the potential productivity benefits of having AI create transcripts or summaries of meetings, we are not suggesting that employers impose blanket bans on the use of AI for such purposes. Rather, leaders of organisations should think carefully about the appropriate uses of AI-generated meeting transcripts, and put in place software restrictions that prevent employees from using AI to generate transcripts of meetings that do not fit within approved parameters.
For example, using AI dictation software to generate a summary of a town hall meeting run by the CEO may be an appropriate use case: the summary would enable the CEO to share important points with employees who were unable to attend the meeting, and would cut down on the human labour typically involved in producing such summaries.
- Will the use of AI tools prompt unionisation of your workforce?
There have been recent reports of small numbers of Canva and Atlassian employees seeking union membership in an effort to mitigate the negative impact of the adoption of AI tools on their jobs. The Commonwealth Bank has also been in the headlines recently in relation to the proposed redundancies of 45 call centre roles, which will be replaced by AI tools.
These developments have prompted the Australian Council of Trade Unions (the ACTU) to call for the passage of laws that would limit the adoption of AI tools by employers. The laws proposed by the ACTU would require employers to consult staff before introducing any AI technology into the workplace and to guarantee employees’ job security. The ACTU also proposes that governments should not procure services from organisations that refuse to sign agreements constraining their use of AI tools in this manner.
Currently, employers must consult with workers who are covered by a modern award or enterprise agreement about the adoption of new technology that is likely to have a significant effect on those workers (such as termination of employment; major changes in the composition, operation or size of the workforce or in the skills required; or loss of promotion opportunities or job tenure).
Many white-collar workers (who appear to be most exposed to the risk of displacement by AI tools) are not unionised and may not be covered by a modern award or enterprise agreement, so they are often not protected by these consultation obligations. Even where an organisation is bound to consult its workforce about the introduction of a technological change, it is typically able to proceed with the implementation without significant impediment. Consequently, white-collar workers who are not covered by an enterprise agreement may see enterprise bargaining as an avenue to limit the integration of AI tools into their organisation if they become concerned about being displaced by such tools. However, it is an open question whether terms of an enterprise agreement restricting the employer’s adoption of AI technology would be permitted under s 172 of the Fair Work Act 2009 (Cth). That may be why the ACTU is proposing laws and workplace agreements that sit outside the enterprise bargaining framework.
The implementation of legislation regulating employers’ adoption and use of AI tools is likely to be a hotly contested area of public policy that will take some time to play out. In the short term, to limit the risks of white-collar unionisation and white-collar enterprise bargaining, employers should think carefully about how they communicate with staff about the introduction of AI tools that have the potential to displace workers, and about the rate at which such tools are adopted.
- What effect will the use of AI have on the competency of your organisation’s workforce?
Cognitive scientists have begun researching how the use of AI tools may affect users’ cognition. Some of the early data suggests that people who rely heavily on AI tools may experience a loss of competency in the skills they have outsourced to AI.
For example, in a study conducted by researchers at MIT, participants were assigned to three groups: group one used a large language model (LLM) (such as ChatGPT) to produce an essay, group two was permitted to use a search engine, and group three was not permitted to use any search tools (this group was named “brain only”). The results showed that the three groups had significantly different neural connectivity patterns, with the “brain only” group demonstrating stronger and wider-ranging neural networks. The group that relied on the support of an LLM to prepare their essay had difficulty quoting from it minutes after completing it.
AI tools clearly present organisations with an opportunity to improve the retention of corporate knowledge. An AI tool that can review the email histories of current and former employees, and then prepare work that draws on the organisation’s archives, may limit the damage done when key employees leave and take significant corporate knowledge with them. However, that opportunity should be balanced against the risk of eroding the skillsets the organisation depends on to compete in the marketplace. If an organisation’s workers come to rely heavily on AI tools to perform their jobs, their skillsets may atrophy, resulting in lower levels of competency over time.
Organisations will, of course, be able to manage poor employee performance caused by diminishing skillsets through standard performance management processes. But a corporate culture that relies heavily on AI may come to be seen as unattractive by highly skilled workers who wish to preserve their skillsets.
- Over-reliance on AI-generated output
In the legal profession, there have been numerous reports of lawyers relying on fake case citations after using AI tools to undertake legal research and draft submissions. These examples show that information generated by AI tools may not be reliable. If AI tools cannot generally be trusted to be accurate, any potential productivity improvements associated with their commercial use may be negligible or illusory.
Leaders of organisations will have to think carefully about how to balance the objective of improving the productivity of their businesses by adopting AI tools against the legal, technological, human resources and public relations risks associated with that adoption. Optimal outcomes will be achieved by leadership teams that work collaboratively and learn from one another to balance those objectives and risks.