3 Feb 2026

AI, employee grievances and how employers should respond

Grievances invented by technology are just one example of how chatbots are increasingly being misused in the workplace. Daniel Rawcliffe, an associate at ESP HR, discusses how to spot them and handle them




An employee has sent his manager a grievance. It is lengthy, formal and packed with references to employment law and case law.

At first glance, it seems well put together – almost too well put together – and something about it doesn’t feel quite right.

What’s more, the detail in the grievance does not seem to align with the reality of the situation, and the employee is refusing to meet in person, insisting that the matter be handled in writing. If this sounds familiar, you may be dealing with a “GIT” – a grievance invented by technology. It is just one example of how artificial intelligence (AI) is increasingly being misused in the workplace.

Lawyers are reporting an increasing number of enquiries from clients who suspect their employees may be turning to AI tools such as ChatGPT to draft their grievances. And it does not stop there – they are also seeing AI used in appeals, responses to emails and, perhaps more worryingly, in outcome letters prepared by managers.

How employers can spot – and respond to – complaints crafted by AI

Many of the enquiries lawyers see in relation to questionable grievances follow a similar pattern. The grievance tends to be unusually long, detailed and formal. It is often raised in the middle of an investigation into the employee’s conduct or performance. In many cases, the employee is reluctant to attend face-to-face meetings, requesting instead that grievance or disciplinary processes be conducted entirely in writing.

Taken together, these patterns may suggest that the employee has used AI tools such as ChatGPT or Copilot to draft the grievance.

Why AI grievances are a problem

A key issue with employees using AI to draft grievances is that minor incidents can be blown out of proportion. Innocuous actions or behaviours may be presented as bullying, sexual harassment or discrimination, creating a distorted view of the situation.

AI can take minor incidents and exaggerate, embellish or reinterpret them. As a result, when an employer receives a GIT, it is not always clear which parts reflect what actually happened and which have been “enhanced” by AI. This creates uncertainty, making it harder to assess the true nature of the grievance and respond appropriately. What makes this even more problematic is the ease and speed with which AI tools can generate paragraphs of detailed, persuasive text. Employees who might not have otherwise taken the time to raise a formal grievance can now spin up convincing complaints in a matter of seconds, increasing the likelihood that grievances are submitted.

The challenge is compounded by the fact that many GITs are submitted during ongoing disciplinary, redundancy or other sensitive processes. Employees might turn to AI when they feel threatened about their job, which can result in grievances that are highly detailed and, again, inflated.

Employers must investigate these curve-ball grievances – carefully separating fact from embellishment – while simultaneously managing the original process. Naturally, this increases the complexity and sensitivity of the situation.

The difficulty for employers is that, unless a clear reason exists not to, a GIT must be treated as a genuine grievance and investigated accordingly. Even when a GIT is excessively long and includes references to incidents that either did not occur or have been overblown, each element still needs to be carefully considered and, where possible, resolved. This, of course, can take up valuable time and resources.

How to spot AI grievances

Given the proliferation of AI – particularly on social media – many lawyers have developed a sense of whether content has been AI generated. Of course, not everyone is familiar with AI-generated content or skilled at recognising its hallmarks.

With the rise of employees using AI at work, employers will want to look out for telltale signs that text may be AI generated.

These include:

  • American spellings, such as “behavior” instead of “behaviour”, “organization” instead of “organisation”, and “favoritism” instead of “favouritism”
  • commas or full stops placed inside quotation marks (“like this.”) rather than outside (“like this”)
  • collective nouns treated as singular (“the team is” versus “the team are”)
  • frequent use of em dashes (—) instead of standard punctuation, such as commas, colons or semi-colons
  • overuse of stock phrases or transitions, such as “in conclusion”, “it is important to note” or “as mentioned above”
  • an overly formal or unnatural tone – particularly from individuals who would not normally communicate that way
  • an overly balanced or neutral tone, even when the topic might naturally invite opinion or emotion
  • vague or generic content, such as claims lacking specifics, unrealistic examples, or no reference to personal experience or context
  • conditional or hedging language, such as “this could amount to discrimination” or “this may be a protected disclosure”
  • repetition of ideas or synonyms, where the same point is made multiple times in slightly different words

Recognising these patterns can help you identify when a grievance may have been AI generated.
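Several of these signs are mechanical enough to check automatically. As a rough illustration only – the word lists, phrase lists and function name below are my own, not those of any real detection tool – a first-pass triage script might look like this. It is a crude screening aid, not a reliable AI detector, and any hits should still be judged in context.

```python
import re

# Illustrative lists only - a real check would need far broader coverage
AMERICAN_SPELLINGS = {"behavior", "organization", "favoritism", "color", "analyze"}
STOCK_PHRASES = ["in conclusion", "it is important to note", "as mentioned above"]
HEDGING_PHRASES = ["could amount to", "may be a protected disclosure"]

def ai_signs(text: str) -> list[str]:
    """Return labels for any telltale signs found in the text."""
    lowered = text.lower()
    words = set(re.findall(r"[a-z]+", lowered))
    signs = []
    if words & AMERICAN_SPELLINGS:
        signs.append("american spelling")
    if "\u2014" in text:  # em dash character
        signs.append("em dashes")
    if any(phrase in lowered for phrase in STOCK_PHRASES):
        signs.append("stock phrases")
    if any(phrase in lowered for phrase in HEDGING_PHRASES):
        signs.append("hedging language")
    return signs
```

A document flagging two or three signs at once is more suggestive than any single hit: British writers use em dashes too, and genuine grievances can quote legal phrasing.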

Dealing with AI grievances

AI is not going away, and employees will continue to use – and sometimes misuse – it. To manage the rise in AI-generated grievances, employers will need to trust their instincts: by learning the common signs of AI-generated text, they can spot suspect documents more easily and, if something feels off, run the text through an AI detection tool to confirm or allay their concerns.

Employers should also compare notes: if a grievance is suspected to be AI generated, it should be reviewed alongside other documents or communications from the employee to check whether it is consistent with their usual language and style.

Expert advice is essential; if any uncertainty exists, the document should be shared with a legal or HR specialist for a second opinion.

Reviewing and updating the practice grievance policy to ensure that any grievance requires a face-to-face meeting is a sensible move. However, in some cases, a remote meeting may be a reasonable adjustment.

But when AI use is suspected, an employer should not dismiss a grievance simply because it appears AI generated. Even if the text has been drafted with AI, the concerns raised may still be genuine. Every grievance should be treated seriously until there is reason to believe otherwise.

Summing up

AI is here to stay, and we are all going to have to learn how to use it – and how to spot signs of misuse.

One thing is clear: just because a claim has been made – possibly with the help of AI – it does not necessarily mean the claimant’s assertion is correct or that they will succeed. Employers will still need to investigate the matter fairly.

  • This article appeared in Vet Times (3 February 2026), Volume 56, Issue 7, Pages 17-18