03 Dec 2025
by Gina Neale

Can we trust AI chatbots with mental health support in our businesses?

As public use of AI chatbots grows, many employees are already turning to tools like ChatGPT for reassurance, stress relief or help thinking through workplace concerns.

More than one in three adults (37%) have used an AI chatbot for mental health or wellbeing support (Mental Health UK), and this behaviour is emerging independently of employers.

Whether organisations choose to adopt wellbeing AI tools or not, employees will continue using them privately, making it essential for HR leaders to understand the trend, its risks and its limitations. 

At the same time, many employers are exploring where AI might fit within their wellbeing strategies, increasing the need for clear boundaries, safe guidance and informed oversight.

For HR leaders, the challenge is not simply whether to adopt or promote AI, but how to safeguard employees who are already using it.

The pros: accessibility, immediacy and early support

When used carefully, AI chatbots can offer early benefits for low-level workplace concerns such as stress or managing workload, helping employees access support more quickly and comfortably. 

  1. Approachable early support: AI tools offer 24/7 availability, reducing delays and providing real-time responses. For example, an employee experiencing late-night worry may seek instant reassurance from a chatbot rather than waiting until the next day. This immediacy can help prevent concerns from escalating.
  2. Lower emotional barriers: Some employees, especially younger ones, may find it easier to open up to an AI tool than to a colleague or manager, helping them express concerns earlier.
  3. Thought reframing and structured guidance: AI can help individuals explore different perspectives through structured prompts, mood check-ins and mindfulness pathways that support reflective thinking. 
  4. Early prevention and safe signposting: Used responsibly, AI can guide individuals toward early support while directing them to qualified professionals when issues require clinical expertise.

The cons: limits in empathy, expertise and safety

It’s equally important to understand the risks, particularly as employees may turn to AI for issues it is not clinically equipped to manage.

  1. Lack of clinical expertise: AI tools are not clinically trained or emotionally intelligent. They cannot interpret trauma, crisis cues or complex emotional states.
  2. Risk of inaccurate or harmful guidance: AI outputs may reflect inaccuracies or bias in their training data, and poor advice can make a situation worse. Mental Health UK reports that 11% of adults who used mental health chatbots felt more anxious or depressed afterwards.
  3. No accountability or safeguarding: AI does not fall under professional or safeguarding standards and cannot be held responsible for incorrect or risky guidance.
  4. Privacy and data concerns: Employees may share sensitive information without understanding how it is stored, used or shared, which can undermine trust and raise compliance issues.
  5. Potential for overreliance: Some individuals may continue using AI for concerns requiring human intervention, delaying access to qualified support and potentially worsening outcomes.

Finding the right balance: what HR leaders should do

Wherever an organisation sits on the adoption spectrum, HR’s role is to ensure clear guidance, safe boundaries and appropriate escalation routes for employees who choose to use AI. This approach helps organisations manage risk responsibly while supporting employees who are already engaging with these tools independently. 

  1. Listen to employee needs: Engage with employees, including those with lived mental health experience, to understand how AI tools are being used and perceived. 
  2. Provide clear internal guidance: Help employees understand when AI may be helpful, when it should not be relied upon, how to use it effectively and how to escalate concerns to trained professionals.
  3. Ensure data transparency: Be clear about how any digital wellbeing tools collect, store and use data. Transparency is essential for trust and compliance.
  4. Champion inclusivity and access: Ensure that digital tools do not create barriers and that human-led alternatives remain easily available for all employees.
  5. Keep human connection at the centre: If AI tools are used at all, position them strictly as early-stage, supplementary support. Employees must always have clear, accessible pathways to real people for meaningful or complex conversations.
  6. Maintain a strong, non-digital wellbeing offer: Ensure your organisation provides robust, human-led wellbeing support, including EAPs, trained managers, mental health first aiders and clinical professionals, so that technology enhances, rather than replaces, a comprehensive wellbeing package. 

If an organisation chooses to adopt AI tools

If an organisation decides to explore digital wellbeing tools, it must ensure these solutions are developed with clinical oversight, grounded in evidence-based practice and independently evaluated. Tools should have clear use cases and be transparent about their limitations. Where reliable evidence is lacking, the tool should not be adopted.

The future of AI and mental health at work

AI will continue to evolve and shape how employees seek informal support, regardless of whether organisations choose to adopt these tools. The priority for HR now is to stay informed, maintain strong governance and ensure that any use of AI, formal or informal, is supported by clear boundaries, robust escalation routes and reliable human support. 

Technology can play a helpful role in early-stage wellbeing conversations, but it should always sit within a wider, clinically sound and human-led wellbeing strategy.

Supplied by REBA Associate Member, Avantus

Flexible Benefits & Technology specialist providing online, highly configurable platforms to customers and intermediaries worldwide.

Contact us today