
All About People
Across all sectors, AI is being embedded into systems and processes. In HR, predictive analytics are shaping recruitment decisions. In customer service, chatbots are replacing traditional support roles. In marketing, generative tools are speeding up content production. While this brings real benefits, we’re also seeing new pressures:
• Job insecurity – People worry about being replaced or left behind
• Distrust in leadership – Especially when changes feel sudden or poorly communicated
• Ethical concerns – Questions over how AI is being used in areas like surveillance or decision-making
• Tensions between teams – Particularly over which tools to use and how they’re rolled out
These are not always open disputes. Often, it’s a slow build-up of disengagement, resistance to change, or misunderstandings across departments.
AI cuts to the core of how we see ourselves at work. It touches identity, value, and purpose. When a tool starts making decisions someone used to make, or removes a task they took pride in, it can leave them feeling uncertain, even invisible. Yet these responses are not signs of failure; they're signs that people care. The key is helping teams move from anxiety to agency.
We work with organisations to support people through change, especially when that change involves uncertainty. Using the CINERGY™ conflict coaching model and our Conflict First Aid programme, we help individuals and teams:
• Understand their own reactions to disruption
• Hear each other with empathy, even when views differ
• Communicate clearly, even when the answers aren’t all there yet
• Problem-solve together, rather than polarise
When you get ahead of tensions, you create space for real conversations, not just resistance.
We recommend that organisations developing their AI plans also include a people-first conflict resolution strategy:
• Be clear and consistent about why AI is being introduced
• Create channels for feedback, challenge, and participation
• Involve people across levels and functions in the process
• Make it explicit that technology enhances human work—it doesn’t erase it
Trust is essential. People don’t need all the answers, but they do need honesty and involvement.
When people push back against AI, it's often framed as resistance to progress. But we see it differently. It's a call for reassurance, clarity, and connection. And that's something every organisation can work with. By addressing AI-related tensions early and constructively, you not only reduce conflict but turn it into an opportunity to engage, evolve, and grow together.
AI isn't just changing workplace tasks; it's also influencing how we manage conflict itself.
Philip Corsano’s recent article for the Civil Mediation Council explores how AI is being used in dispute resolution, including predictive modelling and machine learning. While these tools can bring speed and cost savings, there are serious questions around privacy and confidentiality, especially in mediation, where trust and openness are key.
Right now, most AI analysis draws from public legal decisions, not confidential mediation cases. But the ethical questions are real: What happens if AI tools start being used during mediation sessions? Could private information be exposed, misinterpreted, or stored in ways that undermine the process?
To manage these risks, experts suggest two key safeguards:
1. Confidentiality by design – Build privacy protections into the AI tools from the start
2. Informed consent – Make sure participants understand what tools are being used, how data is handled, and have the right to opt out
Ryan Abbott and Brinson Elliott argue that AI can assist but shouldn’t replace human mediators. AI tools are good at automating routine tasks, spotting patterns, or forecasting outcomes in simple cases. But they can’t handle the nuance of human emotion, credibility, or cultural context—nor do they navigate ethical or value-based dilemmas well.
There's also the issue of 'black box' systems, where even the designers can't fully explain how decisions are made. That's not good enough when fairness and justice are on the line.
The EU’s new AI Act (2024) rightly classifies AI in justice and dispute resolution as high risk and insists that key decisions remain human-led.
Whether we’re supporting teams navigating change or designing future mediation systems, one thing remains true: conflict resolution is, at its heart, a human process. AI can support that work, but it can’t replace the need for empathy, ethics, and honest conversation.
Let’s use AI wisely, not just to be faster, but to be fairer. And let’s continue to prioritise people as we shape this new frontier together.
If you're navigating the challenges of AI in your workplace or wondering how to prepare your teams for change, we’d love to talk. Get in touch with the team here at All About People to explore how we can support your organisation with conflict resolution, coaching, and collaborative problem-solving.