
Gemma Askham, journalist

‘We’re facing the biggest change management phase in human history,’ says Patrick Lynch, AI faculty lead at Hult International Business School. He’s talking about integrating AI agents as the next wave of ‘employees’. Unlike prompt-driven content-generating tools such as ChatGPT, AI agents undertake real-world actions, such as making travel bookings and adjusting reservations around your online calendar.

A 2025 Capgemini survey of 1,500 directors found that 93% believe scaling AI agents over the next year will give businesses a competitive edge. Deloitte already uses AI agents to guide human auditors’ focus by flagging anomalies in financial statements, while HR platforms such as HireVue can evaluate candidate video interviews. Elsewhere, AI customer service agents are summarising client histories, classifying queries and offering first-line responses.

‘Yes, it’s about tech, but it’s more about your people’

‘Over the next one to two years, AI systems will likely make multistep decisions autonomously, generating compliance alerts, compiling documentation and notifying stakeholders, all without prompts,’ says Rohan Whitehead, data training specialist at the Institute of Analytics. ‘By 2030, interlinked AI agents may coordinate across departments via platforms like Microsoft Copilot or Salesforce Einstein, responding to performance data, market signals or policy changes.’

Rather than the displacement of human workers, Lynch sees opportunity in AI. ‘To steward this through, leaders must focus on the people side. Yes, it’s about tech, but it’s more about your people.’

New human roles

Getting agentic AI up and running is creating a whole new discipline of human jobs. ‘We’ll need AI trainers, AI alignment specialists and AI orchestrators,’ says tech ethicist Nell Watson, president of the European Responsible AI Office and co-author of the forthcoming book Safer Agentic AI.

AI trainers use datasets and examples to impart subject knowledge, such as accounting regulations. AI alignment specialists are the moral compass, defining ethical constraints to ensure AI agents behave fairly and consistently. And AI orchestrators lead hybrid teams. ‘They decide tasks best suited for human intuition or specialised AI agents, then design collaborative workflows,’ Watson explains.

‘The workplace will need fluent AI-human intermediaries’

Universities are already adapting. The AI for business course at the London School of Economics teaches AI governance alongside technical fluency. ‘The workplace will need professionals who aren’t just coders or analysts but fluent intermediaries between AI and human teams,’ Whitehead says.

Leadership refocus

Alongside innovative new roles, leadership must evolve. ‘In a hybrid workforce, a manager’s role shifts from task supervision to psychological safety, creating an environment where humans feel confident alongside highly capable digital teammates,’ Watson says.

That translates into a culture where employees can question or critique AI recommendations without being penalised for not trusting ‘the data’, and where human performance – particularly creative gambles – isn’t benchmarked against machines that take only optimised, predictable paths.

‘The goal is to ensure AI serves human talent, not intimidates it’

Given that AI can access vast amounts of information instantly, human teammates must not be made to feel inadequate or embarrassed to ask for help, Watson adds. ‘The goal is to ensure AI serves human talent, not intimidates it.’

Start right

Positioning agentic AI as a team collaborator, not a replacement, starts from the get-go.

‘If leadership drops it on the team via a corporate email or quarterly meeting, you’ll meet resistance and churn,’ warns Chetan Dube, founder of Quant AI and a pioneer of AI-enabled digital labour. He advises starting with a small group of early adopters who can provide feedback ahead of roll-out and champion the benefits to the rest of the workforce.

‘Just as a top-tier laptop and office perks became a draw for talent a decade ago, AI infrastructure will be a key differentiator,’ says Watson. Proof: a 2024 LinkedIn Workforce Report found candidates in digital and data roles were more likely to engage with job posts mentioning AI tools or hybrid workflows. Watson believes that rather than being ‘bogged down’ by legacy systems and manual processes, today’s top candidates already know the value of agentic assistants – particularly in taking on repetitive tasks that free them to focus on high-impact work.

Resistance reframed

To answer the thorniest question – how to convince existing employees to embrace the very tech they fear threatens their jobs – Watson is all about upskilling. ‘The message should be: “This technology is here and our job is to become masters at using it. We’ll invest in you to remain at the forefront.” This reframes learning not as a threat but as a shared journey.’

The BBC, for example, runs company-wide training on generative AI tools such as Microsoft Copilot – including when not to rely on AI.

‘Present AI tools with precise time rewards’

The ultimate sell is showing that hybrid collaboration boosts wellbeing, says Andrew Lloyd, managing director at legaltech provider Search Acumen. ‘For a workforce under huge pressure to do more for less, present tools with precise time rewards. I explain it as the difference between a lawyer leaving at 5pm to put their kids to bed or being stuck in the office until 10pm.’

Untouched by tech

‘Not all work is suitable for automation,’ Whitehead says. He predicts that fields resistant to automation – such as people management, strategic foresight and ethical judgment – will become more valuable rather than diminish.

‘Some tasks should never be fully automated’

Similarly, Watson insists the final ‘why’ behind a decision must always fall to a human. ‘We should never fully automate tasks that require deep empathy – such as final decisions in palliative care – high-stakes negotiation with human counterparts, or ultimate legal and ethical responsibility for an outcome.’

Hello, HAL…

Humanising AI will move beyond saying thank you to ChatGPT or naming your chatbot. ‘The idea of a digital teammate will become a functional reality,’ Watson says.

She envisages agents gaining a legal and economic status similar to that of corporations, which are treated as legal persons. ‘This isn’t about granting them consciousness or rights; it’s to create a practical framework so they can hold assets, sign contracts and be held accountable for their actions.’

Should AI agents gain employee status, Watson believes leaders must prepare for an exciting and, to some, unnerving future. ‘It will be one where some of the most productive and influential members of an organisation aren’t human at all,’ she says.

More information

Visit ACCA’s AI hub for resources and guidance to demystify AI and help you build knowledge
