Top 5 AI employee fears and how to combat them


As artificial intelligence adoption surges in business, employees are left to wonder how systems placed on “automatic” can be controlled and how long it will be before their jobs are on the chopping block.

Those were two top fears revealed in a recent study by Gartner about the five main concerns workers have over generative AI and AI in general. And those fears are warranted, according to survey data. For example, IDC predicts that by 2027, 40% of current job roles will be redefined or eliminated across Global 2000 organizations adopting genAI.

A remarkable 75% of employees said they are concerned AI will make certain jobs obsolete, and about two-thirds (65%) said they are anxious about AI replacing their job, according to a 2023 survey of 1,000 US workers by professional services firm Ernst & Young (EY). About half (48%) of respondents said they are more concerned about AI today than they were a year ago, and of those, 41% believe it is evolving too quickly, EY’s AI Anxiety in Business Survey report stated.

“The artificial intelligence (AI) boom across all industries has fueled anxiety in the workforce, with employees fearing ethical usage, legal risks and job displacement,” EY said in its report.

GenAI in particular has reshaped the future of work, enabling work to be done equally well and securely across remote, field, and office environments, according to EY.

Managing highly distributed teams doing complex, interdependent tasks is not easy; nor is finding employees trained well enough to offer effective IT support across a broad security threat landscape of applications, platforms, and endpoints. That’s where AI promises to help: automating repetitive tasks such as coding, data entry, research, and content creation, and amplifying the effectiveness of learning in the flow of work, according to EY.

Gartner’s recent study identified five unique fears employees have about how their company will apply AI:

  • Job displacement due to AI that makes their job harder, more complicated, or less interesting
  • Inaccurate AI that creates incorrect or unfair insights that negatively impact them
  • Lack of transparency around where, when, and how the organization is using AI, or how it will impact them
  • Reputational damage that occurs because the organization uses AI irresponsibly
  • Data insecurity because the implementation of AI solutions puts personal data at risk 

“Employees are concerned about losing their job to AI; even more think their job could be significantly redesigned due to AI,” said Duncan Harris, research director for Gartner’s HR practice. “When employees have these fears, they all have a substantial impact on either the engagement of the employee, their performance, or sometimes both.”

One problem Gartner cited in its report is that organizations aren’t being fully transparent about how AI will impact their workforce. Organizations can’t just provide information about AI; they also need to provide context and details on what risks and opportunities are influencing their AI policy and how AI relates to key priorities and company strategy. 

Organizations can overcome employee AI fears and build trust by offering training or development on a range of topics, such as how AI works, how to create prompts and effectively use AI, and even how to evaluate AI output for biases or inaccuracies. And employees want to learn. According to the report, 87% of workers are interested in developing at least one AI-related skill.

AI has the potential to create high business value for organizations, but employee distrust of the technology is getting in the way, Gartner’s study found. Leaders involved in AI cite concerns about ethics, fairness, and trust in AI models as top barriers they face when implementing the technology.

Employee concerns stem not from fear of the technology itself, but from fear of how their company will use it.

“If organizations can win employees’ confidence, the benefits will extend beyond just AI projects. For example, high-trust employees have higher levels of inclusion, engagement, effort, and enterprise contribution,” Harris said.

Companies should also partner with employees to create AI solutions, which will reduce fears about inaccuracy. Companies that show employees how AI works, let them weigh in on where it could be helpful or harmful, and involve them in testing solutions for accuracy can allay those fears.

Organizations also need to formalize accountability through new governance structures that demonstrate they are taking AI threats seriously.

“For example, to boost employee trust in organizational accountability, some companies have deputized AI ethics representatives at the business unit level to oversee implementation of AI policies and practices within their departments,” Harris said.

Organizations should also establish an employee data bill of rights to serve as a foundation for their AI policies.

“The bill of rights should cover the purpose for data collection, limit the data collected to the defined purpose, commit to use data in ways that reinforce equal opportunity, and recognize employees’ right to awareness about the data collected on them,” Harris said.
