IT pros find generative AI doesn’t always play well with others


While nine out of 10 IT professionals say they want to implement generative artificial intelligence (genAI) in their organization, more than half have integration, security and privacy concerns, according to a survey released Wednesday by SolarWinds, an infrastructure management software firm.

The SolarWinds 2024 IT Trends Report, "AI: Friend or Foe?", found that very few IT pros are confident in their organization's readiness to integrate genAI. The company surveyed about 7,000 IT professionals online about their views of the fast-evolving technology; despite a near-unanimous desire to adopt genAI and other AI-based tools, less than half of respondents believe their infrastructure can support the new technology.

Only 43% said they are confident that their company’s databases can meet the increased needs of AI, and even fewer (38%) trust the quality of data or training used in developing the technology. “Because of this, today’s IT teams see AI as an advisor (33%) and a sidekick (20%) rather than a solo decision-maker,” SolarWinds said in its report.

Privacy and security worries were cited as the top barriers to genAI integration, and IT pros specifically called for increased government regulations to address security (72%) and privacy (64%) issues. When asked about challenges with AI, 41% said they’ve had negative experiences; of those, privacy concerns (48%) and security risks (43%) were most often cited.

More than half of respondents also believe government regulation should play a role in combating misinformation. “To ensure successful and secure AI adoption, IT pros recognize that organizations must develop thorough policies on ethics, data privacy, and compliance, pointing to ethical considerations and concerns about job displacement as other significant barriers to AI adoption,” the report said.

SolarWinds found that more than a third of organizations still lack formal ethics, privacy, and compliance policies to guide proper genAI implementation. “While talk of AI has dominated the industry, IT leaders and teams recognize the outsize risks of the still-developing technology, heightened by the rush to build AI quickly rather than smartly,” said Krishna Sai, senior vice president, technology and engineering, at SolarWinds.

Indeed, leading security experts are predicting hackers will increasingly target genAI systems and attempt to poison them by corrupting data or the models themselves. Earlier this year, the US National Institute of Standards and Technology (NIST) published a paper warning that “poisoning attacks are very powerful and can cause either an availability violation or an integrity violation.

“In particular, availability poisoning attacks cause indiscriminate degradation of the machine learning model on all samples, while targeted and backdoor poisoning attacks are stealthier and induce integrity violations on a small set of target samples,” NIST said.
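The availability-style attack NIST describes can be illustrated with a toy experiment. The sketch below (an illustration, not drawn from the NIST paper or the SolarWinds report) trains a simple nearest-centroid classifier on synthetic two-class data, then retrains it after an attacker randomly flips a majority of the training labels; accuracy on clean test data collapses, showing the "indiscriminate degradation" that distinguishes availability poisoning from a stealthy, targeted backdoor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D Gaussian classes for training and testing.
X_train = np.vstack([rng.normal(-2.0, 0.5, (200, 2)),
                     rng.normal(+2.0, 0.5, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)
X_test = np.vstack([rng.normal(-2.0, 0.5, (100, 2)),
                    rng.normal(+2.0, 0.5, (100, 2))])
y_test = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    # "Training" is just averaging the points assigned to each label.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def accuracy(centroids, X, y):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in classes], axis=1)
    preds = np.array(classes)[dists.argmin(axis=1)]
    return (preds == y).mean()

clean_acc = accuracy(fit_centroids(X_train, y_train), X_test, y_test)

# Availability poisoning: the attacker flips 60% of training labels,
# corrupting the data the model learns from (not the test data).
y_poisoned = y_train.copy()
flip = rng.choice(len(y_train), size=int(0.6 * len(y_train)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_acc = accuracy(fit_centroids(X_train, y_poisoned), X_test, y_test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Because the flipped labels drag each class centroid toward the opposite cluster, the poisoned model misclassifies ordinary inputs across the board; a backdoor attack, by contrast, would leave overall accuracy high and misbehave only on trigger samples, which is why NIST calls it stealthier.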

Overall, the IT industry’s sentiment reflects “cautious optimism about AI despite the obstacles,” SolarWinds reported. Almost half of IT professionals (46%) want their company to move faster in implementing the technology despite the costs, challenges, and concerns, even as, per the figures above, only a minority trust their databases or the quality of the data used to train AI systems.

IT pros cited AIOps (artificial intelligence for IT operations) as the technology that will have the most significant positive impact on their role (31%), ranking it above large language models and machine learning. More than a third of respondents (38%) said their companies already use AI to make IT operations more efficient and effective.
