
Tenable’s Cloud AI Risk Report 2025 shows that heavy reliance on open-source frameworks exposes sensitive data and AI models to risk.
New research from Tenable’s Cloud AI Risk Report 2025 finds that the pace of AI adoption has outstripped security preparedness, with vulnerabilities, cloud misconfigurations and exposed data accumulating across cloud environments.
A McKinsey Global Survey found that 72 percent of organisations worldwide integrated AI into at least one business function by early 2024, up from just 50 percent two years prior.
Tenable Cloud Research analysed real-world cloud workloads across Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) between December 2022 and November 2024.
The team discovered that AI development environments are heavily reliant on open-source packages, many of which are downloaded and integrated rapidly without adequate review or security checks. Tools such as Scikit-learn and Ollama were among the most deployed, found in nearly 28 percent and 23 percent of AI workloads respectively.
While these frameworks accelerate machine learning development, they also introduce hidden vulnerabilities through their open-source nature and dependency chains. Many AI workloads also run on Unix-based systems, which rely on open-source libraries, increasing the potential for unpatched vulnerabilities.
The report also found that AI adoption is tightly linked to heavy use of managed cloud services, which come with their own security trade-offs. Among organisations using Microsoft Azure, 60 percent had configured Azure Cognitive Services, 40 percent deployed Azure Machine Learning and 28 percent used Azure AI Bot Service.
While AI capabilities are being embraced at scale, they are also increasing the complexity of securing cloud environments. Improper configurations or excessive permissions often leave critical systems and sensitive AI training data vulnerable to attack.
“Organisations are rapidly adopting open-source AI frameworks and cloud services to accelerate innovation, but few are pausing to assess the security impact,” said Nigel Ng, Senior Vice President at Tenable APJ.
“The very openness and flexibility that make these tools powerful also create pathways for attackers. Without proper oversight, these hidden exposures could erode trust in AI-driven outcomes and compromise the competitive advantage businesses are chasing.”
Tenable’s recommended strategies include managing AI exposure by monitoring cloud infrastructure, classifying AI models and datasets as sensitive assets, and staying updated on AI regulations and best practices.
“AI will shape the future of business, but only if it is built on a secure foundation,” Ng added. “Open-source tools and cloud services are essential, but they must be managed with care. Without visibility into what is being deployed and how it is configured, organisations risk losing control of their AI environments and the outcomes those systems produce.”