AI has been in the spotlight since ChatGPT came on the scene in November 2022. However, more than three-fourths of AI used in the workplace remains in the shadows.
According to the 2024 Work Trend Index Annual Report from Microsoft and LinkedIn, 78 percent of AI users are bringing their own AI tools to work without the approval of IT or management. That number jumps to 80 percent in small to midsize enterprises. More than half (52 percent) are reluctant to admit that they’re using AI for important tasks.
This “shadow AI” environment creates significant risk. Users may adopt AI tools that lack necessary security controls or that don’t meet regulatory requirements. They may use AI in a way that exposes sensitive data or creates inaccurate or biased output. Organizations need to educate employees about the shadow AI threat and create and enforce policies restricting its use. There are also security tools that can detect shadow AI tools in the workplace and prioritize the risk they pose.
Why Shadow AI Is a Unique Threat
Shadow IT is not a new phenomenon — the use of unsanctioned software dates back to the earliest days of PCs in the workplace. However, the advent of cloud-based services significantly fueled the growth of shadow IT by enabling users to access apps and services with a few clicks and a credit card.
Various surveys show that employees frequently use cloud-based applications or services without IT’s knowledge or permission. According to data from Zylo, 83 percent of Software-as-a-Service (SaaS) is purchased outside IT control. An IDC study found that more than 60 percent of IT budgets sit outside the IT department.
However, shadow AI creates a unique set of risks. While shadow cloud threats are fairly well understood, AI is a new technology that’s constantly evolving. There’s far less certainty about the risks that it poses. Furthermore, malicious actors are aggressively targeting AI systems and models to access the sensitive data they analyze. Shadow AI tools bring exponentially greater risk of data leaks and exposure.
The Importance of Education and Good Governance
Given the proliferation of SaaS-based AI tools, the shadow AI problem is likely to get worse before it gets better. However, education can go a long way toward improving security practices. Organizations should incorporate shadow AI threats into their security awareness training programs so employees understand the risks. Training should also cover the handling of sensitive information, regulatory compliance and the ethical use of AI.
In addition, organizations should develop policies governing the use of AI in the workplace. A 2024 Tech.co study found that just 4 percent of organizations have established firm guidelines for gen AI use. Without effective guardrails, employees will tend to use whatever tools help them do their jobs more efficiently.
A carrot-and-stick approach can help rein in the use of unsanctioned AI tools. Management and IT teams should collaborate with users to determine which AI tools are beneficial. It may be helpful to establish an advisory committee to evaluate the capabilities of various AI tools and identify potential use cases.
Detecting Shadow AI in Use
It’s likely that shadow AI tools are already in use, so it’s important to identify them. Microsoft Defender for Cloud Apps analyzes cloud usage data to detect and monitor unsanctioned AI tools. It can also detect sensitive information shared with Microsoft Copilot and other gen AI apps, as well as potentially unethical or illegal prompts and responses.
Similarly, the Cisco Umbrella App Discovery feature analyzes DNS requests to identify cloud apps in use. The dashboard lists the application name, vendor and usage volume, and assigns a risk score based on factors such as security posture, compliance and potential threats. Administrators can group applications based on category, risk level or business function and choose to block access to specific high-risk applications.
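The DNS-based discovery approach described above can be illustrated with a minimal sketch. This is not Cisco Umbrella’s implementation — the domain list and log format below are illustrative assumptions — but it shows the basic idea: match queried domains against a watchlist of known gen AI services and tally hits for risk prioritization.

```python
# Minimal sketch of DNS-based shadow AI discovery.
# Assumptions: a hypothetical watchlist of AI-service domains and a
# simple "client_ip<TAB>queried_domain" log format. A real deployment
# would pull both from the security platform, not hard-code them.
from collections import Counter

AI_DOMAINS = {
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_ai_queries(dns_log_lines):
    """Count DNS queries that resolve known AI-service domains."""
    hits = Counter()
    for line in dns_log_lines:
        _, _, domain = line.strip().partition("\t")
        if domain in AI_DOMAINS:
            hits[domain] += 1
    return hits

sample_log = [
    "10.0.0.5\tchatgpt.com",
    "10.0.0.7\texample.com",
    "10.0.0.5\tclaude.ai",
]
print(flag_ai_queries(sample_log))
```

In practice, the resulting counts would feed a dashboard like the one described above, where usage volume is one input into each application’s risk score.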
Cerium’s AI and security teams understand the threats of shadow AI and can help you implement the tools and processes needed to combat them. We provide security awareness training, policy development, security tools and more to help you maximize the value of AI while minimizing risk. Contact us to schedule a confidential consultation.