The threat landscape is complex and constantly changing, requiring a flexible, adaptable approach to security. Within this dynamic environment, however, security best practices are well understood. There are frameworks and standards that lay the foundation for a robust security strategy.
AI, in contrast, is new territory that can seem daunting even to seasoned security professionals. AI models are non-deterministic: the same input can produce different outputs, which calls for different security approaches than traditional software. AI models are also susceptible to tampering, theft and adversarial attacks that can be difficult to detect and mitigate. Additionally, models that continue to learn and adapt after deployment can develop new vulnerabilities over time.
Because of these unique challenges, each organization should tailor its AI security strategy based on its specific models, datasets and use cases. Nevertheless, there are some general guidelines that organizations should follow when developing that strategy.
Stick with Familiar Concepts
AI security differs from traditional cybersecurity in several ways, but many of the fundamental concepts still apply. Organizations still need to incorporate access controls, data loss prevention and other basic security techniques into their AI applications. Remember that a good security strategy is adaptable. Take many of the same tools, techniques and policies used to secure the traditional IT environment and adapt them to address AI-specific threats.
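For example, a role check and a basic data loss prevention pass that already guard other internal applications can be reused at the AI boundary. The sketch below is illustrative only; the role names, redact_sensitive helper and call_model placeholder are assumptions, not part of any particular product.

```python
# Illustrative sketch: reusing familiar controls (role-based access and a
# simple DLP pass) in front of an internal AI assistant. All names here
# (ALLOWED_ROLES, redact_sensitive, call_model) are hypothetical.
import re

ALLOWED_ROLES = {"analyst", "engineer"}  # roles permitted to query the assistant

def redact_sensitive(text: str) -> str:
    # Placeholder DLP step; a real deployment would plug in the
    # organization's existing DLP tooling or pattern library.
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)

def call_model(prompt: str) -> str:
    # Placeholder for the actual model invocation.
    return f"(model response to: {prompt})"

def handle_prompt(user_role: str, prompt: str) -> str:
    # The same access-control idea used for any internal application,
    # applied at the AI boundary.
    if user_role not in ALLOWED_ROLES:
        raise PermissionError("user is not authorized to use the AI assistant")
    return call_model(redact_sensitive(prompt))

print(handle_prompt("analyst", "Summarize the ticket for customer 123-45-6789"))
```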
Lean On Standards and Frameworks
Just as many core security concepts apply to AI, so do well-established standards and frameworks such as the NIST Risk Management Framework and MITRE ATT&CK. In fact, both have AI-focused counterparts: the NIST AI Risk Management Framework (AI RMF) and MITRE ATLAS. Building the AI security strategy on these standards provides consistency throughout the IT environment.
Establish Your Organization’s AI Risk Tolerance
Stakeholders throughout the business need to understand the risks associated with AI and develop policies establishing the acceptable level of risk. The risk tolerance will likely vary depending on the use case, dataset and other factors. Clearly communicating risk thresholds aids in the development of security policies and the selection, implementation and management of AI applications and tools.
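One lightweight way to make those thresholds usable is to record them in a form that review boards and automated checks can reference. Here is a minimal sketch, assuming hypothetical use cases, tolerance levels and control names.

```python
# Illustrative only: recording agreed risk tolerances per AI use case so
# policy and procurement decisions reference the same source. The use cases,
# levels and controls below are hypothetical examples.

RISK_TOLERANCE = {
    "internal_code_assistant": {
        "tolerance": "medium",
        "data_allowed": {"source_code", "internal_docs"},
        "required_controls": {"sso", "audit_logging"},
    },
    "customer_facing_chatbot": {
        "tolerance": "low",
        "data_allowed": {"public_docs"},
        "required_controls": {"sso", "audit_logging", "output_filtering", "human_review"},
    },
}

def approved_for(use_case: str, data_class: str) -> bool:
    # Simple lookup a review board or CI policy check could call.
    policy = RISK_TOLERANCE.get(use_case)
    return policy is not None and data_class in policy["data_allowed"]

print(approved_for("customer_facing_chatbot", "source_code"))  # False: outside tolerance
```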
Mitigate Risk Throughout the AI Lifecycle
There’s an element of risk at every phase of AI development and deployment, including model tampering, data tampering, supply chain compromise and direct compromise of the AI infrastructure. It’s critical to mitigate that risk throughout, from data acquisition through model development, training and deployment. Careful selection and monitoring of third-party models, software and data are especially important.
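As one concrete lifecycle control, a third-party model artifact can be checked against a pinned hash before it is ever loaded. The sketch below assumes a hypothetical file path and placeholder hash; it illustrates one piece of supply chain hygiene, not a complete supply chain program.

```python
# Minimal sketch: verify a downloaded third-party model artifact against a
# pinned SHA-256 before loading it. The path and pinned hash are placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0" * 64  # placeholder; record the real hash at acquisition time

def verify_model_artifact(path: str, expected_sha256: str = PINNED_SHA256) -> bool:
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

if __name__ == "__main__":
    # Refuse to deploy an artifact that fails the integrity check.
    if not verify_model_artifact("models/third_party_model.bin"):
        raise RuntimeError("model artifact failed integrity check; do not load")
```

The same pinning idea extends naturally to training data snapshots and software dependencies.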
Focus on the Most Common Threats
As with any security strategy, it makes sense to prioritize the most likely threats to your organization. This requires not only an overall understanding of AI threats but also an ongoing analysis of emerging tactics and trends. Knowing what threat actors are targeting enables you to implement the right controls and harden your defenses where AI applications are most vulnerable. It also allows for optimum use of scarce IT and security resources.
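A simple likelihood-and-impact scoring pass is one way to turn that analysis into a ranked list that guides where controls go first. The sketch below uses hypothetical threat categories and scores, not an actual assessment.

```python
# Illustrative only: rank AI threat categories by likelihood x impact so
# scarce security resources go to the highest-priority risks first.
# The categories and scores are hypothetical examples.

threats = [
    {"name": "prompt injection",        "likelihood": 5, "impact": 3},
    {"name": "training data poisoning", "likelihood": 2, "impact": 5},
    {"name": "model theft",             "likelihood": 2, "impact": 4},
    {"name": "sensitive data leakage",  "likelihood": 4, "impact": 4},
]

for t in sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    print(f'{t["name"]}: priority score {t["likelihood"] * t["impact"]}')
```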
Train Your Users in AI Best Practices
Users are often the weakest link in any security strategy. Without clear policies and guidelines, users may expose the organization and its sensitive information to risk. Make sure users understand your organization’s policies for acceptable AI use to protect data and meet legal, regulatory and ethical requirements.
How Cerium Can Help
The Cerium team has developed and deployed AI applications in a wide range of organizations. Our security experts also stay abreast of the evolving AI threat landscape. Together, our team can help you identify AI risks and develop an AI strategy based on best practices. Let us help you implement the policies, procedures and controls for the safe and secure use of AI.