For most organizations, GenAI is new and uncharted territory. While AI can dramatically boost productivity and improve decision-making, it also comes with an array of threats. That’s why it’s critical to take a strategic approach to AI adoption that considers not only its benefits but also its potential risks.
A good starting point is an AI policy document, which provides guidelines for appropriate AI usage in the workplace. It should cover which types of AI employees may use, how they may and may not use them, and the consequences for violating those rules. A well-crafted policy helps ensure that AI is used in a way that benefits the organization, aligns with its core values and mission, and does not expose sensitive information or violate legal or regulatory requirements.
AI policy documents will vary from organization to organization. In general, however, they should include the following provisions.
What Types of AI the Document Covers
Generative AI tools may be cloud-based, provided by a vendor, or developed internally. AI capabilities are also being incorporated into a wide range of applications, services, and collaboration tools. Different policies may apply to the various types of AI, so it’s important to spell out what the document covers.
What Types of AI Employees Can and Cannot Use
A complementary provision outlines the types of AI tools employees are permitted to use, those that require specific approval, and those that are strictly prohibited. This guidance may vary based on the employee’s job role, the type of task or project involved, or the potential impact on the organization.
Additionally, the policy should address non-compliant usage, such as generating risky or unethical content with AI applications. It is also crucial to highlight security concerns, including the risk of leaking sensitive information to AI tools or oversharing private content, such as project data, intellectual property, or customer information.
Whether Training Is Required Before Beginning AI Use
Some organizations require employees to complete training before they begin using AI. Any such requirement should be spelled out in the document.
How to Protect Sensitive Information When Using AI
The AI policy document should explain what constitutes sensitive information and when it may and may not be used as input for AI. The rules may vary by the type of AI: for example, an organization might prohibit entering sensitive information into public AI tools such as ChatGPT while applying a different set of rules to internally developed AI applications. The policy may also require employees to anonymize data before using it as input for AI. In addition, organizations can implement IT security controls that restrict the ability to upload sensitive information, providing a layer of protection against accidental data exposure.
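To make the anonymization requirement concrete, redaction can be as simple as pattern matching applied before a prompt ever leaves the organization. The sketch below is a minimal Python example; the patterns and the redact() helper are illustrative assumptions, and a production deployment would rely on a dedicated data loss prevention (DLP) or PII-detection service rather than hand-rolled regular expressions.

```python
import re

# Minimal sketch of pre-prompt redaction. The patterns and the redact()
# helper are illustrative assumptions, not a standard API; real deployments
# would use a dedicated DLP or PII-detection service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with labeled placeholders
    before the text leaves the organization."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this note from jane.doe@example.com (SSN 123-45-6789)."
    print(redact(prompt))
    # Summarize this note from [EMAIL REDACTED] (SSN [SSN REDACTED]).
```

Routing all AI-bound text through a single choke point like this also gives the organization one place to log, audit, and tighten the rules as the policy evolves.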
How Employees Can Use AI Output
AI tools can produce a wide range of outputs, from source code to legal briefs to graphics and video. While this is beneficial, it also carries the risk of inaccuracies, bias, legal exposure, and ethical issues, and AI output can itself reveal sensitive data. The AI policy document should cover the appropriate review and use of AI output and make clear that responsibility for that output rests with the employees who use it.
What Steps Employees Should Take to Prevent Bias and Discrimination
AI models are trained on data created by humans and can reproduce the biases embedded in that data. Employees should ensure that AI use and outputs comply with the company's anti-discrimination policies and applicable regulatory requirements.
The Extent to Which Human Oversight Is Required
AI excels at automating routine tasks and reducing the need for human input. However, overreliance on AI creates risk for the business. The AI policy document should outline the extent to which humans must oversee tasks, review and verify outputs, and audit AI tools.
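As a simple illustration of the review-and-verify step, the sketch below shows one possible human-in-the-loop gate: an AI draft is released only after a named reviewer approves it, with a basic audit record of the decision. The human_review_gate() function and the log format are hypothetical, not part of any specific product's workflow.

```python
from datetime import datetime, timezone

def human_review_gate(draft: str, reviewer: str) -> str | None:
    """Release an AI-generated draft only after a named reviewer approves it."""
    print(f"--- AI draft for review by {reviewer} ---\n{draft}\n")
    decision = input("Approve this output for use? [y/N] ").strip().lower()
    approved = decision == "y"
    # Keep a simple audit record of who reviewed the draft and the outcome.
    print(f"[audit] {datetime.now(timezone.utc).isoformat()} "
          f"reviewer={reviewer} approved={approved}")
    return draft if approved else None
```

Even a lightweight gate like this creates the paper trail auditors expect and keeps a person accountable for every AI output that reaches production.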
How the AI Policy Document Relates to Other Policies and Regulations
Stakeholders throughout the organization should be involved in drafting the AI policy document to ensure it reflects a holistic understanding of how AI could affect the organization, both positively and negatively. We recommend establishing an AI Steering Committee to lead development of the policy and to ensure that AI usage aligns with the organization's overall AI strategy, vision, core values, and mission. The policy document should also be reviewed periodically so it remains current and relevant. Cerium's AI team can guide you through this process, providing constructive input to improve an existing policy or helping draft a new one. As part of the Cerium AI Workshop service offering, our team can also provide a standard template for your organization to use as a foundation.