Most organizations recognize the critical importance of data privacy in establishing customer trust. In Cisco’s 2025 Data Privacy Benchmark Study, 86 percent of respondents said that data privacy has a positive impact, up from 80 percent in the previous year’s study. Although compliance with privacy regulations carries real costs, 95 percent of respondents saw a significant return on investment.
As in most areas of business, AI is changing the game. Consumer confidence in AI is low and falling. The 2024 Edelman Trust Barometer found that just 35 percent of U.S. consumers trusted AI, down from 50 percent in 2019. AI-generated scams, misinformation and “deepfakes” have made consumers wary of AI, and organizations must take steps to ensure that their use of AI doesn’t erode customer trust.
Organizations are benefiting from AI — in a 2024 EY survey, 97 percent of business leaders said that AI delivered positive returns on investment. However, organizations are also aware of the data privacy risks. The Cisco study found that 64 percent of organizations worry that they’ll inadvertently expose sensitive information. These concerns highlight the privacy and compliance challenges that come with AI adoption.
How AI Increases Privacy Risks
Data privacy is more than just preventing the exposure of sensitive information. It’s about giving individuals control over how their data is collected, stored and used. That concept has evolved with the advent of AI. Consumers might not care if companies track what they look at and buy, but they probably don’t approve of companies using their data to train AI models.
AI has created new privacy risks due to the huge volume of data required for training. Inevitably, some companies will collect sensitive data without consent or use it in ways individuals never agreed to. More personal, financial, healthcare and biometric data is being collected than ever before, increasing the risk that it will be exposed or used in unacceptable ways.
An array of new government and industry regulations are emerging to address these risks, but organizations are struggling to remain compliant. Organizations need a systematic approach to ensure that innovation aligns with regulatory objectives.
A Compliance-First Approach to AI
The key is to take a compliance-first approach to AI adoption. Organizations should conduct regular reviews of their compliance practices to identify gaps and risks. They should also establish policies and procedures for collecting, storing, accessing and using data, with mechanisms for accountability. Employees should be trained to ensure they understand these policies and their responsibility to abide by them.
AI privacy best practices provide a foundation for developing policies and procedures. Organizations should minimize data collection, storage and processing, use techniques such as data masking and anonymization, and delete data as soon as it is no longer needed. They should also seek explicit consent from individuals before using their data, and proactively report on how data is stored, accessed and used.
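To make this concrete, here is a minimal Python sketch of how data masking and pseudonymization might be applied to a customer record before it is used for analytics or AI training. The anonymize_record helper, field names and masking rules are hypothetical examples for illustration, not a prescribed implementation.

import hashlib
import re

def anonymize_record(record: dict, salt: str) -> dict:
    """Return a copy of the record with direct identifiers masked or pseudonymized."""
    cleaned = dict(record)
    # Replace the customer ID with a salted one-way hash (pseudonymization).
    if "customer_id" in cleaned:
        cleaned["customer_id"] = hashlib.sha256(
            (salt + str(cleaned["customer_id"])).encode("utf-8")
        ).hexdigest()
    # Mask the local part of the email address so only the domain remains visible.
    if "email" in cleaned:
        cleaned["email"] = re.sub(r"^[^@]+", "***", cleaned["email"])
    # Drop free-text notes entirely rather than risk retaining sensitive details.
    cleaned.pop("notes", None)
    return cleaned

record = {
    "customer_id": 10482,
    "email": "jane.doe@example.com",
    "purchase_total": 129.95,
    "notes": "Called about billing; mentioned medical leave.",
}
print(anonymize_record(record, salt="rotate-this-salt"))

In practice, techniques like these would be paired with consent tracking, retention schedules and the broader policies described above.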
Organizations should also implement a security management framework to ensure data privacy and compliance with applicable regulations. A framework provides recommendations for security controls and facilitates continuous improvement. Particularly sensitive data, such as information about health, finance or children, should be subject to the most robust protections.
Getting Data Protection Right
Comprehensive data protection measures are also essential. These measures encompass data governance and the tools and processes needed to implement it across the IT environment. A compliance-first approach embeds data protection into systems rather than adding it later. Regular audits help ensure that data management practices continue to meet compliance requirements.
Cerium is here to help organizations maintain data privacy and compliance as they adopt and use AI. Our team understands the privacy risks associated with AI and can help organizations implement the right policies, procedures and tools. Our cybersecurity experts can also help protect against attackers seeking to compromise AI data.
Data privacy plays a key role in fostering customer trust, but AI is making privacy and compliance more difficult. Let Cerium help you capitalize on AI while ensuring robust data protection.