Artificial Intelligence (AI) is reshaping business operations — powering automation, personalization, and insights.
However, as these tools become more embedded in daily life, consumers are asking bigger questions about how their data is being handled.
With privacy concerns rising alongside technological advancement, understanding how consumers perceive AI and data privacy is more important than ever.
I’ve gathered the latest statistics on AI and data privacy to help you stay informed.
AI Statistics Showing Trends Impacting Data Privacy
AI systems rely on large volumes of data, often including personal or sensitive information, to function effectively.
As these tools become more integrated into everyday operations, concerns are growing around how that data is handled.
The following stats reveal key trends shaping data protection and what you should keep in mind as you integrate AI into your business operations:
- 68% of global consumers are somewhat or very concerned about privacy online. (IAPP)
- 57% of consumers globally agree that AI poses a significant threat to their privacy. (IAPP)
- 61% of global respondents are wary about trusting AI systems. (KPMG)
- 70% of Americans say they have little to no trust in companies to make responsible decisions about how they use AI in their products. (Pew Research Center)
- 81% say the information companies collect will be used in ways that people are not comfortable with. (Pew Research Center)
These numbers show that concerns about data privacy are more than just theoretical.
As consumers grow increasingly skeptical, businesses must approach AI adoption with transparency — including clearly explaining how AI is used in their privacy policies.
Termly’s Privacy Policy Generator makes it easy to add AI-specific disclosures, and our AI Privacy Policy guide walks you through what to include and why it matters.
Business-Related AI Stats
Businesses are rapidly integrating AI into their operations.
The stats below reveal how companies are applying AI, the challenges they’re navigating, and the values driving responsible innovation.
- The artificial intelligence market surpassed 184 billion U.S. dollars in 2024, and this staggering growth is expected to continue, with the market projected to exceed 826 billion U.S. dollars by 2030. (Statista)
- 77% of respondents believe AI will have a high or transformational impact on their careers. (Thomson Reuters)
- 78% of respondents say their organizations use AI in at least one business function. (McKinsey & Company)
- 49% of technology leaders said that AI was “fully integrated” into their companies’ core business strategy. (PwC)
- 63% of workers say their organizations use generative AI. (McKinsey & Company)
- 68% say their company invests in generative AI training for employees. (Deloitte)
- 41% of executives say that workforce issues, such as training, culture, or change in work are among the top five challenges their organizations face in using generative AI. (PwC)
- 46% of executives say that differentiating their organization, products and services is one of their top three objectives for investing in responsible AI practices. (PwC)
- 84% of respondents rank helping people use complex data or systems, and 82% rank business problem-solving, among the most important AI workforce applications. (Ipsos and Google)
- 42% of surveyed organizations report cost reductions from implementing AI, and 59% report revenue increases. (McKinsey & Company, as interpreted by Stanford HAI)
The integration of AI into business operations necessitates a careful balance between innovation and accountability.
Our Termly survey found that 78.1% of businesses felt no negative impact from privacy requirements and that 91.1% are willing to prioritize data privacy if they know it will increase customer trust and loyalty.
These numbers highlight businesses’ willingness to adapt their data practices to reassure consumers. We suggest having the same open mind when it comes to your AI policies as well.
Consumer-Focused AI Stats
Consumers are forming strong opinions about who uses AI and how it’s governed.
While public institutions inspire some confidence, private companies and government agencies are under greater scrutiny.
- Most people are uncomfortable with or unsure about AI use for HR and people management. (KPMG)
- 76% to 82% of global respondents report confidence in national universities, research institutions and defense organizations to develop, use and govern AI in the best interest of the public. (KPMG)
- One-third of respondents lack confidence in government and commercial organizations to develop, use and govern AI. (KPMG)
- Three in four would be more willing to trust an AI system when assurance mechanisms are in place. (KPMG)
- 78% of consumers believe organizations have a responsibility to use AI ethically. (Cisco)
- 84% of consumers familiar with generative AI advocate for mandatory labeling of AI-generated content. (Deloitte)
- 54% of respondents said they are willing to share their anonymized personal data to help improve AI products and decision-making. (Cisco)
- 38% of consumers find personalized product recommendations helpful, and 31% say they speed up the shopping process. (Tech Radar)
- 40% of shoppers are frustrated by the lack of human support in AI-driven customer service, and 21% find the AI recommendations to be unreliable. (Tech Radar)
Consumers are forming opinions about what responsible AI should look like, and businesses must take note.
Privacy best practices, including ethical use, transparency, and clear user controls, are becoming non-negotiable.
AI and Cybercrime Statistics
As AI evolves, so do the risks.
From increased cyberattacks to growing concerns about digital surveillance, users — especially younger generations — are becoming more protective of their online footprint.
But AI is also used as a defensive tool, reshaping how cybersecurity teams respond.
First, let’s consider the risks in the AI landscape:
- Cybercrime affects the security of 80% of businesses globally. (The Economic Times)
- 84% say cybersecurity risk is their top concern with AI. (KPMG)
- 48% of survey respondents experienced at least one kind of security incident in the past year (up from 34% in 2023). (Deloitte)
- Breaches reported in the first half of 2024 affected approximately one billion people. (Deloitte)
- $4.88 million was the global average cost of a data breach in 2024. (IBM)
- Global spending on cybersecurity in 2023 was estimated at over $200 billion. (Ponemon Institute)
- 85% of cybersecurity professionals believe AI-driven cyberattacks are more sophisticated and harder to detect. (PatentPC)
- 67% of parents said they worry that their children may be tracked through their devices, up from 61% in 2023. (Deloitte)
- And 62% of teens are concerned they could be tracked, a significant jump from 47% in 2023. (Deloitte)
- On average, respondents took only 3.7 of the 17 security actions Deloitte asked about. (Deloitte)
Now, let’s consider stats highlighting AI as the solution:
- AI achieves up to 70% better malware detection rates compared to traditional methods alone. (VirusTotal)
- AI-based cybersecurity reduces incident response time by 96%. (PatentPC)
- Implementing machine learning algorithms has led to a 60% decrease in false positives in fraud detection, allowing security teams to focus on genuine threats. (Softjourn)
- $2.22 million was the average cost savings for organizations that used security AI and automation extensively in prevention versus those that didn’t. (IBM)
While AI presents new vulnerabilities, it’s also a powerful tool for defense.
Organizations that integrate AI-driven cybersecurity may be able to mitigate threats and protect user data more effectively.
AI Laws and Legislation Statistics
AI regulatory activity is surging worldwide, with new bills introduced each year governing how AI can and should be used, especially in areas impacting consumer data and safety.
The stats below highlight how the legal landscape is evolving.
- In 2024, U.S. federal agencies introduced 59 AI-related regulations, more than double the number in 2023, issued by twice as many agencies. (Stanford HAI)
- Legislative mentions of AI rose 21.3% across 75 countries since 2023. (Stanford HAI)
- At least 69 countries have proposed over 1,000 AI-related policy initiatives and legal frameworks. (Mind Foundry)
- In June 2024, the EU adopted the world’s first rules on AI — The Artificial Intelligence Act. (European Parliament)
- As of March 2025, the Framework Convention on Artificial Intelligence — the first-ever international legally binding treaty in this field — has 41 signatories. (CAIDP)
- During the 2024 legislative session, state legislators considered over 150 bills relating to government use of AI. (NCSL)
- In 2024, California passed 17 bills covering AI technology, the most comprehensive legislative package in the nation on this emerging industry. (Governor’s Office)
- In the 2025 legislative session, at least 45 states and Puerto Rico introduced at least 550 AI bills. (NCSL)
- In September 2024, the Office of Management and Budget (OMB) issued a memorandum directing agencies to “improve their capacity for the responsible acquisition of AI.” (White House)
- Executives in over 10 states and jurisdictions, including Alabama, Maryland, Massachusetts, Oklahoma, Oregon, and Washington, D.C., have issued executive orders to study AI use. (NCSL)
As numerous AI-related laws enacted in 2024 are slated to take effect in the coming years, their real-world impact remains to be seen.
We can anticipate that AI legislative activity will continue to intensify in 2025 as policymakers observe the effectiveness of existing laws.
Looking Forward
As AI continues to transform business operations and interactions with technology, it raises significant questions about data protection.
The latest AI privacy statistics show growing concerns among consumers, but they also reveal a willingness to adopt AI as long as adequate safeguards are in place to ensure ethical use.