Friday, July 5, 2024

AI adoption poses risks to corporate data


The amount of corporate data workers put into AI tools increased 485 percent from March 2023 to March 2024, and the growth is accelerating. The trend is most pronounced among tech workers, 23.6 percent of whom have put corporate data into an AI tool.

A new report from Cyberhaven looks at AI adoption trends and their link to heightened risk. A worrying finding is that 73.8 percent of ChatGPT usage at work is through non-corporate accounts, which, unlike enterprise versions, may incorporate whatever you share into public models.

The share of non-corporate accounts is even higher for Gemini (94.4 percent) and Bard (95.9 percent). Workplace AI usage is dominated by the big three, with OpenAI, Google, and Microsoft products accounting for 96.0 percent of the total.

“The fast rise of AI mirrors past transformative shifts like the internet and cloud computing. But just as early cloud adopters navigated new challenges, today’s companies must contend with the complexities introduced by widespread AI adoption,” says Howard Ting, CEO of Cyberhaven. “Our research on AI usage and risks not only highlights the impact of these technologies but also underscores the emerging risks that could parallel those encountered during significant technological upheavals in the past.”

The most common type of sensitive data put into AI tools is customer support data (16.3 percent of sensitive data), which includes confidential information customers share in support tickets.

Other forms of sensitive data include source code (12.7 percent), research and development materials (10.8 percent), HR and employee records (3.9 percent), and financial documents (2.3 percent).
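To illustrate the kind of exposure these figures describe, the sketch below shows a minimal, hypothetical pre-submission check that flags common markers of sensitive data before text is pasted into a public AI tool. This is not Cyberhaven's method; the patterns, category names, and the flag_sensitive function are assumptions chosen purely for illustration, and real data loss prevention tooling uses far richer detection.

```python
import re

# Hypothetical patterns for a few of the sensitive-data categories the report
# mentions; real DLP products go well beyond simple regexes like these.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"\b(?:api[_-]?key|secret|password)\s*[:=]\s*\S+", re.I),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "support_ticket": re.compile(r"\b(?:TICKET|CASE)-\d{4,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the categories of sensitive data found in a prompt, if any."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

# Example: a support-ticket summary a worker might paste into a chatbot.
prompt = "Summarise CASE-20481: customer jane.doe@example.com reports a billing error."
hits = flag_sensitive(prompt)
if hits:
    print(f"Blocked: prompt contains {', '.join(hits)}")  # e.g. warn, or route to an enterprise account
else:
    print("Prompt appears safe to submit")
```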

AI-generated content is being used in potentially risky ways too. Some 3.4 percent of research and development materials created in March 2024 originated from AI tools, creating risk if patented material is introduced. 3.2 percent of source code insertions are generated by AI outside of coding tools (which have enterprise-approved coding copilots), raising the risk of vulnerabilities. In addition, three percent of graphics and design content originated from AI, which can be a problem because AI tools may output trademarked material.

You can get the full report from the Cyberhaven blog.

Image credit: BiancoBlue/depositphotos.com
