Not Using AI At Work? You Might Soon Be Left Behind
If your company hasn’t embraced AI yet, you may soon find yourself lagging behind. A recent report by LayerX, a firm specialising in data security for business environments, reveals that AI adoption, particularly generative AI, is at an all-time high. Currently, 45% of employees use AI tools regularly, with ChatGPT alone reaching 43% penetration. Overall, AI now represents 11% of all activity in business applications, making it a central part of modern workflows.
However, the report highlights a major concern: security practices have not kept pace with AI adoption. Around 67% of AI usage occurs via unmanaged personal accounts, leaving IT and security teams with limited oversight. Even more alarming, 40% of files uploaded to AI tools contain sensitive data such as personally identifiable information (PII) or payment card information (PCI).
The way employees use AI tools is also a major contributor to data leakage. About 77% of employees copy and paste information from unmanaged accounts into AI platforms, making this simple action the main exit route for confidential data. The combination of AI adoption and unmanaged accounts has created a risk landscape that traditional security programmes are ill-prepared to manage.
According to the report, security leaders can no longer treat AI as just an emerging technology. It has become an integral part of daily business workflows and is now the largest uncontrolled channel for potential data loss. Organisations must urgently establish a robust AI governance framework to mitigate these risks and secure their sensitive data as AI continues to shape the future of work.