The Zero Trust approach: an essential step toward using generative AI safely.
The impact of generative AI can no longer be ignored. Some see it as a miracle cure for the world of work, heralding a new era in which low-value writing tasks will be a thing of the past.
For others, it marks the beginning of a new technological wave set to revolutionize every industry, from logistics to the development of life-saving medicines.
But the enthusiasm these technologies generate also raises serious concerns, particularly around privacy and data security.
Earlier this year, Samsung banned the use of generative AI after confidential information was accidentally leaked into the public domain by employees who had used ChatGPT to assist with their work.
The Korean electronics giant is not the only one to take this path. A number of companies and even countries have banned generative AI. And it’s easy to understand why.
The security problems posed by generative AI
The use of tools such as ChatGPT and other LLMs (large language models) generally opens the door to uncontrolled shadow IT, that is, devices, software, and services outside the ownership or control of the IT department. Whether it is an employee experimenting on their own or a company-driven initiative, once proprietary data has been exposed to these tools, there is no way back.
According to a recent KPMG survey of 300 business leaders, executives anticipate a colossal impact of generative AI on their organizations, yet a majority say they are not ready for immediate adoption. This hesitation stems from a series of concerns, with cybersecurity (81%) and data privacy (78%) at the top of the list.
That is why organizations must strike the right balance between harnessing the power of AI to accelerate innovation on the one hand, and complying with data privacy regulations on the other.
To do this with confidence, the best approach is to implement Zero Trust security controls, which allow the latest generative AI tools to be used safely without putting the company’s intellectual property or its customers’ data at risk.
Why a “Zero Trust” approach?
Zero Trust security is a methodology that requires strict verification of the identity of every person and every device attempting to access resources on the company’s network. In contrast to the traditional “castle-and-moat” approach, a Zero Trust architecture trusts nothing and no one by default.
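As a minimal, purely illustrative sketch of the “never trust, always verify” principle, the snippet below evaluates every access request against identity and device checks before granting anything. The AccessRequest fields and the authorize helper are assumptions made for this example, not the API of any specific Zero Trust product.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_verified: bool     # identity confirmed for this session (e.g. SSO plus MFA)
    device_compliant: bool  # managed device that passes posture checks
    resource: str           # the internal resource being requested

def authorize(request: AccessRequest) -> bool:
    """Evaluate every request on its own merits; nothing is trusted by default."""
    if not request.user_verified:
        return False
    if not request.device_compliant:
        return False
    # Per-resource rules (location, time of day, data sensitivity) would be added here.
    return True

# A valid identity on a non-compliant device is still denied access.
print(authorize(AccessRequest(user_verified=True, device_compliant=False, resource="crm")))  # False
```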
A first step is to understand how many people are using AI services, and for what purposes. The next is to give system administrators the means to monitor and control that activity, in case it needs to be suspended urgently. Adopting a data loss prevention (DLP) service adds a further layer of protection, preventing unwitting employees from sharing sensitive data with AI tools. More granular rules can even allow selected users to experiment with projects containing sensitive data, while enforcing stricter access and sharing limits on the majority of teams and collaborators.
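As a minimal sketch of how these controls might combine, the snippet below screens an outbound prompt against a few DLP-style patterns and applies a per-team policy before the prompt is allowed to reach an external AI service. The pattern names, team labels, and screen_prompt helper are hypothetical; a real deployment would rely on a dedicated DLP engine and an identity-aware gateway rather than hand-rolled regular expressions.

```python
import re

# Illustrative patterns a DLP rule set might flag before a prompt leaves the network.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    "internal_project": re.compile(r"\bPROJECT-[A-Z]{3}-\d{4}\b"),  # assumed internal naming scheme
}

# Assumed per-team policy: most teams are blocked from sending flagged content,
# while one explicitly approved team may experiment under audit.
TEAM_POLICY = {
    "default": "block",
    "ai-research": "allow-with-audit",
}

def screen_prompt(team: str, prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt bound for an external AI service."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if not findings:
        return True, findings
    policy = TEAM_POLICY.get(team, TEAM_POLICY["default"])
    if policy == "allow-with-audit":
        # The match is permitted for this team but logged for administrators.
        print(f"AUDIT: team={team} findings={findings}")
        return True, findings
    return False, findings

# Example: the default policy blocks a prompt containing what looks like an API key.
allowed, findings = screen_prompt("marketing", "Summarize this config: key-ABCDEFGHIJKLMNOPQRST1234")
print(allowed, findings)  # False ['api_key']
```

Routing all AI-bound traffic through a checkpoint of this kind, typically a secure web gateway or API proxy, is also what gives administrators the visibility and the emergency shut-off described above.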
In short, organizations that want to use AI in all its forms must strengthen their security and adopt a Zero Trust approach. For those that have not yet done so, the adoption of generative AI is an opportunity to accelerate the transition to a Zero Trust security model.
However, while it is essential to highlight these security and privacy issues, we should not slip into counterproductive sensationalism about a technology that offers tremendous potential.
Let’s keep in mind that every significant technological advance, from mobile phones to cloud computing, brings new security threats with it. The good news is that each time, the IT industry has responded by strengthening its security, protocols, and processes. The same will be true with AI.