Third-party AI tools are responsible for 55% of AI failures in business

Imagine using an AI-powered weather app that tells you to expect clear skies, perfect for a picnic, and then ending up in pouring rain with a soggy sandwich in your hands. Now imagine your company deploying an AI tool for customer support that integrates poorly with your CRM and loses valuable customer data.


According to a new study, third-party AI tools are responsible for more than 55% of AI-related failures in companies. These failures can lead to reputational damage, financial losses, loss of consumer confidence, and even litigation. The survey, conducted by MIT Sloan Management Review and Boston Consulting Group, examined how organizations approach responsible AI and highlighted the real-world consequences of failing to do so.


“Companies have not fully adapted their third-party risk management programs to the AI context or the challenges of safely deploying complex systems such as generative AI products,” said Philip Dawson, head of AI policy at Armilla AI. “Many do not subject AI providers or their products to the types of assessments undertaken for cybersecurity, which blinds them to the risks associated with the deployment of third-party AI solutions.”

53% of companies exclusively use third-party tools


The release of ChatGPT almost a year ago triggered a boom in generative AI technology. It didn't take long for other companies to follow OpenAI and launch their own AI chatbots, including Microsoft Bing and Google Bard. The popularity and capabilities of these chatbots have also raised ethical challenges and questions.


As ChatGPT's popularity skyrocketed, both as a standalone application and as an API, third-party companies began building on its capabilities, developing generative AI solutions for customer support, IT support, and grammar checking.
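
To make this concrete, here is a minimal sketch of how such a third-party product might wrap the OpenAI API for a customer-support use case. The model name, system prompt, and helper function are illustrative assumptions, not details from the study:

```python
from openai import OpenAI

# Illustrative only: a thin customer-support wrapper around a third-party model.
# Assumes the OPENAI_API_KEY environment variable is set; the model name and
# system prompt below are placeholder choices, not details from the study.
client = OpenAI()

def answer_support_ticket(ticket_text: str) -> str:
    """Send a customer ticket to the vendor-hosted model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful customer-support assistant."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

print(answer_support_ticket("My order arrived damaged. What are my options?"))
```

Every integration point in a wrapper like this, from the prompt to the customer data sent to the vendor's API, is exactly the kind of third-party dependency the study suggests companies should be assessing.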


Of the 1,240 survey respondents in 87 countries, 78% said their company uses third-party AI tools by accessing, purchasing, or licensing them. Of these companies, 53% rely exclusively on third-party tools, with no in-house AI technology. So while more than three-quarters of the companies surveyed use third-party AI tools, 55% of AI-related failures stem from them.


Third-party vendor AI can be an integral part of organizations’ AI strategies


Although 78% of the companies surveyed use third-party AI tools, 20% of them have not assessed the risks those tools pose. The study concludes that responsible AI (RAI) is harder to achieve when teams rely on unvetted vendors, and that more thorough evaluation of third-party tools is necessary.


“With clients in regulated sectors such as financial services, we see strong links between model risk management practices based on some kind of external regulation and what we suggest people do from the RAI point of view,” said Triveni Gandhi, responsible AI lead at AI company Dataiku.


Third-party vendor AI can be an integral part of organizations’ AI strategies, so the problem cannot be solved simply by removing the technology. Instead, the researchers recommend thorough risk assessment strategies, such as vendor audits, internal reviews, and compliance with industry standards.


Given how quickly the RAI regulatory environment is evolving, the researchers believe organizations should make responsible AI a priority, from compliance teams all the way up to the CEO. Organizations whose CEO is involved in RAI reported 58% more business benefits than those whose CEO is not directly involved.


The study also found that organizations whose CEO is involved in RAI are almost twice as likely to invest in RAI as those whose CEO is not involved.


Source: ZDNet.com
