AI Risk Management: Is AI a Game Changer or a Serious Threat?

  • July 11, 2023

In recent years, artificial intelligence (AI) has become an increasingly prevalent tool in the private sector, offering businesses the opportunity to reduce costs, increase personalization, reduce mistakes and prevent fraud. While AI has the potential to revolutionize the way businesses operate, there are still issues that need to be addressed to make sure it’s used safely and effectively. In particular, bias, lack of transparency, security risks and societal impacts must be managed to protect users of AI technology. Let’s explore how these issues can be addressed and what measures need to be taken to ensure the safe use of AI in the private sector. We'll discuss how AI can be both a game-changer and a serious threat, and why it's important to understand the risks associated with this technology to use it safely.

How AI is shaping the future of the private sector

AI is changing the way we do business. Across all industries, organizations are adopting AI to improve decision-making, operational efficiency and customer experience. They're also using AI to manage risk and solve new business problems. We asked the now-infamous AI chatbot about the benefits of AI, and it told us: “AI boosts efficiency, accuracy, and decision-making, driving productivity, cost savings, and innovation across processes.”

Personalization: In healthcare, AI combines imaging and EMR data with readings from wearable devices and smart garments, including metrics such as heart rate and blood pressure, to give individuals personalized diet and exercise plans.
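
To make the personalization idea concrete, here is a minimal sketch of turning wearable heart-rate readings into a tailored exercise zone. It uses the well-known Karvonen heart-rate-reserve formula rather than a trained model, and the intensity bands and example values are illustrative assumptions, not medical guidance.

```python
# A minimal sketch of personalizing an exercise plan from wearable metrics,
# using the Karvonen heart-rate-reserve formula; the intensity bands and
# example values are illustrative assumptions, not medical guidance.
def target_heart_rate(age: int, resting_hr: int, intensity: float) -> int:
    """Karvonen formula: resting HR + intensity * (max HR - resting HR)."""
    max_hr = 220 - age                      # common estimate of maximum heart rate
    return round(resting_hr + intensity * (max_hr - resting_hr))

def exercise_zones(age: int, resting_hr: int) -> dict:
    """Personalized moderate and vigorous heart-rate zones (bpm)."""
    return {
        "moderate": (target_heart_rate(age, resting_hr, 0.50),
                     target_heart_rate(age, resting_hr, 0.70)),
        "vigorous": (target_heart_rate(age, resting_hr, 0.70),
                     target_heart_rate(age, resting_hr, 0.85)),
    }

# Example: a 40-year-old with a wearable-measured resting heart rate of 60 bpm.
print(exercise_zones(age=40, resting_hr=60))
# {'moderate': (120, 144), 'vigorous': (144, 162)}
```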

Informed risk: In risk management, AI is transforming how decisions are made, giving risk managers innovative ways to analyze data, identify trends and create predictions through scenario planning.

Reduce mistakes: In manufacturing, AI can increase efficiency, reduce costs and improve quality across industry lines. For example, AI can boost defect detection: platforms now use computer vision and algorithms to inspect products and identify anomalies, leading to improved quality control and minimizing human error.
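
As a simplified illustration of automated defect detection, the sketch below compares each imaged part against a "golden" reference and flags parts whose pixels deviate too much. Real platforms use trained computer-vision models; the tolerances, image sizes and function names here are assumptions for illustration only.

```python
# A minimal sketch of defect detection, assuming parts are imaged under
# fixed lighting and alignment; real systems use trained vision models.
import numpy as np

def defect_fraction(reference: np.ndarray, sample: np.ndarray,
                    pixel_tolerance: float = 0.1) -> float:
    """Fraction of pixels that deviate from the golden reference image."""
    diff = np.abs(sample.astype(float) - reference.astype(float)) / 255.0
    return float(np.mean(diff > pixel_tolerance))

def is_defective(reference: np.ndarray, sample: np.ndarray,
                 max_defect_fraction: float = 0.02) -> bool:
    """Flag the part if too many pixels differ from the reference."""
    return defect_fraction(reference, sample) > max_defect_fraction

# Toy grayscale "images": a clean reference part and one with a blemish.
reference = np.full((64, 64), 200, dtype=np.uint8)
sample = reference.copy()
sample[30:34, 10:50] = 40               # simulated scratch-like defect
print(is_defective(reference, sample))  # True
```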

Fraud detection: The financial services industry has been one of the quickest to adopt AI in everyday use. Mastercard now uses an AI-based platform to analyze each transaction and assign a risk score based on factors such as location, spending patterns and transaction history. If the risk score exceeds a set threshold, the transaction is flagged for human review.
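
The sketch below shows the general pattern of scoring a transaction and flagging it for human review when the score crosses a threshold. It is a toy rule-based example with made-up signals and weights, not Mastercard's actual platform, which relies on machine-learned models.

```python
# Illustrative only: a toy risk-scoring rule, not Mastercard's actual model.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float            # transaction amount in USD
    avg_amount: float        # customer's historical average spend
    foreign_location: bool   # card used outside the customer's home country
    new_merchant: bool       # merchant not seen in the customer's history

def risk_score(txn: Transaction) -> float:
    """Combine simple signals into a 0-1 risk score (hypothetical weights)."""
    score = 0.0
    if txn.avg_amount > 0 and txn.amount > 3 * txn.avg_amount:
        score += 0.4                      # unusually large purchase
    if txn.foreign_location:
        score += 0.3                      # unfamiliar location
    if txn.new_merchant:
        score += 0.2                      # no history with this merchant
    return min(score, 1.0)

REVIEW_THRESHOLD = 0.6                    # assumed cut-off for human review

def needs_review(txn: Transaction) -> bool:
    return risk_score(txn) >= REVIEW_THRESHOLD

# Example: a large foreign purchase at an unseen merchant gets flagged.
print(needs_review(Transaction(amount=900, avg_amount=120,
                               foreign_location=True, new_merchant=True)))  # True
```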

Cost cutting: Lastly, AI can reduce costs across many industries. Advances in Natural Language Processing (NLP) and chatbots have already improved call center routing. Taken together with the examples above, it's clear AI can cut costs.
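
As a rough illustration of chatbot-style call routing, the sketch below matches a caller's words against keyword sets for a few hypothetical queues and falls back to a human agent when nothing matches. Production systems use trained NLP intent classifiers rather than keyword lookup; the queue names and keywords are assumptions.

```python
# A minimal sketch of call-center routing with hypothetical queues and
# keywords; real deployments use trained NLP intent-classification models.
ROUTES = {
    "billing":   {"invoice", "charge", "refund", "payment"},
    "technical": {"error", "crash", "login", "password"},
    "sales":     {"pricing", "upgrade", "quote", "plan"},
}

def route_call(utterance: str) -> str:
    """Pick the queue whose keywords best match the caller's words."""
    words = set(utterance.lower().split())
    best_queue, best_hits = "agent", 0          # fall back to a human agent
    for queue, keywords in ROUTES.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best_queue, best_hits = queue, hits
    return best_queue

print(route_call("I was charged twice and need a refund"))   # billing
print(route_call("My login keeps returning an error"))       # technical
print(route_call("Tell me a joke"))                          # agent
```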

Exploring challenges associated with using AI

While AI can help industries in numerous ways, it’s critical to address the concerns surrounding AI. ChatGPT had this to offer, “Challenges include ethical implications, job displacement, and the need for robust data privacy and security measures to protect sensitive information.”

Data Privacy and Security: To interact with an AI such as ChatGPT, people must share information and data. However, there are few clear rules governing how that data is collected, maintained, stored and transferred. Another security issue is that not all applications encrypt conversations by default, potentially allowing unauthorized users to access messages between a human and a chatbot. These are just two examples of the many potential data privacy issues associated with AI.

Ethical Bias: AI can amplify human bias throughout its use. For example, Amazon stopped using a hiring algorithm after it favored applicants who used words such as “executed” and “captured,” terms that appeared more often on the CVs of male applicants. Other biases include stereotypical representations deeply rooted in society. Try typing “greatest leader of all time” into any search engine and it will likely show prominent men. How many women are shown?

Transparency of models: Chatbots and AI models must be trained by humans to work the way they're intended. However, the data they're trained on, and how it's collected, is unknown to the user. Transparency is negligible with chatbots because developers are trying to maintain a competitive edge with their platforms. This threat encapsulates the two prior risks.

Societal Impact of AI: Beyond racial and gender bias, there are other impacts of AI that can negatively affect society. In May 2023, a research firm reported that artificial intelligence was cited as a reason for nearly 4,000 job cuts, the first time the technology had been listed as a cause. Breakthroughs in AI and robotics are raising substantial concern about large-scale job losses. Another societal issue is the adverse impact of some facial recognition and predictive analytics systems, which have led to loan rejections, criminal justice bias and unfair outcomes for certain groups. In 2021, the insurance company Lemonade faced backlash for using an AI system that automatically denied claims based on its users' video submissions. Organizations must adopt AI ethics guidelines to achieve a sustainable, equitable, diverse, inclusive and transparent human-focused society and to minimize the undesirable consequences of AI.

Taking measures to protect AI users

AI has exploded within the past year. New programs such as GPT-4 and DALL-E, with incredible processing capabilities, have caused concern among top officials and developers alike. Frameworks must be put in place to make AI effective while keeping it ethical, safe and trustworthy. The U.S. government has taken the first step, with the White House publishing the “Blueprint for an AI Bill of Rights,” followed by NIST launching its AI Risk Management Framework. Microsoft has built on this and offered four more steps to follow:

  • requiring safety brakes for AI systems,
  • developing a broader legal and regulatory framework,
  • promoting transparency and ensuring access to AI, and
  • pursuing new public-private partnerships to address societal challenges that come with new technologies.

While these seem like good first steps, others are more concerned and are calling for a six-month pause on further AI development. Geoffrey Hinton, often called the godfather of AI, recently stepped down from Google because of his concern that AI is developing faster than we can understand or control it. There isn't a single fix for the issues with AI, but understanding its benefits and shortcomings can help your business use the technology to enhance your services.

While regulators are still formulating how to govern and regulate AI, every organization that develops or uses AI systems will need to develop and implement its own governance systems. At NTT DATA, we recognize that it is our responsibility to define and comply with AI Guidelines to realize a world where humans and AI can coexist in harmony.

NTT DATA’s take

Leveraging your own Enterprise Risk Management (ERM) framework, NTT DATA can help you develop a comprehensive AI governance program based on best practices for the protection of data, cybersecurity, privacy and digital safety, and on standards and principles from regulatory bodies such as the National Association of Insurance Commissioners (NAIC) for using AI systems. NTT DATA's AI governance can help you develop safe, non-discriminatory, ethical-use principles that can be translated into ERM and corporate policies for the development and implementation of AI systems, and recommend controls to verify compliance with those policies. At NTT DATA, we have established an Artificial Intelligence Center of Excellence (AI CoE) and created a network of AI-specialized engineers responsible for creating AI assets and applying them to our worldwide expansion of digital enterprises. We are devoted to taking advantage of the capabilities of AI while mitigating potential adverse effects through carefully crafted governance programs.

In conclusion, AI is a powerful tool that can offer businesses many advantages, but it also presents risks that must be managed to ensure its safe and effective use. Understanding those risks and developing strategies to address them is essential for businesses seeking to leverage this technology. By taking steps to limit the potential risks and maximize the potential benefits of AI, businesses can ensure its responsible use and benefit from this game-changing technology while avoiding the serious consequences of its misuse.

Connect with us to discuss how NTT DATA's AI framework can help you leverage the power of AI safely, securely and ethically.


Nutan Pandit

Nutan leads the Insurance Risk and Compliance practice at NTT DATA Services. She has extensive experience helping insurance organizations create their future-state risk and compliance function, implement risk technology, drive core business digital transformation and create next-generation information security programs.

Eli Grossman

Eli Grossman is a consultant in the Financial Services & Insurance practice at NTT DATA. Currently, Eli is combining his knowledge and skillset across the wealth management and risk management spaces. Internally, Eli is developing Third Party Risk Management solutions, specifically for healthcare and insurance. In his free time, Eli enjoys reading and exploring all that Charlotte has to offer.
