Want to develop a risk-management framework for AI? Treat it like a human. - VentureBeat

Artificial intelligence (AI) technologies offer profoundly important strategic benefits and hazards for global businesses and government agencies. One of AI’s greatest strengths is its ability to engage in behavior typically associated with human intelligence — such as learning, planning, and problem solving. AI, however, also brings new risks to organizations and individuals, and manifests those risks in perplexing ways.

It is inevitable that AI will soon face increased regulation. Over the summer, a number of federal agencies issued guidance, commenced reviews, and sought information on AI’s disruptive and, sometimes, disorderly capabilities. The time is now for organizations to prepare for the day when they will need to demonstrate their own AI systems are accountable, transparent, trustworthy, nondiscriminatory, and secure.

There are real and daunting challenges to managing AI’s new risks. Helpfully, organizations can use some recent agency initiatives as practical guides to create or enhance AI risk-management frameworks. Viewed closely, these initiatives demonstrate that AI’s new risks can be managed in many of the same established ways as risks arising out of human intelligence. Below, we’ll outline a seven-step approach to bring a human touch to an effective AI risk-management framework. But before that, let’s take a quick look at the related government activity over the summer.

A summer of AI initiatives

While summer is traditionally a quiet time for agency action in Washington, D.C., the summer of 2021 was anything but quiet when it came to AI. On August 27, 2021, the Securities and Exchange Commission (SEC) issued a request for information asking market participants to provide the agency with input on the use of “digital engagement practices,” or DEPs. The SEC’s response to the digital risks posed by financial technology companies could have major ramifications for investment advisors, retail brokers, and wealth managers, which increasingly use AI to create investment strategies and drive customers to higher-revenue products. The SEC’s action followed a separate request for information, concerning possible new AI standards for financial institutions, that a group of federal financial regulators closed earlier this summer.

While financial regulators evaluate the risks of AI used to steer individuals’ economic decisions, the Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) announced a preliminary evaluation on August 13, 2021, of the safety of AI used to steer vehicles. The NHTSA will review the causes of 11 crashes since the start of 2018 in which Tesla vehicles, with either Autopilot or Traffic Aware Cruise Control engaged, crashed at scenes where first responders were active, often in the dark.

Meanwhile, other agencies sought to standardize and normalize AI risk management. On July 29, 2021, the Commerce Department’s National Institute of Standards and Technology issued a request for information to help develop a voluntary AI risk-management framework. In June 2021, the Government Accountability Office (GAO) released an AI accountability framework that identifies key practices to help ensure accountability and responsible AI use by federal agencies and other entities involved in designing, developing, deploying, and continuously monitoring AI systems.

Using human risk management as a starting point

As the summer’s government activity portends, agencies and other important stakeholders are likely to formalize requirements to manage the risks AI poses to individuals, organizations, and society. Although AI presents new risks, organizations can efficiently and effectively extend aspects of their existing risk-management frameworks to cover it. The practical guidance in several frameworks developed by government entities (particularly the GAO’s accountability framework, the Intelligence Community’s AI Ethics Framework, and the European Commission High-Level Expert Group on Artificial Intelligence’s Ethics Guidelines for Trustworthy AI) provides the outline for a seven-step approach organizations can use to extend their existing risk-management frameworks for humans to AI.

First, the nature of how AI is created, trained, and deployed makes it imperative to build integrity into AI at the design stage. Just as employees need to be aligned with an organization’s values, so too does AI. Organizations should set the right tone from the top on how they will responsibly develop, deploy, evaluate, and secure AI consistent with their core values and a culture of integrity.

Second, before onboarding AI, organizations should conduct the same type of due diligence they would conduct for new employees or third-party vendors. As with humans, this due diligence process should be risk-based. Organizations should check the equivalent of the AI’s resume and transcript. For AI, this will take the form of ensuring the quality, reliability, and validity of data sources used to train the AI. Organizations may also have to assess the risks of using AI products where service providers are unwilling to share details about their proprietary data. Because even good data can produce bad AI, this due diligence review would include checking the equivalent of references to identify potential biases or safety concerns in the AI’s past performance. For especially sensitive AI, this due diligence may also include a deep background check to root out any security or insider threat concerns, which could require reviewing the AI’s source code with the provider’s consent.
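To make this kind of data due diligence concrete for technical teams, here is a minimal sketch in Python. The dataset, column names (age, group, outcome), and the four-fifths-style threshold are purely illustrative assumptions, not requirements drawn from any of the frameworks discussed above.

```python
# A minimal, hypothetical sketch of "checking the AI's resume": profiling a
# training dataset for quality gaps and an obvious selection-rate disparity.
# Column names and the 0.8 threshold are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "age":     [34, 45, None, 29, 52, 41, None, 38],
    "group":   ["A", "A", "A", "B", "B", "B", "A", "B"],
    "outcome": [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = favorable decision
})

# Data-quality check: flag columns with missing values.
missing = df.isna().mean()
print("Missing-value rate per column:\n", missing)

# Crude bias screen: compare favorable-outcome rates across groups
# (a "four-fifths rule"-style ratio, used here only as an illustration).
rates = df.groupby("group")["outcome"].mean()
ratio = rates.min() / rates.max()
print(f"Selection-rate ratio between groups: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparity - escalate for human review before onboarding.")
```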

Third, once onboarded, AI needs to be ingrained in an organization’s culture before it is deployed. Like other forms of intelligence, AI needs to understand the organization’s code of conduct and applicable legal limits, and then it needs to adopt and retain them over time. AI also needs to be taught to report alleged wrongdoing by itself and others. Through AI risk and impact assessments, organizations can assess, among other things, the privacy, civil liberties, and civil rights implications of each new AI system.
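One hedged way to operationalize such an assessment is to capture it as a structured, machine-readable checklist that can gate deployment. The sketch below is illustrative only; the dimensions, ratings, and gating rule are assumptions, not a format prescribed by the GAO or any other framework.

```python
# A hypothetical sketch of an AI risk and impact assessment recorded as a
# structured checklist, so the review leaves a machine-readable record.
assessment = {
    "system": "customer-chatbot-v1",
    "dimensions": {
        "privacy":         {"risk": "high",   "mitigation": "PII redaction before logging"},
        "civil_liberties": {"risk": "low",    "mitigation": None},
        "civil_rights":    {"risk": "medium", "mitigation": "quarterly disparity testing"},
        "code_of_conduct": {"risk": "low",    "mitigation": None},
    },
}

# Block deployment if any high-risk dimension lacks a documented mitigation.
unmitigated = [
    name for name, d in assessment["dimensions"].items()
    if d["risk"] == "high" and not d["mitigation"]
]
print("Cleared for deployment" if not unmitigated
      else f"Blocked: unmitigated high-risk dimensions: {unmitigated}")
```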

Fourth, once deployed, AI needs to be managed, evaluated, and held accountable. As with people, organizations should take a risk-based, conditional, and incremental approach to an AI’s assigned responsibilities. There should be a suitable period of AI probation, with advancement conditioned on producing results consistent with program and organizational objectives. Like humans, AI needs to be appropriately supervised, disciplined for abuse, rewarded for success, and able and willing to cooperate meaningfully in audits and investigations.

Fifth, companies should routinely and regularly document an AI’s performance, including any corrective actions taken to ensure it produces the desired results.
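For teams that want a concrete starting point for this kind of record-keeping, a minimal sketch of an append-only audit log might look like the following. The record fields, file name, and example values are hypothetical, not a prescribed schema.

```python
# A hypothetical sketch of the "personnel file" for a deployed model: an
# append-only log of evaluations and corrective actions for later audit.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelAuditRecord:
    model_id: str
    event: str            # e.g. "quarterly_evaluation", "corrective_action"
    metric: str
    value: float
    reviewer: str
    notes: str
    timestamp: str = ""

def log_record(record: ModelAuditRecord, path: str = "model_audit_log.jsonl") -> None:
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_record(ModelAuditRecord(
    model_id="credit-scoring-v2",
    event="corrective_action",
    metric="false_positive_rate_gap",
    value=0.06,
    reviewer="model-risk-committee",
    notes="Retrained on rebalanced data after drift alert; gap reduced from 0.11.",
))
```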

Sixth, as with people, AI needs to be kept safe and secure from physical harm, insider threats, and cybersecurity risks. For especially risky or valuable AI systems, safety precautions may include insurance coverage, similar to the insurance that companies maintain for key executives.

Seventh, like humans, not all AI systems will meet an organization’s core values and performance standards, and even those that do will eventually leave or need to retire. Organizations should define, develop, and implement transfer, termination, and retirement procedures for AI systems. For especially high-consequence AI systems, there should be clear mechanisms to, in effect, escort AI out of the building by disengaging and deactivating it when things go wrong.
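As a rough illustration of such a disengagement mechanism, the sketch below gates every call to a hypothetical high-consequence model behind a kill switch and falls back to human review when the switch is thrown. The flag file, function names, and fallback behavior are assumptions made for illustration, not a specific product’s API.

```python
# A minimal sketch of "escorting the AI out of the building": a kill switch
# that gates calls to a high-consequence model and routes work to a
# human-review queue when the model has been disengaged.
import os

KILL_SWITCH_FILE = "ai_system_disabled.flag"   # ops create this file to deactivate

def model_enabled() -> bool:
    return not os.path.exists(KILL_SWITCH_FILE)

def score_application(features: dict) -> dict:
    if not model_enabled():
        # Deactivated: no automated decisions; send the case to manual review.
        return {"decision": "manual_review", "reason": "model deactivated"}
    # Placeholder for the real model call.
    approved = features.get("income", 0) > 50_000
    return {"decision": "approve" if approved else "deny", "reason": "model score"}

print(score_application({"income": 62_000}))
```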

AI, like humans, poses challenges to oversight because its inputs and decision-making processes are not always visible and change over time. By managing the new risks associated with AI in many of the same ways they manage the risks posed by people, organizations can make these seemingly daunting oversight challenges more approachable and help ensure that AI is as trusted and accountable as every other form of the organization’s intelligence.

Michael K. Atkinson is a partner with law firm Crowell & Moring in Washington, D.C., and co-lead of the firm’s national security practice. He was previously Inspector General of the Intelligence Community in the Office of the Director of National Intelligence. 

Rukiya Mohamed is an associate in Crowell & Moring’s white collar and regulatory enforcement group in Washington, D.C.
