Banks and their AI partners are accountable for upholding safety standards, says Michael Hsu, acting head of the Office of the Comptroller of the Currency (OCC).

Acting Comptroller of the Currency Michael Hsu has noted that AI makes it easier to sidestep accountability for bad outcomes than other contemporary technologies do.


In the rapidly evolving world of finance, the adoption of Artificial Intelligence (AI) has become a pressing issue. The Treasury Department, recognising the potential risks and benefits, is seeking public comments on the use of AI in the financial sector.

The complexities and opacities of AI models, combined with inadequate risk management frameworks, can lead to specific vulnerabilities. These vulnerabilities are further exacerbated by interconnections among market participants who rely on the same data and models.

Treasury Secretary Janet Yellen has warned that using AI in finance carries significant risks. She has backed legislation introduced by Sens. Mark Warner and John Kennedy in December, aimed at coordinating regulatory efforts to protect markets from the potential disruptive impacts of deepfakes, trading algorithms, and other AI tools.

One area of concern is AI-powered fraud, which could sow distrust more broadly in payments and banking. Acting Comptroller of the Currency Michael Hsu has suggested that the banking and finance sector should develop a shared responsibility framework with their AI partners.

Hsu believes that a shared responsibility model for AI safety, similar to cloud computing's shared responsibility model, could be developed. Such a model would ensure that both parties understand their roles in maintaining the security and integrity of AI systems.

The U.S. Artificial Intelligence Safety Institute, housed within the National Institute of Standards and Technology, could develop such a shared responsibility framework through its consortium of more than 280 stakeholder organisations, though the institute's full membership has not been specified.

The use of AI in credit underwriting presents another challenge. Decisions can be hard to explain, making it difficult for companies to assign liability and fix issues when AI is involved. Hsu cited an example of unintended consequences from over-reliance on AI: an Air Canada chatbot promised a customer a refund under a bereavement policy that the airline does not actually offer.

Despite these challenges, Hsu also emphasises that AI holds promise and peril for financial stability. Before banks can pursue the next phase of development, it is crucial that they ensure proper controls are in place and accountability is established. Banks adopting AI must set up clear and effective gates between phases to ensure safety.

In the face of these complexities, it is clear that a collaborative and responsible approach is necessary to harness the potential benefits of AI in finance while mitigating its risks.
