U.S. Treasury Urges Financial Sector to Address AI-Related Cybersecurity Threats
The US Treasury Department has warned of the cybersecurity risks that AI poses to the financial sector.

The report, which was prepared under the direction of Presidential Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, also presents a series of recommendations for financial institutions on how to mitigate these risks.

AI-powered cyberthreats for the financial sector

Financial services and technology companies surveyed in the report recognized the threat posed by advanced AI tools such as generative AI, with some believing they would initially give threat actors the “upper hand.”

This is because these technologies improve the sophistication of attacks such as malware and social engineering, as well as lowering the barriers to entry for less skilled attackers.

Other ways cyber threat actors are using AI to target financial systems include vulnerability discovery and disinformation – including the use of deepfakes impersonating individuals such as CEOs in order to defraud companies.

The report recognizes that financial institutions have been using AI systems to support their operations for several years, including as part of cybersecurity and anti-fraud measures. However, some of the institutions included in the study reported that existing risk management frameworks may not be adequate to cover emerging AI technologies such as generative AI.

A number of respondents said they are paying attention to unique cyber threats to AI systems used in financial organizations, which could be a particular target for insider threat actors.

These include data poisoning attacks, which aim to corrupt the training data of the AI model.
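To illustrate the idea, here is a minimal, hypothetical sketch (not taken from the Treasury report) of a label-flipping data poisoning attack against a toy nearest-centroid fraud classifier. All data points and names are invented for illustration; real anti-fraud models and real poisoning attacks are far more sophisticated.

```python
# Hypothetical sketch: an insider flips the labels of some known-fraud
# training examples to "legitimate", skewing the model the bank trains.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    # data: list of ((x, y), label), where label 0 = legitimate, 1 = fraud
    legit = [p for p, lbl in data if lbl == 0]
    fraud = [p for p, lbl in data if lbl == 1]
    return centroid(legit), centroid(fraud)

def predict(model, p):
    (lx, ly), (fx, fy) = model
    d_legit = (p[0] - lx) ** 2 + (p[1] - ly) ** 2
    d_fraud = (p[0] - fx) ** 2 + (p[1] - fy) ** 2
    return 0 if d_legit <= d_fraud else 1

clean = [((0.0, 0.0), 0), ((1.0, 0.0), 0), ((0.0, 1.0), 0),
         ((9.0, 9.0), 1), ((8.0, 9.0), 1), ((9.0, 8.0), 1)]

# The attacker relabels two fraud examples as legitimate, dragging the
# "legitimate" centroid toward the fraud region of feature space.
poisoned = [(p, 0) if p in [(9.0, 9.0), (8.0, 9.0)] else (p, lbl)
            for p, lbl in clean]

suspicious = (5.0, 5.0)
print(predict(train(clean), suspicious))     # 1: flagged as fraud
print(predict(train(poisoned), suspicious))  # 0: poisoned model misses it
```

The point of the sketch is that the attack never touches the model code or the deployed system – corrupting a small fraction of training data is enough to change the model's decisions, which is why the report flags training pipelines as a target for insider threats.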

Another concern regarding in-house AI solutions identified in the report is that the resource requirements of AI systems will generally increase institutions’ direct and indirect reliance on third-party IT infrastructure and data.

According to those surveyed, factors such as how training data was collected and processed could expose financial organizations to additional financial, legal and security risks.

How to manage AI-specific cybersecurity risks

Treasury has provided a number of steps financial organizations can take to address the immediate challenges related to AI-related operational risks, cybersecurity and fraud:

  • Use applicable regulations. Although existing laws, regulations and guidance do not specifically address AI, the principles of some of them may apply to the use of AI in financial services. This includes regulations related to risk management.
  • Improve data sharing to build anti-fraud AI models. As more financial organizations deploy AI, a significant gap has emerged in fraud prevention between large and small institutions. Indeed, larger organizations tend to have much more historical data with which to develop anti-fraud AI models than smaller ones. There should therefore be more data sharing to enable smaller institutions to develop effective AI models in this area.
  • Develop best practices for data supply chain mapping. Advances in generative AI have highlighted the importance of monitoring data supply chains to ensure that models use accurate and reliable data and that privacy and security are taken into account. Therefore, the industry should develop best practices for data supply chain mapping and also consider implementing “nutrition labels” for AI systems and vendor-provided data. These labels would clearly identify what data was used to train the model and where it came from.
  • Address the AI talent shortage. Financial organizations are encouraged to train less-skilled practitioners on how to use AI systems safely and to provide role-specific AI training to employees outside of IT.
  • Implement digital identity solutions. Robust digital identity solutions can help combat AI-based fraud and strengthen cybersecurity.

The report also recognized that the government must take more steps to help organizations address AI-based threats. This involves ensuring coordination on AI regulations at the state and federal levels, as well as globally.

Additionally, Treasury believes that the National Institute of Standards and Technology (NIST) AI Risk Management Framework could be adapted and expanded to include more content on AI governance and risk management applicable to the financial services sector.

Under Secretary for Domestic Finance Nellie Liang commented: “Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability.”
