NRB’s draft AI guidelines signal a turning point for Nepal’s banking sector—but the real test lies ahead

KATHMANDU: Nepal Rastra Bank (NRB) has quietly taken one of its most forward-looking steps in recent decades by releasing draft Artificial Intelligence (AI) Guidelines for banks and financial institutions. While still in draft form, the 11-page document offers a clear window into how the central bank sees the future of banking—one where algorithms, models, and data-driven decisions increasingly shape credit, risk, fraud detection, and customer service.

Globally, artificial intelligence is no longer experimental in banking. It is already core infrastructure. Major banks in the United States, Europe, China, Singapore, and India rely on AI systems to approve loans in seconds, flag suspicious transactions instantly, and personalize financial products for millions of customers. NRB’s draft guidelines suggest that Nepal does not want to be left behind—but also does not want to rush blindly into automation without safeguards.

Why NRB Is Stepping In Now

NRB’s draft recognizes a simple reality: Nepali banks are already using AI, whether formally acknowledged or not. From automated credit scoring tools to fraud detection engines, algorithmic decision-making has quietly entered the system. What has been missing, according to NRB, is a clear governance framework.

The central bank notes that while AI can improve efficiency, reduce costs, and enhance customer experience, it can also introduce serious risks—biased decisions, opaque models, privacy breaches, cyber threats, and even systemic instability if left unchecked. The draft guidelines are therefore less about promoting AI and more about setting boundaries for its responsible use.

How the Draft Aligns With Global Banking Practice

Internationally, regulators are moving fast on AI oversight. The European Union’s AI Act classifies AI used for credit scoring and similar financial decisions as “high risk.” The US Federal Reserve and UK regulators demand explainability in credit and risk models. Singapore’s Monetary Authority requires clear human oversight in AI-driven banking decisions.

NRB’s draft closely mirrors these global approaches. It introduces the concept of high-risk AI systems, particularly those that can cause financial harm, operate at scale, use sensitive personal data, or function with minimal human supervision. This risk-based classification is now the global standard—and NRB is clearly borrowing from it while adapting to Nepal’s scale and capacity.
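For illustration only, the criteria the draft names can be read as a simple checklist that a bank might apply when inventorying its systems. The short Python sketch below is a hypothetical rendering; the class, field names, and the any-one-criterion rule are assumptions for demonstration, not language from NRB’s text.

    # Hypothetical illustration of the draft's risk-based criteria.
    # Field names and the classification rule are assumptions, not NRB wording.
    from dataclasses import dataclass

    @dataclass
    class AISystemProfile:
        can_cause_financial_harm: bool    # e.g. automated loan rejection
        operates_at_scale: bool           # affects a large share of customers
        uses_sensitive_personal_data: bool
        minimal_human_supervision: bool   # decisions not routinely reviewed

    def is_high_risk(profile: AISystemProfile) -> bool:
        # Treat a system meeting any named criterion as high risk.
        return any([
            profile.can_cause_financial_harm,
            profile.operates_at_scale,
            profile.uses_sensitive_personal_data,
            profile.minimal_human_supervision,
        ])

    # Example: an automated credit-scoring engine used for most retail loans
    credit_scorer = AISystemProfile(True, True, True, False)
    print(is_high_risk(credit_scorer))  # True -> stricter governance applies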

Boardroom Accountability: No Hiding Behind Algorithms

One of the strongest signals in the draft guidelines is that boards and senior management cannot hide behind technology. Even if a decision is generated by an AI system, accountability remains human.

In advanced banking markets, regulators increasingly hold boards responsible for AI failures. NRB is moving in the same direction. The draft makes it clear that boards must define AI risk tolerance, approve AI strategies, and ensure ethical use, while senior management must continuously monitor AI systems and the institution’s reliance on them.

For Nepal’s banking sector—where IT decisions were once treated as back-office matters—this represents a cultural shift.

Transparency and Customer Rights at the Core

Globally, customers are pushing back against “black-box banking,” where loans are rejected or limits reduced without explanation. NRB’s draft addresses this directly by emphasizing explainable AI.

Banks will be expected to inform customers when AI influences decisions, explain the key factors involved, and maintain detailed audit trails. This aligns with international norms and is particularly important in Nepal, where trust in digital finance is still evolving and misunderstanding can quickly lead to reputational damage.

Data, Privacy, and Bias: Lessons From Global Mistakes

International experience has shown that poorly trained AI models can reinforce discrimination—excluding rural borrowers, informal workers, or marginalized groups. NRB’s draft explicitly warns against this.

It calls for bias testing, fairness monitoring, data minimization, informed consent, and opt-out options for customers. These are not abstract principles; they are lessons drawn from costly failures seen in global banking, where institutions have faced fines, lawsuits, and public backlash over unethical AI use.
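As a rough illustration of what bias testing can look like in practice, the sketch below compares loan-approval rates across two customer groups, a common fairness check often described as demographic parity. The group labels, sample data, and tolerance threshold are illustrative assumptions, not figures or requirements from the NRB draft.

    # A minimal bias check: compare approval rates across customer groups.
    # Groups, data, and the 0.10 tolerance are illustrative assumptions only.
    from collections import defaultdict

    decisions = [  # (group, approved) pairs from a hypothetical model's output
        ("urban", True), ("urban", True), ("urban", False), ("urban", True),
        ("rural", False), ("rural", True), ("rural", False), ("rural", False),
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    disparity = max(rates.values()) - min(rates.values())

    print(rates)                      # e.g. {'urban': 0.75, 'rural': 0.25}
    print(f"disparity = {disparity:.2f}")
    if disparity > 0.10:              # illustrative tolerance only
        print("Flag for fairness review and model re-examination")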

Where Nepal Can Actually Gain

If implemented thoughtfully, AI could help Nepal’s banks leap forward. Globally, AI is used to:

Extend credit to SMEs using alternative data

Detect fraud in real time

Reduce operational costs

Improve compliance and reporting accuracy

Expand financial inclusion without expanding physical branches

NRB’s draft does not block these opportunities. Instead, it tries to ensure that innovation happens within guardrails that protect customers and the financial system.

Draft Today, Direction for Tomorrow

It is crucial to note that these guidelines are still in draft form. This suggests NRB is inviting discussion, internal preparation, and gradual alignment—rather than sudden enforcement.

The real challenge will not be writing policies, but building skills, governance capacity, and institutional discipline across Nepal’s banking system. As global banking history shows, AI failures rarely stem from technology alone—they stem from weak oversight and poor judgment.

NRB’s draft AI Guidelines, if refined and implemented wisely, could become a defining framework for Nepal’s next phase of financial modernization—one that learns from the world’s successes and mistakes, rather than repeating them.

Fiscal Nepal | Monday, December 8, 2025

