The Indian Ministry of Finance has issued a directive banning the use of AI tools and applications, such as ChatGPT and DeepSeek, on official government devices. The circular, dated 29th January 2025, aims to protect confidential government data from potential security threats.
The notice, signed by Joint Secretary Pradeep Kumar Singh, cautions that AI-powered applications on office computers could jeopardise sensitive government information. To address these concerns, the ministry has advised all employees to refrain from using such tools on official devices.
The circular has received approval from the Finance Secretary and has been distributed to key government departments, including Revenue, Economic Affairs, Expenditure, Public Enterprises, DIPAM, and Financial Services.
This ban is part of a wider global apprehension regarding AI platforms handling sensitive data. Many AI models, including ChatGPT, process user inputs on external servers, raising concerns about data leakage or unauthorised access.
Similar AI restrictions have been imposed by governments and corporations worldwide. Several private companies and global organisations have already limited AI tool usage to prevent data exposure.
While the order restricts AI tools on official devices, it does not specify whether employees can use them on personal devices for work purposes. This move indicates that the government is adopting a cautious stance on AI adoption, prioritising data security over convenience.
As AI tools become more common in workplaces, it remains uncertain whether the Indian government will establish regulated AI use policies in the future. For now, finance ministry officials are required to rely on traditional methods, at least on their office computers.
Why has the ban been imposed?
The Indian Finance Ministry’s decision to ban AI tools on official devices stems from security and confidentiality concerns. Here are some of the reasons the government could be taking this step:
1. Risk of data leaks
AI models like ChatGPT and DeepSeek process user inputs on external servers, which means any sensitive government data entered into these tools could be stored, accessed, or even misused. Since government offices handle classified financial data, policy drafts, and internal communications, even unintentional sharing could pose serious risks.
2. Lack of control over AI models
Unlike traditional software installed and managed within government offices, AI tools are cloud-based and owned by private companies (such as OpenAI, in the case of ChatGPT). The government has no direct control over how these tools store or process information, heightening concerns about foreign access and cyber threats.
3. Compliance with data protection policies
India is working on stronger data privacy laws, including the Digital Personal Data Protection (DPDP) Act, 2023. Allowing AI tools on office devices without clear regulations could lead to violations of data protection policies, making government systems vulnerable.