SEBI on the Reporting of Artificial Intelligence and Machine Learning Systems


The Securities and Exchange Board of India (SEBI) released three circulars[1], on 4 January, 31 January and, most recently, 9 May, in response to the increased use of Artificial Intelligence and Machine Learning (AI & ML) systems as product offerings. The circulars introduce reporting requirements for Market Intermediaries (MIs), Market Infrastructure Institutions (MIIs) and Mutual Funds (MFs). SEBI has framed the exercise as a survey and the creation of an inventory of the landscape of such technology, intended to build an in-depth understanding and ensure preparedness for AI/ML policies in the future.

Recognising that most AI/ML systems are black boxes, i.e., they can be viewed in terms of their inputs and outputs without any knowledge of their internal workings,[2] SEBI warned intermediaries against misrepresenting the benefits of any financial products involving the use of AI & ML technologies.


SEBI has kept the definition of such systems amply wide by including within the scope of the circular any applications or systems that are offered to investors or used internally to:

  • Facilitate investing and trading, or any other purpose;
  • Disseminate investment strategies and advice; or
  • Carry out compliance or operations activities.

The circular also covers all Fin-tech and Reg-tech initiatives involving AI/ML undertaken by market participants.

Annexure B to the Circular gives a detailed list of systems which would be deemed to be based on AI & ML Technologies. The list includes the following systems:

  1. Natural Language Processing, Sentiment Analysis, Text Mining etc.
  2. Neural Networks and their modified forms
  3. Supervised or Unsupervised ML Systems
  4. Statistical Heuristic Methods (instead of procedural algorithms)
  5. Systems using feedback mechanisms to improve their parameters
  6. Systems doing Knowledge Representation and maintaining Knowledge Bases.

Regulatory Requirements

The Circulars call for all MIs, MIIs and MFs to make quarterly submissions: MIs to the Stock Exchanges/Depositories, MIIs to SEBI directly, and MFs to the Association of Mutual Funds in India (AMFI). The Stock Exchanges/Depositories and AMFI, in turn, submit consolidated quarterly reports to SEBI, while maintaining the confidentiality of the information received.

Annexure A to the Circular contains the form to be filled in by the respective stakeholders using AI/ML technologies. It contains Yes/No questions about:

  1. Involvement in order initiation, routing and execution.
  2. Dissemination of investment/trading advice or strategy.
  3. Use in the area of Cyber Security to detect attacks.
  4. Inclusion in the scope of System Audit, if applicable.

Furthermore, the form contains free-text questions on the following points:

  1. Area in which AI/ML is used.
  2. Mode of implementation of the AI/ML project (internally, through a solution provider, or jointly).
  3. Compliance of key controls and control points with SEBI circular on cyber security control requirements.
  4. Description of the system and how it uses AI/ML.
  5. Safeguards to prevent abnormal behavior of such systems.
  6. Any adverse comments in the system audit regarding the AI/ML system.
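Taken together, the Yes/No flags and free-text answers above amount to one record per AI/ML system. As a rough sketch of what such a quarterly return might look like in structured form (the field names and example values are illustrative, not SEBI's official schema):

```python
from dataclasses import dataclass


@dataclass
class AIMLQuarterlyReturn:
    """Illustrative model of one Annexure A entry (field names are hypothetical)."""
    # Yes/No questions
    order_initiation_routing_execution: bool
    disseminates_advice_or_strategy: bool
    used_for_cyber_security: bool
    in_system_audit_scope: bool
    # Free-text questions
    area_of_use: str
    implementation_mode: str  # "internal", "solution provider", or "joint"
    cyber_control_compliance: str
    system_description: str
    safeguards: str
    audit_adverse_comments: str


# Example entry for a hypothetical sentiment-analysis tool
entry = AIMLQuarterlyReturn(
    order_initiation_routing_execution=False,
    disseminates_advice_or_strategy=True,
    used_for_cyber_security=False,
    in_system_audit_scope=True,
    area_of_use="News sentiment analysis for research reports",
    implementation_mode="solution provider",
    cyber_control_compliance="Key controls mapped to SEBI cyber security circular",
    system_description="NLP model scoring news flow for sentiment",
    safeguards="Human review before publication; kill switch",
    audit_adverse_comments="None",
)
```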


With Research and Markets predicting that the global algorithmic trading market will grow at a CAGR of 10.36% over the period 2018-2022,[3] the use of AI/ML in financial markets is expected to expand rapidly in the near future. Proactive regulators tend to infer from expeditious growth in any sector that regulation and reporting are called for. While the benefits of such oversight include investor protection and the prevention of financial crimes, the debate over whether these outweigh the costs is far from settled.
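For context, a CAGR compounds annually, so even a modest-sounding rate implies substantial cumulative growth. A minimal sketch (the 10.36% figure is from the cited report; treating 2018-2022 as four compounding years is my own assumption):

```python
def growth_factor(cagr: float, years: int) -> float:
    """Total growth multiple implied by compounding a CAGR over `years` years."""
    return (1 + cagr) ** years


# A 10.36% CAGR compounded over four years (2018-2022) implies the market
# ends at roughly 1.48x its starting size.
factor = growth_factor(0.1036, 4)
```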

In 2014, the European Union adopted a Directive on Markets in Financial Instruments,[4] which imposed reporting requirements similar to those SEBI now seeks to introduce through these circulars.

US Federal Reserve Governor Lael Brainard, discussing the regulation of AI in financial services, writes: “Regulation and supervision need to be thoughtfully designed so that they ensure risks are appropriately mitigated but do not stand in the way of responsible innovations that might expand access and convenience for consumers and small businesses or bring greater efficiency, risk detection, and accuracy.”[5]

She goes on to quote one of her earlier speeches: “Likewise, it is important not to drive responsible innovation away from supervised institutions and toward less regulated and more opaque spaces in the financial system.”[6]

Some legal scholars have also argued that a granular scheme of AI-transparency regulation would likely bring new AI-technology startups to a halt, as new entrants would have to bear the high costs of regulatory compliance and wrestle with regulatory constraints on new designs.[7]

Futurist Michael Spencer notes in his article on Forbes: “Artificial Intelligence regulation may be impossible to achieve without better AI, ironically. As humans, we have to admit we no longer have the capability of regulating a world of machines, algorithms and advancements that might lead to surprising technologies with their own economic, social and humanitarian risks beyond the scope of international law, government oversight, corporate responsibility and consumer awareness.”[8]

The CFA Institute’s Centre for Financial Market Integrity, while referring to proponents of self-regulation, observed: “This group notes the efficiency of allowing the participants, who are the industry experts, to craft rules that more realistically reflect the issues of the industry, thereby reducing the regulatory burden on market participants. Given the speed with which the global markets move, supporters of self-regulation also cite the benefits of a system whose flexibility allows it to respond to market developments quickly, fostering innovation. They also note the advantages of a system that is basically self-funding, relieving the government of a financial burden.”[9]

SEBI should therefore be circumspect in introducing reporting requirements for financial institutions in areas like Artificial Intelligence and Machine Learning. While such a mechanism has some benefits, growth and innovation in a fledgling sector like AI/ML must not be disincentivised. That said, building “an in-depth understanding and ensur(ing) preparedness for AI/ML policies in the future” is a constructive step.

Author: Mr. Anant Joshi, Associate – Corporate & Commercial Law Practice at Khurana & Khurana, Advocates and IP Attorneys.







[6] Lael Brainard, “Where Do Banks Fit in the Fintech Stack?” (speech at the Northwestern Kellogg Public-Private Interface Conference, April 28, 2017).

[7] Bathaee, Yavar, Harvard Journal of Law & Technology, Vol. 31, No. 2, Spring 2018, p. 893.

[8] Michael Spencer,

