‘The computer is a moron. What you do with it counts.’
– Peter Drucker, ‘The Manager and the Moron’, McKinsey Quarterly
Varun Matlani is a Securities Lawyer & Member of Gujarat’s AI Centre of Excellence at GIFT City and TiE Vadodara; Rohan Bhimajiyani is a Master’s student specializing in Constitutional Law at Gujarat National Law University.
The Securities and Exchange Board of India (“SEBI”) has released a Consultation Paper on Guidelines for Responsible Usage of AI/ML in Indian Securities Markets (“Consultation Paper”). It comes at an inflection point for the Indian financial sector, which may be moving sooner than expected from rigid, pre-set trading algorithms to intelligent, adaptive AI systems. Commendably, SEBI first assessed the industry’s actual use cases: as far back as 2019, it issued circulars assessing the use of AI by Market Intermediaries, Market Infrastructure Institutions and Mutual Funds. AI has made the computer less of a moron, especially when the computer is the one executing.
Apart from AI chatbots, product-recommendation tools, and exchanges using AI for surveillance and for assessing DRHPs (in a financial economy that has recently led Asia’s capital-markets activity by volume), one very interesting consumer-facing AI tool has been Zerodha’s MCP connector for Kite, which at present only provides information and does not execute trades (though fundamentally nothing stops the technology from placing trades too). Global regulators in general, and sectoral regulators in particular, are currently struggling to regulate AI because of its black-box nature, computation that lies beyond human oversight, and the geographic intangibility of data, in both its storage and its retrieval.
In this article, we try to anticipate, along the lines of the Consultation Paper, the regulatory framework for the adoption of AI/ML by market intermediaries and institutions.
Model Governance
- Team of Internal Experts: market participants would be required to maintain a team with adequate skills and expertise to oversee AI through its lifecycle, including experts appointed to senior management.
- Training and retraining of AI/ML models, especially on data from periods of market stress, so that models capture non-linear relationships and tail events; in other words, the data set should be large enough to encompass black-swan events and similarly erratic market fluctuations.
- Robust monitoring: we can anticipate substantial compliance measures around measuring and reporting AI performance on an ongoing basis and through third-party audits; this is going to open up a big space in AI audits (interested readers can read this article in last week’s FT on how the Big Four firms are racing to build AI audit products). A minimal sketch of what such ongoing monitoring could look like appears after this list.
- Clear structuring of data governance, ownership and access norms. With capable open-source AI models like LLaMA and DeepSeek, which run on the deployer’s own infrastructure, the data stays in a closed circuit and ownership resides entirely with the deployer. However, when products from companies like OpenAI, Anthropic (Claude) or xAI (Grok), which are ahead of the curve in some of their offerings, are used, there is a complete black box around how much these products learn from user data and around the (absolute) storage and privacy of that data.
- Logs for AI/ML usage: to be maintained for five years and with full verbosity. Although undefined, ‘full verbosity’ can be assumed to mean capturing all metadata, which may include unstructured or semi-structured data (the second sketch after this list illustrates what such a log entry might capture). Institutions like exchanges can perhaps build the infrastructure to store anything from megabytes to petabytes, but for other intermediaries, such as brokers or mutual funds, this looks like a tough technical challenge. It is reminiscent of the Wall Street film scene in which a young trader rushes to find the trade slip carrying the tip that could save her job, only to discover it buried under myriad routine slips.
- Further, AI models should operate in a way that complies with existing legal and regulatory obligations. Clarity would be welcome, however: if I develop an AI product that recommends stocks with no role played by me, relying only on market data and the AI model’s analysis, would that require me to obtain a Research Analyst or Investment Advisor license?
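As an illustration of the monitoring point above, here is a minimal Python sketch of tracking model performance on a rolling basis and flagging degradation for review. The window size, metric and alert threshold are our assumptions; the Consultation Paper prescribes none of them.

```python
# Illustrative sketch only: the metric, window and threshold are assumptions,
# not SEBI-prescribed values.
from collections import deque

class ModelMonitor:
    """Tracks a rolling accuracy metric and flags degradation for review."""

    def __init__(self, window: int = 500, alert_threshold: float = 0.70):
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.alert_threshold = alert_threshold

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_review(self) -> bool:
        # Sustained performance below the threshold would trigger internal
        # escalation and, under the proposed norms, perhaps a reportable event.
        return self.rolling_accuracy() < self.alert_threshold


monitor = ModelMonitor()
monitor.record(prediction="BUY", actual="BUY")
if monitor.needs_review():
    print("Model performance degraded; escalate to oversight team")
```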
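And a second sketch, of what a ‘full verbosity’ log entry for a single AI interaction might capture. The field names here are our assumptions, since the Consultation Paper does not define the term; the point is that the complete input, output and surrounding metadata are recorded, not summaries.

```python
# Hypothetical log schema: field names are our assumptions, since
# "full verbosity" is undefined in the Consultation Paper.
import json, uuid, hashlib
from datetime import datetime, timezone

def log_ai_interaction(model_id: str, model_version: str,
                       prompt: str, response: str, metadata: dict) -> str:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # which model/weights produced the output
        "prompt": prompt,                 # full input, not a summary
        "response": response,             # full output
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "metadata": metadata,             # latency, user, model parameters, etc.
    }
    line = json.dumps(record)
    # In production this would go to write-once storage retained for five years.
    print(line)
    return line

log_ai_interaction("stock-recommender", "2025-07-rev3",
                   "Summarize today's market movers",
                   "Indices closed higher...",
                   {"latency_ms": 412, "temperature": 0.2, "user_id": "anon-123"})
```

Even at this level of detail, one can see why five-year retention is trivial for an exchange but daunting for a smaller broker.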
Investor Protection – this part is relatively small and requires only adequate disclosure. Perhaps we can anticipate an extra five seconds of the ‘investments are subject to market risk’ audio, now covering AI use by intermediaries; that may well fade out in a few years, once regulators come to treat AI as being as commonplace as the computer.
One thing that seems clear is that liability shall always remain with the intermediary that deploys the AI, and grievance mechanisms shall continue in force, extending in future to AI-based grievances.
However, one thing SEBI must come out with is some restriction on customer-facing intermediaries deploying AI customer-support agents that have no agentic capabilities and are beyond any help to customers, especially in finance, where timing is of the essence.
Testing Framework – SEBI has kept this somewhat open-ended, leaving intermediaries to develop methods for continuous testing, and rightly so: AI systems are black boxes that are difficult to test, since results in one test do not guarantee the same performance even in identical circumstances. SEBI has also mentioned kill switches and control measures for intermediaries, which may seem of little use today, but with more agentic AI in play they will be a crucial step for anyone using AI. A minimal sketch of such a kill switch follows.
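The Consultation Paper does not specify a mechanism, so the following is only one possible pattern: a wrapper that checks a control flag before every model call and blocks execution once tripped. The flag source (here an in-memory object) is our assumption; in practice it could sit in a compliance-controlled configuration service.

```python
# Illustrative kill-switch pattern; the design is our assumption, not a
# SEBI-mandated mechanism.
import threading

class KillSwitch:
    def __init__(self):
        self._halted = threading.Event()

    def trip(self, reason: str) -> None:
        print(f"KILL SWITCH TRIPPED: {reason}")
        self._halted.set()

    def guard(self, model_call, *args, **kwargs):
        # Every AI invocation passes through this gate; once tripped,
        # calls fail fast instead of executing.
        if self._halted.is_set():
            raise RuntimeError("AI system halted by kill switch; manual review required")
        return model_call(*args, **kwargs)


switch = KillSwitch()
result = switch.guard(lambda order: f"recommended: {order}", "NIFTY call spread")
switch.trip("anomalous order flow detected")
# Any subsequent guarded call now raises instead of reaching the model.
```

The design choice worth noting is that the gate sits outside the model: a black-box system need not understand the halt instruction for it to be effective.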
Bias – is not defined by SEBI; however, biases in AI models exist because biases exist in the training data, i.e. in real-world data.
Further, in the broader scheme of things, SEBI has envisioned this as an AI-lite approach.
One aspect that seems to have been skipped is SEBI’s assessment of global benchmarks in AI and compute capabilities, so that its regulations also include a system for democratised AI access. The current Consultation Paper is presumably based on data collected from intermediaries based in India; however, with recent developments such as the introduction of Aladdin by Jio BlackRock, which draws on one of the world’s largest pools of data points, and recent reports of SEBI’s investigation into the trades of Jane Street (interested readers can read this article by the author to understand the report), it is essential to take into account the global parameters of how AI is being integrated into trading, which may also help SEBI in the surveillance of trades based on products beyond the framework of algo-trading.
Overall, the Consultation Paper relies on IOSCO’s paper on AI in capital markets for its broader principles. It may take some time for SEBI to come out with full guidelines and compliance requirements for each intermediary (or until an AI black-swan event hits, whichever is earlier). Going by SEBI’s historical pace, it issued guidelines for algo-trading through DMA in 2008, and the first guidelines for retail investor access through APIs came only in 2016. Similarly, SEBI has itself referred to IOSCO documents on AI in financial markets from the early 2020s; so there is quite some time to wait and watch, but also to anticipate and prepare for the next phases of regulatory change to come into force.