Algorithmic Trading: Mastering the machines

Deterministic and artificially intelligent algorithms are fundamentally changing entire financial marketplaces. But do they need a new surveillance approach or can existing systems, designed for humans, still do the job?

How do banks need to adapt the ways in which they monitor both deterministic algorithmic trading models and the emerging systems based on machine learning and true artificial intelligence? How can we develop a conduct risk approach to these new trading mechanisms and what are the appropriate testing and controls?

One way to look at the problem is to examine the differences between human and machine traders, and from there define the difference between systems designed to control people and systems designed to control machines. One key difference is that people generally know right from wrong; they also fear punishment. So traders’ knowledge that a control function exists can by itself prevent misconduct. Machines, on the other hand, know nothing of right and wrong, and have no fears. So in order to ensure their trades comply with regulations, those behaviours have to be coded in somehow. As one senior controls leader says: “So how do you do that? You’d need a philosopher and an ethicist just to be able to give you definitions!”
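
What “coding in” a conduct rule might look like can be sketched in a few lines. The example below is purely illustrative: the order fields and the fixing-window rule are assumptions for the sake of the sketch, not any firm’s actual controls.

```python
from dataclasses import dataclass

@dataclass
class Order:
    instrument: str
    side: str               # "buy" or "sell"
    quantity: int
    in_fixing_window: bool  # order would execute into a benchmark fixing

def passes_conduct_rules(order: Order) -> bool:
    """Return False for orders a compliance function would block."""
    if order.in_fixing_window:   # rule: no trading into benchmark fixings
        return False
    if order.quantity <= 0:      # rule: reject degenerate orders outright
        return False
    return True

# The machine has no fear of punishment, so the gate does the refusing:
assert not passes_conduct_rules(Order("EURUSD", "buy", 1_000_000, True))
```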

The answer to this question starts with the coding environment itself. These models tend to be coded by quants working in the trading room. This has a number of implications. First, the kind of people doing the coding matters.

As one banker tasked with looking at the ethics question says: “Whether you are talking about the practical or the ethical, it starts with defining how the machine is built in the first place. This code is being built by mostly young, mostly white, mostly men, with a certain attitude to risk. So you not only need to do an ethics review for the underlying purpose of the model – just because you can do something doesn’t mean you should – but also for any bias derived from that coder environment that may affect how the model executes.”

Second, coders sitting in a corner of the trading room are outside the normal application development process and tend not to use its accepted tools and procedures. This raises the question of how to impose bank-wide standards on what has been a largely agile process. It also emphasises the dangers of relying too heavily on the 1st line to evaluate and set controls for models. 

The 2nd line needs muscle
If coding is to remain in the business – and some believe it should be moved away completely, into the mainstream application development process – then independent model validation is critical. The PRA’s June 2018 supervisory statement says that testing should be by “a competent team not involved in the development of the code”. It also effectively invokes the SM&CR by stating that risk controls and each algorithm should have “an owner”. It makes clear the need for a firm’s management body to identify the relevant Senior Management Functions (SMFs) with responsibility for algorithmic trading. 
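
In practice, ownership and independent validation can be captured in a firm’s algorithm inventory. The following is a minimal sketch, with invented field names, of a record that encodes both the “owner” requirement and the independence test the supervisory statement describes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AlgoRecord:
    algo_id: str
    smf_owner: str       # accountable Senior Management Function holder
    developer_team: str
    validated_by: str    # must be independent of the developers
    last_validated: date

def independently_validated(rec: AlgoRecord) -> bool:
    """Testing by 'a competent team not involved in the development of the code'."""
    return rec.validated_by != rec.developer_team

rec = AlgoRecord("FX-TWAP-07", "SMF responsible for algorithmic trading",
                 developer_team="fx-desk-quants", validated_by="model-risk",
                 last_validated=date(2018, 11, 1))
assert independently_validated(rec)
```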

The FICC Markets Standards Board’s July 2018 transparency draft “Algorithmic Trading In FICC Markets Statement of Good Practice for FICC Market Participants” agrees, stating that good practice means that firms engaged in algorithmic trading should “have a formal risk management function independent of the front office to determine appropriate levels for pre-trade risk controls as well as to monitor the financial exposure and non-financial risks associated with algorithmic trading” and should “consider formalising a specific risk appetite for their algorithmic trading activity”.
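
A formalised risk appetite of this kind often reduces, at the pre-trade control layer, to a small set of hard limits. The sketch below uses invented thresholds; in practice an independent risk function, not the front office, would set and own them.

```python
# All thresholds invented for illustration.
RISK_APPETITE = {
    "max_order_notional": 5_000_000,    # per-order cap
    "max_gross_position": 50_000_000,   # running exposure cap
    "max_orders_per_second": 50,        # throttle against runaway loops
}

def pre_trade_check(order_notional: float,
                    gross_position: float,
                    orders_last_second: int) -> bool:
    """Gate applied before any algorithmic order leaves the firm."""
    return (order_notional <= RISK_APPETITE["max_order_notional"]
            and gross_position + order_notional <= RISK_APPETITE["max_gross_position"]
            and orders_last_second < RISK_APPETITE["max_orders_per_second"])
```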

As Chris Dickens, COO EMEA, Global Markets, HSBC, and the chair of the FMSB’s Conduct and E-commerce Working Group, says: “The starting point for this is that we need to develop a code of best practice. In general I think all market participants want the same thing and are on board.”

Adapting existing systems
Taking deterministic algorithms first, some of those responsible for designing controls point out that the issues are very similar to those they encounter with humans. First, says one Head of Business Controls, “you need to accept that there is an inevitability of machine misconduct just as there is with humans – so if you cannot stop everything before it happens, you need the fastest response to misconduct to minimise harm, which means good surveillance and monitoring.” 

The question then becomes how banks can leverage their pre-existing systems for the algorithmic environment. One immediate problem is that, leaving aside the coders themselves, algorithms create no electronic or voice communications to trigger upfront alerts or provide answers to forensic processes. This makes pre-release and pre-trade testing critical. It also means banks have to decide whether that testing is sufficient to allow the algorithms to operate, or whether they still need to satisfy the same continuous monitoring applied to human traders. “A robust change management program, which sufficiently documents the algorithmic development process, can assist firms in making these determinations. Such a design can bridge output trading activity from an algorithm, with inputs and variables that went into the algorithm, and enable insight into whether resulting activity is as expected,” says Stan Yakoff, Head of Americas Equities Supervision at Citadel Securities.
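
The traceability Yakoff describes can be pictured as metadata travelling with every order. The sketch below, with illustrative names throughout, fingerprints the inputs behind each order so reviewers can bridge resulting activity back to the algorithm version and variables that produced it.

```python
import hashlib
import json

def order_audit_tag(algo_id: str, code_version: str, inputs: dict) -> dict:
    """Attach a reproducible fingerprint of the decision context to an order."""
    fingerprint = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()[:16]
    return {
        "algo_id": algo_id,
        "code_version": code_version,      # e.g. a source-control commit hash
        "input_fingerprint": fingerprint,  # lets reviewers replay the decision
    }

tag = order_audit_tag("FX-TWAP-07", "a1b2c3d",
                      {"mid_price": 1.1342, "target_qty": 250_000, "horizon_s": 600})
```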

According to one former Head of Front Office Risk and Controls, it makes sense to run algorithmic trades through the human surveillance systems, both in the testing phase and when they are trading, to ensure that the system places orders or otherwise reacts in the way that is expected.

“Why not run the test outputs through the same trade surveillance engine that you run over your normal traders and trades?” he says. “Ask, ‘would this trade trigger an alert?’. If not then release [or, in a live environment, allow the trade]. After all, if there wasn’t a trader ID attached, you wouldn’t know if it was a machine trade or a human trade.”
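
That replay idea can be expressed directly in code. In the sketch below, surveillance_alerts is a toy stand-in for a real trade surveillance engine, not a known API; the release gate simply asks whether any simulated trade would have alerted.

```python
def surveillance_alerts(trade: dict) -> list:
    """Toy stand-in for the alert logic a human-trader engine might apply."""
    alerts = []
    if trade.get("cancel_ratio", 0.0) > 0.95:
        alerts.append("possible layering/spoofing")
    if trade.get("in_fixing_window"):
        alerts.append("benchmark-window activity")
    return alerts

def ready_for_release(simulated_trades: list) -> bool:
    """Release the algorithm only if no simulated trade would have alerted."""
    return all(not surveillance_alerts(t) for t in simulated_trades)
```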

Today, though, many firms exclude deterministic algo trading from trade surveillance because they assume conduct has been taken care of at the code review, model review and testing stage. The assumption is superficially logical because a deterministic algorithm should behave in a linear and predictable fashion. However, testing can miss coding biases, errors or even deliberate misconduct. More perniciously, the interaction of many algorithms trading against each other does not necessarily have linear and predictable outcomes, and so any particular algorithmic trade could interact with others in a way that causes unexpected client or market harm. Flash crashes are just one obvious result. The industry has only just started to look at what controls, aside from existing circuit breakers, can mitigate this problem.
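
One candidate control, sketched below with assumed parameters, is a firm-side kill switch that halts an algorithm whenever short-horizon prices move beyond a tolerance, whatever the cause.

```python
from collections import deque

class KillSwitch:
    """Halt an algorithm when short-horizon price moves exceed a tolerance."""
    def __init__(self, window: int = 100, max_move: float = 0.02):
        self.prices = deque(maxlen=window)  # rolling window of recent prices
        self.max_move = max_move            # e.g. a 2% move within the window
        self.halted = False

    def on_price(self, price: float) -> None:
        self.prices.append(price)
        lo, hi = min(self.prices), max(self.prices)
        if lo > 0 and (hi - lo) / lo > self.max_move:
            self.halted = True  # stop quoting and escalate to a human
```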

To err is human?
The challenges become even bigger with truly intelligent algorithms. Or do they? The more intelligent, the more human, and so the more obviously subject to existing human surveillance systems, right?

The problem with that interpretation comes back to the earlier point about deterrence. AI algos are human-like in the sense that they have not been given a complete set of deterministic rules and constraints, but simply information and a set of learning rules. But they are completely unlike humans in lacking any understanding of conduct and so, left to themselves, will pursue success regardless of ethics or regulations.

The obvious answer here is to limit the learning capabilities so that the model can only learn to behave in certain ways. But that erodes the benefit of having AI in the first place and it may well be that to guarantee ethical behaviour, the model must be rangebound to the extent that it simply recreates a deterministic algorithm.
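
Rangebounding a learning model can be as blunt as projecting whatever it proposes onto a pre-approved envelope, as in the sketch below. The caps are illustrative assumptions, not real limits.

```python
MAX_CLIP = 10_000          # largest child order the firm will permit
MAX_PARTICIPATION = 0.10   # cap versus observed market volume

def constrain_action(proposed_qty: float, market_volume: float) -> float:
    """Project whatever the model proposes into the pre-approved envelope."""
    cap = min(MAX_CLIP, MAX_PARTICIPATION * market_volume)
    return max(-cap, min(cap, proposed_qty))

# However aggressive the model's proposal, the envelope wins:
assert constrain_action(50_000, market_volume=200_000) == 10_000
```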

In that case, businesses will have to accept the hit to profitability that comes with an iterative approach of: constrain the AI engine, measure the outcomes, confirm they comply, reduce the constraints, and so on. This relies upon institutions having the discipline to forgo profits now, in order to avoid potentially huge but unknown risks. Banks will need a robust enough conduct culture to resist the temptation to de-constrain the models too early and must allow business heads and star quants to be challenged.
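
That iterative loop can be made mechanical. The sketch below, with invented parameters, widens the model’s envelope only after a clean monitoring period and tightens it sharply on any breach, which is precisely where the cultural discipline described above has to hold.

```python
def governance_cycle(envelope: float, breaches: int,
                     widen_factor: float = 1.25,
                     max_envelope: float = 1.0) -> float:
    """One review cycle: relax constraints only after a clean monitoring period."""
    if breaches == 0:
        return min(envelope * widen_factor, max_envelope)  # earn freedom slowly
    return envelope / 2  # any breach: tighten hard and escalate for review

# e.g. a model starts tightly constrained and earns wider bounds over time
envelope = 0.1
for period_breaches in [0, 0, 1, 0]:
    envelope = governance_cycle(envelope, period_breaches)
```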

Ultimately conduct is behavioural management and machines do not behave like people. However, for the foreseeable future, it seems as though it is still the behaviour of humans, from coders to business heads, that is most important.
