## STATISTICAL PROCESS CONTROL to monitor a trading system

Discussions about the testing and simulation of mechanical trading systems using historical data and other methods. Trading Blox Customers should post Trading Blox specific questions in the Customer Support forum.
White Cube
Roundtable Fellow
Posts: 98
Joined: Fri Oct 20, 2006 2:25 pm
Location: London, UK

### STATISTICAL PROCESS CONTROL to monitor a trading system

Statistical process control (SPC) allows monitoring of process parameters to ensure that quality standards are met. We can use control charts to monitor the process.
The control chart for the mean is an X-bar chart, while the control chart for the range is an R-chart. The centerline of a control chart is the expected value of the statistic, and statistical theory is used to establish an upper control limit (UCL) and a lower control limit (LCL).
The position of sample statistics relative to these control limits tells us whether or not the process is in control. We are willing to accept a certain amount of variation, which is normal and expected; excessive variation indicates that the process may be out of control. Statisticians use rules of thumb to detect abnormal patterns.

I think we could use these control charts as a way to gauge the risk of system death.
The following example shows the distribution of R-multiples from a trading system. I have displayed the control chart for the mean and the control chart for the range. The centerline of the X-bar chart represents the Expectancy. Every four new trades, a new point is plotted on the chart, telling us whether the system is performing as expected.
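A minimal sketch of the setup described above: X-bar and R chart limits computed from trade R-multiples in subgroups of 4. The constants A2, D3, D4 are the standard SPC chart factors for subgroup size n = 4; the trade data is illustrative, not from the attached charts.

```python
# SPC chart constants for subgroup size n = 4
A2, D3, D4 = 0.729, 0.0, 2.282

def xbar_r_limits(r_multiples, n=4):
    """Return (X-bar limits, R limits) as (LCL, CL, UCL) tuples."""
    # split the trade sequence into consecutive subgroups of n trades
    groups = [r_multiples[i:i + n] for i in range(0, len(r_multiples) - n + 1, n)]
    means = [sum(g) / n for g in groups]        # subgroup means
    ranges = [max(g) - min(g) for g in groups]  # subgroup ranges
    xbar = sum(means) / len(means)              # grand mean ~ Expectancy
    rbar = sum(ranges) / len(ranges)            # average range
    x_limits = (xbar - A2 * rbar, xbar, xbar + A2 * rbar)
    r_limits = (D3 * rbar, rbar, D4 * rbar)
    return x_limits, r_limits

# Each new subgroup of 4 trades adds one point to each chart; a point
# outside the limits flags the system as potentially "out of control".
trades = [1.5, -1.0, 2.0, -0.5, 0.8, -1.0, 3.0, -1.0]  # illustrative R-multiples
(x_lcl, x_cl, x_ucl), (r_lcl, r_cl, r_ucl) = xbar_r_limits(trades)
```

With this illustrative data the X-bar centerline sits at the sample expectancy (0.475R) and the R-chart's lower limit is zero, as expected for n = 4 where D3 = 0.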
Attachments
R-Multiples.JPG (34.16 KiB)
Control Chart for the range.JPG (29.35 KiB)
control Chart for the mean.JPG (29.59 KiB)

Angelo
Roundtable Fellow
Posts: 90
Joined: Fri Apr 29, 2005 4:31 am
Location: Italy
I'm no expert, but I've always thought SPC simply assumes a normal-like distribution of test results around the mean.

So, when the number of faulty outputs exceeds 3 sigmas, you can conclude that it's not just bad luck, but that there's something wrong in the production process.

This assumption is safe enough in industrial production, but I don't know if the same holds true for financial trading.

I doubt it, but even if this is a safe assumption for financial trading, I wonder whether you could get the same results with less trouble: for example, using Monte Carlo analysis (already programmed in TB). When your real-time results are worse than what should be expected over a reasonable period of time at a 90% confidence level (or whatever level you like), chances are the markets have changed and are no longer in tune with your system.
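The idea above can be sketched as a simple bootstrap: resample historical trade results to get the distribution of N-trade outcomes, then check whether the live result falls below the chosen confidence band. The function name, data, and thresholds are illustrative assumptions, not TB's actual Monte Carlo implementation.

```python
import random

def monte_carlo_lower_bound(historical, n_trades, confidence=0.90,
                            sims=10000, seed=42):
    """Return the resampled N-trade outcome at the (1 - confidence) percentile."""
    rng = random.Random(seed)
    # simulate many alternate histories by resampling past trades with replacement
    totals = sorted(
        sum(rng.choice(historical) for _ in range(n_trades))
        for _ in range(sims)
    )
    return totals[int((1 - confidence) * sims)]

# illustrative historical R-multiples and a hypothetical live result
historical = [1.5, -1.0, 2.0, -0.5, 0.8, -1.0, 3.0, -1.0, 1.2, -0.7]
bound = monte_carlo_lower_bound(historical, n_trades=20)
live_total = -6.0  # sum of the last 20 live R-multiples (hypothetical)
if live_total < bound:
    print("Live results are below the 90% Monte Carlo band: investigate.")
```

The attraction of this approach is exactly as stated: no normality assumption is needed, because the band comes from the system's own historical trade distribution.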
Attachments
MonteCarloEquityGraph_P1.png (23.21 KiB)

sluggo
Roundtable Knight
Posts: 2986
Joined: Fri Jun 11, 2004 2:50 pm
I don't think "R Multiples" is appropriate. By definition it focuses upon individual trades. But the truth is that we trade dozens of different instruments simultaneously, often with several different trading system algorithms. We've got lots of trades going at once, in parallel, and what matters isn't the individual trades themselves but rather the sum of their collective behavior, versus time. This isn't even a new phenomenon; the Turtles traded 2 different systems on 20 different markets, more than 20 years ago.

In today's world of multiple trades in parallel, I suspect the most beneficial way to analyze system behavior is to study portfolio-level returns. Don't pretend that trades are standalone, individual entities; instead, work with their sum: the daily (or weekly, or monthly) returns. You can put these on a control chart, or scramble them using resampling / Monte Carlo techniques, or plot rolling autocorrelations, or any of a thousand other ideas.
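One of the ideas above, sketched minimally: a rolling lag-1 autocorrelation of daily portfolio-level returns, which can itself be plotted and monitored over time. The window size and function names are illustrative assumptions.

```python
def autocorr_lag1(xs):
    """Lag-1 autocorrelation of a return series."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    if var == 0:
        return 0.0
    # covariance between the series and itself shifted by one day
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

def rolling_autocorr(returns, window=60):
    """One autocorrelation value per trailing window of daily portfolio returns."""
    return [autocorr_lag1(returns[i - window:i])
            for i in range(window, len(returns) + 1)]
```

The same daily-return series could just as easily feed a control chart or a resampling test; the point is that the input is the portfolio's aggregate behavior, not individual trades.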

Embrace the reality that a suite of systems is being traded on a portfolio of markets, meaning that lots of positions in lots of different markets are simultaneously in play.

Roscoe
Roundtable Knight
Posts: 250
Joined: Sat Jan 24, 2004 2:06 am
Location: Houston TX
I think that londonpopart has a potentially viable idea here. The attached pic shows the percentage contribution that each MarketSystem makes to the overall open equity of a randomly-chosen portfolio per day and, although the display is a bit cluttered, it can be clearly seen that each MarketSystem waxes and wanes in its % contribution over time. If I understand what is being proposed here (SPC) correctly, one would select a meaningful statistic and define appropriate upper & lower boundaries; when a MarketSystem moves outside those boundaries, that particular MarketSystem would not be traded until such time as it moves back "into range", so to speak. The question then becomes "which statistic?", and that does indeed seem to be the critical issue. It may be that only a lower-boundary cross need be cause to prevent a MarketSystem from trading, while an upper-boundary cross may well be regarded as beneficial (refer to Heating Oil and Coffee in the chart below for possible examples).

BTW: this seems to be a "survival of the fittest" concept, does it not? If so, then it does indeed have a strong appeal, at least to me.

Excellent and thought-provoking post londonpopart, thank you and a belated welcome to the forum.

Sluggo, while I agree with your contention that the portfolio sum is more meaningful than individual MarketSystem output, my current thinking is nonetheless that too little exploration has been done of the interaction that takes place within a portfolio when position-sizing is applied, and that is in fact the very reason I produced the chart below. This might itself be a topic for discussion related to londonpopart's post?

Edit: you can also see correlation in the chart, just as a side note.
Attachments
ScreenShot_001.jpg (217.78 KiB)

svquant
Roundtable Knight
Posts: 126
Joined: Mon Nov 07, 2005 3:39 am
Location: Silicon Valley, CA
Contact:

### CUSUM

Just an FYI: there is a lot of literature on using CUSUM (part of the SPC area) to evaluate and manage asset managers. According to a presentation I have, it is used as part of a toolbox to monitor some \$500B in assets... yes, it did say billions.

Basically it is used to quickly detect when a manager is underperforming benchmarks or perhaps drifting in style (claimed to be 10x faster than other techniques). All the presentations I have seen state that it is not a mechanical system to time the market or managers, but a warning sign for you to go investigate that manager and see why they are in trouble. Gee, just like in manufacturing: do a root-cause analysis and determine a corrective action plan.
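For the curious, a one-sided CUSUM for detecting underperformance can be sketched in a few lines. The statistic accumulates downside deviations from a target and raises an alarm only when the cumulative shortfall crosses a decision threshold; the target mean, allowance k, and threshold h below are illustrative assumptions, not values from the literature Marc mentions.

```python
def cusum_low(excess_returns, target=0.0, k=0.5, h=4.0):
    """Lower CUSUM: return the index of the first alarm, or None."""
    s = 0.0
    for i, x in enumerate(excess_returns):
        # the statistic only grows (downward) when x falls short of target - k;
        # small shortfalls within the allowance k are absorbed
        s = min(0.0, s + (x - target + k))
        if s < -h:
            return i  # alarm: go investigate the manager / system
    return None
```

This matches the described usage: it does not time anything mechanically, it just tells you *when* the cumulative evidence of underperformance has become too large to blame on noise.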

Perhaps they are being less creative than people on this forum, or are too focused on pension-fund equity managers, where there is a high cost of switching versus just turning a trading system on or off as part of a portfolio of trading systems....

Just use your favorite search engine: the papers and slides are all online. Have some fun.

Marc

BARLI
Roundtable Knight
Posts: 650
Joined: Sat Jan 17, 2004 6:01 pm
Location: USA
Marc, could you provide some insight into those mathematical models? I've been looking in different places for clues, but with my mathematical background it's not easy to understand Nelson's formulas...

hollander67
Contributor
Posts: 1
Joined: Thu Jul 10, 2008 11:43 am