(I posted this on the N forum, at the link shown at the end ... this is the same text)
On various occasions I've tried to point out the danger/absurdity of using the %Equity method, which 99.9% of all the testing and development seems to be standardized on. This post hopefully will explain it in VERY SIMPLE TERMS.
Bottom line, imho, almost every PortSim run done with %Equity is FATALLY FLAWED, and MISLEADING ... unless: a) it covers a very small number of trades, or b) it shows that the strategy is *losing* consistently.
Yes ... I'm hoping that bolded statement raises some eyebrows and encourages y'all to carefully think this through. I hope that the examples below will make it clear as to why it's important, and why future Nirvana and User testing and posts and marketing should use a different approach.
Here's the typical scenario: start with 100K, set Allocation to 10% of Equity, and test across ten years ... these values may differ a bit but I think they represent the majority of the brochures and user-posted test results.
If the size of each trade is 10% of current equity, then that means (duh) that a max of ten trades can be active at once ... but since some might argue that 5% is more common, let's use 7%. That means 14 trades at a time (please disregard the effect of margin ... the conclusions are the same ... hang in there)
So, at the outset of the PortSim backtest, that means each trade has about $7,000 to buy shares with. If the per-share price is $700, that's 10 shares. If the per-share price is $70, that's 100 shares. If it's $7/share, it buys 1000 shares.
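Just to make the share-count arithmetic concrete, here's a tiny Python sketch (the helper name is my own for illustration, not anything inside OT):

```python
def shares_for_allocation(equity, pct, price):
    """Number of whole shares a %Equity allocation buys at a given price."""
    return int(equity * pct // price)

# $100k account, 7% of equity per trade:
for price in (700, 70, 7):
    print(price, shares_for_allocation(100_000, 0.07, price))
# 700 -> 10 shares, 70 -> 100 shares, 7 -> 1000 shares
```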
As long as there is some kind of "reasonable" liquidity filter used for the symbol list, or it's a major list like the SP-100, then we shouldn't have any trouble getting those trades filled at a reasonable price ... let's just ignore the fact that the SP-100 was different 10 years ago, and that using it as the basis today is "cheating" since we know those symbols are going to end up doing well ... that's a whole different discussion.
If we use a cash-liquidity filter something like this: Avg(C,10) * Avg(V,10) > 100,000,000 ... this is the kind of recommended filter that seems to be the most prevalent ... then the only symbols that "get through" that filter are ones where at least $100 million is traded per day on the average over a couple of weeks. Sounds reasonable ... sounds safe. Some might even use a smaller value than 100,000,000 in order to get more symbols on the FL. But we'll stick with it.
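In plain Python terms, that OmniScript filter works out to roughly the sketch below (my own stand-in using plain lists, not platform data structures):

```python
def avg(values, n):
    """Simple average of the last n values."""
    return sum(values[-n:]) / n

def passes_liquidity(closes, volumes, n=10, min_dollars=100_000_000):
    """Rough Python equivalent of Avg(C,10) * Avg(V,10) > 100,000,000."""
    return avg(closes, n) * avg(volumes, n) > min_dollars

# A $50 stock averaging 3M shares/day trades ~$150M/day: passes
print(passes_liquidity([50.0] * 10, [3_000_000] * 10))
# A $5 stock on 1M shares/day trades only ~$5M/day: filtered out
print(passes_liquidity([5.0] * 10, [1_000_000] * 10))
```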
Now, assume further that this is a "SUCCESSFUL" strategy (or ATM method, etc) ... and that over the ten years, the equity curve rises from $100,000 to $100 million ... this may sound crazy (and that *IS* the point btw) ... but if you've been following the threads related to OmniVest and ATM, you'll find PortSim outputs posted that end up with $100 billion or even occasionally $100 trillion, after ten years. So, this example is going to use a "conservative" (hahaha) ending value of $100 million.
If you check this out for the market today ... starting with over 10,000 symbols in "All US Stocks", there are 574 symbols that pass the test. HOWEVER (and this is important) ... if we start with the SP500, only 320 pass ... and if we start with the SP100 (afaik the most common of all the canned test beds), then all of them make it through that test. So ... let's just assume that we have 100 symbols to "try out" (at the HRE) in order to make the strategy work.
However ... what about those same SP100 symbols, 10 years ago? How many of them passed that liquidity test back then? To find out:
1. Edit > Data Periods > 2650 bars (ie about ten years of 260 days/yr)
2. Select the SP 100 standard list for the focus list basis
3. Create a custom OmniScript column using this formula: Avg(C,14) * Avg(V,14)
... the highest value in that sorted column = 6,000,000,000 (ie $6 billion/day averaged over a two-week period)
... number of symbols in that list with > $100,000,000 avg daily liquidity as of ten years ago: 87
... the lowest value is ZERO ... in fact, eight symbols show 0 ... that is, they were not even being traded back then
... SO - only 87 of the SP100 symbols had adequate liquidity to be traded by the test strategy as of 2600 bars ago.
Now, let's see how often in that 10-year period the liquidity filter allows enough symbols through to the list that the strategy has available for trade-prospecting. To do this, add another custom OmniScript column (everything else the same): -sum((Avg(C,14) * Avg(V,14) > 1e8),2600)
... this formula counts how many days the liquidity passes the filter, over the past ten years
... 75 of the SP100 symbols pass that filter every day ... 86 symbols pass it for at least 2500 of the 2600 days.
... So ... it looks like 86 symbols offer a "consistently adequate liquidity", out of today's SP100 list over the past ten years.
... there are 98 symbols in the SP100 list today (go figger) ... so let's summarize by saying 85% of the SP100 symbols are adequately liquid for the past 10 years.
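For anyone who wants to see the day-counting logic spelled out, here's a rough Python stand-in for that OmniScript column (I'm assuming simple trailing averages and a strict > comparison; the toy data is mine):

```python
def dollar_liquidity(closes, volumes, n=14):
    """Trailing n-bar average dollar volume: Avg(C,n) * Avg(V,n)."""
    return (sum(closes[-n:]) / n) * (sum(volumes[-n:]) / n)

def days_passing(closes, volumes, n=14, threshold=1e8):
    """Count bars whose trailing n-bar dollar liquidity exceeds threshold."""
    count = 0
    for i in range(n, len(closes) + 1):
        if dollar_liquidity(closes[:i], volumes[:i], n) > threshold:
            count += 1
    return count

# Toy series: 20 bars of a $40 stock whose volume jumps from 1M to 5M
closes = [40.0] * 20
volumes = [1_000_000] * 10 + [5_000_000] * 10
print(days_passing(closes, volumes))  # only the later bars clear $100M/day
```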
The horse is not quite dead yet. I've noticed that in recent years, the starting list for testing has expanded to include the full SP500, presumably to provide more opportunities for the strategy to "hit" and thus allow full allocation of funds more consistently (a good thing).
If we check the SP500 using the same method as above, the result is that of the 490 symbols currently on that list:
... 55 did not exist 10 years ago
... 91 had all 2600 days pass the test
... 135 had at least 2500 days pass the test
... So ... that means only about 28% of today's SP500 symbols actually would be available for trading for the large majority of days over the past ten years.
Now, let's say that we want a bigger, more diverse list for a starting point ... ie we are working with a bigger starting population like the Russell 1000 ... so that we can apply other filters as well, to get "better but viable" candidates. Checking the Russ1k, of the 1854 symbols currently on that list:
... 230 did not exist 10 years ago
... only 139 had at least 2500 days pass the test
... So ... that means a bit less than 8% of today's Russ1k symbols actually would be available for trading for the large majority of days over the past ten years. (remember, more filters would likely be in play as well ... but assume the liquidity-percentage remains relatively constant)
If we use a list that is not "purely" large-cap for a starting point, such as the Russell 2000 ... of the 1854 symbols currently on that list:
... 844 did not exist 10 years ago
... ONLY 2 had at least 2500 days pass the test
... So ... that means less than 1% of today's Russ2k symbols actually would be available for trading for the large majority of days over the past ten years. This is to be expected, since the Russ2k holds low-to-mid-cap stocks that do not have as much institutional trading ... which means that for liquidity purposes, large-cap is almost a requirement.
Finally, let's say we have several extra "picky" filter-rules, and/or our strategy doesn't fire frequently ... in that case we want to open up the starting point fully, using All Optionable Stocks. Checking the Optionables, of the 4349 symbols currently on that list:
... 1939 did not exist 10 years ago
... only 180 had at least 2500 days pass the test
... So ... that means only about 4% of today's Optionable symbols actually would be liquid enough for trading for the large majority of days over the past ten years. (remember, more filters would likely be in play as well ... but assume the liquidity-percentage remains relatively constant)
Consolidating all this ... let's just average the results, presuming that sometimes you use the SP100, sometimes the SP500, sometimes the Russ1k, and sometimes the Optionable lists ... the overall average of the percent-available symbols over a ten-year period is: (85% + 28% + 8% + 4%) / 4 = 31% are viable throughout the test period.
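The back-of-envelope average, using the survival percentages tallied above:

```python
# Fraction of each list's current symbols that stayed adequately liquid
# for the large majority of the past ten years (figures from above)
viable = {"SP100": 0.85, "SP500": 0.28, "Russ1k": 0.08, "Optionable": 0.04}
avg_viable = sum(viable.values()) / len(viable)
print(f"{avg_viable:.0%}")  # prints 31%
```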
btw ... Dynamic Lists would raise these percentages considerably ... but DL's cannot be used with ATM so we need to stick with the analysis above
Now, let's consider the trades that are taken using the 7% allocation method, in the last year or so of that time period. 7% of $100 million is $7,000,000 ... which buys 10,000 shares of the $700 stock, 100,000 shares of the $70 stock, and 1 million shares of the $7 stock.
Hmmm. Those are some BIG trades ... even for the $700 stock. A $7 million trade, regardless of the number of shares actually bought/sold, would:
a. be very hard to fill in a single, clean order ... probably many trades would fail
b. almost certainly suffer significant slippage on entry/exit prices
c. almost certainly create a "pop" in the price (maybe the H for the day)
PRACTICALLY SPEAKING, I doubt that most OT users, regardless of their account size, would be comfortable "regularly" tying up more than about $100,000 in any given trade ... which means that with the 7% equity rule, we start getting uncomfortable with the trade sizes when the account reaches $1.5 million.
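That comfort threshold is just simple division (using the $100k per-trade cap and 7% allocation assumed above):

```python
max_comfortable_trade = 100_000   # per-trade dollar comfort level
allocation_pct = 0.07             # 7% of equity per trade

# Equity level at which 7% of equity first exceeds the comfort cap
threshold_equity = max_comfortable_trade / allocation_pct
print(f"${threshold_equity:,.0f}")  # about $1.4 million
```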
Hmmm again. So ... our "normal attractive" PortSim equity curve took us all the way to $100 million in 10 years ... and for many curves I've seen, it takes about half that time to get to $1.5 million.
So, here is the BIG QUESTION: How will things work in the second half of the test period, when the trade sizes required by PortSim get to be too big for our comfort level?
The answer is fairly clear ... we will LIMIT the trade sizes to our comfort-level maximum ... and in order to keep our account fully invested, we will have to TAKE MORE TRADES every day (the PortSim %Equity allocation model that got to $100 million used no more than 14 trades at a time)
HOWEVER, presumably we have used the cool ranking and market state methods in ATM or OmniVest to pick the "best" 14 trades every day. So ... if we need to find MORE trades to keep us allocated, we need to put our money into WORSE-ranked opportunities.
How many? Well, if our account gets to $100 million or so in the final year, and if we don't want to tie up more than $100,000 in any given trade, then that means we will need ONE THOUSAND SYMBOLS IN TRADE every day. That's 986 worse-ranked symbols than the PortSim is using.
However ... look back at the analysis of how many symbols pass the liquidity test ... only 31% of our list, on the average. Generalizing, that means to get 1000 tradeable symbols using the liquidity filter described above, our starting list has to have 3200+ symbols in it ... and our strategies have to be actively trading EVERY SINGLE ONE of them.
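That list-size requirement is the same arithmetic run in reverse (a quick check of the numbers above):

```python
import math

symbols_needed = 1_000   # concurrent $100k positions to deploy $100M
viable_fraction = 0.31   # average ten-year liquidity survival rate from above

starting_list_size = math.ceil(symbols_needed / viable_fraction)
print(starting_list_size)  # 3226 ... i.e. a 3200+ symbol starting list
```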
Clearly, this is absurd.
And that's why, at the beginning of this post, I made the bold statement that the PortSim runs we so often use to select strat's, tune ATM's, etc. are USELESS ... misleading.
Please look over my earlier post in this thread that suggests alternatives.
The coolest and simplest and most flexible fix to PortSim modelling that would solve ALL of this, is ...
Allow the user to select more than one Allocation Method ... and give them a single new input that tells PortSim to set the size of each trade based on the MINIMUM (or average, or worst-case) of the selected methods.
Doing this, we can use % Equity until its sizes are too big, and let Fixed $ take over above that. Or (my preferred choice by far), ALSO activate the Turtle Trader $ at Risk method as well ... and tell PortSim to use the Minimum of the three.
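Here's a sketch of how that combined rule might look (all names are hypothetical, and the Turtle-style risk sizing here is my own simplification, not the plugin's actual formula):

```python
def pct_equity_size(equity, pct):
    """%Equity: dollar size grows (compounds) with the account."""
    return equity * pct

def risk_based_size(risk_dollars, entry, stop):
    """Turtle-style sketch: position sized so the entry-to-stop loss
    equals risk_dollars, expressed here as position dollar value."""
    shares = risk_dollars / abs(entry - stop)
    return shares * entry

def combined_allocation(equity, pct, fixed_cap, risk_dollars, entry, stop):
    """Proposed rule: take the MINIMUM of the selected methods."""
    return min(pct_equity_size(equity, pct),
               fixed_cap,
               risk_based_size(risk_dollars, entry, stop))

# Small account: %Equity governs (7% of $100k = $7k)
print(combined_allocation(100_000, 0.07, 100_000, 10_000, 50, 46))
# Huge account: the fixed-$ cap takes over instead of a $7M trade
print(combined_allocation(100_000_000, 0.07, 100_000, 10_000, 50, 46))
```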
My guess is that making this change to OT would be a lot simpler than other possible approaches involving custom formulae, etc. that I've suggested earlier. So, if you agree that this is a concern, and like the proposed solution, please drop an email to Ed or Jeff referencing this post ... the link to this post is: https://www.omnitrader.com/currentclients/otforum/thread-view.asp?th...
Posted 8/31/2018 5:48 AM (#9160 - in reply to #9159) Subject: Most-Useful PortSim-Research Alloc-Meth
Location: USA: GA, Lawrenceville
I failed to make one additional major point at the end of my earlier post (was tired of typing). That is - using compounded %Equity models over a multi-year period *hugely* biases the results for the most recent trades.
In my example I pointed out that 100k-1.5m took roughly the first 5 years, and the latter five years added $98.5m - effectively “squashing” the comparative performance differences between strategies related to the trades in the first half. And the last quarter of the test period is similarly hugely more influential than the third quarter of the period.
So - bottom line - the compounding effect makes the ten year test absurd per se. Doing a test over one or two years, as I stated at the beginning, with the same strategy and trade frequency, is not as much of a problem.
For the purposes of *comparing the relative merits* of different strategies or methods, a fixed dollars trade size is by far the best approach - it evens the playing field over time and gives each trade statistically the same influence as every other trade during the test period.
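That "squashing" effect is easy to demonstrate with a toy simulation (a deliberately oversimplified sketch ... here the entire account compounds through each trade, purely to make the recency bias visible):

```python
# Toy demonstration: the same 200 trades (+2% win alternating with -1% loss),
# sized two ways. Under compounding, the later trades dominate the dollar P&L.
returns = [0.02, -0.01] * 100

# %Equity, fully compounded (oversimplified: the whole account rides each trade)
equity = 100_000.0
pnl_compounded = []
for r in returns:
    pnl = equity * r
    equity += pnl
    pnl_compounded.append(pnl)

# Fixed $: every trade is sized off the same $100k stake, so each trade
# carries statistically equal weight across the whole test period
stake = 100_000.0
pnl_fixed = [stake * r for r in returns]

print(pnl_compounded[0], pnl_compounded[-1])  # late trades far larger in dollars
print(pnl_fixed[0], pnl_fixed[-1])            # same stake first and last
```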
(In response to a comment from Larry re the benefits of using risk-based sizing in contrast to equity-based) …
I fully agree that dollars at risk in a trade (ie entry vs stop, times shares) is highly important - and for most active stock traders, likely more important to portfolio health than the capital tied up in the trade. The Turtle Trader plugin provides an allocation method based on volatility-risk, which is a close cousin.
These posts have been trying to encourage people to use a more mathematically rigorous and pragmatic choice for allocation than compounded % Equity. For simplicity’s sake, using the tools we all have at hand now, fixed-$ seems the most justifiable and reliable selection when trying to choose between alternative strategies or methods or parameter settings.
However - I would also consider Fixed Risk $ to be a reasonable method, as long as that same method is used in actual trading (for those that own Turtle Trader). If the strategies being considered all use some form of volatility-based exit methods (such as trailing stops based on atr-multiples), this would be the superior choice. Since it’s not compounded, it keeps a level playing field for all trades across a testing period.
It’s my observation though that most strategies produced by Nirvana tend to gravitate to exit methods that are either time based or %price based (rather than ATR-volatility based). Insofar as that may be the case for a given set of tests, fixed $ would likely yield more statistically-representative conclusions.
Here’s a challenge to the “thinkers and try-ers” in the group.
The next time you decide to run some PortSim tests to determine which strategies are best, or to fine tune parameters - *do it twice*.
The first time, use the good ole %Equity method with compounding.
The second time, run the tests the same way but using Fixed $ allocation. Maybe start with a bit more money (to avoid killing the account early on) - maybe $200k instead of $100k - but this is not essential.
Then, separately evaluate the tests to see which strategies / methods / parameter-values bubble to the top and appear to be “the best”.
I am sure that in many cases, the answers will be DIFFERENT.
In those cases, my point is that the “right choice” - the one that is most likely to hold up well in future trading - will be the one based on non-compounded fixed $ allocation.
Posted 8/31/2018 7:00 AM (#9161 - in reply to #9159) Subject: Most-Useful PortSim-Research Alloc-Meth
Location: USA: FL, Bradenton
Thanks for your thinking on that. It all tends to get a little confusing.
I agree the fixed $ may be a better way of looking at results. For futures trading, each contract increments the $ by its required margin. I have tried a few runs and examined the results in Excel, and it looks like PortSim uses the margin requirement from the database, so that does simplify it a bit. If I set the Sim setting to a max of X shares, then it only trades up to that number, which eliminates that parabolic increase in trade size as the account grows.
I'm uncertain if all the gears mesh with ATM and port sim with futures. Have you done much with futures in OT?
One of the problems I have right now is when I associate my user data symbols in OT it doesn't also link the COT data. Barry says that isn't user editable. I have looked at the databases with Access but I know very little about it and don't want to screw it up. I keep hoping more people will get into futures so maybe N could put some more resources into futures.
In the meantime is this something that you could do on a fee based arrangement?
Posted 8/31/2018 7:25 AM (#9162 - in reply to #9161) Subject: Most-Useful PortSim-Research Alloc-Meth
Location: USA: GA, Lawrenceville
I’ve noticed your posts about the apparent lack of flexibility &/or documentation re how OT deals with futures data, margins etc. I’m not (currently) a futures trader - I don’t even have an active futures data feed unless a client lets me use theirs while working a project for them. So, I’ve never tried to delve into the nooks and crannies of N platform mechanics in futures. My devel work centers around stocks simply because most of the tools and concepts which work well for stocks also have significant value for futures and forex and (simple) options.
More to the point however - the KEY ISSUE in this allocation debate, INSOFAR as it applies to testing and development A vs B vs C comparisons, hinges on COMPOUNDING. % Equity naturally uses compounding. Fixed $ (either measured by per-trade Capital-commitment or by per-trade Risk-exposure) does not suffer from the potentially horribly-misleading decisions which compounding sets the stage for.
Using popular lingo, I would classify results and decision making from Compounded allocation while testing A vs B as “Fake News”.
I believe this is true for stocks, futures, forex, options, mutual funds, bitcoin, Texas hold’em, and Ponzi schemes. It’s just basic math and scientific method.
Posted 9/1/2018 10:45 AM (#9163 - in reply to #9162) Subject: The DANGER of %Equity PortSim Allocation
Location: USA: GA, Lawrenceville
Here are some additional useful clarifications and suggestions made in the parallel Nirvana thread ...
Vinay pointed out that PortSim's Account Settings tab already has an option for setting a max limit on the dollar-amount, on a "per trade" basis ... thanks again for that reminder, Vinay! My suggestion would be to keep that value <= $100,000 or so for the "typical" OT client ... unless you have a *very* large account, and *like* the idea of all the eggs in just a few baskets. And, to stay strict to the topic ... this setting would not be necessary for the "research" mode using Fixed$.
A test regimen that I suggested:
Pick out, say, five individual strategies, all of which you think are pretty good ones.
Using the same focus list and 10 yr time frame etc, do a PortSim run on each separate strategy using Fixed $. Examine the results and make a decision from those runs only, using whatever criteria you want, about which of the five is best, second best, etc.
Now, repeat that process for the same five strats with same FL and timeframe etc, this time using % Equity. Examine those results and use the same decision criteria to rank the strategies.
My prediction is that the order in which you ranked them will end up being different. And my assertion is that the decision made from the Fixed $ runs, regarding that ranking, will be the most reliable and representative one, for future use.
Re doing this with ATM - it is a MUCH more complex playing field, and care must be taken to set param's appropriately to accomplish similar decision making comparisons. But the comparisons always should be Fixed $ vs Fixed $.
The main focus of this discussion is relative to DOING RESEARCH - not about active live trading at the HRE. The research I am speaking of is when a user is trying to answer a question like:
A. Of these five strategies, which is the best and which is the worst?
B. Within a range of possible parameters for a Block in a Strategy, what values are best?
C. (More complex) For some ATM feature such as Ranking formula, what’s the best to use?
To answer those kinds of focused questions, I believe that my prior points have satisfactorily proven that Fixed$ is the allocation method that will most properly utilize the wide variety of market fluctuations across a ten-year backtest, such that the resulting choices have the highest likelihood of providing robust future performance - regardless of the allocation method used in actual trading.
I am absolutely confident of this, fwiw. It’s just math. (But of course I have been known to be wrong before … ;-)