Bruce
Member
An NQ update: I completed some new builds earlier in the year, and over the past week or so I have had some awesome results from real trades; one day last week delivered a $20k day.
These builds were done using AU, build dates 2007/01/01 to 2017/06/30, 8 indicators for the indicator test, Build = Goldv1.1, WF on all data to 2020/02/28. SF = CloseToHighLow3V2, which appears to be the go-to SF for ES, NQ and YM. The performance report (with commission and slippage) below is for OOS only (2017/06/30 to 2021/03/09), and I've included a real-world equity chart for the past couple of weeks.
Peter has made some further advancements on these systems, and I suspect that with the most recent GSB release there's more that could be achieved.
I hope this helps...
admin
Super Administrator
Awesome, Bruce. How many contracts / systems did you trade to get your $25k of wins?
Systemholic
Banned
Hi Bruce, I don't seem to be able to find CloseToHighLow3V2 in my AU. I did see CloseToHighLow3, but not the V2. Are they the same?
Also, Peter, I cannot find your closeLessHighLowv3 either. Is it available already?
Thanks
Carl
Member
Great news, Bruce. Thanks for sharing.
I got some great strategies on ES by using the CloseToHighLow3V2 secondary filter, crossANDclosed, ES 30 min, session 0830-1500.
Did you use the usual session time 08:30 to 15:00 as well, or did you get even better results using alternative session times?
All the best
@Systemholic, CloseToHighLow3V2 differs from CloseToHighLow3.

CloseToHighLow3:
if (close > prevDayClose) then
    result = hhNorm
else if (close > prevDayClose) then
    result = -1 * llNorm;

CloseToHighLow3V2:
if (close > prevDayClose) then
    result = hhNorm
else if (close <= prevDayClose) then
    result = -1 * llNorm;
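A minimal Python sketch of the difference (hhNorm / llNorm are stand-ins here for GSB's normalized distances to the session high and low, not the actual GSB definitions):

# Sketch of the two secondary-filter variants as written above.
def close_to_high_low_3(close, prev_day_close, hh_norm, ll_norm):
    # Original version: the second branch repeats the first condition,
    # so the short (-llNorm) branch can never fire.
    if close > prev_day_close:
        return hh_norm
    elif close > prev_day_close:
        return -1 * ll_norm
    return 0.0  # fallthrough when close <= prevDayClose

def close_to_high_low_3_v2(close, prev_day_close, hh_norm, ll_norm):
    # Corrected version: close <= prevDayClose now reaches the second branch.
    if close > prev_day_close:
        return hh_norm
    elif close <= prev_day_close:
        return -1 * ll_norm

print(close_to_high_low_3(100, 101, 0.4, 0.6))     # 0.0 (short branch unreachable)
print(close_to_high_low_3_v2(100, 101, 0.4, 0.6))  # -0.6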
RandyT
Member
Quote: Originally posted by Carl | ... |
Actually, CloseToHighLow3V2 is just the latest (corrected) version of CloseToHighLow3. If you are using the current version of GSB, you will get the V2 indicator.
admin
Super Administrator
Quote: Originally posted by Carl | ... |
08:30 to 15:00 should be used.
CrossandCloseD works, but crosssingle is best.
Bruce
Member
Quote: Originally posted by Carl | ... |
Thanks Carl. These results are with the normal 830_1500 session times; however, I also use 930_1500 for some of my systems, as I have a personal bias that 830_930 is typically 'amateur hour', though that may be dispelled statistically!
Carl
Member
Thanks, Peter.
I have tried PF entry mode crosssinglelevel, but my results are not as good as with crossandclosed.
I wonder what difference in our GSB settings is causing this different outcome.
admin
Super Administrator
Quote: Originally posted by Carl  |
Thanks, Peter.
I have tried PF entry mode crosssinglelevel, but my results are not as good as with crossandclosed.
I wonder what difference in our GSB settings is causing this different outcome.
|
I was getting about 100 in FavB on ES with cross and close, and about 200 with cross.
Will publish my NQ settings in a while. Perhaps it's due to your tight stop? I use $2000.
Carl
Member
Quote: Originally posted by admin | ... |
For my first test runs I used a 2000 USD stop loss as well.
I suspect it's my training filter and fitness function that are guiding me in a different direction.
For short-only ES strategies based on crossANDclosed and closetoHL3 I am getting 180 in FavB.
For long-only strategies, much less.
admin
Super Administrator
@Carl: for fitness we use NP/DD. How many long / short in FavB? Did you start at 1997 on the ES data?
Carl
Member
Hi Peter,
I've tried several build fitness functions.
Crosssinglelevel gets me 248 strats in FavB; crossANDclosed only 142.
My tests show better-quality strategies with crossANDclosed: the average trade is higher than with crosssinglelevel.
Short only is about 180 in FavB; long only just around 20 in FavB.
Only 1 GSB run for each test, so the reliability can be improved by doing more of the same GSB runs.
admin
Super Administrator
Quote: Originally posted by Carl | ... |
How do you define "my tests show better quality strategies with crossANDclosed"?
It's an interesting comment.
You can also build with cross single, and after the system build put in a close + offset > Closed filter.
RandyT
Member
Quote: Originally posted by Carl | ... |
Carl, I've seen this exact behavior, and I do consider Average Trade to be an important metric when ranking systems. This has all led to the voices in my head having a very heated conversation about what the best measurement of the results we are seeing is.
NickW has started looking at these results in the context of Count x AT x FavB#. I think this is an interesting way to measure, but again I worry that we are overlooking better systems that may not occur that often, by weighting toward the higher counts found instead of the truly unique market filters.
The approach of building short/long only systems is also interesting. I've found that certain entry modes work better for shorts vs. longs, and it has made me question my approach of trying to build combined short/long systems. I've looked at doing short-only / long-only in the past but never really embraced it.
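A minimal sketch of how a Count x AT x FavB# style score could be computed from a table of family results (the numbers and column names are made up for illustration, not GSB's export format):

import pandas as pd

# Hypothetical summary of strategy families from a GSB run.
families = pd.DataFrame({
    "family": ["A", "B", "C"],
    "count": [42, 17, 5],               # strategies from this family that reached FavB
    "avg_trade": [85.0, 140.0, 210.0],  # average trade in USD
    "favb_total": [64, 23, 6],          # total FavB entries across runs
})

# Count x AT x FavB# style score: larger families with a healthy average
# trade rank higher, which is exactly the bias being questioned above.
families["score"] = families["count"] * families["avg_trade"] * families["favb_total"]
print(families.sort_values("score", ascending=False))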
admin
Super Administrator
@Carl, Randy
There is merit to long only / short only, but you get fewer trades, lower Pearson values, etc. I've played with it but never felt it was really worthwhile.
You could do long, then short, then combine the metrics etc. to compare with combined long & short. Not sure it's worth all the effort.
As for rare gems of systems that are hard to find: I'm still of the mindset that the stronger the overall metrics (FavB), the higher the chance of good out-of-sample performance.
Even with the highlow3 secondary filter, it seems easy to get systems with a really low correlation to each other on losing trades.
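A minimal sketch of checking the correlation of losing-day P&L between two systems (the daily P&L series are made up for illustration; GSB computes its own version of this):

import numpy as np
import pandas as pd

# Made-up daily P&L series for two systems over the same dates.
rng = np.random.default_rng(0)
pnl = pd.DataFrame({
    "sys_a": rng.normal(50, 400, 250),
    "sys_b": rng.normal(40, 350, 250),
})

# Keep only days where at least one system lost money, then correlate.
losing_days = pnl[(pnl["sys_a"] < 0) | (pnl["sys_b"] < 0)]
corr = losing_days["sys_a"].corr(losing_days["sys_b"])
print(f"correlation on losing days: {corr:.2f}")  # low values suggest the losses don't cluster together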
Carl
Member
Interesting discussion, guys!
@Peter, I find ES strategies based on crossANDclosed + closetoHL3 better, because the average trade on the long side is good enough to trade live.
For a majority of the strategies that end up in FavB using crosssinglelevel + closetoHL3, the average trade on the long side is only about 80 USD. After subtracting costs and slippage, only a net average trade of 50 USD remains.
Changing the training filter and/or build fitness might improve the results. I have to test this.
@Randy, "what metrics to use" or "how am I going to build and select strategies": that's a difficult one, but very important. Maybe the most important issue in the whole development process.
I have done a few proof-of-concept tests using Python scripts on GSB data: let machine learning models look for relationships between the GSB indicators, the in-sample metrics and the out-of-sample outcome.
It seems that when using all indicators, the machine learning models indicate the indicators are the most important features.
When I select a smaller group of indicators, the ML models show that the in-sample metrics become more important.
It is even possible to look for the best predicting metrics to come up with a "best" fitness function.
The issue is that the results change with every GSB run, so they aren't stable.
And what about the curve-fitting risks of doing this?
Here is one of the "feature importance" tables from RandomForestRegressor in Python:
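A minimal sketch of how a table like that can be produced with scikit-learn, assuming a CSV export of GSB build results; the file and column names here are hypothetical, not GSB's actual export format:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical export: one row per built strategy, indicator flags plus
# in-sample metrics as features, out-of-sample net profit as the target.
data = pd.read_csv("gsb_build_results.csv")
feature_cols = [c for c in data.columns if c != "oos_net_profit"]

model = RandomForestRegressor(n_estimators=500, random_state=1)
model.fit(data[feature_cols], data["oos_net_profit"])

importance = (
    pd.Series(model.feature_importances_, index=feature_cols)
    .sort_values(ascending=False)
)
print(importance.head(15))  # the "feature importance" table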
OUrocketman
Junior Member
This is an interesting discussion indeed.
Carl, interesting work using machine learning in an effort to elicit the most probable predictors of future success!
At a high level, I think it's interesting that for a large number of indicators the importance falls on the indicators. It seems to support the overall notion that when we down-select the indicators, we are beginning to sniff out interesting features that, while they may not be easily intelligible by human inspection, are in fact there. Once this is accomplished, it seems to make intuitive sense that the big money has knowledge of this as well, and seeks to make more money in the future in the most efficient way possible, based on the underlying features that tend to move a market, thus the higher weight on performance.
So it seems like, at a high level, your feature extraction in some way verifies Peter's two-pass approach. I'm curious to know: have you considered ASTAB-C, RSTAB-C, WFE and OOS fitness as features as well, and did they simply not make the cut on the importance scale in your screenshot above?
Also, have you checked out Microsoft's open-source LightGBM (Light Gradient Boosting Machine)? It's rumored to outperform random forest, but I'll confess I'm not currently familiar enough with the topic to assess it; a longer-term goal of mine.
The MS stuff is here: https://lightgbm.readthedocs.io/en/latest/
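For reference, a minimal sketch of the same feature-importance idea using LightGBM's scikit-learn wrapper, reusing the hypothetical gsb_build_results.csv layout from the sketch above:

import pandas as pd
from lightgbm import LGBMRegressor

data = pd.read_csv("gsb_build_results.csv")
feature_cols = [c for c in data.columns if c != "oos_net_profit"]

# Gradient-boosted trees instead of a random forest; feature importances
# come out the same way, so the two rankings can be compared directly.
model = LGBMRegressor(n_estimators=500, random_state=1)
model.fit(data[feature_cols], data["oos_net_profit"])

importance = pd.Series(model.feature_importances_, index=feature_cols).sort_values(ascending=False)
print(importance.head(15))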
admin
Super Administrator
Quote: Originally posted by OUrocketman | ... |
I'm no longer doing 2-pass. I do one pass and force max 10 indicators, min 4.
I think it's likely just as good as 2-pass, but it's faster.
Also, on NQ I found that 3 indicators on the first pass worked better than 2. For other markets I'm not yet sure.
@Carl:
Average trade can be boosted by long/short day-of-week filters, the Andrea Unger pattern filters, and changing the highlow3 to GSB_CloseToHighLow3v4_offset after you have built the system.
Tertiary filters now work in GSB, but on a quick test they don't match. Pattern filters are being worked on now.
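On the day-of-week idea, a minimal sketch of what such a filter does to average trade (the trade list and the allowed days are made up for illustration; GSB applies its own version of this):

import pandas as pd

# Made-up closed-trade list: entry date and net P&L per trade.
trades = pd.DataFrame({
    "entry_date": pd.to_datetime(["2021-03-01", "2021-03-02", "2021-03-03", "2021-03-04", "2021-03-05"]),
    "pnl": [120.0, -80.0, 240.0, -40.0, 160.0],
})

# Allow entries only on, say, Tuesday-Thursday (0 = Monday).
allowed_days = {1, 2, 3}
filtered = trades[trades["entry_date"].dt.dayofweek.isin(allowed_days)]

print("avg trade before:", trades["pnl"].mean())
print("avg trade after :", filtered["pnl"].mean())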
Carl
Member
Quote: Originally posted by OUrocketman | ... |
Thanks OUrocketman,
There are a lot of boosting models: LightGBM, CatBoost, XGBoost.
My teacher always says: try an ensemble of models to see what works best on your data set.
I have used the WF metrics in previous data-analysis projects.
The issue with WF is that I only have a couple of hundred rows of data, whereas it is easy to get 100k rows of data from build and validation results.
By using sampling, I divide the GSB test data into an in-sample and an out-of-sample part, and the large number of data points makes the analysis more reliable.
I will look for these old WF test results and will let you know.
Very high parameter stability, like 100%, seems very good at first glance, but it isn't always, because 100% stability could also mean that only one particular set of parameter values gives the best result, and a small change in a parameter value might cause the results to collapse.
So 100% parameter stability doesn't always mean the strategy is robust (I think...).
I prefer 50% to 80% GSB parameter stability, but I always do an extensive rolling and anchored WF in TS and EWFO to be sure.
With all that said, not all of the strategies I have selected according to this selection process were profitable going forward.
admin
Super Administrator
Just an update for the week.
I had many GSB builds to try, mainly working on custom parameters, new indicators, and learning the NQ market.
Some of the new indicators are very good, and there are numerous indicators in the pipeline. I did show some screenshots in the beta build section.
I think NQ is right now the hottest market; metals have gone quiet.
The closetohighlow3 works on many markets like ES, NQ and YM, and likely many more.
The last few days I got stuck in that I can't reproduce the results I had a few days earlier, and I don't yet know why. These things can be very time-consuming to resolve.
The goal is to do the next video on NQ. ES and likely YM are 95% the same build process.
admin
Super Administrator
Here is some indicator testing on NQ with an 08:30 session start but trading starting at 09:30, and with trading starting at 08:30. Note the accumdist variations at the top end of the list.
rws
Member
Peak optimization always carries risks.
There was a system in the past that shifted all parameters in the system from 1 to 5 steps up and down. I have been testing this for the last year.
You could have all kinds of conditions under which systems are rejected, for example if the profit of the system was not at least 90% of that of the original settings with a parameter shift of 1 to 5. I find a shift of 3 is often enough.
This was StrataSearch, a program that has been discontinued, but it had some great ideas and it works well on portfolios of stocks, where it can find good OOS results, especially with this principle of parameter stability. I use it to run on about 400 of the SP500 stocks, and if IS is good there is a very good chance that OOS is good. Walk-forward is almost not necessary with so many tickers and good parameter stability.
What you typically see is that if the 2S or 3S setting goes down very rapidly, you will often have bad OOS performance.
In this way there is complete control over how stable you would like your parameters to be, for many different metrics and for the selected shift size.
You will easily see 10 times fewer systems found, but the ones that are found have a much higher chance of good OOS performance in different market conditions. It is one of the best ways to avoid a system that had a lucky peak IS.
The downside of this program is that it only works on daily data, but I am convinced that the principle also works intraday.
@Peter, please evaluate this kind of parameter stability; it works.
One way to predict the SP500 index is to count the number of buy/sell signals of its stocks, which seems to work reliably.
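A minimal sketch of that breadth idea, counting how many constituent stocks currently have a buy vs. sell signal (the signal table is made up; the actual entry rules are not shown here):

import pandas as pd

# Made-up table: one row per SP500 constituent, latest system signal for each.
signals = pd.DataFrame({
    "ticker": ["AAPL", "MSFT", "XOM", "JPM", "PG"],
    "signal": ["buy", "buy", "sell", "buy", "sell"],
})

# Breadth reading: net buys as a fraction of all signals.
buys = (signals["signal"] == "buy").sum()
sells = (signals["signal"] == "sell").sum()
breadth = (buys - sells) / len(signals)
print(f"buys={buys} sells={sells} net breadth={breadth:+.2f}")  # positive leans bullish for the index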
Quote: Originally posted by Carl | ... |
Carl
Member
Hi rws,
What do you mean by "shift"? Do you mean shifting the parameter value over bars, so a shift of 2 delays the parameter value by 2 bars? Or do you mean just changing the parameter input values?
I agree that an important goal is to find strategies with good parameter stability.
I think that is exactly what Peter is trying to do in the development process he describes in his YouTube videos about GC, CL and ES:
step 1. build strategies based on Nth IS price data
step 2. select strategies based on Nth IS or Nth (IS+OOS) performance
step 3. back-test the top 300 selected strategies on three separate OOS years
step 4. select the strategies that also perform well in OOS and copy these strategies to FavB
step 5. determine the different families in FavB; a family is a set of strategies that use the same combination of indicators
If a family in FavB is large, this means a lot of strategies with different parameter values (but the same indicators!) end up among the good-performing strategies. So the larger the family, the better the parameter stability.
@Peter, please correct me if I am wrong in explaining your method.
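A minimal sketch of the step-5 family counting, assuming a FavB export where each row lists a strategy and its indicator combination (the column names are hypothetical, not GSB's export format):

import pandas as pd

# Hypothetical FavB export: one row per strategy, indicators as a string.
favb = pd.DataFrame({
    "strategy_id": [101, 102, 103, 104, 105, 106],
    "indicators": [
        "RSI|AccumDist|ATR", "RSI|AccumDist|ATR", "RSI|AccumDist|ATR",
        "Momentum|ADX", "Momentum|ADX", "CCI|Stochastic",
    ],
})

# A "family" = same combination of indicators; family size is a rough proxy
# for parameter stability, since only the parameter values differ within it.
family_sizes = favb.groupby("indicators")["strategy_id"].count().sort_values(ascending=False)
print(family_sizes)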
rws
Member
Hi Carl,
With a 1-shift, an optimized RSI value of 10 is changed to 9 and 11 and the results are compared to those at 10.
With a 2-shift, an optimized RSI value of 10 is changed to 8 and 12 and the results are compared to those at 10.
This has a disadvantage when parameters are very small: it will give a message when there is a parameter value of 3 and a shift of 3, but the build will continue. This doesn't happen very often, but in effect it avoids systems with small parameter values, so there is room for improvement. The final result, though, is that only systems with good parameter stability are built.
In GSB, a big family of the same systems with different parameter values would give the same confirmation, but it is not as direct, visible and configurable as in StrataSearch.
In StrataSearch you can see the performance degrade as a result of the shift in parameters, and/or you can reject systems during the build that have no parameter stability by just coding some simple rules like:
profit with parameters shifted by 2 > profit with original params x 0.8
profit with parameters shifted by 3 > profit with original params x 0.75
drawdown with parameters shifted by 1 < drawdown with original params x 1.15
etc.
$V_5SPctProfitable > $V_PctProfitable x 0.7, meaning that if the parameters are shifted by 5, the % profitable must remain at least 0.7 times the original.
So it gives a very clear indication of whether the current parameters are in a peak and how wide the "non-peak" is. You have control over how wide you want the "non-peak" to be and based on what criteria, and you can use it as a rejection metric while building.
StrataSearch builds in stages: first systems, then optimization of parameters, then optionally adding intermarket rules like breadth, sector strength, etc. There can be many rules for all kinds of metrics. It also allows building a system on the SP500 with a rejection rule that the system should also work to a certain % on other markets like the Nasdaq, without actually building on the Nasdaq in that case.
I know there is parameter stability in GSB, but as you noted, even at 100% there can still be instability OOS, so I mentioned the way it was done in StrataSearch. I don't know of a direct comparison between GSB and StrataSearch in this case.
I am an early adopter of GSB but didn't use it for a while; within a couple of weeks I will have time again, and I have read about many good additions.
I am not here to advocate StrataSearch, because it is useless for futures and commodities in real time. I also don't advise test-driving StrataSearch, because it takes months to test every aspect and it is a discontinued program. But after putting in many hours and long build times, I am convinced it is possible to build systems that are reliable OOS by having this broader, user-configurable parameter optimization.
Of course you can build many systems and try as many parameter changes as possible in the hope that you land on the broader peak, but I can't see that directly in GSB or instruct GSB to require a broad peak. How do you know you have built enough parameter variations for the system? You can confirm in TradeStation that there was no peak, but as far as I can see there is no way to build for a broad peak.
I wanted to test a system builder that supports intermarket relations, which StrataSearch has, and while searching for that I found that this way of parameter optimization in StrataSearch was more important than the intermarket relations.
I think it would be a valuable addition to GSB.
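A minimal sketch of that shift test in Python, assuming a backtest(params) function that returns net profit and max drawdown; the 0.8 / 1.15 thresholds echo the example rules above, and everything else is hypothetical rather than StrataSearch's actual syntax:

def passes_shift_test(params, backtest, max_shift=3):
    """Reject systems whose performance collapses when every integer
    parameter is nudged up and down by 1..max_shift."""
    base_profit, base_dd = backtest(params)
    for shift in range(1, max_shift + 1):
        for name, value in params.items():
            for shifted in (value - shift, value + shift):
                if shifted <= 0:  # skip impossible values for small parameters
                    continue
                trial = dict(params, **{name: shifted})
                profit, dd = backtest(trial)
                # Example acceptance rules in the spirit of the post:
                if profit < 0.8 * base_profit or dd > 1.15 * base_dd:
                    return False
    return True

# Usage with a stand-in backtest; a real one would run the strategy.
def fake_backtest(params):
    rsi = params["rsi_len"]
    return 10000 - 300 * abs(rsi - 10), 2000 + 50 * abs(rsi - 10)

print(passes_shift_test({"rsi_len": 10}, fake_backtest))  # True: the peak is broad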
Quote: Originally posted by Carl | ... |
rws
Member
You can even have a custom criterion that searches for and optimizes stable parameters, apart from profit and average trade. That often works a bit better than only setting exclusion limits, because it allows systems that initially weren't good because of bad parameters to get improved.
$V_AvgTrade * $V_AvgAnnReturn / $V_LossDays
* ($V_MaxDrawDU / $V_1SMaxDrawDPctU)^2
* $V_MaxDrawDU / $V_2SMaxDrawDPctU
* $V_MaxDrawDU / $V_3SMaxDrawDPctU
If you want even broader stability you could add the 4- and 5-shift terms:
* $V_MaxDrawDU / $V_4SMaxDrawDPctU
* $V_MaxDrawDU / $V_5SMaxDrawDPctU
In addition you can set limits on
$V_MaxDrawDU / $V_XSMaxDrawDPctU
so that systems are rejected.
So this is NP * AT times a ratio of the drawdown at the current parameters to the drawdown when there is a 1-, 2- or larger shift.
$V_LossDays is the average number of days for losing trades.
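A minimal sketch of how such a stability-weighted fitness could be computed, assuming the per-shift drawdowns have already been measured (the names mirror the StrataSearch metrics above, but the function itself is hypothetical):

def stability_fitness(avg_trade, avg_ann_return, loss_days, max_dd, shifted_dds):
    """Stability-weighted fitness: the base score is scaled down when drawdown
    grows under parameter shifts. shifted_dds = [1-shift DD, 2-shift DD, ...]."""
    score = avg_trade * avg_ann_return / loss_days
    for i, dd_shifted in enumerate(shifted_dds, start=1):
        ratio = max_dd / dd_shifted               # below 1 when the shifted run draws down more
        score *= ratio ** 2 if i == 1 else ratio  # 1-shift term is squared, as in the formula above
    return score

# A system whose drawdown barely changes under shifts scores higher.
print(stability_fitness(150, 40000, 3.0, 2000, [2100, 2200, 2400]))
print(stability_fitness(150, 40000, 3.0, 2000, [3000, 3500, 4200]))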
Quote: Originally posted by rws | ... |