GSB Forums

Update to GSB methodology. A must-read: the backpacker and The Art of War by Sun Tzu


admin - 6-7-2018 at 06:49 PM

ES WITH 2 OSC. degradation -27.2% oos fitness 3876k
ES WITH 3 OSC. degradation -20.4% oos fitness 4495k
ES WITH 5 OSC. degradation -20.1% oos fitness 4304k

admin - 11-7-2018 at 10:18 PM

These are the out-of-sample results of all my CL systems added together.
2 systems are highly correlated when you look at the in-sample period.
Out of sample from Jan 12, 2018.

On a related note,
the best CL market degradation figure was -29%.
For those who have not seen it, the must-see video is https://youtu.be/2R4t9uYzfD4
Build 1500 systems, but don't look at the metrics of every second day;
then record the metrics of the unseen days.
degradation% = (1 - (OOSMetrics / ISMetrics)) * 100
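For anyone reproducing this outside GSB, here is a minimal sketch of that formula in Python. The numbers fed in below are purely illustrative; note the posts quote degradation with a minus sign, while the formula as written yields a positive number when OOS underperforms IS.

```python
def degradation_pct(is_metric: float, oos_metric: float) -> float:
    """degradation% = (1 - (OOSMetrics / ISMetrics)) * 100.
    0 means no drop out of sample; bigger means the unseen
    (every-second-day) results fell further below in-sample."""
    return (1.0 - (oos_metric / is_metric)) * 100.0

# Illustrative only: in-sample fitness 4495 vs 3876 out of sample
print(degradation_pct(4495, 3876))  # ~13.8
```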







cl_all.png - 106kB closedTrade.png - 69kB

Attachment: Login to view the details

edgetrader - 18-7-2018 at 09:14 AM

Quote: Originally posted by admin  
ES WITH 2 OSC. degradation -27.2% oos fitness 3876k
ES WITH 3 OSC. degradation -20.4% oos fitness 4495k
ES WITH 5 OSC. degradation -20.1% oos fitness 4304k


It seems to me that adding more indicators to GSB could be useful: 5 oscillators can approximate what 3 more suitable oscillators would do. My suggestions are William Blau and Thomas DeMark. In the latter case you may have to use different names, because DeMark trademarked his indicator names (though not the formulas, which are disclosed in his books).

admin - 18-7-2018 at 04:24 PM

Quote: Originally posted by edgetrader  
Quote: Originally posted by admin  
ES WITH 2 OSC. degradation -27.2% oos fitness 3876k
ES WITH 3 OSC. degradation -20.4% oos fitness 4495k
ES WITH 5 OSC. degradation -20.1% oos fitness 4304k


It seems to me that adding more indicators to GSB could be useful: 5 oscillators can approximate what 3 more suitable oscillators would do. My suggestions are William Blau and Thomas DeMark. In the latter case you may have to use different names, because DeMark trademarked his indicator names (though not the formulas, which are disclosed in his books).


If there is anything you want, post it under the wish list:
http://trademaid.info/forum/viewthread.php?tid=9&page=5#pid2...
All requests from any other users will be done in one sweep, not at each request.
Can you send code too?

admin - 1-8-2018 at 11:18 PM

I am working on both improved methodology and building crude oil (CL) systems.
To build CL systems on 30-minute bars, I was getting 22,000 iterations per system.
Using 28,29,30,31,32 minute bars, I am getting 4,775,000 iterations per system.
Using 25,26,27,28,29,30,31,32,33,34,35 min bars takes 240,500,000 iterations per system!!
So I built 3000 systems on 29,30,31 min bars:
market degradation score -19.5%.
I then did market verification on 27,28,29,30,31,32,33 min bars:
90 systems passed all 7 markets' verification;
their market degradation score was -10.7%.

I then did market verification on 26,27,28,29,30,31,32,33,34 min bars:
46 systems passed all markets;
market degradation score was -10.3%.

I then did market verification on 25,26,27,28,29,30,31,32,33,34,35 min bars:
25 systems passed;
market degradation score was -10%.

What this clearly shows is that systems that pass market verification on other time frames also degrade a lot less out of sample.

You might argue that the reason they passed the market verification tests is that their fitness was higher than that of the other systems.
The top of the 3000 systems by highest Pearson degraded 13.2%;
the top systems by highest fitness degraded 37%; and most importantly,
the out-of-sample fitness on the 25_35 min bars was higher than the out-of-sample fitness of the systems chosen by peak fitness and top Pearson.
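For readers who want to replicate this selection step outside GSB, here is a minimal sketch of the pass/fail gate, assuming a system "passes" a verification bar size when it meets minimum profit factor, Pearson, and trade-count thresholds there (the thresholds below are the ones mentioned elsewhere in the thread; the dict layout is hypothetical):

```python
def passes_all(results, min_pf=1.2, min_pearson=0.95, min_trades=100):
    """results: one dict of metrics per verification bar size (e.g. 27..33 min)."""
    return all(r["pf"] >= min_pf and r["pearson"] >= min_pearson
               and r["trades"] >= min_trades for r in results)

# Toy example: one system verified on two bar sizes (values invented)
demo = [{"pf": 1.4, "pearson": 0.97, "trades": 250},
        {"pf": 1.3, "pearson": 0.96, "trades": 180}]
print(passes_all(demo))  # True
```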



cyrus68 - 2-8-2018 at 12:07 AM

I presume you used fairly loose performance filters.
How many indicators were used?

admin - 2-8-2018 at 01:17 AM

Quote: Originally posted by cyrus68  
I presume you used fairly loose performance filters.
How many indicators were used?

For this test I used 5 indicators, though 3 would have been fine.
I need to research more whether 3 or 5 is best on CL.
Training was set to 100%, PF 1.2, Pearson 0.95, min trades 100.
It's the Pearson on CL that makes it hard to find CL systems, not the PF. Nth was set to 1.
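A quick note for anyone unfamiliar with the Pearson filter: assuming it is the usual correlation of the cumulative equity curve against trade number (a straightness measure; the thread does not spell out the exact GSB definition, so treat this as my interpretation), it looks like this:

```python
import numpy as np

def equity_pearson(trade_pnl):
    """Pearson r of the cumulative equity curve vs. trade number.
    r near 1.0 = a straight, steadily rising equity line; choppy
    markets like CL make a high r (e.g. 0.95) hard to achieve."""
    equity = np.cumsum(trade_pnl)
    x = np.arange(1, len(equity) + 1)
    return np.corrcoef(x, equity)[0, 1]

print(equity_pearson([120, -50, 200, 80, -30, 150]))  # invented trades
```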

admin - 12-8-2018 at 06:21 PM

I'm still working hard on crude oil systems, while at the same time debugging the next build of GSB. Some of the bugs have been hard to find, hence the delay in the next version.
I built 20,000 CL systems on 29,30,31 minute bars.
Market degradation was a very healthy -12.1% (that's the out-of-sample drop when every second day is used as out of sample).
All of 2018 was also unseen by GSB.
When I verified the systems on all bars from 25 to 35 minutes, we got 814 systems. A PF of 1.8 and Pearson 0.95 were the criteria for market verification.
The 2018/1/1 to 20190609 (yyyymmdd) out-of-sample profits were an average net profit of $2134, average trade of $145.26 and a profit factor of 5.07.
A very good outcome. No walk forwards were done at this stage. It implies 2018 was a low-profit but high-profit-factor year for CL.

To compare with my 7 CL systems that had been walk-forwarded and built only on 30-minute bars:
the profit for the same period was an average of $2220 per system, PF of 1.866, average trade $131.69.
3 of the 7 systems were a long-only system and a short-only system put on the same chart (for this example treated as one system only). I'm still not convinced this is a good idea, but you get much higher in-sample results - though I would expect more degradation.
The OOS results of the 3 long & short systems over the same period were $3060 per system, PF of 1.720.

Bruce - 12-8-2018 at 09:52 PM


Any possibility of you pulling together a video of the work you're putting into the CL system?

admin - 12-8-2018 at 10:00 PM

Quote: Originally posted by TradingRails  

Any possibility of you pulling together a video of the work you're putting into the CL system?


I'm keen to do this, but there are 2 issues yet to resolve.
Sometimes on my CL, the GSB results <> the TS results.
In the current build there is no way to look at the performance metrics of
all systems collectively using the WF parameters.
What I want to do is, say, take the top 30% or so of the 814 systems that have been walk-forwarded, and then see their collective 2018 performance. I've been working on CL systems for weeks, so I'm also keen to be live trading them.

admin - 15-8-2018 at 10:05 PM

In chatting with the lead GSB programmer yesterday, something came to light that I had never thought of.
Secondary filters (SF) are one of the most significant settings in GSB. The best SF is unique to every market, but it's normally very easy to find.
You find it via the market degradation ratio using nth 1, and by also looking at the new computations/system ratio (low is best).

For many markets, especially indices, closeD(1)-close is the best setting.
GSB was designed and started on the S&P 500 e-mini (ES), so a value of 16 is typical. This equates to an $800 move.
The reason we are moving to (closeD(1)-close)*BigPointValue with a roughly $800 setting is that this works much better
on all of ES, ER, EMD, NQ, DOW.
This is implemented from the unreleased 48.12 builds onwards.
But with crude oil at $1000 per point (compared to ES at $50 a point), the equivalent of an $800 move would be 0.8. The default settings of the SF entry level, however, are 0 to 100 step 1.
This is not suitable for CL; 0 to 2 step 0.025 would be more appropriate. However, with CL and closedBPV, $2000 was the best value (from memory), so we might use 0 to 3000 with a step of, say, 100.

Now, what I've missed is that SF closeD(1)-close <> GA closeD(1)-close.
The reason is that SF closeD(1)-close is NOT NORMALIZED and GA closeD(1)-close IS normalized.
Normalization in GSB means all values are made to average in the range of -100 to 100 over the last 100 bars.
It was never my intention, for example, to normalize closeD(1)/close, but this is what happens when SF closeD(1)/close is used. This is not wrong, and it clearly works. However, I am likely to have SF options of GA (normalized), closeD(1)-close, closeD(1)/close and (closeD(1)-close)*BigPointValue.
So we are likely to add SF closeD(1)/close soon.
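Since the normalized/non-normalized distinction matters so much here, a minimal sketch of one plausible reading of that normalization rule (GSB's exact formula isn't published in this thread, so this is illustrative only): rescale each value by the typical magnitude of the last 100 bars so values average into the -100 to 100 range.

```python
import numpy as np

def gsb_style_normalize(values, window=100, target=100.0):
    """Rescale each value by the rolling mean absolute value of the
    previous `window` bars, so typical values land near +/-target.
    One plausible interpretation of GSB's normalization, not its actual code."""
    values = np.asarray(values, dtype=float)
    out = np.full(len(values), np.nan)
    for i in range(window, len(values)):
        scale = np.mean(np.abs(values[i - window:i]))  # typical recent magnitude
        if scale > 0:
            out[i] = values[i] / scale * target
    return out

# e.g. normalizing a closeD(1)-close series before using it as a filter:
# filt = gsb_style_normalize(closed1 - close)   # closed1, close: price arrays
```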

Shown below are where CF C_bpv is, the settings used for the SF range, and the computations/system ratio.

sf.png - 24kB sf2.png - 19kB sf3.png - 29kB

admin - 17-8-2018 at 05:06 PM

SF non-normalized Close/CloseD(1) has been added in 48.14. On CL the results were very poor. Basically on CL, any of the Close/CloseD filters worked equally well if they are normalized. On CL, non-normalized closedBpv worked best, but results were poor overall with a non-normalized SF. This implies that for each market, we need to figure out which SF filter should be used, and whether it is normalized or not. This is a fairly simple and quick task.

Carl - 18-8-2018 at 12:38 AM

Hi Peter,

It is already possible in GSB to switch off the SF.

Might be a good idea to add the possibility to switch off the primary filter, so we can do a quick scan on what SF works best on a particular symbol.

Thanks

admin - 18-8-2018 at 12:41 AM

Quote: Originally posted by Carl  
Hi Peter,

It is already possible in GSB to switch off the SF.

Might be a good idea to add the possibility to switch off the primary filter, so we can do a quick scan on what SF works best on a particular symbol.

Thanks

You mean switch off the primary filter?
I've never tried, but this could be possible; else you could make the primary filter the same as the secondary filter.
Regardless, it's a simple and quick process to use nth 1 and see what works best.

admin - 19-8-2018 at 09:11 PM

Here is an incomplete video showing how to make crude oil systems on multiple time frames, using verification data from other time frames.
What will be in the complete video is shown in this slide.

video-goalsC.png - 52kB

Attachment: Login to view the details

Bruce - 21-8-2018 at 09:31 PM


This is really good material Peter.

If this sets a foundation for a best practice with developing systems in GSB when it comes to indices, what would the analysis combos look like for say ES? (I think you may have done an earlier video using ES, would that still apply here?)

Does the final TS workspace have all 3 data streams? I'm trying to work out what the final output (deliverable) looks like in TS.

For the CL example you've used here, will the workspace have HO, NG, RB, etc or just 30min CL?

Thanks!

admin - 21-8-2018 at 09:52 PM

Quote: Originally posted by TradingRails  

This is really good material Peter.

If this sets a foundation for a best practice with developing systems in GSB when it comes to indices, what would the analysis combos look like for say ES? (I think you may have done an earlier video using ES, would that still apply here?)

Does the final TS workspace have all 3 data streams? I'm trying to work out what the final output (deliverable) looks like in TS.

For the CL example you've used here, will the workspace have HO, NG, RB, etc or just 30min CL?

Thanks!

Thanks for the comments, it's appreciated. All these tests should be done again for ES, I think. But ES is easier and more forgiving than CL.
The final workspace has all 4 data streams (CL, NG, RB, HO) but only 30-minute bars.
My last tests showed market degradation was best on ES with no cash indices. I don't think it was a big difference, but diversity should be greater with other data streams. This is a whole new universe to explore.
Currently I am less into multi-market than I am multi-time-frame.
But in time we can objectively figure this out using GSB.
So I would build on 29,30,31 and validate on the other bars of 25 to 35. Validation on ER, EMD, NQ (DOW?) MIGHT be worth doing, but won't work well using close-closeD (close-closeDbpv would be better).
We also need to re-investigate which secondary filters are best:
close-closeD(bpv) and close/closeD, non-normalized vs normalized.
I'm intending to have ALL of these options chosen genetically fairly soon.
Out of 20,000 CL systems, only 890 passed on all other bars from 25 to 35 minutes. Of these 890, only 4 passed market validation on NG, RB, HO, and the 2018 results were below average on those 4 systems. But 4 is not a big enough sample, so I decided to remove only systems that validated on zero other markets (1 was OK).
This is all a new area of research.

cyrus68 - 21-8-2018 at 10:14 PM

Thanks for an interesting video. Aiming for 20,000 unique systems, and using fairly tight performance filters, clearly showed the intention to generate tradeable systems. You had already settled the issue of number of indicators, secondary data, secondary filter...etc... in preliminary testing.

admin - 21-8-2018 at 10:25 PM

Quote: Originally posted by cyrus68  
Thanks for an interesting video. Aiming for 20,000 unique systems, and using fairly tight performance filters, clearly showed the intention to generate tradeable systems. You had already settled the issue of number of indicators, secondary data, secondary filter...etc... in preliminary testing.

I've spent a lot of time on CL, and so have a good idea of what works best from previous testing. My goal is both to figure out the best methodology and to make all my trading systems on all markets built on multiple time frames. GSB still needs some tweaks to fully realize all of this.

admin - 22-8-2018 at 04:26 PM

GSB 48.22 can now do nth tests with walk-forward results.
So you can build, say, 2000 systems with nth 1, apply the final WF parameters to all systems,
then measure market degradation when nth is inverted. This can be compared to the same tests
without the WF parameters.
It's the same test as I did in the CL video, but leaving 2018 out of sample, to see if the WF parameters worked better.
I'm keen to have more GSB cloud power from users today (and a big thanks to those who already donate CPU power).
I have 2 x 900 WFs to do today on 29,30,31 min bars. That's a task that would take one computer a decent amount of time.
So if you can give me your GSB share key and run some workers, I will get you the 48.22 version of GSB.

Bruce - 22-8-2018 at 08:47 PM

Quote: Originally posted by admin  
GSB 48.22 can now do nth tests with walk-forward results.
So you can build, say, 2000 systems with nth 1, apply the final WF parameters to all systems,
then measure market degradation when nth is inverted. This can be compared to the same tests
without the WF parameters.
It's the same test as I did in the CL video, but leaving 2018 out of sample, to see if the WF parameters worked better.
I'm keen to have more GSB cloud power from users today (and a big thanks to those who already donate CPU power).
I have 2 x 900 WFs to do today on 29,30,31 min bars. That's a task that would take one computer a decent amount of time.
So if you can give me your GSB share key and run some workers, I will get you the 48.22 version of GSB.


Happy to give you GSB cloud power, just haven't been able to work out exactly how to address it to you!

admin - 22-8-2018 at 08:50 PM

Quote: Originally posted by TradingRails  
Quote: Originally posted by admin  
GSB 48.22 can now do nth tests with walk-forward results.
So you can build, say, 2000 systems with nth 1, apply the final WF parameters to all systems,
then measure market degradation when nth is inverted. This can be compared to the same tests
without the WF parameters.
It's the same test as I did in the CL video, but leaving 2018 out of sample, to see if the WF parameters worked better.
I'm keen to have more GSB cloud power from users today (and a big thanks to those who already donate CPU power).
I have 2 x 900 WFs to do today on 29,30,31 min bars. That's a task that would take one computer a decent amount of time.
So if you can give me your GSB share key and run some workers, I will get you the 48.22 version of GSB.


Happy to give you GSB cloud power, just haven't been able to work out exactly how to address it to you!

info@trademaid.info
I will give you the URL to 48.22, and the finer details of how to do it are below.

Open one 48.22 worker.
In app settings > workplace > share keys, add "topsecretuniquetome" (anything you like).
Set group percentage to 100%.
Save the app settings as, say, workers4peter.
Then open more workers until CPU usage gets high; it takes a few minutes for them to start.
WF doesn't need much RAM.
The offer is appreciated.

admin - 24-8-2018 at 05:14 PM

I have the results of 20,000 CL systems built on 29,30,31 minute bars. Remove all those that don't verify on all the other time frames (25,26,27,28,32,33,34,35 min bars):
degradation of -12%.
Then walk forward the results (nth=1) on 29,30,31 minute bars.
Compare the original non-WF results with the nth out-of-sample WF results.
Compare the original WF results with the nth out-of-sample WF results.
Then compare the 2018 non-WF results with the 2018 WF results.
The results are encouraging in some cases!
I will publish the exact results next week.
I'm working on the same test, but with WF on 30-minute bars only,
and WF on all data pre-2018, then comparing with the 2018 results. I'm stuck on this due to a GSB bug.
And thanks to all the GSB users who contributed their CPU power to do this. If you're one of them and can't wait for the results to date, email me.


saycem - 25-8-2018 at 01:39 AM

Great work Peter, can't wait to see the findings.

boothy - 28-8-2018 at 06:56 PM

I have done some testing on Gold,

GC.30 no secondary data - OOS degradation -40.5%
GC.30.29.31 no secondary data - OOS degradation -11.5%
GC.30 with SI.30 as secondary data - OOS degradation -53.6%
GC.30.29.31 with SI.30.29.31 as secondary data - OOS degradation -11.5%

The fitness function on all was net profit * average trade.

There appears to be no real benefit in having silver as secondary data, but this is in line with other testing showing that building systems on 29,30,31 min bars is very beneficial.





GC.30_NP_AT..PNG - 144kBGC.30.29.31_NP_AT.PNG - 160kBGC.30.SI_NP_AT.PNG - 130kBGC.30.29.31.SI_NP_AT.PNG - 156kB

admin - 28-8-2018 at 07:00 PM

That's excellent, Boothy. My other comment is that I felt gold was not tradable the last few years - just trending badly. Is this still the case?
It would also be good to try market verification on 25,26,27,28,32,33,34,35 min bars, but this will reduce your sample size greatly.
I used PF 1.8, Pearson 0.95.

boothy - 28-8-2018 at 09:11 PM

Quote: Originally posted by admin  
That's excellent, Boothy. My other comment is that I felt gold was not tradable the last few years - just trending badly. Is this still the case?
It would also be good to try market verification on 25,26,27,28,32,33,34,35 min bars, but this will reduce your sample size greatly.
I used PF 1.8, Pearson 0.95.


Yes, I did verification on 25-35 min bars on GC 29,30,31; 17 out of 1000 systems passed 8/8 with PF 1.8, Pearson 0.95.
For GC/SI 29,30,31, 8 out of 500 systems passed 8/8.
I tried to copy what you did in the CL video, comparing the 2018 out of sample with the systems that passed 8/8 verification (I think I did it correctly). Interestingly, degradation didn't improve for the systems that passed 8/8 verification for 2018. I didn't get a chance to look much more into it, but I think 2018 in particular was a bad year for gold systems.

I still want to do more testing on different fitness and filters for gold systems.

admin - 28-8-2018 at 09:40 PM

Your testing looks great, but results from 17 or 8 systems can't be counted on as a statistically large enough sample. I am grateful to all the GSB community who have given me GSB workers. Some of the research I'm doing right now is striking, and what works best is a little unexpected.
I'm repeating all my tests to be 100% certain. One setting changed degradation from -19% to -4.5%, so I have to be sure this is right.
Good that you agreed with my comment that gold is such a bad market now. I guess the range is very low.
GSB is a whole universe to explore, and I look forward to more results from you and others in time.

admin - 2-9-2018 at 09:50 PM

I have now done the remainder of the video on crude oil.
I show the results of building systems on multiple time frames (29,30,31 & 28,29,30,31,32) and
doing walk forward on 30 vs (29,30,31) vs (28,29,30,31,32).
The implications of this are far-reaching for all system development.

Comments welcome


Attachment: Login to view the details

edgetrader - 4-9-2018 at 04:27 AM

Quote: Originally posted by admin  
I have now done the remainder of the video on crude oil.
I show the results of building systems on multiple time frames (29,30,31 & 28,29,30,31,32) and
doing walk forward on 30 vs (29,30,31) vs (28,29,30,31,32).
The implications of this are far-reaching for all system development.

Comments welcome



Looks good. As you say, nth-day and 2018 use up a lot of the data for OOS testing. When you think of it, the point of doing that is to figure out the best way to build systems, i.e. what additional bars should be used, how many indicators, etc.

Once you know the best way to build systems, you could apply it to all data in order to make production systems for live trading.

admin - 4-9-2018 at 04:33 AM

Agreed. As you say, there is a difference between figuring out the best way to build a system, and building systems.
I used nth=1, then later did verification on 25...35 min bars with nth=all.
I got my first system chosen today, but there were plenty of other good ones. I will publish a TS report tomorrow.

admin - 5-9-2018 at 04:21 AM

Here is my first multi-market CL system, verified on all bars from 25 to 35 minutes. 2018 is out of sample. I have made 6 other systems today, and hope to be finished with CL in < 2 days.


Attachment: Login to view the details


Bruce - 5-9-2018 at 04:47 AM


Hey Peter, great video. Last night I followed your process and did a quick simulation with the ES using 28-32 minute data, with individual long and short systems over the past 5 years, to get familiar with your dev process. This immediately yielded amazing initial results, which I will explore further. Like your work!

admin - 10-9-2018 at 07:54 PM

Thanks for the comments, TradingRails.
It's been a massive task, but I now have enough CL systems and have refined the methodology.
The only thing left is a long-only and a short-only CL system. CL 29,30,31 minute bar market degradation is amazing:
-3.0% on the 29,30,31 minute bars, and an awesome -0.5% on the 30-minute results (of the 29,30,31 bars).
Short-only requires massive CPU time, and it will degrade a lot more.
I enclose the results in XML files. These need to be imported into portfolio analyst to view.
All systems are out of sample from 1/2/2018.
All systems with names like gsbclmx.y were made on 28_32 min bars, walk-forwarded on 28_32 min bars, and verified on 20, 40, and 24_36 min bars
(24_36 means 24,25,26,...,35,36).
Systems with LS in the file name were made on 30 min bars only and are a long-only and a short-only system on the same chart. The short-only system is much more likely to degrade out of sample.
Systems like cl23-dj use slightly different session times and data streams, made on 30-minute bars only.
Not all results are current to today. GSBls1 is poor in 2018, but the last few months are a big part of the reason (it traded a lot), and nothing made money in the last few months.
I will include one system for GSB purchasers in the private forum as time permits.





Attachment: Login to view the details


admin - 12-9-2018 at 06:18 PM

I did market verification tests on natural gas: -14.6% market degradation with 8958 systems. A great result. That's on 28,29,30,31,32 minute bars.
I expect it would improve with verification on wider time frames. For 2018 I got a -$932 average profit per system.
Conclusion: NG is a great market, and really bad right now. The low range of the NG market in 2018 confirms this. My NG systems were also poor out of sample in 2018. A contrast to crude oil, where 2018 results were very good overall but the market degradation results were similar.
I'm happy to hear other users' experiences on these or other markets.

Carl - 15-9-2018 at 02:49 AM

Here are the results on my tests on ES.
I thought, let's use a really big out-of-sample period, just to see what happens.

Build on ES 30
Verif on 25,26,...,34,35
WF on 26,28,30,32,34
No nth used
Last date December 31 2012, so out-of-sample after 2012 (!)
Average net profit 2013-2018: 17,100 USD

Build on ES 28,30,32
Verif on 25,26,...,34,35
WF on 26,28,30,32,34
No nth used
Last date December 31 2012, so out-of-sample after 2012 (!)
Average net profit 2013-2018: 18,900 USD

And most of them had great equity lines out-of-sample.

admin - 15-9-2018 at 02:51 AM

Quote: Originally posted by Carl  
Here are the results on my tests on ES.
I thought, let's use a really big out-of-sample period, just to see what happens.

Build on ES 30
Verif on 25,26,...,34,35
WF on 26,28,30,32,34
No nth used
Last date December 31 2012, so out-of-sample after 2012 (!)
Average net profit 2013-2018: 17,100 USD

Build on ES 28,30,32
Verif on 25,26,...,34,35
WF on 26,28,30,32,34
No nth used
Last date December 31 2012, so out-of-sample after 2012 (!)
Average net profit 2013-2018: 18,900 USD

And most of them had great equity lines out-of-sample.

Fantastic result Carl, thanks for publishing

admin - 21-9-2018 at 01:37 AM

I've just thought that the contract info for cash indices should equal the futures indices' big point value. CloseDBPV will work best, because if GSB genetically tries data1 futures, then switches to try the cash, the point value will be different and the results will likely not be as good.


pv.png - 6kB

cyrus68 - 22-9-2018 at 02:15 AM

Using the secondary filter has become confusing. As I understand it, for futures strategies, it is preferable to use closeBpv if you initially wanted to use Close-Close; irrespective of whether you want to verify on other markets. For this purpose, you need to redefine the point value of the indices that are used as secondary data. It is not clear whether this applies to Close-Close Normalised.

If you want to use Close/Close or Close/Close Normalised, it is not clear whether it would be preferable to use the redefined indices or the original ones.

As for strategies developed on stocks, it makes sense to use the original indices specs, when used as secondary data. So, you will need to define the appropriate contract specs in the table, for both futures and stocks. For example: SPX and SPX1, with appropriate point values.

admin - 22-9-2018 at 02:18 AM

Quote: Originally posted by cyrus68  
Using the secondary filter has become confusing. As I understand it, for futures strategies, it is preferable to use closeBpv if you initially wanted to use Close-Close; irrespective of whether you want to verify on other markets. For this purpose, you need to redefine the point value of the indices that are used as secondary data. It is not clear whether this applies to Close-Close Normalised.

If you want to use Close/Close or Close/Close Normalised, it is not clear whether it would be preferable to use the redefined indices or the original ones.

As for strategies developed on stocks, it makes sense to use the original indices specs, when used as secondary data. So, you will need to define the appropriate contract specs in the table, for both futures and stocks. For example: SPX and SPX1, with appropriate point values.

Correct. GSB is going to have to have a function called bpv where it stores its own big point value, in case we used a different one to TS.

Some results to share

cotila1 - 25-9-2018 at 11:46 AM

I'd like to share some results derived from the discussed approach.
I've built ES systems with TF = 28, 29, 30, 31, 32 min bars, SF=CloseLessPrevCloseDBPV, Training=100% and Nth=1. Data from 2000 to 12/31/2014, meaning the period from 1/1/2015 till today is UNSEEN.
I stopped GSB after 15,000 systems were built.
Degradation from Nth=NoTrd (IS days) to Trd (OoS days) is about 14%.

I verified all 15,000 systems on TF 25, 26, 27, 33, 34, 35 and got 5800 systems that are 6/6 - saved as excellent. Verification filter used: R=0.95, Min PF=1.5 and Min #Trds=100.

I then verified those 5800 systems over other markets - EMD, RUT, YM, NQ - and got 47 systems with VS=4/4 and 540 systems with VS=3/4, but for multi-market verification I used a somewhat less severe verification filter: Min R=0.90, Min PF=1.2 and Min #Trds=100.

I then WF-ed ALL the 4/4 systems, plus the 3/4 systems with avg trd > 150, R > 0.98, NP/DD > 20 and #trd > 350 (the data period always 1/1/2000-12/31/2014). The TOTAL number of systems WF-ed (resulting from the first and second verification) is 101. For WF I used Nth=All,
with WF price data 28, 29, 30, 31, 32.

After the WFA, I analyzed all 101 systems on the unseen period (from 1/1/2015 onwards) to see their collective behavior over these 3.5 unseen years, and as you can see from the table in the picture, the NP WITHOUT the WF Cur. Params (WFP) goes from a MIN NP of $1250 to a MAX NP of $38,710.
The only thing I found a bit strange is that the overall results (summarized in the 2 tables of the picture) over the unseen period look better WITHOUT the WF Cur. Params (WFP) rather than WITH. In both cases the results look OK.

By the way, see also one of the possible equity curves from this severe selection process; it includes expenses. The same code is plotted even on EMD on a totally different TF (15 min), expenses included - so on a different market and even a different TF from the ones the system was built on. It still looks good.

Comments and criticism are more than welcome :-)

comparison.jpg - 75kB ONEOF THE EQ.jpg - 84kB emd.jpg - 55kB

admin - 25-9-2018 at 05:13 PM

Great and interesting work. Were the systems WF-ed with nth set to all or notrade, etc.?
PF would be a good metric to add to the table. Also try WF on 30 and on 29,30,31 and see what worked best OOS.

cotila1 - 26-9-2018 at 07:29 AM

Thanks. Good suggestion: try WF on 29,30,31 too. In general, might it be an idea to try WF on a small sample of systems (say 20-30) with 29-31 and then with 28-32 to see where the improvement sits, and afterwards apply the best choice to the entire set of systems? It might save time while choosing the best option.

Quote: Originally posted by admin  
Great and interesting work. Were the systems WF-ed with nth set to all or notrade, etc.?
PF would be a good metric to add to the table. Also try WF on 30 and on 29,30,31 and see what worked best OOS.

admin - 26-9-2018 at 01:34 PM

Quote: Originally posted by cotila1  
Thanks. Good suggestion: try WF on 29,30,31 too. In general, might it be an idea to try WF on a small sample of systems (say 20-30) with 29-31 and then with 28-32 to see where the improvement sits, and afterwards apply the best choice to the entire set of systems? It might save time while choosing the best option.

Quote: Originally posted by admin  
Great and interesting work. Were the systems WF-ed with nth set to all or notrade, etc.?
PF would be a good metric to add to the table. Also try WF on 30 and on 29,30,31 and see what worked best OOS.

The problem with a small sample is that the number of systems is too low to prove that it helped. Even with 1000 vs 2000 systems, I can get big variations in the overall stats. So it's best to do the WF with them all.

cotila1 - 27-9-2018 at 12:49 AM

Oh, that's good to know. Human & machine time is then necessary :-))

Quote: Originally posted by admin  
Quote: Originally posted by cotila1  
Thanks. Good suggestion: try WF on 29,30,31 too. In general, might it be an idea to try WF on a small sample of systems (say 20-30) with 29-31 and then with 28-32 to see where the improvement sits, and afterwards apply the best choice to the entire set of systems? It might save time while choosing the best option.

Quote: Originally posted by admin  
Great and interesting work. Were the systems WF-ed with nth set to all or notrade, etc.?
PF would be a good metric to add to the table. Also try WF on 30 and on 29,30,31 and see what worked best OOS.

The problem with a small sample is that the number of systems is too low to prove that it helped. Even with 1000 vs 2000 systems, I can get big variations in the overall stats. So it's best to do the WF with them all.

admin - 2-10-2018 at 11:43 PM

What has become apparent is that it's the stressing of the oscillator periods, not the change in the OHLC of the bars, that gives the much-improved out-of-sample results when using multiple time frames. For this reason I likely won't make a simulated data stream with noise added to it. I might, however, make an oscillator-period stressor. But this will come after improved exits and secondary filters.
I am also likely to make GSB build systems, then look for the filters/exits that work on ALL systems statistically. It might also have the option to try a filter/exit while building the system; I haven't thought this through fully.
One of the GSB users has also done a bit of work on the energies. It's clear now that much of the volume is at the close of a 30-minute bar. This also implies that if your execution is slow, your fills won't be great. However, I now know how to be free from the need to use 30-minute bars. Maybe in the next video?

cyrus68 - 5-10-2018 at 02:41 AM

I always suspected that when GSB fits the same system to multiple data frequencies, it is the indicator period that is changed to fit each frequency. Other indicator parameters remain unchanged.

What was always unclear to me was why – in the case of 29 30 31 min bars – it is necessary to select the 30 min bar (i.e. central bar size)? All that GSB has done is to fit the system to the given frequencies and produce the metrics. Which bar size you select to run WF depends on which had the least deterioration in the averages test, as well as the OOS performance of the particular system.

There may be mysteries here that I don’t understand. But then, again, using GSB is often similar to solving mysteries.

Stressing indicator periods directly would be a good idea. An even better idea is to stress all parameters.

As for the issue of filters/exits, theoretically speaking, I don’t know enough of the innards of GSB to state whether they should be part of the build process or implemented afterwards. Having a choice would be useful. Practically speaking, changing filters has a major impact on results. Currently, we can do this as part of the build process, and test the impact on IS/OOS averages.

admin - 5-10-2018 at 02:51 PM

Quote: Originally posted by cyrus68  
I always suspected that when GSB fits the same system to multiple data frequencies, it is the indicator period that is changed to fit each frequency. Other indicator parameters remain unchanged.

What was always unclear to me was why – in the case of 29 30 31 min bars – it is necessary to select the 30 min bar (i.e. central bar size)? All that GSB has done is to fit the system to the given frequencies and produce the metrics. Which bar size you select to run WF depends on which had the least deterioration in the averages test, as well as the OOS performance of the particular system.

There may be mysteries here that I don’t understand. But then, again, using GSB is often similar to solving mysteries.

Stressing indicator periods directly would be a good idea. An even better idea is to stress all parameters.

As for the issue of filters/exits, theoretically speaking, I don’t know enough of the innards of GSB to state whether they should be part of the build process or implemented afterwards. Having a choice would be useful. Practically speaking, changing filters has a major impact on results. Currently, we can do this as part of the build process, and test the impact on IS/OOS averages.

Your points are good to discuss.
If we stress things, I just think it's common sense to choose the middle period of what's stressed. I'm open to other ideas. I expect most degradation away from the central period.
The bottom line is that stressing indicator periods is, I think, good enough, but that's not to say that we can't improve. Long term I may add the option of random noise on the data, but I doubt it will help. There are more pressing features to add.
Regarding parameter stressing:
let's say we have
result = rsi(c,14) * atr(30) * average(c,50)
if result > offset then buy.
The value of offset matters very little, as the product of 3 indicators ranging -100 to 100 is a big number, so GSB typically has few parameters apart from the oscillator periods. Secondary filters on closeD are very insensitive to mild changes.
If we have BollingerBand(c,20,0.5), then we have a 0.5 to potentially stress.
So stressing parameters has less value. If we stress, say, all 3 oscillator periods up and down independently, we have a lot of combinations (see the sketch below). It might be worth trying, but again there are more urgent features needed for now.
I agree the option to have filters per system should be a choice, but it has a massive danger of curve fitting, which I want to avoid.
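To give a feel for the combinatorics mentioned above: stressing each of 3 oscillator periods down/unchanged/up independently already gives 3^3 = 27 variants per system. A tiny sketch (the 25% step is an arbitrary choice of mine, not a GSB setting):

```python
from itertools import product

def stress_combinations(periods, pct=0.25):
    """For each period, offer three choices: stressed down, unchanged,
    and stressed up by pct. Returns every independent combination."""
    choices = [(int(p * (1 - pct)), p, int(p * (1 + pct))) for p in periods]
    return list(product(*choices))

combos = stress_combinations([14, 30, 50])
print(len(combos))  # 27 = 3**3 re-tests per system
```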

cyrus68 - 10-10-2018 at 02:11 AM

On the issue of stressing indicator parameters, we know from Monte Carlo simulation results that randomising trades and introducing noise to indicator parameters have the biggest impact on results. Changing the starting trade date has little impact. It may well be that indicator periods, rather than other parameters, are principally responsible for the result. Theoretically possible but, in practice, unknown.

On the issue of selecting the middle bar size, I may be confused or misunderstanding things, but I need to nail it down. In the following example, why should I select the 25 min bar (middle) rather than the 20 min (less deterioration)? This is just an example, as the overall results are lousy.

Let’s look at the results of the Averages test (IS/OOS), when the “Optimise Price Data” field is set to False. In the example, the result for the 30 min dataset (bottom row of the table) is presumably calculated by averaging the metrics for all systems that were applied to 30 min bars. The same applies to the 20 and 25 min datasets.

The top row of the table is presumably the result of averaging the metrics of all the datasets. But what do the second, third and fourth rows represent?

We now know that GSB calculates the average period of the indicators for the included data bars – in this case, 20 25 30 min – to create more robust indicators. So, for example, does the second row represent the result of applying systems based on the average-period indicators on the 20 min dataset? If so, we can’t see the metrics.

Regarding the issue of running WF. For example, for the 20 min dataset, is it applying systems based on starting values of the original-period indicators or the average-period indicators?


AAPL.png - 7kB

admin - 10-10-2018 at 04:37 AM

cyrus68, I will reply tomorrow on this.

admin - 10-10-2018 at 11:39 PM

"On the issue of selecting the middle bar size, I may be confused or misunderstanding things, but I need to nail it down. In the following example, why should I select the 25 min bar (middle) rather than the 20 min (less deterioration). This is just an example, as the overall results are lousy."
My thoughts are I want the broad area that works well, with margin on both sides. Im open to any one else's idea - if it varies from this.

Your question on what each row represents is a good one.
Each row compares in sample to out of sample:
row 1: the average of 20,25,30 compared to 20,25,30
row 2: the average of 20,25,30 compared to 20
row 3: the average of 20,25,30 compared to 25
row 4: the average of 20,25,30 compared to 30
row 5: 20 compared to 20
row 6: 25 compared to 25
row 7: 30 compared to 30


"Regarding the issue of running WF. For example, for the 20 min dataset, is it applying systems based on starting values of the original-period indicators or the average-period indicators?"
This answer doesnt matter much as the first section will be optimized for peak fitness of +25% and -25% from the original ones.
Thats the default settings under WF. You can go wider than this or even user random space which does the GSB hard coded max and min indicator lengths
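To make that default concrete, here is a tiny sketch of building a +/-25% walk-forward search range around a starting indicator period (the function name and rounding are mine, not GSB's):

```python
def wf_search_range(period, pct=0.25, step=1):
    """Default WF optimization space per the post: period +/- 25%."""
    lo = int(round(period * (1 - pct)))
    hi = int(round(period * (1 + pct)))
    return list(range(lo, hi + 1, step))

print(wf_search_range(14))  # [10, 11, 12, 13, 14, 15, 16, 17, 18]
```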

cyrus68 - 11-10-2018 at 03:11 AM

Thanks for the response. As I understand it, for any given system, if "Optimise Price Data" is set to False, GSB will optimise the lookback period for the indicators across all the datasets (20 25 30). The resulting optimised lookback period is then applied to all the datasets. So when we run WF for any of the datasets (say 20 min), the starting values will come from this jointly optimised lookback period.

I had initially misunderstood that "20 compared to 20" involved an optimisation of the lookback period for the given dataset, independently of the other datasets.

admin - 11-10-2018 at 03:14 AM

Quote: Originally posted by cyrus68  
Thanks for the response. As I understand it, for any given system, if "Optimise Price Data" is set to False, GSB will optimise the lookback period for the indicators across all the datasets (20 25 30). The resulting optimised lookback period is then applied to all the datasets. So when we run WF for any of the datasets (say 20 min), the starting values will come from this jointly optimised lookback period.

I had initially misunderstood that "20 compared to 20" involved an optimisation of the lookback period for the given dataset, independently of the other datasets.

correct :)

admin - 18-10-2018 at 06:38 PM

I have done testing on CL:
On 30-minute bars, what was the out of sample like with every second day out of sample,
and over the last year?

The same test for CL 29,30,31 minute bars.

The same test for CL,HO,RB,
and then we extract the 30 min CL results only.

And then we try the same tests with natural gas added.
NG doesn't correlate as well with the other markets.
I will post tomorrow in the private forum:
https://trademaid.info/forum/viewthread.php?tid=33&page=2#pi...





blure2.png - 15kB

admin - 25-10-2018 at 12:02 AM

Newest draft video on multi-bars, the new verification score per system, etc.
There is something in this video for people new to GSB and something for experienced GSB users; the video is aimed at a bit of both, though it is not perfect for either.
Note also the use of favorites & stats, which is a new implementation of existing features.
It's 15 minutes long, but I have never had a video that took so long to produce. A lot of that was due to working out the methodology and preparing content.

Attachment: Login to view the details

admin - 28-10-2018 at 05:49 PM

I've had quite a lot of feedback on the video in the post above.
A few comments.
The newer methodology is a lot better with its extra steps.
The implications are significant.
Why would you trade systems built on a single time frame, when multi-time-frame systems are so much more likely to go well out of sample?
Of course there is also the verification on other bar intervals/markets, and the verification score, which helps even more.
My only answer is that while it takes us time to write new code, this should be done ASAP.
Another addition: in the video I showed how a market degradation score of > -10% gave improved results.
I have done more tests, and the better this figure is, the better the out-of-sample results were. E.g. on CL, using only systems with a verification score > -4% gave -3.0% market degradation; a verification score of > -10% gave -7.1%.
The video's focus was market degradation and getting good out-of-sample results - how to get systems with good out-of-sample performance, not how to build a single system.
That's because, to build a good trading system, you need a good foundation. If the foundation is flawed, what is built upon it will not last.
What I feel is good right now is to verify the nth no-trade (in sample) and the nth trade (out of sample) days and pick out the best systems.
Some users with low-power hardware might argue this is going to take too long. Well, I think the cost of getting this wrong is greater than the cost of better hardware. Short term, I can hire a dual Xeon server (= i9 performance) with 192 GB of RAM for US$10 a day.

Bruce - 28-10-2018 at 08:30 PM

Quote: Originally posted by admin  
I've had quite a lot of feedback on the video in the post above.
A few comments.
The newer methodology is a lot better with its extra steps.
The implications are significant.
Why would you trade systems built on a single time frame, when multi-time-frame systems are so much more likely to go well out of sample?
Of course there is also the verification on other bar intervals/markets, and the verification score, which helps even more.
My only answer is that while it takes us time to write new code, this should be done ASAP.
Another addition: in the video I showed how a market degradation score of > -10% gave improved results.
I have done more tests, and the better this figure is, the better the out-of-sample results were. E.g. on CL, using only systems with a verification score > -4% gave -3.0% market degradation; a verification score of > -10% gave -7.1%.
The video's focus was market degradation and getting good out-of-sample results - how to get systems with good out-of-sample performance, not how to build a single system.
That's because, to build a good trading system, you need a good foundation. If the foundation is flawed, what is built upon it will not last.
What I feel is good right now is to verify the nth no-trade (in sample) and the nth trade (out of sample) days and pick out the best systems.
Some users with low-power hardware might argue this is going to take too long. Well, I think the cost of getting this wrong is greater than the cost of better hardware. Short term, I can hire a dual Xeon server (= i9 performance) with 192 GB of RAM for US$10 a day.


Good feedback, Peter. You absolutely nailed it with the need for a good foundation, and the modest investment required to deliver robust systems/performance results.

saycem - 29-10-2018 at 12:06 AM

Agree with the above. By now it's fairly safe to assume that 29,30,31 is better than 30 alone.

But what about all the other decisions and variables to consider, many of which you mention in the video:
time frames, day/swing, secondary filter, and all the various 2nd (or more) data streams, etc.?
How long do you see this taking?
Are we all going to commence the same overlapping work?

admin - 29-10-2018 at 12:15 AM

Quote: Originally posted by saycem  
Agree with the above. By now it's fairly safe to assume that 29,30,31 is better than 30 alone.

But what about all the other decisions and variables to consider, many of which you mention in the video:
time frames, day/swing, secondary filter, and all the various 2nd (or more) data streams, etc.?
How long do you see this taking?
Are we all going to commence the same overlapping work?


Well, some of that is your personal choice, and the rest is what works best.
The CL settings I'm using are, I think, the best, though long/short and long-only are both valid. The secondary filters are best as I am using them, as are the data streams. I think 30 min is best, with a small chance 15 min is.
I'm not fully sure what you mean by "How long do you see this taking?
Are we all going to commence the same overlapping work?"
I think CL, as is, is really good.

saycem - 29-10-2018 at 12:35 AM

Agree CL looks very good.
I was genuinely interested in how long it might take to run 10k systems per setting to determine what is superior. As I don't have the hardware yet I was wondering for example how long it might take to determine
S vs S/BO vs S/BO/SM
15min vs 30min vs 60 min (many hypothesize that other timeframes are even better ie 20min? but we can't choose everything)
CloseLessPrevCloseDbpv vs GA
eod vs swing

I'm certainly not disagreeing with your method for determining the above. Just that I think it would take a long time. Plus all of us doing the same thing seemed a little inefficient - but I don't know how to solve that.

admin - 29-10-2018 at 01:56 AM

Quote: Originally posted by saycem  
Agree CL looks very good.
I was genuinely interested in how long it might take to run 10k systems per setting to determine what is superior. As I don't have the hardware yet I was wondering for example how long it might take to determine
S vs S/BO vs S/BO/SM
15min vs 30min vs 60 min (many hypothesize that other timeframes are even better ie 20min? but we can't choose everything)
CloseLessPrevCloseDbpv vs GA
eod vs swing

I'm certainly not disagreeing with your method for determining the above. Just that I think it would take a long time. Plus all of us doing the same thing seemed a little inefficient - but I don't know how to solve that.

Well, it would be good for us to collectively publish results.
I started on ES but then improved a lot on CL. I will likely go back and test the concepts found on CL on ES.
I think for soy, BO & SM, 30 min was best. I think I got high degradation but good results over the last year - though not a lot of trades. It was some time ago, so I need to do it all again.

cotila1 - 30-10-2018 at 09:05 AM

Interesting video. It adds a bit more to what I already do. I found the VSS feature very useful.

Quote: Originally posted by admin  
Newest draft video on multi-bars, the new verification score per system, etc.
There is something in this video for people new to GSB and something for experienced GSB users; the video is aimed at a bit of both, though it is not perfect for either.
Note also the use of favorites & stats, which is a new implementation of existing features.
It's 15 minutes long, but I have never had a video that took so long to produce. A lot of that was due to working out the methodology and preparing content.


admin - 1-11-2018 at 10:10 PM

The video is finished. It's polished a little more, with small updates. Another video with the complete updated guide to making a crude oil system is in the pipeline.
Videos don't come quickly, as it takes a lot of time to make the content and present it.

To help promote the video, please hit the like button, if you like the content.
https://www.youtube.com/watch?v=HDeJpONE090&feature=youtu.be

admin - 12-11-2018 at 03:38 AM

Sorry for the lack of updates recently. I'm doing some very time-consuming, in-depth reading and research. More from me when I surface. One of the goals is to finalize the last video showing how I make systems with the newer, enhanced GSB features.

admin - 13-11-2018 at 04:03 AM

I feel now that I have a clear road map to test the enhanced methodology, so there is light at the end of the tunnel. Still, it might take me a week to get it as precise as I would like, then some time to publish. I also want to test the same method on ES and CL to confirm what worked best.
The bottom line, however, is that all variations of what I did worked well on the last year of CL that was left out of sample.
I'm going to do a number of tests and leave the last 3 years out of sample to validate which method works best. I may also do the identical tests with 3 years at the front of the data as out of sample too.

admin - 13-11-2018 at 08:50 PM

This testing is going to take some time. I did tests of 10,000 systems, but when I apply verification on other time frames etc., the number of systems drops greatly. That isn't a big enough sample to draw valid statistical conclusions, and it is critical to get this right. So I need to make about 80,000 systems * 12 tests, plus the time to verify the systems.
It takes me 12 hours or so CPU time to do 80,000 tests.
If anyone has spare CPU time, email me your share key and I will get you version 59.08 or later to do the tests.

admin - 14-11-2018 at 07:10 PM

It has become apparent that verification should be sent to the cloud; it takes a long time to verify 80,000 systems.
On a totally different subject, my soybean systems have gone fine out of sample. But I don't think the market validation tests were so good. I don't remember, as it was some time ago, when GSB had fewer features.
Here is a report. 1 tick and $2.40 slippage per side.
Out of sample from March 2018.




soybeans.png - 42kB

soybeans2.png - 85kB

cotila1 - 15-11-2018 at 03:31 AM

Agree on the idea of verification in the cloud.
A useful command would also be ''Cancel Verify'' (right click, similar to ''Cancel Nth mode''). This is especially useful when verifying a huge bulk of systems (e.g. 80K).
Thanks for the effort.

Quote: Originally posted by admin  
It has become apparent that verification should be sent to the cloud; it takes a long time to verify 80,000 systems.
On a totally different subject, my soybean systems have gone fine out of sample. But I don't think the market validation tests were so good. I don't remember, as it was some time ago, when GSB had fewer features.
Here is a report. 1 tick and $2.40 slippage per side.
Out of sample from March 2018.





admin - 15-11-2018 at 03:33 AM

Quote: Originally posted by cotila1  
Agree on the idea of verification in the cloud.
A useful command would also be ''Cancel Verify'' (right click, similar to ''Cancel Nth mode''). This is especially useful when verifying a huge bulk of systems (e.g. 80K).
Thanks for the effort.


Totally agree. It will happen within a few builds I hope.

cotila1 - 16-11-2018 at 03:17 AM

Same with my NG systems, built with pre-new-generation GSB. They have also gone fine on unseen data in the second half of 2018.

Quote: Originally posted by admin  
It has become apparent that verification should be sent to the cloud; it takes a long time to verify 80,000 systems.
On a totally different subject, my soybean systems have gone fine out of sample. But I don't think the market validation tests were so good. I don't remember, as it was some time ago, when GSB had fewer features.
Here is a report. 1 tick and $2.40 slippage per side.
Out of sample from March 2018.






Screenshot.jpg - 106kB

admin - 16-11-2018 at 05:35 AM

Might be time to revisit NG, as the last 2 years were really poor. Has anyone else got things to share? I will look at my systems on Monday.

boothy - 16-11-2018 at 04:19 PM

I've just checked a couple of NG systems I built earlier this year, and both are similar to the one above: they were flat but are making new highs recently.
Maybe some vol is coming back into NG?



NG_Sys1_LI.jpg - 672kBNG_Sys2_LI.jpg - 714kB

admin - 17-11-2018 at 01:50 AM

The chart explains it all. I'm no expert on NG, but there seems to be a seasonal aspect too.
Shown are the weekly range and the weekly abs(close-closeD(1)).

nat_gas.png - 143kB

boothy - 17-11-2018 at 04:46 AM

At a quick glance, I would say the range expansion would coincide with the northern hemisphere winter, by the looks of that chart.

admin - 19-11-2018 at 01:35 AM

I'm not one for trading by the news, but numerous news stories predicted this.
https://www.cnbc.com/2018/11/09/natural-gas-prices-up-on-col...
I have an untested theory: a new market regime starts with a daily bar whose volume and range are x standard deviations greater than those of previous daily bars.

Carl - 19-11-2018 at 10:10 AM


I think you're right.
Here are the monthly net profit figures (GSB ngdemo strategy).

ngdemo per month.bmp - 1.8MB

admin - 21-11-2018 at 09:32 PM

A few comments on crude oil systems
https://blog.ungeracademy.com/2017/04/11/energy-market/?utm_...

"Crude oil responds well to many kinds of approaches, so it works well with trend following but also counter-trend and bias, so you can really develop plenty of strategies on crude oil and get a good basket of trading system.

These two are good even for intraday breakout systems but, in any case, for trend following are very good because the lack of liquidity leads to higher inefficiency so when the trend starts is more right to continue."
I suspect GSB is trend following (maybe breakout), so this implies we are missing out on potential systems (especially counter-trend).
For those who did Andrea Unger's course, it could be good to brush up on how to make a counter-trend system.

This system is LONG only, with about 10 trades a month:
http://www.petronelsystems.com/pegas-1-cl/
Again, this implies we are missing out on a lot of trades. GSB CL trades tend to have a much higher profit per trade. Note it's possible that the Pegas CL has multiple systems on one chart, but is limited to one contract max.

It won't happen overnight, but GSB can have new architecture added to support other trading types. Obvious places to start are stop & limit entries.

uhrbi - 23-11-2018 at 06:19 AM

Even after almost one year of using GSB, I am certainly not as proficient as I'd like to be, and as some of you already are. I say that so you can put the following into the right perspective.
I don't know what GSB systems are in terms of category (breakout, trend following or counter-trend), but what I've seen so far is that GSB does very well on markets that have a stronger tendency to revert to the mean;
that is why ES and NG systems are "easier" to build, while it is a lot harder to build systems on markets that tend to do well on breakouts, like CL or, for that matter, the German DAX.
When I look at the trades on a chart, even the ES systems look like breakout systems, so my initial statement might not be so obvious, and is possibly more related to the underlying GSB architecture.

I have not come up with a clear explanation, and maybe my reasoning is completely wrong, but one thing that comes to mind is the entry.
For a breakout we need a breakout to happen, so buying/selling above/below a certain price can act as a confirmation, while entering at market can either be too late or lack the breakout confirmation.
However, these are only some random thoughts from a newbie on a tendency I observed...


admin - 23-11-2018 at 03:43 PM

Quote: Originally posted by uhrbi  
Even after almost one year of using GSB I am certainly not as proficient as I'd like to be and as some of you already are. I say that so you can put the following in the right perspective.
I don't know what GSB systems are in terms of category (breakout, trend following or counter-trend), but what I've seen so far is that GSB does very well on markets that have a stronger tendency to revert to the mean,
which is why ES and NG systems are "easier" to build, while it is a lot harder to build systems on markets where breakouts tend to do well, like CL or the German DAX for that matter.
When I look at the trades on a chart, even the ES systems look like breakout systems, so my initial statement might not be so obvious and is possibly more related to the underlying GSB architecture.

I have not come up with a clear explanation, and maybe my reasoning is completely wrong, but one thing that comes to mind is the entry.
For a breakout we need a breakout to happen, so buying/selling above/below a certain price can act as confirmation, while a market entry can be either too late or lacking the breakout confirmation.
However, these are only some random thoughts from a newbie on a tendency I observed...



Your thoughts are appreciated.
The last few days I happen to have put a lot of thought into this. I think GSB systems are trend following and maybe breakout. The architecture doesn't have counter-trend (I think). However, it should be a simple matter of adding one (or more) specifically designed counter-trend filters or oscillators into GSB.
I don't see GSB systems reverting to the mean. You're welcome to post some screenshots - I would find this interesting.

CL is very easy to build systems on. I did one or two videos on this.

admin - 23-11-2018 at 06:05 PM

uhrbi, here is my proof-of-concept counter-trend oscillator. Adding this into GSB as a secondary or primary filter should make GSB counter-trend.
I hope to have it in GSB in the next few weeks.

counter.trend.png - 57kB
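
The oscillator itself is only shown as an image above; as a stand-in, a generic counter-trend oscillator can be sketched as a z-score of close against its own moving average (illustrative only, not the actual proof of concept in the screenshot):

Code:
import pandas as pd

def ct_osc(close: pd.Series, length: int = 20) -> pd.Series:
    # Z-score of close versus its own moving average: strongly positive means
    # stretched above the mean (candidate short), strongly negative means
    # stretched below the mean (candidate long).
    ma = close.rolling(length).mean()
    sd = close.rolling(length).std()
    return (close - ma) / sd

# Example filter: only allow longs when price is stretched 2 sd below its mean.
# allow_long = ct_osc(close) < -2.0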

NQ System

Bruce - 23-11-2018 at 08:49 PM


Thought I would share this @NQ system that I built a few months back, before Peter really started to teach us about multi-time-frame validation.

I built this using a test/training period, then validation. I used only a 15 min time frame so I could capture a good quantity of test/training trades while subjecting them to a set of tight filters. I'm now developing using multi-time frame, nth, verify, etc. and starting to see a smoother equity line. Certainly a lot more robust. The results shown include commission and slippage. Clearly, this system is enjoying the recent volatility.



Screen Shot 2018-11-24 at 3.17.22 PM.png - 1MB

Screen Shot 2018-11-24 at 3.17.52 PM.png - 832kB Screen Shot 2018-11-24 at 3.26.49 PM.png - 1.4MB
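
As a rough sketch of that test/training-then-validation split on date ranges (the dates, names and windows are illustrative, not Bruce's actual settings):

Code:
import pandas as pd

def date_split(bars: pd.DataFrame, train_end: str, valid_end: str):
    # Split bar data into a training window (seen by the optimizer) and a
    # later validation window (kept unseen until the system is finished).
    train = bars[bars.index <= pd.Timestamp(train_end)]
    valid = bars[(bars.index > pd.Timestamp(train_end)) & (bars.index <= pd.Timestamp(valid_end))]
    return train, valid

# train, valid = date_split(bars_15min, "2016-12-31", "2018-11-01")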

uhrbi - 25-11-2018 at 07:12 AM


Quote:


Your thoughts are appreciated.
The last few days I happen to have put a lot of thought into this. I think GSB systems are trend following and maybe breakout. The architecture doesn't have counter-trend (I think). However, it should be a simple matter of adding one (or more) specifically designed counter-trend filters or oscillators into GSB.
I don't see GSB systems reverting to the mean. You're welcome to post some screenshots - I would find this interesting.

CL is very easy to build systems on. I did one or two videos on this.




What I meant was that GSB is very good on markets with a stronger mean-reverting tendency, not that GSB systems are counter-trend per se.

You advised us a couple of months back not to use CL systems.
If I remember correctly, you had to spend quite some time and processing power, as well as use some new GSB features developed in the process, to make good systems for CL.
So my understanding is that it wasn't very easy... and I say that with absolutely no offense, because we all gained a lot from your experience,
the updated GSB functionality and the more refined methodology for market and system validation.



admin - 25-11-2018 at 02:25 PM

Quote: Originally posted by uhrbi  

Quote:


Your thoughts are appreciated.
The last few days I happen to have put a lot of thought into this. I think GSB systems are trend following and maybe breakout. The architecture doesn't have counter-trend (I think). However, it should be a simple matter of adding one (or more) specifically designed counter-trend filters or oscillators into GSB.
I don't see GSB systems reverting to the mean. You're welcome to post some screenshots - I would find this interesting.

CL is very easy to build systems on. I did one or two videos on this.




What I meant was that GSB is very good on markets with a stronger mean-reverting tendency, not that GSB systems are counter-trend per se.

You advised us a couple of months back not to use CL systems.
If I remember correctly, you had to spend quite some time and processing power, as well as use some new GSB features developed in the process, to make good systems for CL.
So my understanding is that it wasn't very easy... and I say that with absolutely no offense, because we all gained a lot from your experience,
the updated GSB functionality and the more refined methodology for market and system validation.



A few months back, when market validation tests were first made, I was cautious and biased toward "don't trade unless you have done market validation". CL, it turns out, went well out of sample and passed the market validation tests really well. However, the last half of this year has not been so good. I see this as market conditions, not system performance.

admin - 25-11-2018 at 11:46 PM

Quote: Originally posted by TradingRails  

Thought I would share this @NQ system that I built a few months back, before Peter really started to teach us about multi-time-frame validation.

I built this using a test/training period, then validation. I used only a 15 min time frame so I could capture a good quantity of test/training trades while subjecting them to a set of tight filters. I'm now developing using multi-time frame, nth, verify, etc. and starting to see a smoother equity line. Certainly a lot more robust. The results shown include commission and slippage. Clearly, this system is enjoying the recent volatility.

It's fantastic that you share your great results. Thank you. NQ seems surprisingly diversified from ES. I'm still working on ES & CL to see what works best. NQ will come much later, after I apply the new things I'm learning.

Bruce - 27-11-2018 at 01:39 AM


So help me understand your latest process.

You're optimizing with nth set to NoTrd and using auto-verify in a single step; this is to verify on the unseen 'nth' data, that is, the NoTrd data. However, when you set Nth to Trd after the optimization, you get all the degradation from what appears to be the unseen data, as displayed on the graphs.

Isn't this (Trd) the unseen/optimized data? All you've done is switch the NoTrd for the Trd? I'm not seeing the verification happening on the unseen Nth data.

What am I missing with this phase of the process? Thanks.

Screen Shot 2018-11-27 at 8.41.55 PM.png - 199kB

admin - 27-11-2018 at 01:42 AM

Quote: Originally posted by TradingRails  

So help me understand your latest process.

You're optimizing with nth set to NoTrd and then using auto-verify; this is to verify on the unseen 'nth' data, that is, the NoTrd data. However, when you set Nth to Trd, you get all the degradation from what appears to be the unseen data, as displayed on the graphs.

Isn't this (Trd) the unseen/optimized data? All you've done is switch the NoTrd for the Trd? I'm not seeing the verification happening on the unseen Nth data.

What am I missing with this phase of the process? Thanks.

The auto-verify is on the seen data. After that I will investigate further the systems that pass all 8 of the 8 time frames I'm using for verification.
Further verification has to be done manually, i.e. you could verify the systems that passed 8/8 by changing nth from no trade to trade, or from no trade to all.
Hope that helps.
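
As a hedged sketch of the nth seen/unseen idea described above, assuming degradation is measured as the percentage shortfall of unseen fitness against seen fitness, with net profit standing in for the fitness metric (all names here are illustrative):

Code:
def nth_degradation(trade_pnls: list, n: int = 2) -> float:
    # Split trades into the 'seen' sample (used during optimization) and the
    # held-out 'unseen' nth sample, then compare a simple fitness (net profit).
    # Assumes the seen fitness is positive.
    seen = [p for i, p in enumerate(trade_pnls) if i % n != 0]
    unseen = [p for i, p in enumerate(trade_pnls) if i % n == 0]
    return (1 - sum(unseen) / sum(seen)) * 100  # e.g. 20.0 means 20% degradation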

Bruce - 27-11-2018 at 01:47 AM

Quote: Originally posted by admin  
Quote: Originally posted by TradingRails  

So help me understand your latest process.

You're optimizing with nth set to NoTrd and then using auto-verify; this is to verify on the unseen 'nth' data, that is, the NoTrd data. However, when you set Nth to Trd, you get all the degradation from what appears to be the unseen data, as displayed on the graphs.

Isn't this (Trd) the unseen/optimized data? All you've done is switch the NoTrd for the Trd? I'm not seeing the verification happening on the unseen Nth data.

What am I missing with this phase of the process? Thanks.

The auto-verify is on the seen data. After that I will investigate further the systems that pass all 8 of the 8 time frames I'm using for verification.
Further verification has to be done manually, i.e. you could verify the systems that passed 8/8 by changing nth from no trade to trade, or from no trade to all.
Hope that helps.


Roger that! Thank you, that's cleared that up. :)

avatartrader - 12-12-2018 at 02:04 PM

I had a question regarding WF and using multiple data streams and I want to be sure I am understanding this correctly.

Let's say for example I am building a system on 29,30,31 minute data and want to do WF on it:

1. If the WF Price Data is empty, does it use only the first data stream specified in the Opt. Price Data, as per the documentation (i.e. the data specified at index 0), or will it use all of them?
2. If I specify WF Price Data, either as a single stream or multiple streams, will it WF using all of the specified data and output separate WF results for each, or does it create a single result based on either the best stream or on all of them combined?

The reason I am asking is that I did a system build and walked a system forward. I originally did not specify WF data, and the graph and drop-down under "Performance" show the results for a WF on the 29 minute time frame only. In addition, when I click to display WF on the graph, it shows the OOS and Current lines in addition to the 29m WF, but does not indicate the timeframe the other two lines are based on.

Next, I did specify WF data as 29,30,31 and then re-ran the WF. However, neither the graph nor the performance tab reflects any WF on the 30 or 31, while the "Script" tab does have scripts for the WF versions on all timeframes. Looking at the script, the parameter values and other WF metrics (stability, etc.) are the same for all time frames.

So, I wanted to be sure I understand how this works correctly, or whether perhaps there is an issue because I did not specify the WF Price Data at system build time or before I ran the first WF?

admin - 12-12-2018 at 03:16 PM

Quote: Originally posted by Carl  
Hi rws,

For GSB systems on ES the original methodology delivered great results.
GSBSys1 has been profitable on ES and NQ in out of sample period.
GSBsys6 had also been profitable on ES out of sample.
My GSB generated system on ES (developed in June 2017) earned 13k after costs and slippage the last 12 months (9 months out of sample). And I can go on like this. Most of my GSB generated systems on ES have been profitable out of sample.

But on CL, GC and RB it seems it is much more difficult to develop systems that are profitable out of sample.
Maybe because prices behaved in a different way compared to earlier periods? Trend? Volatility?
Or maybe because the original methodology in GSB is not suitable and we need another GSB methodology for these tickers?

I'm very happy with GSB on CL, HO and RB, but you have to factor in market conditions. Very few systems (GSB and non-GSB) made money in 2017 on ES.
A benchmark I used was
http://www.petronelsystems.com/pegas-1-cl-0/
Look at the monthly 2018 results.
It's good to compare apples to apples, and this is apples to oranges.
Petronel has a decent live track record, and, compared to a GSB system, it combines multiple systems into one (judging by the description). It's also long only, which is very valid to do for CL. However, the CL market has tanked, so conditions have not been favourable. Petronel did release a new long/short system in July; I enclose those results too. That's been flat since release.


cl.monthly.png - 108kBOOS.png - 120kB

admin - 12-12-2018 at 03:31 PM

Quote: Originally posted by avatartrader  
I had a question regarding WF and using multiple data streams and I want to be sure I am understanding this correctly.

Let's say for example I am building a system on 29,30,31 minute data and want to do WF on it:

1. If the WF Price Data is empty, does it use only the first data stream specified in the Opt. Price Data, as per the documentation (i.e. the data specified at index 0), or will it use all of them?
2. If I specify WF Price Data, either as a single stream or multiple streams, will it WF using all of the specified data and output separate WF results for each, or does it create a single result based on either the best stream or on all of them combined?

The reason I am asking is that I did a system build and walked a system forward. I originally did not specify WF data, and the graph and drop-down under "Performance" show the results for a WF on the 29 minute time frame only. In addition, when I click to display WF on the graph, it shows the OOS and Current lines in addition to the 29m WF, but does not indicate the timeframe the other two lines are based on.

Next, I did specify WF data as 29,30,31 and then re-ran the WF. However, neither the graph nor the performance tab reflects any WF on the 30 or 31, while the "Script" tab does have scripts for the WF versions on all timeframes. Looking at the script, the parameter values and other WF metrics (stability, etc.) are the same for all time frames.

So, I wanted to be sure I understand how this works correctly, or whether perhaps there is an issue because I did not specify the WF Price Data at system build time or before I ran the first WF?

1) The prices in Opt Price Data are used (with "optimize price data" set to false).
Please confirm it's set to false. If true, GSB will use only one of the time frames you specify (whichever it considers the best one).
2) From memory it does an average of them all, but you can get the output of each data stream, i.e. go to the TS code and select the central bar interval of what you used.
The scripts should be the same for all time frames, as the same parameters are used on all time frames. That's the point of optimizing over multiple time frames: you get what's best for them all, not for a specific one. I may not have answered your question as thoroughly as you'd like, but ask again if you're still stuck. On holiday till Friday night, so replies are shorter. I can say we caught too many fish to count :)
A first for our family. Roughly 4 meals for a family of 5.
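
Point 2 above can be sketched roughly like this (the function names and fitness metric are illustrative, not GSB internals):

Code:
def multi_tf_fitness(params: dict, datasets: list, backtest, fitness) -> float:
    # Evaluate ONE parameter set on every bar-interval data stream (e.g. the
    # 29, 30 and 31 minute streams) and average the fitness, so the optimizer
    # rewards parameters that work across all streams rather than parameters
    # tuned to a single interval.
    scores = [fitness(backtest(params, data)) for data in datasets]
    return sum(scores) / len(scores)

# The per-stream output is still available afterwards, e.g. for the central
# 30 minute stream: result_30m = backtest(best_params, datasets[1])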

avatartrader - 12-12-2018 at 03:47 PM

Thanks, Peter - that helps and is what I was guessing it may be doing. Enjoy the rest of your vacation and enjoy the fish! It's dark, cold and wet where I live in the northern hemisphere, so can't say I'm not a tad jealous...

avatartrader - 13-12-2018 at 11:27 AM

I just got done walking forward a handful of systems that I built using Nth Mode and wanted to get some feedback on my observations:

1. With Nth Mode = NoTrd, each of the generated systems has about 250-300 trades
2. With Nth Mode = All, each of the generated systems has about 500-600 trades
3. When I run the walkforward, for all of the systems that I tried (Dates = All, Nth = All), I ended up with anywhere from 275 to about 400 trades. However, the results and metrics were still very good, often exceeding the best optimization.
4. Referencing Peter's video on WF, a significant difference in the number of trades like this is not desirable, even if the number of trades is lower and the metrics ultimately better.

I haven't seen anything updated to address using WF with the updated methodology, so I wanted to be sure that this guidance is still accurate. Or, when I do the WF, should I be doing it on the original setting of Nth=NoTrd and then changing it to All to evaluate post-WF?

Attached is a screenshot of one that I just did as a test. I do get similar results and behavior with generated systems using different types of entries as well.

2018-12-13_9-23-18.png - 229kB

admin - 13-12-2018 at 02:50 PM

Quote: Originally posted by avatartrader  
I just got done walking forward a handful of systems that I built using Nth Mode and wanted to get some feedback on my observations:

1. With Nth Mode = NoTrd, each of the generated systems has about 250-300 trades
2. With Nth Mode = All, each of the generated systems has about 500-600 trades
3. When I run the walkforward, for all of the systems that I tried (Dates = All, Nth = All), I ended up with anywhere from 275 to about 400 trades. However, the results and metrics were still very good, often exceeding the best optimization.
4. Referencing Peter's video on WF, a significant difference in the number of trades like this is not desirable, even if the number of trades is lower and the metrics ultimately better.

I haven't seen anything updated to address using WF with the updated methodology, so I wanted to be sure that this guidance is still accurate. Or, when I do the WF, should I be doing it on the original setting of Nth=NoTrd and then changing it to All to evaluate post-WF?

Attached is a screenshot of one that I just did as a test. I do get similar results and behavior with generated systems using different types of entries as well.

The difference in the number of trades is not too bad. My experience (on either ES or CL, or both) is that fitness and the number of trades increased in the last 1 or 2 years of pure out-of-sample data, while PF and average trade decreased. All of which is fine.
Currently this is what I'm doing.
Do market validation to make sure your settings/markets are on a good foundation. Leave 1.1.2017 onwards as OOS (out of sample).
Then do verification on 25,26,27,28,32,33,34,35 minute bars,
either on nth = no trade, then change nth to trade on those that pass with a verification score of 8/8 (see the sketch below),
or change nth to all and do verification.
Test the out of sample (2017+) and choose systems that have good metrics in sample and OOS.
At this stage you could WF anything you like the look of. I think it's OK to use systems from before or after WF, as long as they have a good WF result.
Here is my logic: market validation showed systems collectively were good OOS from 2017 onwards, so they should be OK to trade.
Doing WF with 2017 onwards unseen also gave good OOS results.
I am still experimenting with some of these concepts.
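
The verification-score step in that workflow can be sketched as follows; the intervals, names and the pass test are illustrative:

Code:
def verification_score(system, intervals: list, backtest, passes) -> int:
    # Run `system` on each alternate bar interval (e.g. 25,26,27,28,32,33,34,35
    # minutes) and count how many intervals it passes.
    return sum(1 for tf in intervals if passes(backtest(system, tf)))

# keep = [s for s in systems
#         if verification_score(s, [25, 26, 27, 28, 32, 33, 34, 35], backtest, passes) == 8]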

uhrbi - 28-12-2018 at 09:13 AM

Hello Peter,

I tried the methodology presented in your validation video 2 from
earlier this year and the last video, but I am not sure if my results
are within the range one can expect. With the settings you showed in this video
I get an initial OOS degradation of around 30% for ES which gets down to
around 16% after all the methods you've shown when I check only
systems that have a positive verification score. So my question is whether
that is within the range one can expect, or am I making a mistake?

Second question is, do you start the build process all over with the
exact same setting if the degradation is "too" high?

Where does the WF-testing come into play? Do you make a WF-test after
all the other validation procedures or do you think that it is not
necessary, because robustness is tested already?

How do I make the exact same test again without having the workers
send systems from the prior test to the manager? Do I have to close
them and re-start the workers, or is there any other way?

Thank you,

Daniel

adcardoso01 - 28-12-2018 at 03:12 PM

Any thoughts/opinions on Andrea Unger courses?

Only for beginners or worth it for experienced traders also?

Overall makes sense to invest in it?

Thx!

admin - 28-12-2018 at 03:52 PM

Quote: Originally posted by uhrbi  
Hello Peter,

I tried the methodology presented in your validation video 2 from
earlier this year and the last video, but I am not sure if my results
are within the range one can expect. With the settings you showed in this video
I get an initial OOS degradation of around 30% for ES which gets down to
around 16% after all the methods you've shown when I check only
systems that have a positive verification score. So my question is whether
that is within the range one can expect, or am I making a mistake?

Second question is, do you start the build process all over with the
exact same setting if the degradation is "too" high?

Where does the WF-testing come into play? Do you make a WF-test after
all the other validation procedures or do you think that it is not
necessary, because robustness is tested already?

How do I make the exact same test again without having the workers
send systems from the prior test to the manager? Do I have to close
them and re-start the workers, or is there any other way?

Thank you,

Daniel

Hi Daniel.
These are good questions.
I think what you've done in 1 is fine. I did a lot of this sort of testing in the last month, but had a freak combination of data loss, backup failure and human error, and I lost my results. I want to do more work in this area.
If your degradation is too high, I don't think you want to start again with the same settings.
The jury is still out on this one, but if you've done validation, say on 25 to 35 minute bars less 29/30/31, I see much less need for WF.

You don't have to close the workers, and they should not resend systems from an old test to a manager again. However, I like to restart the manager each time.
There is a possible bug that can show up if you don't. We are still looking into this.

admin - 28-12-2018 at 03:56 PM

Quote: Originally posted by adcardoso01  
Any thoughts/opinions on Andrea Unger courses?

Only for beginners or worth it for experienced traders also?

Overall makes sense to invest in it?

Thx!

I've done one of his courses and liked it. Another GSB user recommended it to me, so there are at least three of us who have done them. I like Andrea and think the courses are good. I'm not sure I have the patience for his methodology: you find a market bias and build systems accordingly. This sort of stuff is going to be built into GSB before too long. Bottom line: few courses are worth doing, but Andrea's are. He did an update to the course this month, which I bought, but the material wasn't yet published and I've been too busy to look at it.

admin - 6-1-2019 at 10:44 PM

I hope to release, in less than a week, a new video on an alternative way to build S&P 500 systems. The concept could apply to other markets.
The new features of GSB have been added faster than the knowledge of how best to use them. I hope this will help.

saycem - 7-1-2019 at 03:22 AM

Looking forward to it.
Keep up the good work.
I would like a consolidated update on how to use all the new features, as I have been away for a month and already feel like things have changed a lot.
:)

admin - 7-1-2019 at 04:09 AM

Quote: Originally posted by saycem  
Looking forward to it.
Keep up the good work.
I would like a consolidated update on how to use all the new features, as I have been away for a month and already feel like things have changed a lot.
:)

I have done 3 proof-of-concept methods on ES.
One method seems clearly the best.
Tomorrow I hope to start again and record the making of them.
If anyone has spare CPU resources, let me know. There is a lot of system building involved, and I'm doing about 700+ WF runs for each test.
You will also get the 50.70 build.
