admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
"On the issue of selecting the middle bar size, I may be confused or misunderstanding things, but I need to nail it down. In the following example,
why should I select the 25 min bar (the middle) rather than the 20 min (less deterioration)? This is just an example, as the overall results are lousy."
My thoughts are that I want the broad area that works well, with margin on both sides. I'm open to anyone else's ideas if they differ from this.
Your question about what each row represents is a good one. Each row compares in-sample results to out-of-sample results:
the average of 20, 25, 30 compared to 20, 25, 30
the average of 20, 25, 30 compared to 20
the average of 20, 25, 30 compared to 25
the average of 20, 25, 30 compared to 30
20 compared to 20
25 compared to 25
30 compared to 30
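To make that concrete, here is a minimal sketch (Python, not GSB code; the fitness numbers and key names are purely illustrative) of how such a comparison table could be computed as the percentage change from in-sample to out-of-sample fitness:

# Hypothetical sketch of the row comparisons above. The fitness values and
# key names are illustrative only, not GSB internals.

def degradation(in_sample: float, out_of_sample: float) -> float:
    """Percentage change from in-sample to out-of-sample fitness."""
    return (out_of_sample - in_sample) / abs(in_sample) * 100.0

fitness = {
    ("avg", "IS"): 1.20, ("avg", "OOS"): 1.05,   # average of the 20/25/30 min results
    (20, "IS"): 1.25, (20, "OOS"): 1.10,
    (25, "IS"): 1.18, (25, "OOS"): 1.06,
    (30, "IS"): 1.17, (30, "OOS"): 0.99,
}

rows = [
    ("avg 20/25/30 vs avg 20/25/30", ("avg", "IS"), ("avg", "OOS")),
    ("avg 20/25/30 vs 20",           ("avg", "IS"), (20, "OOS")),
    ("avg 20/25/30 vs 25",           ("avg", "IS"), (25, "OOS")),
    ("avg 20/25/30 vs 30",           ("avg", "IS"), (30, "OOS")),
    ("20 vs 20",                     (20, "IS"),    (20, "OOS")),
    ("25 vs 25",                     (25, "IS"),    (25, "OOS")),
    ("30 vs 30",                     (30, "IS"),    (30, "OOS")),
]

for label, is_key, oos_key in rows:
    print(f"{label}: {degradation(fitness[is_key], fitness[oos_key]):+.1f}%")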
"Regarding the issue of running WF. For example, for the 20 min dataset, is it applying systems based on starting values of the original-period
indicators or the average-period indicators?"
The answer doesn't matter much, as the first section will be optimized for peak fitness within +25% and -25% of the original values.
That's the default setting under WF. You can go wider than this, or even use random space, which uses the GSB hard-coded maximum and minimum indicator lengths.
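As a rough illustration of that +25% / -25% default (a sketch only; the function name is mine, not a GSB setting), the walk-forward candidate lookbacks around an original length could be generated like this:

# Sketch of the WF default search range: each indicator lookback is
# re-optimised within +/-25% of its original value. Names are illustrative.

def wf_search_range(original_length: int, pct: float = 0.25) -> range:
    low = max(1, round(original_length * (1 - pct)))
    high = round(original_length * (1 + pct))
    return range(low, high + 1)

print(list(wf_search_range(20)))  # original lookback 20 -> candidates 15..25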
|
|
|
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
|
|
Thanks for the response. As I understand it, for any given system, if "Optimise Price Data" is set to False, GSB will optimise the lookback period for the indicators across all the datasets (20, 25, 30). The resulting optimised lookback period is then applied to all the datasets. So when we run WF for any of the datasets (say 20 min), the starting values will come from this jointly optimised lookback period.
I had initially misunderstood that "20 compared to 20" involved an optimisation of the lookback period for the given dataset, independently of the other datasets.
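If that understanding is right, the joint optimisation could be pictured roughly like this (a sketch under that assumption; joint_optimise, score and the data names are hypothetical, not GSB internals):

# Rough sketch of the idea described above (not GSB's actual code): one
# lookback is chosen by its combined score across all bar sizes, and that
# single value then seeds every WF run.
from typing import Callable, Dict, Iterable

def joint_optimise(lookbacks: Iterable[int],
                   datasets: Dict[int, object],
                   score: Callable[[object, int], float]) -> int:
    """Pick the lookback with the best average score across every dataset."""
    def combined(lb: int) -> float:
        return sum(score(data, lb) for data in datasets.values()) / len(datasets)
    return max(lookbacks, key=combined)

# Usage (hypothetical scoring function and price data):
# best = joint_optimise(range(10, 41), {20: bars_20, 25: bars_25, 30: bars_30}, net_profit)
# `best` would then be the WF starting value for the 20, 25 and 30 min runs alike.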
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by cyrus68  | Thanks for the response. As I understand it, for any given system, if "Optimise Price Data" is set to False, GSB will optimise the lookback period for the indicators across all the datasets (20, 25, 30). The resulting optimised lookback period is then applied to all the datasets. So when we run WF for any of the datasets (say 20 min), the starting values will come from this jointly optimised lookback period.
I had initially misunderstood that "20 compared to 20" involved an optimisation of the lookback period for the given dataset, independently of the other datasets. |
Correct.
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
I have done testing on CL:
For 30 minute bars, what was the out-of-sample performance with every second day out of sample, and with the last year out of sample?
The same test for 29, 30 and 31 minute CL bars.
The same test for CL, HO and RB, then extracting the 30 min CL results only.
And the same test for CL, HO and RB with natural gas added. NG doesn't correlate as well with the other markets.
Will post tomorrow in the private forum:
https://trademaid.info/forum/viewthread.php?tid=33&page=2#pi...
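For reference, the "every second day out of sample" split could be expressed roughly like this (a sketch only, assuming pandas intraday bars with a DatetimeIndex; cl_30min_bars is a hypothetical variable):

# Sketch of an "every second day out of sample" split. Assumes a pandas
# DataFrame of intraday bars indexed by timestamp; names are illustrative.
import pandas as pd

def alternate_day_split(bars: pd.DataFrame):
    """Even-numbered trading days -> in sample, odd-numbered days -> out of sample."""
    day_ordinal, _ = pd.factorize(bars.index.normalize())
    in_sample = bars[day_ordinal % 2 == 0]
    out_of_sample = bars[day_ordinal % 2 == 1]
    return in_sample, out_of_sample

# is_bars, oos_bars = alternate_day_split(cl_30min_bars)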
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Newest draft video on multi bars, the new verification score per system, etc.
There is something in this video for people new to GSB and something for experienced GSB users; it is aimed at a bit of both, though it is not perfect for either.
Note also the use of favorites & stats, which is a new implementation of existing features.
It is 15 minutes long, but I have never had a video that took so long to produce. A lot of that was due to working out methodology and preparing content.
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
I've had quite a lot of feedback on the video in the post above. A few comments.
The newer methodology is a lot better with its extra steps, and the implications are significant. Why would you trade systems built on a single time frame when multi-time-frame systems are so much more likely to go well out of sample? Of course there is also the verification on other bar intervals / markets, and the verification score, which helps even more. My only answer is that, while it takes us time to write the new code, this should be done as soon as possible.
Another addition: in the video I showed how a market degradation score of better than -10% gave improved results. I have done more tests, and the better this figure is, the better the out-of-sample results were. For example, on CL, keeping only systems with a verification score better than -4% gave -3.0% market degradation, while a threshold of -10% gave -7.1% (a rough sketch of this filtering step follows below).
The video's focus was market degradation and getting good out-of-sample results: how to get systems with good out-of-sample performance, not how to build a single system. That's because, to build a good trading system, you need a good foundation. If the foundation is flawed, what is built upon it will not last.
What I feel is good right now is to verify with the nth no-trade (in sample) and the nth trade (out of sample), and pick out the best systems. Some users with low-powered hardware might argue this is going to take too long. Well, I think the cost of getting this wrong is greater than the cost of better hardware. Short term, I can hire a dual Xeon server (roughly i9 performance) with 192 GB of RAM for US$10 a day.
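A minimal sketch of that filtering step (the field names are my own placeholders, not GSB's actual report columns; the -4% and -10% thresholds are the ones mentioned above):

# Sketch: keep only systems whose verification score clears a threshold, then
# look at the average market degradation of the survivors. Field names are
# hypothetical placeholders, not GSB report columns.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SystemResult:
    verification_score: float   # e.g. -3.5 means -3.5%
    market_degradation: float   # e.g. -7.0 means -7.0%

def avg_degradation(systems, min_verification_score: float) -> float:
    kept = [s.market_degradation for s in systems
            if s.verification_score > min_verification_score]
    return mean(kept) if kept else float("nan")

# e.g. compare avg_degradation(systems, -4.0) with avg_degradation(systems, -10.0):
# the tighter threshold kept the systems that held up better out of sample.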
|
|
|
Bruce
Member
 
Posts: 115
Registered: 22-7-2018
Location: Auckland - New Zealand
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by admin  | I've had quite a lot of feedback on the video in the post above. A few comments.
The newer methodology is a lot better with its extra steps, and the implications are significant. Why would you trade systems built on a single time frame when multi-time-frame systems are so much more likely to go well out of sample? Of course there is also the verification on other bar intervals / markets, and the verification score, which helps even more. My only answer is that, while it takes us time to write the new code, this should be done as soon as possible.
Another addition: in the video I showed how a market degradation score of better than -10% gave improved results. I have done more tests, and the better this figure is, the better the out-of-sample results were. For example, on CL, keeping only systems with a verification score better than -4% gave -3.0% market degradation, while a threshold of -10% gave -7.1%.
The video's focus was market degradation and getting good out-of-sample results: how to get systems with good out-of-sample performance, not how to build a single system. That's because, to build a good trading system, you need a good foundation. If the foundation is flawed, what is built upon it will not last.
What I feel is good right now is to verify with the nth no-trade (in sample) and the nth trade (out of sample), and pick out the best systems. Some users with low-powered hardware might argue this is going to take too long. Well, I think the cost of getting this wrong is greater than the cost of better hardware. Short term, I can hire a dual Xeon server (roughly i9 performance) with 192 GB of RAM for US$10 a day. |
Good feedback, Peter. You absolutely nailed it with the need for a good foundation and the modest investment required to deliver robust systems and performance results.
|
|
|
saycem
Junior Member

Posts: 48
Registered: 13-7-2018
Member Is Offline
Mood: No Mood
|
|
Agree with the above. By now it's fairly safe to assume that 29, 30, 31 is better than 30 alone.
But what about all the other decisions and variables to consider, many of which you mention in the video: time frames, day/swing, the secondary filter, and all the various second (or more) data streams, etc.
How long do you see this taking? Are we all going to commence the same overlapping work?
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by saycem  | Agree with the above. By now it's fairly safe to assume that 29, 30, 31 is better than 30 alone.
But what about all the other decisions and variables to consider, many of which you mention in the video: time frames, day/swing, the secondary filter, and all the various second (or more) data streams, etc.
How long do you see this taking? Are we all going to commence the same overlapping work? |
Well, some of that is your personal choice, and the rest is whatever works best.
I think the CL settings are best as I'm using them, though long/short and long-only are both valid. Secondary filters are best as I am using them, as are the data streams. I think 30 min is best, with a small chance that 15 min is better.
I'm not fully sure what you mean by "How long do you see this taking? Are we all going to commence the same overlapping work?"
I think CL, as it is, is really good.
|
|
|
saycem
Junior Member

Posts: 48
Registered: 13-7-2018
Member Is Offline
Mood: No Mood
|
|
Agree, CL looks very good.
I was genuinely interested in how long it might take to run 10k systems per setting to determine what is superior. As I don't have the hardware yet, I was wondering, for example, how long it might take to determine:
S vs S/BO vs S/BO/SM
15 min vs 30 min vs 60 min (many hypothesise that other time frames are even better, e.g. 20 min, but we can't choose everything)
CloseLessPrevCloseDbpv vs GA
EOD vs swing
I'm certainly not disagreeing with your method for determining the above; I just think it would take a long time. Plus, all of us doing the same thing seems a little inefficient, but I don't know how to solve that.
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by saycem  | Agree, CL looks very good.
I was genuinely interested in how long it might take to run 10k systems per setting to determine what is superior. As I don't have the hardware yet, I was wondering, for example, how long it might take to determine:
S vs S/BO vs S/BO/SM
15 min vs 30 min vs 60 min (many hypothesise that other time frames are even better, e.g. 20 min, but we can't choose everything)
CloseLessPrevCloseDbpv vs GA
EOD vs swing
I'm certainly not disagreeing with your method for determining the above; I just think it would take a long time. Plus, all of us doing the same thing seems a little inefficient, but I don't know how to solve that. |
Well, it would be good for us to collectively publish results.
I started on ES but then improved a lot on CL. I will likely go back and test the concepts found on CL on ES.
I think for soybeans, BO & SM, 30 min was best. I think I got high degradation but good results in the last year, though not a lot of trades. It was some time ago, so I need to do it all again.
|
|
|
cotila1
Junior Member

Posts: 78
Registered: 8-5-2017
Member Is Offline
Mood: No Mood
|
|
Interesting video. It adds a bit more to what I already do. I found the VSS feature very useful.
Quote: Originally posted by admin  | Newest draft video on multi bars, the new verification score per system, etc.
There is something in this video for people new to GSB and something for experienced GSB users; it is aimed at a bit of both, though it is not perfect for either.
Note also the use of favorites & stats, which is a new implementation of existing features.
It is 15 minutes long, but I have never had a video that took so long to produce. A lot of that was due to working out methodology and preparing content.
|
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
The video is finished. It's polished a little more, with small updates. Another video, a complete updated guide to making a crude oil system, is in the pipeline.
Videos don't come quickly, as they take a lot of time to make the content and present it.
To help promote the video, please hit the like button if you like the content.
https://www.youtube.com/watch?v=HDeJpONE090&feature=youtu.be
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Sorry for the lack of updates recently. I'm doing some very time-consuming, in-depth reading and research. More from me when I surface. One of the goals is to finalize the last video showing how I make systems with the newer enhanced GSB features.
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
I feel I now have a clear road map to test the enhanced methodology, so there is light at the end of the tunnel. Still, it might take me a week to get it as precise as I would like, then some time to publish. I also want to test the same method on ES and CL to confirm what worked best.
The bottom line, however, is that all variations of what I did worked well in the last year of CL that was left for out of sample.
I'm going to do a number of tests and leave the last 3 years out of sample to validate which method works best. I may also do identical tests with 3 years at the front of the data as out of sample too.
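A sketch of the two holdout schemes (assuming pandas bars with a DatetimeIndex; cl_bars and the function name are hypothetical):

# Sketch: hold out `years` of data either at the end or at the front of the
# series as out of sample. Assumes a DataFrame indexed by timestamp.
import pandas as pd

def holdout_split(bars: pd.DataFrame, years: int = 3, at_end: bool = True):
    """Return (in_sample, out_of_sample) DataFrames."""
    if at_end:
        cutoff = bars.index.max() - pd.DateOffset(years=years)
        return bars[bars.index <= cutoff], bars[bars.index > cutoff]
    cutoff = bars.index.min() + pd.DateOffset(years=years)
    return bars[bars.index >= cutoff], bars[bars.index < cutoff]

# in_bars, oos_bars = holdout_split(cl_bars, years=3, at_end=True)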
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
This testing is going to take some time. I did tests of 10,000 systems, but when I apply verification on other time frames etc., the number of systems drops greatly. That isn't a big enough sample to draw valid statistical conclusions, and it is critical to get this right. So I need to make about 80,000 systems * 12 tests, plus the time to verify the systems.
It takes me about 12 hours of CPU time to do 80,000 tests, so the 12 tests alone are roughly 144 hours (about six days) of CPU time, before verification.
If anyone has spare CPU time, email me your share key and I will get you version 59.08 or later to do the tests.
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
It has become apparent that verification should be sent to the cloud; it takes a long time to verify 80,000 systems.
On a totally different subject, my soybeans systems have gone fine out of sample, but I don't think the market validation tests were so good. I don't remember, as it was some time ago, when GSB also had fewer features.
Here is a report, with 1 tick and $2.40 slippage per side.
Out of sample from March 2018.

|
|
|
cotila1
Junior Member

Posts: 78
Registered: 8-5-2017
Member Is Offline
Mood: No Mood
|
|
Agree on the idea of verification in the cloud.
A useful command would also be "cancel verify" (right-click, similar to "Cancel Nth mode"). This would be useful especially when verifying a huge bulk of systems (e.g. 80K).
Thanks for the effort.
Quote: Originally posted by admin  | It has become apparent that verification should be sent to the cloud; it takes a long time to verify 80,000 systems.
On a totally different subject, my soybeans systems have gone fine out of sample, but I don't think the market validation tests were so good. I don't remember, as it was some time ago, when GSB also had fewer features.
Here is a report, with 1 tick and $2.40 slippage per side.
Out of sample from March 2018.
|
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by cotila1  | Agree on the idea of verification in the cloud.
A useful command would also be "cancel verify" (right-click, similar to "Cancel Nth mode"). This would be useful especially when verifying a huge bulk of systems (e.g. 80K).
Thanks for the effort.
|
Totally agree. It will happen within a few builds, I hope.
|
|
|
cotila1
Junior Member

Posts: 78
Registered: 8-5-2017
Member Is Offline
Mood: No Mood
|
|
Same for NG, built with the pre-new-generation GSB. It has also gone fine on unseen data in the second half of 2018.
Quote: Originally posted by admin  | It has become apparent that verification should be sent to the cloud; it takes a long time to verify 80,000 systems.
On a totally different subject, my soybeans systems have gone fine out of sample, but I don't think the market validation tests were so good. I don't remember, as it was some time ago, when GSB also had fewer features.
Here is a report, with 1 tick and $2.40 slippage per side.
Out of sample from March 2018.
|
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
It might be time to revisit NG, as the last 2 years were really poor. Has anyone else got anything to share? I will look at my systems on Monday.
|
|
|
boothy
Junior Member

Posts: 54
Registered: 21-5-2018
Member Is Offline
Mood: No Mood
|
|
I've just checked a couple of NG systems I built earlier this year, and both are similar to the one above: they were flat but have been making new highs recently.
Maybe some volatility is coming back into NG?

|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
The chart explains it all. I'm no expert on NG, but there seems to be a seasonal aspect too.
Shown are the weekly range and the weekly abs(close - closeD(1)).
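For anyone wanting to reproduce those two series, a rough pandas sketch (the column names and weekly resampling are assumptions, and closeD(1) is read here as the prior week's close):

# Rough calculation of the two plotted series: weekly range and the weekly
# absolute close-to-close change. Assumes daily OHLC data with 'high', 'low'
# and 'close' columns and a DatetimeIndex.
import pandas as pd

def weekly_volatility_series(daily: pd.DataFrame) -> pd.DataFrame:
    weekly = daily.resample("W").agg({"high": "max", "low": "min", "close": "last"})
    return pd.DataFrame({
        "weekly_range": weekly["high"] - weekly["low"],
        "weekly_abs_close_change": (weekly["close"] - weekly["close"].shift(1)).abs(),
    })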
|
|
|
boothy
Junior Member

Posts: 54
Registered: 21-5-2018
Member Is Offline
Mood: No Mood
|
|
At a quick glance, I would say the range expansion coincides with the northern hemisphere winter, by the looks of that chart.
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
I'm not one for trading off the news, but plenty of news stories predicted this.
https://www.cnbc.com/2018/11/09/natural-gas-prices-up-on-col...
I have an untested theory: a new market regime starts with a daily bar whose volume and range are x standard deviations greater than those of the previous daily bars.
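One way to express that theory as a testable rule (a sketch only; the lookback n and multiplier x are free parameters picked arbitrarily, and the column names are assumptions):

# Sketch of the untested theory above: flag a possible new regime when a daily
# bar's volume AND range both exceed the mean of the previous n days by x
# standard deviations. n and x are free parameters; column names are assumed.
import pandas as pd

def regime_start_flags(daily: pd.DataFrame, n: int = 50, x: float = 3.0) -> pd.Series:
    bar_range = daily["high"] - daily["low"]
    volume = daily["volume"]
    range_thresh = bar_range.shift(1).rolling(n).mean() + x * bar_range.shift(1).rolling(n).std()
    volume_thresh = volume.shift(1).rolling(n).mean() + x * volume.shift(1).rolling(n).std()
    return (bar_range > range_thresh) & (volume > volume_thresh)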
|
|
|