Daniel UK1
Member
 
Posts: 470
Registered: 4-6-2019
Member Is Offline
|
|
I am not sure the Max Consecutive Losses metric really tells anyone anything beyond the luck of the past, random order of trades. We already have win % (which is important); results from a metric such as Max Consecutive Losses would just be fitted around the order of trades in past data.
If one system had 2 consecutive losses and another had 4, should one ditch the latter? I'm not sure about that if the other metrics line up well.
Not an easy question though, and my view above is certainly just my personal opinion; perhaps the metric has value for others.
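For readers wondering what this metric actually measures, the computation itself is trivial; here is a minimal sketch over a list of per-trade profits (an illustration only, not GSB's implementation):

```python
def max_consecutive_losses(trade_profits):
    """Longest run of losing trades in a sequence of per-trade P&L values."""
    longest = current = 0
    for pnl in trade_profits:
        if pnl < 0:
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Example: two losing streaks of length 3 and 2; the metric reports 3.
print(max_consecutive_losses([120, -40, -55, -10, 80, -25, -30, 200]))  # -> 3
```

As Daniel notes, a single reshuffling of the same trades can change this number, which is why it mostly reflects the historical ordering rather than the edge of the system.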
|
|
|
RandyT
Member
 
Posts: 123
Registered: 5-12-2019
Location: Colorado, USA
Member Is Offline
|
|
Quote: Originally posted by admin  | Quote: Originally posted by Systemholic  | Hi Peter,
I have a wish list.
Can I suggest you consider including a new metric - Max Consecutive Losses - to help users better select post-build systems for further scrutiny and testing? Current metrics like NP, FF, PF and NP/DD are great, but they do not really show how long a system's risk (risk of loss) can persist before it becomes profitable. I believe Max Consecutive Losses does give the user a good idea of this, and all things being equal, having this information can help users prioritise systems to send into favourites for further testing. Not sure where is best to place this in GSB and will leave that to your best judgement. Many thanks. |
I would like the opinions of others on this, but the request is in the job queue. Hope to have it in a few weeks. Other things are being worked on now.
|
I believe this performance is measured by PearsonByDate value.
|
|
|
Daniel UK1
Member
 
Posts: 470
Registered: 4-6-2019
Member Is Offline
|
|
What I personally would want is...
- Variance test, to be able to evaluate how the distribution of my strategy looks.
- Noise test distribution, to see that the real curve is placed OK.
- Randomised OOS, to check that OOS results are not just down to good markets and luck.
- Random systems vs real systems, to see for example the top 250 systems compared to a random 250 and the strategy in question.
- Random entry vs real entry, for IS and OOS. No picture available, but the same type of graph: the real curve against 1000 random-entry curves; I would assume one could do the same for exits.
I assume one would be able to define acceptable ranges for these metrics, and that these would be definable in GSB as a filter perhaps (a rough sketch of the random-entry comparison follows below).
The pictures are just to illustrate how it could look, to give you an idea of another solution for displaying the metrics.
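To illustrate the last item in the list above (real entry vs random entry), here is a rough sketch of how such a comparison could be scored; the function name, its parameters and the fixed holding period are assumptions for illustration, not anything currently in GSB:

```python
import numpy as np

def random_entry_percentile(bar_returns, real_trade_indices, holding_bars=5,
                            n_sims=1000, seed=42):
    """Compare the real strategy's total return against random-entry strategies
    that take the same number of trades on the same data.

    bar_returns: 1-D array of per-bar market returns.
    real_trade_indices: bar indices where the real strategy entered long.
    Returns (real_total_return, fraction_of_random_runs_beaten).
    """
    rng = np.random.default_rng(seed)
    bar_returns = np.asarray(bar_returns, dtype=float)
    n_bars = len(bar_returns)
    n_trades = len(real_trade_indices)

    def total_return(entries):
        # Sum the returns over the holding period following each entry.
        return sum(bar_returns[i:i + holding_bars].sum() for i in entries)

    real = total_return(real_trade_indices)
    sims = np.array([
        total_return(rng.choice(n_bars - holding_bars, size=n_trades, replace=False))
        for _ in range(n_sims)
    ])
    return real, (sims < real).mean()
```

A strategy whose return sits above, say, the 95th percentile of the random-entry distribution is at least doing better than luck on that data; the same idea extends to random exits or to randomised OOS segments.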
Thanks received (2):
+1 Bruce at 2020-09-10 16:24:48 +1 Carl at 2020-09-10 12:23:04
|
|
|
kiwibird
Junior Member

Posts: 10
Registered: 25-7-2017
Member Is Offline
Mood: No Mood
|
|
GPU
Is GSB amenable to being programmed to take advantage of the GPUs on Nvidia graphics cards?
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
I've been asked that a number of times, and we did run tests on this.
All possible maths is cached in RAM, which is why GSB is so fast and RAM hungry.
The only part that could be moved to the GPU was the performance metrics.
I estimate it would give a 10% improvement, which is not worth the programming time: that would cost all users (in lost future development) for the small benefit of a few users. Currently you can easily multiply the speed with more computers, which is working really well.
|
|
|
engtraderfx
Junior Member

Posts: 98
Registered: 15-10-2018
Member Is Offline
Mood: No Mood
|
|
Suggestion for plotting walk-forward... when checking the WF result on the graph with limited dates, the only way to check out-of-date performance seems to be to override settings, but then you lose the visual of before vs after. Just wondering if it's possible to project the WF curve and show it as, say, dashed so it's clear it's out of sample? Would stop a lot of back & forth. Thanks, Dave
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by engtraderfx  | Suggestion for plotting walk-forward... when checking the WF result on the graph with limited dates, the only way to check out-of-date performance seems to be to override settings, but then you lose the visual of before vs after. Just wondering if it's possible to project the WF curve and show it as, say, dashed so it's clear it's out of sample? Would stop a lot of back & forth. Thanks, Dave |
Can you give me a mock screenshot?
I think overriding the original settings will just project it forward so you see out of sample,
so I don't understand why that's not OK.
|
|
|
engtraderfx
Junior Member

Posts: 98
Registered: 15-10-2018
Member Is Offline
Mood: No Mood
|
|
Hi Peter, here is an example. It happens when I build systems & WF on a shorter time frame (e.g. up to 2016), then to see performance to the current date I reset the dates & override settings again. Then I need to "Use WF Parameters" to see the WF on out-of-sample data, but this removes the original equity
curve and sometimes changes its colour to brown. Just had an idea that one could keep the current curve & show the extended curve like so.
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by engtraderfx  | Hi Peter, here is an example. It happens when I build systems & WF on a shorter time frame (e.g. up to 2016), then to see performance to the current date I reset the dates & override settings again. Then I need to "Use WF Parameters" to see the WF on out-of-sample data, but this removes the original equity curve and sometimes changes its colour to brown. Just had an idea that one could keep the current curve & show the extended curve like so.
|
It looks like you're using training / test / validation.
Conceptually this is a very obsolete way of doing things.
Use 100% training and the dates as per the GSB methodology.
|
|
|
zordan
Junior Member

Posts: 11
Registered: 1-7-2020
Member Is Offline
|
|
P-VALUE
I would welcome a p-value test of the in-sample/out-of-sample mean returns in order to ensure OOS results are not random
http://www.automated-trading-system.com/bootstrap-test/
Thanks!
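For readers unfamiliar with the linked article, the idea can be sketched as a simple bootstrap test of the OOS mean trade return against a zero-mean null. The sketch below is only an illustration of what zordan is asking for, not an existing GSB feature, and the resampling scheme is an assumption:

```python
import numpy as np

def bootstrap_pvalue(is_returns, oos_returns, n_boot=10_000, seed=0):
    """One-sided p-value: how often a zero-mean null produces an OOS mean
    trade return at least as large as the one actually observed."""
    rng = np.random.default_rng(seed)
    oos_returns = np.asarray(oos_returns, dtype=float)
    pooled = np.concatenate([np.asarray(is_returns, dtype=float), oos_returns])
    # Centre the pooled sample so it represents the null of zero mean return.
    pooled = pooled - pooled.mean()
    observed = oos_returns.mean()
    boot_means = np.array([
        rng.choice(pooled, size=len(oos_returns), replace=True).mean()
        for _ in range(n_boot)
    ])
    # Fraction of null resamples that match or beat the observed OOS mean.
    return (boot_means >= observed).mean()
```

A small p-value (say below 0.05) would suggest the OOS mean return is unlikely to be a product of chance alone on that data.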
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
I'm open to the opinion of others, but I think this is a step backwards.
We are currently picking the top 250 or 300 systems out of 50,000 (and 50% of this data was out of sample) and looking at the entire 250/300 systems.
While I don't understand p-values, I think any test on just one system is a much weaker method of robustness.
The article says: "The problem with back-testing is that the results generated represent a single sample, which does not provide any information on the sample statistic's variability and its sampling distribution."
We have overcome this problem with our 50,000 builds and 250/300 selected systems.
In fact it is better than that, as the identical test is repeated 4 times - which often gives large variation in results.
The last 2 gold videos outline the finer details:
https://trademaid.info/gsbhelp/Videos.html
|
|
|
Daniel UK1
Member
 
Posts: 470
Registered: 4-6-2019
Member Is Offline
|
|
Quote: Originally posted by Daniel UK1  | What I personally would want is...
- Variance test, to be able to evaluate how the distribution of my strategy looks.
- Noise test distribution, to see that the real curve is placed OK.
- Randomised OOS, to check that OOS results are not just down to good markets and luck.
- Random systems vs real systems, to see for example the top 250 systems compared to a random 250 and the strategy in question.
- Random entry vs real entry, for IS and OOS. No picture available, but the same type of graph: the real curve against 1000 random-entry curves; I would assume one could do the same for exits.
I assume one would be able to define acceptable ranges for these metrics, and that these would be definable in GSB as a filter perhaps.
The pictures are just to illustrate how it could look, to give you an idea of another solution for displaying the metrics.
|
Any possibility for these validation features, do you think, Peter?
|
|
|
REMO755
Member
 
Posts: 181
Registered: 11-4-2021
Member Is Offline
|
|
Hello,
The macros are fine, the methodology is fine.
The freedom to choose options is something I need and value.
Right now the freedom of choice in the software could be improved. I do not take any credit for it; I am very grateful for this tool since it makes my work much easier, and I will recommend it wherever the opportunity presents itself, without hesitation.
A simple example of freedom of choice:
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by REMO755  | Hello,
The macros are fine, the methodology is fine.
The freedom to choose options is something I need and value.
Right now the freedom of choice in the software could be improved. I do not take any credit for it; I am very grateful for this tool since it makes my work much easier, and I will recommend it wherever the opportunity presents itself, without hesitation.
A simple example of freedom of choice:
|
Are you saying you want an option to choose all?
I don't want this. The reason is there are about 2 or 3 indicators that should not be used. They are kept to maintain compatibility with old settings or saved systems.
closedminuscloseD is redundant, but better than closedminuscloseDBPV.
roofingfilter1pole is redundant, but better than roofingfilter1pole.
You can select some or all systems on the left:
click, move the mouse, then shift-click to select them all, or use control-click etc.
Thanks received (1):
+1 REMO755 at 2021-05-27 07:33:14
|
|
|
Daniel UK1
Member
 
Posts: 470
Registered: 4-6-2019
Member Is Offline
|
|
Quote: Originally posted by Daniel UK1  | What I personally would want is...
- Variance test, to be able to evaluate how the distribution of my strategy looks.
- Noise test distribution, to see that the real curve is placed OK.
- Randomised OOS, to check that OOS results are not just down to good markets and luck.
- Random systems vs real systems, to see for example the top 250 systems compared to a random 250 and the strategy in question.
- Random entry vs real entry, for IS and OOS. No picture available, but the same type of graph: the real curve against 1000 random-entry curves; I would assume one could do the same for exits.
I assume one would be able to define acceptable ranges for these metrics, and that these would be definable in GSB as a filter perhaps.
The pictures are just to illustrate how it could look, to give you an idea of another solution for displaying the metrics.
|
Peter, any feedback on these metrics? I think they would be beneficial and appreciated.
Cheers
Thanks received (1):
+1 SwedenTrader at 2021-05-29 17:15:15
|
|
|
Bruce
Member
 
Posts: 115
Registered: 22-7-2018
Location: Auckland - New Zealand
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by Daniel UK1  | Quote: Originally posted by Daniel UK1  | What I personally would want is...
- Variance test, to be able to evaluate how the distribution of my strategy looks.
- Noise test distribution, to see that the real curve is placed OK.
- Randomised OOS, to check that OOS results are not just down to good markets and luck.
- Random systems vs real systems, to see for example the top 250 systems compared to a random 250 and the strategy in question.
- Random entry vs real entry, for IS and OOS. No picture available, but the same type of graph: the real curve against 1000 random-entry curves; I would assume one could do the same for exits.
I assume one would be able to define acceptable ranges for these metrics, and that these would be definable in GSB as a filter perhaps.
The pictures are just to illustrate how it could look, to give you an idea of another solution for displaying the metrics.
|
Peter, any feedback on these metrics? I think they would be beneficial and appreciated.
Cheers
|
As a user of both BA (4 years) and GSB, I'm of the view they serve two very different purposes, and I have found the build methodologies are also very different. Whilst there are some UI benefits with BA, the real feature is being able to test ideas and couple exit strategies with entries. And it's fast at doing that.
GSB tackles a much broader development process, and Peter has developed a very robust methodology that addresses challenges like IS builds with Nth & multi-OOS testing, WFO, families, automation etc. - elements where BA isn't even in the hunt at this time.
BA's bar pattern builds and rapid development with daily bars are all great features, however I'm not convinced that its use of Monte Carlo is any better than what can be achieved by building 30k systems in an indicator search, building 50k systems looking for the top cohort of 300, etc. UIs can always be better, and understandably these take a lot of time to get right.
Just offering some feedback as a builder and trader of systems from both apps.
Thanks received (1):
+1 SwedenTrader at 2021-06-17 10:39:13
|
|
|
Daniel UK1
Member
 
Posts: 470
Registered: 4-6-2019
Member Is Offline
|
|
Hi Bruce, I did not mention BA, but yes, the pics demonstrating how others are using the metrics I mentioned are from BA.
I am not using BA to build any systems, just to test out the validation metrics described, since I thought they had merit.
And yes, the GSB methodology is very good, robust, works great, and is far superior to BA (I'm not even aware they have a methodology). However, we all have our own ways of validating and testing our final strategies, and sometimes it's hard to quantify which metric is better for doing this.
What I like myself and believe has most merit is the variance test and the noise test, and being able to quickly see where my strategy is placed within the distribution; it makes sense to me personally.
Is it better than anything else or what we already have? Most likely not.
Thanks for your input
Thanks received (1):
+1 SwedenTrader at 2021-06-17 10:40:21
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by Daniel UK1  | Hi Bruce, I did not mention BA, but yes, the pics demonstrating how others are using the metrics I mentioned are from BA.
I am not using BA to build any systems, just to test out the validation metrics described, since I thought they had merit.
And yes, the GSB methodology is very good, robust, works great, and is far superior to BA (I'm not even aware they have a methodology). However, we all have our own ways of validating and testing our final strategies, and sometimes it's hard to quantify which metric is better for doing this.
What I like myself and believe has most merit is the variance test and the noise test, and being able to quickly see where my strategy is placed within the distribution; it makes sense to me personally.
Is it better than anything else or what we already have? Most likely not.
Thanks for your input
|
We have a noise test in GSB already, but I'm not convinced at all that it helps us now. That and 29/30/31-minute bars did help us years ago, before we had all the stats and families features. Now I'm not finding it helpful and don't use it at all.
You can also test on synthetic data in GSB as is. It's important to listen to ideas from users, and often we hold on to views that are proven wrong. What we have is very unique, and it has significantly overcome the issues of having one system that you are trying to validate.
|
|
|
Bruce
Member
 
Posts: 115
Registered: 22-7-2018
Location: Auckland - New Zealand
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by admin  | Quote: Originally posted by Daniel UK1  | Hi Bruce, I did not mention BA, but yes, the pics demonstrating how others are using the metrics I mentioned are from BA.
I am not using BA to build any systems, just to test out the validation metrics described, since I thought they had merit.
And yes, the GSB methodology is very good, robust, works great, and is far superior to BA (I'm not even aware they have a methodology). However, we all have our own ways of validating and testing our final strategies, and sometimes it's hard to quantify which metric is better for doing this.
What I like myself and believe has most merit is the variance test and the noise test, and being able to quickly see where my strategy is placed within the distribution; it makes sense to me personally.
Is it better than anything else or what we already have? Most likely not.
Thanks for your input
|
We have a noise test in GSB already, but I'm not convinced at all that it helps us now. That and 29/30/31-minute bars did help us years ago, before we had all the stats and families features. Now I'm not finding it helpful and don't use it at all.
You can also test on synthetic data in GSB as is. It's important to listen to ideas from users, and often we hold on to views that are proven wrong. What we have is very unique, and it has significantly overcome the issues of having one system that you are trying to validate. |
With regards to your preference "to quickly be able to see where my strategy is placed within the distribution", I too have found that insightful in the past, however when I take a good-looking system and perform a WFO test, the results are far from what these graphics portrayed. This could be a result of user errors on my part. One item you raised is randomising the trade results; have you found that this selection criterion really improves system performance or robustness?
|
|
|
Daniel UK1
Member
 
Posts: 470
Registered: 4-6-2019
Member Is Offline
|
|
Quote: Originally posted by Bruce  | Quote: Originally posted by admin  | Quote: Originally posted by Daniel UK1  | Hi Bruce, I did not mention BA, but yes, the pics demonstrating how others are using the metrics I mentioned are from BA.
I am not using BA to build any systems, just to test out the validation metrics described, since I thought they had merit.
And yes, the GSB methodology is very good, robust, works great, and is far superior to BA (I'm not even aware they have a methodology). However, we all have our own ways of validating and testing our final strategies, and sometimes it's hard to quantify which metric is better for doing this.
What I like myself and believe has most merit is the variance test and the noise test, and being able to quickly see where my strategy is placed within the distribution; it makes sense to me personally.
Is it better than anything else or what we already have? Most likely not.
Thanks for your input
|
We have a noise test in GSB already, but I'm not convinced at all that it helps us now. That and 29/30/31-minute bars did help us years ago, before we had all the stats and families features. Now I'm not finding it helpful and don't use it at all.
You can also test on synthetic data in GSB as is. It's important to listen to ideas from users, and often we hold on to views that are proven wrong. What we have is very unique, and it has significantly overcome the issues of having one system that you are trying to validate. |
With regards to your preference "to quickly be able to see where my strategy is placed within the distribution", I too have found that insightful in the past, however when I take a good-looking system and perform a WFO test, the results are far from what these graphics portrayed. This could be a result of user errors on my part. One item you raised is randomising the trade results; have you found that this selection criterion really improves system performance or robustness? |
Hi Bruce, the metrics mentioned are very difficult to prove or to quantify the robustness of, and I don't use these metrics to pick any system (that part is quite important). I do allow myself to discard systems based on other validation methods outside of GSB that make sense from my own human, logical perspective... For example, I try to avoid picking final systems based on performance; I pick systems based on the robustness of my WF and on passing validation methods in GSB such as noise on trade data, other timeframes, and other markets (after I have evaluated performance for my final build setting based on a large number of systems).
|
|
|
Carl
Member
 
Posts: 342
Registered: 10-5-2017
Member Is Offline
Mood: No Mood
|
|
Hi Peter,
Hopefully a suggestion to consider: adding charts of drawdown, average trade and win percentage.
Update:
System deterioration shows up at an earlier stage in the average trade chart than in the equity chart.
Please see the example charts.
Thanks.
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by Carl  | Hi Peter,
Hopefully a suggestion to consider: adding charts of drawdown, average trade and win percentage.
Please see example charts.
Thanks.
|
Your second file doesn't exist.
This could be done, but I don't see great value in it.
I'm open to getting it done though. Anyone else want it?
|
|
|
Daniel UK1
Member
 
Posts: 470
Registered: 4-6-2019
Member Is Offline
|
|
I think it could be interesting to have the possibility to see key metrics over time. As of now we can look at the curve to view progress over time, and we can get total metrics, but it would be good to be able to see key metrics such as Avg Trd, %Win etc. per year, or as a timeline as in Carl's example, in order to see whether key metrics are within range. I think it would be helpful.
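As an illustration of what a per-year key-metrics view could compute from a trade list, here is a minimal pandas sketch; the column names and data are made up for the example, not a GSB export format:

```python
import pandas as pd

# One row per closed trade, with hypothetical columns
# 'exit_date' and 'profit' (per-trade P&L in account currency).
trades = pd.DataFrame({
    "exit_date": pd.to_datetime(["2019-03-01", "2019-07-12", "2020-01-05",
                                 "2020-06-20", "2020-11-02"]),
    "profit": [150.0, -60.0, 220.0, -40.0, 90.0],
})

# Group by calendar year and compute the key metrics per year.
yearly = trades.groupby(trades["exit_date"].dt.year)["profit"].agg(
    trades="count",
    net_profit="sum",
    avg_trade="mean",
    win_pct=lambda p: (p > 0).mean() * 100,
)
print(yearly)
```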
|
|
|
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
|
|
Quote: Originally posted by Daniel UK1  | I think it could be interesting to have the possibility to see key metrics over time. As of now we can look at the curve to view progress over time, and we can get total metrics, but it would be good to be able to see key metrics such as Avg Trd, %Win etc. per year, or as a timeline as in Carl's example, in order to see whether key metrics are within range. I think it would be helpful. |
Thanks for the comments. I will add it.
|
|
|
Carl
Member
 
Posts: 342
Registered: 10-5-2017
Member Is Offline
Mood: No Mood
|
|
Hi Peter,
Thanks for adding my suggestion.
But what I mean is not only the key metrics, but a rolling average of the key metrics.
So, for example, the average trade over the last 30 trades. Please see the screenshot in my post: "ma 30 AT".
The average profit factor of the last 30 trades, and so on.
Thanks.
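Carl's "ma 30 AT" suggestion amounts to a moving window over the trade list rather than a calendar grouping; a minimal sketch of the idea (a hypothetical helper, not a GSB feature):

```python
import numpy as np
import pandas as pd

def rolling_trade_metrics(profits, window=30):
    """Rolling average trade and rolling profit factor over the last `window` trades."""
    p = pd.Series(profits, dtype=float)
    avg_trade = p.rolling(window).mean()
    gross_win = p.clip(lower=0).rolling(window).sum()
    gross_loss = (-p.clip(upper=0)).rolling(window).sum()
    # Avoid division by zero when a window contains no losing trades.
    profit_factor = gross_win / gross_loss.replace(0, np.nan)
    return pd.DataFrame({"avg_trade_30": avg_trade, "profit_factor_30": profit_factor})
```

Plotted against trade number, a falling 30-trade average-trade line can flag deterioration before it shows clearly in the equity curve, which is the point Carl makes above.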
|
|
|