GSB Forums

Wish List


admin - 16-4-2020 at 08:54 PM

Quote: Originally posted by Daniel UK1  
Hi Peter, I know your list is full... however, since we are now reaching quite a high number of indicators, I think it would be worthwhile to start dividing them into groups...
For example momentum indicators, volume, etc. *Not sure where to put the moon stuff... but I think even a small and simple grouping would help. And not to mention the possibility of changing the data path for the whole of GSB, so one can easily move from an SSD to Dropbox, for example...

Anyway, thanks for the great support and development of GSB.


It's a good idea but too early.
I'm up to 90 indicators, but on CL the ones beyond the top 40 or so are nowhere near as good as the top 40. I'm not going to release indicators that I consider lemons into the standard build. Another issue is that some of indicators 40 to 90 don't match TS, which could also be why they are lemons.

engtraderfx - 18-5-2020 at 07:29 PM

Trailing Stop on close?

Hi Peter, I am looking at potential longer-term ideas. The GSB ATR trailing stop is an intrabar exit; have you tried using the closing price to trigger the exit (i.e. close crosses the ATR band, then close the trade at the next open)? Sometimes this filters out noise and lets the trailing stop be set closer.

Thanks Dave

admin - 18-5-2020 at 07:49 PM

Quote: Originally posted by engtraderfx  
Trailing Stop on close?

Hi Peter, I am looking at potential longer-term ideas. The GSB ATR trailing stop is an intrabar exit; have you tried using the closing price to trigger the exit (i.e. close crosses the ATR band, then close the trade at the next open)? Sometimes this filters out noise and lets the trailing stop be set closer.

Thanks Dave

Hi Dave
I have no plans for intrabar features in GSB. I also don't see the need for it unless you're trading daily bars. I recommend you code your exit into existing systems and see if it helps them. If it does, I will consider adding it.
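For anyone wanting to prototype Dave's idea along the lines Peter suggests, here is a rough sketch of the close-triggered exit logic in Python (GSB/TradeStation users would write the equivalent in EasyLanguage; the ATR period, multiplier, and data below are illustrative, not GSB defaults):

```python
# Sketch of a close-triggered ATR trailing stop for a long position.
# Hypothetical parameters; this is not GSB's implementation.

def atr(highs, lows, closes, period):
    """Simple average true range over the last `period` bars."""
    trs = []
    for i in range(1, len(closes)):
        tr = max(highs[i] - lows[i],
                 abs(highs[i] - closes[i - 1]),
                 abs(lows[i] - closes[i - 1]))
        trs.append(tr)
    return sum(trs[-period:]) / period

def close_cross_exit(highs, lows, closes, entry_bar, period=5, mult=2.0):
    """Return the bar index at which the trade exits (the next open after
    the CLOSE crosses below the ratcheting ATR band), or None if it never
    does. The band only tightens, never loosens."""
    stop = None
    for i in range(entry_bar + period, len(closes)):
        band = (max(closes[entry_bar:i + 1])
                - mult * atr(highs[:i + 1], lows[:i + 1], closes[:i + 1], period))
        stop = band if stop is None else max(stop, band)  # ratchet upward only
        if closes[i] < stop:
            return i + 1  # exit at the next bar's open
    return None
```

Because the trigger is the bar close rather than an intrabar touch, intrabar noise that pierces the band but closes back above it does not stop the trade out, which is the filtering effect Dave describes.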

Combine multiple strategies in 1 for the same symbol

pkotzee - 23-5-2020 at 09:23 PM

Hi Peter,

Do you think it would be possible to develop some way to combine 2 mostly uncorrelated strategies into 1 on the same symbol? Each would still have its own rules. I understand TS doesn't do well with running 2 or more strategies attached to separate charts on the same instrument, but since some strategies don't trade often, I think it might be worthwhile to explore this idea.

Kevin Davey wrote an article about this:
Article

Although my EasyLanguage expertise is a bit limited, I'll try to do it manually as per his article and post what it looks like.

Thanks
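The combination pkotzee describes boils down to netting the sub-strategies' desired positions bar by bar, with each sub-strategy keeping its own rules. A rough Python sketch of that bookkeeping (the series here are hypothetical; the linked article works through the same idea in EasyLanguage):

```python
# Sketch of combining two independent strategies on one symbol by netting
# their per-bar target positions. Illustrative only; not GSB/TS code.

def net_positions(pos_a, pos_b):
    """pos_a/pos_b: per-bar positions in contracts (+long, -short, 0 flat).
    Returns the combined position the merged strategy should hold."""
    return [a + b for a, b in zip(pos_a, pos_b)]

def orders_from_positions(net):
    """Turn a target-position series into per-bar order deltas
    (contracts to buy if positive, sell if negative)."""
    prev = 0
    deltas = []
    for p in net:
        deltas.append(p - prev)
        prev = p
    return deltas
```

One practical benefit of netting: when one sub-strategy is long 1 and the other short 1, the merged strategy simply holds 0 instead of sending two opposing live orders.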

edgetrader - 24-5-2020 at 07:13 AM

Quote: Originally posted by pkotzee  
I understand TS doesn't do well with running 2 or more strategies attached to separate charts on the same instrument, (...)


For a few months now I've been running several strategies on the same symbol, in separate charts on the same TS brokerage account. The strategies have overlapping and sometimes opposing trades on most days they trade. So far it has worked fine without issues. You can use keywords like marketposition, setstoploss and setexitonclose, and they work fine at the individual strategy level (not portfolio).

The main thing you need to take care of are the attached settings marked yellow and red. The yellow settings are important because your theoretical strategy position must force its live position, at least some time after the theoretical order filled.

Two side issues:
1. For Eurex markets, your custom session needs to end at least a minute before the exchange session for setexitonclose to work, or your orders may be rejected.

2. If opposing trades are exited at session end, you may buy and sell at the same time, paying double slippage and commission. Such situations can be handled with unmanaged orders, but it only saves a little money and may not be critical, depending on how often it happens. This works at the portfolio level, and only one chart can do it:
OrderTicket.CancelAllOrders(GetAccountID, Symbol, false);
OrderTicket.ClosePosition(GetAccountID, Symbol);

At Interactive Brokers, issue 2 is critical, because IB thinks your orders may "cross" each other and rejects them.

20100831110143Forum 103668_Strat Set.PNG - 70kB

admin - 25-5-2020 at 03:52 AM

Quote: Originally posted by pkotzee  
Hi Peter,

Do you think it would be possible to develop some way to combine 2 mostly uncorrelated strategies into 1 on the same symbol? Each would still have its own rules. I understand TS doesn't do well with running 2 or more strategies attached to separate charts on the same instrument, but since some strategies don't trade often, I think it might be worthwhile to explore this idea.

Kevin Davey wrote an article about this:
Article

Although my EasyLanguage expertise is a bit limited, I'll try to do it manually as per his article and post what it looks like.

Thanks

The issue is whether you will limit it to one contract even though you have two systems. If so, this is a bad idea, as you get one of the big winning trades but two of the small losing trades.
This is indirectly touched on in the newest CL video.
Look at low correlation of losing trades, not low correlation of all trades.
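One plausible way to compute what Peter describes (the exact definition used in the CL video may differ) is to correlate two strategies' daily P&L only over the days where at least one of them lost:

```python
# Sketch of "correlation of losing trades": Pearson correlation of two
# strategies' daily P&L, restricted to days where at least one lost.
# Illustrative interpretation, not GSB's exact metric.

from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def losing_day_correlation(pnl_a, pnl_b):
    """Correlation computed only over days where either strategy lost."""
    pairs = [(a, b) for a, b in zip(pnl_a, pnl_b) if a < 0 or b < 0]
    xs, ys = zip(*pairs)
    return pearson(list(xs), list(ys))
```

The point of restricting to losing days is that two systems can look uncorrelated overall yet still lose together, which is exactly when diversification fails.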

pkotzee - 25-5-2020 at 07:13 AM

Quote: Originally posted by admin  
Quote: Originally posted by pkotzee  
Hi Peter,

Do you think it would be possible to develop some way to combine 2 mostly uncorrelated strategies into 1 on the same symbol? Each would still have its own rules. I understand TS doesn't do well with running 2 or more strategies attached to separate charts on the same instrument, but since some strategies don't trade often, I think it might be worthwhile to explore this idea.

Kevin Davey wrote an article about this:
Article

Although my EasyLanguage expertise is a bit limited, I'll try to do it manually as per his article and post what it looks like.

Thanks

The issue is whether you will limit it to one contract even though you have two systems. If so, this is a bad idea, as you get one of the big winning trades but two of the small losing trades.
This is indirectly touched on in the newest CL video.
Look at low correlation of losing trades, not low correlation of all trades.


The idea is not to limit it to 1 contract overall, but to 1 contract per strategy. So combining 2 strategies would allow it to trade 2 contracts, and so on.
And, as you say, select multiple strategies for the same instrument based on low correlation of losing trades/drawdown, then combine them into 1 strategy, with each underlying strategy still trading 1 contract.
Not sure if I'm explaining it clearly.


admin - 26-5-2020 at 01:45 AM

I still feel limiting to 1 contract is not a good idea, for the same reasons, but it's less of a problem if the markets are different.
PA Pro can cap your contracts at one, so you can test this.
In my limited testing you likely make less $ for the same drawdown.
Best to confirm what I'm saying, as I could very well be wrong when it's two different markets.
Trading the micro contracts might be a better option.
I totally understand the logic if you're risk averse or low on capital, but I see limiting contract numbers as higher risk in many ways.

Carl - 5-6-2020 at 05:19 AM

Hi Peter,

A really nice piece of functionality in GSB would be the ability to choose a few different strategies and have GSB create a correlation matrix. Maybe a max of 10 strategies?

And another one: choosing, for example, two different strategies and having GSB calculate the total result of the two strategies combined.

Thanks.

cotila1 - 10-6-2020 at 12:58 PM

I really like those 2 ideas!!

Quote: Originally posted by Carl  
Hi Peter,

A really nice piece of functionality in GSB would be the ability to choose a few different strategies and have GSB create a correlation matrix. Maybe a max of 10 strategies?

And another one: choosing, for example, two different strategies and having GSB calculate the total result of the two strategies combined.

Thanks.

admin - 10-6-2020 at 04:42 PM

Hi cotila1
A correlation matrix (especially negative correlation) is a good thing to look at.
We have Portfolio Analyst Pro to do all of this and much more. It's a poor use of limited programming time to add features that are already in another product.
https://trademaid.info/pa.html
You also need to check for correlation to existing systems made previously that might not be in GSB at the time.

getty002 - 19-7-2020 at 02:12 PM

Hi Peter,

1) Would you consider including the option for the stop-loss value to be part of the search space?
2) Could you also allow "daily" bars as input? Currently we can hack it with 390-minute bars, but if you start switching the data1 time, it doesn't work. Daily should be daily, regardless of the intraday timing.
3) The ability to prioritize my cloud worker for my manager (on a different machine). Currently I'm trying to run jobs with my manager and I can't get my cloud worker to run my job, just other people's jobs :( . How is cloud prioritization currently handled?

Thank you!

RandyT - 19-7-2020 at 02:27 PM

Quote: Originally posted by getty002  
Hi Peter,

1) Would you consider including the option for the stop-loss value to be part of the search space?
2) Could you also allow "daily" bars as input? Currently we can hack it with 390-minute bars, but if you start switching the data1 time, it doesn't work. Daily should be daily, regardless of the intraday timing.
3) The ability to prioritize my cloud worker for my manager (on a different machine). Currently I'm trying to run jobs with my manager and I can't get my cloud worker to run my job, just other people's jobs :( . How is cloud prioritization currently handled?

Thank you!


@getty002, I second your requests for items #1 and #2.

Regarding #3, is your cloud worker on the same local LAN, or remote? If remote, you'll need to exchange "share keys" between the two instances: the one running the manager and the other node running the workers. Note that you also need licenses for both machines, if that isn't obvious.

Share keys can be set under App settings->Workplace

getty002 - 19-7-2020 at 02:42 PM

Quote: Originally posted by admin  
I still feel limiting to 1 contract is not a good idea, for the same reasons, but it's less of a problem if the markets are different.
PA Pro can cap your contracts at one, so you can test this.
In my limited testing you likely make less $ for the same drawdown.
Best to confirm what I'm saying, as I could very well be wrong when it's two different markets.
Trading the micro contracts might be a better option.
I totally understand the logic if you're risk averse or low on capital, but I see limiting contract numbers as higher risk in many ways.


A portfolio voting mechanism (say, 3 of 9 strategies agreeing) to execute 1 order has the potential to mitigate the drawdown while retaining a reasonable PNL/DD ratio.
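A minimal sketch of that voting idea, assuming each strategy emits a per-bar direction (illustrative only; this is not an existing GSB feature):

```python
# Sketch of a k-of-n voting mechanism: fire one order only when at least
# k strategies agree on a direction. Signals and threshold are made up.

def vote_signal(signals, k):
    """signals: list of per-strategy directions (+1 long, -1 short, 0 flat).
    Returns +1 or -1 when at least k strategies agree on that direction
    (and more agree than disagree), else 0."""
    longs = sum(1 for s in signals if s > 0)
    shorts = sum(1 for s in signals if s < 0)
    if longs >= k and longs > shorts:
        return 1
    if shorts >= k and shorts > longs:
        return -1
    return 0
```

With 3-of-9 voting, isolated signals from a single system never reach the market, which is where the drawdown mitigation would come from; the cost is skipping winners that only one or two systems catch.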

getty002 - 19-7-2020 at 02:55 PM

Quote: Originally posted by RandyT  


Regarding #3, is your cloud worker on same local lan, or remote? If remote, you'll need to exchange "share keys" between the two instances. The one running manager, and the other node running the workers. Note you also need licenses for both of these machines if that is not obvious.

Share keys can be set under App settings->Workplace


Thanks for the fast reply, Randy! I tried sharing keys and it didn't work for me; the worker just kept running other people's jobs. I tried deleting the GSB share keys and it still didn't run my job. The computers are on the same LAN; is there a different solution for this?

Thank you.


RandyT - 19-7-2020 at 03:06 PM

Quote: Originally posted by getty002  

Thanks for the fast reply, Randy! I tried sharing keys and it didn't work for me; the worker just kept running other people's jobs. I tried deleting the GSB share keys and it still didn't run my job. The computers are on the same LAN; is there a different solution for this?

Thank you.



If they are on the same LAN, there is no need for share keys.

And just to confirm, you have two licenses?

This might be something Peter will need to clear up for you via a TeamViewer session...

getty002 - 19-7-2020 at 03:16 PM

Quote: Originally posted by RandyT  
Quote: Originally posted by getty002  

Thanks for the fast reply, Randy! I tried sharing keys and it didn't work for me; the worker just kept running other people's jobs. I tried deleting the GSB share keys and it still didn't run my job. The computers are on the same LAN; is there a different solution for this?

Thank you.


If they are on the same LAN, there is no need for share keys.

And just to confirm, you have two licenses?

This might be something Peter will need to clear up for you via a TeamViewer session...


Yes, two licenses. Strange that the same LAN didn't work for me. However, moving my key to the top of the list did seem to do the trick. Really appreciate your help. BTW, there are insightful comments throughout this forum from you and others. I'm learning a lot. :-)

admin - 20-7-2020 at 12:09 AM

Quote: Originally posted by getty002  
Quote: Originally posted by RandyT  
Quote: Originally posted by getty002  

Thanks for the fast reply, Randy! I tried sharing keys and it didn't work for me; the worker just kept running other people's jobs. I tried deleting the GSB share keys and it still didn't run my job. The computers are on the same LAN; is there a different solution for this?

Thank you.


If they are on the same LAN, there is no need for share keys.

And just to confirm, you have two licenses?

This might be something Peter will need to clear up for you via a TeamViewer session...

Yes, two licenses. Strange that the same LAN didn't work for me. However, moving my key to the top of the list did seem to do the trick. Really appreciate your help. BTW, there are insightful comments throughout this forum from you and others. I'm learning a lot. :-)

If you set these worker settings to true, or use the same share key in your manager and worker, the workers should run if they are on the same LAN.


sharekey.png - 70kB

getty002 - 2-8-2020 at 10:08 PM

First, let me say I appreciate what GSB offers and the capabilities it has. I realize I'm a new user here, and I may be met with some resistance, but I'd like to make a request for a stable, more user-friendly GSB release. In my short time here, I've noticed a breakneck pace of development with constant new capabilities - this is indeed wonderful and welcome. It has a side effect, though: with such a small development team, QA and usability can suffer. I'm not a full-time user of GSB, so I'm not in the software 10 hours a day working through and conquering all the gotchas and bugs. When I do make time, it sometimes becomes an exercise in frustration trying to get things to work smoothly. What I'm recommending is that at some point in the near future you consider taking a break from feature additions and transitioning to a rev 1.1 or 2.0 where many of the operations in GSB are really streamlined and stable. Here are some examples to consider:

1) Stabilize the code - GSB often freezes on me and crashes. This can mean a loss of hours to days of effort. (FYI, this is on a 256 GB RAM machine at 20% RAM utilization, running Windows Server 2019; not a swap-memory crash.)

2) Stabilize the way file directories are used in GSB and the Resource Manager - I am constantly unable to have directories remembered or pointed to the right place. This has been a thorn in my side since I first started using your software. My current thorn is an inability to use RM at all because of this problem.

3) To get fixes, I feel I must use the Resource Manager, since this appears to be how you release code - but unfortunately, without proper QA, I may fix one bug while inheriting another. Having a stable, reliable release is really important in software development; it's where 99% of your users will be. I don't always want the latest bells and whistles at the cost of stability.

4) Consider creating one integrated GSB application. I'm unclear why maxing out a single machine requires me to start GSB Manager and then 6 instances of GSB Worker. Having developed parallel-processing code for other applications, I believe there's enormous overhead wasted by having so much open and passing data inefficiently between the manager and workers. Data is literally being copied from one directory to another on my hard drive. Standalone is unable to max out my machine's resources and is therefore unusable to me. Further, why not have a single application that just selects "use this machine", "use personal cloud" or "use GSB cloud"? I don't understand why there are so many applications; it makes software development and releases much more complicated for you to manage, since you have to release a new version of all of them. The optimization problem scales very easily and shouldn't require 7 separate applications running on one computer. There's an opportunity here for a single application that dynamically identifies the machine's resources and multi-threads to use them fully.

5) Simplify the application settings where possible; many could be tweakable by Peter only (e.g. inactive process affinity?).

6) Consider an option for Performance filters to dynamically reduce their thresholds if, say, fewer than 1 in 100 systems passes the initial thresholds. I often find it difficult to find a set of input parameters that will meet the default filters, and it takes me about 15 minutes to discover this because the ramp-up time is long (this is related to #3 above). I *regularly* have to cancel my job after waiting 15 minutes because I discover my input parameters just don't produce enough solutions under the filters. How do I make this initial discovery process more efficient and much faster?

Six items are likely enough to digest for now. :-) Again, please recognize that I value GSB, so take these requests/comments as constructive criticism.

admin - 2-8-2020 at 10:50 PM

Hi Getty
Thanks for your comments.
1) GSB is very stable. If it's not, the first things I would look at are whether you have RAM free and a large enough Windows swap file.
I have GSB on 10 to 15 servers, and I don't have to go into any of them to fix them.
2) If you use c:\gsb as the default install you will have few problems. I will fix your issues via TeamViewer.
In time GSB will be more supportive of other folders. GSB can be moved to any folder, and paths are relative.
However, you will have fewer problems with c:\gsb. RM is designed for c:\gsb, not other paths, so other paths are going to be much more of a pain.
3) The latest build is normally the one with the fewest bugs. Right now anything recent will be fine. The only changes in the last few builds are in macros/features for GSB automation.
If you're using GSB automation, you MUST have the most recent build.
4) I totally understand why you say this. Normally apps work like that, but GSB was designed for speed. All maths is cached in RAM, which gives spectacular speed increases. A massive amount of programming time was spent trying to make one GSB do the equivalent of many copies. It was a total failure. If GSB did not have RAM caching, there would be no need for multiple copies; 1 GSB would use 100% of the CPU.
You could have that, but GSB is now 10 to 100 times faster. A simple choice, I think.
Management via RM works great, so it is no longer an issue to manage.
A single app where you just choose standalone, public cloud or personal cloud has merit, but it's not much different from what we have now. Your ideal is, I think, a little better than what we have now. However, GSB is not a mature product and has years more development work to go, and this is something that would happen at the end of development, not the beginning. In my opinion, the end of a product is when developers fill it with stuff that makes it simple, bloated and often a pain for advanced users. Look at Acronis True Image; I wish I could wind the clock back 10 years.
5) We have this already. There are numerous levels: advanced off/on, trial user, purchased user, beta tester and SU (me only).
6) Too hard to implement, and I don't like the idea. Also, I have 100 to 250 workers; not finding a system in 10 minutes is totally different for a user with one worker.
It's simple: on a new market, leave these settings very low, then increase them once you find what works.

Peter

getty002 - 2-8-2020 at 11:24 PM

1) GSB instability is not a memory-swap (RAM limitation) problem, as stated in my note. It seems to occur when, for example, I do a create-family on a worker and interact with the application directly. When I have no interaction with the session, it doesn't seem to crash. Stating that yours are stable doesn't mean that mine are.
2) My GSB is not installed to the default location; it's installed to Google Drive. Frankly, this shouldn't matter - otherwise, what's the point of being able to specify the directory location?
4) Having developed parallel-processing applications using Open MPI on Linux, I contend that there's overhead wasted with 7 applications open. Caching in memory does not require multiple separate applications, although that was perhaps the simplest way for you to implement it. I believe the fact that the same data is double- and triple-copied on my hard drive just to manage the different applications supports my argument.
5) Where is the purchased-user, non-beta release?

Peter, it seems you will continue to push forward unabated without regard to my comments on usability or stability, so I'll just be quiet now... But please keep in mind that our time has value too. At the moment I feel stuck in a bit of a meat grinder, with many hours lost to the concerns above.

admin - 2-8-2020 at 11:55 PM

Quote: Originally posted by getty002  
1) GSB instability is not a memory-swap (RAM limitation) problem, as stated in my note. It seems to occur when, for example, I do a create-family on a worker and interact with the application directly. When I have no interaction with the session, it doesn't seem to crash. Stating that yours are stable doesn't mean that mine are.
2) My GSB is not installed to the default location; it's installed to Google Drive. Frankly, this shouldn't matter - otherwise, what's the point of being able to specify the directory location?
4) Having developed parallel-processing applications using Open MPI on Linux, I contend that there's overhead wasted with 7 applications open. Caching in memory does not require multiple separate applications, although that was perhaps the simplest way for you to implement it. I believe the fact that the same data is double- and triple-copied on my hard drive just to manage the different applications supports my argument.
5) Where is the purchased-user, non-beta release?

Peter, it seems you will continue to push forward unabated without regard to my comments on usability or stability, so I'll just be quiet now... But please keep in mind that our time has value too. At the moment I feel stuck in a bit of a meat grinder, with many hours lost to the concerns above.


1) I'm happy to look into your stability issues if you can reproduce them. I agree that my GSB being stable doesn't mean yours is. I'm not getting issues from other users over this, but that's not to say you don't have issues. Many issues are experienced by only one user, and if one user can't use GSB well, I work hard at fixing it.
Right now a lot of the lead programmer's time is going into bugs from one user. For this one user the bug is critical (not just annoying), so development work for everyone gets slowed down.
If you have a crash, the exception file in \gsb\exceptions or Windows event logs / screenshots are often helpful.
2) Why it matters is that RM is not configured to auto-detect other paths. It can be manually configured for other paths, but that's not the default. So that makes it a little more complex, and you're not likely to get it right without some tech support.
4) I totally agree that it is not more efficient. Personally I don't care about disk space; RAM is the issue. If I have 10 to 15 servers with an average of 256 GB of RAM each, the cost saving to me (and other users) would be massive if I could run one GSB instead of 7 to 25 copies. As I said, a lot of time was put into this, and it was not successful.
5) It's the same exe file, but there are unlock keys.
The beta build has more bugs and issues and is more complex, so it's not for everyone.
VIP: read the warning

https://trademaid.info/forum/viewthread.php?tid=249

admin - 3-8-2020 at 04:21 AM

Hi Getty002
I'm intending to add the option of a custom path for all user settings fairly soon, but it will be in beta-tester mode only. A number of users wanted this. It's in the short-term job queue.

Systemholic - 10-9-2020 at 12:45 AM

Hi Peter,

I have a wish-list item.

Can I suggest you consider including a new metric, Max Consecutive Losses, to help users better select post-build systems for further scrutiny and testing? Current metrics like NP, FF, PF and NP/DD are great, but they don't really show how long a system's risk of loss can persist before it becomes profitable. I believe Max Consecutive Losses gives the user a good idea of this and, all things being equal, the information can help users prioritise systems to send to favourites for further testing. I'm not sure where best to place this in GSB and will leave that to your best judgement. Many thanks.
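The metric itself is a one-pass computation over the trade list; a minimal Python sketch of what's being requested:

```python
# Sketch of the requested metric: the longest run of consecutive losing
# trades in a trade-by-trade P&L list.

def max_consecutive_losses(trade_pnl):
    """Return the length of the longest streak of losing trades."""
    worst = run = 0
    for pnl in trade_pnl:
        run = run + 1 if pnl < 0 else 0  # extend streak on a loss, reset otherwise
        worst = max(worst, run)
    return worst
```

Run over a system's closed-trade list, this gives exactly the number Systemholic wants to rank candidates by; whether it is predictive out of sample is the separate question Daniel raises below.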

admin - 10-9-2020 at 12:49 AM

Quote: Originally posted by Systemholic  
Hi Peter,

I have a wish-list item.

Can I suggest you consider including a new metric, Max Consecutive Losses, to help users better select post-build systems for further scrutiny and testing? Current metrics like NP, FF, PF and NP/DD are great, but they don't really show how long a system's risk of loss can persist before it becomes profitable. I believe Max Consecutive Losses gives the user a good idea of this and, all things being equal, the information can help users prioritise systems to send to favourites for further testing. I'm not sure where best to place this in GSB and will leave that to your best judgement. Many thanks.

I would like the opinions of others on this, but the request is in the job queue. I hope to have it in a few weeks; other things are being worked on now.

Daniel UK1 - 10-9-2020 at 02:10 AM

I am not sure the Max Consecutive Losses metric really tells anyone anything more than the luck of the past random order of trades. We have win % (which is important); results from a metric such as Max Consecutive Losses would just be fitted to the past order of trades.

If a system had just 2 consecutive losses compared to another that had 4, should one ditch the latter? I'm not sure about that if the other metrics line up well.

Not an easy question, though, and my view above is certainly just my personal opinion; perhaps the metric has value for others.

RandyT - 10-9-2020 at 07:57 AM

Quote: Originally posted by admin  
Quote: Originally posted by Systemholic  
Hi Peter,

I have a wish-list item.

Can I suggest you consider including a new metric, Max Consecutive Losses, to help users better select post-build systems for further scrutiny and testing? Current metrics like NP, FF, PF and NP/DD are great, but they don't really show how long a system's risk of loss can persist before it becomes profitable. I believe Max Consecutive Losses gives the user a good idea of this and, all things being equal, the information can help users prioritise systems to send to favourites for further testing. I'm not sure where best to place this in GSB and will leave that to your best judgement. Many thanks.

I would like the opinions of others on this, but the request is in the job queue. I hope to have it in a few weeks; other things are being worked on now.


I believe this performance is measured by the PearsonByDate value.

Daniel UK1 - 10-9-2020 at 09:10 AM

What I personally would want is...

- A variance test, to be able to evaluate what the distribution of my strategy looks like.

- A noise-test distribution, to see that the real curve is placed OK.

- Randomised OOS, to check that OOS results are not just down to good markets and luck.

- Random systems vs real systems: to see, for example, the top 250 systems compared to a random 250 and the strategy in question.

- Random entry vs real entry, for IS and OOS. No picture available, but the same type of graph: the real curve against 1000 random entries. I would assume one could do the same for exits.

I assume one would be able to define acceptable ranges for these metrics, and that these could perhaps be definable in GSB as filters.

The pictures are just to illustrate how it could look, to give you an idea of others' solutions for displaying these metrics.


CaptureVSRANDOM.JPG - 72kB CaptureRandomisedOOS..JPG - 123kB CaptureNoiseTestDistribution.JPG - 65kB CaptureDistributionAnalaysis.JPG - 189kB
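As a rough illustration of the first request, one common way to implement a variance test is to reshuffle the trade order many times and compare the real sequence's max drawdown against the resulting distribution (a sketch under that assumption, not how any particular vendor implements it):

```python
# Sketch of a trade-reshuffle (variance) test: shuffle the trade order many
# times and collect the distribution of max drawdown. Illustrative only.

import random

def max_drawdown(trade_pnl):
    """Max peak-to-trough drop of the cumulative equity curve."""
    peak = equity = dd = 0.0
    for pnl in trade_pnl:
        equity += pnl
        peak = max(peak, equity)
        dd = max(dd, peak - equity)
    return dd

def reshuffle_drawdowns(trade_pnl, runs=1000, seed=42):
    """Return the sorted max-drawdown distribution over shuffled orderings."""
    rng = random.Random(seed)
    trades = list(trade_pnl)
    out = []
    for _ in range(runs):
        rng.shuffle(trades)
        out.append(max_drawdown(trades))
    return sorted(out)
```

If the real system's drawdown sits deep in the favourable tail of this distribution, the historical trade ordering was lucky, which is exactly the effect Daniel's requested tests are meant to expose.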

GPU

kiwibird - 23-9-2020 at 06:59 AM

Is GSB amenable to being programmed to take advantage of the GPUs on Nvidia graphics cards?

admin - 23-9-2020 at 04:53 PM

Quote: Originally posted by kiwibird  
Is GSB amenable to being programmed to take advantage of the GPUs on Nvidia graphics cards?

I've been asked that a number of times, and we did run tests on this.
All possible maths is cached in RAM, which is why GSB is so fast and RAM-hungry.
The only part that could be moved to the GPU was the performance metrics.
I estimate it would give a 10% improvement, which is not worth the programming time it would cost all users (in lost future development) for the small benefit to a few users. Currently you can easily multiply speed by adding more computers, which is working really well.

engtraderfx - 20-12-2020 at 05:27 PM

A suggestion for plotting walk-forward: when checking the WF result on a graph, if the dates are limited, the only way to check out-of-date performance seems to be to override settings, but then you lose the visual of before vs after. I just wonder if it's possible to project the WF curve and show it as, say, a dashed line so it's clear it's out of sample? It would save a lot of back and forth. Thanks, Dave

admin - 20-12-2020 at 05:29 PM

Quote: Originally posted by engtraderfx  
A suggestion for plotting walk-forward: when checking the WF result on a graph, if the dates are limited, the only way to check out-of-date performance seems to be to override settings, but then you lose the visual of before vs after. I just wonder if it's possible to project the WF curve and show it as, say, a dashed line so it's clear it's out of sample? It would save a lot of back and forth. Thanks, Dave

Can you give me a mock screenshot?
I think overriding the original settings will just project it forward so you see out of sample,
so I don't understand why that's not OK.

engtraderfx - 28-12-2020 at 05:08 PM

Hi Peter, here is an example. It happens when I build systems and walk-forward on a shorter time frame (e.g. up to 2016), then want to see performance up to the current date: I reset the date and override settings again. Then I need to "Use WF Parameters" to see WF on the out-of-sample data, but this removes the original equity curve and sometimes changes the colors to brown. I just had the idea that it could keep the current curve and show the extended curve like so.

gsb wf oos current.JPG - 149kB gsb wf oos.JPG - 158kB

admin - 28-12-2020 at 06:34 PM

Quote: Originally posted by engtraderfx  
Hi Peter, here is an example. It happens when I build systems and walk-forward on a shorter time frame (e.g. up to 2016), then want to see performance up to the current date: I reset the date and override settings again. Then I need to "Use WF Parameters" to see WF on the out-of-sample data, but this removes the original equity curve and sometimes changes the colors to brown. I just had the idea that it could keep the current curve and show the extended curve like so.


It looks like you're using training/test/validation.
Conceptually this is a very obsolete way of doing things.
Use 100% training and the dates as per the GSB methodology.

zordan - 12-1-2021 at 05:44 AM

P-VALUE

I would welcome a p-value test of the in-sample/out-of-sample mean returns in order to ensure OOS results are not random
http://www.automated-trading-system.com/bootstrap-test/

Thanks!
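In the spirit of the linked article (though not necessarily its exact procedure), a simple bootstrap of the OOS mean trade return looks like this:

```python
# Sketch of a bootstrap p-value for the OOS mean trade return: resample the
# OOS trades with replacement and ask how often the resampled mean is <= 0.
# Illustrative; the linked article's procedure may differ in detail.

import random

def bootstrap_p_value(oos_returns, runs=2000, seed=7):
    """Fraction of bootstrap resamples whose mean is <= 0. A small value
    suggests the positive OOS mean is unlikely to be a fluke of sampling."""
    rng = random.Random(seed)
    n = len(oos_returns)
    hits = 0
    for _ in range(runs):
        sample = [rng.choice(oos_returns) for _ in range(n)]
        if sum(sample) / n <= 0:
            hits += 1
    return hits / runs
```

This addresses the single-sample objection the article raises for one system; Peter's reply below argues that looking at the whole population of top systems attacks the same problem from a different angle.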

admin - 12-1-2021 at 05:49 PM

Quote: Originally posted by zordan  
P-VALUE

I would welcome a p-value test of the in-sample/out-of-sample mean returns in order to ensure OOS results are not random
http://www.automated-trading-system.com/bootstrap-test/

Thanks!

I'm open to the opinion of others, but I think this is a step backwards.
We are currently picking the top 250 or 300 systems out of 50,000 (and 50% of this data was out of sample) and looking at the entire 250/300 systems.
While I don't fully understand p-values, I think any testing on just one system is a much weaker method of establishing robustness.

The article says "The problem with back-testing is that the results generated represent a single sample, which does not provide any information on the sample statistic’s variability and its sampling distribution."
We have overcome this problem with our 50,000-to-250/300 systems.
In fact it's better than that, as the identical test is repeated 4 times, which often shows large variation in results.
The last two gold videos outline the finer details:
https://trademaid.info/gsbhelp/Videos.html

Daniel UK1 - 26-5-2021 at 12:26 PM

Quote: Originally posted by Daniel UK1  
What I personally would want is...

- Variance test, to be able to evaluate how the distribution of my strategy looks

- Noise test distribution, to see that the real curve is placed OK

- Randomised OOS, to check that OOS results are not just down to good markets and luck

- Random systems vs real systems, to see for example the top 250 systems compared to a random 250 and the strategy in question

- Random entry vs real entry, for IS and OOS (no picture available, but the same type of graph), to see the real curve against 1000 random entries; I would assume one could do the same for exits

I assume one would be able to define acceptable ranges for these metrics, and that these would perhaps be definable in GSB as a filter.

The pictures are just to illustrate what it could look like, to give you an idea of others' solutions for displaying the metrics






Any possibility for these validation features, do you think, Peter?

REMO755 - 26-5-2021 at 02:39 PM

Hello,

The macros are fine, the methodology is fine.

The freedom to choose options is necessary and something I value.

Right now the freedom of choice in the software can be improved; this takes nothing away from it. I am very grateful for this tool, since it makes my work much easier, and I will recommend it wherever the opportunity presents itself, without hesitation.

Simple example of freedom of choice:

Captura.JPG - 78kB

admin - 26-5-2021 at 05:04 PM

Quote: Originally posted by REMO755  
Hello,

The macros are fine, the methodology is fine.

The freedom to choose options is necessary and something I value.

Right now the freedom of choice in the software can be improved; this takes nothing away from it. I am very grateful for this tool, since it makes my work much easier, and I will recommend it wherever the opportunity presents itself, without hesitation.

Simple example of freedom of choice:



Are you saying an option to choose all?
I don't want this. The reason is there are about 2 or 3 indicators that should not be used. They are kept to maintain compatibility with old settings or saved systems:
closedminuscloseD is redundant, but better than closedminuscloseDBPV
roofingfilter1pole is redundant, but better than roofingfilter1pole

You can select some or all systems on the left:
click, move the mouse, then shift-click to select them all, or use control-click, etc.

Daniel UK1 - 29-5-2021 at 01:50 AM

Quote: Originally posted by Daniel UK1  
What I personally would want is...

- Variance test, to be able to evaluate how the distribution of my strategy looks

- Noise test distribution, to see that the real curve is placed OK

- Randomised OOS, to check that OOS results are not just down to good markets and luck

- Random systems vs real systems, to see for example the top 250 systems compared to a random 250 and the strategy in question

- Random entry vs real entry, for IS and OOS (no picture available, but the same type of graph), to see the real curve against 1000 random entries; I would assume one could do the same for exits

I assume one would be able to define acceptable ranges for these metrics, and that these would perhaps be definable in GSB as a filter.

The pictures are just to illustrate what it could look like, to give you an idea of others' solutions for displaying the metrics




Peter, any feedback on these mentioned metrics? I think they would be beneficial and appreciated.

Cheers


CaptureDistributionAnalaysis.JPG - 189kB CaptureNoiseTestDistribution.JPG - 65kB CaptureRandomisedOOS..JPG - 123kB CaptureVSRANDOM.JPG - 72kB

Bruce - 1-6-2021 at 12:04 AM

Quote: Originally posted by Daniel UK1  
Quote: Originally posted by Daniel UK1  
What I personally would want is...

- Variance test, to be able to evaluate how the distribution of my strategy looks

- Noise test distribution, to see that the real curve is placed OK

- Randomised OOS, to check that OOS results are not just down to good markets and luck

- Random systems vs real systems, to see for example the top 250 systems compared to a random 250 and the strategy in question

- Random entry vs real entry, for IS and OOS (no picture available, but the same type of graph), to see the real curve against 1000 random entries; I would assume one could do the same for exits

I assume one would be able to define acceptable ranges for these metrics, and that these would perhaps be definable in GSB as a filter.

The pictures are just to illustrate what it could look like, to give you an idea of others' solutions for displaying the metrics

Peter, any feedback on these mentioned metrics? I think they would be beneficial and appreciated.

Cheers




As a user of both BA (4 years) and GSB, I'm of the view that they serve two very different purposes, and I have found the build methodologies are also very different. Whilst there are some UI benefits with BA, the real feature is being able to test ideas and couple exit strategies with entries. And it's fast at doing that.
GSB tackles a much broader development process, and Peter has developed a very robust methodology that addresses challenges like IS builds with Nth & multi-OOS testing, WFO, families, automation etc., elements where BA isn't even in the hunt at this time.
BA's bar-pattern builds and rapid development with daily bars are all great features; however, I'm not convinced that its use of Monte Carlo is any better than what can be achieved by building 30k systems in an indicator search, building 50k systems looking for the top cohort of 300, etc. UIs can always be better, and understandably these take a lot of time to get right.

Just offering some feedback as a builder and trader of systems from both apps.

Daniel UK1 - 1-6-2021 at 01:40 AM

Hi Bruce, I did not mention BA, but yes, the pics demonstrating how others are using the metrics I mentioned are from BA.
I am not using BA to build any systems, just to test out the validation metrics described, since I thought they had merit.

And yes, the GSB methodology is very good, robust, works great, and is far superior to BA (I'm not even aware they have a methodology). However, we all have our own ways of validating and testing our final strategies, and sometimes it's hard to quantify which metric is better for doing this.

What I like myself and believe has most merit is the variance test and the noise test, and being able to quickly see where my strategy is placed within the distribution; it makes sense to me personally.
Is it better than anything else or what we already have? Most likely not.

Thanks for your input





admin - 1-6-2021 at 01:45 AM

Quote: Originally posted by Daniel UK1  
Hi Bruce, I did not mention BA, but yes, the pics demonstrating how others are using the metrics I mentioned are from BA.
I am not using BA to build any systems, just to test out the validation metrics described, since I thought they had merit.

And yes, the GSB methodology is very good, robust, works great, and is far superior to BA (I'm not even aware they have a methodology). However, we all have our own ways of validating and testing our final strategies, and sometimes it's hard to quantify which metric is better for doing this.

What I like myself and believe has most merit is the variance test and the noise test, and being able to quickly see where my strategy is placed within the distribution; it makes sense to me personally.
Is it better than anything else or what we already have? Most likely not.

Thanks for your input






We have a noise test in GSB already, but I'm not convinced at all that it helps us now. That and 29/30/31-minute bars did help us years ago, before we had all the stats and families features. Now I'm not finding it helpful and don't use it at all.
You can also test on synthetic data in GSB as is. It's important to listen to ideas from users; often we hold on to views that are proven wrong. What we have is very unique, and it has significantly overcome the issues of having just one system you are trying to validate.
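For anyone wanting to experiment with this kind of test outside GSB, a minimal price-noise perturbation might look like the sketch below (a hypothetical illustration only; GSB's own noise test may work quite differently, and the function name is made up):

```python
import random

def add_price_noise(closes, pct=0.001, seed=1):
    """Return a copy of the close series with each price perturbed by a
    uniform random amount of up to +/- pct. A noise test re-runs the
    strategy on many such perturbed series and checks that the equity
    curve does not degrade badly relative to the original."""
    rng = random.Random(seed)
    return [c * (1.0 + rng.uniform(-pct, pct)) for c in closes]

# example: perturb a made-up close series by up to +/- 1%
noisy = add_price_noise([100.0, 101.5, 99.25, 102.0], pct=0.01)
```

Running the same strategy over, say, 100 seeds and plotting the spread of equity curves shows whether the original result sits inside or outside that distribution.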

Bruce - 1-6-2021 at 02:25 AM

Quote: Originally posted by admin  
Quote: Originally posted by Daniel UK1  
Hi Bruce, I did not mention BA, but yes, the pics demonstrating how others are using the metrics I mentioned are from BA.
I am not using BA to build any systems, just to test out the validation metrics described, since I thought they had merit.

And yes, the GSB methodology is very good, robust, works great, and is far superior to BA (I'm not even aware they have a methodology). However, we all have our own ways of validating and testing our final strategies, and sometimes it's hard to quantify which metric is better for doing this.

What I like myself and believe has most merit is the variance test and the noise test, and being able to quickly see where my strategy is placed within the distribution; it makes sense to me personally.
Is it better than anything else or what we already have? Most likely not.

Thanks for your input






We have a noise test in GSB already, but I'm not convinced at all that it helps us now. That and 29/30/31-minute bars did help us years ago, before we had all the stats and families features. Now I'm not finding it helpful and don't use it at all.
You can also test on synthetic data in GSB as is. It's important to listen to ideas from users; often we hold on to views that are proven wrong. What we have is very unique, and it has significantly overcome the issues of having just one system you are trying to validate.


With regards to your preference "to quickly be able to see where my strategy is placed within the distribution", I too have found that insightful in the past; however, when I take a good-looking system and perform a WFO test, the results are far from what these graphics portrayed. This could be a result of user errors on my part. One item you raised is randomizing the trade results; have you found that this selection criterion really improves system performance or robustness?

Daniel UK1 - 1-6-2021 at 05:23 AM

Quote: Originally posted by Bruce  
Quote: Originally posted by admin  
Quote: Originally posted by Daniel UK1  
Hi Bruce, I did not mention BA, but yes, the pics demonstrating how others are using the metrics I mentioned are from BA.
I am not using BA to build any systems, just to test out the validation metrics described, since I thought they had merit.

And yes, the GSB methodology is very good, robust, works great, and is far superior to BA (I'm not even aware they have a methodology). However, we all have our own ways of validating and testing our final strategies, and sometimes it's hard to quantify which metric is better for doing this.

What I like myself and believe has most merit is the variance test and the noise test, and being able to quickly see where my strategy is placed within the distribution; it makes sense to me personally.
Is it better than anything else or what we already have? Most likely not.

Thanks for your input






We have a noise test in GSB already, but I'm not convinced at all that it helps us now. That and 29/30/31-minute bars did help us years ago, before we had all the stats and families features. Now I'm not finding it helpful and don't use it at all.
You can also test on synthetic data in GSB as is. It's important to listen to ideas from users; often we hold on to views that are proven wrong. What we have is very unique, and it has significantly overcome the issues of having just one system you are trying to validate.


With regards to your preference "to quickly be able to see where my strategy is placed within the distribution", I too have found that insightful in the past; however, when I take a good-looking system and perform a WFO test, the results are far from what these graphics portrayed. This could be a result of user errors on my part. One item you raised is randomizing the trade results; have you found that this selection criterion really improves system performance or robustness?


Hi Bruce, the metrics mentioned are very difficult to prove or to quantify the robustness of, and I don't use these metrics to pick any system (that part is quite important). I do allow myself to discard systems based on validation methods outside of GSB that make sense from my own logical human perspective. For example, I try to avoid picking final systems based on performance; I pick systems based on the robustness of my WF and on passing validation methods in GSB such as noise on trade data, other timeframes and other markets (after I have evaluated performance for my final build setting based on a large number of systems).

Carl - 17-6-2021 at 02:30 AM

Hi Peter,

Hopefully a suggestion to consider: adding charts of drawdown, average trade and win percentage.

Update:
A system deterioration shows up at an earlier stage in the average-trade chart than in the equity chart.

Please see example charts.

Thanks.


Equity_DD_AT_win_20210617.JPG - 153kB

admin - 17-6-2021 at 02:35 AM

Quote: Originally posted by Carl  
Hi Peter,

Hopefully a suggestion to consider: adding charts of drawdown, average trade and win percentage.

Please see example charts.

Thanks.

Your second file doesn't exist.
This could be done, but I don't see great value in it.
I'm open to getting it done though. Anyone else want it?

Daniel UK1 - 17-6-2021 at 02:13 PM

I think it could be interesting to have the possibility to see key metrics over time. As of now we can look at the curve to view progress over time, and we can get total metrics, but it would be good to be able to see key metrics per year, such as avg trade, % win etc., or, as in Carl's example, as a timeline, in order to see if the key metrics are within range. I think it would be helpful.

admin - 17-6-2021 at 04:45 PM

Quote: Originally posted by Daniel UK1  
I think it could be interesting to have the possibility to see key metrics over time. As of now we can look at the curve to view progress over time, and we can get total metrics, but it would be good to be able to see key metrics per year, such as avg trade, % win etc., or, as in Carl's example, as a timeline, in order to see if the key metrics are within range. I think it would be helpful.

Thanks for comments. I will add it.

Carl - 17-6-2021 at 09:25 PM

Hi Peter,

Thanks for adding my suggestion.

But what I mean is not only the key metrics, but a rolling average of the key metrics.

So, for example, the average trade of the last 30 trades (please see the screenshot in my post: "ma 30 AT"), the average profit factor of the last 30 trades, and so on.

Thanks.

admin - 18-6-2021 at 12:03 AM

Quote: Originally posted by Carl  
Hi Peter,

Thanks for adding my suggestion.

But what I mean is not only the key metrics, but a rolling average of the key metrics.

So, for example, the average trade of the last 30 trades (please see the screenshot in my post: "ma 30 AT"), the average profit factor of the last 30 trades, and so on.

Thanks.

Give me the exact list of what you want.

Carl - 18-6-2021 at 04:40 AM


My personal preference:

Drawdown
Rolling average trade of the most recent 30 trades
Rolling average Pearson R of the most recent 30 trades
Rolling average profit factor of the most recent 30 trades
Rolling average win percentage of the most recent 30 trades

Thanks!
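A rough sketch of how the rolling metrics in the list above could be computed from a per-trade P&L series (a generic illustration, not GSB code; the helper name and dictionary keys are invented, and the rolling Pearson R is omitted for brevity):

```python
def rolling_metrics(pnl, window=30):
    """For each trade, compute running drawdown of the equity curve plus
    average trade, win percentage and profit factor over the most recent
    `window` trades (fewer at the start of the series)."""
    out = []
    equity = peak = 0.0
    for i, p in enumerate(pnl):
        equity += p
        peak = max(peak, equity)
        recent = pnl[max(0, i - window + 1): i + 1]
        wins = [t for t in recent if t > 0]
        gross_win = sum(wins)
        gross_loss = -sum(t for t in recent if t < 0)
        out.append({
            "drawdown": equity - peak,             # <= 0 by construction
            "avg_trade": sum(recent) / len(recent),
            "win_pct": len(wins) / len(recent),
            "profit_factor": gross_win / gross_loss if gross_loss else float("inf"),
        })
    return out

# example with a made-up trade list and a short window
metrics = rolling_metrics([1.0, -0.5, 2.0, -1.0], window=3)
```

Plotting each key of `metrics` against trade number gives exactly the kind of timeline charts discussed here, and `window` could be the user-set N Daniel suggests below.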

Daniel UK1 - 18-6-2021 at 05:22 AM

Quote: Originally posted by Carl  

My personal preference:

Drawdown
Rolling average trade of the most recent 30 trades
Rolling average Pearson R of the most recent 30 trades
Rolling average profit factor of the most recent 30 trades
Rolling average win percentage of the most recent 30 trades

Thanks!


Yes, good metrics; perhaps the "30" could be an N set by the user.

Cheers




Daniel UK1 - 18-6-2021 at 06:18 AM

Peter, would this new feature not be great to have in PA? It would be an effective way to keep track of systems and stop them when they drop below the historic thresholds for the metrics above (then it would also be great to have average DD levels and max DD).
This is kind of tricky to keep track of otherwise.

Carl - 18-6-2021 at 08:38 AM

Hi Daniel,

I really, really would like to have a TS compiler that could track a series of trading strategies, to be able to select the best system based on recent performance.

Updating historical performance by hand is horrible. The way I do it now is, every three months (or so), to backtest a series of strategies "manually" in TS, take notes and select the best-performing system.
It would be great if this part of the process could be automated as well.

Daniel UK1 - 18-6-2021 at 09:32 AM

Hi Carl, what I do, with the kind help of RandyT's ELD creation for PA and another piece of software, is run a portfolio of all my ES strategies, for example, which automatically exports a trade list per strategy to a folder. Then I use PA-like software that picks up the results per strategy in a portfolio of those same ES strategies. I then export the results in one go as a single Excel file, where I get results per strategy for the period. This I then use for picking my live systems, based on some metrics from an Excel table. It sounds complicated perhaps, but it just takes
one backtest of my portfolio in MC and then one Excel file export, that's it.


Carl - 18-6-2021 at 01:48 PM

Sounds good. Do you think it would be possible in TS as well?
And what kind of other software are you using for this process?

Thanks.

Daniel UK1 - 18-6-2021 at 02:38 PM

Hi Carl, yes sure; if the ELD works in MC I see no reason why it would not also work for TS. PM Randy; he created the ELD, kindly shared it, and should be credited for this, and I think he runs both TS and MC, so perhaps he already has it working for TS. It outputs a workable file format for PA and MSA Portfolio at the same time.
Although, what MC has, and I assume TS doesn't, is the portfolio function, meaning that I have one pre-saved portfolio per market containing all my strategies for that market, and when I want to run my test, I just run the portfolio once. Perhaps TS has something similar you can use to avoid having to run 150 charts :)

The software I use to output the final report of all strategies combined in one Excel sheet is the above-mentioned MSA Portfolio software.
This was the only solution I could find. I would have loved to use PA, which I also have, for this, but it does not have that functionality.

However, I would like at some point to have PA determine automatically which strategies to run, but I have not yet had time to play around with this.

As a side note, one thing I have wanted to do for a very, very long time is to determine from backtests which metrics to use and how often it is best to re-evaluate which systems to run. The only software I found that was able to backtest historic symbol picks based on relative performance was Amibroker.
I initially had this project developed for some portfolio equity strategies I run, in order to backtest symbol picks based on past relative performance. I figured I would be able to run this for backtesting strategy evaluation as well, so instead of equity "symbols" I will use my GSB strategies, and in the end I should be able to backtest different optimisation metrics for strategies and different re-optimisation periods.
It should work in theory.


admin - 19-6-2021 at 01:34 AM

A comment on this for now: the PA export can be made to export real-time data. PA fails if you get a shadow trade that disappears and the trade number goes out of sync.
PA would then see a trade list like:
trade#1
trade#2 // this trade disappears later on
trade#2

or you might get:
trade#1
trade#2
trade#4 // trade#3 disappeared later on

So if you have a report with 1000 trades, the next trade has to be trade 1001. This implies you have the entire historical chart open in TS (a bad idea), and it has to assume no shadow trades come or go.
I need it to ignore the reported trade number and generate the trade number internally.

The whole topic of turning off systems is complex, and I've not met anyone who gets it right; the reason is that it's complex.
In my opinion one factor is market conditions. NG, for example, has a range so small (I'm told; I didn't look myself) that all (day trading) systems will fail at the moment.
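The internal renumbering idea above could be sketched like this (a hypothetical illustration only; the `(reported_no, entry_time, pnl)` tuple layout is made up, not PA's actual format):

```python
def renumber_trades(trades):
    """Assign a clean internal sequence number to each trade, ignoring the
    platform-reported trade number, which can duplicate or skip when
    shadow trades appear or disappear. Trades are ordered by entry time,
    so the internal numbering stays stable even if the report changes."""
    ordered = sorted(trades, key=lambda t: t[1])  # order by entry time
    return [(i + 1, entry, pnl) for i, (_, entry, pnl) in enumerate(ordered)]

# example: reported numbers 1, 2, 2 (a duplicate after a shadow trade vanished)
fixed = renumber_trades([(1, "2021-06-01", 50.0),
                         (2, "2021-06-02", -20.0),
                         (2, "2021-06-03", 10.0)])
```

Keying on entry time rather than the reported number is the essential step; the duplicate "trade#2" entries above come out as 2 and 3.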

Daniel UK1 - 19-6-2021 at 03:46 AM

NG: I shut off all my NG systems at the start of the year; oddly enough, they all started to fail at the same time, very weird.
But as you say, it does not feel like a system failure, more like a bad market condition; we'll see if they pick up at some point.
Last summer to winter, NG was amazing though; then it just died out.

admin - 19-6-2021 at 03:59 AM

Quote: Originally posted by Daniel UK1  
NG: I shut off all my NG systems at the start of the year; oddly enough, they all started to fail at the same time, very weird.
But as you say, it does not feel like a system failure, more like a bad market condition; we'll see if they pick up at some point.
Last summer to winter, NG was amazing though; then it just died out.


The S&P 500 in 2005 was similar: the range was so small that day trading or swing trading didn't make $.

NG is also seasonal; cold snaps bring increased demand.
It would almost make sense to also factor in a group metric,
i.e. all index day-trading systems, or just say NQ, etc.
That makes things more complex again.

SwedenTrader - 19-6-2021 at 07:37 AM

Quote: Originally posted by Daniel UK1  
NG: I shut off all my NG systems at the start of the year; oddly enough, they all started to fail at the same time, very weird.
But as you say, it does not feel like a system failure, more like a bad market condition; we'll see if they pick up at some point.
Last summer to winter, NG was amazing though; then it just died out.


Interesting, the same thing happened to my intraday NG. I had never tried swing on NG, but I did a couple of hours of testing yesterday and came up with an OK result. It is 30% OOS, and after different validations I might put it in simulation for a couple of months. 2010 had a 10K DD, but besides that it felt OK.

For the record, I'm not a GSB user, but I trade some of Peter's systems and follow most threads here as a member of the forum. I saw the same thing in CL: when GSB-sys1cl went flat, the swing systems still did well. I have a bias towards intraday, but sometimes it is good to diversify across timeframes, logics, markets and so on. Also, it is easier to find the more lasting edges in swing than in intraday trading, in other markets too. And I understand that intraday systems in GSB for markets like bonds and currency futures are hard, or the methodology just hasn't been found yet. But if anyone has good intraday systems in those markets, long/short, GSB or not, I would be interested to have a chat.


Daniel UK1 - 19-6-2021 at 12:37 PM

Hey SverigeTrader :), sure, that NG swing system seems interesting enough for me to look into some swing systems on that market; good work there.
I normally only trade intraday (except equities), but the recent good work done by Peter has lured me into some longer-term system development.

I tried intraday bonds but have not managed to get that market to work; I assume it's because of the low volatility. Currency futures I never tried.

Cheers


Carl - 22-6-2021 at 05:07 AM

Quote: Originally posted by Daniel UK1  
NG: I shut off all my NG systems at the start of the year; oddly enough, they all started to fail at the same time, very weird.
But as you say, it does not feel like a system failure, more like a bad market condition; we'll see if they pick up at some point.
Last summer to winter, NG was amazing though; then it just died out.


Hi Daniel,

My NG strats were in trouble as well.
Then I switched the entry signals, so "enter long" became "enter short" and the other way around.

Starting in August 2020, it seems the NG 0900-1430 session has become more countertrend than trend-following.
It looks like the trend-following part is completed before the 0900 session starts; then the countertrend reaction starts in the 0900-1430 session.



NG day trading.jpg - 225kB

REMO755 - 22-6-2021 at 02:22 PM

Hello,

It would be very good, after having the strategy, to take it to an improvement bench where it can be polished through filters, optimization of hours, etc.

It could be a module called the Bank of Improvements; strategies that need a tweak could be improved here, for example by trying different SF or TF, optimizing hours, eliminating days of the week, closing after a time interval, etc.

Daniel UK1 - 22-6-2021 at 02:57 PM

Quote: Originally posted by Carl  
Quote: Originally posted by Daniel UK1  
NG: I shut off all my NG systems at the start of the year; oddly enough, they all started to fail at the same time, very weird.
But as you say, it does not feel like a system failure, more like a bad market condition; we'll see if they pick up at some point.
Last summer to winter, NG was amazing though; then it just died out.


Hi Daniel,

My NG strats were in trouble as well.
Then I switched the entry signals, so "enter long" became "enter short" and the other way around.

Starting in August 2020, it seems the NG 0900-1430 session has become more countertrend than trend-following.
It looks like the trend-following part is completed before the 0900 session starts; then the countertrend reaction starts in the 0900-1430 session.





Hi Carl, thanks for sharing. I would be a bit scared to flip the buy/sell like that :) ...
Perhaps it works, but for how long?
I am more into discarding things when they don't work and starting from scratch, but your strategy looks great if you flip it around, so you are right on that one.

I see that you are using only 15; test 15 and 30, that's what worked best for me on NG.

My NG systems are still OK, but more in fading/pause mode until/if they start to perform again.

S, though, has crashed, and that market is hands-off until new systems have been created, if that is even possible.

Cheers

Daniel UK1 - 22-6-2021 at 03:14 PM

Quote: Originally posted by REMO755  
Hello,

It would be very good, after having the strategy, to take it to an improvement bench where it can be polished through filters, optimization of hours, etc.

It could be a module called the Bank of Improvements; strategies that need a tweak could be improved here, for example by trying different SF or TF, optimizing hours, eliminating days of the week, closing after a time interval, etc.


Hey Remo,

By doing that, would you not be afraid of over-polishing/fitting the strategy to absolute perfection, just to fit the historic price series better than what GSB has already done, if you are not extremely careful?

All the decisions before a system is picked should/would/could (take your pick) be made based on results from a larger universe of systems; once a single system is picked, I would be very careful about changes to that system that improve its performance on the same data that has already been used.

That is, though, just a very personal opinion.

I do get your point about having an area to "work" on your system after it's built; I would perhaps lean more towards a validation/stress-test/stats area, a reject-or-pass area, ish.


REMO755 - 22-6-2021 at 03:27 PM

Quote: Originally posted by Daniel UK1  
Quote: Originally posted by REMO755  
Hello,

It would be very good, after having the strategy, to take it to an improvement bench where it can be polished through filters, optimization of hours, etc.

It could be a module called the Bank of Improvements; strategies that need a tweak could be improved here, for example by trying different SF or TF, optimizing hours, eliminating days of the week, closing after a time interval, etc.


Hey Remo,

By doing that, would you not be afraid of over-polishing/fitting the strategy to absolute perfection, just to fit the historic price series better than what GSB has already done, if you are not extremely careful?

All the decisions before a system is picked should/would/could (take your pick) be made based on results from a larger universe of systems; once a single system is picked, I would be very careful about changes to that system that improve its performance on the same data that has already been used.

That is, though, just a very personal opinion.

I do get your point about having an area to "work" on your system after it's built; I would perhaps lean more towards a validation/stress-test/stats area, a reject-or-pass area, ish.



Logically, before going to the market it will be necessary to do WF, etc.; one thing is not at odds with the other.

Siem - 15-7-2021 at 05:19 AM

I think it would be handy to have a "save state" option in GSB Manager.

It would save the complete state of the program as a file. Part of the filename would be the version used, so it could be "mycoolname.gsbstate10.0.26.69".
Then, if you later want to look back at some settings / details / graphs of a system you made (or maybe its siblings), you can use GSBManager.62.690.exe* to open that state; GSB Manager is restored to the state it was in when you used the "save state" option, and you can do what you want.
* or whatever GSB Manager version is needed to open the file, since older versions are in the GSB folder anyway.

Daniel UK1 - 15-7-2021 at 04:47 PM

Quote: Originally posted by Siem  
I think it would be handy to have a "save state" option in GSB Manager.

It would save the complete state of the program as a file. Part of the filename would be the version used, so it could be "mycoolname.gsbstate10.0.26.69".
Then, if you later want to look back at some settings / details / graphs of a system you made (or maybe its siblings), you can use GSBManager.62.690.exe* to open that state; GSB Manager is restored to the state it was in when you used the "save state" option, and you can do what you want.
* or whatever GSB Manager version is needed to open the file, since older versions are in the GSB folder anyway.


Hi Siem, if you choose to save an opt setting and in the future open that opt setting, without having made active the feature that auto-enables new settings, your saved opt setting "should" open exactly as it was when it was saved. Before, this feature was not there, so all new features were always turned on for new managers, creating a major headache for saved older opt settings, since they could not be opened as they were saved; but now it should not be like that.

REMO755 - 20-7-2021 at 05:32 PM

Hello,

Would it be possible to include several ranges of hours, that is, to present several options so that the algorithm can choose the best range?


Example:

Range 1 ---- 0900-1500
Range 2 -----1000-1500
Range 3 -----1100-1500


admin - 20-7-2021 at 05:47 PM

Quote: Originally posted by REMO755  
Hello,

Would it be possible to include several ranges of hours, that is, to present several options so that the algorithm can choose the best range?


Example:

Range 1 ---- 0900-1500
Range 2 -----1000-1500
Range 3 -----1100-1500



We have that in GSB automation recently.
A new build is in the pcloud folder, but another build is coming in a few hours; the current one has not been publicly released.
There is a new spreadsheet too; it has the exit-mode sort in it.

realtive.png - 29kB times.png - 267kB

I'm hoping to have a genetically chosen variation of this in GSB in the next month or so.

REMO755 - 3-8-2021 at 04:11 PM

Quote: Originally posted by admin  
Quote: Originally posted by REMO755  
Hello,

Would it be possible to include several ranges of hours, that is, to present several options and that the algorithm can choose the best range?


Example:

Range 1 ---- 0900-1500
Range 2 -----1000-1500
Range 3 -----1100-1500



We have that in GSB Automation as of recently.
A new build is in the pCloud folder, but another build is coming in a few hours.
This build has not been publicly released.
There is a new spreadsheet too;
it has an exit mode sort in it.



I'm hoping to have a genetically chosen variation of this in GSB in the next month or so.


What can be done in the spreadsheet?


admin - 3-8-2021 at 05:24 PM

@remo755
You are talking about the times on the left of GSB, i.e. the allowed entry times?
I assume not the session time.
If so, this can be done, but has not yet been done.
GSB needs modification, and so does the spreadsheet.
It's rare that the best time is not the full session time, though the last 30 minutes, and sometimes 60 minutes, can be skipped.

I like to leave the last 60 minutes in, as it still gives more trades for in-sample and out-of-sample testing.

Edit Collection

zdenekt - 24-9-2021 at 03:43 AM

Quote: Originally posted by REMO755  
Hello,

The macros are fine, the methodology is fine.

The freedom to choose options is necessary and valued on my part.

Right now the freedom of choice in the Software can be improved, I do not take any credit for it, I am very grateful for this machine since it makes my work much easier and I will recommend it where the option is presented, do not hesitate.

Simple example of freedom of choice:



Hi,
I am also voting for something like this.
This would save a lot of time when setting indicators.

Edit Collection.jpg - 61kB

admin - 24-9-2021 at 06:34 PM

Hi Zdenekt, I'm not clear what the issue is.
We have all those options; just the formatting differs.
With the current methodology, the setting of indicators is done by GSB, with the exception of the secondary filter, which is a human choice.

Ability to see larger chart

ChuckNZ - 3-1-2023 at 03:51 PM

I have read every entry in the "wish list" forum and apologize if I missed someone also asking for this.

I think it would be fantastic if I could right-click on the chart and have the option of creating a larger chart in a separate window or tab or even in my browser.

I hope this request will be considered and that it is something easy to do.

admin - 3-1-2023 at 03:58 PM

@ChuckNZ,
You can make it bigger by closing the other windows,
i.e. Ctrl-R etc.
See these:


view.png - 58kB

Forced output filter suggestion

REMO755 - 25-12-2023 at 05:06 PM

Hello,

Happy Holidays!!

Is it possible to implement this suggestion in GSB?

We set an exit time different from the entry Times:

Example:

Template: MOC
Hours: 10:00 - 14:00
Mandatory exit time: 15:00

Would it be possible to put the forced exit time as a filter?

admin - 25-12-2023 at 05:30 PM

@remo
There may be a translation issue in English with your request, so I may not have interpreted this correctly.
This can be done already.

Entry time allowed is 10:00 to 14:00, but the MOC exit and session end at 15:00. Is that what you want?

REMO755 - 26-12-2023 at 02:51 AM

No,
I want to exit before the MOC.


Template: MOC 17:00
Hours: 10:00 - 14:00
Mandatory exit time: 15:00

admin - 27-12-2023 at 01:19 AM

@remo, please translate this to English. This can be done, but I need to ask the programmer how. You would likely make two session times: one 10:00 to 15:00 with MOC at 15:00, and a second session from 15:00 to 17:00. I'm not sure if that's what you want. (Times, for entries to be allowed, would be 10:00 to 14:00.)

REMO755 - 27-12-2023 at 11:50 AM

Hello,
The simplest example:
Regular-session template.
Entry hours: 10:00 to 14:00.
Mandatory exit time: 16:00.
Understood now? It only needs one time setting for the forced exit.

REMO755 - 27-12-2023 at 12:10 PM

I am sending a screenshot for a better understanding.

You can choose an exit time; currently you are forced to exit at the last bar of the session.



Do we exit at the last bar of the session, or do we exit at another time? Do you understand?

The best thing would be to be able to do SF with an exit time, and then the algorithm would tell us the best exit time. Do we exit at the last bar of the session or at another time?

EXIT TIME.JPG - 52kB

admin - 27-12-2023 at 06:50 PM

@remo,
Getting the exit time by GA is not a great idea, in that the GA will choose a spectrum of exit times, and you won't know the best time.
What you want can be done, but not by GA.
See times of day: https://trademaid.info/gsbhelp/Exits1.html

I could add this into GSB Automation to shift the time by GA, but I think it's not something I would personally do.
Here is why:
the best session exit time is normally very clear. It's where there is a big spike in volume and the range reduces.
Please see this video:
https://www.youtube.com/watch?v=NFC7ego_Y70

What's more important is the session start time.
While the video will typically give you the best start and end times, there are some exceptions, e.g. gold has two legitimate start times. This could also apply to the Dax (though I'm not sure).





REMO755 - 15-1-2024 at 03:26 PM

Hello Peter,

Are there any updates planned for GSB?
We need to be able to create systems with patterns, on other assets, currencies, etc.

admin - 15-1-2024 at 03:47 PM

@remo, pattern filters were added a few weeks ago, but limited initial testing showed they did not work so well. All that needs to change is the types of patterns we are looking for. Short term I don't have the headspace to do this yet, but I'm open to ideas. It's been planned for years, and now we have the architecture to do it; we just need to do the finer details.

Karish - 27-2-2024 at 04:45 PM

Wish list:

• A Discord server (it's free to create): we could chat in real time, have many channels for different topics, closed channels for updates, etc. We could even talk with our mics on, if desired.

• How can one know which indicators do not sync/match with TS?

• Is it only me? When performing a multi-threaded WF on a strategy, it won't utilize even 10% of my RAM/CPU. Any way to increase that? Or why can't we utilize cloud power to perform WFs faster?

• A zoom feature just like in EWFO, so we can see everything bigger/smaller in GSB.

• A more user-friendly GUI and explanations.

• More indicators, methods and building blocks for GSB, to create even more unique strategies.

That's all for now, thanks.

admin - 27-2-2024 at 05:21 PM

@karish
1. Good idea, but I don't have the resources to do this. Can't do more...
2. https://trademaid.info/gsbhelp/GSBDiagnostics.html
3. Under app settings you can set how many cores are used, and how many WFs are done at once. Too many WFs at once gets painfully slow, but maxing the cores is a good idea.
4. You mean the font? You can adjust font scaling in Windows.
5. Everything should be in the docs. And there is the box down the bottom left that explains things.
6. Planned, with constant development done every week. No shortage of ideas, just limited programmer resources.

Karish - 27-2-2024 at 05:42 PM

Quote: Originally posted by admin  
@karish
1. Good idea, but I don't have the resources to do this. Can't do more...
2. https://trademaid.info/gsbhelp/GSBDiagnostics.html
3. Under app settings you can set how many cores are used, and how many WFs are done at once. Too many WFs at once gets painfully slow, but maxing the cores is a good idea.
4. You mean the font? You can adjust font scaling in Windows.
5. Everything should be in the docs. And there is the box down the bottom left that explains things.
6. Planned, with constant development done every week. No shortage of ideas, just limited programmer resources.


1) I can open a Discord server for GSB and set you as an admin, as well as publish a link here for other users to join, only if you agree to this of course. I can open and sort out some channels and topics so it would be a nice, ready-to-use environment.

2) Thanks, very cool! But is there any overall knowledge base of known indicators that won't match, which you already know about and are working on? That is what I was trying to refer to.

3) My CPU has 16 cores. I already set everything related to cores to 16 (in the Standalone, Worker and Manager), but CPU/RAM still won't go over ~10%.

4) The way it's done in EWFO would be great; until then I should just use a different screen resolution, I guess.

5) Sometimes it doesn't, and the hyperlink leads to docs where some things are not explained in depth, or to old information related to older builds.

6) :)


admin - 27-2-2024 at 06:26 PM

Hi Karish,
I appreciate your ideas and efforts.
1) If we open a Discord, it means dilution and duplication of work with the forum. I see that as a significant negative.
Unless there was a lot of user buy-in to the idea, I would not do it.
2) All indicators match on TS. Many months of work went into this, and I can now see why most other system builders will be bad at matching results: there are so many bugs / things to fix.
NT has match issues, but that will be worked on more in time.
To clarify, there is an issue with the Hull filter when 24-hour bars are used (not when the day session is used), and there is an error in the GSB break-even stop (not the TS break-even stop).
Both these bugs are on the short list to be fixed.


3) You can run more WF jobs; the default is 4. You can also WF your jobs to the cloud.
However, if you create families from your top systems in Favorites A, B, C or D, there should not be many to WF.
4) EWFO badly needed this, as font issues were a massive problem there. They have not been a significant issue in GSB.
5) If there is anything you want updated, I will try to assist. It's the methodology that's the critical thing, and it's good to follow the latest.

Karish - 27-2-2024 at 06:38 PM

Quote: Originally posted by admin  
Hi Karish,
I appreciate your ideas and efforts.
1) If we open a Discord, it means dilution and duplication of work with the forum. I see that as a significant negative.
Unless there was a lot of user buy-in to the idea, I would not do it.
2) All indicators match on TS. Many months of work went into this, and I can now see why most other system builders will be bad at matching results: there are so many bugs / things to fix.
NT has match issues, but that will be worked on more in time.
To clarify, there is an issue with the Hull filter when 24-hour bars are used (not when the day session is used), and there is an error in the GSB break-even stop (not the TS break-even stop).
Both these bugs are on the short list to be fixed.


3) You can run more WF jobs; the default is 4. You can also WF your jobs to the cloud.
However, if you create families from your top systems in Favorites A, B, C or D, there should not be many to WF.
4) EWFO badly needed this, as font issues were a massive problem there. They have not been a significant issue in GSB.
5) If there is anything you want updated, I will try to assist. It's the methodology that's the critical thing, and it's good to follow the latest.


1) I hear you; I agree.

2) Awesome! Yeah, famous builders out there advertise their software as a great advantage, and the next thing you know, you find yourself as an unpaid beta tester for a beautiful UI with buggy, mismatching results between the software and the trading platform in question.

3) I'll try to figure it out, thanks.

4) That's ok, there's always a workaround.

5) On it! :cool:

Karish - 28-2-2024 at 06:21 AM

Feature Suggestion:

I was reading this:
https://trademaid.info/gsbhelp/BuildingNasdaq23hrX55daySwing...
at paragraph 10, "We want to try one indicator at a time".

The same thing can be found in another article,
under "How to test each indicator - Now we will comment out the second and third indicator":
https://trademaid.info/gsbhelp/Advancedoptions.html

I had this idea quite some time ago, and wanted to share it:


Imagine a feature we would call the "Strategy Simplifier";
it would work as follows:
• Just as we have the ability to "Right-Click" >> "WF", imagine we had a button for this too.
• When we have a developed strategy and we "Right-Click" >> "Strategy Simplifier",
GSB will try to remove one rule at a time with all the others enabled.
Example (like the one in the docs article, but different):

Quote:
# Steps 1-4: disable one rule at a time (4 combinations), e.g.
//Rule #1,
Rule #2,
Rule #3,
Rule #4,
then re-test the strategy on the same data period as the original strategy and record the results behind the scenes.

# Steps 5-10: disable two rules at a time (6 combinations), e.g.
//Rule #1,
//Rule #2,
Rule #3,
Rule #4,
then re-test and record the results as above.

# Steps 11-14: disable three rules, keeping one enabled (4 combinations), e.g.
//Rule #1,
//Rule #2,
//Rule #3,
Rule #4,
then re-test and record the results as above.



As you can see, GSB tries to remove one or more rules from the original strategy in order to simplify it.

It will first try all possible combinations (as described in the steps above),
then select the best result in order to simplify the strategy.

GSB would determine whether the strategy got simpler and better by comparing the original stats like NP/DD etc., or whatever the user decides via some settings fields:
if SimplifiedStrategy (NP/DD) > OriginalStrategy (NP/DD), then simplify.
Or any other rule, or multiple rules.



We all know that the less complex the strategy, the better.
I hope this will be considered as a future feature; I think it would be an awesome thing to have. Thanks for reading. :)
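For illustration, the enumeration behind this "Strategy Simplifier" idea can be sketched in a few lines of Python. The `backtest` callback and the metric are hypothetical stand-ins for GSB's re-test step and the user's NP/DD comparison; none of these names are real GSB APIs.

```python
from itertools import combinations

RULES = ["Rule1", "Rule2", "Rule3", "Rule4"]

def rule_subsets(rules):
    """Yield every proper, non-empty subset of rules to re-test.

    The full rule set (the original strategy) and the empty set are
    skipped, matching steps 1-14 above: C(4,3) + C(4,2) + C(4,1) = 14 runs.
    """
    for keep in range(len(rules) - 1, 0, -1):
        for subset in combinations(rules, keep):
            yield subset

def simplify(rules, backtest, metric):
    """Re-test each subset and return the best rule set by `metric`.

    `backtest` is a hypothetical callback: run the strategy with only
    the given rules enabled and return a stats dict (e.g. NP/DD).
    """
    best_rules = tuple(rules)
    best_score = metric(backtest(best_rules))
    for subset in rule_subsets(rules):
        score = metric(backtest(subset))
        if score > best_score:  # simplify only on strict improvement
            best_rules, best_score = subset, score
    return best_rules

subsets = list(rule_subsets(RULES))
print(len(subsets))  # 14 re-tests for a 4-rule strategy
```

The same enumeration scales to any rule count, though the run count grows as 2^n, which is one practical argument for capping how many rules are dropped at once.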


Karish - 28-2-2024 at 08:00 AM

Another feature suggestion:

"Strategy's Optimizations Profile" - check the strategy's surrounding parameters to construct a profile.

We would have a button for this:
when we have a developed strategy, we "Right-Click" >> "Strategy's Optimizations Profile".

This feature would do the following:

Imagine we have a strategy with these rules & parameters, as a simple example:
Quote:
inputs:
Length(20),
NumDev(2),
MA_Length(50);

variables:
UpperBand(0),
LowerBand(0),
MA(0);

UpperBand = BollingerBand(Close, Length, NumDev);
LowerBand = BollingerBand(Close, Length, -NumDev);
MA = Average(Close, MA_Length);

if Close crosses above MA and Close crosses above UpperBand then
buy next bar at market;
if Close crosses below MA and Close crosses below LowerBand then
sellshort next bar at market;


Notice we have 3 parameters within this strategy example.

Just as with the WF feature, the user could pre-define the area via "% Nearest" = 50;
GSB would then optimize all parameters at once with +25% and -25% (% Nearest 50).

The steps for this test would use the settings already inside GSB (GSB GUI's right window >> Strategy >> "Params.").

For this example, our optimization fields would be the following:
Quote:
Length(15..25:1),
NumDev(1.5..2.5:0.5),
MA_Length(37..63:1);


The optimization method would ALWAYS be exhaustive, because we want to cover all possible parameter combinations of the strategy in order to form the profile.

So for this example we get 891 possible combinations, i.e. runs,
on the exact same original strategy data;
this should be a very fast test for GSB and greatly benefit us with more knowledge.

When the test finishes, we would be presented with all the information of the profile (image attached at the bottom).
*Of course the whole process would be done by GSB behind the scenes; there is no need to show the histogram chart inside the GUI.

Results:
Quote:
Out of total 891 iterations
Profitable iterations: 787
Losing iterations: 104
% of profitable iterations (Profitability Profile): 88.32%


This is what we need.
Now, with the user's pre-defined rules inside this feature's settings fields, GSB can determine whether this is a valid result or not.
The default and simplest rules:
Quote:
"% of profitable iterations" > 90%,
so we can be sure that at least 90% of the iterations within our "50% Nearest" range are profitable.

&

"Maximum degradation from the highest iteration" < 30%,
so we can be sure that all the iterations within our "50% Nearest" range are within 30% of the highest iteration.


Other rules could be added later, via further updates and/or suggestions.

With this feature we can really determine whether we have curve-fitted our strategy to the data, and recognize where we stand with the strategy in terms of parameter sensitivity.

Of course, we could also perform the same test alongside other tests, like building strategies with multiple time frames or symbols or both; this could benefit us greatly.

I hope this will be considered for addition to GSB in the future. Thanks for reading :cool:.

Example of a Profile Histogram.png - 326kB
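As a sanity check on the numbers in the post, the "% Nearest 50" grid and the profile summary can be sketched like this. The grid values come from the post's example inputs Length(20), NumDev(2), MA_Length(50); the `profile` helper and its field names are my own illustration, not a GSB feature.

```python
from itertools import product

# The "+25% / -25% (% Nearest 50)" sweep around Length(20), NumDev(2),
# MA_Length(50), using the step sizes quoted in the post:
grid = {
    "Length":    list(range(15, 26)),   # 15..25 step 1   -> 11 values
    "NumDev":    [1.5, 2.0, 2.5],       # 1.5..2.5 step 0.5 -> 3 values
    "MA_Length": list(range(37, 64)),   # 37..63 step 1   -> 27 values
}

combos = list(product(*grid.values()))
print(len(combos))  # 11 * 3 * 27 = 891 exhaustive runs, matching the post

def profile(net_profits):
    """Summarize a profitability profile from per-iteration net profits."""
    wins = sum(1 for p in net_profits if p > 0)
    return {
        "iterations": len(net_profits),
        "profitable": wins,
        "losing": len(net_profits) - wins,
        "pct_profitable": round(100.0 * wins / len(net_profits), 2),
    }
```

With 787 profitable runs out of 891, `profile` reports 88.33% (the post's 88.32% appears to truncate rather than round).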

admin - 28-2-2024 at 07:14 PM

@Karish
I've asked some of the most knowledgeable users about your comment; awaiting feedback.

I don't feel strongly on this, but your idea seems problematic. What if you have made the world's best trading system, but it has a strong bell curve where it works well in a certain range
and works like crap outside of that range?

Regardless, I feel there is no substitute as good as a manual optimization of every input in a GSB system.
This will tell you far more than the results you show, and my long-term hope is that we will have AI to interpret results.
We do have WF parameter analysis, which I hope to expand on over time.


High level:
there are two totally different approaches GSB users take (and likely every shade of grey in between, and other methods again that I don't know about).
1) Build lots of systems, look at what works over time, filter out good systems from bad, etc.
I suspect this is the direction many users go.
2) Find a good system, test every logical module, manually optimize every input, work out ranges, check for bad logic or logic that in effect does nothing,
then optimize the entire system with what you have learnt, tweaked, added or removed.
For example, if some sort of exit mode improves results a little but has changed only 2 of 1000 trades, the module is not valid and should be removed.


I am very strongly of the opinion that 2) is best for me, but it requires a different skill set to option 1).
Building the recently released GSB 23-hour, 5.5-day-a-week system,
https://trademaid.info/gsbhelp/GSBSWING60-23NQ.html
I spent weeks of work on it; I learnt a lot and can confirm or reject conclusions that we came to via many tests with GSB Automation.





Karish - 29-2-2024 at 03:29 AM

Thanks for replying back :).

I hear you, but I still think that both
"Strategy Simplifier" & "Strategy's Optimizations Profile"
are needed.

I would lean more towards the "Strategy's Optimizations Profile".
Bob Pardo talks about it, here and inside the course, but it is explained perfectly in a short video here:
https://www.buildingrobuststrategiesmasterclass.com/overcome...

and the
System Parameter Permutation (SPP)
method was originally described by Dave Walton of StatisTrade, and is available in the paper here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2423187

Both essentially do the same thing:
creating an optimization profile of the system in question,
running all available optimization combinations (limited by the "% Nearest" set by the user).

Then, if we take an automated approach:
I already explained 2 rules that would determine whether the system passes or fails the test, and more rules can be added in the future.
The main rules are:

"% of profitable iterations" > 90%,
so we can be sure that at least 90% of the iterations within our "50% Nearest" range are profitable;

that means at least 90% of all the iterations should make at least $1.

&

"Maximum degradation from the highest iteration" < 30%,
so we can be sure that all the iterations within our "50% Nearest" range are within 30% of the highest iteration;

that means we take the highest profitable result of all our iterations and the lowest profitable result of all our iterations and compare them; if the degradation from the highest profitable result to the lowest is more than 30%, that is a failure.
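These two pass/fail rules could be checked mechanically, for example like this Python sketch. The degradation definition follows my reading of the post (highest vs. lowest profitable iteration); the function name and thresholds are illustrative, not anything that exists in GSB.

```python
def passes_profile(net_profits, min_pct_profitable=90.0, max_degradation_pct=30.0):
    """Apply the two default pass/fail rules to per-iteration net profits.

    Rule 1: at least min_pct_profitable percent of iterations are profitable.
    Rule 2: the lowest profitable iteration sits within max_degradation_pct
            of the highest profitable iteration.
    """
    profitable = [p for p in net_profits if p > 0]
    pct = 100.0 * len(profitable) / len(net_profits)
    if pct < min_pct_profitable:
        return False  # too many losing parameter sets near the chosen one
    high, low = max(profitable), min(profitable)
    degradation = 100.0 * (high - low) / high
    return degradation < max_degradation_pct
```

For example, iteration profits of [100, 90, 80] pass (100% profitable, 20% degradation), while [100, 60, 90] fail on the 40% degradation between the highest and lowest profitable runs.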


What we ultimately want from this test is to see whether we are in the right spot with our strategy and its parameters.

If we fail this test, it means our parameters are not stable around our current original parameter set.

Even if we run a WF on that strategy without this test, the WF will show a low parameter-stability result, and that's what we don't want.

But if we perform this test and pass it,
that means our parameters are stable ("50% Nearest");

then, if we perform a WF with "50% Nearest" as well,
we can be confident of a high parameter-stability result.


The "Strategy's Optimizations Profile" is, overall, much like using the "Families" feature in GSB.
But rather than waiting for 50,000 or more systems, looping that procedure and selecting the family with the highest member count (considered a robust set of rules with different parameters),
we could, if the "Strategy's Optimizations Profile" were implemented, do this:
right-click on the system in question >> perform "Strategy's Optimizations Profile",
assuming we use "50% Nearest"
and the 2 rules:
"% of profitable iterations" > 90% & "Maximum degradation from the highest iteration" < 30%,
and we can tell whether the system is robust enough.
If we want to be absolutely sure, we can play with the settings or, just a thought, increase the "% Nearest" to cover a wider range and make sure we have a robust strategy.

This feature is more direct and focused on the question.
I have attached some visual images.


That's all.
I think it would be a good addition;
thanks for reading.



visual Curves example.png - 63kB

visual Histogram example of pass and fail.png - 93kB

admin - 29-2-2024 at 06:05 PM

@Karish
thanks for this and im looking into it.

If this feature is built, it could also be added into trademaid walkforard optimizer
I have numerous thoughts.
we dont have to do exhaustive wf. We currently have random space. But brute is fine.

Bobs example here is clear. However its problematic that he shows 2 inputs when we could have 4 to 10 inputs

does video 3 exist?

you should explore paramater analysis in GSB




2024-03-01_10-48-07_2d.png - 186kB

admin - 29-2-2024 at 06:32 PM

Another comment that applies to your proposal (and existing ideas like testing on other markets, adding random noise, etc.):
all of this can be objectively tested.
You get your 300 systems from A, or systems from B,
take the top 1/3 and the bottom 1/3 of them,
test out of sample according to the criteria you have assumed to work, then compare the out-of-sample results.
This is, in my opinion, the superb strength of GSB.

Karish - 1-3-2024 at 04:10 AM

@admin,

Peter, regarding the screenshot you posted covering 2 input parameters on X and Y:
that was just an illustration, shown as a 3D view to be more presentable.

The main thing Bob talks about is what I suggested, and he shows it in a 2D view:
you take all the optimization iterations, limited by a % Nearest of 50 for example,
form an "optimization profile",
then perform some analysis on the results of that optimization profile.

You want to see gradual results, not random profits and losses; that would mean the parameter space of the strategy is robust and not due to random chance. That's the most important part ("% of profitable iterations" > 90%),

as well as the results being somewhat in line with each other ("Maximum degradation from the highest iteration" < 30%).

About implementing this in EWFO: that's an idea, but I would love to see it in GSB most of all, because transitioning the strategy and its files into EWFO would cost time; it is better to just have it under our right-click inside GSB.

Right-click on a strategy, choose "Strategy's Optimizations Profile",
and GSB performs everything behind the scenes. Or maybe we could have another tab in the center of the GUI, just as we can view different charts, where we could see the results of the strategy's Optimizations Profile in histogram form, just like in the image I attached above.

I would say this is a true must-have feature; it can really increase our confidence in a strategy, and most likely increase the likelihood of the strategy performing well in a WF test, and even the likelihood of the WF performing well into the future.


BTW:
if we could also have the "Strategy Simplifier" feature, we could run the "Strategy Simplifier" test first to see if GSB can simplify our strategy by removing rules that don't contribute much,
and then run the "Strategy's Optimizations Profile"; the profile would run much faster with the unnecessary rules removed. That's also a creative way to think about it.


Anyway, this is powerful; I hope to see it in GSB in the future :).
It is all up to you, my friend.

I have attached an example with some extra fields; something close to this would be great to see inside GSB when implemented.



perfect example with more fields.png - 49kB

Karish - 1-3-2024 at 12:44 PM

Some more small suggestions:

• For some reason, maybe it is a C# thing, the GUI is quite slow when there are many rows (strategies) inside the Unique Systems tab and I click Create Family. I also noticed that I must remove families before re-running the search (play button), because otherwise the GUI just freezes and makes me lose hope every time, so I restart GSB with all my progress lost. This should somehow get fixed.

• The number shown next to the Unique Systems (123) and Walk-Forward (123) tabs is not updated when a strategy is deleted from the tab.

• It would be cool to have "Entry Modes", "Weights Mode", "Entry Level Mode", "EntryLevelValue" etc. as an optional use / don't-use list with min/max/step, just like the "Built-In Indicators" form we already have. Let the software decide what's good based on genetics etc., rather than having them fixed.

• "# of Filters": when set to 1, for example, it is forced upon all systems. It would be cool if it weren't; some systems would have it and some wouldn't. Again, more freedom: let the genetics spit out their results, be it 0, 1, 2, etc.

• "Stops": same thing, it is currently forced on all systems.

• "Targets": also the same, it is currently forced on all systems.

• When using "Nth", it seems the rules we set inside "Performance Filters" don't hold any merit, as if the NoTrd period of the "Nth" doesn't take any of its results into account.


__
That's all so far :),
thanks for reading.
