GSB Forums

Subject: General support questions
Daniel UK1
Posted on 4-12-2019 at 06:06 AM


Thanks Peter,
Yes, WinDiff is a good comparison tool, but I don't think it is the solution here.

To use it, I would first need to know the original settings my saved opt settings were derived from, and if I had those, I don't believe I would need WinDiff at all.

None of my saved opt settings is saved with the Manager version number. I always believed that whatever was saved could also be loaded in the future, no matter which version it was loaded into. Now it seems that is not the case.

I most likely have hundreds of saved opt settings, since I document and save every test setting before arriving at the few settings I build live systems on. And I have these saved from early 2019...

So WinDiff will not solve this. I appreciate the solution you sent, but it would require me to know which GSB version was used for each specific saved opt setting.

What about a GSB mode that has ALL settings/features set to FALSE/0? In this mode you could load a saved opt setting from any previous version, and I assume I could then be sure that only the settings actually present in the saved opt setting would be applied.
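
To illustrate the behaviour I mean, here is a rough Python sketch (the setting names are made up for illustration, they are not real GSB settings):

    # A minimal sketch of the proposed load mode. Setting names are
    # hypothetical; GSB's real opt-settings schema is not public.
    ALL_OFF_DEFAULTS = {
        "UseSecondaryFilter": False,  # every known feature flag defaults
        "UseWfCurParams": False,      # to False, every numeric knob to 0
        "NthDayMode": 0,
    }

    def load_opt_settings(saved: dict) -> dict:
        """Apply a saved opt setting on top of an all-off baseline, so
        only keys actually present in the saved file can turn anything on."""
        merged = dict(ALL_OFF_DEFAULTS)  # newer settings stay off/0 ...
        merged.update(saved)             # ... unless the old file sets them
        return merged

    # Example: a file saved by an older build that predates UseWfCurParams
    old_file = {"UseSecondaryFilter": True}
    print(load_opt_settings(old_file))
    # {'UseSecondaryFilter': True, 'UseWfCurParams': False, 'NthDayMode': 0}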

Thanks for treating this issue as very important; in my book it is the most important one to solve right now, not only going forward, but also so that all users can make use of their previously saved opt settings.
Apart from this I like GSB a lot, and it is an important part of my process.

Daniel


admin

Posted on 4-12-2019 at 07:06 PM


Quote: Originally posted by Daniel UK1
[...] What about a GSB mode that has ALL settings/features set to FALSE/0? In this mode you could load a saved opt setting from any previous version, and I assume I could then be sure that only the settings actually present in the saved opt setting would be applied.

Having all new settings default to false might fix this problem completely.
I will look into this.


Sten

Posted on 5-12-2019 at 12:22 PM


Hi Peter,

Here are a couple of fresh bugs in GSB:

1. I set "Macros / On Opt. Completed" == true and run an optimization. When I pause the optimization, the macro starts automatically, which is totally unexpected. When I manually terminate the optimization by pressing the Terminate button, the macro also starts automatically, which is counter-intuitive since I cancelled everything.

I expect GSB not to start any macro automatically when the user pauses or terminates an optimization.
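
To make the expected behaviour explicit, here is a small Python sketch of the dispatch logic I have in mind (the names are mine, not GSB internals):

    from enum import Enum, auto

    class OptState(Enum):
        COMPLETED = auto()   # optimization ran to its natural end
        PAUSED = auto()      # user pressed Pause
        TERMINATED = auto()  # user pressed Terminate

    def run_macro(name: str):
        print(f"running macro {name}")

    def maybe_run_on_completed_macro(state: OptState, macro_enabled: bool):
        # Fire the "On Opt. Completed" macro only on a natural finish;
        # a pause or a manual terminate should never trigger it.
        if macro_enabled and state is OptState.COMPLETED:
            run_macro("OnOptCompleted")

    maybe_run_on_completed_macro(OptState.TERMINATED, macro_enabled=True)  # no macro fires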


2. Another issue is not strictly a bug, but a usability issue. I open the "Price Data" dialog and define a new price data series by cloning an existing one:

01.png - 17kB

GSB Manager makes a clone of the price series, but sets focus to the "[new] contains 0 items" tree instead of selecting the newly cloned price series:

02.png - 19kB

I then have to scroll up the list to find the price series I have just cloned, which takes time and effort.

I expect GSB Manager to automatically set focus to the newly cloned price series:
04.png - 12kB
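
In code terms the fix is tiny; a self-contained toy model of the tree (not GSB's actual code) to show what I mean:

    class Tree:
        """Minimal stand-in for the Price Data tree (illustration only)."""
        def __init__(self):
            self.items = ["[new] contains 0 items"]
            self.selected = 0

        def clone(self, index: int) -> int:
            self.items.append(self.items[index] + " (clone)")
            return len(self.items) - 1

    tree = Tree()
    tree.items.append("CL 30 min")
    new_index = tree.clone(1)
    tree.selected = new_index          # focus the clone, not "[new]"
    print(tree.items[tree.selected])   # -> CL 30 min (clone)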

Since I am using IQFeed data I need to enter all price series manually, and for every ticker I need to make something like 25, 26, 27, 28, 29, 30, 31, 32, 33, 34 and 35 min price series, so I do a lot of cloning operations. This wrongly set focus makes my life harder than it should be.


P.S. While creating the screenshots for this post I cloned a price series and the "Price Data" dialog went into a crazy state: the CL tree became duplicated, the SoyB tree contained some wrong items, etc. I had to reopen the "Price Data" dialog. So there are some bugs here too:
03.png - 32kB




admin

Posted on 5-12-2019 at 03:19 PM


Quote: Originally posted by Sten
1. I set "Macros / On Opt. Completed" == true and run an optimization. When I pause the optimization, the macro starts automatically [...] I expect GSB not to start any macro automatically when the user pauses or terminates an optimization.

It's no big deal, but this post should go here:
https://trademaid.info/forum/viewthread.php?tid=6

Quote: Originally posted by Sten
2. [the "Price Data" cloning and focus issues above]

Thanks for that. We can fix those issues in future builds.


Sten

Posted on 9-12-2019 at 10:58 AM


I'm currently watching the video "Genetic system Builder, how to make a Crude Oil Futures day trading system" (probably for the 3rd or 4th time).

In the video Peter mentions that the CL system uses 30 min bars plus 60 min secondary data. The problem is that in current builds there is no "Secondary Data" parameter; at least I am not able to find it.

So how do I supply secondary data to GSB?





secondary_data.png - 741kB


Carl

Posted on 9-12-2019 at 11:40 AM


Quote: Originally posted by Sten
[...] So how do I supply secondary data to GSB?


Hi Sten,

Just go to Tools - Price Data and select one of the price items in the left window.
Then click the second line and choose the secondary data stream.
Then click the third line to add a third data stream, and so on.



admin

Posted on 9-12-2019 at 07:32 PM


Quote: Originally posted by Sten
[...] So how do I supply secondary data to GSB?

Hi Sten,
just keep in mind this video was made well before the current methodology. What I'm reasonably confident in is that 30 min, with HO, RB and NG as data2, will be the best combination. I'm not confident that adding 60 min CL as data2 is going to help. All of this needs to be tested using the method in the first video (Nov 2019):
https://trademaid.info/gsbhelp/Videos.html

I would be interested in the results of this, as would other users.


Sten

Posted on 11-12-2019 at 10:57 AM


Quote: Originally posted by admin
[...] All of this needs to be tested using the method in the first video (Nov 2019). I would be interested in the results of this, as would other users.


Hi Peter! Yes, you are probably right. I tried to generate systems for NG using:

data1: 29,30,31 min bars
data2: 58,60,62 min bars

And the results were less than satisfying. I then tried to use just:

data1: 30 min bars
data2: 60 min bars

Slightly better, but still far from ideal: GSB generates systems that almost do not trade for 4-5 years between 2010 and 2015.

I am now trying to build systems for NG using this setup:

data1: NG, 30 min
data2: CL, 30 min
data3: HO, 30 min
data4: RB, 30 min

and verify on the same series plus some random ticks added (see the sketch below). Let's see if this produces better systems.
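
By "random ticks" I mean a small random perturbation of each bar before re-verifying. Roughly like this Python sketch (my own illustration of the idea, not GSB's actual noise test):

    import random

    def add_random_ticks(bars, tick_size=0.001, max_ticks=2, seed=42):
        """Perturb OHLC bars by up to max_ticks ticks either way, to check
        that systems survive small changes in the price series."""
        rng = random.Random(seed)
        noisy = []
        for o, h, l, c in bars:
            jitter = lambda p: p + rng.randint(-max_ticks, max_ticks) * tick_size
            no, nh, nl, nc = jitter(o), jitter(h), jitter(l), jitter(c)
            nh = max(no, nh, nl, nc)  # keep OHLC consistent
            nl = min(no, nh, nl, nc)  # after the perturbation
            noisy.append((no, nh, nl, nc))
        return noisy

    bars = [(2.50, 2.55, 2.48, 2.52), (2.52, 2.58, 2.51, 2.57)]  # NG-like prices
    print(add_random_ticks(bars))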


Daniel UK1

Posted on 11-12-2019 at 02:53 PM


Hi Sten, in my own development work to find the best settings for NG, I have found that a setup using only the NG market works best to build on. That is my own preference: I try to avoid building on other markets so as not to depend on correlations holding up going forward. In any case, using other markets in test builds has not produced better stats in my NG research.
The best degradation I have reached for NG using the market validation macro was -9%, if I remember correctly. This was building on the years 2007 to 2015-02-28.

I am sure other people have reached better degradation numbers, but this is my best achieved number.

This was done some time ago, before Peter introduced the new methodology using WF stats, so I was only using market validation stats to evaluate each setting change.

The pic is from one of my live traded systems on NG using 15 min data; it started trading live at the beginning of 2019.



Captureng.JPG - 214kB Capturengstats.JPG - 127kB


Sten

Posted on 11-12-2019 at 03:27 PM


Daniel UK1, thanks! I also prefer the simplest solution and am going to avoid adding other markets if possible. The NG, CL, HO, RB build is still running - I'll compare the results to the NG-only build and make a decision.

Out of curiosity, 9% is a degradation between what and what? There are a number of possibilities:

- degradation of the system running on all OOS data versus the system on all the in-sample data we have (2007-2019);
- degradation of "Walk-Forward with orig parameters" on OOS pre 2015 (B / A);
- degradation of "Walk-Forward with orig parameters" on OOS 2015-2018 (C / D);
- degradation of "Walk-Forward with orig parameters" on OOS 2018-2019 (E / F).

We could also measure the degradation of, say, 8 out of 8 verified systems against the bunch of all systems, etc. Whatever variant we pick, the arithmetic itself is the same (see the example below).
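
A minimal example of the number itself (made-up profits, just to fix the definition):

    def degradation(oos_metric: float, is_metric: float) -> float:
        """Relative change of the OOS metric versus the reference metric;
        0.0 means no drop, negative means the OOS result is worse."""
        return oos_metric / is_metric - 1.0

    # e.g. net profit 100,000 on period A and 91,000 on period B
    print(f"{degradation(91_000, 100_000):+.0%}")  # -> -9%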

I am trying to develop my own workflow, and currently I feel overloaded with all the possibilities. I understand the scientific method and the idea of improving one parameter at a time by comparing current results to a baseline.

But if we want to compare our process to what others have achieved - like "I have 9% market degradation on NG, and others get 5%" - we need to make sure we are using the same methodology to measure that parameter. And right now I do not completely understand how we measure it.


Daniel UK1

Posted on 11-12-2019 at 04:58 PM


Hi Sten, the -9% degradation is the degradation between the A and B training periods, using the market validation macro that I think you can find in the macro folder.
The market validation macro provides stats for IS and OOS degradation, and then two more OOS periods, depending on your settings.
I used this method to evaluate every little change, building 50k systems each time; in the end, once I had narrowed down to perhaps 2-5 opt setups, I ran the final WF stats methodology to validate my findings on those final setups and make sure I ended up with the best one.

However, for some time now I have been doing WF stats according to Peter's methodology for each and every change (and 50k systems), which takes much, much more time, but I believe it is worth it in the end.

And yes, comparing against each other is sometimes difficult, since everyone is probably using a small tweak here and there on Peter's original methodology. I just wanted to share what I have found works best, in case it helps. And my degradation number was only for the training period.


admin

Posted on 11-12-2019 at 06:15 PM


Hi Sten, Daniel. What you're publishing here is really good. Right now I have not done any work on NG with the newer methodology.
It would be good to publish the stats for NG 30, NG 30/60, NG with HO RB CL, NG 15, etc.
Right now there is also more free cloud power than normal, as I'm not using it. (I'm in Vietnam now, looking at a beautiful beach view out the window.)


Sten

Posted on 13-12-2019 at 05:40 AM


Quote: Originally posted by Daniel UK1
Hi Sten, the -9% degradation is the degradation between the A and B training periods, using the market validation macro that I think you can find in the macro folder. [...]

I found the "MarketValidationStats.gsbmacro" macro file in the GSB Data\Settings\Macros folder. However, in recent builds this macro is identical to "wf_stats5.gsbmacro" and "m3-wf_stats.gsbmacro": there is no difference when I compare the files byte to byte. I checked this on several machines to make sure I had not overwritten the macro code myself by accident (I believe there is a bug in GSB's macro saving that results in a macro file being overwritten with the wrong code).
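
(For anyone who wants to repeat the byte-to-byte check, Python's standard filecmp module does it in two lines; shallow=False forces a real content comparison:)

    import filecmp

    print(filecmp.cmp("wf_stats5.gsbmacro", "m3-wf_stats.gsbmacro", shallow=False))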

And the wf_stats5 macro does not provide an IS to OOS comparison. As far as I understand it, it compares the original equity curve, using the Training input parameters, to the equity curve with the input parameters from Walk-Forward. And it does that for 3 different OOS date periods:

- pre 2015-06-30 (OOS, B / A);
- 2015-06-30 - 2018-02-28 (OOS, D / C);
- 2018-02-28 - 2019-02-28 (OOS, F / E).

So the wf_stats5 macro from Peter's methodology only answers the question of whether WF improves system performance on OOS data. It DOES NOT compare IS to OOS metrics (if I understand correctly).

And this is what really bothers me, as I am more interested in comparing OOS to IS degradation across the bunch of systems in order to assess the amount of curve fitting introduced during the build process. Only after that, when I am relatively sure the build process does not produce curve-fitted systems, can I look at things like whether WF improves system performance.

So I need a way to compare OOS to IS degradation. Probably something like:

- build systems with "Nth Day Mode" set to NoTrd;
- save stats into Stats A;
- set "Nth Day Mode" to Trd;
- rebacktest;
- save stats into Stats B.

Then look at the B / A degradation; this directly compares OOS and IS performance (sketched in code below). But there are a number of questions, like: should I do this for all 50,000 generated systems or only for the top 250? If only for the top 250, I have to ensure the selection process does not use OOS data to pick the best systems (currently it does, as we set "Auto Nth Date Mode" to All).
And which date period to use (probably 1900 - 2015-06-30), etc.
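
The shape of the procedure in Python-style pseudocode (function names are mine; this is not a real GSB macro):

    def is_oos_degradation_check(systems, backtest, metric):
        # Stats A: backtest with the build setting, "Nth Day Mode" = NoTrd
        stats_a = {s: metric(backtest(s, nth_day_mode="NoTrd")) for s in systems}
        # Stats B: the same systems after switching "Nth Day Mode" to Trd
        stats_b = {s: metric(backtest(s, nth_day_mode="Trd")) for s in systems}
        # B / A per system, as a proxy for OOS vs IS degradation;
        # values well below 1.0 suggest curve fitting
        return {s: stats_b[s] / stats_a[s] for s in systems}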

I think something like this should be part of the methodology.


Comments are appreciated.


admin

Posted on 13-12-2019 at 06:41 AM


Hi Sten,
This is a high-level reply to your comments; you may not get more out of me until my Monday.
For us to understand each other, it may take a number of replies.
There is a new variant of m3 that uses global dates to 2019-02-28. This is to make the benchmark consistent for all users.
The IS is everything pre 2015-06-30, and half of that is IS due to it being built on nth no-trade but converted to nth all post-build.
You say the "macro does not provide IS to OOS comparison."
I'm not clear on your thinking. Perhaps it is that I think it is OOS, while you think it is OOS but not compared to IS.
Under app settings there is an option to normalize by the number of bars.
So I think m3 gives you two OOS periods and shows how WF affects things, and vice versa.
You're welcome to try what you suggest, as it can be done. There is no equity curve comparison; we are just looking at the numbers.
Perhaps you're more interested in degradation, and I'm more interested in results.

The problem with degradation is that not all years are equal, i.e. 2007-2008 was extremely profitable and the years after it were much harder.
This one factor alone is going to give the appearance of very high degradation.

I like to pick the top 250 of 50,000, as working on just 250 systems is a good use of CPU.
I also see little point in working with the bottom 49,750 systems when we know they were poor in the in-sample period.
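
In code terms the selection is nothing more than this (a sketch; the metric field name is illustrative):

    # Keep the n best systems by an in-sample metric; the rest were
    # already poor in sample, so spending CPU on them buys little.
    def pick_top(systems, n=250):
        return sorted(systems, key=lambda s: s["is_net_profit"], reverse=True)[:n]

    systems = [{"id": i, "is_net_profit": (i * 37) % 1000} for i in range(50_000)]
    print(len(pick_top(systems)))  # -> 250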


Daniel UK1

Posted on 13-12-2019 at 07:13 AM


Quote: Originally posted by Sten
I found the "MarketValidationStats.gsbmacro" macro file in the GSB Data\Settings\Macros folder. However, in recent builds this macro is identical to "wf_stats5.gsbmacro" and "m3-wf_stats.gsbmacro" [...] I think something like this should be part of the methodology.


Hi Sten, my macro called market validation stats, which was provided by GSB/Peter, gives you IS against OOS degradation, so that is what I was referring to before. I agree that degradation is important. In any case, in my GSB, WF stats is not the same as market validation stats.

I have had issues myself with GSB macros changing, but after investigation I found it was because I opened them from EDIT in the macro buttons location. I have (I think) never seen an instance where a macro changed, as in saved a change, just from opening it from the left-side GUI when you manually load one to be active (unless you open one from the active field and have the SAVE MACRO FILE radio button ticked). But I am sure Peter can confirm exactly when and how a macro gets saved. To my knowledge a macro should ONLY be saved if you open it from EDIT, or from the active field with the save radio button ticked.

The market validation macro I refer to gives degradation numbers on the build dates, then from the end build date to whatever I choose, and then a total degradation from the start build date to the end global date.
Anyway, if you ask Peter, I think he can send you this macro if you don't have it.

Daniel


Sten

Posted on 13-12-2019 at 08:27 AM


Here is what I mean when I say the wf_stats5 macro does not compare IS and OOS. Let's take a look at how the macro calculates Stats A and Stats B.

wf_stats5.png - 15kB

First it switches the grid to Favorites D.
Then it sets Dates and Global Dates to 1900-01-01 - 2015-06-30, and sets DatesMode to Trd, i.e. it uses our training period.

According to the methodology, "Nth Day Mode" should be set to All; however, the macro does not set this explicitly. So we are using both IS and OOS data from our training period here. (I call IS only the data GSB sees when it builds strategies, i.e. only "Nth Mode: NoTrd"; once we start to gather stats with Nth Mode: All, we contaminate our first OOS period, making all data prior to 2015-06-30 effectively in-sample. But that is another story.) In the post above I incorrectly stated:

"- Degradation "Walk-Forward with orig parameters" on OOS pre 2015 (B / A);"

We are actually using both IS and OOS data in this date period. What is critical to understand is that both Stats A and Stats B are calculated on all pre-2015-06-30 data, as we set Nth Mode: All and the macro does not change this.

Anyway, the macro sets the date range to 1900-01-01 - 2015-06-30.
It then uses UseWfCurParams to set "Use WF Cur. Params." to False, and does OverrideOriginalSettings to rebacktest the systems using the original input parameters from the training period.

It saves the stats to Stats A.

Then the macro sets "Use WF Cur. Params." to True and rebacktests the systems again.

It saves the stats to Stats B.

My point is that all these manipulations compare system metrics by running a system over the same date range but with different input parameters: first with the original inputs from the build on the training data (Stats A), then with the inputs from the Walk-Forward test (Stats B).

The only difference between Stats A and Stats B is "Use WF Cur. Params." set to False for Stats A and to True for Stats B.

So this test answers the question of whether Walk-Forward parameters improve system performance over a given date range. But the wf_stats macro does not directly compare system performance on IS versus OOS data, and I think it is critical to measure that IS/OOS degradation in some way when building systems.
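
Putting the whole macro into Python-style pseudocode, this is all it does (my reading of it, with invented function names):

    def wf_stats5(grid):
        grid.switch_to("Favorites D")
        grid.set_dates("1900-01-01", "2015-06-30")  # Dates and Global Dates
        grid.set("DatesMode", "Trd")                # Nth Day Mode stays "All"

        grid.set("UseWfCurParams", False)           # original training inputs
        grid.override_original_settings()
        grid.rebacktest()
        stats_a = grid.save_stats("A")

        grid.set("UseWfCurParams", True)            # Walk-Forward inputs
        grid.rebacktest()
        stats_b = grid.save_stats("B")

        # Same date range, same data; only the inputs differ. So B vs A
        # measures whether WF parameters help, not IS-vs-OOS degradation.
        return stats_a, stats_b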


P.S. I wrote Peter an e-mail asking him to send me the MarketValidationStats macro. I'll take a look at it; maybe it does what I need. I'm just surprised the current methodology does not have an IS/OOS metrics comparison step.

The test with "Use WF Cur. Params." set to False/True seems a very indirect way to check the amount of curve fitting in the build process - who says that Walk-Forward parameters improving system performance equals less curve-fitted systems?

But this may be a way to go; I need to think more about it. At the very least, this test cannot be the only robustness test we use.



Daniel UK1

Posted on 13-12-2019 at 10:56 AM


Hi Sten, I see. I think we all have our own ways of testing robustness; for me, validation on other markets in and outside of GSB, degradation as low as possible, and WF parameters as stable as possible matter most. I am not sure we all use exactly the same methodology in every detail. For myself I use Peter's, but I test variations of it, trying to validate that my twists are better - I have not been able to do that yet, though. Please see the attached screenshot of what the market validation macro will give you (this is not ES, though):

Capturengshow.JPG - 68kB


admin

Posted on 13-12-2019 at 06:35 PM


Hi Sten,
the systems are built with nth no-trade pre-2015, but the post-build setting is nth all. So technically this is 50% IS and 50% OOS, but I'm treating it as IS. There is an option to test degradation per system, if that's what you're after. (I need to look up how this is done.)

I'm still not clear on how I can help you. I feel the heart of the issue is that you want the lowest OOS degradation, and I want the highest OOS results.


admin

Posted on 13-12-2019 at 10:00 PM


Sten,
degradation is here. Note that this page is not up to date; the post-build nth etc. settings are now on the left side.
https://trademaid.info/gsbhelp/Systemstatistics-degradation....

There is more chance of curve fit doing this per system versus per group of systems. I haven't thought this all through.


RandyT

Posted on 15-12-2019 at 10:58 AM
Subject: System Parameters and Data2


Greetings, new GSB user here. Looking forward to catching up with all of you.

I have a question regarding the parameters I am seeing in some development runs. I see Secondary Filter settings in the parameter list, and I see they are being applied to Data1. I have a Data2 in the configuration and would have expected them to apply to Data2. Do I misunderstand, or have I missed some setting?

Related: it appears that despite the Entry mode settings, the SF Entry mode is able to use any available entry, even those not enabled?



mVvU0LO.png - 18kB


marka

Posted on 15-12-2019 at 07:15 PM


Quote: Originally posted by rterbush
[...] I see Secondary Filter settings in the parameter list, and I see they are being applied to Data1. I have a Data2 in the configuration and would have expected them to apply to Data2. Do I misunderstand, or have I missed some setting?

Related: it appears that despite the Entry mode settings, the SF Entry mode is able to use any available entry, even those not enabled?

Welcome rterbush, good to have you in the community.
SF and indicators can be applied to any data stream; GSB genetically chooses what works best.
SF entry mode should only use what it is set to. But if it is set to GA, it will use any of the indicators it is set to, out of the 41 or so possible. GA SF, however, works best with the closed type of indicators.




RandyT

Posted on 16-12-2019 at 02:00 PM
Subject: Optimization Best Practices


I did not intend to create this post as a new forum topic, but subscribers will find it here: https://trademaid.info/forum/viewthread.php?tid=260


admin

Posted on 16-12-2019 at 06:56 PM


The post is basically:

"Best practices for Optimization

On to my next question...

I'm doing some exploration in the CL market and noticed in Peter's somewhat old video, posted back in 2/2018, that he is configured for "Test @ Beginning" and is doing Test, Training and Validation.

Is this still considered best practice, or has that evolved? It does not seem to be the default setting in GSB."

That's an old video. The thing to note is that I am now using 100% training, but with nth 1/80 pre 2015-06-30.
So the question could be argued to be obsolete.

However, I acknowledge that there will always be other ways to use GSB than how I do it.

If the training period is shorter, the really important thing is to avoid training on 2007-2008, as the range is extreme. You would then be making a system that expects such a range in the future - which of course we know did not happen.
I'm using pre-2015-06-30, which is a much wider span of time and balances out the 2007-2008 period.

Hope this helps


RandyT

Posted on 16-12-2019 at 07:18 PM


And Peter, is that your approach to all markets then? 100% training and 1/80 nth?


admin

Posted on 16-12-2019 at 08:44 PM


Quote: Originally posted by RandyT  
And Peter, is that your approach to all markets then? 100% training and 1/80 nth?


Currently yes, but I've not gone for other markets for some time. Likely that's the task for January. The big question is: which market?


Trademaid forum. Software tools for TradeStation, MultiCharts & NinjaTrader