GSB Forums

General support questions.


Gregorian - 7-7-2017 at 08:41 AM

Just confirming that upgrading from 16GB to 32GB RAM has solved the problem of GSB stopping during long runs.

Still stopping unexpectedly

Gregorian - 9-7-2017 at 05:59 AM

Spoke too soon. GSB is still locking up. It's the only program running on the PC, using 3.8% App RAM and 13.0% Sys RAM of 32 GB total, so it seems to have ample resources. Any suggestions?

admin - 9-7-2017 at 04:42 PM

GSB has been running for 150 hours on my PC. Is yours overclocked? Can you run a diagnostic program on the PC?
see https://www.passmark.com/products/bit.htm
Are you on 29.3? From the first build of GSB until recently, it slowed down a lot when running for 24 hours or so. This is fixed in recent builds.
I'm getting a speed of 5500 per minute after 150 hours, so speed is excellent.



150hours.png - 116kB

emsjoflo - 9-7-2017 at 08:12 PM

I'm trying to use a custom indicator (an oscillator between 0 and 1) that I know is predictive. I created a txt file and populated open, high, low, close & vol like a stock chart, with the same timeframe as the primary data file, then selected it as the secondary. I've done multiple runs and GSB is not finding any results, even when I relax the filtering criteria. Either I'm doing something wrong, my indicator is junk, or GSB is looking for different clues than my indicator is offering.
Adaptrade found a bunch of strategies. DLPAL doesn't support custom indicators (as far as I can tell) and I can't get StrategyQuant to work with futures data (it seems geared towards MT4 and forex).
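For reference, the lines in the indicator file follow the same layout as the normal price files (date, time, open, high, low, close, volume, open interest), just with the oscillator values standing in for prices. Made-up example:

Code:
20161201,09:00,0.62,0.64,0.58,0.61,1000,0
20161202,09:00,0.61,0.61,0.47,0.49,1000,0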


admin - 9-7-2017 at 08:17 PM

GSB will be putting its own indicators on top of your indicator, so that's not a great scenario. Custom indicator support is in the short-to-medium-term pipeline for GSB.
Let's say you have
customIndic(close, x) where x = 5, 10, 15 ... 100
GSB will genetically optimize the range from 5 to 100. Adaptrade, with its limited data2 support, couldn't do this unless you exported the 20 files manually. GSB will do this automatically via some TS code.
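As a rough sketch (the function name and entry rule here are placeholders, not actual GSB output), the generated TS code would end up along these lines:

Code:
inputs: x(20); // GSB's genetic optimizer would vary x across 5, 10, 15 ... 100
vars: osc(0);
osc = customIndic(Close, x); // your custom oscillator, scaled 0 to 1
if osc crosses over 0.5 then buy next bar at market;
if osc crosses under 0.5 then sellshort next bar at market;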

Gregorian - 10-7-2017 at 02:58 PM

Ran the PassMark Burn In Test, and it passed. 29.3 is still stopping, even with low percentage RAM use. Sometimes it will run for hours before stopping, at other times just a few minutes. Can't determine a pattern.

admin - 10-7-2017 at 04:49 PM

Can you try a new install of 29.3 using default parameters, but with say 300 restarts? (You may have done this already.)
When it stops, has it hung, does the GUI still work, etc.? I will check to see if GSB has any low-level logging.
My copy has now run for 150 hours and stopped only because I set it to 300 restarts. If you have a spare HDD, you could try a fresh load of the OS,
or use VirtualBox and make a VM. I'm happy to try on a second computer at my end, but I doubt it will play up.
If you're not doing anything unsafe, disable the antivirus software (that's a slim chance).
No other users have reported this, so it's more likely to be OS or hardware.
You could try removing the old RAM sticks and just running with the new ones as a test.

crazyhedgehog - 10-7-2017 at 09:55 PM

Mine stops as well in some circumstances. It might have something to do with the number of optimizations/population. When I set it to 2000/300, it was stopping all the time after a few minutes. I then set it to 1000/200 and it started running normally with no stops. There might be another factor contributing to it, but I haven't figured out what exactly. This bug has existed for many versions.

admin - 11-7-2017 at 01:35 AM

Quote: Originally posted by crazyhedgehog  
Mine stops as well in some circumstances. It might have something to do with the number of optimizations/population. When I set it to 2000/300, it was stopping all the time after a few minutes. I then set it to 1000/200 and it started running normally with no stops. There might be another factor contributing to it, but I haven't figured out what exactly. This bug has existed for many versions.

I'm running this now on a 16 GB machine, 3000/300 with 300 restarts.
I will keep you informed.

Carl - 11-7-2017 at 04:01 AM

Hi Peter,

You might consider putting "beginDate" and "endDate" in "inputs" instead of "vars".

When backtesting and/or optimizing the code in TS the user will notice what is happening with the date range.
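For example (placeholder dates in EL's YYYMMDD form, e.g. 1070101 = 1 Jan 2007; the two declarations are alternatives, not meant to coexist):

Code:
{ buried inside the code, invisible to the user: }
vars: beginDate(1070101), endDate(1171231);

{ visible and editable in the TS strategy inputs dialog: }
inputs: beginDate(1070101), endDate(1171231);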

admin - 11-7-2017 at 04:33 AM

Quote: Originally posted by Carl  
Hi Peter,

You might consider putting "beginDate" and "endDate" in "inputs" instead of "vars".

When backtesting and/or optimizing the code in TS the user will notice what is happening with the date range.

Will do so shortly. Good idea.

admin - 11-7-2017 at 05:01 PM

Quote: Originally posted by admin  
Quote: Originally posted by Carl  
Hi Peter,

You might consider putting "beginDate" and "endDate" in "inputs" instead of "vars".

When backtesting and/or optimizing the code in TS the user will notice what is happening with the date range.

Will do so shortly. Good idea.

Done from 29.8 onwards. Release coming fairly soon.

admin - 11-7-2017 at 05:06 PM

Quote: Originally posted by Gregorian  
Spoke too soon. GSB is still locking up. It's the only program running on the PC, using 3.8% App RAM and 13.0% Sys RAM of 32 GB total, so it seems to have ample resources. Any suggestions?

300 300 4/300 is fine.
I'm not saying you have no problem, but it's likely OS related.
I'm interested to see how widespread it is with other users.
My second machine is slower, with only 16 GB of RAM.
Worst case, mask it with a lower population/generations. It's not at all a critical setting.


restarts-ok.png - 74kB  restarts2-ok.png - 83kB

Progress on stopping problem

Gregorian - 13-7-2017 at 09:02 AM

GSB has not stopped since I disabled WD SmartWare [a backup program] in the Windows tray. This program would also cause TSL to lock up, even though no backups were running at the time.

crazyhedgehog, try disabling any backup program you may have running in the background.

Carl - 13-7-2017 at 02:29 PM

Win 10, i7-7700, 32 GB RAM, SSD

Started GSB 29.7 38 hours ago.
ES 30 min 1997-2017
gen 300, pop 3000, restarts 100
stoploss 1000
fitness commission 15
report commission 15

After 38 hours of running (9 restarts so far), speed is 4600/minute and RAM usage is at 50%.
In the meantime I ran 24 WFs at gen 400 / pop 300. RAM usage went up to 75%, which slowed GSB down considerably during the WFs.
GSB never stopped as far as I know.

admin - 13-7-2017 at 04:34 PM

My tests are the same: 3 days with 300 3000 300 and it's still going fine.

curt999 - 27-7-2017 at 02:46 PM

I noticed GSB will work on Renko bars. I tried some tests on NQ. I'm using bars with real OHLC though, or it's meaningless. I tried using NQ Renko as the main data set and 15-min as the second data set, and it gave some good results.

rws - 27-7-2017 at 03:11 PM

How do you make that renko data for GSB?
Thanks

curt999 - 27-7-2017 at 04:48 PM

I was exporting with NinjaTrader using the CQG feed; you can get a year of tick data. Make sure you use a backtestable Renko bar like SharkIndicators' (there are others out there). The files are actually quite small if you use a larger Renko bar, like 10 on the Nasdaq; any smaller will just make a million trades. I can send you the NQ file I have; it was made with the RJay live Renko bar type.

cyrus68 - 11-9-2017 at 01:10 AM

There is a problem with using secondary data series that aren't always matched with the primary data, because of certain holidays. For example, if you use ES as data1 and TY as data2, there will be days on which TY trades but ES doesn't, and vice versa.

I haven't found a way, in Tradestation, to get data2 to always match data1. So the only way is to pull the downloaded data into Excel and clean it up by removing the holidays on which the mismatch occurs. As a result, for backtesting purposes, you would have data that doesn't contain gaps.

However, the problem also occurs going forward, when you are trading live. I doubt that it would be possible to program GSB to deal with this issue automatically, so a possible solution would be a manual one, in which you would add a few lines of EL code to the strategy produced by GSB.

In pseudo code, this would be something like:
If the open of data1 is present and the open of data2 is missing, don't trade;
or if the open of data1 is present and the open of data3 is missing, don't trade;
or if the open of data2 is present and the open of data1 is missing, don't trade;
... (and so on for every pair of data streams)
Else, trade as normal.
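In EasyLanguage that might look something like this (untested, just a sketch of the idea):

Code:
vars: streamsInSync(false);
// on each bar of data1, require data2 and data3 to have a bar with the same date
streamsInSync = (Date = Date of data2) and (Date = Date of data3);

if streamsInSync then begin
	// normal GSB-generated entry logic would go here
end;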

Any suggestions are welcome.

admin - 11-9-2017 at 01:20 AM

TS & GSB do not use data unless all used data streams have data on the same bar. We put a lot of work into GSB to make sure it's identical to TS.
This is what your pseudo code does.
In the big picture, these differences are not significant & I like the current approach.

cyrus68 - 11-9-2017 at 04:30 AM

Good to know that GSB won't try to put on trades when there are gaps in one or more of the data series. In that case, there is no need to clean up the historical data used for backtesting. That saves a lot of work.

In Builder, you DO have to clean up the historical data. Kudos to GSB.

admin - 11-9-2017 at 04:34 AM

Hi Cyrus, thanks for the kind words.
We spent a massive amount of time getting GSB to equal TS & MC. When we get to MT5, AmiBroker & Ninja, these are the sort of headaches we expect to see.
Different platforms can handle this situation differently.

Slowdown after a while

Gregorian - 3-11-2017 at 12:14 PM

In both 39.11 and 40.04, there is a significant slowdown after a few hours of generation. For example, on a PC where GSB is the only task running, consuming 17% of CPU and 10% of RAM, at the beginning of the run it generates something like 4,700 strategies per minute, but a few hours later, with no change in resource consumption, it degrades to 144/min.

Maybe I'm doing something wrong?

Also in 40.04 the stats window does not update with RAM use figures. Have to get those from Task Manager.

admin - 3-11-2017 at 03:14 PM

Quote: Originally posted by Gregorian  
In both 39.11 and 40.04, there is a significant slowdown after a few hours of generation. For example, on a PC where GSB is the only task running, consuming 17% of CPU and 10% of RAM, at the beginning of the run it generates something like 4,700 strategies per minute, but a few hours later, with no change in resource consumption, it degrades to 144/min.

Maybe I'm doing something wrong?

Also in 40.04 the stats window does not update with RAM use figures. Have to get those from Task Manager.

Is that in a standalone, a worker or a manager? Can you repeat the identical test again? For a long time, when I run 20 or 30 workers, I get one that stalls. We have been trying to address this for a while; it's improved but not fixed.
I'm on 40.05 standalone right now and haven't noticed the RAM figures disappear, but I will look out for it.

Gregorian - 3-11-2017 at 11:01 PM

It's a standalone. Just ran 40.4 on a different instrument, and after six hours went from 3,400/min to 239/min, so that's 3 for 3 degradations today.

admin - 4-11-2017 at 01:09 AM

Quote: Originally posted by Gregorian  
It's a standalone. Just ran 40.4 on a different instrument, and after six hours went from 3,400/min to 239/min, so that's 3 for 3 degradation today.

Can you send me your entire folder zipped up, so I can test?
(i.e. send via a Dropbox URL; sending an exe won't get to me.)
Anything in the exception folder?
Most likely it's unique to your setup.

Not getting any trades with SPY EOD data

philcollins - 11-11-2017 at 10:24 AM

I'm trying to generate some algos for EOD stock data (SPY as a first test), but GSB doesn't seem to produce any results.

The data (~15 years) seems to be imported alright. Data lines look like this:

Code:
20161201,09:00,219.437,219.437,217.866,218.244,64083890,0
20161202,09:00,218.383,218.960,217.975,218.383,54539260,0
20161205,09:00,219.357,220.103,219.129,219.646,49510242,0


I turned all backtest and train filters off and still nothing. The contract definition is as follows:
Code:
"Name": "SPY", "SecType": 0, "Exchange": "GLOBEX", "Currency": "USD", "TicksPerPoint": 100, "PointValue": 1.0, "Digits": 2, "SessionClose": "23:59:00", "Session1CloseFrom": "23:59:00", "Session1CloseTo": "23:59:59", "Session2CloseFrom": null, "Session2CloseTo": null



Can anyone point me in the right direction? I've attached the optimization settings used.

Edit: Generating algos for the sample ES data works OK.


curt999 - 12-11-2017 at 09:58 AM

did you also turn off exit on close

philcollins - 12-11-2017 at 10:07 AM

Quote: Originally posted by curt999  
did you also turn off exit on close

Yeah I did.

admin - 12-11-2017 at 03:47 PM

Quote: Originally posted by philcollins  
I'm trying to generate some algos for EOD stock data (SPY as a first test), but GSB doesn't seem to produce any results.


Edit: Generating algos for the sample ES data works OK.


Try turning the fitness and reports commissions to zero initially.
Set all performance filters to pass everything, except require the test period to have a PF of 1.1.
Your training period is also too high.
Under App Settings > GUI you can also have 'top update' enabled and 'latest update' enabled to see each system GSB is running.
See these results in the tabs next to unique-systems.
Don't leave these settings set to true unless you have to; it's much slower.
You can then see what metrics GSB is producing.

philcollins - 13-11-2017 at 05:43 AM

Cheers I got it working now after some fiddling. I had to enable a couple of settings in the GUI update settings (update unique etc.) and disable the secondary filter-mode.

I'm now running into another problem: all generated strategies seem to start trading at 01/01/2008 even though I have data from 2002 to 2016. Any ideas? I tried lowering 'Max bars back' in app settings (lowest possible seems to be 300), but this didn't help. I'm using a 50/50 split.

I've attached my settings. I tried both 'spy.1440.Minute.foo.txt' and 'spy.405.Minute.foo.txt' as filenames but this didn't seem to make a difference.



admin - 13-11-2017 at 03:53 PM

Quote: Originally posted by philcollins  
Cheers I got it working now after some fiddling. I had to enable a couple of settings in the GUI update settings (update unique etc.) and disable the secondary filter-mode.

I'm now running into another problem: all generated strategies seem to start trading at 01/01/2008 even though I have data from 2002 to 2016. Any ideas? I tried lowering 'Max bars back' in app settings (lowest possible seems to be 300), but this didn't help. I'm using a 50/50 split.

I've attached my settings. I tried both 'spy.1440.Minute.foo.txt' and 'spy.405.Minute.foo.txt' as filenames but this didn't seem to make a difference.


It might be that the systems require high volatility to meet your performance metrics. Lower them and I think you will get systems that trade earlier.
If this doesn't fix it, read below.


Disabling SF will tend to give a massive drop in profit per trade.
You should try GA or CloseD and see what works best.
But on daily bars it might have merit, as the number of trades will have dropped due to the daily bars.
Can you upload your data and a saved system?

philcollins - 13-11-2017 at 04:45 PM

Quote: Originally posted by admin  

It might be that the systems require high volatility to meet your performance metrics. Lower them and I think you will get systems that trade earlier.
If this doesn't fix it, read below.


Disabling SF will tend to give a massive drop in profit per trade.
You should try GA or CloseD and see what works best.
But on daily bars it might have merit, as the number of trades will have dropped due to the daily bars.
Can you upload your data and a saved system?

I chopped off 1 year at the beginning so that it starts at 2003 instead of 2002 and now the first trades come in at 01/01/2009 instead of 01/01/2008. So I guess some offset ends up wrong somehow, but consistently wrong.

(SF enabled works, btw; at first glance it just yields fewer systems.)

I've sent you the SPY data in a private message, perhaps you can tell what's going on.

Edit: lowering the filter restrictions didn't work.

admin - 13-11-2017 at 06:01 PM

Quote: Originally posted by philcollins  
Quote: Originally posted by admin  


Edit: lowering the filter restrictions didn't work.

I suspect GSB thinks you have a hole in the data, but I'm not sure.
Try some other data like $SPX.X or @ES.D. I know it's not tradable, but it's just a test.

Check just before 18 Dec 2007.
You have old settings for the test period; I would also loosen the backtest termination.



TRAINING.png - 16kB

parrdo101 - 9-12-2017 at 03:52 PM

Peter had me make a custom session in TradeStation for use with ES, $IDX and $SPX.X; it was 8:30am to 3pm exchange time. Those hours also happen to be EXACTLY the regular session for $IDX and $SPX.X. Careful inspection of the regular session and my custom session shows they cover exactly the same times and days (shaded green).

I am running my OOS period only in the TradeStation chart with the GSB optimizer script: $6333+ net profit for the OOS period. Now, to double-check things as I usually do, I switch both $IDX and $SPX.X from their regular session to my same-hours custom session: -$518 (minus) net profit. What explains this?



reg.JPG - 65kB cust.JPG - 62kB

admin - 10-12-2017 at 03:51 PM

Quote: Originally posted by parrdo101  
Peter had me make a custom session in TradeStation for use with ES, $IDX and $SPX.X; it was 8:30am to 3pm exchange time. Those hours also happen to be EXACTLY the regular session for $IDX and $SPX.X. Careful inspection of the regular session and my custom session shows they cover exactly the same times and days (shaded green).

I am running my OOS period only in the TradeStation chart with the GSB optimizer script: $6333+ net profit for the OOS period. Now, to double-check things as I usually do, I switch both $IDX and $SPX.X from their regular session to my same-hours custom session: -$518 (minus) net profit. What explains this?
The important thing is that the TS results = GSB. If that is the case, pursue this no further. If you did want to look into it more, my guess is that your computer's time zone is not US Central time (that's OK, but the issue might relate to it). I would try closing TS, setting the computer to Central time and testing again.




parrdo101 - 13-12-2017 at 05:42 PM

Checking in...If you had to pick only one you'd be stuck with forever:

1. Always optimize datastreams routinely on WF optimize. Set to True.

2. Never optimize datastreams routinely on WF optimize. Set to False.

Which would you?

admin - 13-12-2017 at 06:35 PM

Quote: Originally posted by parrdo101  
Checking in...If you had to pick only one you'd be stuck with forever:

1. Always optimize datastreams routinely on WF optimize. Set to True.

2. Never optimize datastreams routinely on WF optimize. Set to False.

Which would you?

This is a good question.
I would not optimize data streams by default.
Typically I get the same final results, or I get better results but with much worse linearity in the equity curve. I've only used optimized data streams 10 or so times.
It would also only be useful when you have very similar data streams, i.e. ES, $SPX, $IDX etc. If you had, say, an advance/decline ratio custom indicator, it's not likely to switch from that to $SPX etc.

parrdo101 - 14-12-2017 at 11:37 AM

One can't even select whether to optimize the datastreams per everything in the snippet, right?

But, I've been assuming it WILL always optimize the secondary datastreams in the GSB "first," "primal" optimize (not wf optimize)...(?) Do I have that right?




inhere.JPG - 27kB

admin - 14-12-2017 at 02:27 PM

Quote: Originally posted by parrdo101  
One can't even select whether to optimize the datastreams per everything in the snippet, right?

But, I've been assuming it WILL always optimize the secondary datastreams in the GSB "first," "primal" optimize (not wf optimize)...(?) Do I have that right?

Many features are hidden unless you have advanced mode on
(View from the top menu, then Advanced Mode).
The GSB quick start guide might be worth a read again.
By the way, you showed very good attention to detail in spotting that the test criteria had changed from Pearson's to profit factor.
Note also that the profit factor figure used is lower in the all-markets settings vs the ES settings.

WF re-run with different dates

Gregorian - 17-12-2017 at 05:17 PM

Is it possible to run a WF on an old strategy generated with GSB, but with different dates? For example, I'd like to re-run WF on a strategy generated and saved by GSB six weeks ago, but this time including the last six weeks of data.

From what I can tell, the WF date range on a saved strategy seems to be hard-coded and is unchangeable.
Try editing the GSB system file (save it by right-clicking) and changing the dates. Six weeks is unlikely to make any difference though.

admin - 17-12-2017 at 08:27 PM

A more thorough reply:
I haven't done this, but you should be able to right-click and save the system, then edit the saved system in Notepad, change the dates, and load the system again. To be safe, load it into a new copy of GSB in case overwriting the system didn't work.
Six weeks should be very little time compared to many years of data, so I wouldn't think it's significant.
I like to see parameter stability for 3 or more WF runs, so again it's not likely to change things.

parrdo101 - 19-12-2017 at 06:58 AM

Say you've no WF test in GSB; you're going to pick something for the future directly out of the systems GSB finds:

What are your top 3 things to look for? Sorts in the columns? I'd prefer them stated as a sorting of columns, not something like "look for the equity curve closest to a straight line"; I'd prefer something more specific, e.g. "Step 1: grab the next highest Pearson's from the top which has the next lower number of trades than the top Pearson's, in the Full Period column." (Please reference directly any discretion you use, such as whether it's the Full Period or Test Period columns, etc.)

It looks like there could exist a very smart sort key for this situation. Could it look good even without a WF? I'm beginning to think GSB actually has it in there somewhere.

parrdo101 - 19-12-2017 at 10:53 AM

Quote: Originally posted by admin  
I'm intending to change this in the next week.
I want commission to be subtracted off system curve, IS curve and OOS curve.
One value for all
This value is NOT taken off fitness

Then a commission value to be taken off FITNESS in the system curve, IS curve and OOS curve.
One value for all
This value is not taken off equity curves.
EWFO works like this.
It means you can shift GSB to look for fatter trades, but get a totally level playing field for the equity curves,
but can choose to use commission on both as is the traditional usage of commission field.

I tested with $1 in each commission field and it worked fine. Probably your figure is too high to pass your filters in app settings.
If you use a fitness of net profit * average trade, the commission value in fitness will in effect be irrelevant.
With a fitness of net profit by itself, commission in fitness will be very significant.
Commission with a fitness of NP & PF I think will be irrelevant too.


So essentially, to incorporate a realistic commission while GSB is finding systems (say 29.76 round trip for ES), you have to use Net Profit as the fitness function? And, if so, is this going to show up in the graph as a slower-rising equity curve? I'm not clear.

(Please also note the unanswered second question in my preceding post. I've noticed several instances where, when I pile on with questions, you only answer the last one; several have been missed that way.)

admin - 19-12-2017 at 01:51 PM

If fitness is NP*AT, GSB is going to look for trades with a high average profit. Adding commission to the fitness will likely not change anything at all.
If fitness is net profit only, commission is critical and I would double the actual value.
If you don't double this figure, you might get a system that does lots of trades, makes all its money in 2007 to 2008, and makes nothing at any other time.
However, I very rarely use anything other than NP*AT fitness, and unless you have a very good reason, I wouldn't do anything else.
The commission in the reports section will show up in the graph only (a slower-rising curve).
The commission in the fitness section will be used internally by GSB.
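A quick illustration with made-up numbers: take a system with 1,000 trades averaging $40 (NP = $40,000) and one with 200 trades averaging $200 (also NP = $40,000). With net-profit-only fitness they tie, so a $30 per-trade commission is what separates them ($10,000 vs $34,000 net). With NP*AT fitness the second system already scores 40,000 x 200 = 8,000,000 against 40,000 x 40 = 1,600,000, so it wins with or without the commission term, which is why the commission value barely matters there.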
I will reply to the previous question next; sorry I missed it.

admin - 19-12-2017 at 02:06 PM

Quote: Originally posted by parrdo101  
Say you've no WF test in GSB; you're going to pick something for the future directly out of the systems GSB finds:

What are your top 3 things to look for? Sorts in the columns? I'd prefer them stated as a sorting of columns, not something like "look for the equity curve closest to a straight line"; I'd prefer something more specific, e.g. "Step 1: grab the next highest Pearson's from the top which has the next lower number of trades than the top Pearson's, in the Full Period column." (Please reference directly any discretion you use, such as whether it's the Full Period or Test Period columns, etc.)

It looks like there could exist a very smart sort key for this situation. Could it look good even without a WF? I'm beginning to think GSB actually has it in there somewhere.


I have filters on to only keep the top percentage of the best systems (full period).
The amount isn't critical, but I don't want to look at 10,000 systems as it would take me months.
So for ES I had tight metrics stored under app settings:
something like $80,000 NP, 1.8 PF(?), Pearson's 0.985.
Pearson's is what I mean by closest to a straight line (the R correlation coefficient).
(Specs for other markets might be much lower.)
So I sort the entire period by NP/DD and pick systems with a decent PF, sort on PF and pick some with curves I like, sort on Pearson's, and sometimes sort on NP or fitness. I feel that for ES right now, with a very low range, high PF is the safest, not high Pearson's.
On ES, as we are at an extreme historic low in range, I also want to see the last 12 months or so be profitable. A validation period of 8% is good for this too; this can be added in the filters.
These are good questions to ask, and some of this was covered in the walk forward video, which is a must-see for GSB users.
https://www.youtube.com/watch?v=CQy-yP_kBMM



rws - 19-12-2017 at 03:48 PM

Until what date does this test run? Until January 2017 or December 2017?






admin - 19-12-2017 at 04:20 PM

Quote: Originally posted by rws  
Until what date does this test run? Until January 2017 or December 2017?


Not clear what you are referring to.

admin - 19-12-2017 at 04:24 PM

Quote: Originally posted by parrdo101  
Say you've no WF test in GSB; you're going to pick something for the future directly out of the systems GSB finds:

It looks like there could exist a very smart sort key for this situation. Could it look good even without a WF? I'm beginning to think GSB actually has it in there somewhere.


There is no way you should ever trade without doing a WF.
ES with the secondary filter closed has a high chance of working, but I would not dream of live trading without a WF. If no WF software existed, then you could wait 6 months for out-of-sample results, but the WF software does exist, and it is good and fast.

rws - 19-12-2017 at 04:25 PM


I already saw in the video that it was until November 2017 in TradeStation.


rws - 19-12-2017 at 04:33 PM

Just wondering: how are the WF parameters chosen, based on the fitness criteria only?
Is it taken into account that while optimizing with WF there could be a local peak? I mean, sometimes systems work fantastically with x=10, but when x is 9 or 11 performance is much worse.
That could also result in performance issues if the market changes a bit out of sample.

For example, there could also be good performance at, say, x=5, while the area around x=3, 4, 6 and 7 is more stable.
Do you consider the shape of the profit curve as a function of the parameter(s)?




admin - 19-12-2017 at 04:45 PM

Quote: Originally posted by rws  
Just wondering: how are the WF parameters chosen, based on the fitness criteria only?
Is it taken into account that while optimizing with WF there could be a local peak? I mean, sometimes systems work fantastically with x=10, but when x is 9 or 11 performance is much worse.
That could also result in performance issues if the market changes a bit out of sample.

For example, there could also be good performance at, say, x=5, while the area around x=3, 4, 6 and 7 is more stable.
Do you consider the shape of the profit curve as a function of the parameter(s)?



This is a good question.
WF will choose whatever gives peak fitness in each run.
Ideally, the last 3 or more WF runs should end up using the same parameter value.
This means the parameter is consistently the best value.

A "local peak" is possible, but not likely with the GSB methodology.
If it happens, you are going to see a parameter that is not stable.

If I were manually optimizing a value, I would tend to look for the best area, not the peak value. However, GSB can be multi-dimensional, so this approach is too simplistic. The exception would be, say, the stop value; I tend to add that after the system is built and look for the best area, not the best value.
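For example (made-up numbers): if stops of 800/900/1000/1100/1200 give net profits of 35k/37k/58k/36k/34k, the 58k at 1000 is an isolated spike and probably noise, whereas 36k/52k/55k/53k/35k would show a broad plateau around 900 to 1100 that is more likely to hold up out of sample, even though its single best value is lower.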

rws - 19-12-2017 at 06:09 PM

3 runs with the same WF parameters could also just mean that the market has not changed enough to cause trouble.

These optimizers are used in AmiBroker when running WF:
the CMA-ES, SPSO and Tribes engines.

These engines avoid local peaks, according to the software's author.

CMA-ES works very fast, even for bigger portfolios of 5-min tickers and
many years of testing.




admin - 19-12-2017 at 06:53 PM

Quote: Originally posted by rws  
3 runs with the same WF parameters could also just mean that the market has not changed enough to cause trouble.

Yes, but if you think like that, you should take your money out of your brokerage account and put it into fixed interest, or under your mattress.
Trading systems can and do fail, even when you know what you're doing. If you don't know what you're doing, they are highly likely to fail.
Ideally you trade many systems, markets and time frames to diversify your risk. 3 runs with the same parameters, good performance metrics etc., and a good architecture is a risk I'm willing to take. I've been trading for 17 years and have never had a period where I stopped trading. This is totally a personal decision.

There's nothing wrong with the CMA-ES, SPSO and Tribes engines, but I'm not sure there is any great advantage.
For small WF runs, back in the days when I did all my WF in TS, brute force gave similar results to genetic if the genetic sample was say 10% of the brute-force one. Go below 10% and you're very often still OK.

rws - 19-12-2017 at 07:32 PM

There is nothing wrong with optimizing, and I think there is even less wrong
with optimizing the optimization and avoiding local peaks where possible.

Local peaks are an issue that can be dealt with, and I hope you
can improve that in the future.

Here is a description from other optimizing software (not AmiBroker):
Optimize function works differently. It does not seek performance peaks; instead it looks for stable performance ranges and places the parameters into their centers. This does not necessarily result in the maximum backtest performance, but in the highest likeliness to reproduce the hypothetical performance in real trading.







admin - 19-12-2017 at 08:13 PM

Quote: Originally posted by rws  


Here is a description from other optimizing software (not AmiBroker):
Optimize function works differently. It does not seek performance peaks; instead it looks for stable performance ranges and places the parameters into their centers. This does not necessarily result in the maximum backtest performance, but in the highest likeliness to reproduce the hypothetical performance in real trading.

I see the merits of what you say.
It's not the same, but the TS WF did have the option to stress test,
i.e. add ±5% and ±10% to the parameters and look for the best area. My tests on this showed degraded performance in the OOS curve. To me this means the concept looked good, but wasn't helpful in practice.

curt999 - 19-12-2017 at 08:25 PM

I also want to see the last 12 months or so be profitable.

I look at this first usually. Maybe it would be beneficial to add some sort of column to the results that lists performance over the past 12 months, so you could sort by it.

admin - 19-12-2017 at 08:32 PM

Quote: Originally posted by curt999  
I also want to see the last 12 months or so be profitable.

I look at this first usually. Maybe it would be beneficial to add some sort of column to the results that lists performance over the past 12 months, so you could sort by it.

I use validation for this, but if that's not going to work for you, it wouldn't be hard to add another filter in the app settings.

rws - 20-12-2017 at 05:14 AM

Once you have peaks in a parameter optimization, changing (other) parameters or changed market conditions could also cause OOS degradation because of that peak. There has been a lot of writing and testing about this issue lately (also on Quantopian).

I don't mind too much if the WF parameters change a bit over the different iterations as long as the profit curve is linear. I would rather see a better profit curve because of WF optimization than exactly the same parameters. Sure, the parameters should not change too much.

Parameters can also change a lot in WF because of peak optimization. I think that if you look for a good average area of a parameter instead of a peak when optimizing, you will see less change in the WF parameters too.

When market conditions change, I think it is only logical that there could be a better setting for a parameter, provided it is not a peak.



admin - 20-12-2017 at 01:27 PM

Quote: Originally posted by rws  
Once you have peaks in a parameter optimization, changing (other) parameters or changed market conditions could also cause OOS degradation because of that peak. There has been a lot of writing and testing about this issue lately (also on Quantopian).

I don't mind too much if the WF parameters change a bit over the different iterations as long as the profit curve is linear. I would rather see a better profit curve because of WF optimization than exactly the same parameters. Sure, the parameters should not change too much.

Parameters can also change a lot in WF because of peak optimization. I think that if you look for a good average area of a parameter instead of a peak when optimizing, you will see less change in the WF parameters too.

When market conditions change, I think it is only logical that there could be a better setting for a parameter, provided it is not a peak.


If this is your belief, you could just do a rolling WF. The proof would be what the out-of-sample curve looks like. The problem with WF is that you have greatly reduced the sample size of each run; it's particularly bad unless you have a very big sample size.
I'm open to adding other options for genetic optimization, but it's extremely low priority.

admin - 22-12-2017 at 03:47 PM

Quote: Originally posted by admin  
Quote: Originally posted by curt999  
I also want to see the last 12 months or so be profitable.

I look at this first usually. Maybe it would be beneficial to add some sort of column to the results that lists performance over the past 12 months, so you could sort by it.

I use validation for this, but if that's not going to work for you, it wouldn't be hard to add another filter in the app settings.

This is today's beta.


latestDays.png - 6kB

parrdo101 - 1-1-2018 at 08:00 AM

How many secondary data files can be loaded for a GSB run? Some limit? Completely forgot to ask.

admin - 1-1-2018 at 03:36 PM

Quote: Originally posted by parrdo101  
How many secondary data files can be loaded for a GSB run? Some limit? Completely forgot to ask.

Unlimited, as long as you have the RAM. From memory, it is a bit slower if you increase this a lot.

Syncing and Stop Orders not activating

Gregorian - 9-2-2018 at 01:37 PM

I have an NQ strategy with a Stop Loss of $1000. As you can see from the Trade Manager pic:

1. The open position for NQ shows over $4000 in losses, but the stop is not triggered and Trade Manager shows it as "unsent", even though the Properties is set to send stops to the TS Stop Server.

2. The Strategy Performance Report shows the strategy as flat, so I then noticed that there are three one contract short positions open, whereas the strategy only trades one contract. Apparently the strategy has gotten out of sync with the real world.

The Wait for UROut option is not checked. Might this be the problem? I was under the impression that if you had, for example, a long position and an order to open a short position came through, the long position would be closed automatically.

Stop Orders.jpg - 1.6MB

admin - 9-2-2018 at 02:43 PM

Quote: Originally posted by Gregorian  
I have an NQ strategy with a Stop Loss of $1000. As you can see from the Trade Manager pic:

1. The open position for NQ shows over $4000 in losses, but the stop is not triggered and Trade Manager shows it as "unsent", even though the Properties is set to send stops to the TS Stop Server.

2. The Strategy Performance Report shows the strategy as flat, so I then noticed that there are three one contract short positions open, whereas the strategy only trades one contract. Apparently the strategy has gotten out of sync with the real world.

The Wait for UROut option is not checked. Might this be the problem? I was under the impression that if you had, for example, a long position and an order to open a short position came through, the long position would be closed automatically.

I am no expert on this. Wait for UROut is normally ticked. If not, I think you may get a double fill under freak conditions, i.e. a stop order is filled as the market hits it, and moments later TS cancels the stop.
The TS forum would be a better place to ask this. Often sims are not as reliable as actual execution. I do have trouble from time to time with execution at TS: an MOC failed, and last week I had an MOC system that left a profit target working after the MOC.
No execution is perfect.
I sold TS code that reconciles the TS chart to the live account, but it was only designed for one trading system per account. I think it was $200. It could be improved beyond this, but I don't have the time to invest in it as it's not a two-minute improvement.

cyrus68 - 14-2-2018 at 01:03 AM

Regarding the contracts table in GSB, I'm not sure if it is set up wrong, was filled in a hurry, or I'm reading it wrong. The column called 'Ticks' appears to refer to tick value, NOT tick size.
The info for GC is correct: a tick size of 0.1 translates to a tick value of 10.
But for AD, which has a tick size of 0.0001 and a tick value of 10, why is there a 10000 entry?
ES has a tick size of 0.25 and a tick value of 12.50, so what's 4 doing there?

admin - 14-2-2018 at 01:12 AM

There are 4 ticks per point for ES, so the tick size is 1/4 = 0.25.
For AD the price scale is 1/10000. Hope this looks OK to you; I've never traded AD, so I don't know it so well.
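Spelling out the arithmetic with the figures quoted above: the Ticks column is 1 / tick size, so ES with a 0.25 tick gives 1/0.25 = 4, and AD with a 0.0001 tick gives 1/0.0001 = 10000. The dollar value of a tick is then the point value divided by ticks per point, e.g. ES: $50 / 4 = $12.50, matching the tick value you quoted.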




ad-fut.png - 15kB

cyrus68 - 15-2-2018 at 02:09 AM

Thanks for the clarification. So we need to enter ticks per point for each instrument.
In which case, 10000 for AD is correct.
Some software packages use tick size.
In the case of GSB it uses a different input, which is fine.

cyrus68 - 17-2-2018 at 12:43 AM

When you use a validation period in GSB, does the test period remain OOS?
In other words, does the test period data remain unseen by GSB or is it seen?

admin - 17-2-2018 at 12:49 AM

Quote: Originally posted by cyrus68  
When you use a validation period in GSB, does the test period remain OOS?
In other words, does the test period data remain unseen by GSB or is it seen?

Very good question.
GSB's fitness doesn't see this period, but in one sense, because we humans have seen it, it's not out of sample.
This is still OK as long as you do a WF.
By the way, it's not easy or common to get a bad WF result on NG.
I got one today: fantastic final equity curve, really bad OOS curve, and parameter stability of zero.

bad-wf.png - 120kB

cyrus68 - 17-2-2018 at 01:11 AM

the choice of session times for CL was intelligent.
I see that you have the same ones for NG.
looks like it is working out well.

admin - 17-2-2018 at 03:47 AM

Quote: Originally posted by cyrus68  
the choice of session times for CL was intelligent.
I see that you have the same ones for NG.
looks like it is working out well.

NG is just amazing.
Note the stability is very healthy, and the curve is consistently profitable in most runs too.

ng-great.png - 143kB

cotila1 - 23-2-2018 at 03:25 AM

Hi Peter, not sure if it's on your plan already, but I think it would be useful if WF showed a PASS/FAIL result flag at the end of the process?

admin - 23-2-2018 at 04:45 AM

Quote: Originally posted by cotila1  
Hi Peter, not sure if it's on your plan already, but I think it would be useful if WF showed a PASS/FAIL result flag at the end of the process?

The answer is still a bit grey:
is the curve nice, the anchored stability score, and how close the OOS and current curves are if you anchor them at the top right instead of the bottom left.
Hope to have that in a while.

cyrus68 - 25-2-2018 at 10:40 PM

I generated systems on version 43.27 and the speed was absolutely atrocious. Here are the specs:

Hardware was i9 7900 with 64 gb ram

Instrument was NG with 15min data. The settings were the same as Peter’s with the following differences:
4 secondary data streams, operators * and /. Commission 2.5 and slippage 10 per trade.

7 workers were activated, and all showed their status as ‘running’.
However, only one worker showed systems. The others were blank.
The average speed per worker was an abysmal 1.2k. RAM usage was at 62% and CPU at 95%, running at 4.01 ghz. The CPU was not overclocked.

I aborted GSB after 6 starts. Of the 12 systems shown in the single worker, only 11 were copied to the manager.
I ran 4 WFs after aborting, and they were excruciatingly slow. RAM usage was at 15%, CPU usage at 12%, running at 1.6 GHz. So there was no resource constraint.
When I clicked on a tab in GSB, it took 5 seconds to respond. This only happens to software if your CPU or ram are maxed out.

I haven’t used GSB since last September. At that time, a setup very similar to this, on an old i7, achieved speeds of 3.5k per worker.
As far as I can see, this software is regressing, not progressing. If anybody is using a build that has decent speed and relatively few bugs, please let me know.

Also, does anybody know what is the purpose of ‘machine resources’ in the settings?

admin - 25-2-2018 at 10:48 PM

Quote: Originally posted by cyrus68  
I generated systems on version 43.27 and the speed was absolutely atrocious. Here are the specs:

Hardware was i9 7900 with 64 gb ram

Instrument was NG with 15min data. The settings were the same as Peter’s with the following differences:
4 secondary data streams, operators * and /. Commission 2.5 and slippage 10 per trade.

7 workers were activated, and all showed their status as ‘running’.
However, only one worker showed systems. The others were blank.
The average speed per worker was an abysmal 1.2k. RAM usage was at 62% and CPU at 95%, running at 4.01 ghz. The CPU was not overclocked.

I aborted GSB after 6 starts. Of the 12 systems shown in the single worker, only 11 were copied to the manager.
I ran 4 WFs after aborting, and they were excruciatingly slow. RAM usage was at 15%, CPU usage at 12%, running at 1.6 GHz. So there was no resource constraint.
When I clicked on a tab in GSB, it took 5 seconds to respond. This only happens to software if your CPU or ram are maxed out.

I haven’t used GSB since last September. At that time, a setup very similar to this, on an old i7, achieved speeds of 3.5k per worker.
As far as I can see, this software is regressing, not progressing. If anybody is using a build that has decent speed and relatively few bugs, please let me know.

Also, does anybody know what is the purpose of ‘machine resources’ in the settings?

Do a support upload and I will check this. Check you don't have exceptions in the exception folder under GSB.
Machine resources is so you can schedule GSB not to run during certain days and hours, and to set the CPU mask / process priority of the GSB exe files.

admin - 25-2-2018 at 10:57 PM

This is on an i7 (not overclocked) and an i9 (mild overclock). It's on 30-min data, so it will be faster than 15-min.
For speed reasons, do not use different slippage and commission values in fitness vs reports; GSB has to calculate all the performance metrics twice if you make them different.
So my i9 speed is 27679 with 8 workers running. It will be faster with 12 workers.
speed-machine.png - 212kB

cyrus68 - 28-2-2018 at 12:43 AM

On doing some tests on the standalone version, it appears that GSB speed can fall substantially as model complexity grows. For NG, the included model ran at 5000 per minute, while mine achieved only 2400. The differences between the two were as follows:

Operators * and /, instead of *
NG 15 min, instead of 30 min
Secondary data 4x15 min, instead of 1x60 min
Commissions and slippage 2.5/2.5 and 10/10, instead of 0/0 and 0/0

It is difficult to say which factor has the biggest impact on speed. I suspect it is the number of secondary data streams. Anyway, for me, this clarifies some methodological issues.
Under the principle of parsimony, it is best to stay with a single operator.
As for the optimal bar size, I don’t yet have a technique for determining it.
As for the secondary data, principal components analysis could extract the information content of multiple data streams into the top components. But this is not feasible in GSB. So, the alternative approach would be to include only 2 data streams that have the highest information content.
As for the inclusion or exclusion of commissions and slippage in strategy building, opinion is divided on this issue. I am noncommittal but generally opt for inclusion if the number of trades is too big.

On the general issue of speed, GSB is faster than similar software. But whether newer builds are faster than older ones is unknown. To test it, one would have to run the same model on the same hardware but 2 different builds. I’m not going to try.

admin - 28-2-2018 at 12:54 AM

Quote: Originally posted by cyrus68  
On doing some tests on the standalone version, it appears that GSB speed can fall substantially as model complexity grows. For NG, the included model ran at 5000 per minute, while mine achieved only 2400. The differences between the two were as follows:

Operators * and /, instead of *
NG 15 min, instead of 30 min
Secondary data 4x15 min, instead of 1x60 min
Commissions and slippage 2.5/2.5 and 10/10, instead of 0/0 and 0/0

It is difficult to say which factor has the biggest impact on speed. I suspect it is the number of secondary data streams. Anyway, for me, this clarifies some methodological issues.
Under the principle of parsimony, it is best to stay with a single operator.
As for the optimal bar size, I don’t yet have a technique for determining it.
As for the secondary data, principal components analysis could extract the information content of multiple data streams into the top components. But this is not feasible in GSB. So, the alternative approach would be to include only 2 data streams that have the highest information content.
As for the inclusion or exclusion of commissions and slippage in strategy building, opinion is divided on this issue. I am noncommittal but generally opt for inclusion if the number of trades is too big.

On the general issue of speed, GSB is faster than similar software. But whether newer builds are faster than older ones is unknown. To test it, one would have to run the same model on the same hardware but 2 different builds. I’m not going to try.

These are good questions.
You would expect 15-min to be half the speed of 30-min, but it's a bit better than that.
In my tests today, if fitness slippage <> reports slippage there is about a 14% degradation. The value itself doesn't matter for speed, but they should be the same.
You can get about 20% more speed;
see http://www.trademaid.info/forum/post.php?action=reply&tid=92
I doubt adding the '/' operator makes a difference.
Also, going from 3 to 5 indicators will give a 10 to 20% degradation.
Backtest termination settings were also too aggressive in older builds, so many good systems were skipped, but higher system-generation speeds occurred.
More indicators might decrease speed, but more data streams certainly will.
Using a stop loss also reduces performance by about 10%.
Over time we will make speed improvements to GSB, but short term there are more pressing needs.

cyrus68 - 2-3-2018 at 01:10 AM

Let's say we save 6 systems and later load them in order to do a WF.
In the first case, we load all 6 in the manager and run WF.
In the second case, we load 2 in the manager and 2 in each of 2 workers and run WF on all 6.
Does this make any difference in the speed of processing?

admin - 2-3-2018 at 01:15 AM

Quote: Originally posted by cyrus68  
Let's say we save 6 systems and later load them in order to do a WF.
In the first case, we load all 6 in the manager and run WF.
In the second case, we load 2 in the manager and 2 in each of 2 workers and run WF on all 6.
Does this make any difference in the speed of processing?

Good question.
I suspect it's faster for the manager, as the manager doesn't do any work. I hope that next week managers will be able to pass WF jobs to workers, including cloud workers.
My guess is that beyond 8 WFs on a single GSB, it's then faster to do them single-threaded. It will depend on the CPU type etc.

Manager vs Workers

cotila1 - 3-3-2018 at 05:10 AM

I am not sure if I have this right. If I use 1 manager with, let's say, 3 workers, do the generic settings such as data files and genetic parameters need to be set only on the manager, or can I even set different settings in every single worker? In the latter case (different settings in every single worker), will they be taken into account by each worker?

Data N

cotila1 - 3-3-2018 at 05:39 AM

I am trying to add a Data3 and Data4 in addition to Data1 and Data2 in GSB, but I see only primary and secondary data. How do I do that?
Also, I am trying to add my own symbols to the contract list (Tools --> Contract List), but I can't see any sort of ADD button. Sorry if this sounds stupid.

Also, Peter, I see the documentation is spread quite widely across the forum. May I suggest a single GSB Users Guide that is updated at least at major releases? This would save bothering you with so many questions :-)

Carl - 3-3-2018 at 08:13 AM

Hi cotila1,

Data: you can add more than one datastream in the secondary datastream window.
Contract list: right click, clone, and edit the new cloned line.

cotila1 - 3-3-2018 at 09:44 AM

Quote: Originally posted by Carl  
Hi cotila1,

Data: you can add more than one datastream in the secondary datastream window.
Contract list: right click, clone, and edit the new cloned line.


Thanks Carl, when I am on the ''secondary data'' line I see the button with the dots. Once I press it, I can choose only 1 file. If I press it again, it lets me choose another file that replaces the old one. I think I am missing something :-(

Screenshot001.jpg - 13kB

Petzy - 3-3-2018 at 09:51 AM

Quote: Originally posted by cotila1  
Thanks Carl, when I am on the ''secondary data'' line I see the button with the dots. Once I press it, I can choose only 1 file. If I press it again, it lets me choose another file that replaces the old one. I think I am missing something :-(


Hi Cotila,
You choose the files the way you would in Windows Explorer: hold down CTRL and click on multiple files.

cotila1 - 3-3-2018 at 09:54 AM

Also, I'd like to use my own input files downloaded from TStation (see screenshot below), but it looks like I can't. The txt file is not accepted once I put it in the price data folder.


Screenshot002.jpg - 19kB

Petzy - 3-3-2018 at 10:15 AM

Quote: Originally posted by cotila1  
Also, I'd like to use my own input files downloaded from TStation (see screenshot below), but it looks like I can't. The txt file is not accepted once I put it in the price data folder.


It's the wrong format. The name of the file is important.

CL.30.Minute.900.1430.20180123
Symbol.minutes."Minute".start.end ... I think the rest doesn't matter
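
To make that naming pattern concrete, here is a small hypothetical check, assuming the convention is Symbol.BarMinutes."Minute".SessionStart.SessionEnd followed by anything else, as described above. GSB's actual parser may be stricter or looser, so treat this as a sketch rather than a specification.

import re

# Hypothetical check of the data file naming pattern described above:
#   Symbol.<minutes>.Minute.<session start>.<session end>.<anything else>
# e.g. CL.30.Minute.900.1430.20180123
# Whether trailing extras such as ".TZ=Exchange" are accepted is an
# assumption here; GSB's real rules may differ.

pattern = re.compile(r"^(?P<symbol>[A-Z0-9]+)\."
                     r"(?P<minutes>\d+)\.Minute\."
                     r"(?P<start>\d{3,4})\.(?P<end>\d{3,4})"
                     r"(?:\..*)?$")

for name in ["CL.30.Minute.900.1430.20180123",
             "CL.30.Minute.900.1430.20180123.TZ=Exchange",
             "CL_30min.txt"]:
    match = pattern.match(name)
    print(name, "->", "matches the pattern" if match else "does not match")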

cotila1 - 3-3-2018 at 10:26 AM

Quote: Originally posted by Petzy  
It's the wrong format. The name of the file is important.

CL.30.Minute.900.1430.20180123
Symbol.minutes."Minute".start.end ... I think the rest doesn't matter


So the name ''CL.30.Minute.900.1430.20180123.TZ=Exchange'' would not be OK unless I remove ''.TZ=Exchange''?

Still stuck on adding Data3, Data4...

Petzy - 3-3-2018 at 10:47 AM

I think it is just the "CL.30.Minute." part that is important. You will have to try. (Or find it in the documentation; there is a lot of good information there.)

I don't really know what you mean by the data2 etc.
I click the ...-button. Then I get a new window. I hold down the CTRL key and click on my files. When I do that I can choose multiple files. Then I click Open.
See the picture



cotila1 - 3-3-2018 at 01:57 PM

Quote: Originally posted by Petzy  
I think it is just the "CL.30.Minute." part that is important. You will have to try. (Or find it in the documentation; there is a lot of good information there.)

I don't really know what you mean by the data2 etc.
I click the ...-button. Then I get a new window. I hold down the CTRL key and click on my files. When I do that I can choose multiple files. Then I click Open.
See the picture



Thanks Petzy, I was not using the CTRL key; I don't know why I expected to load one file at a time. Anyway, solved, thanks.

Have you by chance also seen my question about the manager: if I use 1 manager with, let's say, 3 workers, do the generic settings such as data files and genetic parameters need to be set only on the manager, or can I set different settings in every single worker? In the latter case (different settings in every single worker), will they be taken into account by the respective worker?

Petzy - 3-3-2018 at 02:54 PM

Nope. The manager governs all workers.
But you can run many standalone instances with different settings.

Or you can run one manager with its cloud setting set to ”unik id 1” and another manager with cloud setting ”unik id 2”, and then start workers with the corresponding ids. But it sounds like you would be better off running separate standalone instances, at least if you are using one physical machine.
The latest quickstart guide explains the cloud settings very well.

Gregorian - 3-3-2018 at 09:03 PM

On 43.40 I've noticed the following in an environment with 5 workers on the same PC as the manager:

1. About 10% of the time, one of the workers will never start up. Close the manager and workers, try it again, and all workers start.

2. Despite having the same Performance Filter parameters in the App Settings for both manager and workers, not all of the systems generated that display on the workers' screens make it over to the manager's screen. Seems to be about a 25% decrease in systems carried over. Does something other than the Performance Filters affect this?

Mngr Performance Filters vs Workers

cotila1 - 4-3-2018 at 04:59 AM

Thanks Petzy. Yes, I understood from the documentation that the manager's settings steer the workers.
Based on this forum's experience, is that the case even when I set different Performance Filter parameters in the App Settings of the manager and of the workers?
I mean, if the Performance Filter parameters I set in the workers differ from the ones I set in the manager, are the workers' parameters simply ignored?

Thanks

admin - 4-3-2018 at 02:37 PM

Quote: Originally posted by Gregorian  
On 43.40 I've noticed the following in an environment with 5 workers on the same PC as the manager:

1. About 10% of the time, one of the workers will never start up. Close the manager and workers, try it again, and all workers start.

2. Despite having the same Performance Filter parameters in the App Settings for both manager and workers, not all of the systems generated that display on the workers' screens make it over to the manager's screen. Seems to be about a 25% decrease in systems carried over. Does something other than the Performance Filters affect this?


1) I normally set up my Windows 10 to look like Win 7. However, in the Windows 10 Task Manager I have once seen the status of a worker set to standby. This might be the cause. I will keep an eye out to see whether the problem occurs on my end. You should just start a new worker rather than close the existing workers / manager.
2) Are the same filters active on the worker and the manager?
I.e. the full-period performance filter. Note also that the sending of systems is normally delayed, but only by a minute or so.

Gregorian - 5-3-2018 at 05:59 PM

Quote: Originally posted by admin  
2) Are the same filters active on the worker and the manager?


Yes, as I said above, the Performance Filter parameters are identical for Manager and Workers. I double checked this again today.

See attached pics for an example of the discrepancy of strategies shown.

Manager.jpg - 759kB Worker 1 of 2.jpg - 967kB Worker 2 of 2.jpg - 702kB

admin - 5-3-2018 at 07:04 PM

Quote: Originally posted by Gregorian  
Yes, as I said above, the Performance Filter parameters are identical for Manager and Workers. I double checked this again today.

See attached pics for an example of the discrepancy of strategies shown.


The programmer spotted this issue on his machine yesterday and is looking into it.

The critical importance of thorough WF testing

Gregorian - 12-3-2018 at 09:27 PM

Just want to share with fellow GSB users a very important concept that was not obvious to me and therefore might not be evident to everyone else:

If you want your strategy to perform well going forward live, you must:

1. Go through the WF procedure as Peter has explained here and in his videos.

2. Use only strategies whose PAS is at least 40, preferably 50 or higher.

3. Use only strategies whose WF optimal and WF OOS curves are close or similar to one another.

I've wasted a lot of time trying out strategies that looked appealing but did not meet all of these criteria and ultimately lost money going forward. While this sort of due diligence takes time, it greatly increases the odds of ending up with a strategy that holds up going forward.

It's also worth pointing out that TSL, SQ, and AT either do not offer WF at all or fail to emphasize its importance, so on this essential front GSB stands out from its peers.
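
For anyone who exports their WF results for record keeping, here is a hypothetical screen along these lines. The file layout, the pas column, the per-strategy curve files, and the thresholds for "curves are close" are all my own assumptions, not GSB output formats; adjust them to whatever your own exports look like.

import numpy as np
import pandas as pd

# Hypothetical screen of walk-forward results against the criteria above.
# Assumed file layout (not a GSB export format):
#   wf_summary.csv  : one row per strategy, columns: id, pas
#   curves/<id>.csv : columns equity_opt, equity_oos (aligned per bar)

MIN_PAS = 40            # criterion 2: PAS of at least 40 (prefer 50+)
MIN_CURVE_CORR = 0.90   # criterion 3: WF optimal and WF OOS curves "close"
MAX_END_GAP = 0.25      # OOS final equity within 25% of the optimal run

summary = pd.read_csv("wf_summary.csv")
keep = []

for row in summary.itertuples():
    if row.pas < MIN_PAS:
        continue
    curves = pd.read_csv("curves/%s.csv" % row.id)
    corr = np.corrcoef(curves["equity_opt"], curves["equity_oos"])[0, 1]
    end_gap = abs(curves["equity_oos"].iloc[-1] / curves["equity_opt"].iloc[-1] - 1)
    if corr >= MIN_CURVE_CORR and end_gap <= MAX_END_GAP:
        keep.append(row.id)

print("strategies worth considering for live trading:", keep)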
