GSB Forums

General support questions.


admin - 12-3-2018 at 09:41 PM

Quote: Originally posted by Gregorian  
Just want to share with fellow GSB users a very important concept that was not obvious to me and therefore might not be evident to everyone else:

If you want your strategy to perform well going forward live, you must:

1. Go through the WF procedure as Peter has explained here and in his videos.

2. Use only strategies whose PAS is at least 40, preferably 50 or higher.

3. Use only strategies whose WF optimal and WF OOS curves are close or similar to one another.

I've wasted a lot of time trying out strategies that looked appealing but did not meet all of these criteria and ultimately lost money going forward. While conducting this sort of due diligence takes time, the odds of your ending up with a reliable strategy going forward will be greatly increased.

It's also worth pointing out that TSL, SQ, and AT either do not offer WF at all or fail to emphasize its importance, so on this essential front GSB stands out from its peers.

Thanks for restating this. I am a very big believer in the WF process.
A parameter anchored stability of 50 is ideal, but I do go lower in some circumstances. If a parameter goes from 99 to 98, it will drop the stability score, but in reality this 1% change in settings won't change things significantly.
This is touched on in this video.
https://www.youtube.com/watch?v=UNAEy1wYgho&t=22s
There are more ideas in the GSB pipeline in this area to come.
However, my logic is: the higher the parameter stability score, the more likely the system is to do well out of sample. This doesn't guarantee success, but it helps. A GSB system portfolio can now be very diversified.
We have nat gas, crude oil, soybeans, copper, VXX, ES, NQ, etc.
Sorry you lost some money, but systems also need time to work, and losing money for a period in a way that matches in-sample performance is to be expected. Market conditions are another factor: ES has been flat for a long time, but recently systems are going to either make or lose more than is normal.
Another tip: use the entire contract data length from where the data is not patchy. The exception is ES, where I often start from 2001, though 1997 is also valid.
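For anyone who wants to experiment with the tolerance idea outside GSB (a 99 to 98 shift is only a ~1% change and arguably shouldn't count as instability), here is a sketch in Python. This is a hypothetical scoring function of my own, not GSB's actual PAS formula:

```python
def anchored_stability(runs, tolerance=0.02):
    """Score 0-100: the share of parameters in each earlier walk-forward
    run that stay within `tolerance` (relative) of the FINAL run's value,
    so a 99 -> 98 move (~1%) passes a 2% tolerance instead of denting
    the score."""
    final = runs[-1]
    matches = total = 0
    for run in runs[:-1]:
        for a, b in zip(run, final):
            total += 1
            if abs(a - b) <= tolerance * max(abs(b), 1e-9):
                matches += 1
    return 100.0 * matches / total if total else 100.0

# Each row is one walk-forward run's optimal parameter set (two params).
print(anchored_stability([[99, 20], [98, 20], [99, 20], [98, 20]]))  # → 100.0
print(anchored_stability([[50, 20], [98, 20]]))                      # → 50.0
```

With a relative tolerance, trivial one-tick drifts no longer drag the score down, while a genuine jump in the final run still shows up as a low score.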

cyrus68 - 13-3-2018 at 06:29 AM

Interesting observations by Gregorian. I have 2 examples from my recent experiments with NG. One system has excellent stability scores, but its recent performance in the market is mediocre.

The second system has rather modest stability scores, but good performance in recent market conditions. Its Pearson score on the optimal curve is 0.993. Also, visually, you can see that its Pearson score post-2010 looks to be excellent. The parameters are stable from period 7 onward.

My own inclination would be to trade the second system but monitor its performance, and replace it with another when its performance deteriorates.








cyrus68 - 13-3-2018 at 06:34 AM

I tried to post the images via Imgur but it did not work. Maybe the attachments will work.

NG D8S1S15.png - 33kB
Param NG D8S1S15.png - 12kB
NG D8S1S21.png - 52kB
Param NG D8S1S21.png - 13kB

Gregorian - 13-3-2018 at 12:02 PM

cyrus68: Yes, even after all three WF steps have been performed, a strategy can still disappoint. However this is the best way to increase a strategy's chances of success. Your concept of monitoring each strategy is sound.

As Peter suggested, a diversified set of strategies that have passed these steps is probably the best way to achieve ongoing success. Likely not more than a few of them will fail, so the good ones should help you achieve overall profitability.

I often wonder if the big boys at Goldman Sachs et al go through this sort of analysis with the tools they have. They may have vast computing power at their disposal, but the underlying principles and challenges they face are probably the same.

admin - 13-3-2018 at 02:50 PM

Quote: Originally posted by cyrus68  
I tried to post the images via Imgur but it did not work. Maybe the attachments will work.

I'm sympathetic to your argument. The first curve is not such a consistent performer; not all runs are profitable.
On the second system, you could put a ruler on the line and see it's nice and linear past the volatile 2008 period.
You can have 100% stability, but the curves are all poor. It doesn't happen a lot, but it does happen. Note that in your second set of parameter values you can't see the right-most side. No big deal for the forum post, but make sure you check them all.

rws - 13-3-2018 at 03:05 PM

I think Goldman Sachs has many more ways to evaluate trading performance.


Besides walk-forward, which has been in AmiBroker for more than 10 years, Dr. Howard Bandy has, for example, two additional measures.

http://blueowlpress.com/system-development/all-things-being-...

These measures are used in the AmiBroker community, and some users have reported better and more robust OOS performance.





Quote: Originally posted by Gregorian  
cyrus68: Yes, even after all three WF steps have been performed, a strategy can still disappoint. However this is the best way to increase a strategy's chances of success. Your concept of monitoring each strategy is sound.

As Peter suggested, a diversified set of strategies that have passed these steps is probably the best way to achieve ongoing success. Likely not more than a few of them will fail, so the good ones should help you achieve overall profitability.

I often wonder if the big boys at Goldman Sachs et al go through this sort of analysis with the tools they have. They may have vast computing power at their disposal, but the underlying principles and challenges they face are probably the same.

admin - 13-3-2018 at 03:20 PM

Quote: Originally posted by rws  
I think Goldman Sachs has many more ways to evaluate trading performance.


Besides walk-forward, which has been in AmiBroker for more than 10 years, Dr. Howard Bandy has, for example, two additional measures.

http://blueowlpress.com/system-development/all-things-being-...

These measures are used in the AmiBroker community, and some users have reported better and more robust OOS performance.


I didn't fully understand that article, but part of it seems to be sensible money management and trying to figure out true risk.
I'm open to more education on it.
Now that I have so many new and diversified markets, I'm trading a max of 2 contracts on any one system (normally 1) and spreading capital fairly evenly over diversified markets. Portfolio Analyst Pro is excellent for this now, as you can limit contracts per system and the total per symbol.

cyrus68 - 13-3-2018 at 10:57 PM

Gregorian: I am not about to trade either of the 2 systems. They were just examples. The main point is that linearity matters.

You are absolutely right about having a diversified portfolio and testing its performance. Peter's PA is useful, and you can also do quite a lot in Excel; these can be complementary. I also use MSA for various things. I don't use TS's Portfolio Maestro.

Institutions such as Goldman often use VaR or cVaR for risk assessment; cVaR is better. What I currently can't do - and they are capable of doing - is to dynamically adjust portfolio weights according to changing market conditions.
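For reference, the historical versions of these risk measures are simple to compute from a daily P&L series. The sketch below is the plain textbook recipe (sort losses, take a tail percentile, average the tail), with invented P&L numbers for illustration:

```python
def historical_var_cvar(returns, level=0.95):
    """Historical VaR and cVaR (expected shortfall) at the given
    confidence level, reported as positive loss figures."""
    losses = sorted(-r for r in returns)   # losses, ascending
    k = int(level * len(losses))           # start of the worst (1-level) tail
    var = losses[k]                        # loss at the cutoff
    tail = losses[k:]
    cvar = sum(tail) / len(tail)           # mean loss beyond the cutoff
    return var, cvar

# Invented daily P&L in dollars: mostly small wins, a few bad days.
pnl = [120, 80, -50, 200, -400, 90, -30, 150, -250, 60]
var, cvar = historical_var_cvar(pnl, level=0.80)  # → var 250, cvar 325.0
```

cVaR averages the whole tail beyond the VaR cutoff rather than reading off a single point, which is why it is generally considered the more informative of the two.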

rws: I got Bandy's book 2 years ago and unfortunately laid it down after a few chapters. I will look at it again. He was into Python programming and I'm more into Matlab.

Carl - 14-3-2018 at 03:12 AM

So to investigate this, what we need is a bunch of systems and their metrics.
See what systems are still profitable in the future and investigate their historical metrics.

The backtest and WF results in GSB are great.
The only thing I am worried about is the fact that a GSB system on, for example, CL isn't profitable on the highly correlated RB, and vice versa. Some traders use this multimarket test to see whether a strategy is robust and not overfit.
Any thoughts?

admin - 14-3-2018 at 03:17 AM

Quote: Originally posted by Carl  
So to investigate this, what we need is a bunch of systems and their metrics.
See what systems are still profitable in the future and investigate their historical metrics.

The backtest and WF results in GSB are great.
The only thing I am worried about is the fact that a GSB system on, for example, CL isn't profitable on the highly correlated RB, and vice versa. Some traders use this multimarket test to see whether a strategy is robust and not overfit.
Any thoughts?

Often the same can be said for EMD vs. the Russell 2000, but it doesn't always apply. ES SOMETIMES works on ER and EMD as well.
What you say about CL & RB MIGHT work on some systems, but not others.
GSBsys1ES was also ported to NQ (E-mini Nasdaq), but the data2 was changed and so was the parameter set. Eleven months of out-of-sample results on NQ were excellent.
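The multimarket test is easy to prototype outside GSB: fix the parameters, run the same rules on a correlated market, and compare. The sketch below uses a toy SMA-crossover system and synthetic correlated random walks in place of a real GSB system and real CL/RB data:

```python
import random

def sma(xs, n, i):
    """Simple moving average of xs over the n bars ending at index i."""
    return sum(xs[i - n + 1:i + 1]) / n

def crossover_pnl(prices, fast=5, slow=20):
    """Total point P&L of a long/flat fast/slow SMA crossover with FIXED
    parameters -- the same settings applied unchanged to each market."""
    pnl, pos = 0.0, 0
    for i in range(slow, len(prices)):
        pnl += pos * (prices[i] - prices[i - 1])   # position set on the prior bar
        pos = 1 if sma(prices, fast, i) > sma(prices, slow, i) else 0
    return pnl

# Two synthetic, correlated random walks standing in for CL and RB.
random.seed(1)
common = [random.gauss(0.02, 1.0) for _ in range(500)]
cl, rb = [100.0], [100.0]
for c in common:
    cl.append(cl[-1] + c + random.gauss(0, 0.3))
    rb.append(rb[-1] + 0.8 * c + random.gauss(0, 0.3))

# The multimarket question: does the SAME parameter set hold up on the
# correlated market, or only on the one it was fit to?
print(crossover_pnl(cl), crossover_pnl(rb))
```

A system that only survives on the series it was fit to is a warning sign, though as noted a failed port is not conclusive on its own - even closely related markets can behave differently.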

admin - 14-3-2018 at 04:03 AM

You could test how well the NQ system does on ES without changing parameters, optionally changing the data2.
Regardless of the results, I don't see that as a conclusive test. From memory, ER wasn't so good on GSBsys1ES, even though it is closer to ES than NQ is overall.

rws - 14-3-2018 at 06:22 PM

Because GSB is a peak optimizer, it is always hard to have consistent performance. It is best to search for a stable range of parameters, which should not necessarily be the set with the highest performance if that performance is a peak. I understand doing WF is a good way of looking for stable parameters, but it is no guarantee. Once you have a stable range of parameters, performance should remain stable under small variations of those parameters, just as a market can shift slightly.

Instead of optimizing for one index, I optimize (with a non-peak optimizer) on all the tickers of an index, and so far my impression is that this gives more robust OOS performance. I am currently testing this idea, varying how many stocks in the index have a buy signal and how they are weighted.





Quote: Originally posted by admin  
You could test how well the NQ system does on ES without changing parameters, optionally changing the data2.
Regardless of the results, I don't see that as a conclusive test. From memory, ER wasn't so good on GSBsys1ES, even though it is closer to ES than NQ is overall.

cyrus68 - 15-3-2018 at 12:30 AM

A useful feature would be Monte Carlo simulation that allowed parameter variation, within limits. Currently, the only kind of Monte Carlo simulation that is possible is price randomisation, done outside GSB.
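A minimal version of the "outside GSB" Monte Carlo is sketched below - here the shuffle-the-order variant applied to daily P&L, which yields a distribution of drawdowns rather than the single historical one. The P&L figures are invented for illustration:

```python
import random

def max_drawdown(equity):
    """Largest peak-to-trough drop along an equity curve."""
    peak, dd = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        dd = max(dd, peak - x)
    return dd

def bootstrap_drawdowns(daily_pnl, n_runs=1000, seed=42):
    """Shuffle the order of the daily P&L values to build alternative
    equity curves, and collect each curve's max drawdown."""
    rng = random.Random(seed)
    dds = []
    for _ in range(n_runs):
        sample = daily_pnl[:]
        rng.shuffle(sample)
        equity, cum = [0.0], 0.0       # start each run from flat
        for p in sample:
            cum += p
            equity.append(cum)
        dds.append(max_drawdown(equity))
    return sorted(dds)

# Invented daily P&L figures for illustration only.
pnl = [120, 80, -50, 200, -400, 90, -30, 150, -250, 60]
dds = bootstrap_drawdowns(pnl)
worst_5pct = dds[int(0.95 * len(dds))]   # 95th-percentile drawdown
```

Parameter-variation Monte Carlo, by contrast, needs to re-run the backtest itself with jittered settings, which is why it would have to be a GSB feature rather than a post-processing script like this.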

admin - 15-3-2018 at 04:10 AM

Quote: Originally posted by cyrus68  
A useful feature would be Monte Carlo simulation that allowed parameter variation, within limits. Currently, the only kind of Monte Carlo simulation that is possible is price randomisation, done outside GSB.

The newest version of EWFO has updates to the nth feature.
You can see a graph of the 1st to the 50th (for example) best parameter sets as 50 equity lines. It's a lot of work compared to a WF in GSB.
What is Nth? It shows the best to the nth-best parameter sets and plots them. To do this you would have to re-optimize your GSB (or other) system in TS, which would take a lot of time, as TS is slow. At least you have an idea of the parameter range expected from GSB's own WF. I'm not convinced it's worth the effort.
There are always more improvements to be made in methodology, but I feel the current GSB setup is very fast from a human perspective, and robust. Nothing, however, is bulletproof, and there is always risk.
Some markets are easier than others, and the harder markets can be very CPU intensive, requiring GSB to spend a lot of processing power.
Copper is deadly if you don't do a WF. However, I've found the diversification in copper excellent compared to other markets; Portfolio Analyst chooses a lot of copper systems. This is likely the subject of my next video.

nth.png - 204kB

rws - 15-3-2018 at 06:13 AM

Peter,

I understand you looked at Adaptrade. Why not look at the additional confirmation tools, like the price randomisation and Monte Carlo that are in Adaptrade? They could provide additional confirmation, which would make GSB better.


Quote: Originally posted by cyrus68  
A useful feature would be Monte Carlo simulation that allowed parameter variation, within limits. Currently, the only kind of Monte Carlo simulation that is possible is price randomisation, done outside GSB.

admin - 15-3-2018 at 03:06 PM

Quote: Originally posted by rws  
Peter,

I understand you looked at Adaptrade. Why not look at the additional confirmation tools, like the price randomisation and Monte Carlo that are in Adaptrade? They could provide additional confirmation, which would make GSB better.


Quote: Originally posted by cyrus68  
A useful feature would be Monte Carlo simulation that allowed parameter variation, within limits. Currently, the only kind of Monte Carlo simulation that is possible is price randomisation, done outside GSB.

Portfolio Analyst Pro has Monte Carlo. While I don't mind features such as price randomisation, I'm not sure to what degree it's helpful.
There are also much more important issues in the job queue.
The most important and significant are more secondary and pattern filters, etc., plus other features like enhanced order exits.

Exit at Daily Profit Goal

Gregorian - 16-3-2018 at 02:00 PM

On more than a few days, my portfolio of GSB strategies will, for example, achieve a peak profit of around $1,600, then close the day at only around $800 of profit. Some other auto-trading systems have a notion of a "Daily Goal", where trading is stopped and positions are closed when a daily target profit level (or loss) is achieved.

I've done extensive testing with [simple, not ATR] trailing stops - using my own code, as the built-in EL call backtests too optimistically - but have never gotten them to increase the profit of my strategies, on a strategy-by-strategy basis. Experience suggests the Daily Goal concept would be better, however, especially when applied across all strategies being run.

Has anybody had any success with this concept? It would be easy to write into a GSB strategy, but I don't know how to write a routine that would control all strategies. This concept works best when profit across all strategies/charts is the criterion.

Have you found a third-party strategy that accomplishes this across all charts? All I've found so far is "DailyProfitTarget_ForAllStrategies" in the TS App Store, but it only works on the strategy(s) on a single chart, not across all open charts.

Peter has already stated that this is not on his to-do list, so for the time being any solution has to come from elsewhere.

admin - 16-3-2018 at 02:29 PM

Quote: Originally posted by Gregorian  
On more than a few days, my portfolio of GSB strategies will, for example, achieve a peak profit of around $1,600, then close the day at only around $800 of profit. Some other auto-trading systems have a notion of a "Daily Goal", where trading is stopped and positions are closed when a daily target profit level (or loss) is achieved.

I've done extensive testing with [simple, not ATR] trailing stops - using my own code, as the built-in EL call backtests too optimistically - but have never gotten them to increase the profit of my strategies, on a strategy-by-strategy basis. Experience suggests the Daily Goal concept would be better, however, especially when applied across all strategies being run.

Has anybody had any success with this concept? It would be easy to write into a GSB strategy, but I don't know how to write a routine that would control all strategies. This concept works best when profit across all strategies/charts is the criterion.

Have you found a third-party strategy that accomplishes this across all charts? All I've found so far is "DailyProfitTarget_ForAllStrategies" in the TS App Store, but it only works on the strategy(s) on a single chart, not across all open charts.

Peter has already stated that this is not on his to-do list, so for the time being any solution has to come from elsewhere.

Long term, I think the idea is a bad one, sorry to say. Look at 2018: there have been spectacular big moves, and it means you would only get part of them, while you will still get all the choppy moves.
It might work better with a daily loss limit.
It's not the same, but it's related: yesterday I put 2 copper systems on the same chart. Much better metrics and no increase in drawdown, but it lost 30% of the overall profit compared to running them in separate charts - about $6k of extra profit in the last year. Basically, when there is a big profitable move, most systems trigger.

cotila1 - 17-3-2018 at 12:28 PM

I've got PAS = 0 for the WF-ed system; however, there is high parameter stability (see the other screenshot).
Isn't it a bit strange in this case that the PAS is equal to 0 despite the high parameter stability?

Screenshot.jpg - 185kB
Screenshot001.jpg - 212kB

admin - 18-3-2018 at 03:45 PM

Quote: Originally posted by cotila1  
I've got PAS = 0 for the WF-ed system; however, there is high parameter stability (see the other screenshot).
Isn't it a bit strange in this case that the PAS is equal to 0 despite the high parameter stability?

Anchored compares the last line to the others, and in your case every comparison is a mismatch. Rolling compares the lines next to each other.
The last line has a very big change in parameters. You could compare the last and second-to-last parameter sets in TS and compare the equity curves. Unless the results don't change significantly, I would not use that system.

cotila1 - 19-3-2018 at 01:45 AM

Quote: Originally posted by admin  
Quote: Originally posted by cotila1  
I've got PAS = 0 for the WF-ed system; however, there is high parameter stability (see the other screenshot).
Isn't it a bit strange in this case that the PAS is equal to 0 despite the high parameter stability?

Anchored compares the last line to the others, and in your case every comparison is a mismatch. Rolling compares the lines next to each other.
The last line has a very big change in parameters. You could compare the last and second-to-last parameter sets in TS and compare the equity curves. Unless the results don't change significantly, I would not use that system.


Thanks, so in general what are the minimum PRS and PAS values you would recommend?

admin - 19-3-2018 at 04:26 AM

Quote: Originally posted by cotila1  
Quote: Originally posted by admin  
Quote: Originally posted by cotila1  
I've got PAS = 0 for the WF-ed system; however, there is high parameter stability (see the other screenshot).
Isn't it a bit strange in this case that the PAS is equal to 0 despite the high parameter stability?

Anchored compares the last line to the others, and in your case every comparison is a mismatch. Rolling compares the lines next to each other.
The last line has a very big change in parameters. You could compare the last and second-to-last parameter sets in TS and compare the equity curves. Unless the results don't change significantly, I would not use that system.

Thanks, so in general what are the minimum PRS and PAS values you would recommend?

The higher the better, but I prefer 30 or more, with the last parameters not changing significantly. A change like 95 to 96 will detract from the stability score, but in real terms such a small change is not significant. I'm less concerned with rolling stability.
I think I spoke about this in the last YouTube video I did.

44.09 Workers don't start

Gregorian - 20-3-2018 at 10:36 PM

Since upgrading to 44.09, my workers on the same machine as the manager do not start. The manager remains in Waiting mode. None of the Workplace settings were changed on the workers or manager. The Standalone version works fine. Suggestions?

admin - 20-3-2018 at 10:40 PM

Quote: Originally posted by Gregorian  
Since upgrading to 44.09, my workers on the same machine as the manager do not start. The manager remains in Waiting mode. None of the Workplace settings were changed on the workers or manager. The Standalone version works fine. Suggestions?

I have the same issue, regardless of version.
It's taking about 10 to 15 minutes for the workers to start.
I'm aiming to fix it ASAP, but it's a job for one of my programmers.
I rebooted the SQL server, and it didn't help.

cyrus68 - 21-3-2018 at 08:58 AM

I can confirm that 44.03 suffers from the 10 min delay problem too. It worked fine yesterday.
The workers are on one machine, and I am not doing any cloud computing.
It must have something to do with the need for GSB to connect to the server, and ensuing problems.

admin - 21-3-2018 at 02:54 PM

Quote: Originally posted by cyrus68  
I can confirm that 44.03 suffers from the 10 min delay problem too. It worked fine yesterday.
The workers are on one machine, and I am not doing any cloud computing.
It must have something to do with the need for GSB to connect to the server, and ensuing problems.

I'm keen to resolve this ASAP, but my programmer was not working yesterday.
I don't have SQL Server skills, so I'm waiting for that to happen. It's number 1 on the job list. The cloud is working, though, as I have 44 GSB workers right now.

admin - 21-3-2018 at 03:15 PM

We had 381,000 systems not assigned to a manager, and the database was close to full. I'm hoping this will fix it - I think my current .11 build does. I will upload the .11 build.

4K scaling not as good in 44.11

Gregorian - 22-3-2018 at 09:19 AM

Prior to 44.11, if we set "Override high DPI scaling behavior" in the Compatibility properties of a shortcut to GSB, fonts scaled quite well in 4K. As of 44.11, that is no longer the case. Fonts scale strangely. It's worse if we uncheck that option. Until we can get a proper 4K implementation, going back to the previous scaling would be better for now.

admin - 22-3-2018 at 02:26 PM

Quote: Originally posted by Gregorian  
Prior to 44.11, if we set "Override high DPI scaling behavior" in the Compatibility properties of a shortcut to GSB, fonts scaled quite well in 4K. As of 44.11, that is no longer the case. Fonts scale strangely. It's worse if we uncheck that option. Until we can get a proper 4K implementation, going back to the previous scaling would be better for now.

I'm not aware that this has changed, but I recently moved to a 4K monitor and the GUI doesn't scale as well as I'd like. I will look into this.

cyrus68 - 23-3-2018 at 12:10 AM

Sorry to hear about your scaling problem in 4k.
My centre monitor is 34-inch 3440x1440, and GSB looks fine on default fonts in 44.11.

admin - 23-3-2018 at 12:14 AM

Quote: Originally posted by cyrus68  
Sorry to hear about your scaling problem in 4k.
My centre monitor is 34-inch 3440x1440, and GSB looks fine on default fonts in 44.11.

Which build did you find better: .04, .09, or .11?
Otherwise we will reverse the last scaling changes.

Petzy - 23-3-2018 at 12:42 AM

For me .09 was better.

I am not using any high-resolution screens. I connect to my machines with TeamViewer, and the ones that I have on 1024x768 have the text a little too big.

cyrus68 - 23-3-2018 at 12:49 AM

I think the font size in 44.11 is actually larger than on previous builds, though the font size at the top of the graph is rather small.
Overall, I prefer the default font size in .09 and .04

cyrus68 - 23-3-2018 at 10:31 PM

I ran 44.11 again. This time the colour coding of the metrics at the top of the lower panel (red/green/blue/brown...etc..) has disappeared. It's all white.
Also, the play and pause buttons of the manager are greyed out.

admin - 23-3-2018 at 11:35 PM

Quote: Originally posted by cyrus68  
I ran 44.11 again. This time the colour coding of the metrics at the top of the lower panel (red/green/blue/brown...etc..) has disappeared. It's all white.
Also, the play and pause buttons of the manager are greyed out.

I think the paused issue is fixed in the .13 build (not yet released).
The color metrics disappearing is intermittent. Not sure what makes it come and go.

cyrus68 - 26-3-2018 at 12:59 PM

I did WF only - on saved systems - without simultaneous system generation, in 44.14. All WF was done locally - no cloud.
With similar data sets, settings, cpu usage ...etc.. it took almost twice as long as in 44.11 and 44.09.
At times, the GUI was jittery and looked as though it was seizing up, all on its own.
Strange behaviour that I cannot account for.

Obviously, I'm going back to 44.09; unless there is something else that GSB is now doing in the background that also compromises 44.09.
I will try it tomorrow.

admin - 26-3-2018 at 02:47 PM

Quote: Originally posted by cyrus68  
I did WF only - on saved systems - without simultaneous system generation, in 44.14. All WF was done locally - no cloud.
With similar data sets, settings, cpu usage ...etc.. it took almost twice as long as in 44.11 and 44.09.
At times, the GUI was jittery and looked as though it was seizing up, all on its own.
Strange behaviour that I cannot account for.

Obviously, I'm going back to 44.09; unless there is something else that GSB is now doing in the background that also compromises 44.09.
I will try it tomorrow.

I'm not aware of any significant difference in the main body of GSB code. How many systems did you WF at the same time, and was it multi-threaded or single? Multi-threaded is much faster if you do a few WFs, but if you do lots, single-threaded is much faster.
Maybe do a help support upload and I will try the exact systems and setup you used.

cyrus68 - 26-3-2018 at 10:12 PM

There were only 6 WF (multi-threaded), with similar data sets, settings ...etc... as under 44.11
Loads of cpu and ram slack available. But this is usually the case when you do WF only.
The GUI looked like it was about to crash. Truly bizarre.

I thought multi-threaded WF was always faster. I have done as much as 9 at a time on saved systems.
I don't do WF while generating systems because I already have the cpu running close to 100%.
If I try WF in the cloud, it slows down the speed of system generation. So it is not a good option.

admin - 26-3-2018 at 10:24 PM

Quote: Originally posted by cyrus68  
There were only 6 WF (multi-threaded), with similar data sets, settings ...etc... as under 44.11
Loads of cpu and ram slack available. But this is usually the case when you do WF only.
The GUI looked like it was about to crash. Truly bizarre.

I thought multi-threaded WF was always faster. I have done as much as 9 at a time on saved systems.
I don't do WF while generating systems because I already have the cpu running close to 100%.
If I try WF in the cloud, it slows down the speed of system generation. So it is not a good option.

I think 6 multi-threaded WFs is too many. It will work, but it's likely faster single-threaded, and the GUI is also likely to be slow. Cloud workers have a soft-coded limit of 2 multi-threaded, and I think 10 single-threaded. WF in the cloud should use near-zero resources on the manager; there is no way it should slow system generation down unless the worker is on the same PC as the manager.
Note also that there can be significant variation in speed from one test to another, as the random seed affects the speed.
You could test this, as 44.14 allows you to pause the manager and do WF on the cloud. You should see your CPU usage stay very low.
I think the workers will also have to be 44.14 to do WF while the manager is paused.
I normally send WF to the cloud, one at a time, multi-threaded.
But I have a lot of workers - more than almost all users.

Walk Forward Speeds

kelsotrader - 27-3-2018 at 10:17 PM

Quote: Originally posted by admin  
Quote: Originally posted by cyrus68  
There were only 6 WF (multi-threaded), with similar data sets, settings ...etc... as under 44.11
Loads of cpu and ram slack available. But this is usually the case when you do WF only.
The GUI looked like it was about to crash. Truly bizarre.

I thought multi-threaded WF was always faster. I have done as much as 9 at a time on saved systems.
I don't do WF while generating systems because I already have the cpu running close to 100%.
If I try WF in the cloud, it slows down the speed of system generation. So it is not a good option.

I think 6 multi-threaded WFs is too many. It will work, but it's likely faster single-threaded, and the GUI is also likely to be slow. Cloud workers have a soft-coded limit of 2 multi-threaded, and I think 10 single-threaded. WF in the cloud should use near-zero resources on the manager; there is no way it should slow system generation down unless the worker is on the same PC as the manager.
Note also that there can be significant variation in speed from one test to another, as the random seed affects the speed.
You could test this, as 44.14 allows you to pause the manager and do WF on the cloud. You should see your CPU usage stay very low.
I think the workers will also have to be 44.14 to do WF while the manager is paused.
I normally send WF to the cloud, one at a time, multi-threaded.
But I have a lot of workers - more than almost all users.



That's interesting and explains why my WF speeds have been extremely slow.
I WF batches of systems at a time, and it was taking longer to do the WF tests than to generate systems.

admin - 27-3-2018 at 10:27 PM

There is a setting for the delay between each WF, and its default is conservative. A WF will use more RAM, but it takes some time to do so, which means GSB could underestimate the amount of RAM it has access to.
We can't afford for people's PCs/servers to crash. You can reduce this timeout value and limit the number of WFs per worker.
Before this, I submitted 100 WFs to a server with 64 GB of RAM. Many hours later it crashed due to lack of RAM.
I am hoping to have batch multi-threaded WF, where jobs go into a queue if you do too many. I still like this, as finished WFs start coming in quickly.

wf-settings-dealy.png - 39kB

cyrus68 - 28-3-2018 at 01:37 AM

I have 64 GB of RAM on an i9. GSB doesn't get anywhere near using all of it.
WF is mostly deadly slow (yes, I know it depends on your datasets, settings, etc.) and the GUI pretty much freezes up.

I run WF in the manager on saved systems. What is the role of workers in this context? Should you activate workers and run WF in them?

Doing 100 WFs on a server with 64 GB of RAM should only be tried by Harry Potter.

Until I can think of something better, I plan on running WF on saved systems - overnight - using an i7 with 32 GB of RAM.

admin - 28-3-2018 at 01:52 AM

Quote: Originally posted by cyrus68  
I have 64 GB of RAM on an i9. GSB doesn't get anywhere near using all of it.
WF is mostly deadly slow (yes, I know it depends on your datasets, settings, etc.) and the GUI pretty much freezes up.

I run WF in the manager on saved systems. What is the role of workers in this context? Should you activate workers and run WF in them?

Doing 100 WFs on a server with 64 GB of RAM should only be tried by Harry Potter.

Until I can think of something better, I plan on running WF on saved systems - overnight - using an i7 with 32 GB of RAM.

I think it's better to send WF to the workers; the manager is then more responsive. I would expect the manager to be sluggish if it's building systems and doing 4 to 8 multi-threaded WFs. That's one of the reasons I only do 2 multi-threaded WFs on the manager.
In 44.18 the manager can also be stopped while sending WF to the cloud. Shortly, I'm hoping, you will be able to load saved systems and send their WF to the cloud.
Unless you do a support upload with sample saved systems, I can't diagnose your issues very well.
Sending 100 WFs to one machine was a mistake, but it's good that the issue was picked up, as other users will do the same at some stage.
Did you also try a machine reboot, and lowering the delay-between-WF setting? (See the post above.)

cyrus68 - 28-3-2018 at 02:34 AM

I don't do WF while GSB is generating systems because I run enough workers to get cpu usage close to 100%.
In this context, I tried one multi-threaded WF sent to the cloud. It resulted in a slowdown in the speed of system generation.
Given my goal of maximising the speed of system generation, running WF in the cloud is not for me.

I do WF on saved systems, with no simultaneous system generation going on in GSB. What I would like to do is to maximise the speed of WF in this context. I will try your suggestion of running the WF in workers rather than the manager. I will have to figure out how many workers to load to optimise use of cpu and ram. 2 WF x 5 workers may be a starting point.

Also worth trying out the suggestion of using single-threaded WF.

I am currently using 44.09. I will try 44.18, in due course, to see if speed and GUI issues have improved.

admin - 28-3-2018 at 03:25 PM

Quote: Originally posted by cyrus68  
I don't do WF while GSB is generating systems because I run enough workers to get cpu usage close to 100%.
In this context, I tried one multi-threaded WF sent to the cloud.It resulted in a slowdown in the speed of system generation.
Given my goal of maximising the speed of system generation, running WF in the cloud is not for me.

I do WF on saved systems, with no simultaneous system generation going on in GSB. What I would like to do is to maximise the speed of WF in this context. I will try your suggestion of running the WF in workers rather than the manager. I will have to figure out how many workers to load to optimise use of cpu and ram. 2 WF x 5 workers may be a starting point.

Also worth trying out the suggestion of using single-threaded WF.

I am currently using 44.09. I will try 44.18, in due course, to see if speed and GUI issues have improved.


When you send WF to the cloud, are you sending it to your local PC(s) or to someone else's PC in the cloud? If the WF is sent to your own PC, system generation will definitely slow down.
Best to use gsbcloud3_password1234 for version .20 onwards.
The cloud is a bit messy in that there are 3 versions of GSB running.
.04 with cloud1 will be killed at the end of the month.
I suspect we miscommunicated: you are doing WF on your own workers, and it doesn't surprise me that this slows things down.
It might be faster to use the cloud, but if you don't want to do that, make some workers dedicated to doing WF. You could pause them once they connect to the manager and let them do WF only, or you could save systems on the manager and load them on the worker(s).
On the other workers you could set the max WF (under App Settings, Workplace) to zero. There is no WF queue yet on managers, but there is on workers.
I prefer to limit the number of concurrent WF jobs and keep the rest in a job queue.

cyrus68 - 29-3-2018 at 11:24 PM

Doing single-threaded WF, locally, on a multitude of saved systems is many times faster than multi-threaded. Like the tortoise and the hare.

This is counterintuitive. I thought the reason for running multi-threaded WF was to gain speed.

I'm not sure about the max settings for WF in Workplace. I've set them to 10 each.

admin - 30-3-2018 at 12:36 AM

Quote: Originally posted by cyrus68  
Doing single-threaded WF, locally, on a multitude of saved systems is many times faster than multi-threaded. Like the tortoise and the hare.

This is counter intuitive. I thought the reason for running multi-threaded WF was to gain speed.

I'm not sure about the max settings for WF in Workplace. I've set them to 10 each.

A single-threaded WF is slower and takes longer to complete, but it's much more CPU-efficient. I'm often in a hurry to see some WF results, though, so I tend to use multi-threaded WF.
Setting 10 multi-threaded WF is not a great idea, as it will be slower than 10 single-threaded ones. Best to benchmark your machine and test it, though results may vary depending on the systems, the number of data streams, etc.
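
The single- vs multi-threaded trade-off described above can be illustrated with a back-of-the-envelope Amdahl-style throughput model. The parallel fraction below is an assumed number for illustration, not a GSB measurement:

```python
# Sketch: N independent single-threaded WF jobs scale almost perfectly
# across cores, while a multi-threaded WF job only speeds up its
# parallelisable fraction (Amdahl's law). So for *batch* throughput,
# many single-threaded jobs usually beat a few multi-threaded ones,
# even though one multi-threaded job finishes sooner on its own.

def amdahl_speedup(parallel_fraction, threads):
    """Speedup of one job whose parallel fraction runs on `threads` threads."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / threads)

def batch_throughput(threads_per_job, cores, parallel_fraction=0.8):
    """Jobs completed per unit of single-thread work time, cores split across jobs."""
    concurrent = max(1, cores // threads_per_job)
    return concurrent * amdahl_speedup(parallel_fraction, threads_per_job)

cores = 8
st_throughput = batch_throughput(threads_per_job=1, cores=cores)  # 8 jobs at 1x each
mt_throughput = batch_throughput(threads_per_job=8, cores=cores)  # 1 job at ~3.3x
```

Under these assumptions the single-threaded batch completes jobs more than twice as fast overall, matching the tortoise-and-the-hare observation in the post.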

Odd results when code transferred to TradeStation.

kelsotrader - 8-4-2018 at 06:17 PM

I am getting some very odd results from strategies when transferred to TS.
I set up data for NQ (15, 30, 60 minutes), checked the data, and started to create strategies.

I left the data streams on TS so did not reload them.

But the odd thing is that some (Not all) strategies when copied onto TS produced results way off those that GSB Produced.

Number of trades in one case was out by 1000.

I have double-checked everything I can think of.
I suspected a caching error in TS, and checked the code to make sure the transfer was correct.

I am unable to pinpoint where the problem lies (I don't think it is a bug in GSB), but something must be causing these inconsistent results. I suspect TS is not recalculating properly from one test strategy to another. (I am transferring and testing a lot of strategies.)

kelsotrader - 8-4-2018 at 06:29 PM

Quote: Originally posted by kelsotrader  
I am getting some very odd results from strategies when transferred to TS.
I set up data for NQ ( 15,30,60 Minutes) . Checked the data and started to create strategies.

I left the data streams on TS so did not reload them.

But the odd thing is that some (Not all) strategies when copied onto TS produced results way off those that GSB Produced.

Number of trades in one case was out by 1000.

I have double checked everything I can think of.
I suspected a caching error in TS, Checked code and to make sure the transfer was correct.

I am unable to pinpoint where the problem lies ( I don't think it is a bug in GSB) but there must be something that is causing these inconsistent results. I suspect TS is not recalculating properly from one test Strategy to another. ( I am testing and transferring a lot over and testing)


OK, I am posting the above because it caused me concern.
I have located the problem, and it lies, as I suspected, with TS.

For whatever reason, TradeStation does not always recalculate the data with the new code. This often happens when one overwrites the old code with new code, saves, and has TS recalculate.

In order to get a clean calculation one needs to:

Delete any strategies attached to the data screen.
Load up the new strategy, set up its parameters, then let TS do its calculations.

Hope the above saves others the confusion that I have gone through.

admin - 8-4-2018 at 07:06 PM

Quote: Originally posted by kelsotrader  
Quote: Originally posted by kelsotrader  
I am getting some very odd results from strategies when transferred to TS.
I set up data for NQ ( 15,30,60 Minutes) . Checked the data and started to create strategies.

I left the data streams on TS so did not reload them.

But the odd thing is that some (Not all) strategies when copied onto TS produced results way off those that GSB Produced.

Number of trades in one case was out by 1000.

I have double checked everything I can think of.
I suspected a caching error in TS, Checked code and to make sure the transfer was correct.

I am unable to pinpoint where the problem lies ( I don't think it is a bug in GSB) but there must be something that is causing these inconsistent results. I suspect TS is not recalculating properly from one test Strategy to another. ( I am testing and transferring a lot over and testing)


OK I am posting the above because it coursed me concern.
I have located the problem and it lies as I suspected with TS.

For whatever reason Trade Station does not always recalculate the data with the new code. This is often the case where one over writes the code with new code. Saves and has TS re calculate.

In order to get a clean calculation one needs to .

Delete any strategies attached to the Data screen.
Load up the new Strategy, set up its parameters then let TS do its calculations.

Hope that the above saves others the confusion that I have gone through.


Likely the issue is that if you cut and paste TS code into the same ELD, the TS chart remembers the settings of the old code.
You should delete all code in the ELD, hit F3, paste the new code in, and hit F3 again.
Otherwise, replace the word inputs: with vars:
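
The inputs:-to-vars: workaround amounts to a one-word substitution in the exported code: TS remembers input values per chart, but vars: declarations are hard-coded in the strategy itself, so stale chart settings can no longer apply. As a hedged illustration (the EasyLanguage snippet below is a made-up example, not actual GSB output), the transformation could be scripted like this:

```python
# Sketch: turn an EasyLanguage "inputs:" declaration into "vars:" so the
# chart's remembered input values cannot override the pasted code.

def inputs_to_vars(easylanguage_source: str) -> str:
    """Replace the inputs: declaration keyword with vars: (keyword as exported)."""
    return easylanguage_source.replace("inputs:", "vars:")

# Hypothetical exported strategy fragment:
sample = ("inputs: Len1(14), Len2(50);\n"
          "if Average(Close, Len1) > Average(Close, Len2) then "
          "Buy next bar at market;")
converted = inputs_to_vars(sample)
```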

kelsotrader - 8-4-2018 at 08:09 PM

Quote: Originally posted by admin  



Likely the issue is if you cut and paste TS code into the same eld, the TS chart remembers the settings of the old code.
You should delete all code in the eld, hit f3 paste the new code in, hit f3.
Otherwise replace the word inputs: with vars:


Yes, that's it exactly. I was selecting all, then pasting the new code over the old code, saving with F3, then having TS recalculate.

A trap for young and old players.

admin - 8-4-2018 at 10:37 PM

Quote: Originally posted by kelsotrader  
Quote: Originally posted by admin  



Likely the issue is if you cut and paste TS code into the same eld, the TS chart remembers the settings of the old code.
You should delete all code in the eld, hit f3 paste the new code in, hit f3.
Otherwise replace the word inputs: with vars:


Yes that's is it exactly. I was selecting all then pasting the new code over old code. Saving F3 then having TS recalculate .

Trap you young / old players.

I think this is mentioned twice in the videos on YouTube. It won't be the last time it comes up; it is very simple to fix but hard to diagnose. The "turn inputs into vars" option has a bug: vars: is only for non-WF code. I hope it's fixed in the .31 build.

New Contracts file location?

Gregorian - 12-4-2018 at 08:19 PM

As of 44.33, it appears that GSB is no longer loading the Contracts specs from the Contracts.txt file in the folder specified in the Data portion of the App Settings. Where is the new file location?

EDIT: I just changed one of the specs, and it wrote out a new Contracts.txt file in the expected location. It just never loaded the original file. Hmmm...did the file format change?

admin - 12-4-2018 at 08:23 PM

Quote: Originally posted by Gregorian  
As of 44.33, it appears that GSB is no longer loading the Contracts specs from the Contracts.txt file in the folder specified in the Data portion of the App Settings. Where is the new file location?

It's supposed to have a bigger internal contracts file and merge your contracts file if it's there. Please send me your contracts file for me to test. This was done so the supplied contracts.txt doesn't overwrite the user's contracts.txt each time we supply a new build.
Possibly this is a bug.

cyrus68 - 18-4-2018 at 10:18 PM

I ran a given data-set with (test @ beginning) = false.
Next, I ran the same data-set, over a different range, with (test @ beginning) = true.
I made sure to save the settings under a different name.
However, GSB insists on overriding the setting. This is what the graphs show.
I'm not sure how to get around this.

Test at Beginning.png - 59kB

admin - 18-4-2018 at 10:21 PM

Quote: Originally posted by cyrus68  
I ran a given data-set with (test @ beginning) = false.
Next, I ran the same data-set, over a different range, with (test @ beginning) = true.
I made sure to save the settings under a different name.
However, GSB insists on overriding the setting. This is what the graphs show.
I'm not sure how to get around this.

Could it be that GSB still has the systems in its GUI?
What about having two GSB instances, one with the setting true, the other false?
Run them both and compare.

cyrus68 - 18-4-2018 at 10:25 PM

how do I clean out the cache or whatever record it is keeping?

admin - 18-4-2018 at 10:28 PM

Quote: Originally posted by cyrus68  
how do I clean out the cache or whatever record it is keeping?

The cache has auto cleanup, and you can ignore it.
For systems in the GUI, select them all and delete.
It can take a while if there is a large amount.

cyrus68 - 18-4-2018 at 10:45 PM

When I started the run there were no systems in the GUI. It was blank in both manager and worker.
Normally, after I have finished a run and selected the top systems, I exit manager and worker without deleting the systems that were generated.
I assume that they are deleted automatically.

What you appear to be saying is that unless I specifically delete the systems, GSB will retain a memory of the previous settings applied to a given dataset and override any new settings. That's problematic.

admin - 18-4-2018 at 10:50 PM

Quote: Originally posted by cyrus68  
when I started the run there were no systems in the GUI. It was blank in both manager and worker.
Normally, after I have finished a run and selected the top systems, I exit manager and worker without deleting the systems that were generated.
I assume that they are deleted automatically.

What you appear to be saying is that unless I specifically delete the systems, GSB will retain a memory of the previous settings applied to a given dataset and override any new settings. That's problematic.

If you close GSB, all is lost. You don't need to close workers, but they will retain systems from any other managers and/or settings used. If you don't close the manager, the systems will stay until they are deleted.
So if you build ES systems, stop the manager, change data1, and build NG systems, you will see both ES and NG in the manager.

cyrus68 - 18-4-2018 at 11:06 PM

Given that I always close both manager and workers after a run and start afresh with a new dataset and settings, I shouldn't be running into the current problem of my settings being overridden.

Somehow GSB has memory of the previous setting for this dataset and continues to override my current setting. I don't know how to break its hold.

admin - 18-4-2018 at 11:24 PM

Quote: Originally posted by cyrus68  
Given that I always close both manager and workers after a run and start afresh with a new dataset and settings, I shouldn't be running into the current problem of my settings being overridden.

Somehow GSB has memory of the previous setting for this dataset and continues to override my current setting. I don't know how to break its hold.

Send me TeamViewer details later (email). I'm in and out the next few hours. (Leaving now.)

cyrus68 - 19-4-2018 at 01:16 AM

This has turned out to be a non-issue; there is nothing wrong with using the settings in GSB.
I was temporarily confused by the placement of the training and optimisation periods in the graph.

parrdo101 - 27-4-2018 at 11:02 AM

Regarding all the code one is able to retain from the GSB trial period (further details, circumstances: a paid subscription to GSB has never happened):

Is one free to share that code with anyone?

I absolutely wouldn't without clearing this with Mr. Zwag beforehand; hence asking now rather than after the fact.


cyrus68 - 3-5-2018 at 05:39 AM

In the GUI under "Exits" there is the selection "Market on Day Close". It is obvious what it does.
However, at some point, the selection "Built-in Market on Day Close" has appeared.
I'm not sure what it is supposed to do and have left it at the default "False" setting.
I can't find anything in the docs, so any explanation would be appreciated.

admin - 3-5-2018 at 03:39 PM

Quote: Originally posted by cyrus68  
In the GUI under "Exits" there is the selection "Market on Day Close". It is obvious what it does.
However, at some point, the selection "Built-in Market on Day Close" has appeared.
I'm not sure what it is supposed to do and have left it at the default "False" setting.
I can't find anything in the docs, so any explanation would be appreciated.

You can use SetExitOnClose, which reduces the amount of code GSB generates by a reasonable amount.
Otherwise you get code that says:

If (TimeHms >= 150000 And TimeHms <= 150059) Then
Begin
    BuyToCover this bar on close;
    Sell this bar on close;
End;

I will update the docs after a few more builds of GSB.




moc.png - 35kB

parrdo101 - 8-5-2018 at 05:52 AM

"All the code able to be retained from the GSB Trial Period: (further details, circumstances: a paid subscription to GSB has never happened):

Is one free to share that code with anyone?"

Bump - what's the deal here Peter?

admin - 8-5-2018 at 04:04 PM

Quote: Originally posted by parrdo101  
"All the code able to be retained from the GSB Trial Period: (further details, circumstances: a paid subscription to GSB has never happened):

Is one free to share that code with anyone?"

Bump - what's the deal here Peter?

The intent behind the free systems is that to qualify for them you must download and try GSB, but you don't have to buy.
Giving the free GSB systems to non-trial users breaks the intent of this. The systems for GSB purchasers must stay in the hands of GSB purchasers only. The systems a trial or paid user makes belong to them.
Re "a paid subscription to GSB has never happened":
There will be a small annual renewal charge for GSB purchasers to keep money flowing for further development of GSB. GSB is an expensive project due to the number of hours that go into its development. The first purchasers of GSB got a very good deal: 3 copies of GSB and free updates. If the renewal is not paid, the purchaser gets to use, forever, the GSB version from 1 year after their purchase date.
I think this deal is fair for all.

toddsk136 - 8-5-2018 at 08:45 PM

I uninstalled GSB because custom indicators did not show up on the left screen. I then reinstalled GSB, and it gives me an install error. Even before this, custom indicators never showed up when compiling, only the built-in indicators.

admin - 8-5-2018 at 09:30 PM

Quote: Originally posted by toddsk136  
I uninstall GSB because custom indicators did not show up on left screen. And i install back GSB. It gives me error install. Before custom ind.

never show up in compiling except the build in indicator.

Hi Todd,
You might have advanced mode turned off (View, Advanced in the top menu). If that doesn't fix it, email me your TeamViewer details.

cyrus68 - 28-5-2018 at 11:21 PM

I have a problem that may be the result of a bug or my wrong settings, and I can’t figure out which it is.

I generated systems with the following setting: training=100 and Trd=1, which essentially designates 50% of the data for OOS testing. Unusually, some of the WFs have current curves that are severely truncated, as though they have only been calculated over half the data period. The rest of the WFs look normal. I haven’t seen this sort of result with other settings, such as 40/30/30 and Trd=2.

I wonder if this may have anything to do with the ‘Optimise Data Stream’ setting. Generally, I prefer to set it to True. This allows the current curve to generalise the role of the data streams over the whole data set. Setting it to False restricts optimising their role, over the training period only.

The first pic is for the total curve and the second is for the OOS curve.


Total Curve.png - 68kB / OOS Curve.png - 75kB
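
The training=100 / Trd=1 configuration described above amounts to holding out every 2nd trade as out-of-sample (an nth-trade split). A minimal sketch of that idea, with made-up per-trade P&L numbers, not GSB's internal logic:

```python
# Sketch: an nth-trade out-of-sample split. With nth=1 (every 2nd trade
# held out), half the trades are OOS, which matches "training=100 and
# Trd=1 essentially designates 50% of the data for OOS testing" above.

def nth_trade_split(trades, nth):
    """Keep `nth` trades in-sample, then hold 1 out, repeating."""
    in_sample, oos = [], []
    for i, trade in enumerate(trades):
        if i % (nth + 1) < nth:
            in_sample.append(trade)
        else:
            oos.append(trade)
    return in_sample, oos

trades = [120, -45, 80, 200, -30, 60, 90, -10]   # hypothetical per-trade P&L
ins, oos = nth_trade_split(trades, nth=1)         # alternate IS/OOS -> 50/50
```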

admin - 28-5-2018 at 11:26 PM

Quote: Originally posted by cyrus68  
I have a problem that may be the result of a bug or my wrong settings, and I can’t figure out which it is.

I generated systems with the following setting: training=100 and Trd=1, which essentially designates 50% of the data for OOS testing. Unusually, some of the WFs have current curves that are severely truncated, as though they have only been calculated over half the data period. The rest of the WFs look normal. I haven’t seen this sort of result with other settings, such as 40/30/30 and Trd=2.

I wonder if this may have anything to do with the ‘Optimise Data Stream’ setting. Generally, I prefer to set it to True. This allows the current curve to generalise the role of the data streams over the whole data set. Setting it to False restricts optimising their role, over the training period only.

The first pic is for the total curve and the second is for the OOS curve.


I don't understand what you have done. Can you do a support upload and save the system concerned?
Nearly all of my attempts to optimise data streams may have given better final results, but the curves were less linear or had other negatives, so I don't like using the setting. You also need a lot more WF iterations due to the many more combinations.
By Trd do you mean nth?

cyrus68 - 28-5-2018 at 11:42 PM

Yes, I have saved the system. I'll try the support upload.

If the training period is at the beginning of the data-set and Optimise data streams is set to False, you may get a poor generalisation of the role of the data streams, going forward. Yes, it does increase WF processing time.

admin - 29-5-2018 at 12:49 AM

Quote: Originally posted by cyrus68  
I have a problem that may be the result of a bug or my wrong settings, and I can’t figure out which it is.

I generated systems with the following setting: training=100 and Trd=1, which essentially designates 50% of the data for OOS testing. Unusually, some of the WFs have current curves that are severely truncated, as though they have only been calculated over half the data period. The rest of the WFs look normal. I haven’t seen this sort of result with other settings, such as 40/30/30 and Trd=2.

I wonder if this may have anything to do with the ‘Optimise Data Stream’ setting. Generally, I prefer to set it to True. This allows the current curve to generalise the role of the data streams over the whole data set. Setting it to False restricts optimising their role, over the training period only.

The first pic is for the total curve and the second is for the OOS curve.

Can you send a screenshot with the difference between total and OOS circled?
I don't understand what you mean.
I see no problem with the screenshots. The x-axis is set to trade number, so a WF current curve gives similar profit with far fewer trades (good).
If you set the x-axis to date, the curves will end at the same point.

cyrus68 - 29-5-2018 at 06:20 AM

The first pic has the total (red), WF current and WF OOS curves. The WF curves have almost half the number of trades and about 2/3 the profit. This is unusual; they should be in the same range.

The second pic shows the OOS (red) for nth Trd=1.

Here is a pic by date.

Total curve by date.png - 68kB

admin - 30-5-2018 at 04:58 AM

Quote: Originally posted by cyrus68  
the first pic has total (red), WF current and WF OOS curves. The WF curves have almost half the number of trades and about 2/3 the profit. This is unusual. They should be in the same range.

the second pic shows the OOS (red) for nth trd=1.

here is a pic by date.

Hope to send an update tomorrow on this

cyrus68 - 14-6-2018 at 04:40 AM

There doesn't seem to be a way of introducing division, in setting the fitness criterion. For example: net profit/(average of top 5 DDs). I fiddled around but only multiplication seems to be allowed.

I take it that a weight of 2 means raising the variable to the power 2. For example avg trade^2 is avg trade squared. And avg trade^-2 is the square root of avg trade.

admin - 14-6-2018 at 04:48 AM

Quote: Originally posted by cyrus68  
There doesn't seem to be a way of introducing division, in setting the fitness criterion. For example: net profit/(average of top 5 DDs). I fiddled around but only multiplication seems to be allowed.

I take it that a weight of 2 means raising the variable to the power 2. For example avg trade^2 is avg trade squared. And avg trade^-2 is the square root of avg trade.


I will ask the programmer about the average of the 5 worst DDs. I think EWFO used NP / (avg 5 worst DD).
I think you are correct on both points.
Regardless, it's my OPINION that the best fitness is NP × avg trade.
Remember also that the final performance metrics can change a lot after walk forward.
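
The fitness ideas in this exchange (weights acting as exponents, NP × avg trade, and NP divided by the average of the 5 worst drawdowns) can be sketched as follows. The drawdown numbers are invented for illustration, and this is not GSB's internal formula:

```python
# Sketch of the fitness ideas discussed: a weight w raises a metric to the
# power w, so multiplied-together weighted metrics form prod(metric ** w);
# a robustness-oriented alternative divides net profit by the average of
# the k worst drawdowns.

def weighted_fitness(metrics, weights):
    """prod(metric ** weight): weights act as exponents, as discussed."""
    fitness = 1.0
    for name, value in metrics.items():
        fitness *= value ** weights.get(name, 1.0)
    return fitness

def np_over_worst_dds(net_profit, drawdowns, k=5):
    """Net profit / average of the k worst (largest) drawdowns."""
    worst = sorted(drawdowns, reverse=True)[:k]
    return net_profit / (sum(worst) / len(worst))

# NP * avg trade, the fitness preferred in the post (weights of 1):
fit = weighted_fitness({"np": 50000.0, "at": 120.0}, {"np": 1.0, "at": 1.0})
# NP / (avg 5 worst DD), with hypothetical drawdown values:
score = np_over_worst_dds(50000.0, [1200, 800, 3000, 2500, 900, 1500, 400])
```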

admin - 14-6-2018 at 10:55 PM

Quote: Originally posted by cyrus68  
There doesn't seem to be a way of introducing division, in setting the fitness criterion. For example: net profit/(average of top 5 DDs). I fiddled around but only multiplication seems to be allowed.



In the next build (or the one after) we will have NP / (avg 5 worst DD).
You are correct that the current mode is not usable.

cyrus68 - 16-6-2018 at 05:16 AM

When you export a trade list from GSB to PA, how do you associate the file with a particular symbol?
There is no editing option to do this, as far as I can see.
Renaming the file, xyz.@NG instead of xyz doesn't work either.
Maybe the only sound way is to import via TS.

rws - 16-6-2018 at 06:25 AM

Cyrus68,

"I take it that a weight of 2 means raising the variable to the power 2. For example avg trade^2 is avg trade squared. And avg trade^-2 is the square root of avg trade."

Note that trade^-2 = 1/trade^2; the square root is trade^0.5.

I find that sometimes multiplying by (winners' avg profit)/(losers' avg loss), and/or dividing by (avg holding bars) and/or (losers' avg holding bars), made things better, but that was in another optimiser with a completely different algorithm.




Quote: Originally posted by admin  
Quote: Originally posted by cyrus68  
There doesn't seem to be a way of introducing division, in setting the fitness criterion. For example: net profit/(average of top 5 DDs). I fiddled around but only multiplication seems to be allowed.

I take it that a weight of 2 means raising the variable to the power 2. For example avg trade^2 is avg trade squared. And avg trade^-2 is the square root of avg trade.


I will ask programmer on aver of 5 worse dd. I think ewfo used np/ave 5 worst dd.
I think you are correct on both points.
Regardless its my OPINION that best fitness = np*at
Remember also the final performance metrics can change a lot after Walk forward.

admin - 17-6-2018 at 04:13 PM

Quote: Originally posted by cyrus68  
When you export a trade list from GSB to PA, how do you associate the file with a particular symbol?
There is no editing option to do this, as far as I can see.
Renaming the file, xyz.@NG instead of xyz doesn't work either.
Maybe the only sound way is to import via TS.

You can edit the first line of the PA file:
Symbol=ES BigPointValue 50 Slippage 0 Commission 0 (Slippage Mode TradePerSide, Commission Mode
Or in PA you can add a new contract, or in GSB you might be able to change the csv file the systems are made on and add it to the contracts.txt file (Tools, Contracts, List).

cyrus68 - 17-6-2018 at 10:16 PM

I don't understand anything of what you are saying.
There is already a comprehensive dictionary set up in PA for futures symbols.
Let's say you export the trade list for an NG system, from GSB, as a PA file, and call it NG xyz.
When you add the file in PA, it is added as: symbol=NG, strategy=NG xyz and type=stock.
The only thing that you can edit are the weight and the costs.

admin - 17-6-2018 at 10:18 PM

Quote: Originally posted by cyrus68  
I don't understand anything of what you are saying.
There is already a comprehensive dictionary set up in PA for futures symbols.
Let's say you export the trade list for an NG system, from GSB, as a PA file, and call it NG xyz.
When you add the file in PA, it is added as: symbol=NG, strategy=NG xyz and type=stock.
The only thing that you can edit are the weight and the costs.

Are you talking PA Pro or PA cloud?
You can edit much more, especially on Pro.

admin - 17-6-2018 at 10:18 PM

Quote: Originally posted by cyrus68  
I don't understand anything of what you are saying.
There is already a comprehensive dictionary set up in PA for futures symbols.
Let's say you export the trade list for an NG system, from GSB, as a PA file, and call it NG xyz.
When you add the file in PA, it is added as: symbol=NG, strategy=NG xyz and type=stock.
The only thing that you can edit are the weight and the costs.

See Settings, Futures Dictionary in PA Pro, then Add Symbol.

cyrus68 - 17-6-2018 at 10:41 PM

@NG is already in the dictionary; I don't need to add it.
The issue is this: how do you associate the file/strategy that you have just added, namely NG xyz, with the symbol @NG?

admin - 17-6-2018 at 11:06 PM

Quote: Originally posted by cyrus68  
@NG is already in the dictionary.
I don't need to add it to the dictionary.
the issue is this: how do you associate the file/strategy that you have just added, namely NG xyz, with the symbol @NG?

You can edit the first line of the PA file:
Symbol=NG BigPointValue 10000 Slippage 0 Commission 0 (Slippage Mode TradePerSide, Commission Mode

change to:
Symbol=@NG BigPointValue 10000 Slippage 0 Commission 0 (Slippage Mode TradePerSide, Commission Mode

Or in PA you can add a new contract called NG, or in GSB you might be able to change the csv file the systems are made on and add it to the contracts.txt file (Tools, Contracts, List).

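
The first-line edit described above can be scripted. This is a hypothetical helper (the header format is taken from the post; the function name and trade row are made up):

```python
# Sketch: rewrite the Symbol= token on the first line of a PA trade-list
# file, e.g. turning "Symbol=NG ..." into "Symbol=@NG ..." so PA matches
# the file to the @NG entry in its futures dictionary.

def retag_pa_symbol(pa_text: str, new_symbol: str) -> str:
    """Replace the Symbol=... token on the first line; leave the rest alone."""
    lines = pa_text.splitlines()
    first = lines[0].split()
    assert first[0].startswith("Symbol="), "not a PA trade-list header"
    first[0] = f"Symbol={new_symbol}"
    lines[0] = " ".join(first)
    return "\n".join(lines)

header = "Symbol=NG BigPointValue 10000 Slippage 0 Commission 0"
body = "2018-05-01,Long,2.75,2.81"        # hypothetical trade row
retagged = retag_pa_symbol(header + "\n" + body, "@NG")
```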

admin - 18-6-2018 at 12:21 AM

Quote: Originally posted by cyrus68  
@NG is already in the dictionary.
I don't need to add it to the dictionary.
the issue is this: how do you associate the file/strategy that you have just added, namely NG xyz, with the symbol @NG?

I will add an option to automatically associate NG with @NG, etc., to PA in the next build.

boosted - 18-6-2018 at 01:14 PM

Unique Systems


First time using Manager with Workers today.

I noticed something that raised a question for me regarding the total unique systems in the manager vs the 4 workers.

Maxing out at 4 workers today, each created its own number of unique systems.

I waited for the 4 workers to complete their transfer of uniques to the manager.

The manager finished with 10,630 uniques.

The total of all 4 workers' uniques came to roughly 31k vs the manager's 10k+.

Should I understand that in this case the number of final uniques in the manager represents the aggregate number of true uniques (10k+), vs all 4 workers together totalling over 31k?

Meaning roughly 20k of the systems produced were duplicates, so there were really only the 10k+ uniques shown in the manager when finished.
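
The 31k-vs-10.6k gap described above is consistent with the manager deduplicating systems received from the workers. A toy sketch of that interpretation, where the string "signatures" are a stand-in for however GSB actually identifies duplicate systems:

```python
# Sketch: four workers generate overlapping sets of systems; the manager
# keeps only unique ones. If the per-worker totals sum to much more than
# the manager's count, the difference is duplicates, as surmised above.

def merge_unique(worker_batches):
    """Union the workers' system signatures; return (total sent, unique count)."""
    total = sum(len(batch) for batch in worker_batches)
    unique = set().union(*worker_batches)
    return total, len(unique)

workers = [                       # hypothetical per-worker system signatures
    {"sysA", "sysB", "sysC"},
    {"sysB", "sysC", "sysD"},
    {"sysA", "sysD", "sysE"},
    {"sysC", "sysE", "sysF"},
]
total_sent, manager_unique = merge_unique(workers)
duplicates = total_sent - manager_unique
```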

admin - 18-6-2018 at 03:56 PM

Quote: Originally posted by boosted  
Unique Systems


First time using Manager with Workers today.

I noticed something that raised a question for me regarding total Unique Systems in Manager vs Workers (4) that I saw.

Maxing out at 4 Workers today, each had its own number of Uniques Systems it created.

I waited for the 4 Workers to complete their transfer of Uniq's to Manager.

The Manager finished with 10,630 Uniq's.

The total of all 4 Workers' Uniq's came to roughly 31k vs the Managers 10k+.

Should I understand that in this case the number of final Uniq's in Manager
represents the aggregate amount of true Uniq's (10k+) vs all 4 Workers together equaling over 31k Uniq's?

Meaning there were roughly 20k Uniq's produced which were doubles+ and therefore there were really only 10k+ Uniq's as shown in Manager when finished.

Can you look in the exceptions folder under the manager and workers, and send me any zipped files? I suspect you should have more systems than this. If systems are lost due to the internet dropping out, an exception message is generated. GSB retries 10 times to resend the systems to the SQL server. You can also export the csv of all workers and the manager to me, so we can check whether any non-duplicate systems are missing.
Shown is a sneak preview of 46.02. It built systems on ES, EMD, RTY, YM and NQ in parallel, with the same settings on all.


export-sys.png - 242kB

boosted - 18-6-2018 at 04:13 PM

Hi Peter,

I checked the exception files and nothing was there. I am right now making a fresh run on a simple system to see if I get a repeat of what I saw before with the workers' uniques vs the uniques the manager reported. As I am watching them in real time, it appears everything is working as it should. I will continue to let it run for a while and see if that changes. If so, I will check for exception files and send them to you.

Wow, nice looking preliminary results on multiple market build test. Is that 46.02 Beta using the multiple time frame feature?

admin - 18-6-2018 at 04:17 PM

Quote: Originally posted by boosted  
Hi Peter,

I checked the exception files and nothing was there. I am right now making a fresh run on a simple system to see if I get a repeat of what I saw before with the Workers' Uniq's vs the Uniq's the Manager reported. As I am watching them in real time, it appears everything is working as it should. I will continue to let it run for a while and see if that changes. If so, I will check for exception files and send them to you.

Wow, nice looking preliminary results on multiple market build test. Is that 46.02 Beta using the multiple time frame feature?

Keep me informed. If systems are missing, you could tell by putting all Worker systems in Excel, sorting, and comparing against the Manager.
GSB retries 10 times, every 2 seconds, so a longer outage will drop some systems.
Were you on a laptop? If on wifi, you should switch to a hard-wired cable.
The screenshot was multi-market, not multi-time-frame. I'm not sure whether multi time frame or multi market will work best; it will take a lot of learning for us to figure that out.

boosted - 18-6-2018 at 04:22 PM

No, I am using a Bootcamp iMac (i7-7700, 4.2 GHz, 64 GB RAM), hardwired to a router that is on backup power if the power goes out.

I will keep an eye on anything odd looking with Uniq's and report back if necessary.

boosted - 18-6-2018 at 07:25 PM

I ran into an odd issue after running the Workspace and Workers for roughly 1 hr.

After one hour of building strategies I stopped the Manager, then waited for a bit to make sure all data had stopped writing to it.

I then did a simple sort (on any column; it doesn't make a difference) and it looked as it should, but after the sort I clicked into one of the column cells and noticed that whatever number was there suddenly changed to something different.

In this example I used the Full Period Net Profit column, sorted by largest profit, then clicked in the cell and the number changed. This only happens when you sort by a positive number top to bottom. If I sort by largest Net Profit loss and click in a random cell, the number does not change.

This does not happen in any Worker window, only in the Manager. This is an issue, since it makes sorting in the Manager unusable.

The pics below show an initial sort, top down by profit; the second pic shows what it looks like when you start clicking on the sorted Net Profit list. The numbers in the cells change to something totally different.

There were no exceptions in the Exception file folder in the Manager or Workers.





ManagerError1.PNG - 25kB ManagerError.PNG - 29kB

admin - 18-6-2018 at 08:18 PM

Quote: Originally posted by boosted  
I ran into an odd issue after running the Workspace and Workers for roughly 1 hr.

After one hour of building strategies I stopped the Manager, then waited for a bit to make sure all data had stopped writing to it.

I then did a simple sort (on any column; it doesn't make a difference) and it looked as it should, but after the sort I clicked into one of the column cells and noticed that whatever number was there suddenly changed to something different.

In this example I used the Full Period Net Profit column, sorted by largest profit, then clicked in the cell and the number changed. This only happens when you sort by a positive number top to bottom. If I sort by largest Net Profit loss and click in a random cell, the number does not change.

This does not happen in any Worker window, only in the Manager. This is an issue, since it makes sorting in the Manager unusable.

The pics below show an initial sort, top down by profit; the second pic shows what it looks like when you start clicking on the sorted Net Profit list. The numbers in the cells change to something totally different.

There were no exceptions in the Exception file folder in the Manager or Workers.

Email me your teamviewer.com details and I will have a look.

admin - 20-6-2018 at 06:43 PM

Quote: Originally posted by boosted  
I ran into an odd issue after running the Workspace and Workers for roughly 1 hr.

After one hour of building strategies I stopped the Manager, then waited for a bit to make sure all data had stopped writing to it.

I then did a simple sort (on any column; it doesn't make a difference) and it looked as it should, but after the sort I clicked into one of the column cells and noticed that whatever number was there suddenly changed to something different.

In this example I used the Full Period Net Profit column, sorted by largest profit, then clicked in the cell and the number changed. This only happens when you sort by a positive number top to bottom. If I sort by largest Net Profit loss and click in a random cell, the number does not change.

This does not happen in any Worker window, only in the Manager. This is an issue, since it makes sorting in the Manager unusable.

The pics below show an initial sort, top down by profit; the second pic shows what it looks like when you start clicking on the sorted Net Profit list. The numbers in the cells change to something totally different.

There were no exceptions in the Exception file folder in the Manager or Workers.

We are working on a fix for this issue now. I hope it will be fixed in 46.03.

boosted - 20-6-2018 at 08:06 PM

Ok, great. Thanks Peter.

By the way, I just saw your prior msg to me about TeamViewer. I would have done it if I had seen it while I was at my desk.

boothy - 1-7-2018 at 08:43 PM

Hi Peter,

I've run into a little problem; I'm not sure how I did it or how to fix it.

I was trying to change the price data file and somehow stuffed something up. Now when I click on it I get an error message.

Thanks.


Capture1.PNG - 44kB
