Pages: 1 .. 8 9 10 11 12 .. 98
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
I can confirm that 44.03 suffers from the 10 min delay problem too. It worked fine yesterday.
The workers are on one machine, and I am not doing any cloud computing.
It must have something to do with the need for GSB to connect to the server, and ensuing problems.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by cyrus68  | I can confirm that 44.03 suffers from the 10 min delay problem too. It worked fine yesterday.
The workers are on one machine, and I am not doing any cloud computing.
It must have something to do with the need for GSB to connect to the server, and ensuing problems. |
I'm keen to resolve this ASAP, but the programmer wasn't working yesterday.
I don't have SQL Server skills, so I'm waiting on that. It's number 1 on the job list. The cloud is working, though, as I have 44 GSB workers running right now.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
We had 381,000 systems not assigned to a manager, and the database was close to full. I think my current .11 build fixes this, and I'm hoping that resolves the problem.
I will upload the .11 build.
Gregorian
Junior Member

Posts: 97
Registered: 23-5-2017
Member Is Offline
Mood: No Mood
4K scaling not as good in 44.11
Prior to 44.11, if we set "Override high DPI scaling behavior" in the Compatibility properties of a shortcut to GSB, fonts scaled quite well in 4K. As of 44.11, that is no longer the case. Fonts scale strangely. It's worse if we uncheck that option. Until we can get a proper 4K implementation, going back to the previous scaling would be better for now.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by Gregorian  | Prior to 44.11, if we set "Override high DPI scaling behavior" in the Compatibility properties of a shortcut to GSB, fonts scaled quite well in 4K. As of 44.11, that is no longer the case. Fonts scale strangely. It's worse if we uncheck that option. Until we can get a proper 4K implementation, going back to the previous scaling would be better for now. |
I'm not aware that this has changed, but I recently moved to a 4K monitor and the GUI doesn't scale as well as I'd like. I will look into this.
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
Sorry to hear about your scaling problem in 4K.
My centre monitor is 34-inch 3440x1440, and GSB looks fine on default fonts in 44.11.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by cyrus68  | Sorry to hear about your scaling problem in 4K.
My centre monitor is 34-inch 3440x1440, and GSB looks fine on default fonts in 44.11. |
Which build did you find better: .04, .09, or .11?
Otherwise we will reverse the last scaling changes.
Petzy
Junior Member

Posts: 73
Registered: 24-10-2017
Location: Sweden
Member Is Offline
Mood: No Mood
For me, .09 was better.
I am not using any high-resolution screens. I connect to my machines with TeamViewer, and the ones that I have on 1024x768 have the text a little too big.
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
I think the font size in 44.11 is actually larger than in previous builds, though the font size at the top of the graph is rather small.
Overall, I prefer the default font size in .09 and .04.
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
I ran 44.11 again. This time the colour coding of the metrics at the top of the lower panel (red/green/blue/brown, etc.) has disappeared. It's all white.
Also, the play and pause buttons of the manager are greyed out.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by cyrus68  | I ran 44.11 again. This time the colour coding of the metrics at the top of the lower panel (red/green/blue/brown, etc.) has disappeared. It's all white.
Also, the play and pause buttons of the manager are greyed out. |
I think the paused issue is fixed in the .13 build (not yet released).
The colour metrics disappearing is intermittent; I'm not sure what makes it come and go.
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
I did WF only, on saved systems, without simultaneous system generation, in 44.14. All WF was done locally; no cloud.
With similar data sets, settings, CPU usage, etc., it took almost twice as long as in 44.11 and 44.09.
At times, the GUI was jittery and looked as though it was seizing up, all on its own.
Strange behaviour that I cannot account for.
Obviously, I'm going back to 44.09, unless there is something else that GSB is now doing in the background that also compromises 44.09.
I will try it tomorrow.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by cyrus68  | I did WF only, on saved systems, without simultaneous system generation, in 44.14. All WF was done locally; no cloud.
With similar data sets, settings, CPU usage, etc., it took almost twice as long as in 44.11 and 44.09.
At times, the GUI was jittery and looked as though it was seizing up, all on its own.
Strange behaviour that I cannot account for.
Obviously, I'm going back to 44.09, unless there is something else that GSB is now doing in the background that also compromises 44.09.
I will try it tomorrow. |
I'm not aware of any significant difference in the main body of GSB code. How many systems did you WF at the same time, and was it multi-threaded or single-threaded? Multi-threaded is much faster if you do a few WF, but if you do lots, single-threaded is much faster.
Maybe do a help support upload and I will try the exact systems and setup you used.
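The few-vs-many trade-off described above can be sketched with a toy scheduling model: each WF job obeys Amdahl's law with its own threads, and jobs run in waves of however many fit on the machine. The 0.9 parallel fraction, the 100 core-second job size, and the function names are all assumptions for illustration, not GSB's measured behaviour:

```python
def single_job_time(core_seconds, threads, parallel_frac=0.9):
    # Amdahl's law: only the parallel fraction of one WF speeds up
    # with extra threads; the serial remainder does not.
    speedup = 1.0 / ((1.0 - parallel_frac) + parallel_frac / threads)
    return core_seconds / speedup

def batch_wall_time(jobs, threads_per_job, cores, core_seconds=100.0):
    # Jobs run in waves of however many fit on the machine at once.
    concurrent = max(1, cores // threads_per_job)
    waves = -(-jobs // concurrent)  # ceiling division
    return waves * single_job_time(core_seconds, threads_per_job)

# Few jobs: 2 WF on 8 cores finish sooner with 4 threads each ...
few_mt = batch_wall_time(2, 4, 8)    # 1 wave of ~32.5 s
few_st = batch_wall_time(2, 1, 8)    # 1 wave of 100 s
# ... but with many jobs: 16 WF on 8 cores finish sooner single-threaded.
many_st = batch_wall_time(16, 1, 8)  # 2 waves of 100 s
many_mt = batch_wall_time(16, 4, 8)  # 8 waves of ~32.5 s
```

Under this model the crossover point depends on core count and on how parallel a single WF really is, which is why benchmarking on your own machine matters.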
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
There were only 6 WF (multi-threaded), with similar data sets, settings, etc. as under 44.11.
Loads of CPU and RAM slack available. But this is usually the case when you do WF only.
The GUI looked like it was about to crash. Truly bizarre.
I thought multi-threaded WF was always faster. I have done as many as 9 at a time on saved systems.
I don't do WF while generating systems because I already have the CPU running close to 100%.
If I try WF in the cloud, it slows down the speed of system generation. So it is not a good option.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by cyrus68  | There were only 6 WF (multi-threaded), with similar data sets, settings, etc. as under 44.11.
Loads of CPU and RAM slack available. But this is usually the case when you do WF only.
The GUI looked like it was about to crash. Truly bizarre.
I thought multi-threaded WF was always faster. I have done as many as 9 at a time on saved systems.
I don't do WF while generating systems because I already have the CPU running close to 100%.
If I try WF in the cloud, it slows down the speed of system generation. So it is not a good option. |
I think 6 multi-threaded WF is too many. It will work, but it is likely faster single-threaded, and the GUI is also likely to be slow. Cloud workers have a soft-coded limit of 2 multi-threaded and, I think, 10 single-threaded. WF in the cloud should use near-zero resources on the manager. There is no way it should slow system generation down unless the worker is on the same PC as the manager.
Note also that there can be significant variation in speed from one test to another, as the random seed affects the speed.
You could test this, as 44.14 allows you to pause the manager and do WF on the cloud. You should see your CPU usage stay very low.
The workers will also have to be on 44.14, I think, to run WF while the manager is paused.
I normally do WF to the cloud, one at a time, multi-threaded.
But I have a lot of workers, more than almost all users.
kelsotrader
Junior Member

Posts: 29
Registered: 16-2-2018
Location: Tapanui - New Zealand
Member Is Offline
Mood: No Mood
Walk Forward Speeds
Quote: Originally posted by admin  | Quote: Originally posted by cyrus68  | There were only 6 WF (multi-threaded), with similar data sets, settings, etc. as under 44.11.
Loads of CPU and RAM slack available. But this is usually the case when you do WF only.
The GUI looked like it was about to crash. Truly bizarre.
I thought multi-threaded WF was always faster. I have done as many as 9 at a time on saved systems.
I don't do WF while generating systems because I already have the CPU running close to 100%.
If I try WF in the cloud, it slows down the speed of system generation. So it is not a good option. |
I think 6 multi-threaded WF is too many. It will work, but it is likely faster single-threaded, and the GUI is also likely to be slow. Cloud workers have a soft-coded limit of 2 multi-threaded and, I think, 10 single-threaded. WF in the cloud should use near-zero resources on the manager. There is no way it should slow system generation down unless the worker is on the same PC as the manager.
Note also that there can be significant variation in speed from one test to another, as the random seed affects the speed.
You could test this, as 44.14 allows you to pause the manager and do WF on the cloud. You should see your CPU usage stay very low.
The workers will also have to be on 44.14, I think, to run WF while the manager is paused.
I normally do WF to the cloud, one at a time, multi-threaded.
But I have a lot of workers, more than almost all users. |
That's interesting, and explains why my WF speeds have been extremely slow.
I WF batches of systems at a time, and it was taking longer to do the WF tests than to generate systems.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
There is a setting for the delay between each WF. This is conservative. A WF will use more RAM, but it takes some time to do so. This means GSB could underestimate the amount of RAM it has access to.
We can't afford for people's PCs/servers to crash. You can reduce this timeout value, and limit the number of WF per worker.
Before this, I submitted 100 WF to a server with 64 GB of RAM. Many hours later it crashed due to lack of RAM.
I am hoping to have batch multi-threaded WF, where jobs go into a queue if you submit too many. I still like this, as finished WF quickly start coming in.
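The idea described above, a cap on concurrent WF plus a launch delay so each job's RAM footprint becomes visible before the next one starts, can be sketched like this. It is an illustrative toy, not GSB's actual scheduler; the class and parameter names are invented:

```python
import queue
import threading
import time

class WalkForwardQueue:
    """Toy scheduler: at most `max_concurrent` walk-forward jobs run at
    once, and successive launches are spaced `start_delay` seconds apart
    so each job's RAM usage can ramp up before the next one starts."""

    def __init__(self, max_concurrent=2, start_delay=0.1):
        self.slots = threading.Semaphore(max_concurrent)
        self.start_delay = start_delay
        self.jobs = queue.Queue()
        self.results = []
        self.lock = threading.Lock()

    def submit(self, fn, *args):
        self.jobs.put((fn, args))

    def run_all(self):
        threads = []
        while not self.jobs.empty():
            fn, args = self.jobs.get()
            self.slots.acquire()          # block until a slot is free
            time.sleep(self.start_delay)  # conservative launch spacing
            t = threading.Thread(target=self._run, args=(fn, args))
            t.start()
            threads.append(t)
        for t in threads:
            t.join()
        return self.results

    def _run(self, fn, args):
        try:
            result = fn(*args)
            with self.lock:
                self.results.append(result)
        finally:
            self.slots.release()
```

Finished jobs start coming in as soon as each one completes, while the queue keeps the peak load bounded by `max_concurrent`.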
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
I have 64 GB of RAM on an i9. GSB doesn't get anywhere near using all the RAM.
WF is mostly deadly slow (yes, I know it depends on your data sets, settings, etc.) and the GUI pretty much freezes up.
I run WF in the manager on saved systems. What is the role of workers in this context? Should you activate and run WF in workers?
Doing 100 WF on a server with 64 GB of RAM should only be tried by Harry Potter.
Until I can think of something better, I plan on running WF on saved systems, overnight, using an i7 with 32 GB of RAM.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by cyrus68  | I have 64 GB of RAM on an i9. GSB doesn't get anywhere near using all the RAM.
WF is mostly deadly slow (yes, I know it depends on your data sets, settings, etc.) and the GUI pretty much freezes up.
I run WF in the manager on saved systems. What is the role of workers in this context? Should you activate and run WF in workers?
Doing 100 WF on a server with 64 GB of RAM should only be tried by Harry Potter.
Until I can think of something better, I plan on running WF on saved systems, overnight, using an i7 with 32 GB of RAM. |
I think it's better to send WF to the workers; the manager is then more responsive. I would expect the manager to be sluggish if it is building systems and doing 4 to 8 multi-threaded WF. That's one of the reasons I only do 2 multi-threaded WF on the manager.
44.18 can also be stopped and send WF to the cloud. Shortly, I'm hoping you can load saved systems and WF to the cloud.
Unless you do a support upload with sample saved systems, I can't diagnose your issues very well.
100 WF to one machine was a mistake, but it's good that the issue was picked up, as other users will do the same at some stage.
Did you also try a machine reboot, and lowering the delay-between-WF setting? (See post above.)
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
I don't do WF while GSB is generating systems because I run enough workers to get CPU usage close to 100%.
In this context, I tried one multi-threaded WF sent to the cloud. It resulted in a slowdown in the speed of system generation.
Given my goal of maximising the speed of system generation, running WF in the cloud is not for me.
I do WF on saved systems, with no simultaneous system generation going on in GSB. What I would like to do is maximise the speed of WF in this context. I will try your suggestion of running the WF in workers rather than the manager. I will have to figure out how many workers to load to optimise use of CPU and RAM. 2 WF x 5 workers may be a starting point.
Also worth trying out the suggestion of using single-threaded WF.
I am currently using 44.09. I will try 44.18, in due course, to see if the speed and GUI issues have improved.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by cyrus68  | I don't do WF while GSB is generating systems because I run enough workers to get CPU usage close to 100%.
In this context, I tried one multi-threaded WF sent to the cloud. It resulted in a slowdown in the speed of system generation.
Given my goal of maximising the speed of system generation, running WF in the cloud is not for me.
I do WF on saved systems, with no simultaneous system generation going on in GSB. What I would like to do is maximise the speed of WF in this context. I will try your suggestion of running the WF in workers rather than the manager. I will have to figure out how many workers to load to optimise use of CPU and RAM. 2 WF x 5 workers may be a starting point.
Also worth trying out the suggestion of using single-threaded WF.
I am currently using 44.09. I will try 44.18, in due course, to see if the speed and GUI issues have improved. |
When you do WF to the cloud, are you sending to your local PC(s) or someone else's PC in the cloud? If the WF is sent to your own PC, system generation will definitely slow down.
Best to use gsbcloud3_password1234 for version .20 onwards.
The cloud is a bit messy in that there are 3 versions of GSB running.
.04 with cloud1 will be killed at the end of the month.
I suspect we miscommunicated, in that you are doing WF on your own workers. It doesn't surprise me that this slows things down.
It might be faster to use the cloud, but if you don't want to do that, make some workers dedicated to doing WF. You could pause them once they connect to the manager, and let them do WF. Or you could save systems on the manager and load them on the worker(s).
On the other workers, you could set the max WF (under app settings, workplace) to zero. There is no WF queue yet on managers, but there is on workers.
I prefer to limit the number of concurrent WF and have them in a job queue.
cyrus68
Member
 
Posts: 171
Registered: 5-6-2017
Member Is Offline
Mood: No Mood
Doing single-threaded WF, locally, on a multitude of saved systems is many times faster than multi-threaded. Like the tortoise and the hare.
This is counter-intuitive. I thought the reason for running multi-threaded WF was to gain speed.
I'm not sure about the max settings for WF in Workplace. I've set them to 10 each.
admin
Super Administrator
       
Posts: 5060
Registered: 7-4-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by cyrus68  | Doing single-threaded WF, locally, on a multitude of saved systems is many times faster than multi-threaded. Like the tortoise and the hare.
This is counter-intuitive. I thought the reason for running multi-threaded WF was to gain speed.
I'm not sure about the max settings for WF in Workplace. I've set them to 10 each. |
A single-threaded WF is slower and takes longer to complete, but it is much more CPU-efficient. I'm often in a hurry to see some WF results, though, so I tend to use multi-threaded WF.
Setting 10 MT is not a great idea, as it is going to be slower than 10 single-threaded. Best to benchmark your machine and test it, but it may vary depending on the systems, the number of data streams, etc.
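A quick way to benchmark a machine for this is to time the same batch under different worker counts. The sketch below uses a sleep as a stand-in for one WF so the demo runs instantly; real WF jobs are CPU-bound, so in practice you would time actual runs (and use processes rather than Python threads). Every name here is invented; treat it purely as the shape of a harness:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_wf(delay):
    # Stand-in for one walk-forward test. A sleep keeps the demo fast;
    # real WF is CPU-bound, so benchmark real runs instead.
    time.sleep(delay)
    return delay

def time_batch(jobs, workers, delay=0.02):
    """Wall-clock seconds to run `jobs` fake WF tests on `workers` workers."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fake_wf, [delay] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (1, 2, 4):
        print(workers, "workers:", round(time_batch(8, workers), 3), "s")
```

Running the same batch at several worker counts shows where adding concurrency stops paying off on that particular machine.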
kelsotrader
Junior Member

Posts: 29
Registered: 16-2-2018
Location: Tapanui - New Zealand
Member Is Offline
Mood: No Mood
Odd results when code transferred to TradeStation
I am getting some very odd results from strategies when transferred to TS.
I set up data for NQ (15, 30, 60 minutes), checked the data, and started to create strategies.
I left the data streams on TS so did not reload them.
But the odd thing is that some (not all) strategies, when copied onto TS, produced results way off those that GSB produced.
The number of trades in one case was out by 1000.
I have double-checked everything I can think of.
I suspected a caching error in TS, and checked the code to make sure the transfer was correct.
I am unable to pinpoint where the problem lies (I don't think it is a bug in GSB), but there must be something causing these inconsistent results. I suspect TS is not recalculating properly from one test strategy to another. (I am transferring a lot over and testing.)
kelsotrader
Junior Member

Posts: 29
Registered: 16-2-2018
Location: Tapanui - New Zealand
Member Is Offline
Mood: No Mood
Quote: Originally posted by kelsotrader  | I am getting some very odd results from strategies when transferred to TS.
I set up data for NQ (15, 30, 60 minutes), checked the data, and started to create strategies.
I left the data streams on TS so did not reload them.
But the odd thing is that some (not all) strategies, when copied onto TS, produced results way off those that GSB produced.
The number of trades in one case was out by 1000.
I have double-checked everything I can think of.
I suspected a caching error in TS, and checked the code to make sure the transfer was correct.
I am unable to pinpoint where the problem lies (I don't think it is a bug in GSB), but there must be something causing these inconsistent results. I suspect TS is not recalculating properly from one test strategy to another. (I am transferring a lot over and testing.) |
OK, I am posting the above because it caused me concern.
I have located the problem, and it lies, as I suspected, with TS.
For whatever reason, TradeStation does not always recalculate the data with the new code. This is often the case where one overwrites the code with new code, saves, and has TS recalculate.
To get a clean calculation, one needs to:
Delete any strategies attached to the data screen.
Load up the new strategy, set up its parameters, then let TS do its calculations.
Hope the above saves others the confusion that I have gone through.