Message boards : News : Rosetta's role in fighting coronavirus
Klimax | Joined: 27 Apr 07 | Posts: 44 | Credit: 2,800,788 | RAC: 604
> I was surprised when I saw a comment from someone who had a queue for a couple of days. I joined the project about 10 days ago and I usually got 2-3 extra tasks besides the 12 running tasks. My tasks take 7-8 hours to complete, so I never got a large queue. Wonder why there's such a big difference.

BOINC settings. For example, I have 1 day of work plus an extra 1 day as a buffer, with the Rosetta runtime set at 12 hours.
strongboes | Joined: 3 Mar 20 | Posts: 27 | Credit: 5,394,270 | RAC: 0
Quick question for anyone. Yesterday, when it became clear that the server was empty, I completed my running units, suspended the rest and did a project reset on my fastest setup, my thinking being that I would send my queue back to the server for redistribution. Is that how it works? Although my client is empty, I still have a considerable number of tasks listed in the "in progress" section, which I obviously don't have. I also had to reset the project a couple of times over the last few days because some work refused to download, which is understandable given the increase in users, and it seems that whatever was in my queue is also listed in that section. So when you reset, does the client not tell the server to redistribute the work you had waiting?
Falconet | Joined: 9 Mar 09 | Posts: 353 | Credit: 1,227,479 | RAC: 917
> Quick question for anyone. Yesterday, when it became clear that the server was empty, I completed my running units, suspended the rest and did a project reset on my fastest setup, my thinking being that I would send my queue back to the server for redistribution. Is that how it works? Although my client is empty, I still have a considerable number of tasks listed in the "in progress" section, which I obviously don't have. I also had to reset the project a couple of times over the last few days because some work refused to download, which is understandable given the increase in users, and it seems that whatever was in my queue is also listed in that section. So when you reset, does the client not tell the server to redistribute the work you had waiting?

Since you reset the project, the tasks will stay "in progress" until they expire and get sent to another host. Next time, abort the tasks first and do a project update.
Sid Celery | Joined: 11 Feb 08 | Posts: 2125 | Credit: 41,245,383 | RAC: 9,571
> Not only has the coronavirus spiked volunteer interest with our R@h project, but it is also spiking a lot of interest towards R@h within the lab, so the communication should definitely get better. A lot has happened in a short amount of time.

If the increased interest brings better communication in the forums, a lot of dissent will disappear. We've been trained over the years to expect next to nothing, so all the info provided through the Admin account recently goes a remarkably long way.

On 2c, we saw the effect of Charity Engine's arrival a few years ago, which pretty much broke the project with the extra demands it made and required a complete update of the servers to cope, so their return (aside from the others) is going to be another step change. About time Rosetta got the attention it deserves.

There's no telling what the extra coding support will bring - observing discussions here over time makes me think there's a lot of scope for improvement. And great to hear that application updates are close - again, long overdue.

All excellent news.
Jim1348 | Joined: 19 Jan 06 | Posts: 881 | Credit: 52,257,545 | RAC: 0
Rosetta for some reason assumes a work unit run time of 4 hours 30 minutes, regardless of the work unit length you select. So when you first attach, you will get twice as long a total run time as you have set the BOINC buffer for if you use the default work unit length of 8 hours (and three times if you use 12-hour lengths, etc.). So start with a small BOINC buffer until it adjusts to the correct work unit run time, which can take a few days.
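For anyone who wants to see that effect in numbers, here is a rough sketch of the arithmetic. The 4.5-hour figure is the one quoted above; everything else is illustrative and not taken from the BOINC scheduler source.

```python
# Rough sketch of the over-fetch described above (illustrative figures only).
ESTIMATED_HOURS = 4.5  # what the client is said to assume for a fresh attach

def initial_queue_days(buffer_days: float, chosen_runtime_hours: float) -> float:
    """Days of real work fetched when the runtime estimate is wrong."""
    tasks_fetched = (buffer_days * 24) / ESTIMATED_HOURS   # buffer filled using the estimate
    return tasks_fetched * chosen_runtime_hours / 24       # but each task really runs this long

for runtime in (8, 12):
    days = initial_queue_days(buffer_days=1.0, chosen_runtime_hours=runtime)
    print(f"1-day buffer, {runtime} h tasks -> ~{days:.1f} days of actual work")
# ~1.8 days for 8 h tasks and ~2.7 days for 12 h tasks: the "twice/three times" effect.
```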
Chilean | Joined: 16 Oct 05 | Posts: 711 | Credit: 26,694,507 | RAC: 0
> Quick question for anyone. Yesterday, when it became clear that the server was empty, I completed my running units, suspended the rest and did a project reset on my fastest setup, my thinking being that I would send my queue back to the server for redistribution. Is that how it works? Although my client is empty, I still have a considerable number of tasks listed in the "in progress" section, which I obviously don't have. I also had to reset the project a couple of times over the last few days because some work refused to download, which is understandable given the increase in users, and it seems that whatever was in my queue is also listed in that section. So when you reset, does the client not tell the server to redistribute the work you had waiting?

No, I think when you do a reset it doesn't abandon the WUs. To correctly do what you were trying to do, you have to ABORT the WUs and then update the project through BOINC Manager. Then, on the site, the WU list will show them as Abandoned and they should get resent to other people.

By the way, the WUs take the same time regardless of how fast your machine is. If your machine is faster, it'll simply do more models than a slower machine and get more credit per WU.
Sid Celery | Joined: 11 Feb 08 | Posts: 2125 | Credit: 41,245,383 | RAC: 9,571
> If the well is running dry of new tasks, but millions are shown to be currently out in the wild, I think this would indicate that many users have quite large queues (also evident from some comments I've seen stating folks have a couple of days' worth of work). With such high compute capacity at the moment, I wonder if the admins would consider limiting, somewhat, the amount of work one client can queue; that might actually result in getting all work units completed a couple of days sooner.

I'm not sure how much of a problem this is. The home page currently says 730k tasks have been returned in the last 24 hours, while tasks in progress were around 1.5-1.6m prior to the current shortage.

I think the BOINC defaults are as low as 0.1 + 0.25 days, plus the default runtime of 8 hours, for a total turnaround time of about 16 hours. So it's a matter of what the 'tinkerers' do more than anything.

People who only run Rosetta have, I think, learned to keep buffers down to around 1.5-2 days to meet deadlines, which are normally 8 days but can be 3 days for important tasks. (My own are 1.5 days, except for my unattended machines which are 1.1 days, for what that's worth.) A researcher popped up the other day to say they leave things for 2 days before looking at the initial results, so that kind of ties in.

Where I definitely do see a problem is people who come here for the first time after running other projects with what they say are very long deadlines. Not sure which projects, but deadlines of many months. They seem to want 10+ day buffers, which are seriously inappropriate, then they complain that Rosetta tasks push ahead of their other projects to meet deadlines and want Rosetta to change its policies to suit them. Those are the users we could do with limiting.

Plus those people who have an inordinate fear of tasks crashing, so they reduce their runtimes to ensure completion, then call on double and treble the number of tasks, making excessive calls on the servers. Rosetta seems to allow runtimes to be edited down to as little as 1 hour. That option should be removed to make the minimum 2 hours, maybe even as much as 4 hours. I've definitely seen people with problems who show literally hundreds of 1-hour tasks in their buffer. Some crazy things going on with some contributors.
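As a rough illustration of that turnaround arithmetic, here is a small sketch. It assumes the queue is worked through in order, so the last task fetched waits out the whole buffer; the buffer and deadline figures are the ones quoted in the post, nothing official.

```python
# Turnaround ~= buffer (days) + task runtime; deadlines as quoted in the post above.
DEADLINE_DAYS = {"normal": 8, "short": 3}

def turnaround_days(buffer_days: float, runtime_hours: float = 8.0) -> float:
    return buffer_days + runtime_hours / 24

for label, buf in [("BOINC default (0.1 + 0.25 days)", 0.35),
                   ("Rosetta-only box", 1.5),
                   ("10-day buffer", 10.0)]:
    t = turnaround_days(buf)
    missed = [name for name, d in DEADLINE_DAYS.items() if t > d]
    status = f"misses the {'/'.join(missed)} deadline(s)" if missed else "meets both deadlines"
    print(f"{label}: ~{t:.1f} days turnaround, {status}")
```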
Sid Celery | Joined: 11 Feb 08 | Posts: 2125 | Credit: 41,245,383 | RAC: 9,571
> Currently, I have 14 machines with 80+ cores and 14+ GPUs. When I switched one of them over to Rosetta from cancer research (WCG), I did NOT receive anything related to covid19. When I receive ALL WUs for covid19, I will switch over... Till then I will stick with cancer research. Let me know when covid19 is the priority here. I am NOT going to download the fold@home app. If it is not thru BOINC, I do not run it.

Is that right? That's a project I've always wanted to run - especially now they're doing COVID work - but I've also been put off by it not being available via Boinc.
Sid Celery | Joined: 11 Feb 08 | Posts: 2125 | Credit: 41,245,383 | RAC: 9,571
> The BOINC server status page is open source. We are not running the latest version of BOINC, however.

That'll be me - sorry. I'm also sorry to say the "tasks ready to send" on the server status page are *not* ready to send. There's always an 8,000-ish discrepancy when tasks run out completely for several days, as they do now and as they did a few months ago - confirmed when compared with the homepage right now. I've learned to ignore it and look elsewhere. It's not important, but it's distracting, because people are regularly distracted by it. I'm going to get thrown out of here, aren't I... :(
Chilean | Joined: 16 Oct 05 | Posts: 711 | Credit: 26,694,507 | RAC: 0
> Currently, I have 14 machines with 80+ cores and 14+ GPUs. When I switched one of them over to Rosetta from cancer research (WCG), I did NOT receive anything related to covid19. When I receive ALL WUs for covid19, I will switch over... Till then I will stick with cancer research. Let me know when covid19 is the priority here. I am NOT going to download the fold@home app. If it is not thru BOINC, I do not run it.

I agree. If LHC can do a Virtual Machine wrapper, there's little reason not to have some sort of API/wrapper/whatever with Folding (right?). It'd be SO much easier to distribute work between Folding, Rosetta, and GPUGRID, because right now it's a manual hassle. It'd also be nice to have some sort of consolidated "credits" thing to keep track of how much you're contributing.
Jim1348 | Joined: 19 Jan 06 | Posts: 881 | Credit: 52,257,545 | RAC: 0
There has been a discussion that Folding@home is likely to produce a BOINC version soon. Not ready yet, though. The discussions are very preliminary; I have been a part of them, but the developers have to do the real work, and I would guess that it won't come in the midst of the current rush.

But the Folding client works well enough once you get past its idiosyncrasies setting it up; look to their forum for help - especially on Linux. I run my GPUs on Folding (by deleting the CPU slot when setting it up) and then run my CPUs on BOINC. It works great.
Sid Celery | Joined: 11 Feb 08 | Posts: 2125 | Credit: 41,245,383 | RAC: 9,571
It may even be the case that I tried to run F@H before I even discovered the Boinc platform, but it was too opaque for me at the time - and it still is now, tbh.
Jim1348 | Joined: 19 Jan 06 | Posts: 881 | Credit: 52,257,545 | RAC: 0
> It may even be the case that I tried to run F@H before I even discovered the Boinc platform, but it was too opaque for me at the time - and it still is now, tbh.

The current version is probably new since that time, and might be easier to set up. The Windows version is fairly straightforward. It has the strange property of grabbing both your CPU and GPU by default; you just delete the one you don't want (a couple of times, before it remembers it). And you may have to set the "OpenCL index" to something other than the default (usually "0"), since the automatic selection does not work on some cards/motherboards/whatever. But it avoids all problems with cache settings: it automatically downloads a new work unit when the current one is 99% complete, so there is not much to set up once it is working.
[VENETO] boboviz | Joined: 1 Dec 05 | Posts: 1994 | Credit: 9,624,867 | RAC: 6,812
> On 2c, we saw the effect of Charity Engine's arrival a few years ago, which pretty much broke the project with the extra demands it made and required a complete update of the servers to cope, so their return (aside from the others) is going to be another step change.

But at that time Charity Engine came into Rosetta@Home with their CPUs. This time it seems they will use GPUs (for AI training). This could be a big step for the project.
entity | Joined: 8 May 18 | Posts: 19 | Credit: 5,972,267 | RAC: 7,232
> There has been a discussion that Folding@home is likely to produce a BOINC version soon. Not ready yet, though.

I would not recommend F@H on Linux. I just left there because of issues with software dependencies. As much as F@H likes to state that their software will run most anywhere (which is why they suggest ignoring dependencies during install), I, and others, have found that not to be the case. I never could get FAHControl running on 3 different distributions without having to go back and install deprecated software. If one is running a Long Term Support release of a distribution (which tends to be back-leveled), then the chances are greater that the install will work. It wasn't worth fiddling with. Just one man's opinion, of course.
Jim1348 | Joined: 19 Jan 06 | Posts: 881 | Credit: 52,257,545 | RAC: 0
> I would not recommend F@H on Linux. I just left there because of issues with software dependencies. As much as F@H likes to state that their software will run most anywhere (which is why they suggest ignoring dependencies during install), I, and others, have found that not to be the case. I never could get FAHControl running on 3 different distributions without having to go back and install deprecated software. If one is running a Long Term Support release of a distribution (which tends to be back-leveled), then the chances are greater that the install will work. It wasn't worth fiddling with. Just one man's opinion, of course.

I don't know about the other distributions, but it works OK on Ubuntu LTS (16.04 and 18.04). The main problem for me has not been the distributions, but the idiosyncratic FAH default settings. They are OK on Windows, but screwball (no other term) on Linux. It is not that we haven't told them about it in years past, it is just that we have been ignored. However, for what it is worth, I put in my 2 cents again recently:
https://foldingforum.org/viewtopic.php?f=17&t=32124&start=30#p311633
https://foldingforum.org/viewtopic.php?f=17&t=32124&start=30#p311670

I have it on 9 Ubuntu 18.04.4 machines, which I manage remotely over the LAN with HFM.NET running on my Windows machine.
Ezzz | Joined: 19 Mar 20 | Posts: 8 | Credit: 32,322 | RAC: 0
> Just added my 384 threads after being gone for 2 years... 11 servers, 24/7. I don't care about the names. I'll run anything that comes down the network link... First units should be coming back in about 4 hours from the faster machines. The slowest machines take about 8 hours. All 1,600 WUs downloaded should be back in about 24 to 28 hours.

Nice! Lots of us here only have one or a few home computers to contribute, but it's always nice to hear about those with some big power entering the fray!
Millenium | Joined: 20 Sep 05 | Posts: 68 | Credit: 184,283 | RAC: 0
In my opinion, the BOINC software is much clearer than the Folding one. With BOINC everything is clear: how to manage projects, WUs and the graphs. And you have normal accounts on the projects you participate in. Folding? It works, but seems more confusing. Granted, nothing too hard, but BOINC is just clear: you open it and you see what you need. If Folding came to BOINC then I could crunch it, but until then there are dozens of worthy projects on BOINC.
bcov (Volunteer moderator, Project developer, Project scientist) | Joined: 8 Nov 16 | Posts: 12 | Credit: 11,348 | RAC: 0
Yes indeed. We burned through all the work units. We're currently facing 3 issues that, between them, made us run out:

1. The compute power is incredible. I queued up something 2 days ago that would take more than a week to run locally, and it was all picked up in about 18 hours.
2. Creating these jobs takes human and CPU time on our end, and we're stretched thin. I'm working my hardest to get stuff pushed through the pipeline, but for a lot of these I have to develop new software to get the jobs set up correctly. Additionally, some stuff we can't run on R@H, so it has to be precomputed.
3. We almost have the new update out, but not yet. We're updating Rosetta by about 2 years. Once this happens, we'll be able to do interface design, and the newer members of the lab who do protein design will have a much easier time submitting work. As it stands, doing design on R@H requires me to use a pretty old Rosetta without all the newest features. Once we get interface design going, we should hopefully be able to bring you a lot more work. It'll all finally make sense why we've been making these "scaffold" proteins.

But, to give you an idea, queueing up a full day of Rosetta design on BOINC these days is really tough on our end. Say each WU needs 10 structures, and R@H is going through 1M WUs per day. That's 10M structures I need to produce, or about 700GB of data. And that's to queue 1 day of work. The structure prediction stuff is a lot easier to queue: you only have to upload 1 structure and then say you want 10M outputs, so you send the same WU 1M times.
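To put that upload burden into numbers, here is a back-of-envelope sketch based only on the figures in the post above; the ~70 kB per structure is simply what 700GB divided by 10M structures implies, and the variable names are made up for illustration.

```python
# Rough comparison of what it costs the lab to queue one day of work.
WUS_PER_DAY = 1_000_000                # roughly what R@H is going through per day
STRUCTS_PER_DESIGN_WU = 10             # each design WU carries its own input structures
STRUCT_BYTES = 700e9 / (WUS_PER_DAY * STRUCTS_PER_DESIGN_WU)   # ~70 kB, implied by the post

# Design: every WU needs its own precomputed structures uploaded.
design_gb = WUS_PER_DAY * STRUCTS_PER_DESIGN_WU * STRUCT_BYTES / 1e9

# Prediction: one structure is uploaded once and the WU is replicated server-side.
prediction_kb = STRUCT_BYTES / 1e3

print(f"design:     ~{design_gb:.0f} GB uploaded to queue one day of work")      # ~700 GB
print(f"prediction: ~{prediction_kb:.0f} kB uploaded for the same day of work")  # ~70 kB
```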
funkydude | Joined: 15 Jun 08 | Posts: 28 | Credit: 397,934 | RAC: 0
> 3. We almost have the new update out, but not yet.

Will this new update also include 64-bit support, so that 2GB tasks stop crashing with out-of-memory errors when there is plenty free? Also, are there plans to update the version of BOINC run on the web server, or to update the config files the server sends to clients so they point from http to https? You never know when [insert authoritarian regime] might decide tomorrow that running this is offensive. All transfers should be protected by default.