Message boards : Rosetta@home Science : Bottleneck and Results
Author | Message |
---|---|
Dimitris Hatzopoulos Joined: 5 Jan 06 Posts: 336 Credit: 80,939 RAC: 0 |
Based on this feedback, I'd think it's important to offer optimised Rosetta clients (SSE/SSE2/SSE3, etc.) ASAP. By the way, wasn't it an important point of BOINC that projects can now know PC specs, so they can make the best use of them, e.g. send optimized executables to newer CPUs, or bigger WUs to PCs with more RAM/CPU, without involving the user? Any comments/ideas about this? Best UFO Resources Wikipedia R@h How-To: Join Distributed Computing projects that benefit humanity |
BennyRop Joined: 17 Dec 05 Posts: 555 Credit: 140,800 RAC: 0 |
"And lastly, how many credits per day do we need? The answer, for better or for worse, is MORE! The current runs are all on proteins of fewer than 85 amino acids, and searching is a problem. The size of the space that needs to be searched grows exponentially with length (number of amino acids), and many proteins are over 200 amino acids long. We do not have a reliable estimate of how long searching these spaces would take, and I would hesitate to even start on this until we are at current SETI levels or above." This ties in with what we learned at Distributed Folding. To get a reliable result at, say, 6 Angstroms from the actual structure, we needed a certain amount of processing power. To get a 5 A result, we needed 10 times that processing power; for a 4 A result, 100 times; and for a 3 A result, 1000 times the processing power of the original 6 A run. The amount of processing also depended on the length of the protein. Tiny lengths were done in-house by hand or on a single CPU. Proteins of 80-120 amino acids were items we DF folks could handle in a reasonable amount of time; we'd burn through the shorter proteins but plod through the longer ones. I can't imagine running through a 200 AA protein without a very optimized client (like FaD's) that was large farm/pharm friendly, with tiny bandwidth usage, rather than the 1 GB/month per 2 GHz machine I'm seeing at present. Let alone attracting the 100k+ dedicated full-time crunchers you're talking about. :) |
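The Distributed Folding scaling described above can be sketched as a quick back-of-the-envelope calculation. This is illustrative only; the `relative_cost` helper and the clean 10x-per-Angstrom rule are assumptions drawn from the figures in the post, not a real DF formula:

```python
# Back-of-the-envelope sketch of the scaling described above:
# each 1 Angstrom improvement in target RMSD is assumed to cost
# 10x more compute than the previous level (per the DF figures).

def relative_cost(target_rmsd, baseline_rmsd=6.0, factor=10.0):
    """Processing power needed relative to the baseline_rmsd run."""
    return factor ** (baseline_rmsd - target_rmsd)

for rmsd in (6.0, 5.0, 4.0, 3.0):
    print(f"{rmsd:.0f} A target -> {relative_cost(rmsd):,.0f}x the baseline compute")
```

Under this assumed rule, going from a 6 A result to a 3 A result costs 1000x the compute, which matches the figures quoted above.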
Nothing But Idle Time Joined: 28 Sep 05 Posts: 209 Credit: 139,545 RAC: 0 |
Replying to Dimitris Hatzopoulos: "Based on this feedback, I'd think it's important to offer optimised Rosetta clients (SSE/SSE2/SSE3 etc) ASAP. Btw, wasn't it an important BOINC point that the projects can now know PC specs, so they can make the best out of it e.g. send optimized exe's to newer CPUs or bigger WUs to PCs with bigger RAM/CPU etc, without involving the user?" The project seems to be input/output bound; that is, there are hundreds of well-intentioned people who provide mountains of ideas on what to do and how to do it. (Some do so quite emphatically and, IMO, unrealistically.) But how can 5 or 6 project people adequately provide the response you desperately seek? Maybe one of you out there would be willing to optimize the Rosetta software for SSEx? And do we risk optimizing the list of known bugs so the WUs abort faster? Maybe the bugs should be fixed before any optimizing is considered? I'm not criticizing, just thinking out loud. My disclaimer: I have Rosetta suspended until the bugs are fixed. I want to contribute, but I have only one computer, whose contribution to any project is insignificant, so I choose to donate my tiny effort to projects that don't hang or abort. But I will monitor this site and return as soon as feasible. |
Dimitris Hatzopoulos Joined: 5 Jan 06 Posts: 336 Credit: 80,939 RAC: 0 |
Replying to Nothing But Idle Time: "But how can 5 or 6 project people adequately provide the response you desperately seek? Maybe one of you out there would be willing to optimize the Rosetta software for SSEx? And do we risk optimizing the list of known bugs so the WUs abort faster?" Often, SSEx optimization is just a matter of enabling a flag at compile time; I'm not talking about hand-written optimizations in the source code. I don't know whether it would make a significant speed difference for Rosetta, but I do know that optimized science apps for other projects enjoy a speedup of 2x or more. In such a scenario, some volunteer "beta-testers" would choose to manually install the SSE-optimized version of the Rosetta app on their systems and test things out. Best UFO Resources Wikipedia R@h How-To: Join Distributed Computing projects that benefit humanity |
Paul D. Buck Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
The percentage of work that seems to me to "hang" is very low. I don't THINK I have had an over-time work unit yet, though I have had some that run quite long; all seem to complete. So *MY* experience is that the risk is low. But I don't know why others are seeing these problems, just as I cannot figure out how changing the DCF is avoiding one of the problems: I can't see in the code how changing the DCF could do what it appears to be doing ... Then again, I am looking at the current code too ... |
nasher Joined: 5 Nov 05 Posts: 98 Credit: 618,288 RAC: 0 |
Well, every project out there will have bugs now and then; I personally don't think these bugs are big enough to worry about the loss of value. Also remember: if you don't tell people what problems you are having, how can you expect them to know about and fix those problems? I am glad to know we could use >10x the processing power here, and I enjoy this project. Of course, I am also running a lot of other BOINC DC projects, and currently I have slipped a few percent of my computers' power over to the other projects to get each of them above 3% of my work; then I plan on keeping Rosetta at about 50% of my 4 computers and all other projects at about 4-7% each. Remember the axiom about computer programs: you have 3 options, fast, good, cheap ... you may only pick 2. |
©2024 University of Washington
https://www.bakerlab.org