Comments/questions on Rosetta@home journal

Message boards : Rosetta@home Science : Comments/questions on Rosetta@home journal

David Baker
Volunteer moderator
Project administrator
Project developer
Project scientist

Joined: 17 Sep 05
Posts: 705
Credit: 559,847
RAC: 0
Message 17041 - Posted: 25 May 2006, 6:36:32 UTC - in response to Message 17012.  

One of the Rosetta team mentioned running 10k decoys, then using that data to create another WU to run 10k more, in the hope that this approach would give better results than running 100k decoys in a single WU. Are we going to try this approach on the 445 AA behemoth that you announced?
Have we gotten to try the "10k, analyze, create a new WU, run 10k more" approach yet? (How well did it do?)


We want to test this approach more systematically after CASP is over. Rhiju and I were talking today about what to do about the 445-residue monster; there are some hints it may be composed of two separate domains of more tractable size. You will see work units from this divide-and-conquer approach soon.
ID: 17041
Cureseekers~Kristof
Joined: 5 Nov 05
Posts: 80
Credit: 689,603
RAC: 0
Message 17150 - Posted: 26 May 2006, 17:13:33 UTC

On the homepage:
We have changed the workunit buffer size from 65,000 workunits to 20,000 workunits. This is so that CASP7 targets experience a smaller lag time between being queued and being sent out.

What is the impact for users?
More risk that the project runs out of WUs?
Or...?
Member of Dutch Power Cows
ID: 17150
Feet1st
Joined: 30 Dec 05
Posts: 1755
Credit: 4,690,520
RAC: 0
Message 17152 - Posted: 26 May 2006, 17:20:47 UTC - in response to Message 17150.  
Last modified: 26 May 2006, 17:22:41 UTC

What is the impact for users?
More risk that the project runs out of WUs?

There is basically no impact on users. But people had noticed the drop and expressed concern.

If the work generator should happen to fail (I haven't seen that happen yet), then yes, the work would run out sooner. But the way Rosetta works, we're all basically crunching the same small set of proteins at the same time; each of us is just exploring a different region of the search space. So the WU generator can easily keep up: it just picks a different random number and sends out another copy of the protein(s) du jour.

With the larger WU buffer on the server, they would add the latest CASP proteins to the list to crunch and not even send any OUT for half a day or so. With the smaller buffer, they'll send the new CASP targets sooner and get initial results back sooner, which will help them decide whether to try one of the other approaches in their toolbox on that target.
Add this signature to your EMail:
Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might!
https://boinc.bakerlab.org/rosetta/
ID: 17152
rbpeake
Joined: 25 Sep 05
Posts: 168
Credit: 247,828
RAC: 0
Message 17157 - Posted: 26 May 2006, 17:46:01 UTC - in response to Message 17152.  

...This will help them determine if they want to try one of the other approaches in their toolbox to try and solve it.

Just as an aside: with the CASP models, if I see that one of the workunits in my queue has completed, I manually "Update" BOINC so the result is sent in right away. I get the sense that the project team is interested in getting results back as soon as possible after a unit is released to us crunchers.
Regards,
Bob P.
ID: 17157
hugothehermit
Joined: 26 Sep 05
Posts: 238
Credit: 314,893
RAC: 0
Message 17297 - Posted: 29 May 2006, 8:31:39 UTC
Last modified: 29 May 2006, 8:34:48 UTC

I wrote an internal benchmark for Rosetta last week, and Rom now has a version that uses this to compute credits. Rom suggests however that we wait until after CASP to deploy it because it may take a few iterations to make it acceptable to everybody. I don't know how difficult it will be to "get it right", but I'd like to start testing it on Ralph soon.


I for one agree with Rom, and would highly recommend not addressing this issue until your CASP predictions are in. I would also recommend not releasing it on Ralph until CASP finishes, since you may need to test a point release before then. (I'm not sure I'm making sense here: you don't want to clog Ralph with the new credit system when you may need to improve the main R@H algorithm during CASP.)


The new version soon to appear on ralph will also have a fix Rom put in for graphics problems; as reported on the boards, a good fraction of the errors seem to be associated with the graphics (I suspect the fact that they consume lots of memory is part of the problem), and in the new versions graphics related errors should abort the graphics but not disrupt completion of the Rosetta calculation.


I have only seen ATI graphics problems to date (I don't pretend to have seen every graphical error, but that's what I've seen reported), so I personally think this is a BOINC/ATI problem.

My main machine has 1 GB of RAM and an ATI 9800 Pro AGP 128 MB video card, and a WU died on it while I was looking at the Rosetta@home graphics. As I've said, I have never seen any other brand of graphics card have this problem.

Oh, before I forget, Dr. D.B.: would you write up a comparison between R@H and Blue Gene when you get your hands on it? I would really like to know the differences in speed and such.
ID: 17297
Carlos_Pfitzner
Joined: 22 Dec 05
Posts: 71
Credit: 138,867
RAC: 0
Message 17304 - Posted: 29 May 2006, 14:42:34 UTC - in response to Message 17292.  
Last modified: 29 May 2006, 14:49:11 UTC

I wrote an internal benchmark for Rosetta last week, and Rom now has a version that uses this to compute credits. Rom suggests however that we wait until after CASP to deploy it because it may take a few iterations to make it acceptable to everybody. I don't know how difficult it will be to "get it right", but I'd like to start testing it on Ralph soon.

The new version soon to appear on ralph will also have a fix Rom put in for graphics problems; as reported on the boards, a good fraction of the errors seem to be associated with the graphics (I suspect the fact that they consume lots of memory is part of the problem), and in the new versions graphics related errors should abort the graphics but not disrupt completion of the Rosetta calculation.


How about, instead of an "internal benchmark", really counting flops and using the proper BOINC interface for this case?
http://boinc.berkeley.edu/api.php
Credit reporting 
By default, the claimed credit of a result is based on the product of its total CPU time and the benchmark values obtained by the core client. This can produce results that are too low if the application uses processor-specific optimizations not present in the core client, is compiled with different compiler settings, or uses a GPU or other non-CPU computing resource. To handle such cases, the following functions can be used. 

void boinc_ops_per_cpu_second(double floating_point_ops, double integer_ops); 

This reports the results of an application-specific benchmark, expressed as number of floating-point and integer operations per CPU second. 
void boinc_ops_cumulative(double floating_point_ops, double integer_ops); 

This reports the total number of floating-point and integer operations since the start of the result. It must be called just before boinc_finish(), and optionally at intermediate points. 



Why does Rosetta currently not use any optimizations?

Using 3DNow! (for Athlon XP) and SSE2 (for Pentium 4 and others) can shrink the CPU time required to finish a floating-point WU by a factor of six (1:6).
So, why not?

BTW: was that "internal benchmark" compared against an established benchmark program?
E.g. SiSoft Sandra:
http://downloads.guru3d.com/download.php?det=177

PS: BOINC is well known to produce very low (fantasy) benchmarks.

Do you believe that some team leader, wanting credits to place his team in the top position, will crunch with the standard BOINC benchmarks?

*Using an optimized BOINC together with an optimized application, my PC can produce 22 credits/hour.

*And I get this granted even on projects that use a quorum where not everyone runs an "optimized" BOINC/application, because I claim the *same* credits that someone else claims using the "standard" BOINC/application.

*Only they claim those *same* credits after 6 hours of crunching a WU, while my PC can claim them every hour :-)

Thus, I am not cheating in any way.
*Hence the importance of the "optimizations"! In the end they only make the science go a lot faster. And the CPU hotter!


Thanks,
Click signature for global team stats
ID: 17304
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 17366 - Posted: 30 May 2006, 13:54:42 UTC - in response to Message 17304.  

How about, instead of an "internal benchmark", really counting flops and using the proper BOINC interface for this case? [...]

Why does Rosetta currently not use any optimizations? [...]

*Using an optimized BOINC together with an optimized application, my PC can produce 22 credits/hour. [...]



I believe Rom knows what he is doing with BOINC ;-)
(Rom is the release manager and a major developer of the platform.)
So he'll know what works and what doesn't; maybe fpops counting doesn't work well in Rosetta (or maybe fpops *is* the internal benchmark ;-))

SIDE/
As for the optimised app thing:
that's where the BOINC credit system breaks down, since it's fine for within-project credit but bad for cross-project credit.
Take Einstein, for instance: maybe everyone not using the optimised apps should actually get less and have all their credit adjusted accordingly, while those using them should get the standard BOINC credit ;) Why should other projects suffer because of any one project's poorly coded application, and why should someone who improves it reckon they deserve more credit than a project that put the time in to write efficient code?
/SIDE
Team mauisun.org
ID: 17366
Dimitris Hatzopoulos
Joined: 5 Jan 06
Posts: 336
Credit: 80,939
RAC: 0
Message 17371 - Posted: 30 May 2006, 15:34:19 UTC - in response to Message 17366.  

Why should other projects suffer because of any one project's poorly coded application, and why should someone who improves it reckon they deserve more credit than a project that put the time in to write efficient code?


I agree with FluffyChicken; let me just add that the issue isn't only one of a "poorly written app" but sometimes the nature of the app itself. Sometimes you can get the main code to fit into the L2 cache, which speeds things up immensely.

Other times a science app needs to access memory constantly, in which case FSB speed matters most (a 2 GHz and a 3 GHz P4 might then get about the same amount of work done per unit of time).

So a lot depends on the nature of the science app (and of course on developer skill, as akosf has shown in the case of Einstein).
Best UFO Resources
Wikipedia R@h
How-To: Join Distributed Computing projects that benefit humanity
ID: 17371
Delk
Joined: 20 Feb 06
Posts: 25
Credit: 995,624
RAC: 0
Message 17486 - Posted: 1 Jun 2006, 0:30:36 UTC
Last modified: 1 Jun 2006, 0:31:12 UTC


I wrote an internal benchmark for Rosetta last week, and Rom now has a version that uses this to compute credits. Rom suggests however that we wait until after CASP to deploy it because it may take a few iterations to make it acceptable to everybody. I don't know how difficult it will be to "get it right", but I'd like to start testing it on Ralph soon.


How about approaching this slightly differently: rather than cutting over directly, why not have the internal benchmark run in the next Rosetta version but only report what the credit allocation would have been in the result output?

That way everyone can see the impact, and the differences between hardware and software platforms etc., and voice any concerns/suggestions before it has any effect at all.
ID: 17486
David E K
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 1 Jul 05
Posts: 1480
Credit: 4,334,829
RAC: 0
Message 17487 - Posted: 1 Jun 2006, 0:34:24 UTC - in response to Message 17486.  


How about approaching this slightly differently, rather than cutting over directly why not have the internal benchmark run in the next rosetta version although only have it report what would have been the credit allocation in the result output.

This way everyone can see the impact, differences between hardware & software platforms etc and voice any concerns/suggestions before it has any impact at all.


I like this idea.
ID: 17487
tralala
Joined: 8 Apr 06
Posts: 376
Credit: 581,806
RAC: 0
Message 17499 - Posted: 1 Jun 2006, 8:09:04 UTC - in response to Message 17487.  
Last modified: 1 Jun 2006, 8:10:17 UTC


How about approaching this slightly differently, rather than cutting over directly why not have the internal benchmark run in the next rosetta version although only have it report what would have been the credit allocation in the result output.

This way everyone can see the impact, differences between hardware & software platforms etc and voice any concerns/suggestions before it has any impact at all.


I like this idea.


Me too. The new credit system should really be fine-tuned and scrutinized for possible exploits and bugs before it is actually used, and the discussion should involve as many aspects as possible. There won't be total agreement in the end, but hopefully the best possible compromise.
ID: 17499
Jose
Joined: 28 Mar 06
Posts: 820
Credit: 48,297
RAC: 0
Message 17500 - Posted: 1 Jun 2006, 8:15:32 UTC - in response to Message 17486.  
Last modified: 1 Jun 2006, 8:17:15 UTC

How about approaching this slightly differently, rather than cutting over directly why not have the internal benchmark run in the next Rosetta version although only have it report what would have been the credit allocation in the result output.

This way everyone can see the impact, differences between hardware & software platforms etc and voice any concerns/suggestions before it has any impact at all.


That is one hell of a great suggestion!!!!!! I like it!!!!

"This and no other is the root from which a Tyrant springs; when he first appears he is a protector."
Plato
ID: 17500



©2025 University of Washington
https://www.bakerlab.org