Finally, a gpu app on Rosetta@Home

Message boards : Number crunching : Finally, a gpu app on Rosetta@Home



[VENETO] boboviz

Joined: 1 Dec 05
Posts: 1994
Credit: 9,524,889
RAC: 7,500
Message 109408 - Posted: 22 Jun 2024, 10:59:41 UTC - in response to Message 109399.  

Not so relevant for me, as I think I've gone over the top having 2GB & 4GB cards, and I doubt that will change anytime soon. Or ever.



+1
ID: 109408
Sid Celery

Joined: 11 Feb 08
Posts: 2117
Credit: 41,147,941
RAC: 16,290
Message 109409 - Posted: 22 Jun 2024, 11:31:59 UTC - in response to Message 109402.  

Is it like the situation we had with Raspberry Pi devices a while back, where the amount of RAM needed was hard-coded but free RAM was fractionally below it, so only 4GB devices could run tasks, even though the RAM actually used was far less?
At this stage we don't know.
The previous version ran on cards with 6GB of VRAM without issue. Then the updated version came out, and with it the message about the 6144MB minimum if you had less than that.
But we don't know whether the new version genuinely can't run on 6GB of VRAM, or whether the minimum they've set is just that little bit higher than it needs to be.

To my way of thinking, if 8GB were the minimum you'd set it to that (or even 7.9GB), but why pick a value as specific as 6144MB? (To me that reads as 6.2GB, or 6.1GB if it needed just that little bit more than 6GB.)
It's just an odd value to use.
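
For what it's worth, 6144MB is exactly 6GB in binary units (6 × 1024MB), so the number itself may not be arbitrary. The catch, echoing the Raspberry Pi case above, would be that a "6GB" card never reports the full 6144MB free, because the driver and display reserve some of it. A minimal sketch of that arithmetic in Python (the reserve figure is an assumed illustration, not a measured value):

[code]
# 6144 MiB is exactly 6 GiB expressed in binary megabytes
minimum_mib = 6 * 1024             # 6144

# A "6GB" card nominally has 6144 MiB of VRAM, but the driver and
# display typically hold some back (~200 MiB assumed for illustration)
card_vram_mib = 6 * 1024
reserved_mib = 200                 # assumption; varies by system
free_mib = card_vram_mib - reserved_mib

print(free_mib, free_mib >= minimum_mib)   # 5944 False -> 6GB card excluded
[/code]

If that's what's happening, the check effectively requires the next VRAM size up, i.e. an 8GB card.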


And from an installed-base perspective, the vast majority of cards would still be 4GB or so, but ruling out even those with 6GB would take another pretty big chunk out of the available compute resources.
It wasn't until the RTX 20 series that all models of card (including the bottom-end models) had more than 4GB of VRAM.

IIRC the problematic setting involved a certain number of bytes (of RAM, not VRAM in that case): 4,000,000,000 bytes is 3,906,250KB, which is 3,814.7MB, which is 3.725GB in binary units.
RAM is used differently, obviously, but it's easy to see how they could miss by a small fraction, so you miss out by just 1MB.
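
As a sketch of how a near-miss like that happens (assuming a hard-coded binary threshold checked against a decimal-advertised size):

[code]
# A "4GB" part often means 4,000,000,000 bytes (decimal),
# while a hard-coded minimum is usually set in binary units
advertised_bytes = 4_000_000_000

kib = advertised_bytes / 1024      # 3,906,250 KiB
mib = kib / 1024                   # ~3,814.7 MiB
gib = mib / 1024                   # ~3.725 GiB
print(f"{kib:,.0f}KiB = {mib:,.1f}MiB = {gib:.3f}GiB")

threshold_mib = 4 * 1024           # assumed hard-coded 4 GiB minimum
print(mib >= threshold_mib)        # False: short by ~281 MiB
[/code]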
Anyway, it's their problem (and yours), not mine. I miss out by a mile.
ID: 109409
Grant (SSSF)

Joined: 28 Mar 20
Posts: 1673
Credit: 17,603,339
RAC: 22,064
Message 109436 - Posted: 8 Jul 2024, 4:57:31 UTC

Well, while you need to take the TeraFLOPS estimates for the work being done for the projects with a few bags of salt, it's interesting to see the comparison between the projects, CPU vs GPU, at the moment.

Rosetta TeraFLOPS estimate:  40.835
Ralph   TeraFLOPS estimate:   2.847

The thing to keep in mind is the difference in the number of users producing that work.
For Rosetta there are 2,400; for Ralph there are 31. And while each user may have 1 to 100 (or more) computers, the number of systems with a usable GPU is much, much smaller than the number with a suitable CPU. So it's quite impressive that so few systems can produce so much work.
And given that the present Ralph application is Windows-only, NVidia-only, and requires 6GB or more of VRAM, there's a massive amount of computing resources still available to be used.
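
Putting rough per-user numbers on that (just the quoted estimates divided by the quoted user counts):

[code]
# Average throughput per user, from the figures quoted above
rosetta_tflops, rosetta_users = 40.835, 2400
ralph_tflops, ralph_users = 2.847, 31

print(f"Rosetta: {rosetta_tflops / rosetta_users * 1000:.1f} GFLOPS/user")  # ~17.0
print(f"Ralph:   {ralph_tflops / ralph_users * 1000:.1f} GFLOPS/user")      # ~91.8
[/code]

So the average Ralph participant, running the GPU app, is turning in roughly five times the work of the average Rosetta CPU participant.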
Grant
Darwin NT
ID: 109436
[VENETO] boboviz

Joined: 1 Dec 05
Posts: 1994
Credit: 9,524,889
RAC: 7,500
Message 109437 - Posted: 8 Jul 2024, 7:55:44 UTC - in response to Message 109436.  
Last modified: 8 Jul 2024, 7:56:03 UTC

For Rosetta there are 2,400; for Ralph there are 31. And while each user may have 1 to 100 (or more) computers, the number of systems with a usable GPU is much, much smaller than the number with a suitable CPU. So it's quite impressive that so few systems can produce so much work.


That's not so strange.
If I'm not wrong, you have an "entry-level", "old" GPU (two generations back).
That GPU produces almost 7 TFLOPS in single precision and 200 GFLOPS in double precision.
For an "entry-level", "old" CPU you can calculate the GFLOPS yourself (number of cores × frequency × operations per clock; see the sketch below), and it comes out much lower than the GPU's.
It's normal: the CPU is "general purpose", the GPU is "specialized".
If you can "fit" your code on the GPU, you win.
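
As a rough illustration of that formula (the core counts, clocks and FLOPs-per-clock figures below are assumed for the example, not anyone's actual hardware):

[code]
# Theoretical peak, single precision:
#   cores x clock (GHz) x FLOPs per clock per core = GFLOPS
def peak_gflops(cores, clock_ghz, flops_per_clock):
    return cores * clock_ghz * flops_per_clock

# Assumed entry-level CPU: 6 cores at 3.5 GHz, 32 SP FLOPs/clock
# (two 8-wide AVX2 FMA units: 2 x 8 x 2 = 32)
cpu = peak_gflops(6, 3.5, 32)      # ~672 GFLOPS

# Assumed entry-level GPU: 1536 shader cores at 1.8 GHz, 2 FLOPs/clock (FMA)
gpu = peak_gflops(1536, 1.8, 2)    # ~5530 GFLOPS, i.e. ~5.5 TFLOPS

print(f"CPU ~{cpu:.0f} GFLOPS, GPU ~{gpu:.0f} GFLOPS ({gpu / cpu:.0f}x faster)")
[/code]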


And given that the present Ralph application is Windows-only, NVidia-only, and requires 6GB or more of VRAM, there's a massive amount of computing resources still available to be used.

Waiting for AMD :-P
ID: 109437




©2024 University of Washington
https://www.bakerlab.org