Coronavirus research

My PC is folding, but it's hit and miss on the work units, so my PPD will fluctuate greatly.

I've edged the HBM2 up a bit to 1100 MHz, though. I don't know if that'll help speed up what I do get, but every little bit helps, right?

The client should tell you what your theoretical potential PPD is while folding. If increasing your clocks makes a significant difference in that number, then by all means. Changing my clock speeds/voltages/etc. didn't make any noticeable difference, even when I lowered speeds/voltages to the lowest the Radeon software would allow.
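
If you want to track that estimate over time rather than eyeballing it in FAHControl, the client also exposes a command socket on localhost (36330 by default) that third-party monitors use. A rough Python sketch, assuming the default port and that queue-info reports a ppd field the way FAHControl consumes it:

Code:
# Rough sketch: poll the local FAHClient command socket for the estimated PPD.
# Assumes the default interface on 127.0.0.1:36330 and that queue-info returns
# a "ppd" field in its PyON payload, as FAHControl itself consumes it.
import socket
import time
def read_ppd(host="127.0.0.1", port=36330):
    with socket.create_connection((host, port), timeout=5) as s:
        s.recv(4096)                # discard the greeting banner
        s.sendall(b"queue-info\n")  # ask for per-WU status, including ppd
        time.sleep(1)
        reply = s.recv(65536).decode(errors="replace")
    # Very naive extraction; a real tool would parse the PyON payload properly.
    for line in reply.splitlines():
        if '"ppd"' in line:
            print(line.strip())
if __name__ == "__main__":
    read_ppd()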


One good thing about the new client: the visual representation works now.

Check out this puppy:
[Attached image: 20200422_folding_protein.png]
 
Liking these 14561 & 14562 WU on my RX570

First 14561 unit was 513,000 PPD
Second 14562 is reporting 530,000 PPD

Highest I ever got before on this card was 360,000

Got a 14563 on my RX460 - 175,000 PPD

Highest I ever got before on this card was 130,000
 

Nice.

My Vega 64 hit 1.7M PPD when doing a 14416 project :D

Proof

What sucks is that when it finished at 100% and started doing its thing before uploading (compressing the WU, I assume), I got a THREAD_STUCK_IN_DEVICE_DRIVER BSOD. Powered my PC back on, started the client, watched the log, and it said:
Code:
18:18:29:WU02:FS01:Started FahCore on PID 1284
18:18:29:WU02:FS01:Core PID:2708
18:18:29:WU02:FS01:FahCore 0x22 started
18:18:29:WARNING:WU00:FS01:FahCore returned: BAD_FRAME_CHECKSUM (112 = 0x70)
18:18:29:WARNING:WU00:FS01:Fatal error, dumping

Nearly four hours wasted. :( 1.7M PPD means nothing when it does this at least once a day. FWIW it typically hits 1.1M. Due to necessarily pausing while doing other things... my real average is much lower.
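
Back-of-the-envelope, one dumped ~4-hour WU a day is a bigger hit than it sounds, and this ignores the quick return bonus, which makes the real loss worse since points scale faster than runtime. Quick sketch of the arithmetic (the 1.1M nominal and 4 hours are the figures above; everything else is just the math):

Code:
# Back-of-the-envelope effective PPD when one ~4-hour WU per day gets dumped.
# 1.1M nominal PPD and 4 hours are the figures from this post; the rest is arithmetic.
nominal_ppd = 1_100_000              # typical sustained PPD while folding
hours_lost = 4                       # one dumped WU's worth of runtime per day
wasted_fraction = hours_lost / 24    # ~0.17 of the day produces nothing
effective_ppd = nominal_ppd * (1 - wasted_fraction)
print(f"Effective PPD with one daily dump: {effective_ppd:,.0f}")  # ~917,000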
 
I had to shut down for the last several weeks; way too many errors. I had 5 straight failures on the 12th. I really thought my card was giving out. Gaming was fine.

I updated the client, then waited for the new 20.4.2 driver. Four straight WUs completed without error. The estimated PPD is way up, currently 1.6 million on this 14563 WU. The OpenCL driver must have been partially broken for my Fury; previously PPD had never gone over 1 million.

*Edit
I just noticed all the WUs I completed are well over the 165k atoms that would previously error out: Project 14561 has 438,651 atoms, Project 14562 has 371,771, and Project 14563 has 448,584.
 
Looks like I got excited over nothing.

These are the least stable folding drivers I've used since I re-joined.
 
Quick reminder to remove the dust from your rigs once in a while.
I checked the temperature of my APU yesterday and found it was running at 107°C!
After cleaning the heat sink, the temperature is down by 25°C, back to where it was when I restarted folding 24/7 six or seven weeks ago.
I clean my house thoroughly every week, but with us being in the house almost all of the time, there's a lot more dust in the air.
 
I'm really starting to dislike Project 14564; I've only had one complete without error. The second one I got conked out at 94%, and the third showed a different error but managed to complete. They take six hours to finish despite only having 68k atoms, and they aren't worth a whole lot. I'm half tempted to report it; maybe if I ask nicely I can get it blacklisted for Fiji. :P

The new drivers seem to work well with every other WU I've had so far: zero errors. The 800k PPD I had was the highest single-day/card output in the almost five years of owning this Fury X.
*Edit
Someone else opened a thread, so I chimed in. They're talking about restricting it to lower-end cards, so if you've got one and it doesn't run right, you may want to add your card to the thread.

Currently you'll only see it if you've added the advanced flag to your client. https://foldingforum.org/viewtopic.php?f=19&t=34797&p=330139#p330139
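
For reference, that's the client-type option (set through FAHControl's expert/extra options), which lands in the client's config.xml as something like the line below; the surrounding file is omitted here and values may differ per install:

Code:
<config>
  <!-- opt in to "advanced" work assignments; "beta" is riskier still -->
  <client-type v='advanced'/>
</config>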
 
I still can't find a pattern to my crashes. FWIW, I'm not using the advanced flag.

In a fit, I turned my case fans down to minimum because that's literally the only thing I hadn't tried, and the GPU crunched two WUs one after the other just fine this morning.

When using the Power Saver preset it never gets above 75°C, Default @ 78°C, and the Turbo preset @ 79°C. I've had "max fan RPM" crashes with all settings, though strangely Turbo seemed more reliable than Default and Power Saver with the last drivers.

With my case fans turned down, the card got to 80°C, occasionally touching 82°C, and was as stable as a brick in a breeze.
 
I had 2 WUs fail to complete over the last 3 days, and both were Project 14564 on my RX 460.
 
I've gone ahead and started a spreadsheet to track all of this.

So far, with case fans set to low and the newest drivers, it's more reliable (Edit: or maybe it's having Excel open that's increasing reliability... :lol:).
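
If anyone wants to feed a spreadsheet like this without copying numbers by hand, a small script can scrape WU outcomes straight out of the client log. A sketch, matching only the two log-line formats quoted earlier in this thread (anything else is an assumption to adapt per install):

Code:
# Sketch: pull WU outcomes out of the FAHClient log into a CSV for the spreadsheet.
# The two line formats matched below are the ones quoted in this thread.
import csv
import re
import sys
LOG = sys.argv[1] if len(sys.argv) > 1 else "log.txt"
fail_re = re.compile(r"^(\d{2}:\d{2}:\d{2}):WARNING:(WU\d+):(FS\d+):FahCore returned: (\w+)")
credit_re = re.compile(r"^(\d{2}:\d{2}:\d{2}):(WU\d+):(FS\d+):Final credit estimate, ([\d.]+) points")
with open(LOG, errors="replace") as f, open("wu_results.csv", "w", newline="") as out:
    w = csv.writer(out)
    w.writerow(["time", "wu", "slot", "result"])
    for raw in f:
        line = raw.strip()
        m = fail_re.match(line)
        if m:
            w.writerow([m.group(1), m.group(2), m.group(3), "FAILED: " + m.group(4)])
            continue
        m = credit_re.match(line)
        if m:
            w.writerow([m.group(1), m.group(2), m.group(3), m.group(4) + " points"])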

Crashes seem to coincide with checkpoints; often the CPU WU gets dumped when the GPU crashes the system, and I think it's because the checkpoint only gets partially written. If I let the CPU run and disable the GPU, the CPU is 100% reliable. PPD is about 1/10 of the GPU though. :bleh:
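
That partial-write theory fits the classic failure mode: if the box dies mid-checkpoint, the file on disk is neither the old state nor the new one. The usual defence is to write a temp file and atomically rename it into place. This is just the generic pattern, not a claim about how FAHCore actually handles it:

Code:
# Generic "write temp file, then rename" checkpoint pattern. Nothing to do with
# FAHCore's actual internals; it just illustrates why a crash mid-write can leave
# a half-written checkpoint unless the swap is atomic.
import os
import tempfile
def save_checkpoint(path, data: bytes):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".ckpt-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the bytes to disk before the swap
        os.replace(tmp, path)      # atomic rename; old checkpoint survives a crash
    except BaseException:
        os.unlink(tmp)
        raise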

My crashes since yesterday are on 16403/11743 and 14724/11745. Again, I think it's the GPU that's jacking things up; it just happens to ruin the WU the CPU is working on. I've hardly ever had a WU fail without a crash on this system.
 
I've hardly ever had a WU fail without a crash on this system.

This happened yesterday on one of the big mega work units, Project 16435. Luckily it happened at 5%, so barely any time was wasted.

Today marks the second day in a row that I've come downstairs to a still-folding PC after my slumber. Prior to this, my tally of nights where the PC made it through folding without a crash was ZERO.

I moved the mouse and saw it was about 90% complete with this:

Code:
12:36:25:WU00:FS01:Sending unit results: id:00 state:SEND error:NO_ERROR project:14415 run:0 clone:201 gen:25 core:0x22 unit:0x000000290d5262775e839e5cc326047f
. . .
12:37:16:WU00:FS01:Final credit estimate, 199907.00 points

A new record for me.
 
New record today @ 210821 points from a 41200 base point project:

Code:
20:55:45:WU01:FS01:Sending unit results: id:01 state:SEND error:NO_ERROR project:14415 run:0 clone:199 gen:52 core:0x22 unit:0x000000430d5262775e839e5cccba610d
20:55:45:WU01:FS01:Uploading 87.80MiB to 13.82.98.119
20:55:45:WU01:FS01:Connecting to 13.82.98.119:8080
20:55:51:WU01:FS01:Upload 21.71%
20:55:57:WU01:FS01:Upload 46.06%
20:56:03:WU01:FS01:Upload 74.96%
20:56:08:WU01:FS01:Upload complete
20:56:08:WU01:FS01:Server responded WORK_ACK (400)
20:56:08:WU01:FS01:Final credit estimate, 210821.00 points
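
For anyone wondering how 41,200 base points turns into ~210k: that's the quick return bonus, which, as I understand the published points formula, scales credit by the square root of how far inside the timeout the WU comes back. A sketch with made-up k/deadline/elapsed numbers just to show the shape:

Code:
# Quick-return-bonus sketch: credit = base * max(1, sqrt(k * deadline / elapsed)).
# Only the 41,200 base (and the ~210,821 result above) are real; k, deadline and
# elapsed below are made-up values chosen to show how a ~5x multiplier arises.
import math
def qrb_credit(base_points, k, deadline_days, elapsed_days):
    return base_points * max(1.0, math.sqrt(k * deadline_days / elapsed_days))
base = 41_200
k = 0.75            # hypothetical per-project bonus constant
deadline = 5.0      # hypothetical timeout, in days
elapsed = 0.143     # hypothetical ~3.4 hours, in days
print(f"{qrb_credit(base, k, deadline, elapsed):,.0f} points")  # roughly 211k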
 
I only just now figured out that I can pause/finish individual slots by right clicking them in Advanced Control.
 