
New Miner with Rx580 x 6 can only get 3 mining consistently - suggestions?


Or you can try using “Download VBIOS” from the Hive dashboard (Overclocking section).

Hardware and wiring - no conclusive test, just keep trying. Change GPUs and connect them on different risers. Maybe a faulty riser or cable… Check which GPU isn’t hashing, switch places and see if the error stays with the GPU or with the riser… If nothing else, try booting with a single GPU to prove that the GPUs are OK. Then test the risers, one by one…

Thank you @farkeytron and @givo.

I did the “Download VBIOS” and then ran the Polaris application. I clicked the bigger box, said yes to all the popups, saved the original file (yes, I backed it up just in case), and then used the “FLASH BIOS” option. Does “Download VBIOS” unlock the ROM, or does Polaris “unlock” it?

I think that is what I will do - the one-by-one replacement and verification. I was hoping for something faster lol.

Update:

Thanks to the guidance, I ripped everything down and built it back up slowly, and it dawned on me to read up on the motherboard I have. I found this in particular, which provided some understanding:

- The PCI_E4 slot will be unavailable when an M.2 SSD module has been installed in the M.2_2 slot.
- The PCI_E2 slot will be unavailable when an expansion card has been installed in the PCI_E5 slot.
- The PCI_E3 slot will be unavailable when an expansion card has been installed in the PCI_E6 slot.
- If you install a lar…

This showed me that just because I have 6 slots does not necessarily mean all 6 are available simultaneously. So I double-checked my setup to ensure I was not blocking the slots that I was using. During this “re-configuration”, I noticed that the two longer PCIe slots (x16) have a light under them when they are active, and the first one was not active. Not sure why the stubs would not work in that slot. Then I noticed that the card really has to be seated securely for that light to indicate that the slot is active. Since that slot was finicky, I moved things around.

I then ran into some posts that mentioned changing the PCIe latency and generation settings in the BIOS. This configuration did seem to have an impact. I found that keeping the generation on “auto” worked best for me; forcing it to Gen1 or Gen2 just kept things unstable. Next was the latency: I started from the bottom and moved up until I got a consistent boot with all the GPUs active and seen by Hive and the miner. This setting is currently working at around 128+ cycles with 4 GPUs at the moment. Two of the GPUs are on a 4-port USB PCIe expansion card due to the slot limitation.
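In case it helps anyone else, one way to check what link speed each card actually negotiated after changing those BIOS settings is below (just a sketch: lspci is standard on HiveOS, but the grep pattern is only a rough filter I chose):

miner:~# lspci -vv | grep -E "VGA|LnkSta:"

Each “LnkSta” line shows the negotiated speed and width for the device listed above it (2.5GT/s is Gen1, 5GT/s is Gen2, 8GT/s is Gen3; cards on risers will typically show Width x1).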

I would connect the final two GPUs, but my USB cables are too short for my configuration. So I have to figure that out or get longer USB cables. From what I understand, they should be USB 3.0 and not beyond a certain length?

Until then, I will let things run for a day or two before mucking around with the overclocking and/or flashing the BIOS I have on them. Any comments/suggestions will be appreciated.

Thank you for the feedback thus far.

Update:

Checked everything this morning and saw that I got a reboot around 2am, with the following in the log:

00:02.0 Temp: 0C Fan: 0% Power: 0W
06:00.0 Temp: 511C Fan: 100% Power: 16777215W
08:00.0 Temp: 511C Fan: 100% Power: 16777215W
0a:00.0 Temp: 46C Fan: 87% Power: 74W
0b:00.0 Temp: 45C Fan: 87% Power: 74W

My guess was the two GPUs on the expansion card. I can never tell which GPUs the logs are referring to. The journey continues…
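Side note: the prefixes on those lines (00:02.0, 06:00.0, …) look like PCI bus IDs, so something like this from the HiveOS shell should map them to the physical cards (lspci is standard; matching on “VGA” is just the filter I picked):

miner:~# lspci | grep -E "VGA|Display"

That prints each GPU with its bus ID, which you can then match against the log lines above and the bus IDs shown in the HiveOS dashboard.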

I’m leaning towards needing a different motherboard to support what I am doing.

I also have to say that I have better stability with lolminer than with any other miner I have tried.
Phoenix miner throws many errors and doesn’t handle my setup at all.
ETHMiner doesn’t connect to the mining pool that I’m working with.

Any suggestions are helpful.

It really looks like you’re having some riser issues.

Are you using risers with those awful SATA power connectors??

Thank you @farkeytron. I took your cue and started looking at my whole riser setup. I didn’t see anything obvious, but I did take up the notion of finding out which GPU was failing. The system seems stable with 3 GPUs, and it seemed to be consistently the same GPU that was “crashing” and causing the system to reboot. So I went one-by-one to narrow down which one it is (it would be nice if Hive or the miners would tell you that?). Once I found which one, I took everything out for that GPU, double-checked and cleaned everything, and replaced the riser setup with a newer one. Put everything back and it failed again.

Next, it occurred to me that it has to do with the slots and the mobo. I have a Ubic(?) 4-port USB adapter and decided to put that in a PCIe x1 slot and avoid using the x16 slots. I put 3 cards on the adapter and left the other card in the other PCIe x1 slot. It has been running stable for 12 hours, with no reboots and hashing at a total of 90+ MH/s.

My next step is to put all 4 on the adapter and add a 5th.

I found on Reddit that others seem to have the same issue with this mobo, where they can, at best, get 4 GPUs working. Granted, this was back in 2017 when the board was new, yet it makes sense. I think I will start looking for a “newer” mobo/CPU combo. Though I hear ETH is changing, so it sounds like I will be mining something else in July lol. In the meantime… ETH it is.

I can’t help but think that there should be some diagnostic tools in HiveOS, and it is particularly interesting that HiveOS detects the GPUs while the miner software (lolminer being the most stable) all have a different opinion, per se.

If you enable persistent logging and SSH into HiveOS, you can usually see which GPU causes the reboot (or use the video console instead of SSH).

To enable logging, run “logs-on” from the CLI:

miner:~# logs-on

There’s a watchdog that will reboot the worker if a GPU stops responding.
To see the log, SSH to HiveOS, log in with your user/password, and ls the log dir.

For phoenixminer, it would be:

miner:~# tail -50 /var/log/miner/phoenixminer/phoenixminer_reboot.log

(You can change the miner path by replacing phoenixminer with lolminer or teamredminer, etc.)

Look for the “wdog” entries and see which GPU isn’t responding.
Example:

2021.03.06:11:14:11.088: wdog GPU4 not responding
2021.03.06:11:14:11.088: wdog Thread(s) not responding. Restarting.
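If you’re not sure which miner wrote the most recent reboot log, you can also grep all of them at once. This assumes the other miners follow the same /var/log/miner/<name>/<name>_reboot.log layout as phoenixminer above, which I haven’t verified for every miner:

miner:~# grep -i wdog /var/log/miner/*/*_reboot.log | tail -20

That prints the last few watchdog entries along with the file they came from, so you can see both the miner and the GPU number.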

Thank you for all the tips @farkeytron .

Updated to 6 GPUs, and it has been running stable with my latest configuration. The only time it went down is when I pushed the memory on GPU4. Put it back and it’s been stable with no issues so far (knocking on wood). It’s only been 2 hours… I will let this run for the rest of the day.

If this stays stable, I will add more GPUs (i.e. RX 580s). I have slot space for 2 more.

In the meantime, does the choice of mining pool impact how fast I will make ETH? I am currently using 2miners and I like the web interface for seeing what my miner is doing at the moment, but I see there are others that seem to rank higher.

Thank you again for your help in this and hopefully, my journey thus far helps someone else.

The choice of pool can have a small impact on the profitability.

Although the actual hashrates between pools are usually roughly even, there are other things that can eat into your profits.

Some pools have no fee, like HiveON, which is why I switched from Sparkpool… every percent counts!

Also, phoenixminer no longer has a fee, so I get every penny I mine (phoenix+hiveon+hiveos=100% FREE)


Update:

The rig has been up for 10 hours straight and stable! I am so happy that it’s at this point, but I’m also afraid to change anything (i.e. OC, flashing, etc.). The only thing I will change is to add two more GPUs.

Last night, I saw it reboot a couple of times, checked the connections, and saw that one of the expansion cards had moved. I need to secure them, but they are running stable.

Let me know what you think the next move should be: OC? Flash? A new rig? Leave it? I will definitely add the two more cards.

I will take @farkeytron’s advice on potentially moving to Hiveon once I see my first payout. However, how does that work when moving from one pool to another? My guess is that you just lose whatever you mined on that pool if you don’t reach the “pay-out” amount?

Thank you again for your help.

If you need more help with mining try joining this server https://discord.gg/xbBzvqcKc2 :slight_smile:

Thanks, I joined! :slight_smile:

Update:

Reboot this morning, what happened?!

Rebooted and it’s back to normal, and the 4 GPUs involved are clearly the ones on my Ubix 4-port USB expansion card… any way to go about understanding what happened? Any thoughts?

The simplest explanation is that your USB PCIe expansion card is probably unstable.

You might consider moving it to another slot, if possible… or buying a replacement unit and using it.

Regarding pools:

Most pools will transfer any unpaid amounts to the user’s wallet after a certain amount of inactivity.

So, for example, you can stop using HiveON, and within 7 days of inactivity they will transfer whatever small amount is left in the pool to your wallet.


I am pretty happy with the stability now. I just need to get the bar in place to hold those expansion cards, and I am also looking to add two more GPUs. I was now able to flash the BIOS on all of the GPUs; however, I am unsure how everyone is setting VDD versus MVDD. Can someone help with this? I think I am setting the wrong field. Memory voltage versus memory core voltage?

On top of this question: I read that it’s really about the Mem and not the Core; however, I see my hash rate increase when I increase the core, whereas Mem seems to not impact anything, according to Hive.

There are three kinds of voltages on Polaris (RX 470/570/480/580/590):

Core voltage - AKA “VDD” - The GPU core voltage
Memory voltage - AKA “MVDD” - The voltage at which the RAM chips themselves operate.
Memory controller IO voltage - AKA “VDDCI” - The voltage at which the IO bus between the GPU and memory operates.

On Polaris, you can’t change the MVDD, so the only things of interest are VDD and VDDCI.
By lowering them you can reduce the amount of power consumed by the card.
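If you want to sanity-check what the driver is actually running before you change those fields, you can read the clock/voltage states from sysfs. This is just a sketch, assuming the stock amdgpu driver with OverDrive enabled (which HiveOS normally sets up); card0 may not be the GPU you expect, so check which card is which first:

miner:~# cat /sys/class/drm/card0/device/pp_dpm_sclk
miner:~# cat /sys/class/drm/card0/device/pp_dpm_mclk
miner:~# cat /sys/class/drm/card0/device/pp_od_clk_voltage

The first two show the core and memory clock states (the one marked with * is active), and the last one lists the clock/voltage table the driver is using, which helps confirm whether your OC settings were actually applied.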


I have found another issue with my rig. Today I realized that the 900W power supply that powers my GPUs has a short at the power outlet. It started today, and I can visibly see where the connection is loose. I’m contemplating moving my GPUs to the 850W power supply that is powering the mobo, risers, and fans. However, I am not 100% sure how much power those devices are using.

Is there a way to know how much of that 850W the mobo, risers, and fans are consuming?

Below is the total consumption as reported by HiveOS.

There’s really no way to judge how much power the rig is using without a clamp multimeter or a “kill-a-watt” meter.

Your 850-watt ATX PSU may not even be a true 850 watts; it is possibly a lot lower. The 850W rating of most ATX PSUs is a combined rating that includes the 12V, 5V, and 3.3V rails together.

You’d need to find out the 12V wattage rating of your PSU and make that decision.

Since the GPUs use only 12V power, it might not be a good idea to use it to power the whole rig.

Of course, it depends on the PSU… some PSUs are badass monsters that have incredible output capability… and others are nothing but trash inside a metal box.
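To put rough numbers on it (the 70 A figure below is purely hypothetical; read the actual amperage off your own PSU label):

12V rail rated at 70 A: 70 × 12 = 840 W available on the 12V side
6 × RX 580 at ~75 W each (as HiveOS reports) ≈ 450 W, plus roughly 10 W per powered riser

That looks fine on paper, but the HiveOS number is GPU-only draw, the label rating may be optimistic, and the mobo, CPU, and fans would pull from the same supply through the 24-pin and EPS connectors, so leave yourself plenty of headroom.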

Hello,

I use a wall watt meter to monitor rig wattage. Hive’s watts are less than wall watts since it only sees the GPUs. For example, Hive shows 386 W for four (4) RX 580s, but I read 480 at the wall. I’m able to account for the discrepancy through PSU efficiency, fans, and mobo draw. BTW, everything is powered from a 1200W HP server supply.

Greg
