
Hive OS Diskless PXE


I also had to make the tmp folder writable.

sudo chmod 777 /tmp/
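
(As a side note, and not something the HiveOS docs call for: the conventional mode for /tmp is 1777, i.e. world-writable with the sticky bit set, which achieves the same thing a little more safely:)
sudo chmod 1777 /tmp/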

This is cool. I normally just buy old second-hand SSDs, but this is something to think about for flexibility.

I don’t know about updating the Nvidia drivers on the master PXE boot image, but there is probably a way to do that. You might have to repackage the Nvidia drivers into a .tar.xz file that the HiveOS PXE environment can then use, but I don’t see why it wouldn’t work.

If you dissect /path/to/pxeserver/pxe-config.sh, you will see that there is a line in there that says:
NV_VER=nvidia-470.86.tar.xz, so I think that if you were to update that, you should be able to do it.
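
So, as an untested sketch, with the new version left as a placeholder rather than a real release number, the edit would look something like this:
# in /path/to/pxeserver/pxe-config.sh -- point it at your repackaged driver bundle (placeholder version)
NV_VER=nvidia-<<new_driver_version>>.tar.xz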

(It might take a bit of dissection of the setup and configuration scripts, but I don’t see why you wouldn’t be able to do it.)

(Sidebar: You might also have to update the PXE boot parameters in /path/to/pxeserver/tftp/bios/menu.cfg along with /path/to/pxeserver/tftp/efi/grub.cfg)

And the same goes for adding your own small apps.

For example, previously, the HiveOS PXE setup used to just use regular xz to compress and decompress the hiveramfs.tar.xz file.

Quite some time ago, I asked about it and then modified my own pxe-config.sh so that the system downloads pxz, and I used pxz to compress the hiveramfs.tar.xz file in order to speed up the compression process (the system I am using for the PXE server doesn’t have a super powerful processor, so it used to take me almost 40 minutes to compress the hiveramfs.tar.xz file back up whenever I updated it). Since I modified my scripts, that change looks to have been rolled back into the official diskless PXE script, and they’ve now standardised on it (when I download the new scripts, this is already included).
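
Roughly speaking, and not quoting the script verbatim, the difference is between piping tar through single-threaded xz and handing tar a parallel compressor, i.e. something like:
tar -C root -cpf - . | xz > hiveramfs.tar.xz     # single-threaded
tar -C root -I pxz -cpf - . > hiveramfs.tar.xz   # parallel, uses all cores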

Thus, to your point and question about including your own apps - absolutely you can.

Again, you just have to dissect their scripts to be able to do that.

Yeah. I prefer this approach because I have a centrally managed PXE boot image.

And so long as you take some necessary and (pre-)cautionary steps to secure that image (and/or periodically back it up), then if, say, you have a security breach, it’s really easy to “repair” your mining rig (and/or your entire mining farm): just reboot your systems back to the secured, original PXE boot image, and your mining rigs should be back up in a few minutes (as fast as they can reboot, basically).

So this is an amazing feature, and I kind of wish that more crypto YouTubers actually USED it (and talked about it more) rather than just buying a bunch of cheap, low-capacity Kingston SATA SSDs and flashing the HiveOS image onto those.

@hiveon
Have you or anybody else ever tried moving to pixz instead of pxz for the parallel compression and parallel decompression of the boot archive (i.e. moving from hiveramfs.tar.xz to hiveramfs.tpxz)?

I tried dissecting the scripts and I can’t seem to find the part where the system knows to use tar to extract the hiveramfs.tar.xz file into tmpfs.

I’ve tried looking in /path/to/pxeserver/tftp and also in /path/to/pxeserver/hiveramfs, and I wasn’t able to find where the instruction and/or command to unpack hiveramfs.tar.xz is codified.

If you can provide some guidance as to where in the startup script the client is instructed to decompress and unpack hiveramfs.tar.xz, that would be greatly appreciated.

Thank you.

edit
I’ve now implemented pixz for both the parallel compression (creation of the boot archive hiveramfs.tpxz) and the parallel decompression of the same.

It replaces the boot archive hiveramfs.tar.xz.

The PXE server host, if you are running an Ubuntu PXE boot server, will need to have pixz installed (which you can get by running sudo apt install -y pixz, so it’s pretty easy to install).

The primary motivation for this is on the mining rig side: depending on the CPU that you have in it, you will usually have excess CPU capacity at boot time, so if you can decompress the hiveramfs archive in parallel, you can get your mining rig up and running that much quicker.

A side benefit is that, in the management of the hiveramfs image on the PXE server, pixz also worked out to be faster than pxz at creating the FS archive.

Tested on my PXE server, which has a Celeron J3455 (4-core, 1.5 GHz base clock): pxz compressed the FS archive in 11 minutes 2 seconds, whilst pixz was able to complete the same task (on a fresh install of the HiveOS PXE server) in 8 minutes 57 seconds. (Sidebar: for reference, when using only xz (without the parallelisation), it used to take my system somewhere between 40 and 41 minutes to create the FS archive.)

On my mining rig, which has a Core i5-6500T, it takes about 8.70 seconds to decompress hiveramfs.tpxz to hiveramfs.tar and then about another 1.01 seconds to unpack the tarball.

Unfortunately, I don’t have the benchmarking data for how long it took my mining rig to decompress and unpack the hiveramfs.tar.xz file.
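
If you want to gather the same kind of numbers on your own hardware, timing the two steps separately with something like the following should give comparable figures (file names assumed to match the ones above):
time pixz -d hiveramfs.tpxz   # writes hiveramfs.tar
time tar -xf hiveramfs.tar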

Here are the steps to deploy pixz and use it to replace pxz.

On the PXE server, install pixz:
sudo apt install -y pixz

Run pxe-config.sh to specify your farm hash, server IPv4 address, etc., and also to change the name of the FS archive from hiveramfs.tar.xz to hiveramfs.tpxz.

DO NOT RUN the HiveOS update/upgrade yet!!!

When it asks if you want to upgrade HiveOS, type n for no.

For safety/security, make a backup copy of the initial hiveramfs.tar.xz file that can be found in /path/to/pxeserver/hiveramfs.

(For me, I just ran sudo cp hiveramfs.tar.xz hiveramfs.tar.xz.backup.)

You will need to manually create the initial hiveramfs.tpxz file that the system will act upon next when you run the hive-upgrade.sh script.

To do that, run the following:

/path/to/pxeserver$ sudo mkdir -p tmp/root
/path/to/pxeserver$ cd tmp/root
/path/to/pxeserver/tmp/root$ cp ../../hiveramfs/hiveramfs.tar.xz .
/path/to/pxeserver/tmp/root$ tar --lzma -xf hiveramfs.tar.xz
/path/to/pxeserver/tmp/root$ rm hiveramfs.tar.xz
/path/to/pxeserver/tmp/root$ tar -I pixz -cf ../hiveramfs.tpxz .
/path/to/pxeserver/tmp/root$ cd ..
/path/to/pxeserver/tmp$ cp hiveramfs.tpxz ../hiveramfs
/path/to/pxeserver/tmp$ cd ../hiveramfs
/path/to/pxeserver/hiveramfs$ cp hiveramfs.tpxz hiveramfs.tpxz.backup

(The rm of hiveramfs.tar.xz is there so that the copied-in archive doesn’t end up packed inside the new hiveramfs.tpxz.)

Now, edit the pxe-config.sh:
at about line 51, it should say something like:
#adde pxz (typo included)

copy lines 51-53 and paste them after line 53

(basically, add an i so that where it said pxz it now says pixz instead)
edit the new lines to read:
#adde pixz
dpkg -s pixz > /dev/null 2>&1
[[ $? -ne 0 ]] && need_install="$need_install pixz"

save, quit
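
If I’m reading the script correctly, that section of pxe-config.sh should end up looking roughly like this (the first three lines are the existing pxz block, the last three are the new copy):
#adde pxz
dpkg -s pxz > /dev/null 2>&1
[[ $? -ne 0 ]] && need_install="$need_install pxz"
#adde pixz
dpkg -s pixz > /dev/null 2>&1
[[ $? -ne 0 ]] && need_install="$need_install pixz"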

Run pxe-config.sh again.

DO NOT RUN the HiveOS update/upgrade yet!!!

Now, your farm hash, IP address, etc. should all have been set previously. Again, when it asks you if you want to upgrade HiveOS, type n for no.

Now, we are going to make a bunch of updates to hive-upgrade.sh.

(For me, I still use vi, but you can use whatever text editor you want.)

/path/to/pxeserver$ sudo vi hive-upgrade.sh
at line 71, add pixz to the end of the line so that the new line 71 would read:
apt install -y pv pixz

I haven’t been able to figure out how to decompress the hiveramfs.tpxz archive and unpack it in the same line.

(I was also unable to get pv working properly so that it would show the progress indicator, so if someone smarter than I am can help figure that out, that would be greatly appreciated. In the absence of said progress indicator, you can also remote into your PXE server in another terminal window and run top to make sure that it is working.)
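
One possible way to get both the progress bar and a single-step decompress-and-unpack (just a sketch, I haven’t verified it on my own setup) is to let tar drive pixz as its decompressor, since tar’s -I option passes -d along when extracting:
# untested sketch: pv shows progress on the archive read, pixz does the
# parallel decompression, tar unpacks, all in one pass
pv $FS | tar -I pixz -xf -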

So the section starting at line 79 echo -e "> Extract Hive FS to tmp dir" now reads:

line80: #pv $FS | tar --lzma -xf -
line81: cp $FS .
line82: pixz -d $ARCH_NAME
line83: tar -xf hiveramfs.tar .
line84: rm hiveramfs.tar

Line84 is needed because otherwise, when you go to create the archive, it will try to compress the old hiveramfs.tar into it as well, and you don’t need that.

Now fast forward to the section that creates the archive (around line 121), where it says:
line121: echo -e "> Create FS archive"
line122: #tar -C root -I pxz -cpf - . | pv -s $arch_size | cat > $ARCH_NAME
line123: tar -C root -I pixz -cpf - . | pv -s $arch_size | cat > $ARCH_NAME

(in other words, copy that line, paste it, comment out the old line, and add an i to the new line.)

line125 is still the old line that used the single-threaded xz compression algorithm/tool, which should already be commented out for you.

The rest of the hive-upgrade.sh should be fine. You shouldn’t have to touch/update the rest of it.

Now you can run hive-upgrade.sh:
/path/to/pxeserver$ sudo ./hive-upgrade.sh

and check to make sure that it is copying hiveramfs.tpxz from /path/to/pxeserver/hiveramfs to /path/to/pxeserver/tmp/root, decompressing the archive, and unpacking the files properly.

If it does that properly, then the updating portion should run fine, without any issues (or none that I observed).

Then the next section that you want to check is the repacking: make sure that when it compresses the archive back up, that works properly for you as well.
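
A quick sanity check on the repacked archive (not part of the script, just something you can run by hand from /path/to/pxeserver/hiveramfs) is to make sure that tar can read the whole thing back:
tar -I pixz -tf hiveramfs.tpxz > /dev/null && echo "archive reads back OK"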

Again, it is useful/helpful to have a second terminal window open, ssh’d into the PXE server, with top running so that you can make sure that the pixz process is running.

After that is done, you can reboot your mining rig to make sure that it picks up the new hiveramfs.tpxz file OK and that it also successfully decompresses and unpacks the archive.

I have NO idea how it is doing that, because normally I would have to issue that as two separate commands, but again, it appears to be working on my mining rig.

shrug

It’s working.

I don’t know/understand why/how.

But I’m not going to mess with it too much to try and figure out why/how it works, because it IS working.

(Again, if there are other people smarter than I am who might be able to explain how it is able to decompress and unpack a .tpxz file, I would be interested in learning; but on the other hand, like I said, my mining rig is up with the new setup, so I’m going to leave it here.)
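
My best guess, for whatever it’s worth: pixz writes standard xz-format streams (just with an extra index tacked on), so anything that can read .xz, including tar’s automatic compression detection, can still unpack a .tpxz, only without the parallelism. A quick way to check what the file actually is:
file hiveramfs.tpxz   # should report something like "XZ compressed data"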

Feel free to ask questions if you want to implement pixz so that you can have faster compression and decompression times.

If your PXE server is fast enough that pxz already works for you and this isn’t going to make enough of a difference, then that’s fine. That’s up to you.

For me, my PXE server, running on a Celeron J3455, is quite slow, so anything that I can do to speed things up a little bit is still a speed-up.

Thanks.

Hi. Can you share a link to the image of the latest version 6.5?
I would like to install it locally on a few machines because of kernel 5.15.

Thank you

Hi there. I recently tried this and it worked great; however, I can’t get CPU mining to work when the machine PXE boots. Is there a way to enable this so I can mine XMR from a PXE-booted machine, or is it just GPU mining?

Thanks

You can mine XMR with your CPU.

Set it up in your flight sheet.

(Add a second miner, make sure your XMR wallet and pool information is already set up, then set the cryptocurrency to XMR, give it your XMR wallet address from wherever you might obtain one, the pool, and the miner that you want to use, and away you go.)

I’ve done it before and it works.

(Except that there’s another current issue that I am about to post about, but other than that, it works.)

Sidebar, for the benefit of the team here:

Ever since I implemented pixz for both parallel compression and decompression of the archive, whenever I’m updating the PXE boot server, I’ve been able to cut the time it takes to prepare the hiveramfs archive file from ~40-41 minutes on my Celeron J3455 to 1 minute 41 seconds on my AMD Ryzen 9 5900HX (where I run Ubuntu in a virtual machine).

The AMD Ryzen 9 5900HX Ubuntu VM will usually take about 24.27 seconds to decompress the hiveramfs.tpxz file and then another 1 minute 8 seconds to unpack the tarball.

But this shows that, depending on the processor hosting the PXE server, if it is fast enough, the upgrade can be performed very quickly with pixz.

Just wanted to add an additional data point on the time savings that can be gained by switching to pixz instead of pxz or the original PXE boot script (which had no parallelisation on either the compression or the decompression side of things).

Thanks.

Hi, when I reboot the diskless PXE server, my rigs no longer boot over the network, as if there were no PXE server, even when I rerun the pxe-config.sh script (it tells me everything is OK and the server is ready to work).
Does anybody else have this problem or know how to fix it?

So a couple of things that you will want to check:

The most “common” problem that I have experienced whenever the client fails to connect to the PXE server is a port conflict between atftpd and dnsmasq. Both want to use the TFTP port, which is UDP port 69, so they conflict with each other.

Usually when that happens, I will check to see if that’s the case by running sudo systemctl status atftpd and sudo systemctl status dnsmasq.

If there is a port conflict, it should tell you in the terminal output.
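
An extra check that isn’t part of the HiveOS scripts: you can also ask the system directly which process is sitting on the TFTP port:
sudo ss -lunp | grep ':69'   # list UDP listeners and their owning processes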

And then, if that’s the case, I would stop both services using sudo systemctl stop atftpd and sudo systemctl stop dnsmasq.

Sometimes, restarting them manually one by one will “unclog” the port conflict, but I have found that that doesn’t always work.

If, after (re-)starting both services, the status still says that there’s a port conflict, then I would stop them again and run pxe-config.sh, because at the end of that script it will start those services for you.

Once you’ve done that, check the status of those services again to make sure that both report as active, with no other issues like said port conflict.
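
Spelled out as commands (assuming you are in the PXE server folder), the whole sequence looks like this:
sudo systemctl stop atftpd dnsmasq
sudo ./pxe-config.sh              # restarts both services at the end
sudo systemctl status atftpd dnsmasq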

At that point, you should be able to either reboot or power-cycle your client rig and it should pick up your PXE server.

I’ve also had it happen, I think once, where nginx failed as well, so I had to check the status of that on the PXE server, stop and (re-)start it as necessary, and then reboot and/or power-cycle the client rig again to make sure that it was able to connect and continue the boot process.

Give that a shot and hopefully it helps fix your problem.

As a side note, it appears that the latest update might have “broken” the PXE image. I’m not sure, but I ran the update on my PXE server, and when I rebooted my rig, all 5 of my GPUs reported back as “MALFUNCTION” rather than showing what type of GPU they are.

But if I revert back to one of my earlier hiveramfs.tpxz backup images, then it is able to run ok again.

So I don’t know what that’s about.

Just be aware that this can sometimes happen with updates and/or reboots as well.

And my fix for a failed update like that is to just copy the backup image and replace the updated image with it, using:

$ cd <<your_pxe_server_folder_on_pxe_server>>
pxeserver$ cd backup_fs
backup_fs$ cp <<last_known_good_hiveramfs.tpxz.date.bak>> ../hiveramfs/hiveramfs.tpxz

Sidebar: I’m using the .tpxz format, as I’ve written about above, in order to support both parallel compression and parallel decompression. My PXE server can update the archive in 1 minute 50 seconds now, although the entire process of running hive-upgrade.sh probably takes about 10 minutes all told (including decompression, unpacking, updating, compressing, verifying, and replacing the backup hiveramfs file). The compression phase used to take the longest; pixz writing out to the .tpxz format fixed that.

and then you should be able to reboot/power-cycle your system again and then it should work for you again.

(Or at least that’s what I’ve been doing since the end of July.)

Sidebar: This assumes that your dnsmasq isn’t ACTUALLY working as a DHCP server (or at least that is my experience). My router is what dishes out the IPv4 addresses, and then I set the static IPv4 address when I set up the client, I think via the HiveOS web interface. I think it happened to me once before that I tried to make dnsmasq be the DHCP server as well, and the PXE client ran into issues trying to get an IPv4 address because it couldn’t figure out whether it should be getting it from my router or from the PXE server. So now, to make it more robust, I let my router handle dishing out IPv4 addresses, and I think I turned off the DHCP part in the dnsmasq config file. I mention this because it can also be a potential source of conflict, or a reason why your system might not work. It depends, but I thought that I would mention it anyway.
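
For reference (this is me poking at a generic dnsmasq setup, not necessarily what the HiveOS script writes out), a quick way to see whether your dnsmasq is actually configured to hand out leases is to look for a dhcp-range line:
# a plain dhcp-range=<start>,<end> means dnsmasq is acting as a full DHCP server;
# a dhcp-range ending in ,proxy only adds the PXE boot info on top of the
# router's leases, which is what you want when the router stays the DHCP server
grep -rn 'dhcp-range' /etc/dnsmasq.conf /etc/dnsmasq.d/ 2>/dev/null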

I updated the settings to build the ubuntu20 image and have it served properly to a client. I would like to have all updates available with a selfupgrade. However, running the command ./deploy_pxe ubuntu20 --selfupgrade followed by ./deploy_pxe ubuntu20 --upgrade does NOT install the updated packages…

When I try to run hive-upgrade.sh, I’m told this is deprecated and it exits. What is the proper way to get the Hive packages updated?

Hello, is there any timeline for when more AMD driver versions will be supported? I need version 5.4.6 (6.1.2307), not 5.4.6 (6.2.2311.1). The latter doesn’t work for certain AMD cards; other than that, it works pretty well.

Is there a way I can install hiveos-0.6-222-stable instead of ubuntu20? The AMD driver doesn’t play well with the BC-250.

Which one for example?

I’m having trouble with the BC-250. It works with the following but it will not init the GPU on the version used in diskless PXE:

Kernel version: 5.15.0-hiveos #110
Driver: A22.20.5 (5.18.2301)
GPU: BC-250 × 1

Yes, the BC-250 has issues (amdgpu failed to read reg:ec48). I’ve tried modifying the deploy_pxe file to force-install other AMD GPU driver versions, but none work and all give the same error.

Is there a repo with the older diskless kernels so that I can have it pull down one of those versions? For instance, they work fine with Hive on Ubuntu 18. If the diskless kernel for that one were available, I could just change the repo URL and have it install that one instead.


For anyone who wants to get the BC-250 working: I’ve made a modified version.

TheJames, is it possible to make a PXE boot version of the beta HiveOS image? There are a few coins that require it, but it is impossible to do a hive-replace to beta on PXE. Thanks.