How to use the HP_RECOVERY partition on Windows 7

Things get broken, things get tainted… Windows 7 is no longer supported, but I still have some PCs running it. And, stupid me, one of them didn’t have a backup copy of the initial working system, since it was installed in 2014. I did find the original CD, but it was… let’s say… unreadable. Actually, the metallic coating of the CD had peeled off.

Time to recover it. My first attempt was to install a new W.7 system on a new disk and use the activation code on the sticker to activate it. Unfortunately an OA (OEM Activation) Windows can’t be re-installed that way, so I ended up with a brand new, unregistered Windows 7. I tried with different dumped Windows 7 versions and with a Windows 10, with partial success. But I don’t want to write about that. The thing is, I was trying to solve the problem the wrong way. With a working OS I can read the “damaged” system disk, and I see that it comes with a partition called HP_recovery. How should I have known? I never used the computer! They never called me to check it before! No one cared until now! 😔😔😔

Anyway, the super-expensive hardware is connected to an HP desktop, so the best option is to use the HP system recovery for Windows 7. Basically, since I can’t boot, nor recover my install from a recovery point, I’m forced to perform a full HP recovery. This takes time and it’s destructive, so first I clone the original disk, in case I need to extract some license file from it, then I boot the computer pressing F11 repeatedly. At some point an HP splash screen appears and I see some shell scripts running, then I need to wait (maybe you don’t) until a new window pops up. Sorry guys, no screenshot, but there are not many options in the new window: repair by wiping everything, or cancel. I choose the first, click on I accept all the risks (nice) and wait a few hours until I magically have a W.7 back. This time, with license. Now I can tell the user to install his programs to control the hardware, or contact the company that supplies it. And I wonder… for how long is this agony called W.7 going to be with us? 😔😔😔

GPFS : cannot delete file system / Failed to read a file system descriptor / Wrong medium type

A nice and mysterious title. As usual. 😁. But you know what I’m speaking about. I had one GPFS share that crashed due to a failing NVMe disk. The GPFS share was composed of several disks, and configured without any redundancy, as a scratch disk. Being naive, I thought that simply replacing the failed disk with a new one (that I will call Disk here) and rebooting everything would bring my GPFS share back. It didn’t work, and I learned some new things that I’d like to show you.

To recover my share gpfsshare after replacing the disk, I first tried changing the disk names in the StanzaFile and re-creating the NSDs by running mmcrnsd -F StanzaFile. The new disk Disk joined, but the share was still not usable.
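For reference, the StanzaFile is just a plain-text description of the NSDs. A minimal sketch of one entry; the device path, server name and failure group here are made up, adapt them to your cluster:

```
%nsd:
  device=/dev/nvme0n1
  nsd=Disk
  servers=gpfs
  usage=dataAndMetadata
  failureGroup=1
  pool=system
```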

Then I tried to change the mount point of gpfsshare from /mountpoint to /newmountpoint:

root@gpfs ~ ## > mmchfs gpfsshare -T /newmountpoint
Verifying file system configuration information ...
Disk Disk: Incompatible file system descriptor version or not formatted.
Failed to read a file system descriptor.
Wrong medium type
mmchfs: Failed to collect required file system attributes.
mmchfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

The next thing I thought of doing was deleting Disk from gpfsshare. That way I would end up with a GPFS share smaller than the original, but functional.

root@gpfs ~ ## > mmdeldisk gpfsshare Disk
Verifying file system configuration information …
Too many disks are unavailable.
Some file system data are inaccessible at this time.
Check error log for additional information.
Too many disks are unavailable.
Some file system data are inaccessible at this time.
mmdeldisk: Failed to collect required file system attributes.
mmdeldisk: Unexpected error from reconcileSdrfsWithDaemon.
Return code: 1
mmdeldisk: Attention:
File system gpfsshare may have some disks
that are in a non-ready state.
Issue the command:
mmcommon recoverfs gpfsshare
mmdeldisk: Command failed.
Examine previous error messages to determine cause.

Let’s list our NSD disks to see what we have. We can list them with mmlsnsd, or add -X for extended information. This is my output (edited):

root@gpfs ~ ## > mmlsnsd

File system | Disk name | NSD servers
gpfsshare Disk
gpfsshare Disk_old1
gpfsshare Disk_old2
(free disk) Disk_A
(free disk) Disk_B
(free disk) Disk_C

Since everything is looking awful here, I will delete my gpfsshare filesystem and create it anew. Actually, I need to force the deletion. Let me show you.

root@gpfs ## > mmdelfs gpfsshare
Disk Disk: Incompatible file system descriptor version or not formatted.
Failed to read a file system descriptor.
Wrong medium type
mmdelfs: tsdelfs failed.
mmdelfs: Command failed. Examine previous error messages to determine cause.
root@gpfs ## > mmdelfs gpfsshare -p
Disk Disk: Incompatible file system descriptor version or not formatted.
Failed to read a file system descriptor.
Wrong medium type
mmdelfs: Attention: Not all disks were marked as available.
mmdelfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

We managed to delete the filesystem! But what happened to our NSD disks? Let’s use mmlsnsd to see where they stand:

root@gpfs ~ ## > mmlsnsd

File system | Disk name | NSD servers
(free disk) Disk
(free disk) Disk_old1
(free disk) Disk_old2
(free disk) Disk_A
(free disk) Disk_B
(free disk) Disk_C

So the disks are there. The filesystem gpfsshare is gone, so they are marked as (free disk). Let’s then delete the NSD disks. We need to do it one by one. I show the output for the disk Disk.

root@gpfs ## > mmdelnsd Disk
mmdelnsd: Processing disk Disk
mmdelnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Once we have removed all our disks (check with mmlsnsd) we can add them again using the original StanzaFile (mmcrnsd -F StanzaFile). If some disks are still registered, don’t worry: mmcrnsd will work anyway. A standard output for our disks looks like this:

root@gpfs ## > mmcrnsd -F StanzaFile
mmcrnsd: Processing disk Disk_A
mmcrnsd: Disk name Disk_A is already registered for use by GPFS.
mmcrnsd: Processing disk Disk_B
mmcrnsd: Disk name Disk_B is already registered for use by GPFS.
mmcrnsd: Processing disk Disk_C
mmcrnsd: Disk name Disk_C is already registered for use by GPFS.
mmcrnsd: Processing disk Disk
mmcrnsd: Processing disk Disk_old1
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Time to re-create the GPFS filesystem. We can give it the same name as before 😉, if we like. It works now! I changed nothing with respect to the command I originally used to create my share gpfsshare. This one:

mmcrfs gpfsshare -F StanzaFile -T /mountpoint -m 2 -M 3 -i 4096 -A yes -Q no -S relatime -E no --version=5.X.Y.Z

If you get an error like this:

Unable to open disk 'Disk_A' on node
No such device

Check that GPFS is running on the corresponding node and run mmcrfs again.

I hope you have learned something already. But for my notes: first delete the filesystem (mmdelfs gpfsshare -p), then delete the NSD disks (mmdelnsd Disk), then add the disks again (mmcrnsd -F StanzaFile), and finally create a new filesystem (mmcrfs). Take care!
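The recipe above can be sketched as a small wrapper of my own (not a GPFS tool). The filesystem name, NSD names and StanzaFile path are placeholders, and with DRY_RUN=1 it only prints the mm* commands instead of executing them:

```shell
# Print (DRY_RUN=1) or execute a command.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

# Rebuild a scratch GPFS filesystem from its StanzaFile, losing all data.
gpfs_rebuild() {
  fs=$1; stanza=$2; shift 2
  run mmdelfs "$fs" -p              # force-delete the broken filesystem
  for d in "$@"; do
    run mmdelnsd "$d"               # delete the NSDs one by one
  done
  run mmcrnsd -F "$stanza"          # re-create the NSDs from the stanza file
  run mmcrfs "$fs" -F "$stanza" -T /mountpoint   # re-create the filesystem
}

# DRY_RUN=1 gpfs_rebuild gpfsshare StanzaFile Disk Disk_old1 Disk_old2
```

Run it first with DRY_RUN=1 and read the printed commands before letting it touch a real cluster.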

DELL chassis management controller website not functioning

And it’s OK. It’s a 7-year-old CMC and it was not taken care of as it deserved. Computers are like people: if you don’t care about their issues, they end up not speaking to you. If you want a computer to speak to you again, sometimes it’s enough to do a factory reset, or a reinstall of the OS (don’t try that with a person). There’s a thread on DELL support that explains how to reset a CMC. Let’s ssh to it and see what we get.

ssh's password:
X11 forwarding request failed on channel 0

Welcome to PowerEdge M1000e CMC firmware version 6.21

$ racadm getversion
<Server> <iDRAC Version> <Blade Type> <Gen> <Updatable>
server-1 (02) PowerEdge M630 iDRAC8 Y
server-2 (02) PowerEdge M630 iDRAC8 Y
server-3 (02) PowerEdge M630 iDRAC8 Y
server-4 (02) PowerEdge M630 iDRAC8 Y
server-5 (02) PowerEdge M630 iDRAC8 Y
server-6 (02) PowerEdge M630 iDRAC8 Y
server-7 (02) PowerEdge M630 iDRAC8 Y
server-8 (02) PowerEdge M630 iDRAC8 Y
server-9 (02) PowerEdge M630 iDRAC8 Y
server-10 (02) PowerEdge M630 iDRAC8 Y
server-11 3.65 (Build 6) PowerEdgeM610 iDRAC6 Y
server-12 3.65 (Build 6) PowerEdgeM610 iDRAC6 Y
server-13 3.85 (Build 3) PowerEdgeM610 iDRAC6 Y
server-14 3.85 (Build 3) PowerEdgeM610 iDRAC6 Y
server-15 3.85 (Build 3) PowerEdgeM610 iDRAC6 Y
server-16 3.85 (Build 3) PowerEdgeM610 iDRAC6 Y

<Switch> <Model Name> <HW Version> <FW Version>
switch-1 M8024-k 10GbE SW A06
switch-5 M4001F FDR IB Switch A01 N/A

Yeah, that’s all we got. We can reset each management system independently, but we want to go for a full reset of the chassis itself. For that we do

$ racadm racresetcfg
The configuration has initiated restoration to factory defaults.
CMC reset operation initiated successfully.
It may take up to a minute
for the CMC to come back online again.

and wait around 10 minutes. After that, the CMC and the web come back to life, but with

The default name and password are root and calvin; the IP address is back to the factory default.

Info taken from the manual. To correct this and set up a new IP for the CMC, I go to the machine itself and use the LCD display. BTW, here you have a list of racadm commands. Because you never know when you’ll need them.
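If you still have ssh access, you can skip the trip to the LCD display: racadm itself can set a static IP. A command sketch, to be checked against your firmware’s racadm help; the addresses are placeholders:

```
$ racadm setniccfg -s 192.0.2.10 255.255.255.0 192.0.2.1
$ racadm getniccfg
```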

Notes: MacBook Pro maintenance

A capture from StatsWidgetPlus. Image taken from here.

I don’t know if I’m lucky or not, but I have a nice MacBook Pro from 2017 for home office. I’ve been working with it for quite some time already, but it’s a refurbished one, meaning it already had a problem once. So I need to keep a constant eye on its performance. The best way of doing so is by using the available apps to monitor usage stats.

I run Big Sur 11.3 at this moment. This is my collection of apps, with the corresponding feedback from my usage experience.

  • StatsWidgetPlus. Once installed, you get a nice widget like the one above where you can monitor the 4 main parameters: CPU, Memory, Network, and Disk. Thanks to this one, I saw that my C++ app (see my previous posts) was demanding many more resources than I’d like.
  • Disk Space Analyzer Free. You know the pie charts of disk space given by programs like baobab in Linux? This app does just that. You will need to give it access to your disk, and after that you will have a clickable pie chart of disk usage. The colour scheme is great, too. Thanks to this one, I found that I was storing old data on my computer that I thought I had already deleted. Note that it tells you how big it is, not what it is 😁
  • Macs Fan Control. Do you think that, for unknown reasons, your MacBook is making more noise than needed? You should try this one. It’s not available on the App Store (the previous ones are) so here is the download link. I have used previous incarnations for years, and I never had an issue with it. I mean, I’ve never blown up a laptop by forcing it to go to lower RPMs. Besides, it also gives you the CPU temperature, so you know where you stand 😉.

In general: be aware of your memory usage or you will get overall sluggish performance, keep your disk usage below 70% or you will not be able to index files as quickly as you may need, and keep an eye on fan activity after an OS update, since more than once an update resulted in abnormal fan noise in my case. There’s a procedure to fix a fan that seems to be overworking (reboot, flush hardware, clean OS reinstall, and such) but the simplest and fastest for me so far was to use Macs Fan Control.

As I tend to say, I hope you don’t need to read this post at all 😊.

Hardware tests with FC 33 Live

Yesterday I showed you how to burn a pendrive on your Mac. Today I plugged the key into the ailing server to test the hardware, as planned. This is what I did.

  1. Plug the key, boot the server, choose the one-time boot option for booting from the pendrive
  2. Open a shell and install the missing tool
sudo su -
yum install smartmontools
yum install memtester

Now comes the fun. First let’s test the hard disks for errors. Usually you have more than one on a server (or is it just me?) so first we list them. I’m going to suppose we have one at /dev/sda. It could be that we need to run a self-test before we can read the test results; we do that after the first smartctl run.

fdisk -l
smartctl -a /dev/sda

Here is the run of the test, and a sample output. As usual, edited for better reading.

smartctl -t short /dev/sda
smartctl 7.2 2021-01-17 r5171
(local build)
Copyright (C) 2002-20,
Bruce Allen, Christian Franke,

Sending command: "Execute SMART Short self-test routine
immediately in off-line mode".
Drive command "Execute SMART Short self-test routine
immediately in off-line mode" successful.
Testing has begun.
Please wait 2 minutes for test to complete.
Test will complete after Wed Apr 28 06:05:23 2021 EDT
Use smartctl -X to abort test.

smartctl -l selftest /dev/sda
smartctl 7.2 2021-01-17 r5171
(local build)
Copyright (C) 2002-20,
Bruce Allen, Christian Franke,

SMART Self-test log structure revision number 1
Num Test_Description Status
Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 15961 -

You can find more help on this post. And here you can learn if your hard drive is failing. Probably not: these tests tend to scare you by flagging attributes as Old_age or Pre-fail, but the truth is the disk may still be OK. More about how to interpret the outcome here.
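If you have several disks, reading every self-test log by eye gets old. A tiny helper of my own (not part of smartmontools) that just greps the log text, so it works on saved logs too:

```shell
# Return success if the SMART self-test log in the given file
# contains at least one test that completed cleanly.
smart_selftest_ok() {
  grep -q 'Completed without error' "$1"
}

# Typical use (sketch):
#   smartctl -l selftest /dev/sda > /tmp/sda.log
#   smart_selftest_ok /tmp/sda.log && echo "sda OK"
```

It only looks at the self-test status line, not at the attribute table, so it complements rather than replaces reading smartctl -a.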

Now let’s go for the RAM test. How do you test the RAM on Linux? With memtester. We installed it at the top of the post. I will show you a standard output:

# memtester 1024 2
memtester version 4.3.0 (64-bit)
Copyright (C) 2001-2012 Charles Cazabon.
Licensed under the GNU General Public License version 2 (only).

pagesize is 4096
pagesizemask is 0xfffffffffffff000
want 1024MB (1073741824 bytes)
got 1024MB (1073741824 bytes), trying mlock ...locked.
Loop 1/2:
Stuck Address : ok
Random Value : ok
Compare XOR : ok
Compare SUB : ok
Compare MUL : ok
Compare DIV : ok
Compare OR : ok
Compare AND : ok
Sequential Increment: ok
Solid Bits : ok
Block Sequential : ok
Checkerboard : ok
Bit Spread : ok
Bit Flip : ok
Walking Ones : setting 45
Walking Zeroes : ok
8-bit Writes : ok
16-bit Writes : ok

Loop 2/2:
Stuck Address : ok
Random Value : ok
Compare XOR : ok
Compare SUB : ok
Compare MUL : ok
Compare DIV : ok
Compare OR : ok
Compare AND : ok
Sequential Increment: ok
Solid Bits : ok
Block Sequential : ok
Checkerboard : ok
Bit Spread : ok
Bit Flip : ok
Walking Ones : ok
Walking Zeroes : ok
8-bit Writes : ok
16-bit Writes : ok


As you see, for 1024 MB everything is OK. Now, you may have problems if you try to allocate all the RAM of the server, like mlock -> Too many pages requested. Check this post about the issue. I solved it by rebooting (so that the pages really get freed) and limiting the test to 90% of all the RAM available. Yes, we don’t test it all, but I feel quite confident about it. If you don’t like memtester, grab an Ubuntu image and use the memtest option in the boot menu. Or, without leaving the live distribution we are running, we can try 5 commands to check the memory. Or Test Memory Through Linux File Cache with On-Disk Temporary File. Also known as dd on RAM.
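Instead of computing the 90% figure by hand, something like this reads MemAvailable from /proc/meminfo (present on any modern kernel); memtester takes the amount in MB:

```shell
# Print roughly 90% of the currently available RAM, in MB.
mem90() {
  awk '/^MemAvailable:/ { print int($2 * 0.9 / 1024) }' /proc/meminfo
}

# memtester "$(mem90)" 2   # two loops over ~90% of the available RAM
```

Run it right after the reboot, before other processes grab memory, so the figure is as close to the full RAM as possible.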

Unfortunately, my server passed all the tests with flying colours. So my problem is somewhere else 😩😩😩. Life is so frustrating sometimes 😔😔😔.

VLAN configuration on CentOS 7.X

This is a recurrent problem. I’m moving one server to a tagged VLAN, so that we can use 2 networks through one interface. I start with ssh access to the server through the interface p3p1, and I configure the VLAN through nm-connection-editor (image above taken from the official RH documentation), naming it vlanONE and linking it to the online Device p3p1. Then I reboot and change the port configuration so that the switch gives me the tagged VLANs I’m interested in. Unfortunately the server doesn’t come back online. So I need to physically go to it (which is a problem nowadays, of course) and change the network configuration. Once I’m logged in, I go to where the network scripts live and check the routes. Like this:

root@server ~ ## > cd /etc/sysconfig/network-scripts/
root@server ~ ## > ip r s
default via dev p3p1 proto static metric 400
default via dev vlanONE proto static metric 400
dev vlan132 proto kernel scope link src metric 400

You probably already saw the problem. There are two default routes. Let’s delete the default on the physical interface p3p1, then list the routes, then deactivate it, and show it’s not there.

root@server ~ ## ip route del default via dev p3p1
root@server ~ ## ip r s
default via dev vlanONE proto static metric 400
dev vlan132 proto kernel scope link src metric 400
root@server ~ ## ifdown p3p1
~ ## nmcli con show
vlan132 UUID-1 vlan vlanONE
Wired1 UUID-2 ethernet --
Wired2 UUID-3 ethernet --

We could also bring the vlanONE up and down with the nmcli command.

root@server ~ ## nmcli con down vlanONE
root@server ~ ## nmcli con up vlanONE

As usual, the IPs and routes are not real; they have been modified for the article. Now I reboot and the server comes back online, as it should 🙂
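To make the fix survive reboots, the physical interface should not carry a default route at all. A sketch of the two ifcfg files; names and values are placeholders (the VLAN id 132 is a guess taken from the connection name above), and the exact keys should be checked against your NetworkManager version:

```
# /etc/sysconfig/network-scripts/ifcfg-p3p1  (physical interface, no default route)
DEVICE=p3p1
ONBOOT=yes
BOOTPROTO=none
DEFROUTE=no

# /etc/sysconfig/network-scripts/ifcfg-vlanONE  (tagged interface carries the default route)
DEVICE=vlanONE
PHYSDEV=p3p1
VLAN=yes
VLAN_ID=132
ONBOOT=yes
BOOTPROTO=none
DEFROUTE=yes
```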

Steam Hardware & Software Survey: January 2021

My screen capture

I’m tired of hearing the mantra “you can’t play games on your MacBook”. I do play: before with CDs, now with Steam. So when Steam offered me to participate in its hardware & software survey, I didn’t hesitate. In addition, I wanted to know how common my weapon of choice was amongst other gamers. Above you have the screenshot of the Steam results, available here.

My comments on what you see. Most people have an NVIDIA GPU. I’m in the 8.96% that does not. By choice, of course. I have access to the hardware if I want 🙂, after all, I’m a SysAdmin. I won’t comment on the second graph, since it’s for Windows. I was playing on a Windows laptop before, until I was hit by the obsolescence of my original CD games after my blind upgrade to 10. To work around it, I tried Windows 7 virtual machines and so on, but all of it ended up just annoying me. The graph of PC processors was not clear to me, so I was forced to click on the details to be sure of what I saw: a slow increase of AMD-powered gaming equipment. I wonder if this will change in the future… we have a new player here, the M1.

I’m going to comment on the last three graphs together. After all, I’m not writing a review but just my view of the results. I think 4 CPUs is a nice choice, but I don’t have that. I’m quite fine with my Dual Core, and as far as I remember, that was always the case. Maybe because I don’t play FPS like Elon Musk. Maybe because I don’t play VR at this moment, not that I don’t want to. Anyway I’m happy to find out that, at this moment, it’s quite common for OSX users to play on a MacBook Pro (52.6%). The problem? I will not be able to enjoy VR easily on my mac, at least, for the moment.

Install Android Things on a Raspberry Pi

the official logo

I don’t know why I started this project. Basically, I have a few raspberry pies (can I write it like that?) with touchscreens that I wanted to play with before giving them away. The default Raspberry Pi OS is quite neat, but not suitable for small touchscreens. I recently became fascinated by the IoT (do I need to explain what that means?) so I decided to create my own IoT center, maybe with some additional functions, like control over what is playing on my bluetooth speakers. Yes, a very ambitious developer I am, as Yoda could say.

I found this tutorial on the Android developer page. In short, I flash a system image onto an SD card, then install an app on it. This is my experience.

As most of you know, I’m using CentOS 7 and 8. I start by signing in to my Google account and accepting the licensing agreement and terms of service, then I unzip the downloaded archive on my CentOS 8 computer with a card reader. The execution goes like this:


Android Things Setup Utility (version 1.0.21)
This tool will help you install Android Things on your board
and set up Wi-Fi.

What do you want to do?
1 - Install Android Things and optionally set up Wi-Fi
2 - Set up Wi-Fi on an existing Android Things device
What hardware are you using?
1 - Raspberry Pi 3
2 - NXP Pico i.MX7D
You chose Raspberry Pi 3.

Setting up required tools...
Fetching additional configuration...
Downloading platform tools...
5.44 MB/5.44 MB
Unzipping platform tools...
Finished setting up required tools.

Raspberry Pi 3
Do you want to use the default image or a custom image?
1 - Default image: Used for development purposes.
No access to the Android
Things Console features such as metrics, crash reports,
and OTA updates.
2 - Custom image: Upload your custom image for
full device development and management with
all Android Things Console features.
Downloading Android Things image...
342 MB/342 MB
Unzipping image...

Downloading Etcher-cli, a tool to flash your SD card...
22.4 MB/22.4 MB
Unzipping Etcher-cli...

Plug the SD card into your computer. Press [Enter] when ready

Running Etcher-cli...
? Select drive /dev/sdb (7.9 GB) - SD/MMC CRW
? This will erase the selected drive. Are you sure? Yes
Flashing [========================] 100% eta 0s
Validating [========================] 100% eta 0s
iot_rpi3.img was successfully written to SD/MMC CRW (/dev/sdb)
Checksum: 2eba2225

If you have successfully installed Android Things on your SD card,
you can now put the SD card into the Raspberry Pi and power it up.
Otherwise you can abort and run the tool again.

Would you like to set up Wi-Fi on this device? (y/n) n

Easy as a pie! (you got the joke?). I take the card, put it in my pi, connect the network cable (I don’t set up the Wi-Fi) and simply boot it up. What appears on my touchscreen is a very simple series of menus. Like this (stolen from StackOverflow):

The androidthings on a pi.

The screen shows the IP given by my network. I now try to connect to the given IP through adb, just to find out I don’t have that command! We need, of course, to install the Android Studio IDE. Just google it and you will find it. The installation is described in the Install-Linux-tar.txt file, but basically you run the launcher script (inside bin) and follow the instructions. Once we are done (or if we have it installed already) we can get the SDK through the GUI by going to the menu Tools -> SDK Manager (the icon is a package) or through the command line with ~/Android/Sdk/tools/bin ## > ./sdkmanager. In my case I used the first. Select the second tab, SDK Tools, and select Android SDK Command-line Tools and related, if you don’t have them. When the installation is done, you should have adb in ~/Android/Sdk/platform-tools. I go there and finally run it like this:

 ~/Android/Sdk/platform-tools ## > ./adb connect RPI-IP
* daemon not running; starting now at tcp:5037
* daemon started successfully
connected to RPI-IP:5555

Obviously, RPI-IP is the IP of my raspberry. Time to download an APK (Android package) and install it on our pi. There are several app downloaders and I can’t recommend one single way: I used a Chrome APK downloader. Here you have how to install Spotify. I did it for Netflix in a similar way. My final commands look like this:

./adb connect RPI-IP
./adb shell am start -n
Starting: Intent {
.ui.launch.NetflixComLaunchActivity }

After these two, I see the splash screen of Netflix on my touchscreen. Unfortunately the apk doesn’t seem to work on this particular configuration, but it’s a good beginning. We’ll see how far I go with my project… 😛

Notes on HoloLens 2 for science

We have 3D monitors with shutter glasses to do the so-called structural analysis in native 3D. But this type of technology is on its way out. It’s difficult to buy 3D monitors and 3D Vision toolkits with emitters, so we are playing around to find a solution that works for several people at the same time. For bioscience, there is already HTC Vive software that permits visualization of molecules in VR. Unfortunately, we can’t afford to have a room blocked just for VR, so HoloLens looked like a more reasonable solution. I got an ad from our vendor announcing the release of HoloLens 2, so I ran quickly to check whether that amazing technology (yes, I have tested a working one) was ready to be pipelined easily for our specific “customer” needs.

It turned out it is not ready. The reason is, it is not yet interactive enough. It sounds funny, but my average user does not want only to see and zoom the 3D object together with others; he or she wants to modify it, apply masks, change colors, delete parts. Like some crazy type of Paint3D. And HoloLens 2 may have that, but it’s not able to import a PDB map and associated masks.

Now, about my experience with it. I followed this Step by Step HoloLens 1 tutorial from medium and, despite the look of the menus, it still works. In addition to Unity, I had to install the Windows 10 SDK and Visual Studio. Even so, at the Building phase I got the Selected Visual Studio is missing required components and may not be able to build error. The whole compilation part was annoying, and even though the result (a cube) was satisfactory, the whole administrator experience was not. There are too many pieces. Maybe with things like the MixedRealityToolkit-Unity the usage will become closer to what we want. Maybe not. I think the Interactable object Microsoft document will give you a clue of the current capabilities of the HoloLens 2. Sorry, but science is more interactive than pressing a button. See you on your next iteration, maybe.

IPMI reset through CentOS 7.8

We have an IPMI interface (Intelligent Platform Management Interface, a web-based remote management system) that failed to be updated to the latest version. As a result, ping to the address and web login are lost. I found this procedure to reset it to factory defaults from the working, installed OS, which for us is as usual CentOS 7.8. First we download the IPMI utilities from the provider (the Supermicro page is here). Note: you need to fill in a form. The same web page will give you information about what else you can do with the package, like requesting hardware status through the command line. But let’s go through the reset. Once the software is unpacked in the folder we want, we need to reset the broken configuration to factory defaults:

root@server ~/IPMICFG_XXX/ ## > ls
DOS/ IPMICFG_UserGuide.pdf* Linux/ ReleaseNotes.txt* UEFI/ Windows/
root@server ~/IPMICFG_XXX/ ## > cd Linux/64bit/
root@server ~/IPMICFG_XXX/Linux/64bit ## > ls
root@server ~/IPMICFG_XXX/Linux/64bit ## > ./IPMICFG-Linux.x86_64
IPMICFG Version 1.31.1 (Build 200623)
Copyright(c) 2020 Super Micro Computer, Inc.
Usage: IPMICFG params (Example: IPMICFG -m
-help Display a list of commands
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -fd
Command: -fd <option>
Please select an option:
option: 1 | Preserves User configurations
option: 2 | Restores to factory default and default password
option: 3 | Sets user defaults to ADMIN/ADMIN
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -fd 1
Reset to the factory default completed.

Now let’s configure it anew. I assume you have been given the network details.

root@server ~/IPMICFG_XXX/Linux/64bit ## > 
./IPMICFG-Linux.x86_64 -m
--> so there's no IP
root@server ~/IPMICFG_XXX/Linux/64bit ## > nslookup server-ilo

root@server ~/IPMICFG_XXX/Linux/64bit ##> 
./IPMICFG-Linux.x86_64 -m
IP address can not be changed because DHCP option is enabled.
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -dhcp off
Successfully disable DHCP.
root@server ~/IPMICFG_XXX/Linux/64bit ##>
./IPMICFG-Linux.x86_64 -m
root@server ~/IPMICFG_XXX/Linux/64bit ##>
./IPMICFG-Linux.x86_64 -m
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -sdr
Status | (#)Sensor | Reading | Low Limit | High Limit |
------ | --------- | ------- | --------- | ---------- |
OK | (4) CPU1 Temp | 53C/127F | 0C/32F | 105C/221F |
... a lot of info here ...
OK | (4158) PS1 Status | Presence detected |
OK | (4225) PS2 Status | Presence detected |
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -k
Subnet Mask=
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -k
Subnet Mask=
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -k
Subnet Mask=
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -g
--> there's no gateway, we need to add it...
root@server ~/IPMICFG_XXX/Linux/64bit ## >
./IPMICFG-Linux.x86_64 -g

I’ve deleted the irrelevant outputs. If it’s still not clear, here you have the UserGuide. Maybe you already have the packages in the system; in that case, follow this guide. If not, here you have another version that worked, with more command line options. After setting up the gateway, I can log in with the default username and password (ADMIN / ADMIN). Don’t forget to change them 🙂
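As an alternative to the vendor binary, if ipmitool is installed (the guides linked above cover it), the same network setup can be sketched like this. Channel 1 is the usual LAN channel, but check yours, and the addresses are placeholders:

```
ipmitool lan print 1
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.0.2.10
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.0.2.1
```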