HOWTO: Install Ansible Semaphore on CentOS 7.X (no docker)

I wrote a while ago about how to install Ansible Semaphore as a docker container. It’s been raining a lot since then, and now it’s time to finish what I started and install it for real. It’s not so complicated, although, as you will see, the result didn’t convince me. Below you have my installation log and my setup, together with the relevant output.

## > wget https://github.com/ansible-semaphore/semaphore/releases/\
> download/v2.8.75/semaphore_2.8.75_linux_amd64.rpm
--DATE-- https://github.com//semaphore_2.8.75_linux_amd64.rpm
Resolving github.com (github.com)... 140.82.121.4
Connecting to github.com (github.com)|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://XXX
Saving to: - semaphore_2.8.75_linux_amd64.rpm
100%[==========================>] 11,696,811 34.3MB/s in 0.3s
DATE (34.3 MB/s) - semaphore_2.8.75_linux_amd64.rpm
saved [11696811/11696811]
## > yum install semaphore_2.8.75_linux_amd64.rpm
Loaded plugins: fastestmirror, langpacks, nvidia
Examining semaphore_2.8.75_linux_amd64.rpm:
semaphore-2.8.75-1.x86_64
Marking semaphore_2.8.75_linux_amd64.rpm
to be installed
Resolving Dependencies
--> Running transaction check
---> Package semaphore.x86_64 0:2.8.75-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================
Package Arch Version Repository Size
============================
Installing:
semaphore x86_64 2.8.75-1 semaphore_2.8.75_linux_amd64 31 M
Transaction Summary
================================
Install 1 Package
Total size: 31 M
Installed:
semaphore.x86_64 0:2.8.75-1
Complete!
## > semaphore setup
Hello! You will now be guided through a setup to:

1. Set up configuration for a MySQL/MariaDB database
2. Set up a path for your playbooks (auto-created)
3. Run database Migrations
4. Set up initial semaphore user & password

What database to use:
1 - MySQL
2 - BoltDB
3 - PostgreSQL
(default 1):

db Hostname (default 127.0.0.1:3306):
db User (default root):
db Password: MYPASSWORD
db Name (default semaphore):
Playbook path (default /tmp/semaphore):
Web root URL (optional, see Web-root-URL):
Enable email alerts? (yes/no) (default no): yes
Mail server host (default localhost):
Mail server port (default 25):
Mail sender address (default semaphore@localhost): XXX
Enable telegram alerts? (yes/no) (default no):
Enable slack alerts? (yes/no) (default no):
Enable LDAP authentication? (yes/no) (default no):
Config output directory (default /root/Downloads):

Running: mkdir -p /XXX/semaphore..
Configuration written to /XXX/semaphore/config.json..
Pinging db..

Running db Migrations..
Executing migration v0.0.0 (at DATE)...
Creating migrations table
[12/0]8]
Executing migration v1.0.0 (at DATE)...
[4/87]
... some other migrations...
Executing migration v2.8.58 (at DATE)...
[1/57]
Migrations Finished

> Username: root
> Email: my.email@domain.org
WARN[0075] no rows in result set level=Warn
> Your name: MYNAME
> Password: MYPASSWORD

You are all setup Juan!
Re-launch this program pointing to the configuration file

./semaphore server --config /XXX/semaphore/config.json

To run as daemon:

nohup ./semaphore server \
--config /XXX/semaphore/config.json &

You can login with my.email@domain.org or root.
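The setup output suggests nohup for running it as a daemon, but a systemd unit is tidier and survives reboots. This is only a sketch: the binary path and the service user are assumptions (the rpm puts semaphore on the PATH; adjust to your install), and the config path is the placeholder from the setup above.

```ini
# /etc/systemd/system/semaphore.service -- a sketch, adjust paths and User
[Unit]
Description=Ansible Semaphore
After=network.target mariadb.service

[Service]
ExecStart=/usr/bin/semaphore server --config /XXX/semaphore/config.json
Restart=on-failure
User=semaphore

[Install]
WantedBy=multi-user.target
```

Then systemctl enable --now semaphore would start it and keep it started.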

Now let’s run it.

## > semaphore server --config /XXX/semaphore/config.json
MySQL root@127.0.0.1:3306 semaphore
Tmp Path (projects home) /XXX/ansible/semaphore
Semaphore v2.8.75
Interface
Port :3000
Server is running
WARN[0037] write IP:3000->IP:41248: write: broken pipe level=Warn
INFO[322969] Task 1 added to queue
INFO[322970] Set resource locker with TaskRunner 1
INFO[322970] Stopped preparing TaskRunner 1
INFO[322970] Release resource locker with TaskRunner 1
INFO[322975] Task 1 removed from queue

The INFO entries are produced after running my first task successfully. It takes a little while to configure (you need to write an inventory and an environment, choose a key, and define your repositories). Everything works fine from then on. Unfortunately it relies on repositories and fairly complex playbooks, so it’s not going to be my choice: I want something that even a monkey with a keyboard can use. I’ll keep you informed of the results of my search πŸ™‚


HOWTO: show all hidden files and folders on your macOS file explorer ‘Finder’

I found the trick in this collection from Tom’s Guide. It worked for me (Apple M2, Ventura) without further annoyances. Just open a terminal and type:

user@mac ~ $ > defaults write com.apple.finder AppleShowAllFiles -bool TRUE
user@mac ~ $ > killall Finder

This will kill the Finder and open it again, this time showing all the hidden files. You’ll be surprised how many #$@ you’ll find! Especially if you install certain scientific software…

HOWTO: install NVIDIA and CUDA on Fedora 37

I come back to Fedora after… 6 years or so… with Scientific Linux first, then with CentOS. It feels nice to come back to it after such a long time! The installation went smoothly and I ended up with kernel 6.1.14, which is very modern in comparison with the one I’m using on CentOS (3.10.0) and slightly more modern than the Ubuntu one (5.15.0). Actually the picture above corresponds to kernel 5.12.8, but the feeling is the same. I took the pic from the installation guide I have followed. Basically the install is done in the same way as 6 years ago. Allow me to elaborate.

  1. Get the cuda package. I got cuda_12.0.1_525.85.12_linux.run
  2. Try to run it and check the errors on /var/log/cuda-installer.log and /var/log/nvidia-installer.log.
  3. Install the missing packages (maybe kernel-devel and similar)
  4. Blacklist the nouveau drivers: echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
  5. Edit /etc/default/grub. My grub line looks like this : GRUB_CMDLINE_LINUX="rhgb quiet rd.driver.blacklist=nouveau nvidia-drm.modeset=1"
  6. Adjust grub: grub2-mkconfig -o /boot/grub2/grub.cfg
  7. Remove nouveau. yum remove xorg-x11-drv-nouveau
  8. Make initramfs backup first. mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
  9. Generate a new initramfs. dracut /boot/initramfs-$(uname -r).img $(uname -r)
  10. Reboot, go to init 3 (no graphics) and try the installer again.
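The steps above (minus the manual grub edit and the CUDA .run installer itself) can be sketched as one script. Package names and paths come straight from the list; DRY_RUN defaults to 1 so it only prints what it would do until you deliberately flip it.

```shell
#!/bin/sh
# Sketch of the nouveau-removal steps. DRY_RUN=1 (the default) only echoes
# the commands; run with DRY_RUN=0 as root to actually apply them.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

KVER=$(uname -r)
run sh -c 'echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf'
run grub2-mkconfig -o /boot/grub2/grub.cfg
run yum remove -y xorg-x11-drv-nouveau
run mv "/boot/initramfs-$KVER.img" "/boot/initramfs-$KVER-nouveau.img"
run dracut "/boot/initramfs-$KVER.img" "$KVER"
```

After a dry run looks right, reboot into init 3 and launch the CUDA installer as before.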

As you see, pretty easy but with a lot of steps. I need to cook up a better way, maybe with ansible. I’ll keep you posted. BTW, you can download Fedora for Workstations here. BTW2: sorry, but the dragons are fighting; maybe they will come back soon πŸ˜‰

HOWTO: open a default file manager with GUI from a terminal on CentOS 7.X

Yes, we’re back in CentOS 7.X. Case scenario: I have a “file” created by (??) that I can’t remove easily (rm) because it has a weird name (actually it’s --exclude=*.bin). Instead of trying to escape the characters, wildcards and so on, I decide to try from a GUI. Since it’s a remote server, I need to call the file manager from the terminal after ssh-ing to the machine. I found out there’s a general way to do that. This is my output:

## > xdg-open .
START /usr/bin/dolphin %i -caption "%c" "."
KUrl("file:///root") KUrl("")
...
dolphin(90034) KDirWatch::removeDir: doesn't know ""
...
KUrl("file:///root/%25i") KUrl("file:///root")
dolphin(90034) KDirWatch::removeDir: doesn't know ""
KUrl("file:///root/%25i") KUrl("")
KUrl("") KUrl("file:///root/%25i")
KUrl("file:///root/") KUrl("")
KUrl("") KUrl("file:///root/")

As you see, in my case it calls dolphin. The image above is not from COS 7 but from the CentOS 8 file explorer, as taken from the dedoimedo CentOS 8 review. I chose that one because you should not, at this point, go for CentOS 7.X as a software solution. COS 7 is going to die one of these days. I’m using it, but looking forward to the new thing to come…
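For the record, that stubborn file can also be removed from the shell without any escaping: rm treats everything after -- as a file name, and a ./ prefix works too. A quick demo in a scratch directory:

```shell
cd "$(mktemp -d)"            # scratch dir so we don't touch real files
touch -- '--exclude=*.bin'   # recreate the troublesome file name
ls                           # shows: --exclude=*.bin
rm -- '--exclude=*.bin'      # '--' tells rm to stop parsing options
# alternative: rm './--exclude=*.bin'
```

No GUI needed, although the GUI route is certainly less fiddly.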

HOWTO: install Spectrum_Scale_Standard on Ubuntu 20.04.5 LTS

A new era is about to begin. As you may have gathered from my previous texts, I’m a CentOS 7.X user. As such, I’m being forced by obsolescence to move to a new OS. That will probably be some kind of Ubuntu. All our clients need to be connected to our GPFS cluster, so in the next lines you are going to read how it was done in my case.

We log into the machine. We are going to use Spectrum_Scale_Standard-5.1.5.1-x86_64-Linux-install provided by IBM. We simply run it.

# ./Spectrum_Scale_Standard-5.1.5.1-x86_64-Linux-install 

Extracting License Acceptance Process Tool to /usr/lpp/mmfs/5.1.5.1 ...
tail -n +660 ./Spectrum_Scale_Standard-5.1.5.1-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.1.5.1 -xvz --exclude=installer --exclude=*_rpms --exclude=*_debs --exclude=*rpm --exclude=*tgz --exclude=*deb --exclude=*tools* 1> /dev/null

Installing JRE ...

If directory /usr/lpp/mmfs/5.1.5.1 has been created or was previously created during another extraction,
.rpm, .deb, and repository related files in it (if there were) will be removed to avoid conflicts with the ones being extracted.

removed '/usr/lpp/mmfs/5.1.5.1/hdfs_rpms/rhel/hdfs_3.1.1.x/repodata/repomd.xml.asc'
... a lot of other stuff being removed...

tail -n +660 ./Spectrum_Scale_Standard-5.1.5.1-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.1.5.1 --wildcards -xvz ibm-java*tgz 1> /dev/null
tar -C /usr/lpp/mmfs/5.1.5.1/ -xzf /usr/lpp/mmfs/5.1.5.1/ibm-java*tgz

Invoking License Acceptance Process Tool ...
/usr/lpp/mmfs/5.1.5.1/ibm-java-x86_64-80/jre/bin/java -cp /usr/lpp/mmfs/5.1.5.1/LAP_HOME/LAPApp.jar com.ibm.lex.lapapp.LAP -l /usr/lpp/mmfs/5.1.5.1/LA_HOME -m /usr/lpp/mmfs/5.1.5.1 -s /usr/lpp/mmfs/5.1.5.1

Then we get a pop-up asking us to accept the terms. We say “yes” (do we have an option?) and the installation goes ahead.

License Agreement Terms accepted.

Extracting Product RPMs to /usr/lpp/mmfs/5.1.5.1 ...

tail -n +660 ./Spectrum_Scale_Standard-5.1.5.1-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.1.5.1 --wildcards -xvz Public_Keys ansible-toolkit ganesha_debs/ubuntu/ubuntu20 ganesha_debs/ubuntu/ubuntu22 gpfs_debs/ubuntu/ubuntu20 gpfs_debs/ubuntu/ubuntu22 hdfs_rpms/rhel/hdfs_3.1.1.x hdfs_rpms/rhel/hdfs_3.2.2.x hdfs_rpms/rhel/hdfs_3.3.x smb_debs/ubuntu/ubuntu20 smb_debs/ubuntu/ubuntu22 zimon_debs/ubuntu/ubuntu20 zimon_debs/ubuntu/ubuntu22 ganesha_rpms/rhel7 ganesha_rpms/rhel8 ganesha_rpms/sles15 gpfs_rpms/rhel7 gpfs_rpms/rhel8 gpfs_rpms/sles15 object_rpms/rhel8 smb_rpms/rhel7 smb_rpms/rhel8 smb_rpms/sles15 tools/repo zimon_debs/ubuntu zimon_rpms/rhel7 zimon_rpms/rhel8 zimon_rpms/sles15 gpfs_debs gpfs_rpms manifest 1> /dev/null

- Public_Keys
- ansible-toolkit
- ganesha_debs/ubuntu/ubuntu20
- ganesha_debs/ubuntu/ubuntu22
- gpfs_debs/ubuntu/ubuntu20
- gpfs_debs/ubuntu/ubuntu22
- hdfs_rpms/rhel/hdfs_3.1.1.x
- hdfs_rpms/rhel/hdfs_3.2.2.x
- hdfs_rpms/rhel/hdfs_3.3.x
- smb_debs/ubuntu/ubuntu20
- smb_debs/ubuntu/ubuntu22
- zimon_debs/ubuntu/ubuntu20
- zimon_debs/ubuntu/ubuntu22
- ganesha_rpms/rhel7
- ganesha_rpms/rhel8
- ganesha_rpms/sles15
- gpfs_rpms/rhel7
- gpfs_rpms/rhel8
- gpfs_rpms/sles15
- object_rpms/rhel8
- smb_rpms/rhel7
- smb_rpms/rhel8
- smb_rpms/sles15
- tools/repo
- zimon_debs/ubuntu
- zimon_rpms/rhel7
- zimon_rpms/rhel8
- zimon_rpms/sles15
- gpfs_debs
- gpfs_rpms
- manifest

Removing License Acceptance Process Tool from /usr/lpp/mmfs/5.1.5.1 ...
rm -rf /usr/lpp/mmfs/5.1.5.1/LAP_HOME
/usr/lpp/mmfs/5.1.5.1/LA_HOME

Removing JRE from /usr/lpp/mmfs/5.1.5.1 ...
rm -rf /usr/lpp/mmfs/5.1.5.1/ibm-java*tgz

==================================================================
Product packages successfully extracted to /usr/lpp/mmfs/5.1.5.1

Cluster installation and protocol deployment
To install a cluster or deploy protocols with the
IBM Spectrum Scale Installation Toolkit:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale -h

To install a cluster manually: Use the GPFS packages located
within /usr/lpp/mmfs/5.1.5.1/gpfs_<rpms/debs>

To upgrade an existing cluster using the
IBM Spectrum Scale Installation Toolkit:
1) Review and update the config:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale
config update
2) Update the cluster configuration to reflect
the current cluster config:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale
config populate -N <node>
3) Use online or offline upgrade depending on your requirements:
- Run the online rolling upgrade:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale
upgrade -h
- Run the offline upgrade:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale
upgrade config offline -N;
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale
upgrade run
You can also run the parallel offline upgrade
to upgrade all nodes parallely after shutting down GPFS
and stopping protocol services on all nodes.
You can run the parallel offline upgrade
on all nodes in the cluster, not on a subset of nodes.

To add nodes to an existing cluster using the
IBM Spectrum Scale Installation Toolkit:
1) Add nodes to the cluster definition file:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale node add -h
2) Install IBM Spectrum Scale on the new nodes:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale install -h
3) Deploy protocols on the new nodes:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale deploy -h

To add NSDs or file systems to an existing cluster
using the IBM Spectrum Scale Installation Toolkit:
1) Add NSDs or file systems to the cluster definition:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale nsd add -h
2) Install the NSDs or file systems:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale install -h

To update the cluster definition to reflect
the current cluster config examples:
/usr/lpp/mmfs/5.1.5.1/ansible-toolkit/spectrumscale config populate -N <node>
1) Manual updates outside of the installation toolkit
2) Sync the current cluster state
to the installation toolkit prior to upgrade
3) Switching from a manually managed cluster
to the installation toolkit

==========================================================
To get up and running quickly, consult the
IBM Spectrum Scale Protocols Quick Overview:
https://www.ibm.com/docs/en/STXKQY_5.1.5/pdf/scale_povr.pdf
==========================================================

For convenience and later use we copy the debian packages to our current folder before installing them.

# cp -Rav /usr/lpp/mmfs/5.1.5.1/gpfs_debs/ .

Time to install. We use dpkg. Other options are available, but this one is native.

# dpkg -i gpfs.base_5.1.5-1_amd64.deb \
gpfs.compression_5.1.5-1_amd64.deb \
gpfs.docs_5.1.5-1_all.deb \
gpfs.gpl_5.1.5-1_all.deb \
gpfs.gskit_8.0.55-19.1_amd64.deb \
gpfs.license.std_5.1.5-1_amd64.deb \
gpfs.msg.en-us_5.1.5-1_all.deb

(Reading database ... 206152 files and directories currently installed.)
Preparing to unpack gpfs.base_5.1.5-1_amd64.deb ...
Unpacking gpfs.base (5.1.5-1) over (5.1.5-1) ...
Preparing to unpack gpfs.compression_5.1.5-1_amd64.deb ...
Unpacking gpfs.compression (5.1.5-1) over (5.1.5-1) ...
Preparing to unpack gpfs.docs_5.1.5-1_all.deb ...
Unpacking gpfs.docs (5.1.5-1) over (5.1.5-1) ...
Preparing to unpack gpfs.gpl_5.1.5-1_all.deb ...
make[1]: Entering directory '/usr/lpp/mmfs/src'
rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib
mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib
rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver
cleaning (/usr/lpp/mmfs/src/ibm-kxi)
make[2]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi'
rm -f install.he; \
for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h cxiPMem.h DirIds.h; do \
(set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiPMem.h
+ rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h
make[2]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi'
cleaning (/usr/lpp/mmfs/src/ibm-linux)
make[2]: Entering directory '/usr/lpp/mmfs/src/ibm-linux'
rm -f install.he; \
for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \
(set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h
+ rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h
make[2]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux'
cleaning (/usr/lpp/mmfs/src/gpl-linux)
make[2]: Entering directory '/usr/lpp/mmfs/src/gpl-linux'
Pre-kbuild step 1...
/usr/bin/make -C /lib/modules/5.15.0-60-generic/build M=/usr/lpp/mmfs/src/gpl-linux clean
make[3]: Entering directory '/usr/src/linux-headers-5.15.0-60-generic'
CLEAN /usr/lpp/mmfs/src/gpl-linux
CLEAN /usr/lpp/mmfs/src/gpl-linux/Module.symvers
make[3]: Leaving directory '/usr/src/linux-headers-5.15.0-60-generic'
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko
rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko
rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`
rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`
rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he
rm -f -rf .tmp_versions kdump-kern-dwarfs.c
rm -f -f gpl-linux.trclst kdump lxtrace
rm -f -rf usr
make[2]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux'
make[1]: Leaving directory '/usr/lpp/mmfs/src'
Unpacking gpfs.gpl (5.1.5-1) over (5.1.5-1) ...
Preparing to unpack gpfs.gskit_8.0.55-19.1_amd64.deb ...
Unpacking gpfs.gskit (8.0.55-19.1) over (8.0.55-19.1) ...
Preparing to unpack gpfs.license.std_5.1.5-1_amd64.deb ...
Unpacking gpfs.license.std (5.1.5-1) over (5.1.5-1) ...
Preparing to unpack gpfs.msg.en-us_5.1.5-1_all.deb ...
Unpacking gpfs.msg.en-us (5.1.5-1) over (5.1.5-1) ...
Setting up gpfs.base (5.1.5-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/mmautoload.service → /lib/systemd/system/mmautoload.service.
Created symlink /etc/systemd/system/multi-user.target.wants/mmccrmonitor.service → /lib/systemd/system/mmccrmonitor.service.
Setting up gpfs.compression (5.1.5-1) ...
Setting up gpfs.docs (5.1.5-1) ...
Setting up gpfs.gpl (5.1.5-1) ...
Setting up gpfs.gskit (8.0.55-19.1) ...
Setting up gpfs.license.std (5.1.5-1) ...
Setting up gpfs.msg.en-us (5.1.5-1) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.9) ...

Now we go to the GPFS quorum manager and try to join the node. First we check that we can do passwordless ssh to it. We can. Let’s call our new client ubuntu00.domain.org. I add it to the current cluster like this:

@quorum # > mmaddnode -N ubuntu00.domain.org
@quorum # > mmchlicense client --accept -N ubuntu00.domain.org

We come back now to our client ubuntu00 and try to start GPFS. It doesn’t start, but that’s fine. So we build the kernel extension:

# /usr/lpp/mmfs/bin/mmbuildgpl
--------------------------------------------------------
mmbuildgpl: Building GPL (5.1.5.1) module begins at DATE.
--------------------------------------------------------
Verifying Kernel Header...
kernel version = 51500060 (515000060000000, 5.15.0-60-generic, 5.15.0-60)
module include dir = /lib/modules/5.15.0-60-generic/build/include
module build dir = /lib/modules/5.15.0-60-generic/build
kernel source dir = /usr/src/linux-5.15.0-60-generic/include
Found valid kernel header file under /lib/modules/5.15.0-60-generic/build/include
Getting Kernel Cipher mode...
Will use skcipher routines
Verifying Compiler...
make is present at /bin/make
cpp is present at /bin/cpp
gcc is present at /bin/gcc
g++ is present at /bin/g++
ld is present at /bin/ld
make World ...
make InstallImages ...
--------------------------------------------------------
mmbuildgpl: Building GPL module completed successfully at DATE.
--------------------------------------------------------

Now GPFS starts without issues. I’m being lucky lately; everything I try works out of the box. Maybe because I’m using a modern OS. We’ll see how long until I find a big stone… 😦

HOW TO: Install Homebrew on M2 MacBook running macOS 13.1 Ventura

In case you don’t know Homebrew, it’s a macOS package manager, some kind of yum or apt-get. I had it on my previous mac; now I need it on my new one. Luckily for me, installing it was not complicated. I just followed the instructions written in this post for Monterey.

  1. Open a terminal and run: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Type your password and wait
  3. Add brew to your path:
export PATH=/opt/homebrew/bin:$PATH
export PATH=/opt/homebrew/sbin:$PATH
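Those exports only live in the current session. To keep them across terminals you can append them to your shell profile; ~/.zprofile is an assumption here (zsh is the default shell on Ventura), so adjust if you use something else.

```shell
# Prepend the Homebrew dirs for this session and persist them (the grep
# guard makes the append idempotent).
export PATH="/opt/homebrew/bin:/opt/homebrew/sbin:$PATH"
PROFILE="$HOME/.zprofile"   # assumption: zsh, the macOS default
grep -qs 'opt/homebrew/bin' "$PROFILE" || \
  echo 'export PATH="/opt/homebrew/bin:/opt/homebrew/sbin:$PATH"' >> "$PROFILE"
```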

That’s it. Now I can continue and install sshpass, to avoid typing ssh passwords on macOS. Yes, it’s getting tedious.

HOWTO: reset local wp-admin password

A little bit of background: I have like 3 copies of WP installed on local machines. If you want to know how to install WP locally, I already wrote a post about it. Then one day I come back to one of them and… surprise! I can’t log in anymore. Panic. Fever. What shall we do now? Why didn’t we pay attention to this install when it was just released? Fortunately, my WP was not alone: it came with a mariadb and a phpmyadmin installation. From there we can fix everything. Let’s do it.

  1. Log into phpmyadmin
  2. Search for the database storing the wp information
  3. Browse wp_users until you find your username
  4. Click on Edit and change the user_pass field to your new password
  5. Choose Function: MD5 (as seen above)
  6. Restart mariadb and httpd just to be sure.
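Steps 4 and 5 boil down to a single SQL UPDATE, which you could also feed to the mysql client directly. Here is a sketch that builds that statement; the table prefix wp_, the username 'admin' and the password are assumptions. WordPress accepts a plain MD5 hash in user_pass and upgrades it to its own hashing scheme on the next login.

```shell
# Build the UPDATE statement phpmyadmin would run for us (md5sum is the
# coreutils tool; on macOS use 'md5 -q' instead).
NEWPASS='s3cret-Example'    # hypothetical password
HASH=$(printf '%s' "$NEWPASS" | md5sum | cut -d' ' -f1)
echo "UPDATE wp_users SET user_pass = '$HASH' WHERE user_login = 'admin';"
```

Paste the printed statement into phpmyadmin’s SQL tab (or mysql -u root -p) against the WP database.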

Solution found on this link. Image taken from there also. I can’t believe it’s not meat πŸ™‚

HOWTO: fix your WordPress install after a system update – main page shows only PHP code

Background: we had a very big disconnection and the local WP installation didn’t come back after all the other systems did. I have access to the website (so the apache2 service is running), but the main site as well as the admin site show only PHP source code. I tried rebooting the computer (a virtual machine) without luck. This one has a very simple solution. As usual, the solution is on StackOverflow.

sudo apt install php7.0 libapache2-mod-php7.0 \
  php7.0-mysql php7.0-curl php7.0-json

I then restart my apache2 and my website comes back. Phew! Have a nice weekend, or see you tomorrow for the Mars discussion post…

HOWTO: add JavaScript and CSS to your python dash application

This is a pretty sad first post for February. A lot of things have been going on, so many that I’ve not been able to approach a computer in a relaxed way. Anyway, here’s my tip of the day. To apply CSS styles and use JavaScript in your Dash applications, place the files in a specific folder called assets. They will be found and used automatically. Hopefully!

---app.py
---assets/style.css
---assets/picture.gif
---data/data.csv
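The layout above can be sketched from the shell like this; 'demo_app' and the file contents are placeholders, and the convention assumed is Dash’s default one (everything under ./assets is served automatically, no <link> or <script> tags needed).

```shell
# Create the project skeleton that Dash expects (contents are placeholders).
mkdir -p demo_app/assets demo_app/data
printf '# dash app goes here\n'              > demo_app/app.py
printf 'body { font-family: sans-serif; }\n' > demo_app/assets/style.css
printf 'x,y\n1,2\n'                          > demo_app/data/data.csv
find demo_app -type f | sort
```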

For Flask you have a link here. That’s all for the moment.

HOWTO: Install Ubuntu on WSL2 on Windows 11 with GUI support

I followed this ubuntu tutorial step-by-step and I ended up with an Ubuntu shell on my W.11. There’s no need to add anything to it! But let’s see what we can do with it. To start with, we install a stupid GUI app in addition to the given by the example (xeyes & xcalc) to see if they pop up. We run apt-get geany and then geany, and indeed we get the familiar GUI afterwards. SSH to a remote linux client also works, and I can get the GUI of whatever I run on the remote without any hassle. So kudos for WSL2 and Windows 11! This is the type of posts I like to make, the posts of success 😁 😁 😁. Another one bite the dust!