WordPress multisite: more than one WP site in one server

I don’t have much mood to write lately. I guess I’ve been busy thinking or something like that. Anyway, here I am again, (b)logging the last thing I did, that is, transforming a single WP install into a multisite install. I will be following this guide to arrive at the desired multisite configuration.

I start with an installed version (so no docker) on CentOS 7. I deactivate the plugins, then edit /var/www/html/wp-config.php as indicated.

Add this code before the /* That's all, stop editing! Happy blogging. */ line:

/* Multisite */
define('WP_ALLOW_MULTISITE', true);

I save the edits to wp-config.php and restart the httpd service. Unfortunately I don’t get the Network Setup menu as in the guide (maybe you do get it), but I don’t worry about it and continue. Later I will regret that, and I will need to edit various other files to achieve what I want. Here you have the details about the file modifications needed. Basically, I have also installed phpMyAdmin. For that I have installed and configured mod_rewrite for Apache and fixed a package error with the PHP 7.4 install on CentOS 7.9. In brief:

yum-config-manager --disable 'remi-php*'
yum-config-manager --enable remi-php74

And of course, I have given permission to access phpMyAdmin on this server. But in the end, as usual, I manage to get what I want. Or should I say you managed to get what you wanted? 🧐. Because what I want is to write about what I think, not about what I do. And I’m not in the mood 😔. See you around…
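For the record, the “various other files” turn out to be mostly wp-config.php again: once the network exists, WordPress expects a block of multisite constants. A sketch of what that block typically looks like (the domain is a placeholder, and SUBDOMAIN_INSTALL depends on whether you chose subdomains or subdirectories):

```php
/* Multisite network settings (sketch; adapt the domain to your site) */
define('MULTISITE', true);
define('SUBDOMAIN_INSTALL', false);           // subdirectory-based network
define('DOMAIN_CURRENT_SITE', 'example.com'); // placeholder domain
define('PATH_CURRENT_SITE', '/');
define('SITE_ID_CURRENT_SITE', 1);
define('BLOG_ID_CURRENT_SITE', 1);
```

This also goes above the “That's all, stop editing!” line, like the WP_ALLOW_MULTISITE define.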


Virtual machine manager error: no connection driver available for qemu:///system on CentOS 7

I’m trying to find a nice full-sim environment on my dying CentOS 7.X system. That means neither a docker nor an LXC solution, but a full OS with its own IP and so on: as close to the real thing as possible, but running on CentOS 7. I have a clean machine, and I remember QEMU as the tool that does everything I want. So I install it, start the service, and call the GUI. Like this:

yum install qemu-kvm qemu-img virt-manager libvirt-daemon
systemctl start libvirtd
virt-manager &

The GUI pops up, but it gives me the error above, which I solve in one of the ways described in this post. I updated and re-enabled the service without luck, so I go for missing packages.

yum -y install qemu-kvm qemu-img virt-manager \
libvirt libvirt-python python-virtinst \
libvirt-client virt-install virt-viewer

After that, no need to reboot: I get my GUI and I can start playing with VMs. I will report my findings, if any 🧐. BONUS: another post about a similar issue.
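A quick way to confirm the error is gone without opening the GUI (assuming the libvirt-client package from the list above is installed) is to ask virsh for the same connection virt-manager uses:

```shell
# probe the same URI virt-manager uses; this errors out if the qemu driver
# is still missing, and prints version info if everything is wired up
virsh -c qemu:///system version
systemctl is-active libvirtd   # should print "active"
```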

ModuleNotFoundError: No module named ‘absl’ while using python

This is a very specific error. We have quite a messed-up Python setup, with multiple versions, network and local modules, a SLURM cluster, and the option to install your own, so it’s quite tricky to track the origin of a module. But it helps that I have two servers with the same kernel and packages; on one of them the program runs, but not on the other. I do have logs, but they are not very clear. The message says:

ModuleCmd_Load.c(213):ERROR:105: Unable to locate a modulefile for 'python-3.7.3'
Traceback (most recent call last):
File "/XXX/run_docker.py", line 22, in <module>
from absl import app
ModuleNotFoundError: No module named 'absl'

So what I believe is happening here is that the program tries to load the module, fails, and falls back to the local install. As a user I can install the missing absl-py module, but I can’t run it that way in this case because of the special process. What to do then? None of the solutions on Stack Overflow were suitable, since I don’t know which python is getting what. But I have a hint: the program seems to be loading Python 3.7.3, which is not the default. So I look for the stored python modules, and I find them on server one. A simple sync

@ server-one ## > rsync -av /usr/local/lib/python3.6/site-packages/ root@server-two:/usr/local/lib/python3.6/site-packages/

and then the “program” works. What is going on? It looks like python, since it didn’t manage to find the modules for the requested python version, took the closest ones available (3.6). But who knows? I’m not a python expert, I’m just passing by 😔.
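For the next time this happens, a hypothetical diagnostic to see which interpreter is actually picked up and where (if anywhere) absl resolves from, instead of guessing:

```shell
# which python3 answers, which version is it, and where does absl come from?
which python3
python3 -c 'import sys; print("python", sys.version.split()[0])'
python3 -c 'import absl, os; print("absl from", os.path.dirname(absl.__file__))' \
  || echo "absl not importable for this interpreter"
```

Running it on both servers (and under the loaded environment module, if any) shows immediately whether the two machines resolve the module from the same place.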

HOWTO: run a GUI in a docker

I’m trying to get this ChimeraX docker running on my CentOS 7.9, with the latest ChimeraX. It turned out I can’t, since I don’t have Qt6 and support for the above CentOS choice has been dropped, but it has been an interesting experiment, enough to log it. The ChimeraX docker image builds when you pull it, so in principle it looks like it should work. The documentation, unfortunately, doesn’t tell you how to start a sample container. I will tell you:

docker run -i -t --name chimeraXtest \
--net=host --privileged -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
chimerax:latest /bin/bash

If you have been paying attention to my docker notes, this will deliver you to a bash shell inside the ChimeraX docker, which seems to be Ubuntu 20 based. One can install and run GUIs from that shell (for example, try apt-get install nedit), but the very thing we want to run crashes. Like this:

ImportError: libQt6Core.so.6: cannot open shared object file: No such file or directory

File "/usr/lib/ucsf-chimerax/lib/python3.9/site-packages/Qt/__init__.py", line 64, in <module>
from PyQt6.QtCore import PYQT_VERSION_STR as PYQT6_VERSION

_See log for complete Python traceback._

There’s no obvious solution for this import error. Maybe I will investigate how to run it in a Qt6 docker container for CI. Or do you maybe have a better suggestion? Check this post: docker x11 fails to open display. Tomorrow more dockers, maybe. If I have time 😉.
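One more prerequisite worth noting for any GUI-in-docker attempt: even when the libraries are in place, the host X server must accept connections from the container. A quick (and insecure) way, assuming an X session on the host:

```shell
# on the host, before docker run: allow local clients to use the display
# (the container shares the host network and the /tmp/.X11-unix socket)
xhost +local:
# ... run the container and the GUI ...
# revoke the permission when done
xhost -local:
```

Without this, GUI apps in the container typically die with "cannot open display" instead of the library error above.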

[ERROR CRI]: container runtime is not running while kubeadm init on CentOS 7.X

More on kubernetes. When I try to initialize kubeadm, I get the following error:

# > kubeadm init
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
error execution phase preflight:
[preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running:
output: E1019 15:49:03.837827 19294 remote_runtime.go:948]
"Status from runtime service failed"
err="rpc error: code = Unimplemented
desc = unknown service runtime.v1alpha2.RuntimeService"
time="XXXX" level=fatal
msg="getting status of runtime: rpc error:
code = Unimplemented desc = unknown service
runtime.v1alpha2.RuntimeService", error: exit status 1
[preflight] If you know what you are doing,
you can make a check non-fatal with
To see the stack trace of this error execute
with --v=5 or higher

The solution I found, after a few starts and stops of services and the deletion of a few files, seems to be like this – successful output included:

## > rm /etc/containerd/config.toml
## > systemctl restart containerd
## > kubeadm init
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up
a Kubernetes cluster
[preflight] This might take a minute or two,
depending on the speed of your internet connection
[preflight] You can also perform this action in
beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [
kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local MASTERNODENAME]
and IPs [ONE_IP MY_IP]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for
DNS names [localhost MASTERNODENAME] and
IPs [MY_IP ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for
DNS names [localhost MASTERNODENAME] and
IPs [MY_IP ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.003061 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node MASTER as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node MASTER as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: TOKEN
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster,
you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join MY-IP:6443 \
--token TOKEN \
--discovery-token-ca-cert-hash HASH
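Why does removing config.toml help? The containerd.io RPM ships a config.toml that lists the CRI plugin under disabled_plugins, and kubeadm needs CRI; with the file gone, containerd falls back to its built-in defaults, where CRI is enabled. An arguably cleaner equivalent (a sketch, assuming containerd 1.5+):

```shell
# regenerate a complete default config (CRI enabled) instead of deleting it
containerd config default > /etc/containerd/config.toml
# with kubeadm >= 1.22 you also want the systemd cgroup driver on CentOS 7
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd
```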

That’s it. BTW, another post with the solution to the issue. Let’s get going with my kubernetes…

ERROR: failure: repodata/repomd.xml from kubernetes: [Errno 256] No more mirrors to try (fix on CentOS 7.X)

I’m trying to get kubernetes integrated into my SLURM cluster, so I started deploying a kubernetes cluster again. Unfortunately, the previous step-by-step kubernetes install now fails at Step 2. This is my output, edited as usual to obscure irrelevant information:

## > yum install -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.halifax.rwth-aachen.de
* centosplus: mirror.checkdomain.de
* epel: ftp.halifax.rwth-aachen.de
* epel-testing: ftp.halifax.rwth-aachen.de
* extras: ftp.rz.uni-frankfurt.de
* rpmfusion-free-updates: mirror.netsite.dk
* updates: ftp.rrzn.uni-hannover.de
kubernetes/signature | 844 B 00:00:00
Retrieving key from
Importing GPG key 0x13EDEF05:
Userid : "Rapture Automatic Signing Key
Fingerprint: a362 b822 f6de dc65 2817 ea46 b53d c80d 13ed ef05
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Retrieving key from
kubernetes/signature | 1.4 kB 00:00:00 !!!
[Errno -1] repomd.xml signature could not be verified for kubernetes
Trying other mirror.

One of the configured repositories failed (Kubernetes),
and yum doesn't have enough cached data to continue.
At this point the only safe thing yum can do is fail.
There are a few ways to work "fix" this:

1. Contact the upstream for the repository and
get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository,
to point to a working upstream.
This is most often useful if you are using a newer
distribution release than is supported by the repository
(and the packages for the previous distribution
release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=kubernetes ...
4. Disable the repository permanently,
so yum won't use it by default.
Yum will then just ignore the repository
until you permanently enable it again
or use --enablerepo for temporary usage:
yum-config-manager --disable kubernetes
subscription-manager repos --disable=kubernetes
5. Configure the failing repository to be skipped,
if it is unavailable. Note that yum will try
to contact the repo. when it runs most commands,
so will have to try and fail each time
(and thus. yum will be be much slower).
If it is a very temporary problem though,
this is often a nice compromise:
yum-config-manager --save

failure: repodata/repomd.xml from kubernetes:
[Errno 256] No more mirrors to try.
[Errno -1] repomd.xml signature could not be verified for kubernetes

I found the solution in this post. Basically, I rewrite the repo definition so that it checks the rpm package key only. Like this:
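A sketch of what the repo definition ends up looking like (assuming the old packages.cloud.google.com repo; the key change is keeping only rpm-package-key.gpg in gpgkey and disabling the repodata signature check):

```ini
# /etc/yum.repos.d/kubernetes.repo (sketch)
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
```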


After that, yum does the install and I can go ahead.

HOWTO: Install WordPress in a CentOS 7.X (no docker)

Sometimes you need to install the program for real. I have described before how to install WP with a docker; now I want to tell you how to install it without dockers. I’ve followed this guide from VULTR. It worked like a charm. For the record:

## > mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 5.5.68-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help.
Type '\c' to clear the current input statement.
MariaDB [(none)]> create database myexample;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE USER 'user'@'localhost' IDENTIFIED BY 'XXXX';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> use myexample;
Database changed
MariaDB [myexample]> GRANT ALL PRIVILEGES ON myexample.* TO 'user'@'localhost';
Query OK, 0 rows affected (0.00 sec)
MariaDB [myexample]> exit
## > yum install -y http://rpms.remirepo.net/enterprise/remi-release-7.rpm
## > yum --enablerepo=remi-php74 install php php-bz2 \
php-mysql php-curl php-gd php-intl php-common \
php-mbstring php-xml
## > systemctl restart httpd

## > wget http://wordpress.org/latest.tar.gz
## > tar -xzvf latest.tar.gz
## > mv wordpress/* /var/www/html/
## > chown -R apache:apache /var/www/html/
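To wire WordPress to the database created above, the usual wp-config.php constants must match it. A sketch using the names from this post (XXXX is the placeholder password, as above):

```php
/* wp-config.php database settings, matching the MariaDB setup above */
define('DB_NAME', 'myexample');
define('DB_USER', 'user');
define('DB_PASSWORD', 'XXXX');   // placeholder
define('DB_HOST', 'localhost');
```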

That’s it. It worked ❤️❤️.

HOWTO: uninstall PostgreSQL and update it on CentOS 7


This is going to be a very short post. I just want to log that the method to install a modern PostgreSQL offered in this DigitalOcean post worked like a charm for me. First I uninstalled the default PostgreSQL that I had installed for my MAAS and that was giving errors, then I followed step by step what is written in the already mentioned post. Successfully. Also my MAAS seems to be working, so I’m going to leave for the weekend with a not-so-bad mood after all 😁😁😁.

BONUS NOTES: How to List Databases and Tables in PostgreSQL Using psql. Have a nice weekend you too!
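From that bonus link, the two psql one-liners I keep forgetting (assuming a local server and the postgres superuser):

```shell
sudo -u postgres psql -c '\l'             # list all databases
sudo -u postgres psql -d maasdb -c '\dt'  # list tables in a given database
```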

MAAS ERROR: psycopg2.OperationalError: FATAL: Ident authentication failed for user “maas” on CentOS 7.9

Just yesterday I was posting about how to install MAAS on CentOS 7.X. I ended up with a quite nice test web, but I was not able to go into production with it. To have a functional MAAS (not the dummy web) you need a database, and the weapon of choice for MAAS seems to be PostgreSQL. I’m not familiar with it, so I can’t tell you a lot about it. A database is a database, as they say in my village. I’m trying to start my MAAS with the below username and password, which I have no regret to copy here:

sudo -u postgres psql -c "CREATE USER maas WITH PASSWORD 'maas'"
sudo -u postgres createdb maasdb -O maas

Yeah, not very original. I took it from here. I modify the configuration (how to find it) on /var/lib/pgsql/data/pg_hba.conf as suggested, restart the postgresql service with systemctl restart postgresql, and try to init the MAAS like this:

# > maas init region+rack --database-uri

Controller has already been initialized.
Are you sure you want to initialize again (yes/no) [default=no]? yes
MAAS URL [default=http://X.X.X.X:5240/MAAS]:
Failed to perform migrations:
Traceback (most recent call last):
... bla bla bla...
FATAL: Ident authentication failed for user "maas"

The above exception was the direct cause of
the following exception:
Traceback (most recent call last):
... bla bla bla ...
FATAL: Ident authentication failed for user "maas"

We have an ident user ID or permission problem. This happened to me before with MySQL databases! So I think I know where to look, more or less: first at the service configuration, then at the user authentication itself. I look for a solution to the FATAL error and end up leaving my pg_hba.conf like this:

local   all   all                  peer
host    all   all   127.0.0.1/32   md5
host    all   all   ::1/128        md5

Then I restart the service and test the login with user maas password maas:

# > psql -h localhost -U maas -d maasdb
Password for user maas:
psql (9.2.24)
Type "help" for help.
maasdb=> \q

So it works! Time to go for the MAAS init:

# > maas init region+rack --database-uri 

Controller has already been initialized.
Are you sure you want to initialize again (yes/no) [default=no]? yes
MAAS URL [default=http://X.X.X.X:5240/MAAS]:
Failed to perform migrations:
Traceback (most recent call last):
...bla bla bla...
Unsupported postgresql server version (90224) detected

Which means I need to uninstall postgresql and install a more advanced version. Which means I have the theme for the next post 😁😁. Well, that’s life!
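For reference, the --database-uri value (which I left out above) follows the standard PostgreSQL URI form; a sketch using the throwaway credentials from this post:

```shell
# postgres://USER:PASSWORD@HOST/DBNAME
maas init region+rack \
  --database-uri "postgres://maas:maas@localhost/maasdb"
```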

HOWTO: Install MAAS on CentOS 7.X and update default libseccomp

A dummy picture. Because one image counts for a thousand words. Taken from here.

It’s time to change. What the hell, it’s always time to change! I mean, CentOS 7.9 is already a walking corpse, and I need – we need – a valid replacement. Since it’s been a while since I last PXE booted a client, I asked around other sysadmins and they recommended installing MAAS from Ubuntu. MAAS (Metal As A Service, or Machine As A Service) is basically a – somehow – evolved version of Cobbler, or a puppet-free Foreman. Although it is Ubuntu software, installation on CentOS is possible after a small trick. Let’s start! 😉

Step one: enable snapd. From the advertisement: “Snaps are applications packaged with all their dependencies to run on all popular Linux distributions from a single build. They update automatically and roll back gracefully.” I had heard of it before, but I hadn’t used it yet. We have epel-release already, so we simply do the next steps. Like this:

# > yum install snapd
... some stuff here...
snapd.x86_64 0:2.56.2-1.el7
Dependency Installed:
snap-confine.x86_64 0:2.56.2-1.el7
snapd-selinux.noarch 0:2.56.2-1.el7
squashfs-tools.x86_64 0:4.3-0.21.gitaae0aff4.el7
squashfuse.x86_64 0:0.1.102-1.el7
squashfuse-libs.x86_64 0:0.1.102-1.el7
# > systemctl enable --now snapd.socket
Created symlink from /etc/systemd/system/sockets.target.wants/snapd.socket
to /usr/lib/systemd/system/snapd.socket.
# > ln -s /var/lib/snapd/snap /snap

Step two: install MAAS from the 3.2 channel. Unfortunately, it’s not working:

# > snap install --channel=3.2 maas
error: cannot perform the following tasks:
- Mount snap "maas" (23947) (snap "maas" system usernames require a snapd built against libseccomp >= 2.4)

Time to upgrade libseccomp. We’re going to do it from source. You can get the package from here. We don’t deviate a comma from the instructions. Once downloaded, we type

# > ./configure --prefix=/usr --disable-static && make
... some stuff here...
CCLD scmp_api_level
Making all in tests
Making all in doc
# > make install
... some stuff here...
/usr/bin/install -c -m 644 libseccomp.pc '/usr/lib/pkgconfig'

If we run snap install again now, it fails again. We need to reload the library cache. This is the output of my successful install:

# > ldconfig
# > snap install --channel=3.2 maas
Warning: /var/lib/snapd/snap/bin was not found in your $PATH.
If you've not restarted your session
since you installed snapd, try doing that.
Please see https://forum.snapcraft.io/t/9469
for more details.

maas (3.2/stable) 3.2.6-12016-g.19812b4da from Canonical✓ installed

Now I need to initialize it and learn how to use it. But you should now be able to follow the official howto. If I find another annoyance, be sure I’ll post about it! 😉.
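About that $PATH warning: until you log out and back in (which picks up snapd's profile script), you can extend the PATH of the current shell by hand:

```shell
# append snapd's bin dir to PATH for this session only; the permanent fix
# is a re-login, which sources /etc/profile.d/snapd.sh
export PATH="$PATH:/var/lib/snapd/snap/bin"
echo "$PATH" | grep -q '/var/lib/snapd/snap/bin' && echo "snap bin on PATH"
```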