Failed to start The Apache HTTP Server: Invalid command ‘SSLPassPhraseDialog’

I shouldn’t, but when I’m in a hurry I tend to rsync configurations from one machine to another. In this case, I want to start an Apache instance on a CentOS 8 Stream machine. First I copy the httpd.conf file from my working machine ‘working‘ to my client ‘client‘:

client $ > rsync -av root@working:/etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf --delete-after --progress
client $ > systemctl restart httpd
client $ > systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
Active: active (running) since DATE
Docs: man:httpd.service(8)
Main PID: YYY (httpd)
Status: "Started, listening on: port 80"
Tasks: 214 (limit: 3297467)
Memory: 47.8M
CGroup: /system.slice/httpd.service
├─304637 /usr/sbin/httpd -DFOREGROUND
└─304644 /usr/sbin/httpd -DFOREGROUND
DATE client systemd[1]: Starting The Apache HTTP Server...
DATE client httpd[304637]: AH00112: Warning: DocumentRoot [/var/www/html/munin] does not exist
DATE client systemd[1]: Started The Apache HTTP Server.

So the Apache server runs, but I can’t access the page because of the missing DocumentRoot. I was expecting this: I’m fixing a previously broken configuration! Yeah, that’s me, always fixing broken things. But let’s continue. To clean up the DocumentRoot issue we rsync the conf.d directory from the working machine as well, and try starting the server again. Like this:

client $ > rsync -av root@working:/etc/httpd/conf.d/ /etc/httpd/conf.d/ --delete-after --progress
client $ > systemctl restart httpd
client $ > systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since DATE
Docs: man:httpd.service(8)
Process: 305059 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
Main PID: YYY (code=exited, status=1/FAILURE)
Status: "Reading configuration..."
DATE client systemd[1]: Starting The Apache HTTP Server...
DATE client httpd[305059]: AH00526: Syntax error on line 18 of /etc/httpd/conf.d/ssl.conf:
DATE client httpd[305059]: Invalid command 'SSLPassPhraseDialog', perhaps misspelled or defined by a module not included in the server configuration
DATE client systemd[1]: httpd.service: Main process exited, code=exited, status=1/FAILURE
DATE client systemd[1]: httpd.service: Failed with result 'exit-code'.
DATE client systemd[1]: Failed to start The Apache HTTP Server.

As usual, I have highlighted above what I consider interesting. It looks like I need to add SSL support to go ahead. I install the module and try again:

client $ > yum install mod_ssl

Now I restart and I get my default HTTP Server Test Page (a quick sanity check is sketched below). Time to fill up the Apache server!
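For completeness, here is roughly the sanity check I like to run after installing mod_ssl; these commands are not from the captured session above, just the stock CentOS ones:

client $ > httpd -M | grep ssl       # should list ssl_module (shared)
client $ > systemctl restart httpd
client $ > systemctl status httpd    # back to active (running)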

Fixing “No URLs in Mirrorlist” Error on CentOS 8 Stream

The error looks like this:

Error: Error downloading packages:
No URLs in mirrorlist

It appears right after you answer “y” to yum update and get the package list. “No URLs in mirrorlist” apparently occurs because CentOS 8 has reached its End-Of-Life (EOL) and the official mirrors no longer host the packages. We can fix this by updating the repository files to point to the vault where the CentOS 8 packages are now stored. Open a root terminal, or become root, and run this:

sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*

After that, run yum update again. It will work.
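If yum still complains at this point, it may be chewing on cached metadata; clearing it first is a harmless extra step (my addition, not strictly part of the fix):

yum clean all
yum makecache
yum update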

BTW, I knew CentOS 8 Stream was going to be discontinued. However, I was not expecting to face this so soon. Back to square zero, I guess. We’ll see which distro we choose now!

HOWTO: fix 22H2 Windows 10 not installing

I don’t have (yet?) an efficient software delivery system, so sometimes I need to install a system from scratch “old style”, that is, with an installation CD or pen drive. I was doing so with a Windows 10 machine when I encountered this issue. Despite several reboots, 22H2 (the latest Windows 10 update) was not downloading or installing. If you want to know why, you may read this post, which gives several solutions to the problem. Out of them, the first one worked for me.

I went to System Settings –> Update & Security –> Troubleshoot. The troubleshooter ran for a few minutes before finding an error in my update database. After that, it asked me to close the troubleshooter, which I did, and just to be sure, I also rebooted.

And after the reboot, 22H2 started downloading and I managed to get it installed. Thanks Google again for such a useful tip! Yes, I know, I need to take a Windows admin course one of these days. Any suggestions? 🙂

mamba ImportError: libmamba.so.2: undefined symbol: archive_read_support_filter_zstd

I’m installing DeePiCt, a catchy name for a conda environment that, I’m told, can do convolutional networks for supervised mining of molecular patterns within cellular context. Sigh. You need a PhD just to understand the description. Fortunately, I have one 🙂 Anyway, here’s the GitHub page. Let’s go for it. I already have Miniconda, so I skip that step. Without changing a single iota, I follow the next steps until reaching the error above, which I will tell you how I fixed.

[root@server ~]# conda init bash
no change   XXX/miniconda/condabin/conda
no change   XXX/miniconda/bin/conda
no change   XXX/miniconda/bin/conda-env
no change   XXX/miniconda/bin/activate
no change   XXX/miniconda/bin/deactivate
no change   XXX/miniconda/etc/profile.d/conda.sh
no change   XXX/miniconda/etc/fish/conf.d/conda.fish
no change   XXX/miniconda/shell/condabin/Conda.psm1
no change   XXX/miniconda/shell/condabin/conda-hook.ps1
no change   XXX/miniconda/lib/python3.9/site-packages/xontrib/conda.xsh
no change   XXX/miniconda/etc/profile.d/conda.csh
modified      /root/.bashrc

==> For changes to take effect, close and re-open your current shell. <==

[root@server ~]# exit
logout
(base) [root@server ~]# conda install -n base -c conda-forge mamba

## Package Plan ##

environment location: XXX/miniconda
added / updated specs:
- mamba
... package plan...

Proceed ([y]/n)? y

Downloading and Extracting Packages

Preparing transaction: done
Verifying transaction: done
Executing transaction: done

(base) [root@server ~]# mamba create -c conda-forge -c bioconda -n snakemake snakemake==5.13.0 python=3.7
Traceback (most recent call last):
  File "XXX/miniconda/bin/mamba", line 7, in <module>
    from mamba.mamba import main
... some more stuff here...
ImportError: XXX/miniconda/lib/python3.9/site-packages/libmambapy/../../../libmamba.so.2: undefined symbol: archive_read_support_filter_zstd

Now what? We have found the ImportError that gives the post its title. Let’s fix it. It turned out to be the typical kind of error that I call a heritage error. Here you have the post that gave me the solution. We go to “XXX/lib” (XXX being the path where I have installed my Miniconda) and do the following:

(base) [root@server ~ lib ] # ls libarchive* 
libarchive.a libarchive.so libarchive.so.19
(base) [root@server ~ lib ] # ln -s libarchive.so libarchive.so.13
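Before launching the long environment creation again, one can double-check that the trick worked by asking the linker about the missing symbol; a minimal check, assuming the same XXX/miniconda layout as above:

(base) [root@server ~ lib ] # ldd libmamba.so.2 | grep archive
(base) [root@server ~ lib ] # nm -D libarchive.so | grep archive_read_support_filter_zstd

With the symlink in place, the first command should resolve libarchive.so.13 to our local library, and the second should print the symbol the ImportError complained about.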

After that, the rest takes time: a lot of packages are removed, a lot are installed (it took me around one hour to finish it all), but there are no more errors. Mamba!

ulimit: open files: cannot modify limit: Operation not permitted (on CentOS 7)

Wow, two bits in a row! I hope this is not a trend; I’ve decided to write more for fun than for work. Aaanyway. I was forced to increase the limit of files a user can handle (the so-called ‘ulimit‘) and I found out that it was not possible for the user to do that. I’m a little bit ignorant about why you would do that (increase the file limit), but let’s leave that discussion for another forum, one that I will call “why I use Linux”. Visually, this is what I get when I try to modify the file limits:

user@server ~ $ > ulimit -n 10000
-bash: ulimit: open files: cannot modify limit:
Operation not permitted

We have two limits, the soft one and the hard one. In principle (source), a normal user can adjust the soft limit freely within the range allowed by the hard limit. A normal user can adjust the hard limit too, but can only decrease it. You must be root to increase the hard limit beyond its default value. So let’s do it.

We modify /etc/security/limits.conf by adding these two lines before the “End of file” comment, like this:

* hard nofile 100000
* soft nofile 100000

Save /etc/security/limits.conf, log out and log in again (no need to reboot), and then we get this:

user@server ~ $ >  ulimit -Sn
100000
user@server ~ $ > ulimit -Hn
100000

So it’s done. Good, because if it were not, we might need to change the sshd configuration and use PAM (see the sketch below for what that would roughly look like). And this is the last post about the subject. Thank you for coming 🙂
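For reference only, since I didn’t need it in the end: on CentOS 7 that fallback would roughly mean making sure sshd goes through PAM and that PAM actually applies limits.conf. A sketch, assuming stock file locations:

grep -i '^UsePAM' /etc/ssh/sshd_config     # expect: UsePAM yes
grep -r pam_limits.so /etc/pam.d/          # usually pulled in via system-auth / password-auth
systemctl restart sshd                     # only needed after changing any of the above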

ERROR: line XXX too long in state file /var/lib/logrotate/logrotate.status

One of the good things about Linux systems is that you can modify every little aspect of what they are supposed to do. Unfortunately, sometimes the errors you get are quite cryptic. This is what I got after logging in to one of our CentOS 7.9 servers after a while:

Subject: Anacron job 'cron.daily' on XXX
Message-Id: <YYY@domain.org>
Date: TODAY (CEST)

/etc/cron.daily/logrotate:

error: line ZZZ too long in state file /var/lib/logrotate/logrotate.status

You could also get it as an email if someone had cared to configure that option, which was not the case here. This server is used for Samba and we need to keep it healthy. Let’s get rid of the error. First we make a backup of the logrotate state file.

cp /var/lib/logrotate/logrotate.status /var/lib/logrotate/logrotate.status.bak

Now we edit the state file and remove line ZZZ. We do it with vi: type :ZZZ (it’s a number) to go to line ZZZ of the file, then dd to remove the line in question. I used the opportunity to remove “old” entries from the file as well. Then save with :wq and force a log rotation.

logrotate -f /etc/logrotate.conf
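By the way, if you prefer not to open vi at all, the deletion itself can be done non-interactively; a one-liner sketch, where ZZZ again stands for the line number reported in the error (not what I actually ran):

sed -i "ZZZd" /var/lib/logrotate/logrotate.status    # delete line ZZZ in place; we made a backup above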

And the problem is gone for good. Let’s hope all the troubles are as easy to solve as this one 🙂

FIX: Job for docker.service failed because the control process exited with error code on CentOS 7.X

Disclaimer: I fix it by reinstalling Docker. Here you have the Docker installation procedure. First, have a look at the message:

# > systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
# > systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since XXX; YYY ago
Docs: https://docs.docker.com
Process: 34949 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 34949 (code=exited, status=1/FAILURE)

Now the fix. In my case, yum remove docker* removes the following:

Removed:
docker-ce.x86_64 3:20.10.17-3.el7
docker-ce-cli.x86_64 1:20.10.17-3.el7
docker-ce-rootless-extras.x86_64 0:20.10.17-3.el7
docker-compose.noarch 0:1.18.0-4.el7
docker-scan-plugin.x86_64 0:0.17.0-3.el7

Dependency Removed:
nvidia-docker2.noarch 0:2.11.0-1

Now we do what is written in the documentation, which I copy here for a quick fix:

yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

This is the installation report:

Installed:
docker-buildx-plugin.x86_64 0:0.11.1-1.el7
docker-ce.x86_64 3:24.0.4-1.el7
docker-ce-cli.x86_64 1:24.0.4-1.el7
docker-compose-plugin.x86_64 0:2.19.1-1.el7

Dependency Installed:
docker-ce-rootless-extras.x86_64 0:24.0.4-1.el7

Updated:
containerd.io.x86_64 0:1.6.21-3.1.el7

After this the daemon starts. We also want the NVIDIA features, so let’s install them.

# > yum install nvidia-docker2

Installed:
nvidia-docker2.noarch 0:2.13.0-1

Dependency Installed:
nvidia-container-toolkit-base.x86_64 0:1.13.4-1

Dependency Updated:
libnvidia-container-tools.x86_64 0:1.13.4-1
libnvidia-container1.x86_64 0:1.13.4-1
nvidia-container-toolkit.x86_64 0:1.13.4-1

Now restart the service with systemctl restart docker and we are ready to run! (A quick smoke test is sketched below.)
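To convince myself that both the daemon and the NVIDIA runtime survived the reinstall, this is roughly the smoke test I would run; the CUDA image tag is only an example, pick whatever matches your driver:

# > docker run --rm hello-world
# > docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu20.04 nvidia-smi

If the second command prints the usual nvidia-smi table from inside the container, we really are ready to run.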

User management solutions III : dash app and a docker solution

Well, I keep working on my stuff while chasing my obsession with a portable, fully configurable website for user management. The previous Flask solution was close to it, but the content after login was quite empty. Today I’m going to comment first on this Dash app solution. Unfortunately there’s no git link here, so we need to copy it all. We call our project dash-login: make a folder and edit an app.py file inside it, with vi for example. Like this (I’m probably offending your intelligence now):

mkdir dash-login
cd dash-login
vi app.py

Into app.py we copy what is given to us in the example. After fixing the formatting and so on, I only change the last line, since I don’t want to run it on the default port; instead, I run it on 5000. This is how it looks:

if __name__ == '__main__':
    app.run_server(debug=True,  port="5000")

We can now run the code by typing python app.py. In my case I get the web page but, unfortunately, I also get an error right away: AttributeError: 'NoneType' object has no attribute 'encode', because I don’t have a database. If I try to add a user, the app also complains. Of course you could spend some time fixing this, but I want a working solution. So let’s go for a Docker one.

OpenDashAuth works out of the box. The plot above shows what I get after cloning the repository, building the docker compose project and bringing it up (roughly the steps sketched below). The problem, in this case, is that it is poorly documented. How do I add a user? How do I modify the sample plot? “You can add more users easily through the database” (quote from the documentation), so I would refer to my previous bash solution, but it’s not as convenient as it could be. Anyway, a top programmer like you should be able to integrate both solutions.
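For the record, “cloning the repository, building the docker compose project and bringing it up” boils down to something like this; the repository URL and folder name are placeholders, since I’m not linking them here:

git clone <OpenDashAuth-repo-url>
cd <OpenDashAuth-folder>
docker compose build        # or docker-compose on older setups
docker compose up -d
docker compose logs -f      # to see that the app came up and which port it listens on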

The third solution that I want to offer you is this one from FoodyFood. Fortunately, you can also clone this repository and launch the app. What is wrong with it? In principle, nothing! It lets you select the app that you want to run, and this gives me good room for a general solution. So if this one works for you too, let me know 🙂

E: Unable to locate package nvidia-docker2 on Ubuntu 20.04

Context, please! Of course, here you have it. I’m trying to get Docker containers with NVIDIA support running for a user on an Ubuntu 20.04 machine. If you have been following me, you know that I’m now a little bit lost and I don’t know which Linux distro to choose to replace CentOS 7. Let’s fix this error. I found the solution here.

Since it’s a brand new system (but not deployed by me!), first I need to install curl. Then I run the commands from the solution. This is my output, curated as usual.

root@ubuntu:~# apt install curl
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  curl
0 upgraded, 1 newly installed, 0 to remove and 23 not upgraded.
...
Processing triggers for man-db (2.9.1-1) ...
root@ubuntu:~# curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
>   sudo apt-key add -
OK
root@ubuntu:~# distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
root@ubuntu:~# curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
>   sudo tee /etc/apt/sources.list.d/nvidia-docker.list
deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/libnvidia-container/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/nvidia-container-runtime/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /
root@ubuntu:~# sudo apt-get update
Hit:1 http://de.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://de.archive.ubuntu.com/ubuntu focal-updates InRelease                                       
Hit:3 http://de.archive.ubuntu.com/ubuntu focal-backports InRelease                                     
Get:4 https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/amd64  InRelease [1.484 B]       
Get:5 https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/amd64  InRelease [1.481 B]  
Get:6 https://nvidia.github.io/nvidia-docker/ubuntu18.04/amd64  InRelease [1.474 B]     
Hit:7 http://security.ubuntu.com/ubuntu focal-security InRelease                        
Get:8 https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/amd64  Packages [26,4 kB]
Get:9 https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/amd64  Packages [7.416 B]
Get:10 https://nvidia.github.io/nvidia-docker/ubuntu18.04/amd64  Packages [4.488 B]
Fetched 42,8 kB in 1s (35,7 kB/s)    
Reading package lists... Done
root@ubuntu:~# apt-get install -y nvidia-docker2
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  bridge-utils containerd docker.io git git-man liberror-perl libnvidia-container-tools libnvidia-container1 nvidia-container-toolkit nvidia-container-toolkit-base pigz runc ubuntu-fan
Suggested packages:
  ifupdown aufs-tools btrfs-progs cgroupfs-mount | cgroup-lite debootstrap docker-doc rinse zfs-fuse | zfsutils git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk
  gitweb git-cvs git-mediawiki git-svn
The following NEW packages will be installed:
  bridge-utils containerd docker.io git git-man liberror-perl libnvidia-container-tools libnvidia-container1 nvidia-container-toolkit nvidia-container-toolkit-base nvidia-docker2 pigz runc
  ubuntu-fan
0 upgraded, 14 newly installed, 0 to remove and 23 not upgraded.
Need to get 75,3 MB of archives.
... package download...
Unpacking ubuntu-fan (0.12.13ubuntu0.1) ...
Setting up nvidia-container-toolkit-base (1.13.2-1) ...
Setting up runc (1.1.4-0ubuntu1~20.04.3) ...
Setting up liberror-perl (0.17029-1) ...
Setting up libnvidia-container1:amd64 (1.13.2-1) ...
Setting up bridge-utils (1.6-2ubuntu1) ...
Setting up libnvidia-container-tools (1.13.2-1) ...
Setting up pigz (2.4-1) ...
Setting up git-man (1:2.25.1-1ubuntu3.11) ...
Setting up containerd (1.6.12-0ubuntu1~20.04.1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Setting up ubuntu-fan (0.12.13ubuntu0.1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ubuntu-fan.service → /lib/systemd/system/ubuntu-fan.service.
Setting up nvidia-container-toolkit (1.13.2-1) ...
Setting up docker.io (20.10.21-0ubuntu1~20.04.2) ...
Adding group `docker' (GID 138) ...
Done.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Setting up nvidia-docker2 (2.13.0-1) ...
Setting up git (1:2.25.1-1ubuntu3.11) ...
Processing triggers for systemd (245.4-4ubuntu3.21) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.9) ...
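By the way, the check that convinced me the repository part had worked (remember, the original error was “Unable to locate package nvidia-docker2”) was simply asking apt where the package would now come from; not in the captured output above, but something like:

root@ubuntu:~# apt-cache policy nvidia-docker2     # should show a candidate version coming from nvidia.github.io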

And now I come back and log in as a user. More about this to come later, probably. Or tomorrow. Or last week, if I backpost it 😀

run_docker.py:255] python: can’t open file ‘/app/alphafold/run_alphafold.py’: [Errno 13] Permission denied

I’m following the official HOWTO for AlphaFold, but on a new Ubuntu 20.04 system. The goal is to have an installation that a user can run. First I had issues installing nvidia-docker; now I can build the AlphaFold image but not run it. I have a setup based on a “general” database, so a sample run over the fasta file myfasta.fasta looks like this (error included):

user@ubuntu $ > python3 docker/run_docker.py --fasta_paths=/home/user/myfasta.fasta  --max_template_date=2023-06-21 --data_dir=/home/generic/genetic_database/ --output_dir=/home/user/results/
I0621 DATE run_docker.py:113] Mounting /home/user/fastas -> /mnt/fasta_path_0
I0621 DATE 140474254292800 run_docker.py:113] Mounting /home/generic/genetic_database/uniref90 -> /mnt/uniref90_database_path
I0621 DATE run_docker.py:113] Mounting /home/generic/genetic_database_new/mgnify -> /mnt/mgnify_database_path
... the other generic database folders... 
I0621 DATE run_docker.py:113] Mounting /home/generic/genetic_database/bfd -> /mnt/bfd_database_path
I0621 DATE run_docker.py:255] python: can't open file '/app/alphafold/run_alphafold.py': [Errno 13] Permission denied

I’ve tried a different AlphaFold version, removing the image, copying the app to another place, building it in another way (modifying the Dockerfile) and so on, without luck. It turned out to be a simple permission issue, at least in my case. As root, we add “user” to the two groups needed:

usermod -aG sudo user
usermod -aG docker user
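Keep in mind that group membership only applies to new sessions; these are the quick checks I would do before retrying (my habit, not part of the original steps):

id user                       # sudo and docker should now appear in the group list
su - user                     # or simply log out and back in to pick up the new groups
docker run --rm hello-world   # run as the user: confirms docker works without sudo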

Now we try to run the simulation with sudo. Basically, I run the same command as above, but now it works.

user@ubuntu $ > sudo python3 docker/run_docker.py --fasta_paths=/home/user/myfasta.fasta  --max_template_date=2023-06-21 --data_dir=/home/generic/genetic_database/ --output_dir=/home/user/results/
[sudo] password for user:

So cool. But I didn’t want to be asked for a password each time I launch a simulation. We’ll see how I can get rid of this one… maybe fiddling with the sudo setup, maybe automating it a little bit more 🙂
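If it ends up being the sudo route, a minimal sketch of what that could look like is a dedicated sudoers drop-in allowing just this command without a password. The paths below are hypothetical examples, and it must be edited with visudo so a syntax error doesn’t lock you out:

# visudo -f /etc/sudoers.d/alphafold
# hypothetical rule: let "user" run only the AlphaFold launcher without a password
user ALL=(root) NOPASSWD: /usr/bin/python3 /home/user/alphafold/docker/run_docker.py *

That said, wildcards in sudoers are easy to abuse, so if the docker group membership alone turns out to be enough, skipping sudo entirely is probably the cleaner option.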