A Jupyter notebook in a Docker container: Jupyter Docker Stacks

Again I’m with Python. I want to be as package-independent as possible, so I want to run the external Python apps in a Docker container. If we have a Jupyter notebook, we can easily run it this way if we have it in a “test” folder. Note that the “test” folder needs to be readable by everyone (777) or you’ll get this error:

tornado.web.HTTPError: HTTP 403: Forbidden
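
A quick way around this, assuming the mapped folder is simply called “test”, is to open up its permissions before starting the container. The chown variant is based on the jovyan user inside the image usually having UID 1000 and GID 100, so treat that as an assumption:

chmod -R 777 test
# or, less drastic: hand the folder to the container user (assumed UID 1000, GID 100)
chown -R 1000:100 test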

Anyway, this is what I get when I run my docker mapping the test folder:

# > docker run -p 8888:8888 -v "${PWD}"/test:/home/jovyan/work \
jupyter/scipy-notebook
Entered start.sh with args: jupyter lab
Executing the command: jupyter lab
[I DATE ServerApp] jupyter_server_terminals
| extension was successfully linked.
[I DATE ServerApp] jupyterlab
| extension was successfully linked.
[W DATE NotebookApp] 'ip' has moved from NotebookApp
to ServerApp. This config will be passed to ServerApp.
Be sure to update your config before our next release.
[W DATE NotebookApp] 'ip' has moved from NotebookApp
to ServerApp. This config will be passed to ServerApp.
Be sure to update your config before our next release.
[I DATE ServerApp] nbclassic
| extension was successfully linked.
[I DATE ServerApp] Writing Jupyter server cookie secret
to /home/jovyan/.local/share/jupyter/runtime/jupyter_cookie_secret
[I DATE ServerApp] notebook_shim
| extension was successfully linked.
[I DATE ServerApp] notebook_shim
| extension was successfully loaded.
[I DATE ServerApp] jupyter_server_terminals
| extension was successfully loaded.
[I DATE LabApp] JupyterLab extension loaded from
/opt/conda/lib/python3.10/site-packages/jupyterlab
[I DATE LabApp] JupyterLab application directory
is /opt/conda/share/jupyter/lab
[I DATE ServerApp] jupyterlab
| extension was successfully loaded.
[I DATE ServerApp] nbclassic
| extension was successfully loaded.
[I DATE ServerApp] Serving notebooks
from local directory: /home/jovyan
[I DATE ServerApp] Jupyter Server 2.1.0 is running at:
[I DATE ServerApp] http://XXX:8888/lab?token=XXX
[I DATE ServerApp] or http://127.0.0.1:8888/lab?token=XXX
[I DATE ServerApp] Use Control-C to stop this server
and shut down all kernels (twice to skip confirmation).
[C DATE ServerApp]

To access the server, open this file in a browser:
file:///home/jovyan/.local/share/jupyter/runtime/jpserver-7-open.html
Or copy and paste one of these URLs:
http://XXX:8888/lab?token=XXX
or http://127.0.0.1:8888/lab?token=XXX

The Jupyter Docker Stacks documentation is available here. Happy docking!


Error: cannot find -lmkl_intel_lp64 while compiling on Ubuntu 20.04

Ubuntu 20.04 is climbing positions as the heir of CentOS 7, and I have started experiencing compatibility issues. I was compiling RELION with the following options

cmake -DAMDFFTW=ON -DCUDA_ARCH=86 -DCUDA=ON -DCudaTexture=ON \
-DFORCE_OWN_FLTK=ON -DCMAKE_CXX_COMPILER=g++ -DCMAKE_C_COMPILER=gcc \
-DMPI_C_COMPILER=/usr/bin/mpicc \
-DMPI_C_LIBRARIES=/usr/lib/x86_64-linux-gnu/openmpi/lib/ \
-DMPI_C_INCLUDE_PATH=/usr/lib/x86_64-linux-gnu/openmpi/include/ \
-DCUDA_ARCH=61 -DCUDA=ON -DCudaTexture=ON -DMKLFFT=ON \
-DFORCE_OWN_FLTK=ON -DGUI=ON \
-DCMAKE_INSTALL_PREFIX=/XXX/relion_local/ \
-D CMAKE_BUILD_TYPE=Release ..

when I found this error

[ 59%] Linking CXX executable ../../bin/relion_tomo_tomo_ctf
/usr/bin/ld: cannot find -lmkl_intel_lp64
/usr/bin/ld: cannot find -lmkl_sequential
/usr/bin/ld: cannot find -lmkl_core
collect2: error: ld returned 1 exit status
make[2]: *** [src/apps/CMakeFiles/tomo_ctf.dir/build.make:102:
bin/relion_tomo_tomo_ctf] Error 1
make[1]: *** [CMakeFiles/Makefile2:352:
src/apps/CMakeFiles/tomo_ctf.dir/all] Error 2
make: *** [Makefile:130: all] Error 2

I’m using the default package install locations, no modules or anything fancy. So it looks like some libraries are missing. Of course I try to find them with ldconfig -p | grep 'mkl', but they are obviously not there. Let's install them:

# apt-get install libmkl*intel* libmkl*se* libmkl*core*
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'libmkl-intel-thread' for glob 'libmkl*intel*'
Note, selecting 'libmkl-blacs-intelmpi-ilp64' for glob 'libmkl*intel*'
Note, selecting 'libmkl-intel-ilp64' for glob 'libmkl*intel*'
Note, selecting 'libmkl-blacs-intelmpi-lp64'
for glob 'libmkl*intel*'
Note, selecting 'libmkl-intel-lp64' for glob 'libmkl*intel*'
Note, selecting 'libmkl-sequential' for glob 'libmkl*se*'
Note, selecting 'libmkl-core' for glob 'libmkl*core*'
Note, selecting 'libmkl-cdft-core' for glob 'libmkl*core*'
The following additional packages will be installed:
libmkl-def libmkl-locale libmkl-vml-def
The following NEW packages will be installed:
libmkl-blacs-intelmpi-ilp64 libmkl-blacs-intelmpi-lp64
libmkl-cdft-core libmkl-core libmkl-def
libmkl-intel-ilp64 libmkl-intel-lp64
libmkl-intel-thread libmkl-locale
libmkl-sequential libmkl-vml-def

0 upgraded, 11 newly installed, 0 to remove and 19 not upgraded.
Need to get 38,4 MB of archives.
After this operation, 204 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

You may need to run apt-get install mklibs in addition. Of course it depends on what you have been doing with your system so far, since this will pull in some Python libraries as well. After getting those packages, make and make install run without errors. Time to ask the user to test it…
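
If you want to double-check before re-running make, the linker cache can be queried again; after installing the packages above, the previously missing libraries should now show up. A quick sanity check, nothing more:

ldconfig -p | grep -E 'libmkl_(intel_lp64|sequential|core)'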

Python Import Error: can’t import name gcd from fractions

Deprecation is a big issue in Python. I’m in need of Molecular Dynamics (MD) simulation tools. The error above comes from one tool I already posted about, called LipIDens; more specifically, it’s a complaint thrown by vermouth. Vermouth (for VERsatile, MOdular, and Universal Transformation Helper) is also a drink and the Python library that powers Martinize2. The vermouth source is here. It is supposed to be used to apply transformations to molecular structures, which means I don’t really know what it does! Anyway, my error reads

Installed 
/usr/local/lib/python3.9/site-packages/lipidens-1.0.0-py3.9.egg
Processing dependencies for lipidens==1.0.0
error: networkx 3.0 is installed but
networkx~=2.0 is required by {'vermouth'}
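
The import error from the post title can actually be reproduced in isolation, independently of LipIDens (a minimal illustration):

python3 -c "from math import gcd; print(gcd(12, 18))"   # works on Python >= 3.5
python3 -c "from fractions import gcd"                   # ImportError on Python >= 3.9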

What to do here? I found the solution and the explanation once more on StackOverflow. It’s very interesting to know that gcd moved from the fractions module to the math module after Python 3.5, and the old one was eventually removed. So why does the LipIDens documentation recommend using a Python above 3.9? I’m going to leave the answer to that question open (old developer environments with remnants, or insufficient tests) and show you my solution: we install a specific version of a Python package. I choose pip instead of conda to install it because it goes to my Python site-packages, which I personally consider a more elegant solution. Here you have my output:

bash-5.1# pip install networkx==2.5
Collecting networkx==2.5
Downloading networkx-2.5-py3-none-any.whl (1.6 MB)
|XXXXXXX| 1.6 MB 4.3 MB/s
Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.9/site-packages (from networkx==2.5) (5.1.1)
Installing collected packages: networkx
Attempting uninstall: networkx
Found existing installation: networkx 2.0
Uninstalling networkx-2.0:
Successfully uninstalled networkx-2.0
Successfully installed networkx-2.5
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
bash-5.1# python setup.py install

After the pip install over my local Python I run the LipIDens installer again and it works. Getting meaningful results from the program is another issue! BTW, I decided to keep writing my bits thanks to some good feedback that I was missing before… thank you guys, I appreciate it.

ERROR: /usr/share/Modules/init/sh: No such file or directory on Ubuntu 22.04

We started the migration to a new Linux flavour (in principle Ubuntu) and therefore we also started to experience the first issues. We have a well-established software module setup, with the module definitions being loaded from a network location. We can install software modules, but after every folder needed is created and every other adjustment is made, we see this:

$ > ssh user@new-server
Last login: DATE from other-server.org
-bash: /usr/share/Modules/init/sh: No such file or directory

I have a machine, “other-server”, where I can check how the modules are working, and I find that the folder “Modules” is missing on Ubuntu. The content of the lowercase “modules” folder in the same location seems to be the same, so I do

root@new-server:/usr/share# ln -s /usr/share/modules Modules

And I try logging in again, with this result:

$ > ssh user@new-server
user@new-server's password:
Last login: DATE from other-server.org
user@new-server ~ $ > module avail
/PATH/modulecmd:
error while loading shared libraries:
libtcl8.5.so: cannot open shared object file:
No such file or directory

So now there’s a library problem. Let’s try to fix it very quickly. I locate a similar library

root@new-server:~# ls /usr/lib/*/*libtcl*
/usr/lib/x86_64-linux-gnu/libtcl8.6.so
/usr/lib/x86_64-linux-gnu/libtcl8.6.so.0
/usr/lib/x86_64-linux-gnu/libtclenvmodules.so
root@new-server:~# cp /usr/lib/x86_64-linux-gnu/libtcl8.6.so /usr/lib/libtcl8.5.so
root@new-server:~# ldconfig

And the error is gone. We can do this because the libraries are not so different… I have no idea of the side effects. Let’s hope there are not too many 😉!
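
Before calling it a day, a quick sanity check could be to confirm that the module command now resolves its Tcl dependency (the modulecmd path is redacted here, so adjust it to your own):

ldd /PATH/modulecmd | grep tcl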

WordPress multisite: more than one WP site in one server

I haven’t had much of a mood to write lately. I guess I’ve been busy thinking or something like that. Anyway, here I am again, (b)logging the last thing I did, which is to transform a single WP install into a multisite install. I will be following this guide to arrive at the desired multisite configuration.

I start with an installed version (so no docker) on CentOS 7. I deactivate the plugins, then edit /var/www/html/wp-config.php as indicated.

Add this code before the /* That's all, stop editing! Happy blogging. */ line:

/* Multisite */
define('WP_ALLOW_MULTISITE', true);

I save the edits to wp-config.php and restart the httpd service. Unfortunately I don’t get the Network Setup menu as in the guide (maybe you will get it), but I don’t worry about it and continue. Later I will regret that, and I will need to edit various other files to achieve what I want. Here you have the details about the file modifications needed. Basically, I have also installed phpMyAdmin. For that I have installed and configured mod_rewrite for Apache and fixed a package error with the PHP 7.4 install on CentOS 7.9. In brief

yum-config-manager --disable 'remi-php*'
yum-config-manager --enable remi-php74
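
To double-check that mod_rewrite actually made it into Apache after all this juggling, listing the loaded modules should be enough:

httpd -M | grep rewrite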

And of course, I have given permission to access phpMyAdmin on this server. But in the end, as usual, I manage to get what I want. Or should I say you managed to get what you wanted? 🧐 Because what I want is to write about what I think, not about what I do. And I’m not in the mood 😔. See you around…

HOWTO: Install WordPress on Debian 11 (no Docker)

Previously I installed WP on CentOS 7.X; now I have tried it on another system, Debian GNU/Linux 11 (bullseye). I have followed this tutorial from Cloud Infrastructure Services, basically cut’n’copying up to Step 5 and replacing the generic entries with my own values. I test the Apache configuration

# apache2ctl configtest
Syntax OK
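
If you go the Apache route and the virtual host isn’t active yet, the usual Debian dance would be something along these lines (the wordpress.conf name is an assumption; use whatever your vhost file is called):

a2ensite wordpress.conf
a2enmod rewrite
systemctl reload apache2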

Then I go to http://IP_ADDRESS and create the root user, etc., as written here in the HOWTO to install WP on Debian. It was very easy, as it was for CentOS 😎! Note that one of the tutorials uses an Apache server, while the other one uses an NGINX virtual host. Happy WordPressing! 😁

Virtual machine manager error: no connection driver available for qemu:///system on CentOS 7

I’m trying to find a nice full-sim environment on my dying CentOS 7.X system. That means not a Docker nor an LXC solution, but a full OS with its own IP and so on, as similar to the real thing as possible but running on CentOS 7. I have a clean machine, and I remember QEMU as the tool that does everything I want. So I install it, start the service, and call the GUI, like this:

yum install qemu-kvm qemu-img virt-manager libvirt-daemon
systemctl start libvirtd
virt-manager &

The GUI pops up but it gives me the error above, which I solve in one of the ways described in this post. I had updated and enabled the service without luck, so I went for the missing packages.

yum -y install qemu-kvm qemu-img virt-manager \
libvirt libvirt-python python-virtinst \
libvirt-client virt-install virt-viewer

After that, with no need to reboot, I get my GUI and I can start playing with VMs. I will report my findings, if any 🧐. BONUS: another post about a similar issue.
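
As an extra check from the command line, the connection named in the error can also be tested with virsh; if this lists the domains (even an empty list), the GUI should be happy too:

virsh -c qemu:///system list --all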

ModuleNotFoundError: No module named ‘absl’ while using python

This is a very specific error. We have quite a messed-up Python setup, with multiple versions, network and local modules, a SLURM cluster, and the option to install your own, so it’s quite tricky to track the origin of a module. But it helps that I have two servers with the same kernel and packages: on one of them the program runs, but not on the other. I do have logs, but they are not very clear. The message says

ModuleCmd_Load.c(213):ERROR:105: Unable to locate a modulefile for 'python-3.7.3'
Traceback (most recent call last):
File "/XXX/run_docker.py", line 22, in <module>
from absl import app
ModuleNotFoundError: No module named 'absl'
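
The textbook fix, which didn’t apply here because of the special way the job is launched, would simply be to install the missing package for the interpreter that actually runs the script (a sketch; which python binary that is depends on the module being loaded):

python3 -m pip install --user absl-py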

So what I believe is happening here is that the program tries to load the module, fails, and falls back to the local install. As a user I could install the missing absl-py module, but in this case I can’t run it because of the special process. What to do then? None of the solutions on StackOverflow were suitable, since I don’t know which Python is getting what. But I have a hint: the program seems to be loading Python 3.7.3, which is not the default. So I look for the stored Python modules and I find them on server one. A simple sync

@ server-one ## > rsync -av /usr/local/lib/python3.6/site-packages/ root@server-two:/usr/local/lib/python3.6/site-packages/

and then the “program” works. What is going on? It looks like Python, since it didn’t manage to find the modules for the requested Python version, took the closest ones available (3.6). But who knows? I’m not a Python expert, I’m just passing by 😔.

[ERROR CRI]: container runtime is not running while kubeadm init on CentOS 7.X

More on Kubernetes. When I try to initialize kubeadm, I get the following error:

# > kubeadm init
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
error execution phase preflight:
[preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running:
output: E1019 15:49:03.837827 19294 remote_runtime.go:948]
"Status from runtime service failed"
err="rpc error: code = Unimplemented
desc = unknown service runtime.v1alpha2.RuntimeService"
time="XXXX" level=fatal
msg="getting status of runtime: rpc error:
code = Unimplemented desc = unknown service
runtime.v1alpha2.RuntimeService", error: exit status 1
[preflight] If you know what you are doing,
you can make a check non-fatal with
`--ignore-preflight-errors=...`
To see the stack trace of this error execute
with --v=5 or higher

The solution I found, after a few starts and stops of services and the deletion of a few files, seems to be the following (successful output included):

## > rm /etc/containerd/config.toml
## > systemctl restart containerd
## > kubeadm init
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up
a Kubernetes cluster
[preflight] This might take a minute or two,
depending on the speed of your internet connection
[preflight] You can also perform this action in
beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [
kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local MASTERNODENAME]
and IPs [ONE_IP MY_IP]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for
DNS names [localhost MASTERNODENAME] and
IPs [MY_IP 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for
DNS names [localhost MASTERNODENAME] and
IPs [MY_IP 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.003061 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node MASTER as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node MASTER as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: TOKEN
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster,
you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join MY-IP:6443 \
--token TOKEN \
--discovery-token-ca-cert-hash HASH

That’s it. BTW, here is another post with the solution to the issue. Let’s get going with my Kubernetes…
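
For the record, a less brutal alternative to deleting config.toml might be to regenerate it and make sure the CRI plugin is not disabled, which is the usual culprit behind that preflight error (a sketch, not what I actually ran):

containerd config default > /etc/containerd/config.toml
# check that "cri" does not appear under disabled_plugins in the generated file
systemctl restart containerd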

HOWTO: Install Ubuntu on WSL2 on Windows 11 with GUI support

I followed this ubuntu tutorial step-by-step and I ended up with an Ubuntu shell on my W.11. There’s no need to add anything to it! But let’s see what we can do with it. To start with, we install a stupid GUI app in addition to the given by the example (xeyes & xcalc) to see if they pop up. We run apt-get geany and then geany, and indeed we get the familiar GUI afterwards. SSH to a remote linux client also works, and I can get the GUI of whatever I run on the remote without any hassle. So kudos for WSL2 and Windows 11! This is the type of posts I like to make, the posts of success 😁 😁 😁. Another one bite the dust!