HOWTO: compile ctffind4 on CentOS 8 stream (automatically)

Ugh. Too much work, too much party, too many meetings. And too few dreams and posts. Anyway. Let’s do this. Warning: this is a log entry, nothing else. I will not explain what CTFFind does; I just want to compile it on my CentOS 8 stream machine, automatically. On this GitHub fork you have the procedure. And it works; it’s just not automatic 🙂

I wrap everything in a script like this:

#!/bin/bash
yum -y install sudo epel-release
yum -y install fftw fftw-devel wxGTK3 wxGTK3-devel libtiff libtiff-devel cmake make gcc git which diffutils gcc-c++ libjpeg-turbo-devel
#cannot wget the file: we get it from the repository
cp /XXX/repos/ctffind-4.1.13.tar /opt/local/software/
cd /opt/local/software/ && tar -xvf ctffind-4.1.13.tar
cd ctffind-4.1.13/
./configure --disable-debugmode
sed -i '/#include "pdb.h"/d' src/core/core_headers.h
sed -i '/#include "water.h"/d' src/core/core_headers.h
make
rm ../ctffind-4.1.13.tar # the tarball sits one level up after the cd

For some reason, at least in my case, wget fetches something, but it’s not a tarball. So I downloaded it from the official page onto my network drive (XXX), and I simply copy it from there afterwards. Have a nice Monday everyone, if possible.
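A quick way to catch that failure mode before the tar -xvf step (a sketch; the path is the one from the script above, and a failed wget typically leaves an HTML error page behind):

```shell
# If the "tarball" is really an HTML page, tar -tf fails; check before extracting.
f=/opt/local/software/ctffind-4.1.13.tar
if tar -tf "$f" >/dev/null 2>&1; then
    echo "OK: $f looks like a valid tar archive"
else
    echo "ERROR: $f is not a tarball, grab it manually from the official page" >&2
fi
```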

HOWTO: installing warpem on CentOS 7.X

WARP is one of the few tomography tools that run on Windows. Unfortunately I’m not a cryo-EM tomographer, so I can’t comment on it. Truth is, I’m not monitoring the usage of Windows clients. For Linux, I can, to the desired granularity, thanks to munin plugins. So I was looking forward to the Linux version of WARP. The HOWTO on the GitHub page is quite simple. Let’s run it and see what we get. My comments are in blue. I assume you have a warp folder with the clone of the git repository, so some paths refer to it, and the others to where the conda environment ends up (inside anaconda3) 🙂

warp # > conda env create -f warp_build.yml
Channels:
- nvidia/label/cuda-11.7.0
- pytorch
- conda-forge
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done
# it didn't take so long
warp # > conda activate warp_build
(warp_build) warp # > ./scripts/build-native-unix.sh
CMake Deprecation Warning at CMakeLists.txt:1
(CMAKE_MINIMUM_REQUIRED):
Compatibility with CMake < 3.5 will be removed
from a future version of CMake.

Update the VERSION argument <min> value or use a ...
<max> suffix to tell CMake that the project
does not need compatibility with older versions.


-- The CUDA compiler identification is NVIDIA 11.7.64
-- The CXX compiler identification is GNU 9.5.0
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler:
anaconda3/envs/warp_build/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler:
anaconda3/envs/warp_build/bin/
x86_64-conda-linux-gnu-c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.27.1")
-- Found FFTW:
anaconda3/envs/warp_build/include
found components: FLOAT_THREADS_LIB
-- Found TIFF:
anaconda3/envs/warp_build/lib/libtiff.so
(found version "4.5.0")
-- Configuring done (13.5s)
-- Generating done (0.2s)
-- Build files have been written to:
warp/NativeAcceleration/build
[ 0%] Building CUDA object CMakeFiles/NativeAcceleration.dir/
gtom/src/BinaryManipulation/DistanceMap.cu.o

[ 1%] Building CUDA object CMakeFiles/NativeAcceleration.dir/
gtom/src/CTF/CommonPSF.cu.o
...
# quite some warnings, but the build goes fine...
...
[100%] Linking CXX shared library lib/libNativeAcceleration.so
[100%] Built target NativeAcceleration
CMake Warning (dev) in CMakeLists.txt:
No project() command is present. The top-level CMakeLists.txt
This warning is for project developers. Use -Wno-dev to suppress it.
-- The C compiler identification is GNU 9.5.0
-- The CXX compiler identification is GNU 9.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler:
warp_build/bin/x86_64-conda-linux-gnu-cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler:
anaconda3/envs/warp_build/
bin/x86_64-conda-linux-gnu-c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
...
# a new compilation seems to start
...
-- Found Torch: anaconda3/envs/warp_build/
lib/python3.11/site-packages/torch/lib/libtorch.so
-- Configuring done (16.4s)
-- Generating done (0.1s)
-- Build files have been written to:
warp/LibTorchSharp/build
[ 5%] Building CXX object LibTorchSharp/CMakeFiles/LibTorchSharp.dir/
MultiGPUModule.cpp.o

[ 20%] Building CXX object LibTorchSharp/CMakeFiles/LibTorchSharp.dir/
C2DNet.cpp.o

[100%] Linking CXX shared library libLibTorchSharp.so
[100%] Built target LibTorchSharp
(warp_build) warp # > ./scripts/publish-unix.sh
MSBuild version 17.9.8+b34f75857 for .NET
Determining projects to restore...
Restored warp/Noise2Map/Noise2Map.csproj (in 2.8 sec).
Restored warp/TorchSharp/TorchSharp.csproj (in 2.79 sec).
Restored warp/WarpLib/WarpLib.csproj (in 2.8 sec).
TorchSharp -> /opt/local/software/warp/Release/TorchSharp.dll
...
# quite some MSBuilds and warnings
...
MSBuild version 17.9.8+b34f75857 for .NET
Determining projects to restore...
Restored warp/MCore/MCore.csproj (in 458 ms).
2 of 3 projects are up-to-date for restore.
TorchSharp -> warp/Release/TorchSharp.dll
WarpLib -> warp/Release/WarpLib.dll
MCore -> warp/Release/linux-x64/MCore.dll
MCore -> warp/Release/linux-x64/publish/

After the compilation/installation we indeed have what looks like binaries in warp/Release/linux-x64/publish. But I can’t find the familiar GUI of Warp or M. I do, however, manage to run the binaries; at least they ask me for parameters when I call them. I can only guess that this is a command-line version of the famous program. Maybe it’s the moment to call the expert 🙂
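To use those command-line binaries from anywhere, adding the publish folder to the PATH should be enough (a sketch; the clone location is my own, and MCore is one of the binaries the build log above produced):

```shell
# Make the published Warp CLI tools resolvable from any directory.
export PATH=/opt/local/software/warp/Release/linux-x64/publish:$PATH
command -v MCore || echo "MCore not on PATH - check the publish folder"
```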

Installing crYOLO on CentOS 8 stream

And there’s no end to the installation stream. I keep installing and testing, all by myself, since it looks like people don’t manage to share the tools they compile. This is especially true for conda environments. Everyone is allowed to make their own, but no one knows how to export them. Maybe because exporting a conda environment is a dangerous game, and not always a working one. Like in this case. I had access to a working cryolo conda environment that I wanted to use as a template for the general, new installations. So I exported it:

(cryolo) [user@machine ~]# conda env export > cryolo.yml

Then I log in as root and import the yaml. You guessed right, it didn’t work. Like this:

[root@machine ~]# conda env create -f cryolo.yml 

The error reads:

ERROR: Could not find a version that satisfies the requirement
nvidia-cublas-cu116==11.9.2.110 (from versions: 0.0.1.dev5)

ERROR: No matching distribution found for
nvidia-cublas-cu116==11.9.2.110


failed
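In hindsight, the failure smells like the exported yaml pinning exact builds and pip packages that only existed on the old machine. A more portable export is possible (a sketch; both flags are standard conda options):

```shell
# --from-history keeps only the packages explicitly requested,
# --no-builds drops platform-specific build strings; stripping the
# prefix: line keeps the yaml machine-independent.
conda env export --from-history > cryolo-minimal.yml
conda env export --no-builds | grep -v '^prefix:' > cryolo-portable.yml
```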

I tried all my weapons to work around the error. For example, we first create the env, forcing it to use another python, then we update it with the yaml manifest.

[root@machine ~]# conda create --name cryolo python=3.11
(cryolo) [root@machine ~]# conda env update --file cryolo.yml

Unfortunately it failed with the same error no matter which python I gave it. The manual package install didn’t work either:

(cryolo) [root@machine ~]# conda install -c anaconda nvidia-cublas-cu116

So I decided to create the new cryolo from scratch, using the official howto. Spoiler alert: it worked! Of course we remove the old cryolos first. But this is my story:

[root@machine ~]# conda create -n cryolo -c conda-forge -c anaconda pyqt=5 python=3 numpy=1.18.5 libtiff wxPython=4.1.1 adwaita-icon-theme 'setuptools<66'
[root@machine ~]# conda activate cryolo
(cryolo) [root@machine ~]# pip install nvidia-pyindex
Successfully installed nvidia-pyindex-1.0.9
WARNING: Running pip as the 'root' user can result
in broken permissions and conflicting behaviour
with the system package manager. It is recommended
to use a virtual environment instead:
https://pip.pypa.io/warnings/venv

(cryolo) [root@machine ~]# pip install 'cryolo[c11]'
... this goes on for a while...
Successfully installed ... "all the packages" ...
WARNING: Running pip as the 'root' user can result
in broken permissions and conflicting behaviour
with the system package manager. It is recommended
to use a virtual environment instead:
https://pip.pypa.io/warnings/venv

After the last command, I’m able to log in as a user, conda activate cryolo, and call the GUI. Beyond that test, I will not go. But it looks great. Success!
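For the users, a tiny wrapper saves them the conda incantation (a sketch; the anaconda path and wrapper name are assumptions, cryolo_gui.py is the GUI entry point from the crYOLO docs):

```shell
# Hypothetical launcher: activate the env and start the GUI in one go.
cat > /usr/local/bin/cryolo-gui <<'EOF'
#!/bin/bash
source /opt/anaconda3/etc/profile.d/conda.sh
conda activate cryolo
exec cryolo_gui.py "$@"
EOF
chmod +x /usr/local/bin/cryolo-gui
```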

HOWTO: install relion on CentOS 8 stream AMD

This is very specific stuff, too. The official relion installation instructions are very abstract and I couldn’t make them work out of the box. I found this blog that helped me, but it’s basically old information, since it refers to relion 3.1 and CUDA 10 and 9. We start by making the folders for the git code and the binaries, then make and install. In blue I leave the code, the cropped output is in black, and my comments over the output go in red. I think you get it.

# mkdir /opt/local/software
# cd /opt/local/software
# git clone https://github.com/3dem/relion
# cd relion/
# module load openmpi-3.1.6
# mkdir build
# cd build
# cmake -Wno-dev -DMPI_C_COMPILER=mpicc \
-DMPI_CXX_COMPILER=mpicc \
-DCMAKE_INSTALL_PREFIX=/opt/local/software/relion/ \
-DCUDA_ARCH=75 -DCMAKE_C_COMPILER=gcc \
-DCMAKE_CXX_COMPILER=gcc -DCMAKE_C_FLAGS=-lm \
-DCMAKE_CXX_FLAGS=-lm ..


... some output here...


-- Found FLTK: /usr/lib64/libfltk_images.so;/usr/lib64/libfltk_forms.so;/usr/lib64/libfltk.so
-- X11 and FLTK were found
-- FLTK_LIBRARIES: /usr/lib64/libfltk_images.so;/usr/lib64/libfltk_forms.so;/usr/lib64/libfltk.so;/usr/lib64/libSM.so;/usr/lib64/libICE.so;/usr/lib64/libX11.so;/usr/lib64/libm.so
-- Found FFTW
-- FFTW_PATH: /usr/include
-- FFTW_INCLUDES: /usr/include
-- FFTW_LIBRARIES: /usr/lib64/libfftw3f.so;/usr/lib64/libfftw3.so
-- Looking for sincos
-- Looking for sincos - not found
-- Looking for __sincos
-- Looking for __sincos - not found
-- Found TIFF: /usr/lib64/libtiff.so (found version "4.0.9")
-- Found ZLIB: /usr/lib64/libz.so (found version "1.2.11")
-- Found PNG: /usr/lib64/libpng.so (found version "1.6.34")
-- Checking class ranker model file...
-- Found local copy of class ranker model
BUILD_SHARED_LIBS = OFF
-- Building static libs (larger build size and binaries)
Running apps/CMakeLists.txt...
-- CMAKE_BINARY_DIR:/opt/local/repos/relion/build
-- Git commit ID: e5c4835894ea7db4ad4f5b0f4861b33269dbcc77
PNG FOUND
-- Could NOT find JPEG (missing: JPEG_LIBRARY JPEG_INCLUDE_DIR)
JPEG NOT FOUND
-- Configuring done (1.9s)
-- Generating done (0.3s)
-- Build files have been written to: /opt/local/repos/relion/build
# make -j 8
[ 0%] Building NVCC (Device) object
[ 0%] Built target class_ranker_model_file

... a lot of building and linking up ...

[100%] Built target manualpick
[100%] Linking CXX executable ../../bin/relion_tomo_taper
[100%] Built target taper
# make install
[ 0%] Built target class_ranker_model_file
[ 0%] Built target copy_scripts
[ 1%] Built target relion_jaz_gpu_util

... more building go here ...

[100%] Built target template_pick
[100%] Built target tomo_ctf
Install the project...
-- Install configuration: "Release"
-- Up-to-date: /usr/local/bin
-- Installing: /usr/local/bin/relion_tomo_convert_projections
-- Installing: /usr/local/bin/relion_tomo_delete_blobs
-- Installing: /usr/local/bin/relion_tomo_find_lattice
-- Installing: /usr/local/bin/relion_tomo_fit_bfactors

... all the install on the install_manifest.txt ...

-- Installing: /usr/local/bin/relion_manualpick
-- Installing: /usr/local/bin/relion_tomo_taper
-- Installing: /usr/local/bin/relion_align_symmetry
-- Set runtime path of "/usr/local/bin/relion_align_symmetry" to "/usr/local/lib:/usr/local/cuda/lib64:/opt/openmpi-3.1.6/lib"
-- Installing: /usr/local/bin/relion_autopick

... more set runtime path, and more installing...

-- Installing: /usr/local/bin/relion_tomo_template_pick
-- Set runtime path of "/usr/local/bin/relion_tomo_template_pick" to "/usr/local/lib:/usr/local/cuda/lib64:/opt/openmpi-3.1.6/lib"
-- Installing: /usr/local/bin/relion_tomo_tomo_ctf
-- Set runtime path of "/usr/local/bin/relion_tomo_tomo_ctf" to "/usr/local/lib:/usr/local/cuda/lib64:/opt/openmpi-3.1.6/lib"

Well, just to let you know, at least in my case, the relion GUI pops up in a new shell after the last line of the make install. It’s good when things work, don’t you think? It feels good 🙂 🙂

Relion interface image taken from here.
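For future sessions, a user shell needs roughly this before launching (a sketch; the openmpi module matches the one loaded at build time, and /usr/local/bin is normally already on PATH):

```shell
# Only the MPI runtime needs loading; the binaries installed system-wide.
module load openmpi-3.1.6
command -v relion    # should answer /usr/local/bin/relion
relion &             # the GUI needs an X-capable session
```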

Installing IsoNet version 0.2 on CentOS 7.X with CUDA 12 and python 3.9

Yes, again boring coding things. But it’s useful for me, sorry. The GitHub page is here. The installation section is kind of meager. They have requirements, so let’s focus on them. I check my machine first:

(base) user@computer ~ $ > conda -V
conda 23.7.4
(base) user@computer ~ $ > python -V
Python 3.9.18
(base) user@computer ~ $ > nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0

There you go. So it should work. We git clone the directory (standard procedure), go in, and follow the instructions. But the first step already fails. Like this:

(base) user@computer ~ $ > pip install tensorflow-gpu==2.12.0
Defaulting to user installation because
normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple,
https://pypi.ngc.nvidia.com
Collecting tensorflow-gpu==2.12.0
Downloading tensorflow-gpu-2.12.0.tar.gz (2.6 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [39 lines of output]
Traceback (most recent call last):

I try with different versions and google a little. Here we have a list of the tensorflows available. It’s not clear one is available for such a modern CUDA. But on this link they say “just try directly”, so I try. And it works. Like this:

(base) user@computer ~ $ >pip install tensorflow
Defaulting to user installation because
normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple,
https://pypi.ngc.nvidia.com
Collecting tensorflow
Obtaining dependency information for tensorflow from
https://files.pythonhosted.org/XXX/tensorflow-2.14.YYY

... here the download of needed packages...
Installing collected packages: libclang, flatbuffers,
zipp, wrapt, typing-extensions, termcolor,
tensorflow-io-gcs-filesystem, tensorflow-estimator,
tensorboard-data-server, pyasn1, protobuf, oauthlib,
numpy, MarkupSafe, keras, grpcio, google-pasta, gast,
cachetools, astunparse, absl-py, werkzeug, rsa,
requests-oauthlib, pyasn1-modules, opt-einsum, ml-dtypes,
importlib-metadata, h5py, markdown, google-auth,
google-auth-oauthlib, tensorboard, tensorflow

Attempting uninstall: numpy
Found existing installation: numpy 1.22.4
Uninstalling numpy-1.22.4:
Successfully uninstalled numpy-1.22.4
Successfully installed MarkupSafe-2.1.3
absl-py-2.0.0 astunparse-1.6.3 cachetools-5.3.2
flatbuffers-23.5.26 gast-0.5.4 google-auth-2.23.4
google-auth-oauthlib-1.0.0 google-pasta-0.2.0
grpcio-1.59.2 h5py-3.10.0 importlib-metadata-6.8.0
keras-2.14.0 libclang-16.0.6 markdown-3.5.1 ml-dtypes-0.2.0
numpy-1.26.2 oauthlib-3.2.2 opt-einsum-3.3.0 protobuf-4.25.0
pyasn1-0.5.0 pyasn1-modules-0.3.0 requests-oauthlib-1.3.1
rsa-4.9 tensorboard-2.14.1 tensorboard-data-server-0.7.2
tensorflow-2.14.0 tensorflow-estimator-2.14.0
tensorflow-io-gcs-filesystem-0.34.0 termcolor-2.3.0
typing-extensions-4.8.0 werkzeug-3.0.1
wrapt-1.14.1 zipp-3.17.0
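Before moving on, it’s worth checking that this pip tensorflow actually sees the GPU (tf.config.list_physical_devices is the standard API for that):

```shell
# Print the version and the visible GPUs; an empty list means
# CPU-only, which for IsoNet would be painfully slow.
python - <<'EOF'
import tensorflow as tf
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))
EOF
```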

Then we go to the IsoNet folder (that we got through git clone) and install the dependencies.

IsoNet $ > pip install -r requirements.txt
Defaulting to user installation because normal
site-packages is not writeable
Looking in indexes: https://pypi.org/simple,
https://pypi.ngc.nvidia.com
...here the downloads...
Successfully built scikit-image fire
Installing collected packages: PyQt5-Qt5, tifffile,
scipy, PyWavelets, PyQt5-sip, pyparsing, pillow,
networkx, mrcfile, kiwisolver, importlib-resources,
fonttools, fire, cycler, contourpy, PyQt5, matplotlib,
imageio, scikit-image

Successfully installed PyQt5-5.15.10 PyQt5-Qt5-5.15.2
PyQt5-sip-12.13.0 PyWavelets-1.4.1 contourpy-1.2.0
cycler-0.12.1 fire-0.5.0 fonttools-4.44.0 imageio-2.32.0
importlib-resources-6.1.1 kiwisolver-1.4.5 matplotlib-3.8.1
mrcfile-1.4.3 networkx-3.2.1 pillow-10.0.1 pyparsing-3.1.1
scikit-image-0.17.2 scipy-1.11.3 tifffile-2023.9.26

Well, no errors either. Which is good. We open a new shell to test. This is my output:

(base) user@computer ~ $ echo $PYTHONPATH

(base) user@computer ~ $ cd IsoNet/
(base) user@computer ~ $ source source-env.sh
| ENV: PYTHONPATH='/home/user:'
| ENV: PATH='/home/user/IsoNet/bin:XXXXX'
(base) user@computer ~ $ echo $PYTHONPATH
/home/user:
(base) user@computer ~ $ isonet.py check
IsoNet --version 0.2 installed

So I guess we are ready to go! NOTE: backposted because… because… because I feel like it’s an old item 🙂
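Since source-env.sh only patches the current shell, making it stick for a user is one line (a sketch; the clone path under $HOME is an assumption):

```shell
# Append to the user's bashrc so every new shell gets the IsoNet env.
echo 'source $HOME/IsoNet/source-env.sh' >> ~/.bashrc
```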

Installing dynamo2Relion on CentOS 7.X

Sorry, but I don’t want to explain what dynamo does, nor what relion does. This is about installing a tool that transforms the output of one of them into something readable by the other. Here you have the code. We have modules, so the log refers to the latest one available, with the tool itself coming from pip. So here it comes:

$ module load python-3.9.7
$ pip install dynamo2relion
Defaulting to user installation because
normal site-packages is not writeable
Collecting dynamo2relion
Downloading dynamo2relion-0.0.5-py3-none-any.whl (5.1 kB)
Collecting starfile
Downloading starfile-0.4.11-py3-none-any.whl (27 kB)
Collecting dynamotable
Downloading dynamotable-0.2.4-py3-none-any.whl (8.1 kB)
Collecting pandas
Downloading pandas-1.4.2-cp39-cp39-
manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
|XXXXXXXXXXXXXXXXXX| 11.7 MB 8.8 MB/s
Collecting click
Downloading click-8.1.3-py3-none-any.whl (96 kB)
|XXXXXXXXXXXXXXXXXX| 96 kB 5.4 MB/s
Collecting eulerangles
Downloading eulerangles-1.0.2-py3-none-any.whl (11 kB)
Collecting numpy>=1.18.5
Downloading numpy-1.22.4-cp39-cp39-
manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
|XXXXXXXXXXXXXXXXX| 16.8 MB 18.1 MB/s
Collecting pytz>=2020.1
Downloading pytz-2022.1-py2.py3-none-any.whl (503 kB)
|XXXXXXXXXXXXXXXX| 503 kB 37.6 MB/s
Requirement already satisfied: python-dateutil>=2.8.1 in
./.local/lib/python3.9/site-packages
(from pandas->dynamo2relion) (2.8.2)
Requirement already satisfied: six>=1.5
in ./.local/lib/python3.9/site-packages
(from python-dateutil>=2.8.1->pandas->dynamo2relion)
(1.16.0)
Installing collected packages: pytz, numpy, pandas,
starfile, eulerangles, dynamotable, click, dynamo2relion
Successfully installed click-8.1.3
dynamo2relion-0.0.5 dynamotable-0.2.4
eulerangles-1.0.2 numpy-1.22.4
pandas-1.4.2 pytz-2022.1 starfile-0.4.11

At the end, you may get a warning like this:

 WARNING: The script dynamo2relion is installed in '/home/username/.local/bin' which is not on PATH. 

Just do export PATH='/home/username/.local/bin':$PATH, or don’t. 😉
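And to make that PATH change survive new logins, the usual trick (username hypothetical, as above):

```shell
# One-off for the current shell:
export PATH="$HOME/.local/bin:$PATH"
# Persist it for future logins:
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
# dynamo2relion should now resolve:
command -v dynamo2relion || echo "still not found - check the pip install"
```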