SPHIRE update on CentOS 7

Last January I told you about my experience installing SPHIRE. I got a new version of the package, and this time the installation was smoother. Note that the package provided to me comes from a workshop (workshop? what is that?) so it may not be the final one. I got a pendrive with several folders. Obviously I go to the folder named “Installer”.

From here on I will use eman2.2.linux64.centos6.cluster.numpy18-2017_05_04.sh which, judging by the name, is meant to be installed on a cluster running CentOS 6.

## > ./eman2.2.linux64.centos6.cluster.numpy18-2017_05_04.sh

Welcome to EMAN2 2.2

EMAN2 will now be installed into this location:

- Press ENTER to confirm the location
- Press CTRL-C to abort the installation
- Or specify a different location below

[/root/EMAN2] >>> /usr/local/EMAN2_sphire
ERROR: File or directory already exists: /usr/local/EMAN2_sphire
If you want to update an existing installation, use the -u option.

I understand this is not a real error, since I created the folder on purpose. Anyway, I go for the update option:

## > ./eman2.2.linux64.centos6.cluster.numpy18-2017_05_04.sh -u

Welcome to EMAN2 2.2

EMAN2 will now be installed into this location:

- Press ENTER to confirm the location
- Press CTRL-C to abort the installation
- Or specify a different location below

[/root/EMAN2] >>> /usr/local/EMAN2_sphire
installing: python-2.7.13-1 ...
...bla bla bla...


Important note for Linux Cluster use:
If you are using EMAN2/SPARX/SPHIRE on a cluster, 
the version of OpenMPI we provide may not work with your 
batch queueing system, meaning you would not be able 
to run jobs on more than one node at a time. If this is true:
- run 'utils/uninstall_openmpi.sh' to remove the 
OpenMPI we provided
- run 'utils/install_openmpi.sh' to install 
OpenMPI from source (optional)
- make sure that the correct OpenMPI for your cluster 
is in your path. You should be able to run 'mpicc' 
and get a message like 'gcc: no input files'
 (note that it is critical that OpenMPI be 
compiled with '--disable-dlopen', which may 
or may not be true on your cluster. 
You may need to consult a sysadmin.)
- run 'utils/install_pydusa.sh' to rebuild 
Pydusa using the system installed OpenMPI

installation finished.
Do you wish the installer to prepend the EMAN2 install location
to PATH in your /root/.bashrc ? [yes|no]
[no] >>> no

You may wish to edit your .bashrc or 
prepend the EMAN2 install location:

$ export PATH=/usr/local/EMAN2_sphire/bin:$PATH

Thank you for installing EMAN2!
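As an aside: the installer behaves like the usual Anaconda-constructor self-extracting archive (it already accepted -u above), so if you need to deploy it unattended on several machines it probably also accepts the standard batch flags. This is an assumption on my side, so check with -h first:

```shell
# Hypothetical unattended install; -b (batch, no prompts) and -p (prefix)
# are the standard Anaconda-constructor flags -- verify with -h first.
./eman2.2.linux64.centos6.cluster.numpy18-2017_05_04.sh -h
./eman2.2.linux64.centos6.cluster.numpy18-2017_05_04.sh -b -p /usr/local/EMAN2_sphire
```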

I will recycle the module file from my previous SPHIRE version. After changing it to match the new installation, I launch a test run, without much hope, since we are speaking here about OpenMPI and Python, the two monsters of collective environments. My error reads:

RuntimeError: module compiled against API version 0xa 
but this version of numpy is 0x9
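That error means some compiled extension was built against a newer NumPy C-API (0xa, which corresponds to numpy 1.9–1.13) than the NumPy the interpreter actually imports (0x9, which is numpy 1.8 — consistent with the “numpy18” in the installer name). A quick way to see which python and which NumPy actually win on the path (paths are from my setup, adapt to yours):

```shell
# Which python and which numpy are actually picked up after loading the module?
# (numpy 1.8 exposes C-API 0x9; numpy 1.9-1.13 expose 0xa)
which python
python -c 'import numpy; print(numpy.__version__); print(numpy.__file__)'
```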

Therefore I do need to reinstall its OpenMPI. I do so on a test machine, just in case it screws up my other MPI installations. As suggested:

module load eman2/eman2-sphire
cd /usr/local/EMAN2_sphire/
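From there the rebuild follows the steps from the installer’s own note above; something like this (the script names are taken from the installer output, the exact invocation on your cluster may differ):

```shell
# Rebuild MPI support against the system OpenMPI, following the
# installer's instructions (run from /usr/local/EMAN2_sphire).
bash utils/uninstall_openmpi.sh   # remove the bundled OpenMPI
bash utils/install_openmpi.sh     # optional: build OpenMPI from source
mpicc                             # sanity check: expect "gcc: no input files"
bash utils/install_pydusa.sh      # rebuild Pydusa against that OpenMPI
```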

Note that the uninstall will fail unless you have the eman2-sphire environment loaded. Note also that the installation of Pydusa takes a while. A successful install should give you this output at the end:

# All requested packages already installed.
# packages in environment at /usr/local/EMAN2_sphire:
fftw-mpi 3.3.6 0 local
+ conda inspect linkages pydusa

 libfftw3.so.3 (lib/libfftw3.so.3)
 libfftw3_mpi.so.3 (lib/libfftw3_mpi.so.3)

 libmpi.so.20 (lib/libmpi.so.20)
 libopen-pal.so.20 (lib/libopen-pal.so.20)
 libopen-rte.so.20 (lib/libopen-rte.so.20)

 libc.so.6 (/usr/lib64/libc.so.6)
 libgpfs.so (/usr/lib64/libgpfs.so)
 libm.so.6 (/usr/lib64/libm.so.6)
 libpthread.so.0 (/usr/lib64/libpthread.so.0)
 librt.so.1 (/usr/lib64/librt.so.1)
 libutil.so.1 (/usr/lib64/libutil.so.1)
 linux-vdso.so.1 ()

not found:

How do I know my installation is really independent? I will sync “only” the EMAN2_sphire folder and my module definition to one node. There, I do:

user@node ~ $ > module load mpi/mpi-2.1.0 
user@node ~ $ > mpirun --version
mpirun (Open MPI) 2.1.0

Report bugs to http://www.open-mpi.org/community/help/
user@node ~ $ > module unload mpi/mpi-2.1.0 
user@node ~ $ > module load eman2/eman2-sphire 
user@node ~ $ > mpirun --version
mpirun (Open MPI) 2.0.2

Report bugs to http://www.open-mpi.org/community/help/
user@node ~ $ > which mpirun
user@node ~ $ >
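A slightly stronger check than which/--version is to ask the bundled mpirun where its libraries come from; everything MPI-related should resolve inside the EMAN2_sphire tree, not under /usr/lib64 (the path is from my setup, and I am assuming the bundled mpirun lives under bin/):

```shell
# Confirm the bundled mpirun links against the bundled libraries:
# libmpi.so.20 etc. should resolve under EMAN2_sphire, not system-wide.
ldd /usr/local/EMAN2_sphire/bin/mpirun | grep -i mpi
```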

Then I launch my test run, and no problem so far. So good job, SPHIRE guys 😀.

This version is easier to install, better encapsulated, and can run on a cluster. I’ll be looking forward to the results.


About bitsanddragons

A traveller, an IT professional and a casual writer
This entry was posted in bits, centos, linux, slurm.
