COOT errors: error while loading shared libraries: libsrfftw.so.2

The bird in question, grabbing a molecule

Today, my dear children, we are going to install COOT, a Crystallographic Object-Oriented Toolkit. I don’t really know if you are interested in this, but it’s a program for protein modelling. Basically it lets you see and edit your structure in 3D, also with stereo glasses. Please refer to the official page to know more.

Where do I come in? We have had coot for quite some time, but I’m not the one keeping an eye on software versions, features, bugs, and so on, since you need to be a user to do that. My user came complaining about missing stereo after the last update to coot 0.9.2, done by a colleague, so I was asked either to fix that or to revert to a previous install. I went for the second option. I got some of the precompiled binaries from here. Then, as root, I unzipped the chosen old version, moved it to my software folder, logged in as a user, and ran the binary directly. I was expecting it not to work out of the box, and it didn’t. This was my output:

$ > /usr/local/coot/bin/coot
CLIBD_MON not set using COOT_REFMAC_LIB_DIR
/usr/local/coot/libexec/coot-bin: 
error while loading shared libraries: libsrfftw.so.2: 
cannot open shared object file: No such file or directory
guile: error while loading shared libraries: libguile.so.17: 
cannot open shared object file: No such file or directory
failed to launch the crash catcher   

Googling helps, but you need to understand what you find. In my case, the first message means CLIBD_MON is not set: Coot typically picks up the standard CCP4 library, but allows you to override that for special cases, or if you are using Phenix. Fortunately, I already have both, so I don’t need to reinstall them. If I load the default ccp4 module, the CLIBD_MON error is gone, but the library error and the guile error stay. Time to ask coot to tell me more. We can do it like this:

$ > /usr/local/coot/bin/coot --check-libs | grep "not"
     libsrfftw.so.2 => not found
     libsfftw.so.2 => not found
     libsrfftw.so.2 => not found
     libsfftw.so.2 => not found

I can confirm using ldconfig -p | grep fftw that we have some fast Fourier transform libraries, but do we have them all? Apparently not. I run yum install fftw2 -y to install the missing set and try again. The error is gone. I get my coot!
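The check coot does with --check-libs is essentially an ldd run, so you can do the same by hand on any binary. A rough sketch (I use /bin/ls as a stand-in here, since your coot-bin path may differ):

```shell
# Rough sketch: list unresolved shared libraries of a binary by hand.
# /bin/ls is a stand-in; point BIN at coot-bin for the real check.
BIN=/bin/ls
missing=$(ldd "$BIN" | grep "not found" | awk '{print $1}')
if [ -z "$missing" ]; then
  echo "all libraries resolved for $BIN"
else
  echo "missing: $missing"
fi
```

If something shows up as missing, ldconfig -p tells you whether a package simply needs installing or the library is there but not on the loader path.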

Install android things on a raspberry pi

the official logo

I don’t know why I started this project. Basically, I have a few raspberry pies (can I write it like that?) with touchscreens that I wanted to play with before giving them away. The default Raspberry Pi OS is quite neat, but not suitable for small touchscreens. I recently became fascinated by the IoT (do I need to explain what that means?) so I decided to create my own IoT center, maybe with some additional functions, like control over what is playing on my bluetooth speakers. A very ambitious developer I am, as Yoda could say.

I found this tutorial on the android developer page. In short, I flash a system image onto an SD card, then install an app on it. This is my experience.

As most of you know, I’m using CentOS 7 and 8. I start by signing in to my Google account and accepting the licensing agreement and terms of service, then unzip the downloaded archive (unzip android-things-setup-utility.zip) on my CentOS 8 computer with a card reader. The execution goes like this:

./android-things-setup-utility-linux

Android Things Setup Utility (version 1.0.21)
============================
This tool will help you install Android Things on your board
and set up Wi-Fi.

What do you want to do?
1 - Install Android Things and optionally set up Wi-Fi
2 - Set up Wi-Fi on an existing Android Things device
1
What hardware are you using?
1 - Raspberry Pi 3
2 - NXP Pico i.MX7D
1
You chose Raspberry Pi 3.

Setting up required tools...
Fetching additional configuration...
Downloading platform tools...
5.44 MB/5.44 MB
Unzipping platform tools...
Finished setting up required tools.

Raspberry Pi 3
Do you want to use the default image or a custom image?
1 - Default image: Used for development purposes.
No access to the Android
Things Console features such as metrics, crash reports,
and OTA updates.
2 - Custom image: Upload your custom image for
full device development and management with
all Android Things Console features.
1
Downloading Android Things image...
342 MB/342 MB
Unzipping image...

Downloading Etcher-cli, a tool to flash your SD card...
22.4 MB/22.4 MB
Unzipping Etcher-cli...

Plug the SD card into your computer. Press [Enter] when ready

Running Etcher-cli...
? Select drive /dev/sdb (7.9 GB) - SD/MMC CRW
? This will erase the selected drive. Are you sure? Yes
Flashing [========================] 100% eta 0s
Validating [========================] 100% eta 0s
iot_rpi3.img was successfully written to SD/MMC CRW (/dev/sdb)
Checksum: 2eba2225

If you have successfully installed Android Things on your SD card,
you can now put the SD card into the Raspberry Pi and power it up.
Otherwise you can abort and run the tool again.

Would you like to set up Wi-Fi on this device? (y/n) n

Easy as a pie! (you got the joke?) I get the card, put it in my pi, connect the network cable (I don’t set up the Wi-Fi) and simply boot it up. What appears on my touchscreen is a very simple series of menus. Like this (stolen from StackOverflow):

The androidthings on a pi.

The screen shows the IP given by my network. I now try to connect to the given IP through adb, just to find out I don’t have that command! We need, of course, to install the Android Studio IDE. Just google it, and you will find it. The installation is described in the Install-Linux-tar.txt file, but basically you run ./studio.sh (inside bin) and follow the instructions. Once we are done (or if we have installed it already) we can get the SDK through the GUI by going to the menu Tools –> SDK Manager (the icon is a package) or through the command line on ~/Android/Sdk/tools/bin ## > ./sdkmanager. In my case I used the first option. Select the second tab, SDK Tools, and select Android SDK Command-line Tools and related, if you don’t have them. When the installation is done, you should have adb in ~/Android/Sdk/platform-tools. I go there and finally run it like this:

 ~/Android/Sdk/platform-tools ## > ./adb connect RPI-IP
* daemon not running; starting now at tcp:5037
* daemon started successfully
connected to RPI-IP:5555
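By the way, instead of cd-ing into platform-tools every time, you can put it on your PATH. A tiny sketch (the Sdk location is the one from my install above; adjust it if yours differs):

```shell
# Add adb's folder to the PATH; path taken from the install above.
SDK_TOOLS="$HOME/Android/Sdk/platform-tools"
export PATH="$SDK_TOOLS:$PATH"
# Confirm the folder is now on the PATH.
echo "$PATH" | grep -q "platform-tools" && echo "platform-tools on PATH"
```

Put the export line in your ~/.bashrc and adb connect works from any directory.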

Obviously, RPI-IP is the IP of my raspberry. Time to download an apk (android package) and install it on our pi. There are several apk downloaders, and I can’t recommend one single way: I used a chrome app downloader. Here you have how to install Spotify. I did it for Netflix in a similar way. My final commands look like this:

./adb connect RPI-IP
./adb shell am start -n com.netflix.mediaclient/.ui.launch.
NetflixComLaunchActivity
Starting: Intent { cmp=com.netflix.mediaclient/
.ui.launch.NetflixComLaunchActivity }

After these two commands, I see the splash screen of Netflix on my touch screen. Unfortunately the apk doesn’t seem to work on this particular configuration, but it’s a good beginning. We’ll see how far I go with my project… 😛

Intel Cluster Check install and test

I got interested in the tool after reading the fantastic posts of kittycool, Using Intel Cluster Checker, parts 1, 2 and 3. Was the Intel Cluster Checker (clck) the tool I was always looking for? Maybe. I don’t know yet, but my first tests after the install were very promising. This is my experience.

Installation: here is the official Intel documentation. We have a GPFS cluster with a shared scratch area, “scratch”. I give that as CLCK_SHARED_TEMP_DIR. I do the install on all the GPFS cluster nodes.

root@node01 > yum-config-manager --add-repo https://yum.repos.intel.com/clck/2019/setup/intel-clck-2019.repo
root@node01 > rpm --import https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
root@node01 > yum install -y intel-clck-2019.9-056
root@node01 > export CLCK_SHARED_TEMP_DIR=/scratch/clustercheck

Usage: I create my nodefile with vi. It’s just a list of nodes to be checked by clck. I didn’t find the scripts kittycool was using, so I run the tools directly from the yum-installed package.

root@node01 >/opt/intel/clck_latest/bin/intel64/clck-collect -f nodefile -F node_process_status
root@node01 >/opt/intel/clck_latest/bin/intel64/clck -f nodefile

It’s pretty fast, and it prints a short report on screen and a long report to a file. For the screen report, I’m going to refer to the original Using Intel Cluster Checker posts from the already mentioned kittycool. We have several clusters, one of them very heterogeneous in chips and resources, so I got a pretty long, detailed report that is going to be very helpful to achieve a uniform firmware and environment. What do I want to retrieve from clck? I will try to extract the relevant info about the outliers and send it to my alarm management system. Or at least, I will schedule it and show the logs on my web server 😉 So thanks kittycool for your post, once more! 🙂
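For that extraction, something as dumb as grep may be enough as a first attempt. A sketch, assuming the long report is a plain text file and that the interesting lines carry words like "outlier" or "fail" (both are my assumptions; check the wording in your own report):

```shell
# Hypothetical filter over a clck report file; the keywords are my guess,
# not an official clck format. Pass the report path as the first argument.
REPORT=${1:-clck_results.log}
grep -iE "outlier|fail" "$REPORT" || echo "no outliers found in $REPORT"
```

The matching lines (or the all-clear message) can then be mailed or pushed to the alarm management system by a cron job.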

Notes on HoloLens 2 for science

We have 3D monitors with shutter glasses to do the so-called structural analysis in native 3D. But this type of technology is on its way to die. It’s difficult to buy 3D monitors and 3D Vision toolkits with emitters, so we are playing around to find a solution that works for several people at the same time. For bioscience, there is already HTC Vive software that permits visualization of molecules in VR. Unfortunately, we can’t afford to have a room blocked just for VR, so HoloLens looked like a more reasonable solution. I got an ad from our vendor announcing the release of HoloLens 2, so I ran quickly to check whether that amazing technology (yes, I have tested a working one) was ready to be pipelined easily for our specific “customer” needs.

It turned out it is not ready. The reason is, it is not yet interactive enough. It sounds funny, but my average user does not want only to see and zoom the 3D object together with others; he or she wants to modify it, apply masks, change colors, delete parts. Like in some crazy type of Paint3D. And HoloLens 2 may have that, but it’s not able to import a PDB map and its associated masks.

But back to my experience with it. I followed this step-by-step HoloLens 1 tutorial from Medium, and despite the look of the menus, it still works. I had to install, in addition to Unity, the Windows 10 SDK and Visual Studio. Even so, at the building phase I got the error Selected Visual Studio is missing required components and may not be able to build. The whole compilation part was annoying, and even though the result (a cube) was satisfactory, the whole administrator experience was not. There are too many pieces. Maybe with things like the MixedRealityToolkit-Unity, the usage will become closer to what we want. Maybe not. I think the Interactable object Microsoft document will give you a clue about the current capabilities of the HoloLens 2. Sorry, but science is more interactive than pressing a button. See you on your next iteration, maybe.

Notes on HTML CSS Selectors

I tend to code everything myself. Of course google helps me a lot, but in principle, I’m alone on this. I’m the Software Architect, the Software Developer, the Engineer, and the tester. So I have my snippets, which I tend to modify for a specific usage. One of them is a php sortable table working with a sortable JavaScript that I found. But at the end, you have a B/W table. Ugly, lifeless, boring. To give it some colors I use a CSS selector. The concept is this: whatever element you have, you can select it according to some rule. The CSS generic element is called a child. Here you have a very nice example of what you can do selecting bullet elements. On that example, the child is a bullet. On this other example, the child is a paragraph. For me, they are table rows. This is what I wrote:

tr:nth-child(n+3):nth-child(-n+22) 
{ background-color: AliceBlue; }
tr:nth-child(n+23):nth-child(-n+40)
{ background-color: AntiqueWhite; }

What do I want to show here? That the nth-child selectors can be chained to select ranges. In this case, rows from the 3rd to the 22nd will be coloured one way, and from the 23rd to the 40th in another. Stackoverflow, as usual, had the answer to my problem. Thanks, Stackoverflow, you’re my only hope 🙂
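The arithmetic behind the chaining can be checked with a quick shell loop: nth-child(n+3) keeps children with index 3 or higher, nth-child(-n+22) keeps index 22 or lower, so the intersection is rows 3 through 22.

```shell
# Simulate tr:nth-child(n+3):nth-child(-n+22) over 25 rows:
# only rows satisfying both conditions get the colour.
for row in $(seq 1 25); do
  if [ "$row" -ge 3 ] && [ "$row" -le 22 ]; then
    echo "row $row -> AliceBlue"
  fi
done
```

Twenty rows come out coloured, exactly the 3..22 range the selector describes.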

PS: no, my muse is not yet back. Let’s hope she recovers and I manage to write something interesting again. Interesting as “not techno blabber”. But I’m reading you 🙂

A nice flight to a parallel universe

This is a dream. We were going on holidays to Madrid. It was a tough decision. My memories of the city are bittersweet, speaking of air pollution and unfriendly locals. But then there’s the Prado Museum, the Royal Palace and our doorway to the Big South.

Landing was uneventful. The first difference from my former image of the city appeared already at the airport, where I observed more people of colour than I remembered. But then, that was a decade ago. Immigration from South America to Spain has been rampant for a while, and Madrid is supposed to be the European hub for it. We take a taxi to the centre; the driver, curious about us, speaks a perfect but colourful English, and doesn’t try to cheat us. When asked about a child-friendly restaurant around the hotel, he recommends a Mexican place that I, of course, don’t know.

Plaza Mayor has some big, cone-shaped, hollow monument in the center instead of the equestrian Phillip the 3rd. I’m of course not following the news from the capital city as much as I could, but I feel like that is something different than a publicity stunt. When we get closer – it’s on the way to the restaurant – my children run under the metallic treetop, and I follow them. Indeed it’s not some big Christmas ad – I’m aware it was done before, the covering of a statue by a Christmas tree – but something more permanent. I examine the badly illuminated inner carvings with astonishment. They depict what I believe is the conquest of America. Here and there, you find Mayan and Aztec imagery, some of them, I’m going to say, more disturbing than what I would place in the center of the city. But OK, conciliation is important and so on.

The Mexican restaurant is indeed gorgeous. They claim to be centuries old (is that possible, I wonder) and we order some of the typical dishes one can expect in such a place: ceviche, tacos, and similar. We get the best table since the place is empty – we are the only clients – and I can’t understand the reason, so I ask the staff. The waiter, very friendly, without further explanations, switches on the closest TV on the wall and points to the image there – it’s not a football match, as I was expecting, but a rocket launch. The waiter says “It’s Obama. Finally he’s going to the Moon.” I nod, and I watch the successful launch while the anchorman reminds us of Obama’s achievements.

Once in our room, I quickly go online to check what’s going on. First I’m very surprised to find out that we are in a Republic, not a Kingdom. Then it turns out that Spain ends at Despenaperros – the mountain pass to Andalusia – meaning our destination is, de facto, in another country. My dream gets personal from here on, so I will leave the rest to your imagination. Unfortunately, I woke up at one point. So goodbye Republic of Spain, hello reality.

GPFS how to : create a disk pool with 2 disks on 2 servers

Let’s start with the official documentation: here is how to create a GPFS Network Shared Disk, and here how to create the GPFS file system and pool. Now my story. Two of the nodes of our GPFS cluster have associated storage. They were used as NFS disks before, accessible via samba, but this is not relevant now. We want to merge them into one single GPFS storage because we know we can, since that is basically how a GPFS storage works.

Preliminaries: we have the GPFS nodes, called nfs01 and nfs02, accessible through ssh. They are literally the same model of server, a DELL machine with storage attached through a RAID controller. We have formatted the storage disks as a RAID 50 system, available (on both nodes) at the location /dev/sda.

We will need a StanzaFile. There we define what kind of pool we want. We chose the system pool, since we don’t have metadata servers and similar, and it comes with a small block size. Under our conditions, the StanzaFile looks like this:

%pool: pool=system blockSize=256K \
layoutMap=cluster allowWriteAffinity=no
# nfs01 and nfs02 network shared disks
%nsd: nsd=nsd_nfs01 device=/dev/sda servers=nfs01 \
usage=dataAndMetadata failureGroup=-1 pool=system
%nsd: nsd=nsd_nfs02 device=/dev/sda servers=nfs02 \
usage=dataAndMetadata failureGroup=-1 pool=system

We save the file in the current directory. To add both disks as a share called data_scratch, mounted on /data_scratch, we run the following commands on nfs01:

root@nfs01 ~ ## > mmcrnsd -F StanzaFile
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sda
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

root@nfs01 ~ ##
> mmcrfs data_scratch -F StanzaFile \
-T /data_scratch -m 2 -M 3 -i 4096 -A yes -Q no -S relatime \
-E no --version=X.X.X

The following disks of data_scratch will be formatted on node nfs01:
nsd_nfs01: size 38149120 MB
nsd_nfs02: size 38149120 MB
Formatting file system ...
Disks up to size 291.06 TB can be added to storage pool system.
Creating Inode File
Creating Allocation Maps
Creating Log Files
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
16 % complete on DATE
27 % complete on DATE
...
100 % complete on DATE
Completed creation of file system /dev/data_scratch.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Shall we run the commands also on nfs02? I don’t know, so let’s do it just in case.

root@nfs02 ~ ## > mmcrnsd -F StanzaFile
mmcrnsd: Processing disk sda
mmcrnsd: Disk name nsd_nfs01 is
already registered for use by GPFS.
mmcrnsd: Processing disk sda
mmcrnsd: Disk name nsd_nfs02 is
already registered for use by GPFS.
mmcrnsd: Command failed.
Examine previous error messages to determine cause.

So we don’t need it. A distributed system should know we already used those disks. Now we can go to another GPFS node of the same cluster, for example node03, and mount the share with the usual command, mmmount data_scratch. Or we can mount them on all at the same time, mmmount all -a. Bonus: list of GPFS commands.
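Bonus sketch: before feeding a StanzaFile to mmcrnsd, you can sanity-check which NSDs and servers it defines with a one-liner. The awk parsing is my own convenience, not a GPFS tool; the stanza content matches the file above, just without the line continuations:

```shell
# Write a sample StanzaFile (same stanzas as above, one per line).
cat > StanzaFile.check <<'EOF'
%nsd: nsd=nsd_nfs01 device=/dev/sda servers=nfs01 usage=dataAndMetadata failureGroup=-1 pool=system
%nsd: nsd=nsd_nfs02 device=/dev/sda servers=nfs02 usage=dataAndMetadata failureGroup=-1 pool=system
EOF
# Print only the nsd= and servers= fields of each %nsd stanza.
awk '/^%nsd:/ {line=""; for (i=1;i<=NF;i++) if ($i ~ /^(nsd|servers)=/) line=line $i " "; print line}' StanzaFile.check
```

A ten-second read of that output would have told us the nsd names were already taken before mmcrnsd complained.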

GPFS adding and removing a new Network Shared Disk : possible error messages

I have previously written a post about declaring a RAID an NSD and adding it to our GPFS cluster. That was a year ago, so I feel like it’s OK to review the topic. I have another RAID disk on one server of our GPFS cluster. I wanted to add the disk so that it’s also visible on all the clients. Short and clear: we aim to have two NSDs. First we mount the RAID.

root@node02 ~ ## > mkdir /scratch
root@node02 ~ ## > mount /dev/sda1 /scratch
root@node02 ~ ## > chmod 777 /scratch/

I made the scratch readable and writable for all users. My first trial consists of copying the StanzaFile from the other node with storage (node01) and changing the servers parameter. Like this:

%nsd:
device=/dev/sda
nsd=nsd1
servers=node02
usage=dataAndMetadata
failureGroup=-1
pool=system

Let’s create the NSD using the corresponding command. The error is pretty obvious:

root@node02 ~ ## > mmcrnsd -F StanzaFile 
mmcrnsd: Processing disk sda
mmcrnsd: Disk name nsd1 is already registered for use by GPFS.
mmcrnsd: Command failed. Examine previous error
messages to determine cause.

We change nsd1 to nsd2. Don’t blame the messenger: I’m just trying to illustrate the errors. We try again, just to get this:

root@node02 ~ ## > mmcrnsd -F StanzaFile 
mmcrnsd: Processing disk sda
mmcrnsd: Incorrect node node02 specified for command.
mmcrnsd: Error found while processing stanza

What is going on? The node name, oddly enough in our case, doesn’t correspond to the GPFS name. This is because we have two networks: one for GPFS, the other for another purpose. Each network has a name, and we need to use the GPFS network name, in this example gpfs02. With this change, the command runs without errors. Time to create the filesystem. I do it like this:

root@node02 ~ ## > mmcrfs /dev/scratch -F StanzaFile \
-T /gpfstest -m 2 -M 3 -i 4096 -A yes -Q no \
-S relatime -E no
mmcrfs: The requested format version for the file system
(5.0.5.2 - default) exceeds the current committed
function level for the cluster (minReleaseLevel 5.0.4.0).
Either specify --version=5.0.4.0 to create the file system
at the current level, or run "mmchconfig release=LATEST"
to update the cluster to the new release level.
mmcrfs: Command failed. Examine previous error messages
to determine cause.

As you see, I failed. If I try again, slightly varying the parameters, I still get the same error. I don’t want to get into committed function level considerations, so I do as suggested.

root@node02 ~ ## > mmchconfig release=LATEST
Verifying that all nodes in the cluster are up-to-date ...
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

root@node02 ~ ## > mmcrfs /dev/scratch -F StanzaFile \
-T /gpfstest -i 4096 -A yes -Q no -S relatime -E no

The following disks of data_scratch will be formatted
on node node02:
nsd2: size 85839360 MB
Formatting file system ...
Disks up to size 660.12 TB can be added
to storage pool system.
Creating Inode File
Creating Allocation Maps
Creating Log Files
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
72 % complete on DATE
100 % complete on DATE
Completed creation of file system /dev/scratch.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

We have our new GPFS NSD! Unfortunately, I don’t like the file system name gpfstest, so we delete the newly created share just to create it again with a nicer name, data_scratch. Like this:

root@node02 ~ ## > mmdelfs /dev/scratch
All data on the following disks of scratch
will be destroyed: nsd2
Completed deletion of file system /dev/scratch.
mmdelfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

root@node02 ~ ## > mmcrfs /dev/scratch -F StanzaFile \
-T /data_scratch -i 4096 -A yes -Q no -S relatime -E no

The following disks of data_scratch will be formatted
on node node02:
nsd2: size 85839360 MB
.. as above...
Completed creation of file system /dev/data_scratch.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

We can check on another GPFS node that our share is now visible. We mount it on node03.

mmmount data_scratch
Thu Oct 15 15:56:57 CEST 2020: mmmount: Mounting file systems ...

Now we log in as a user to check the new share.

user@node03 $ > df -h
Filesystem Size Used Avail Use% Mounted on
...some other shares
data_scratch 82T 82T 270G 100% /data_scratch

What is going on? Is the share already full? Is it improperly added? No, it’s another “feature”: the node takes time to register the NSD space. I wait, and around 5 minutes later (the time may vary depending on your cluster) the new share goes to 1% and life is beautiful 🙂 🙂 🙂
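If you want to script around that “feature” instead of watching df by hand, a small polling loop works. A sketch (the mount point is the one from this post; the retry count and sleep interval are arbitrary choices of mine):

```shell
# Poll a mount point until its reported use drops below 100%.
# Defaults to / for a quick demo; on the cluster, pass /data_scratch.
MNT=${1:-/}
for i in $(seq 1 60); do
  use=$(df --output=pcent "$MNT" 2>/dev/null | tail -1 | tr -dc '0-9')
  if [ -n "$use" ] && [ "$use" -lt 100 ]; then
    echo "$MNT ready at ${use}% use"
    break
  fi
  sleep 5
done
```

Handy as the last step of a mount script, so the share is only handed to users once the space is actually reported.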

No excuses

I have no excuse. I have the plot, I have the title, and I have time to write it. The only component I miss is the mood to write. Why is that? Because I feel a little alone in this enterprise of getting the adventures of these two poor guys out of my head. I don’t think I’m ever going to finish a book about them, unless I dedicate all my (office?) time to it. My bits somehow keep coming with corresponding issues, so I will never run out of material. Not unless there’s an EMP killing all the electronics in the world, because even if I’m fired, I’ll probably keep having computing issues. Or I think I will keep having them, if I keep playing with it. Because I’m optimistic. At one point, everything will go back to normal. Everything 😦