Syncthing install on CentOS 7.5

I will start by quoting the product. What is syncthing? “Syncthing is an application that lets you synchronize your files across multiple devices. This means the creation, modification or deletion of files on one machine will automatically be replicated to your other devices.” That says it all. Next question: what for? Reasons can vary; mine is that it’s multi-platform: there are apps, a web interface, a GUI, and so on, and all of it for free. Unfortunately installing it on CentOS is not for newbies. Let’s start.

Step one: create a yum repository. There’s an entry about syncthing on the CentOS forum. The idea is to create a dedicated repository for syncthing. What I did was copy an already existing repository file, rename it, and edit it.

cd /etc/yum.repos.d/
cp epel.repo syncthing.repo
gedit syncthing.repo
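
What goes inside depends on where you get the packages from, so treat the following only as a sketch of the shape of a .repo file; the repository name and baseurl below are placeholders, not necessarily what I used:

[syncthing]
name=Syncthing for CentOS 7
baseurl=https://example.org/syncthing/centos/7/$basearch/
enabled=1
gpgcheck=0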

Inside the edited repo file we put a repository definition like the one sketched above. Then yum clean all and yum update, or even better, reboot if you can. Finally:

yum install syncthing
systemctl stop firewalld
/bin/syncthing
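
Stopping firewalld altogether is the quick and dirty route. A gentler option (a sketch, assuming syncthing’s default ports: 8384 for the web GUI, 22000/tcp for sync traffic, 21027/udp for discovery) would be to open only those ports:

firewall-cmd --permanent --add-port=8384/tcp
firewall-cmd --permanent --add-port=22000/tcp
firewall-cmd --permanent --add-port=21027/udp
firewall-cmd --reload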

And if you have a browser open, the Syncthing web UI will show up. Now what? We go to the syncthing configuration and edit it so that the web GUI listens on my CentOS machine’s IP instead of the default localhost-only address (a sketch below). We may want to create a service for the process, but I’m not going to tell you how to do that.
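
For reference, and assuming the default config location for the user running syncthing, the relevant bit lives in ~/.config/syncthing/config.xml; the IP below is just an example:

<gui enabled="true" tls="false">
    <address>192.168.1.50:8384</address>
</gui>

Using 0.0.0.0:8384 instead would make it listen on all interfaces. Restart syncthing after the change.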

I test that I can access the web UI from another computer, and I can. Then I install the syncthing Android app (on a phone that is on the same network as my syncthing web server) and add the device on the web interface. It’s not very intuitive: to add the device you get a QR code or a very long string of letters and numbers, the device ID (see also the note below). Anyway, once I add it, I see on the web UI that syncthing wants to add one of the folders of my phone to the “Folders” section. I click OK and the sync begins. Once you are done, you have the typical options: Pause, Rescan, Edit…
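
By the way, if you prefer the command line to the QR code, the local device ID can also be printed directly (assuming the syncthing binary installed above):

syncthing -device-id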

I must say the overall impression is very good, so I approve. The problem will be, as usual, to spread and promote its usage. We’ll see how it goes!


CryoSPARC not starting after update to v2.8 on CentOS 7.X: bad timing interval

As usual, click here if you want to know what cryosparc is. I have a cryosparc master-client setup. In principle I updated from v2.5 to v2.8 successfully by running cryosparcm update in a shell; it’s the standard procedure. Everything got updated, master and clients. But after the update I rebooted everything, and after the reboot of the master node the problems started. This is the symptom:

cryosparcm start
Starting cryoSPARC System master process..
CryoSPARC is not already running.
database: started
command_core: started

And the startup hangs there. The message telling you where to go to access your server never appears. Of course I waited. The status looks like this:

cryosparcm status
--------------------------------------------------
CryoSPARC System master node installed at
/XXX/cryosparc2_master
Current cryoSPARC version: v2.8.0
----------------------------------------------
cryosparcm process status:
command_core                     STARTING 
command_proxy                    STOPPED   Not started
command_vis                      STOPPED   Not started
database                         RUNNING   pid 49777, uptime XX
watchdog_dev                     STOPPED   Not started
webapp                           STOPPED   Not started
webapp_dev                       STOPPED   Not started
------------------------------------------------
global config variables:
export CRYOSPARC_LICENSE_ID="XXX"
export CRYOSPARC_MASTER_HOSTNAME="master"
export CRYOSPARC_DB_PATH="/XXX/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false

It looks like the situation in this cryosparc forum post. Unfortunately no solution is given there. We can also check what the webapp log says:

 cryosparcm log webapp
    at listenInCluster (net.js:1392:12)
    at doListen (net.js:1501:7)
    at _combinedTickCallback (XXX/next_tick.js:141:11)
    at process._tickDomainCallback (XXX/next_tick.js:218:9)
cryoSPARC v2
Ready to serve GridFS
events.js:183
      throw er; // Unhandled 'error' event
      ^
Error: listen EADDRINUSE 0.0.0.0:39000
    at Object._errnoException (util.js:1022:11)
    at _exceptionWithHostPort (util.js:1044:20)
    at Server.setupListenHandle [as _listen2] (net.js:1351:14)
    at listenInCluster (net.js:1392:12)
    at doListen (net.js:1501:7)
    at _combinedTickCallback (XXX/next_tick.js:141:11)
    at process._tickDomainCallback (XXX/next_tick.js:218:9)

It looks like a Node.js problem rather than anything cryosparc-specific (EADDRINUSE stands for “address in use”): the webapp cannot bind to port 39000 because something is already listening there. So which process is creating the listening error?
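
Before deleting anything, a quick way to see who is actually holding port 39000 (standard Linux tools, nothing cryosparc-specific):

ss -tlnp | grep 39000      # or: lsof -i :39000
ps aux | grep supervisord  # any leftover cryoSPARC supervisor process?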

I clean up as suggested on this cryosparc post, or on this one, deleting things under /tmp/ and trying to find and kill any rogue supervisord process. I don’t have one. Next I reboot the master, but the problem persists. Messing with MongoDB does not help either. What now? The cryosparc update installed a new python, so I decide to force the reinstall of the dependencies. It is done like this:

cryosparcm forcedeps
  Checking dependencies... 
  Forcing dependencies to be reinstalled...
  --------------------------------------------------
  Installing anaconda python...
  --------------------------------------------------
..bla bla bla...
 Forcing reinstall for dependency mongodb...
  --------------------------------------------------
  mongodb 3.4.10 installation successful.
  --------------------------------------------------
  Completed.
  Completed dependency check. 

If I believe what the software tells me, everything is fine. I reboot and run cryosparcm start, but my command_core still hangs on STARTING. After several hours of investigation, I decide on a drastic solution: install everything again. And then I find it.

 ./install.sh --license $LICENSE_ID \
--hostname sparc-master.org \
--dbpath /my-cs-database/cryosparc_database \
--port 39000
ping: bad timing interval
Error: Could not ping sparc-master.org

What is this bad timing interval? I access my servers via SSH + VPN, so it could be that the installer can’t handle the I/O of such a connection, or the time servers we use, or something else; maybe some tool versions simply differ between environments. In any case, I approach it another way: I need to be closer to the machine. How?

I open a virtual desktop there and, inside it, an ubuntu shell where I run the installer. Et voilà! Bad timing gone, and the install goes on without any further issues. Note that I do a new install using the previous database (--dbpath /my-cs-database/cryosparc_database) so that everything, even my users, stays the same as before 🙂

Long story short: shells may look the same but behave differently. Be warned!

CentOS sudo bash no remote display

Case scenario: someone gave you sudo rights on a computer you can log in to as a regular user. You do ssh -Y username@remote and you get a shell where you can call GUIs, for example emacs, and a pop-up window appears (X11). But if you go sudo, the shell throws this error (or one very similar):

> nedit
nedit: the current locale is utf8 (en_US.UTF-8)
nedit: changed locale to non-utf8 (en_US)
X11 connection rejected because of wrong authentication.
NEdit: Can't open display

In this case it’s nedit, which I know is installed and which I can open as “username”. What can I do to get my X11 window? Solution found, as usual, on this stackexchange post. As the user, I get the value of the DISPLAY variable, then the xauth list:

[user@remote~]$ echo $DISPLAY
localhost:11.0
[user@remote ~]$ xauth list
remote/unix:25 MIT-MAGIC-COOKIE-1 45673fghdfghd7
remote/unix:32 MIT-MAGIC-COOKIE-1 iiteiket6478787
remote/unix:29 MIT-MAGIC-COOKIE-1 b7unjyuiojnko78
remote/unix:11 MIT-MAGIC-COOKIE-1 e4t1363d166t636
...a lot here
remote/unix:14 MIT-MAGIC-COOKIE-1 e86sdaasdsyuuyu
[user@remote ~]$ sudo bash
root@remote ## > 
xauth add remote/unix:11 MIT-MAGIC-COOKIE-1 e4t1363d166t636
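
If the list is long, a quick way to pick out only the entry matching your display (here :11, from the DISPLAY value above) is to grep for it as the user and then paste the matching line after xauth add in the root shell:

xauth list | grep ":11 "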

That is, you need to find the magic cookie matching your DISPLAY (here display 11, hence the remote/unix:11 entry) and add it as root. This worked for me. Will it work for you? I hope so 🙂

The Huawei incident

I’m a multi-platform user by need, so I can’t call myself an Android, an OSX or a Windows user. Of course that’s a sweeping statement, made without naming any specific device, and of course I have my favourites. I must say I’m lucky I don’t have a Huawei terminal, or any of its sister brands (Honor and similar). But today’s big news about Google ending its deal with the technology firm Huawei sounds like good news to me. I’ll try to explain why.

Previously I managed to install Android on my old DELL netbook, only to find out that its specs are way below my current mobile phone, to the point of looking ridiculous. But it worked. The install was quite easy, the most complicated step being to get Android Oreo onto a pen drive.

I do have a micro USB to USB adapter, and I frequently read a USB stick with my generic phone. So why not let users install custom-designed ROMs on a phone, without the hassle of voiding the warranty or rooting the device? That way I could, let’s say, install Windows XP on it, or something like that, if needed. I consider myself a pro, and I have already bricked an HTC trying to root it.

If a company like Huawei goes ahead and lets people use their terminals however they like (of course under certain circumstances, maybe after registering it as a test device, providing a valid email address, opening a developer’s profile with the company, or something like that), I think we will all be more free. I’m looking forward to having a smartphone the way I like it: one where I can control what is sent back home and what isn’t, and which apps I get and what they report. And maybe in that near future I will finally have CentOS on my phone, if I need it. Or Windows XP 🙂

CryoSPARC 2 slurm cluster worker update error

This is about CryoSPARC again. Previously we installed it on CentOS and updated it, but on a master + node configuration, not on a cluster configuration. If it’s a new install on your slurm cluster, you should follow the master installation guide, which tells you to do a master install on the login node and then, on the same login node, install the worker:

module load cuda-XX
cd cryosparc2_worker
./install.sh --license $LICENSE_ID --cudapath /XXX/cuda/9.1.85

The situation is that we updated the master node but the Lane default (cluster) didn’t get the update, and jobs crash because of it. First we remove the worker using the cryosparcm management CLI, like this:

cryosparcm cli 'remove_scheduler_target_node("cluster")'

Then we cryosparcm stop and move the old worker software folder out of the way

mv cryosparc2_worker cryosparc2_worker_old

and get a new copy of the worker software with curl.

curl -L https://get.cryosparc.com/\
download/worker-latest/$LICENSE_ID > cryosparc2_worker.tar.gz

We cryosparcm start, then untar, cd, and install. Don’t forget to add your LICENSE_ID and to load the cuda module, or make sure a default cuda is available.
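
A minimal sketch of those steps, reusing the tarball name from the curl command above and the same CUDA path as in the original worker install:

tar -xzf cryosparc2_worker.tar.gz
cd cryosparc2_worker
module load cuda-XX
./install.sh --license $LICENSE_ID --cudapath /XXX/cuda/9.1.85

Below is an edited extract of my worker install: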

******* CRYOSPARC SYSTEM: WORKER INSTALLER ***********************

Installation Settings:
License ID :  XXXX
Root Directory : /XXX/Software/Cryosparc/cryosparc2_worker
Standalone Installation : false
Version : v2.5.0

******************************************************************

CUDA check..
Found nvidia-smi at /usr/bin/nvidia-smi

CUDA Path was provided as /XXX/cuda/9.1.85
Checking CUDA installation...
Found nvcc at /XXX/cuda/9.1.85/bin/nvcc
The above cuda installation will be used but can be changed later.

***********************************************************

Setting up hard-coded config.sh environment variables

***********************************************************

Installing all dependencies.

Checking dependencies... 
Dependencies for python have changed - reinstalling...
---------------------------------------------------------
Installing anaconda python...
----------------------------------------------------------
PREFIX=/XXX/Software/Cryosparc/cryosparc2_worker/deps/anaconda
installing: python-2.7.14-h1571d57_29 ...

...anaconda being installed...
installation finished.
---------------------------------------------------------
Done.
anaconda python installation successful.
---------------------------------------------------------
Preparing to install all conda packages...
-----------------------------------------------------------
----------------------------------------------------------
Done.
conda packages installation successful.
------------------------------------------------------
Preparing to install all pip packages...
----------------------------------------------------------
Processing ./XXX/pip_packages/Flask-JSONRPC-0.3.1.tar.gz

Running setup.py install for pluggy ... done
Successfully installed Flask-JSONRPC-0.3.1 
Flask-PyMongo-0.5.1 libtiff-0.4.2 pluggy-0.6.0 
pycuda-2018.1.1 scikit-cuda-0.5.2
You are using pip version 9.0.1, 
however version 19.1.1 is available.
You should consider upgrading via the
 'pip install --upgrade pip' command.
-------------------------------------------------------
Done.
pip packages installation successful.
-------------------------------------------------------
Main dependency installation completed. Continuing...
-------------------------------------------------------
Completed.
Currently checking hash for ctffind
Dependencies for ctffind have changed - reinstalling...
--------------------------------------------------------
ctffind 4.1.10 installation successful.
--------------------------------------------------------
Completed.
Currently checking hash for gctf
Dependencies for gctf have changed - reinstalling...
-------------------------------------------------------
Gctf v1.06 installation successful.
-----------------------------------------------------------
Completed.
Completed dependency check.

******* CRYOSPARC WORKER INSTALLATION COMPLETE *****************

In order to run processing jobs, you will need to connect this
worker to a cryoSPARC master.

****************************************************************

We are re-adding a worker that was previously there, so I don’t do anything else. If I check the web UI, Lane default (cluster) is back. Extra tip: see the forum entries about a wrong default cluster_script.sh and about the Slurm settings for cryosparc v2.

If I need to add something: be aware that the worker install seems to come with its own python, and it does reinstall ctffind and gctf. So be careful if you run python things in addition to cryosparc 🙂

Merge several CSV files in bash using awk

I produce a lot of CSV files at the moment. I decided to merge them all into one and read what I want from it. This can be done with a so-called “one-liner”. I hate them, but sometimes they are very practical. I found the merging solution here. This is the expanded solution. Let’s say we want to merge 3 files. First, let’s print the contents of all of them side by side:

paste -d, one.csv two.csv three.csv

Explanation: we paste the files one.csv, two.csv and three.csv, which contain comma-separated values, using a comma as the output delimiter (-d,), and print the result to stdout (your shell). We now select the fields we want with awk, like this:

paste -d, one.csv two.csv three.csv | \
awk -F, '{print $1","$2","$4","$7}'

We are selecting, thanks to awk, the fields of the paste output we are interested in. In this example, fields 1, 2, 4 and 7 of the combined lines. You can still check the result on stdout… and when you are happy, dump your fields to a file, like this:

paste -d, one.csv two.csv three.csv | \
awk -F, '{print $1","$2","$4","$7}' > output.csv
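
To make it concrete, a tiny made-up example (the file contents here are invented):

$ cat one.csv
name,age
ana,30
$ cat two.csv
city,zip
rome,00100
$ paste -d, one.csv two.csv | awk -F, '{print $1","$3}'
name,city
ana,rome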

Now to plot it 😦

An ASCII version of Star Wars Episode IV


This you need to check out! Open a command prompt, and type:

 telnet towel.blinkenlights.nl

Note that you need, of course, to have telnet. Found here while looking for Windows command prompt shortcuts. The full article is very interesting, so don’t jump directly to the end of it. Note 2: it also works on Linux, if you have telnet installed. Have fun! 🙂