Matlab R2019 : deep network design app error using matlab.internal.webwindow on CentOS 7.5

matlab-deeplearning

Yes, I know. What??? Deep network design? Whatever. It doesn’t work, so let’s fix it. It’s actually very simple. We need the libXss library. It doesn’t come as a stand-alone thing but with a package. I do as suggested here, like this:

 yum install libXScrnSaver

We don’t need to reboot. After this install (just one package, libXScrnSaver.x86_64 in my case; you can try the Ubuntu solution here) when I choose the Deep Network Designer app, a new pop-up window appears and we can do something in it. What we can do, I don’t know yet, I’m just fixing a problem 😀
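Just to double-check that the missing piece is in place (I’m assuming the webwindow error was indeed about libXss.so.1, which is what libXScrnSaver provides), the linker cache should now list it:

ldconfig -p | grep -i libxss
# expected: something like  libXss.so.1 (libc6,x86-64) => /lib64/libXss.so.1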


A vnstat web interface for CentOS 7.5

We want to have an overview of the traffic on all our subnets. We could use the switch information, provided by Nagios or similar, but I prefer an open-source, free, client-based solution like vnstat, which we will use to collect the client information, plus a customized web interface. To install vnstat on CentOS 7, simply do it with yum and start the service:

yum install vnstat; systemctl start vnstat

In my case, this brought both tools we need, vnstat and vnstati.

We can try to run it already. The output a minute after it starts should be similar to this: [basic-vnstat-usage screenshot]

The pictures are taken from the Ubuntu howto. Actually, vnstat is the standard solution to monitor network traffic on Ubuntu. There are of course more network monitoring solutions, but let’s focus on vnstat.

On the above picture, the default interface being monitored is ens33. We may need to change that in our configuration, located at /etc/vnstat.conf. Remember, if you make any changes, you will need to stop and start the vnstat daemon. Once we have it running and saving data in a database (named after the interface, under /var/lib/vnstat/), we can periodically produce a plot with the stats we are interested in, a plot that should end up on our website.
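For example, switching the monitored interface could be done like this (a minimal sketch; the option is called Interface in my vnstat.conf, check yours before scripting anything):

# make vnstat monitor eth0 instead of the default
sed -i 's/^Interface .*/Interface "eth0"/' /etc/vnstat.conf
# restart the daemon so it picks up the change
systemctl restart vnstat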

You know I’m lazy. So let’s check the available options before reinventing the wheel. Option one is a simple webpage that shows the vnstati images. This is not quite what we want: we want to collect images from multiple clients, and it also doesn’t produce the images (summary.png, monthly.png, etc.) itself. But kudos for the layout, it’s clean and easy to customize 🙂

Second option, already more complicated, is vnstatsvg. I install it following the instructions. This is my output:

root@server~/vnstatsvg ## > ./configure -w /var/www/html/stats

Your Configuration:

cgi-bin directory : /var/www/cgi-bin
usr-bin directory : /usr/bin
vnstatsvg directory : /var/www/html/stats
XML dumping method: a shell script with the --dumpdb 
option provided by vnStat

checking command: vnstat... /usr/bin/vnstat
checking command: cron... which: no cron in 
(/usr/local/sbin:/...)
/usr/bin/crontab
checking command: gawk... /usr/bin/gawk
checking command: apache2... which: no apache2 in 
(/usr/local/sbin:/...)
/usr/sbin/httpd
checking command: sort... /usr/bin/sort
checking command: grep... /usr/bin/grep

Finish configuration: 
CGI_BIN: /var/www/cgi-bin/
<-- the files in cgi-bin directory of vnstatsvg will be 
installed here.
USR_BIN: /usr/bin/
<-- the vnstat binary will be installed here.
VNSTATSVG_ROOT: /var/www/html/stats/
<-- the files in admin directory of vnstatsvg will be 
installed here.
XML_DUMP_METHOD: vnstat-shell.sh
<-- the xml dumping method will be used.
The configuration have been saved in Makefile :-)

now, you can compile and install vnStatSVG as following:
$ make; make install 
root@server ~/vnstatsvg ## > make; make install
make httpclient -C src/cgi-bin/
make[1]: Entering directory `/root/vnstatsvg/src/cgi-bin'
gcc -O2 -W -o httpclient httpclient.c
make[1]: Leaving directory `/root/vnstatsvg/src/cgi-bin'
make httpclient -C src/cgi-bin/
make[1]: Entering directory `/root/vnstatsvg/src/cgi-bin'
make[1]: `httpclient' is up to date.
make[1]: Leaving directory `/root/vnstatsvg/src/cgi-bin'
Installing the administration pages...
Installing the CGI programs...
Finished installation. :-)
-----------------------------------------------

Time to go to server/stats/index.html to check it out. And it works! We do get our web interface. Unfortunately, no data seems to be coming into it. Nothing even similar to this: [vnstatsvg-multi-hosts screenshot]

I’m not saying it’s not working. I’m going to say it’s not working for our configuration, or for our setup. I don’t know and I don’t want to dig into it, since, as I said, I’m lazy. So what now? I’ll tell you what. Let’s produce the plots periodically and copy them to our webserver, to start with. Something like what I found here:

*/10 * * * * /usr/bin/vnstati -s -i eth1+eth2 -o /nfs/server_d.png

This is a crontab entry running every 10 minutes, saving to a png the traffic summary from both interfaces of my server “server” in an NFS folder /nfs/. The idea is that all my servers will save their pictures in that folder, and I will copy, also via crontab, those pictures to my webserver. If I didn’t have an NFS folder, I could create a crontab to ssh the pngs to my web server. As a layout, I used the two-column example from w3schools. One machine, one plot, a little bit of manual coding… and I’m done!
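On the webserver side, the copy can be just another cron job. A minimal sketch, assuming the clients drop their pngs in /nfs/ and the site lives in /var/www/html/stats/ (adjust paths and offsets to taste):

# /etc/cron.d/collect-vnstati on the web server
# run 2 minutes after the clients write their pngs, every 10 minutes
2-59/10 * * * * root rsync -a /nfs/*.png /var/www/html/stats/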

Could it be done better? Yes, of course. I still need to try the vnstat-dashboard docker, or dumping the data and plotting it with JSON or Java. Why didn’t I try it? Because I know dockers, and the vnstat-dashboard will work as a single-client solution, not for our multiple-client problem. If I find a proper docker solution you will be one of the first to know about it 🙂

Install and use bbcp on CentOS 7

It is very annoying to move or copy data around, from storage A to storage B, and sometimes back. Especially if it’s, like, a few terabytes of data. The default tools under Linux are usually command-line, and their performance will depend on a lot of parameters, like the type of mount (cifs, nfs, gpfs), the bandwidth and the command used. The obvious ones are copy (cp), which I tend to run recursive and verbose (cp -Rav origin destination), move (mv) and rsync (which also has a lot of options). But they don’t squeeze out all the speed, since they don’t run in parallel. If you want, and your data structure allows it, you can always write a script to loop over the folders and start a copy/mv/rsync for each of them. But this is not what I wanted to show you. The most complete tool, and the fastest in my experience, is bbcp. Unfortunately, to use it, you need to install it first.
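Just to illustrate the loop-over-folders idea before moving on to bbcp, a quick-and-dirty sketch (assuming one subfolder per dataset and a destination that is already mounted):

# launch one rsync per top-level folder, at most 4 at a time
for d in /storageA/*/ ; do
    rsync -a "$d" "/storageB/$(basename "$d")/" &
    while [ "$(jobs -rp | wc -l)" -ge 4 ]; do sleep 5; done
done
wait    # block until the last transfers finish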

There are several ways to install it. You should get the package from somewhere; for example, download the latest puias-computational rpm from a CentOS repository:

http://springdale.math.ias.edu/data/puias/computational/7/x86_64/

Install puias-computational rpm:

# rpm -Uvh puias-computational*rpm

Install bbcp rpm package:

# yum install bbcp

Or you can follow this installation procedure from another blog 🙂

After that, you can start by typing bbcp and you should get something like this:

bbcp
bbcp: Copy arguments not specified.
Usage: bbcp [Options] [Inspec] Outspec

Options: [-a [dir]] [-A] [-b [+]bf] [-B bsz] 
[-c [lvl]] [-C cfn] [-D] [-d path]
[-e] [-E csa] [-f] [-F] [-g] [-h] [-i idfn] 
[-I slfn] [-k] [-K]
[-L opts[@logurl]] [-l logf] [-m mode] [-n] 
[-N nio] [-o] [-O] [-p]
[-P sec] [-r] [-R [args]] [-q qos] [-s snum] 
[-S srcxeq] [-T trgxeq]
[-t sec] [-v] [-V] [-u loc] [-U wsz] [-w [=]wsz] 
[-x rate] [-y] [-z]
[-Z pnf[:pnl]] [-4 [loc]] [-~] [-@ {copy|follow|ignore}] 
[-$] [-#] [--]

I/Ospec: [user@][host:]file

So how does it work? In principle, once tuned, bbcp will start several parallel copying processes and report to you (if you want) what’s going on. The nicest feature, from my point of view, is that at the end of your run it will tell you what it did. Like this:

5170 files copied at effectively 20.3 MB/s

I got this line at the end of my run. A sample command to bbcp from origin to destination is:

bbcp -av -P 2 -s 16 -w 2M -r origin/ /destination/
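For my own records, my reading of the flags in that line (based on the bbcp help output above; double-check against the man page of your version):

# -a      keep checkpoint data so an interrupted copy can be resumed
# -v      verbose output
# -P 2    print progress messages every 2 seconds
# -s 16   use 16 parallel network streams
# -w 2M   request a 2 MB window size
# -r      copy directories recursively
bbcp -av -P 2 -s 16 -w 2M -r origin/ /destination/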

Here you have another usage example. I think the tool, although a little bit cumbersome, is working very well. What more do you want, a progress bar? 🙂

A Wiki install on CentOS 7

We need a website that can be used to share information. This is usually called a wiki. I was tempted to install a wiki in a container (see for example the xwiki docker) but I went for the real thing due to a weird error with multiple LAMP dockers running on the same host that I didn’t have time to debug. But which wiki style to choose? I chose MediaWiki since it is the Wikipedia layout, but let’s have a look at the other options.

  • DokuWiki is a simple wiki. Here you have the installation procedure for CentOS 7. Why didn’t I choose it? Because it doesn’t have a popular layout, one everybody knows.
  • PhpWiki is an even simpler wiki. Here you have the installation on CentOS 7, and a live version to have a look at. Again, I’m going to say the layout is not modern enough.
  • XWiki is indeed looking great, so I could install it instead of MediaWiki. Here you have the installation on CentOS 7. I didn’t choose this one out of a gut feeling against a wiki that looks like a social network. Time will tell if I’m wrong or not 🙂

I go ahead then with MediaWiki. What I need to have at hand is the Manual Installation guide. And the first stone I find there is the requirements, namely the PHP version.

On CentOS 7, the default PHP is 5.4, so how the hell do we install 7.X? I found the solution here. In the end, the list of additional packages I installed is this:

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum install yum-utils
yum-config-manager --enable remi-php72
yum install php php-mcrypt php-cli php-gd php-curl \
    php-mysql php-ldap php-zip php-fileinfo mysql postgresql sqlite
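A quick sanity check that the Remi PHP 7.2 actually took over from the stock 5.4 (my habit, not part of the MediaWiki instructions):

php -v
# should report PHP 7.2.x, not 5.4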

I unzip my MediaWiki tarball, move it to /var/www/html/ and point my browser to the local address. But I see PHP source text instead of the nice installer that you see at the top of this post. What am I doing wrong? Of course, I need to either reboot the computer or restart the Apache server. After that, the configuration script runs and I have my wiki. Easy as pie on Ubuntu, I guess. 😛
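Restarting Apache is the lighter of the two options; on CentOS 7 it is simply:

systemctl restart httpd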

Package atomic-registries requires python-pytoml error installing docker on CentOS 7

I’m moving dockers around. Most of my machines are Intel running CentOS 7. But on one Opteron running CentOS 7 (the same kernel as the Intel machines) I got the following error when running yum.

yum install docker
..stuff here, repository list, fastest mirrors...
Resolving Dependencies
--> Running transaction check
...the checks here...
---> Package libsepol-devel.x86_64 0:2.5-8.1.el7 
will be an update
--> Finished Dependency Resolution
Error: Package: 1:atomic-registries-XXX.el7.centos.x86_64 (extras)
Requires: python-pytoml
Available: python-pytoml-0.1.14-1.XXX.el7.noarch (extras)
python-pytoml = 0.1.14-1.XXX.el7
Available: python2-pytoml-0.1.18-1.el7.noarch (epel)
python-pytoml = 0.1.18-1.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

The same command works flawlessly on other machines. Maybe it’s my machine, which has remnants from other installations. Maybe it’s the Opteron, maybe it’s something else. But let’s fix it. First, we’d better be sure there’s no previous docker installed on the system:

yum remove docker docker-client \
docker-client-latest docker-common \
docker-latest docker-latest-logrotate \
docker-logrotate docker-selinux \
docker-engine-selinux docker-engine

Then we use the official convenience script to install it.

 wget -qO- https://get.docker.com/ | sh
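Once the script finishes, my usual sanity check (assuming the standard systemd service name, docker):

systemctl start docker
systemctl enable docker     # come back after a reboot, please
docker run hello-world      # pulls and runs the test container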

After that, everything seems to work! Time to move my babies around! I want to try to convert my servers to dockers so that I stop doing what I call the dance: this server here, this GPU card there, this RAM removed, this RAM added 🙂 a lonesome and tiring job, the one of the moving man 😛

Install pyEM on CentOS 7

After a week of silence, I’m back on track. I managed to integrate pyEM into our systems, and this is my log about it. We want it to export cryoSPARC data to Relion. Since we don’t start from scratch, my first trial is on top of one of the existing python environment modules, so that I do it once and have it everywhere. This is not the preferred method, and it didn’t work in our case. Let me show you how to get to the error.

module load python-2.7.13
git clone https://github.com/asarnow/pyem.git
cd pyem
pip install -e .

The install goes on for a while, until this:

Downloading stuff ...  
Matplotlib 3.0+ does not support Python 
2.x, 3.0, 3.1, 3.2, 3.3, or 3.4.
Beginning with Matplotlib 3.0, Python 3.5 and above is required.
This may be due to an out of date pip.
Make sure you have pip >= 9.0.1.
----------------------------------------
Command "python setup.py egg_info" failed with 
error code 1 in /tmp/pip-build-H4XfvY/matplotlib/
You are using pip version 8.1.2, however 
version 18.1 is available.
You should consider upgrading via the 
'pip install --upgrade pip' command.

Of course it could be pip. Before upgrading it, I try with another python module, that is:

module load python-3.5.3
git clone https://github.com/asarnow/pyem.git
cd pyem
pip install -e .

The error in this case is with the default gcc.

/bin/ld: cannot find -lpython3.5m
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pyfftw

I tried yum install python-devel gcc-g++, but it doesn’t fix it. Another attempt is yum install python34-devel. But gcc still fails. I make the python-3.5.3 module local by copying it to the local drive, and this time I manage. Alright, we know it works. It’s not very convenient having it locally only, so I go for miniconda. The step-by-step install is here.

Step 1: Install miniconda

 bash Miniconda2-latest-Linux-x86_64.sh -u

I do it on a network location (NFS mounted), overwriting an existing miniconda install.

Step 2: In a new shell, export the path to miniconda and git clone the repository onto a network location (the same NFS-mounted share)

git clone https://github.com/asarnow/pyem.git

and pip install it.
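Spelled out, Step 2 looks roughly like this (the paths match the module file below; /network/ is just my NFS share, adjust to yours):

export PATH=/network/miniconda2/bin:$PATH    # use the miniconda python and pip
cd /network/pyem-0.3
git clone https://github.com/asarnow/pyem.git
cd pyem
pip install -e .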

Step 3: Write a module for it. In my case, the network drive is /network/. The module looks like this:

#%Module1.0#####################################
## modules pyem
## modulefiles/pyem pyEM
##
proc ModulesHelp { } {
    global version modroot
    puts stderr "pyem - sets the Environment for using pyem"
}

module-whatis "Sets the environment for using pyem-0.3"

# for Tcl script use only
set topdir /network/pyem-0.3
set version 0.3
set sys linux86
prepend-path PATH /network/miniconda2/bin:$topdir:$topdir/pyem/
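With the module in place, a typical session would look something like this (csparc2star.py is one of the pyem scripts for turning cryoSPARC metadata into a Relion star file; the file names are placeholders):

module load pyem
csparc2star.py cryosparc_exported_particles.cs particles.star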

And that’s what I have to say today. I hope it helps somebody!


Install Chrome on CentOS 7

This was interesting. Fruitless, but interesting. I wanted to install a new browser to test some web-app that is supposed to work only on modern browsers. Unfortunately, Chrome works but not the web-thingy… so back to square one. Anyway, as described on tecmint, what you need to do is:

  1. Create a Google yum repository. Go to /etc/yum.repos.d/ and create a file google-chrome.repo with the following content:
[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
  2. Install it:
yum install google-chrome-stable

Now, if I’m root and I want to launch it via the command line, I need to opt out of the sandbox. Otherwise you get this error:

[31040:31040:0910/113432.321798:
ERROR:zygote_host_impl_linux.cc(89)] 
Running as root without --no-sandbox is not supported. 
See https://crbug.com/638180.
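So, strictly for testing and only as root, the launch line becomes:

google-chrome --no-sandbox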

Just for your information. And for my records 😉