Open gnome-terminals with a specific configuration on CentOS 7

I'm a hard-core terminal guy. I do everything with it. As such, I hate opening the same group of terminals again and again, giving them names, and so on. So, is there any fast way to save and restore terminal tabs? It turns out that no, there isn't. If I type this on my CentOS 7.6:

# > gnome-terminal --version
# GNOME Terminal 3.28.2 using VTE 0.52.2 +GNUTLS
# > gnome-terminal --save-config=/tmp/test
# Option “--save-config” is no longer supported in this version of gnome-terminal.

So the nice load-config and save-config options are no more. What now? I looked for alternatives, like other terminal emulators. Some of them can be installed via yum and allow profile management, like ROXTerm. My problem? I work on different computers, and I don't want to be installing something everywhere, even via yum, just to save 5 minutes. It's like killing a fly with a cannon. Mostly I just want to ssh to different clients and rename the tabs afterwards.

Let's say I want to open 3 tabs, in addition to the one I'm on, connected to client1, client2, and client3. I create an executable open_clients.sh file (chmod 777) with this content:

#!/bin/bash
gnome-terminal \
--tab -t client1 -e 'sh -c "ssh -Y client1"' \
--tab -t client2 -e 'sh -c "ssh -Y client2"' \
--tab -t client3 -e 'sh -c "ssh -Y client3"'

I open a new shell, run open_clients.sh, and watch 3 additional tabs open with my ssh connections established. Note that in the tab where I run the script I get this deprecation message, but I can keep working in it, so so far so good 🙂

BTW, if you try to correct the “-e” the way they suggest, you will end up with this other error. This script I can copy around, keep in my home folder, or even as a draft in my email, it's that small. And I can have as many as I like: open_webservers.sh, open_dataservers.sh, etc. My problem is solved, and my day gone. Therefore, have a nice weekend!
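For the record, the correction they suggest (terminating the options with “--” and putting the command after it) would look roughly like the line below; as said, on this gnome-terminal version it just gave me a different error:

gnome-terminal --tab -t client1 -- ssh -Y client1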


A Wiki install on CentOS 7

We need a website that can be used to share information. This is usually called a wiki. I was tempted to install a wiki in a container (see for example the xwiki docker) but I went for the real thing, due to a weird error with multiple LAMP dockers running on the same host that I didn't have time to debug. But which wiki flavour to choose? I chose MediaWiki, since it has the layout everybody knows from Wikipedia, but let's have a look at the other options.

  • DokuWiki is a simple wiki. Here you have the installation procedure for CentOS 7. Why didn't I choose it? Because it doesn't have a popular layout, one everybody knows.
  • PhpWiki is an even simpler wiki. Here you have the installation on CentOS 7, and a live version to have a look at. Again, I'm going to say the layout is not modern enough.
  • XWiki is indeed looking great, so I could have installed it instead of MediaWiki. Here you have the installation on CentOS 7. I didn't choose this one out of a gut feeling against a wiki that looks like a social network. Time will tell if I'm wrong or not 🙂

I go ahead then with MediaWiki. What I need to have at hand is the Manual Installation guide. And the first stone I find there is the requirements, which ask for PHP 7.X.

On CentOS 7 the default php is 5.4, so how the hell do we install 7.X? I found the solution here. In the end, the list of additional packages I installed is this:

yum install \
https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install \
http://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum install yum-utils
yum-config-manager --enable remi-php72
yum install php php-mcrypt php-cli php-gd php-curl \
php-mysql php-ldap php-zip php-fileinfo mysql postgresql sqlite

I unzip my mediawiki tarball, move it to /var/www/html/, and point my browser to the local address. But I see php source text instead of the nice installer that you see at the top of this post. What am I doing wrong? Of course, I need to either reboot the computer or restart the apache server. After that, the configuration script runs, and I have my wiki. Easy as pie on Ubuntu, I guess 😛
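Just to be explicit, by restarting the apache server I mean something like this (httpd is the apache service name on CentOS 7, and php -v is only there to double-check that the remi php is the one being picked up):

systemctl restart httpd
php -v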

mysqld recover root password on CentOS 7

The disaster scenario is that you have been installing third-party software and its scripts somehow screwed up your mysql (users, tables, whatever). Or it could be that you get a server with mysqld already installed but you don't know the root password. (Can such a thing happen? Of course, in research everything is possible!) This solution from Stack Overflow worked for me.

1. Stop mysql:
systemctl stop mysqld

2. Set the MySQL environment option
systemctl set-environment MYSQLD_OPTS="--skip-grant-tables"

3. Start mysql using the environment option
systemctl start mysqld

4. Login as root
mysql -u root

5. Update the root user password

mysql> UPDATE mysql.user SET authentication_string =
PASSWORD('MyNewPass') WHERE User = 'root' AND Host = 'localhost';
mysql> FLUSH PRIVILEGES;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';

6. Stop mysql
systemctl stop mysqld

7. Unset the MySQL environment option
systemctl unset-environment MYSQLD_OPTS

8. Start mysql normally:
systemctl start mysqld

9. Login using your new password:
mysql -u root -p

It could be that the system still complains about the password. You can try to change the password policies as written here in another StackOverflow post. Now I need to fix phpMyAdmin, but maybe that story I will tell another day 😛
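The policy change itself, assuming it is the validate_password plugin that complains, goes along these lines (the values are just an example):

mysql> SET GLOBAL validate_password_policy=LOW;
mysql> SET GLOBAL validate_password_length=6;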

Package atomic-registries requires python-pytoml error installing docker on CentOS 7

I'm moving dockers around. Most of my machines are Intel boxes running CentOS 7. But on one Opteron running CentOS 7 (with the same kernel as the Intel ones) I got the following error when running yum.

yum install docker
..stuff here, repository list, fastest mirrors...
Resolving Dependencies
--> Running transaction check
...the checks here...
---> Package libsepol-devel.x86_64 0:2.5-8.1.el7 
will be an update
--> Finished Dependency Resolution
Error: Package: 1:atomic-registries-XXX.el7.centos.x86_64 (extras)
Requires: python-pytoml
Available: python-pytoml-0.1.14-1.XXX.el7.noarch (extras)
python-pytoml = 0.1.14-1.XXX.el7
Available: python2-pytoml-0.1.18-1.el7.noarch (epel)
python-pytoml = 0.1.18-1.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

The same command works flawlessly on other machines. Maybe it's my machine, which has remnants from other installations. Maybe it's the Opteron, maybe it's something else. But let's fix it. First we'd better be sure there's no previous docker installed on the system:

yum remove docker docker-client \ 
docker-client-latest docker-common \
docker-latest docker-latest-logrotate \
docker-logrotate docker-selinux \
docker-engine-selinux docker-engine

Then we use the official install script to install it.

 wget -qO- https://get.docker.com/ | sh
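Before declaring victory I give it a quick test, something like this (hello-world being just a throwaway test image):

systemctl start docker
systemctl enable docker
docker run hello-world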

After that, everything seems to work! Time to move my babies around! I want to try to convert my servers to dockers so that I stop doing what I call the dance: this server here, this GPU card there, this RAM removed, this RAM added 🙂 A lonesome and tiring job, the one of the moving man 😛

Topaz install on CentOS 7

Yes, I'm back to work. I should have logged more, but I was busy with so many issues that the only thing I wanted to write was fiction. Anyway. Today we are going to install Topaz on top of our already existing python module.

You may wonder what Topaz is: it is a “pipeline for particle detection in cryo-electron microscopy images using convolutional neural networks trained from positive and unlabeled examples”. Understood? Alright, so we can go ahead. On the Topaz github page you find the installation formula and a docker definition. I did try the docker, but for some reason, probably linked with my default python, the docker was created but I couldn't manage to make it run. I call these ones DOA dockers (Dead On Arrival).

When I find a DOA docker I tend to reverse engineer it. In this case, it is not needed. We have a “From source” section.  I quote:

Installation of dependencies

conda install numpy pandas scikit-learn cython
conda install -c soumith pytorch=0.2.0 torchvision
conda install -c soumith pytorch torchvision cuda80

This takes a while. In my case, around 20 minutes, but that may depend on the installation method. Then I cd into my topaz folder (the one from the DOA docker) and type

pip install .

No error is thrown. To test that it's properly installed, I go to another machine, load my python module, and try to run “topaz” something. And it works! The users are going to be happy: since my python module is mounted on a network location, everybody will have topaz once they load it. Even better than a docker! We'll see what they can do with it.
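The “topaz something” test is nothing fancy; on the other machine it amounts to more or less this (the module name is whatever your python module is called in your setup, and --help is just the cheapest thing to call):

module load python
topaz --help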

Deployment Management Tools


I'm trying to grasp PXE booting from the professional point of view. Not that I need it: I can afford the luxury of installing a client by hand. But after having used cobbler and foreman, I feel like I need to know more. The article:

What are the pros and cons of Chef, Puppet, Ansible, SaltStack and Fabric?

answers a lot of my questions about them, so I will not rewrite what is already written there. Check it out if you need it. My choice, so far, is foreman, and it matches what the article says. Anyway, don't worry, I will let you know if I change my mind 🙂

Add a puppet node to a foreman server

I wrote before how to add a puppet node to a foreman docker. Now I want to add the same node to the real server, which I call cfore, instead of the dfore docker. On the pnode from the previous example I edit the configuration, so that /etc/puppet/puppet.conf now points to the server cfore instead of dfore. I'm following this unixmen guide about how to add puppet nodes to foreman. After changing the server and restarting the daemon, we get an SSL error.
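The edit itself is nothing fancy; on pnode it ends up looking roughly like this (whether the setting lives under [agent] or [main] may depend on your setup, and the domain is anonymized as in the rest of the post):

pnode # > grep -A1 agent /etc/puppet/puppet.conf
[agent]
server = cfore.mydomain.com
pnode # > systemctl restart puppet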

pnode # > systemctl status puppet
● puppet.service - Puppet agent
 Loaded: loaded (/usr/lib/systemd/system/puppet.service; 
disabled; vendor preset: disabled)
 Active: active (running) since XXX; 5s ago
 Main PID: 25553 (start-puppet-ag)
 CGroup: /system.slice/puppet.service
 ├─25553 /bin/sh /usr/bin/start-puppet-agent agent --no-daemonize
 └─25555 /usr/bin/ruby /usr/bin/puppet agent --no-daemonize

XXX pnode puppet-agent[25569]: 
(/File[/var/lib/puppet/facts.d]) Wrapped exception: 
XXX pnode puppet-agent[25569]: 
(/File[/var/lib/puppet/facts.d]) 
SSL_connect returned=1 errno=0 state=error: 
certificate verify failed: [self signed YYY]
XXX pnode puppet-agent[25569]: 
(/File[/var/lib/puppet/lib]) Failed to generate additional 
resources using 'eval_generate': 
SSL_connect returned=1 errno=0...YYY]
XXX pnode puppet-agent[25569]: 
(/File[/var/lib/puppet/lib]) Could not evaluate: 
Could not retrieve file metadata for puppet:]
XXX pnode puppet-agent[25569]: 
...and so on

Now we can check the foreman SSL issues. I believe, anyway, that this is more a ruby SSL connection issue, or a puppet SSL mismatch.
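A quick way to test the mismatch suspicion is to compare the CA fingerprints both sides are using; if they differ, the agent is simply trusting the wrong CA (paths as they appear on my machines, yours may differ):

pnode # > openssl x509 -noout -fingerprint -in /var/lib/puppet/ssl/certs/ca.pem
cfore # > openssl x509 -noout -fingerprint -in /etc/puppetlabs/puppet/ssl/certs/ca.pem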

So I install the puppetdb tool, and run it to regenerate the SSL certificates:

cfore # > yum install puppetdb
..some stuff here...
================================================
Installing:
 puppetdb noarch 4.4.0-1.el7 puppetlabs-pc1 22 M
Transaction Summary
=====================================================
Install 1 Package
..some stuff here...
 Installing : puppetdb-4.4.0-1.el7.noarch 1/1 
Config archive not found. Not proceeding with migration
PEM files in /etc/puppetlabs/puppetdb/ssl are missing, 
we will move them into place for you
Copying files: /etc/puppetlabs/puppet/ssl/certs/ca.pem, 
/etc/puppetlabs/puppet/ssl/private_keys/
cfore.mydomain.com.pem and 
/etc/puppetlabs/puppet/ssl/certs/cfore.mydomain.com.pem 
to /etc/puppetlabs/puppetdb/ssl
Backing up /etc/puppetlabs/puppetdb/conf.d/jetty.ini 
to /etc/puppetlabs/puppetdb/conf.d/jetty.ini.bak.1528274270 
before making changes
Updated default settings from package installation for 
ssl-ca-cert in /etc/puppetlabs/puppetdb/conf.d/jetty.ini.
 Verifying : puppetdb-4.4.0-1.el7.noarch 1/1
Installed:
 puppetdb.noarch 0:4.4.0-1.el7
Complete!
cfore # > puppetdb ssl-setup -f
PEM files in /etc/puppetlabs/puppetdb/ssl already exists, 
checking integrity.
Overwriting existing PEM files due to -f flag
Copying files: /etc/puppetlabs/puppet/ssl/certs/ca.pem, 
/etc/puppetlabs/puppet/ssl/private_keys/cfore.mydomain.com.pem 
and /etc/puppetlabs/puppet/ssl/certs/cfore.mydomain.com.pem 
to /etc/puppetlabs/puppetdb/ssl
Setting ssl-host in /etc/puppetlabs/puppetdb/conf.d/jetty.ini 
already correct.

As you see, installing puppetdb also does some configuration.

We now have new SSL certificates. We clean the one of pnode, if it's there. During my experiments I managed to issue the certificate once, so I clean it with puppet cert clean pnode. Now, to be sure, I reinstall puppet on pnode. It's quick and it does no harm. I adjust puppet.conf, do more or less as suggested on StackOverflow, and:

  • remove manually all remnants of the previous puppet install
  • remove and reinstall the puppet package (yum)
  • restart the puppet agent (puppet agent -t)
  • sign the new certificate on the puppet master
  • restart the puppet service (systemctl start puppet)
pnode # > rm -rf /var/lib/puppet/
pnode # > rm -rf /etc/puppet/
pnode # > yum remove puppet
pnode # > yum install puppet  
pnode # >  puppet agent -t
Info: Creating a new SSL key for pnode.mydomain.com
Info: Caching certificate for ca
Info: csr_attributes file loading from 
/etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request 
for pnode.mydomain.com
Info: Certificate Request fingerprint (SHA256): 
SO:ME:SE:TO:FN:UM:BE:RS:AN:DL:ET:TT:ER:S
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled
cfore # > puppet cert list
 "pnode.mydomain.com" (SHA256) :
SO:ME:SE:TO:FN:UM:BE:RS:AN:DL:ET:TT:ER:S
cfore # > puppet cert sign pnode.mydomain.com
Signing Certificate Request for:
 "pnode.mydomain.com" (SHA256) :
SO:ME:SE:TO:FN:UM:BE:RS:AN:DL:ET:TT:ER:S
Notice: Signed certificate request for pnode.mydomain.com
Notice: Removing file Puppet::SSL::CertificateRequest 
pnode.mydomain.com at 
'/etc/puppetlabs/puppet/ssl/ca/requests/pnode.mydomain.com.pem'
pnode # > systemctl start puppet
pnode # > systemctl status puppet
--> OK

If I refresh the foreman web interface, I see that my new client pnode appears after a few minutes. So far so good! Next step: PXE boot from foreman.