Migration to a new Mac: macOS needs to repair your Library to run applications

Migrating a Mac may not be your weapon of choice. I’d rather copy the files I’m interested in manually, or put both old and new computers on the same network and rsync whatever I need.

But life is not fair. We can’t always do whatever we want, at least not in this reality. So I was asked for help migrating one “old” MacBook (~2015) to a new one (~2018) with Touch Bar. How did I do that? First I got the very rare USB-C to Thunderbolt adapter, then connected both Macs and opened Migration Assistant on each. The whole procedure is described in this Apple support post, so I’m not going to write it out again. I’m just going to make some comments.

I migrated only the user accounts and Applications. Migrating everything would cause the new Mac to have the same name and credentials as the old one, and I don’t want that on my network. The migration proceeded without issues, but when I tried to log in with the usertwo account from the “old” Mac, I found the error above. To fix it, I logged in as an administrator of the new computer (a non-migrated account) and ran this:

cd /Users/usertwo
ls -alh                       # check who owns the migrated files
sudo chown -R usertwo ./*     # note: ./* does not match hidden files such as .ssh

The solution is at the end of this macrumors post. I’m so pissed off the procedure didn’t even copy the user IDs… what’s so complicated about that? 😀
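
By the way, a quick way to see the UID mismatch on the new Mac (usertwo being the example account from above) is to compare the numeric owner of the migrated files with the UID the new system gave the account:

ls -lan /Users/usertwo | head          # numeric owner (UID) of the migrated files
id -u usertwo                          # UID of the account on the new Mac
dscl . -read /Users/usertwo UniqueID   # what the directory service has recorded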

Syncthing install on CentOS 7.5

I will start by quoting the product. What is Syncthing? “Syncthing is an application that lets you synchronize your files across multiple devices. This means the creation, modification or deletion of files on one machine will automatically be replicated to your other devices.” This says it all. Next question: what for? Reasons can vary; mine is that it’s multi-platform: there are apps, a web interface, a GUI, and so on, and all of it for free. Unfortunately, installing it on CentOS is not for newbies. Let’s start.

Step one: create a yum repository. There’s an entry about Syncthing on the CentOS forum. It explains how to create a dedicated repository for Syncthing. What I did was copy an already existing repository file, rename it, and edit it.

cd /etc/yum.repos.d/
cp epel.repo syncthing.repo
gedit syncthing.repo
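
For reference, a .repo file has this general shape. The baseurl below is only a placeholder: the real repository definition is the one in the forum post.

[syncthing]
name=Syncthing
baseurl=http://URL-FROM-THE-FORUM-POST
enabled=1
gpgcheck=0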

Inside the edited repo file, we copy this (it has the same shape as the sketch above). Then yum clean all and yum update. Or even better, reboot if you can. At the end:

yum install syncthing
systemctl stop firewalld     # quick and dirty: otherwise firewalld blocks the Syncthing ports
/bin/syncthing               # run it in the foreground for now

And if you have a browser open, the Syncthing web UI shown above will open. Now what? We go to the Syncthing configuration and edit it so that it listens on my CentOS client’s IP, not the default one. We may also want to create a service for the process, but I’m not going to tell you how to do that.
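
For reference, the relevant bit is the GUI listen address in config.xml, something like this (192.168.1.50 stands for my CentOS client IP, 8384 is the default web UI port):

gedit ~/.config/syncthing/config.xml
...
<gui enabled="true" tls="false">
    <address>192.168.1.50:8384</address>
</gui>
...

Or, without touching the file, start it as /bin/syncthing -gui-address=192.168.1.50:8384 instead.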

I test that I can access the web UI from another computer, and I can. Then I install the Syncthing Android app (on a phone that runs on the same network as my Syncthing server) and add the device on the web interface. It’s not very intuitive: to add the device you get a QR code or a very long string of letters and numbers. Anyway, once I add it, I see on the web UI that Syncthing wants to add one of the folders of my phone to the “Folders” section. I click “OK” and the sync begins. Once you are done, you have the typical options: Pause, Rescan, Edit…
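
If you prefer the command line, on the Syncthing versions of this era you can also print that long device ID on the CentOS side (or look it up in the web UI under Actions):

syncthing -device-id     # prints this machine's device ID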

I must say the overall impression is very good, so I approve it. The problem will be, as usual, propagating and promoting its usage. We’ll see how it goes!

CryoSPARC not starting after update to v2.8 on CentOS 7.X: bad timing interval

As usual, click here if you want to know what cryoSPARC is. I have a cryoSPARC master-client setup. In principle, I updated from v2.5 to v2.8 successfully by running cryosparcm update in a shell. It’s the standard procedure. I updated everything, master and clients. But after the update I rebooted everything, and after the reboot of the master node the problems started. This is the symptom:

cryosparcm start
Starting cryoSPARC System master process..
CryoSPARC is not already running.
database: started
command_core: started

And the startup hangs there. The message telling you where to go to access your server never appears. Of course I waited. The status looks like this:

cryosparcm status
--------------------------------------------------
CryoSPARC System master node installed at
/XXX/cryosparc2_master
Current cryoSPARC version: v2.8.0
----------------------------------------------
cryosparcm process status:
command_core                     STARTING 
command_proxy                    STOPPED   Not started
command_vis                      STOPPED   Not started
database                         RUNNING   pid 49777, uptime XX
watchdog_dev                     STOPPED   Not started
webapp                           STOPPED   Not started
webapp_dev                       STOPPED   Not started
------------------------------------------------
global config variables:
export CRYOSPARC_LICENSE_ID="XXX"
export CRYOSPARC_MASTER_HOSTNAME="master"
export CRYOSPARC_DB_PATH="/XXX/cryosparc_database"
export CRYOSPARC_BASE_PORT=39000
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false

It looks like the issue in this cryoSPARC forum post. Unfortunately, no solution is given there. We can also check what the webapp log says:

 cryosparcm log webapp
    at listenInCluster (net.js:1392:12)
    at doListen (net.js:1501:7)
    at _combinedTickCallback (XXX/next_tick.js:141:11)
    at process._tickDomainCallback (XXX/next_tick.js:218:9)
cryoSPARC v2
Ready to serve GridFS
events.js:183
      throw er; // Unhandled 'error' event
      ^
Error: listen EADDRINUSE 0.0.0.0:39000
    at Object._errnoException (util.js:1022:11)
    at _exceptionWithHostPort (util.js:1044:20)
    at Server.setupListenHandle [as _listen2] (net.js:1351:14)
    at listenInCluster (net.js:1392:12)
    at doListen (net.js:1501:7)
    at _combinedTickCallback (XXX/next_tick.js:141:11)
    at process._tickDomainCallback (XXX/next_tick.js:218:9)

This looks like a Node.js problem, not a Java one: net.js and events.js are Node internals, and EADDRINUSE stands for “address in use”, meaning something is already listening on that port. So which process is causing the listening error?
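
Before going on, a couple of standard checks to see which process is sitting on the base port (39000, per the config dump above) and whether a supervisord from the previous run survived:

ss -ltnp | grep ':39000'        # which process is listening on the cryoSPARC base port
lsof -i :39000                  # same information, different tool
ps aux | grep supervisord       # any leftover cryoSPARC supervisor still around?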

I clean up as suggested in this cryoSPARC post, or in this one: deleting the /tmp/ leftovers and trying to find and kill any rogue supervisord process, which I don’t have. Next I reboot the master, but the problem persists. Messing around with the MongoDB doesn’t help either. What now? The cryoSPARC update installed a new Python, so I decide to force the reinstall of the dependencies. It is done like this:

cryosparcm forcedeps
  Checking dependencies... 
  Forcing dependencies to be reinstalled...
  --------------------------------------------------
  Installing anaconda python...
  --------------------------------------------------
..bla bla bla...
 Forcing reinstall for dependency mongodb...
  --------------------------------------------------
  mongodb 3.4.10 installation successful.
  --------------------------------------------------
  Completed.
  Completed dependency check. 

If I believe what the software tells me, everything is fine. I reboot and run cryosparcm start, but my command_core still hangs on STARTING. After several hours of investigation, I decide on a drastic solution: install everything again. And then I find it.

 ./install.sh --license $LICENSE_ID \
--hostname sparc-master.org \
--dbpath /my-cs-database/cryosparc_database \
--port 39000
ping: bad timing interval
Error: Could not ping sparc-master.org

What is this bad timing interval? I access my servers via SSH + VPN, so it could be that the installer can’t cope with the latency of such a connection, or with the time servers we use, or something else. Or maybe some tool version differs? In any case, I approach it another way. I need to be closer. How?

I open a virtual desktop there, and in it I call an Ubuntu shell where I run my installer. Et voilà! The bad timing is gone, and the install goes on without any further issues. Note that I do a fresh install pointing at the previous database (--dbpath /my-cs-database/cryosparc_database) so that everything, even my users, is the same as before 🙂

Long story short: shells may look the same but behave differently. Be warned!
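
If you hit the same error and switching shells is not an option, my suggestion (a guess: the message seems to come from ping itself complaining about its interval option, and I don’t know the exact flags the installer uses) is to run ping by hand from the failing shell and from a working one, and compare the environments:

ping -c 1 -i 0.5 sparc-master.org   # does ping itself accept an explicit interval here?
locale                              # locale settings can differ between shells
env | sort > shell-env.txt          # dump each shell's environment and diff the two files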

LLDP, the Link Layer Discovery Protocol, on Windows and Linux

I need to know where my computers are connected without running around the building, checking cables and reading not-so-easy-to-read cryptic labels. I’m not Central IT, so I don’t have access to the core switches. What can I do? First I build a list of Linux clients, like client01…client10, and of Windows clients. On the Linux ones, I install the package lldpd. For CentOS 7.X, it is done like this:

yum install lldpd
systemctl enable lldpd.service 
systemctl start lldpd.service

You can now try it. For example I will show the neighbours:

root@client01 # lldpcli show neighbors
---------------------------------------------------
LLDP neighbors:
----------------------------------------------------
Interface: enXXX, via: LLDP, RID: 2, Time: YYY
Chassis:
ChassisID: mac AA:BB:CC:DD:EE:FF
SysName: SYS-SWITCH-CRYPTIC-NAME
SysDescr: My-Swit-Model-With-SW-Version
TTL: 120
MgmtIP: 111.222.333.444
Capability: Bridge, on
Port:
PortID: ifname 1
PortDescr: TP-1
--------------------------------------------------

Obviously I edited the output. Once we are familiar with the output, we can grep it, for example in a for loop over the clients to get the SysName or PortDescr (sketch below). What about Windows? For Windows, we can install the program above; you can find it here. By the way, the screen capture is from that blog entry. Now it’s time to assemble the output into a web page, because I love web panels 😀
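
For the record, this is roughly the loop I have in mind, assuming passwordless SSH as root to the Linux clients named as above:

for host in client{01..10}; do
    echo "== $host =="
    ssh root@"$host" lldpcli show neighbors | grep -E 'SysName|PortDescr'
done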

Checksel ipmiutil warning on CentOS 7

From time to time I get this warning:

/etc/cron.daily/checksel:

WARNING: free space is low (=3184), need to clear with -d
ipmiutil sel version 3.12
-- BMC version 2.61, IPMI version 2.0 
SEL Ver 51 Support 02, Size = 1024 records (Used=825, Free=199)
ClearSEL: Log Cleared successfully
ipmiutil sel, completed successfully

I do have remote iLO access set up for this machine. As you can see, the warning sorted itself out (the log was cleared), so there is no need to clear the SEL manually as suggested. For the record, there is a cron job called checksel. We can read it:

 more /etc/cron.daily/checksel 
...
# This script runs ipmiutil sel writing any new records to syslog, 
# and will then clear the SEL if free space is low.

So we could, in principle, modify the cron job to do whatever we want. Of course, I will not: I’m not very successful at modifying things that work. I just wanted to log its existence 🙂
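
For completeness: if the SEL ever fills up and the cron does not clear it, the -d flag the warning mentions does the job by hand. I have not needed it so far.

ipmiutil sel        # list the System Event Log records
ipmiutil sel -d     # clear the SEL, like the checksel cron does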

The Huawei incident

I’m a multi-platform user by need, so I can’t call myself an Android, an OS X or a Windows user. Of course, all of that is said with a big mouth, without naming any specific device, and of course I have my favourites. I must say I’m lucky I don’t have a Huawei terminal, or any of their clones (Honor and similar). But today’s big news about Google ending its deal with the technology firm Huawei sounds like good news to me. I’ll try to explain why.

Previously I managed to install Android on my old DELL netbook, only to find out that its specs are way below my current mobile phone’s, to the point of looking ridiculous. But it worked. The install was quite easy, the most complicated step being getting Oreo onto a pen drive.

I do have a micro-USB to USB adapter, and I frequently read a USB stick with my generic phone. So why not let users install custom-designed ROMs on a phone, without the hassle of breaking the warranty or rooting it? That way I could, let’s say, install Windows XP on it, or something like that, if needed. I consider myself a pro, and I have already bricked an HTC trying to root it.

If a company like Huawei goes ahead and lets people use their terminals whatever way they like (of course under certain conditions, maybe after registering it as a test device, providing a valid email address, opening a developer’s profile with the company, or something like that), I think we will all be more free. I’m looking forward to being able to have a smartphone the way I like it: one where you can control what gets sent back home, and which apps you get and what they report. And maybe in that near future I will finally have CentOS on my phone, if I need it. Or Windows XP 🙂

3 Bandwidth monitoring tools

I managed to build a vnstat web interface to plot the network traffic, but the latency and the look are not the best. Also, it relies on crontabs and SSH connections, and it lacks alarms. Now that I have a working solution, I decided to search a little bit more. Here you have 16 useful bandwidth monitoring tools, and my comments on some of them.

Zabbix seems to be a fully configurable dashboard with everything you may want. The installation procedure is here. Zabbix can collect different types of data that are used to create historical graphs and show the performance or load trends of the monitored targets. Suitable for small, private and homogeneous networks, I believe.
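
For reference, on CentOS 7 the install boils down to something like this (a sketch: take the exact release RPM URL and version from the linked procedure, which also covers the MySQL/MariaDB database and the web frontend setup that I skip here):

rpm -Uvh https://repo.zabbix.com/zabbix/VERSION/rhel/7/x86_64/zabbix-release-VERSION.el7.noarch.rpm
yum install zabbix-server-mysql zabbix-web-mysql zabbix-agent
systemctl enable zabbix-server zabbix-agent
systemctl start zabbix-server zabbix-agent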

Observium is said to support a wide range of operating systems and hardware platforms including Linux, Windows, FreeBSD, Cisco, HP, Dell, NetApp and so on. This is quite interesting if you want to monitor everything, even the printers. The installation procedure is here. If you have a look at the demo version, you see how complicated it can become. My opinion: suitable for big, distributed, heterogeneous networks. That is not my case, so I’m not going to use it.

Cacti is the last one that I considered keeping as my final tool. It says it is used to graph time-series data of metrics such as network bandwidth utilization, CPU load, running processes, disk space, etc. So it is like Munin, but fully configurable. Here you have the installation procedure. I’m not sure RRDtool is the best backend for my plots, but you may want to give it a try. I’d say Cacti is suitable for small, private and heterogeneous networks.

Of course it’s a lot of work to install and configure all of them, so if you have the chance, don’t waste your time and go for the one you like most. In my case, it’s going to be Zabbix. By the way, does anyone know what that web dashboard look is called?