The perfect storm

Looks like I’m busier than before. It’s the start of a new academic year, and the unofficial end of the holiday season. I’ve been out on a business trip for a few days, in a place with such a sh#$@ internet connection that I was barely able to delete my emails, only to come back and find out that half of our services were down. And now I am recovering everything little by little, while people, as usual, keep coming to ask what’s going on, or simply to request other services. The good news is, I’m going on holidays soon, for real. Good news for me, maybe not for you, dear reader. We’ll see if I manage to dream a little dream over there. Anyway, thanks for passing by, see you soon ❤️❤️.

GPFS: cannot delete file system / Failed to read a file system descriptor / Wrong medium type

A nice and mysterious title. As usual. 😁. But you know what I’m talking about. I had one GPFS share that crashed due to a failing NVMe disk. The GPFS share was composed of several disks, and configured without any redundancy, as a scratch disk. Being naive, I thought that simply replacing the failed disk with a disk that I will call here Disk, and rebooting everything, should bring my GPFS share back. It didn’t work, and I have learned some new things that I’d like to show you.

To recover my share gpfsshare after replacing the failed disk, I first tried changing the disk names in the StanzaFile and recreating the NSD disks by running mmcrnsd -F StanzaFile. I could join the new disk Disk, but the share was still not usable.
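
In case you have never written one, a StanzaFile is just a plain-text description of the NSDs. A minimal sketch of an entry for the replacement disk could look like the following; the device path, usage, failure group and pool are placeholders, not my real values:

# Sketch of a StanzaFile entry for the replacement disk (placeholder values)
%nsd:
  device=/dev/nvme0n1
  nsd=Disk
  servers=node1.domain.org
  usage=dataAndMetadata
  failureGroup=1
  pool=system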

Then I tried to change the mount point of gpfsshare from /mountpoint to /newmountpoint:

root@gpfs ~ ## > mmchfs gpfsshare -T /newmountpoint
Verifying file system configuration information ...
Disk Disk: Incompatible file system descriptor version or not formatted.
Failed to read a file system descriptor.
Wrong medium type
mmchfs: Failed to collect required file system attributes.
mmchfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

The next thing I thought of doing was deleting Disk from gpfsshare. That way, I would end up with a GPFS share smaller than the original, but functional.

root@gpfs ~ ## > mmdeldisk gpfsshare Disk
Verifying file system configuration information …
Too many disks are unavailable.
Some file system data are inaccessible at this time.
Check error log for additional information.
Too many disks are unavailable.
Some file system data are inaccessible at this time.
mmdeldisk: Failed to collect required file system attributes.
mmdeldisk: Unexpected error from reconcileSdrfsWithDaemon.
Return code: 1
mmdeldisk: Attention:
File system gpfsshare may have some disks
that are in a non-ready state.
Issue the command:
mmcommon recoverfs gpfsshare
mmdeldisk: Command failed.
Examine previous error messages to determine cause.

Let’s list our NSD disks to see what we have. We can list them with mmlsnsd, with or without the extended output option (-X). This is my output (edited):

root@gpfs ~ ## > mmlsnsd

File system | Disk name | NSD servers
-----------------------------------------------
gpfsshare Disk node1.domain.org
gpfsshare Disk_old1 node2.domain.org
gpfsshare Disk_old2 node3.domain.org
(free disk) Disk_A node4.domain.org
(free disk) Disk_B node5.domain.org
(free disk) Disk_C node6.domain.org

Since everything is looking awful here, I will delete my gpfsshare filesystem and create it anew. Actually, I need to force the deletion. Let me show you.

root@gpfs ## > mmdelfs gpfsshare
Disk Disk: Incompatible file system descriptor version or not formatted.
Failed to read a file system descriptor.
Wrong medium type
mmdelfs: tsdelfs failed.
mmdelfs: Command failed. Examine previous error messages to determine cause.
root@gpfs ## > mmdelfs gpfsshare -p
Disk Disk: Incompatible file system descriptor version or not formatted.
Failed to read a file system descriptor.
Wrong medium type
mmdelfs: Attention: Not all disks were marked as available.
mmdelfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

We managed to delete the filesystem! But what happened to our NSD disks? Let’s use mmlsnsd to see where they stand:

root@gpfs ~ ## > mmlsnsd

File system | Disk name | NSD servers
-----------------------------------------------
(free disk) Disk node1.domain.org
(free disk) Disk_old1 node2.domain.org
(free disk) Disk_old2 node3.domain.org
(free disk) Disk_A node4.domain.org
(free disk) Disk_B node5.domain.org
(free disk) Disk_C node6.domain.org

So the disks are there. The filesystem gpfsshare is gone, so they are now marked as (free disk). Let’s then delete the NSD disks. We need to do it one by one; I show the output for the disk Disk.

root@gpfs ## > mmdelnsd Disk
mmdelnsd: Processing disk Disk
mmdelnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
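
If there are several of them, a small shell loop over the names reported by mmlsnsd saves some typing. A sketch, with the disk names from my output above:

# Delete each now-free NSD, one by one (names as listed by mmlsnsd)
for d in Disk Disk_old1 Disk_old2 Disk_A Disk_B Disk_C; do
  mmdelnsd "$d"
done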

Once we have removed all our disks (check with mmlsnsd) we can add them again using the original StanzaFile (mmcrnsd -F StanzaFile). If some of the disks are already registered, don’t worry: mmcrnsd will still work. A standard output for our disks looks like this:

root@gpfs ## > mmcrnsd -F StanzaFile
mmcrnsd: Processing disk Disk_A
mmcrnsd: Disk name Disk_A is already registered for use by GPFS.
mmcrnsd: Processing disk Disk_B
mmcrnsd: Disk name Disk_B is already registered for use by GPFS.
mmcrnsd: Processing disk Disk_C
mmcrnsd: Disk name Disk_C is already registered for use by GPFS.
mmcrnsd: Processing disk Disk
mmcrnsd: Processing disk Disk_old1
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Time to re-create the GPFS filesystem. We can call it the same as before 😉, if we like. It works now! I have changed nothing with respect to the command I originally used to create my GPFS share gpfsshare. This one:

mmcrfs gpfsshare -F StanzaFile -T /mountpoint -m 2 -M 3 -i 4096 -A yes -Q no -S relatime -E no --version=5.X.Y.Z

If you get an error like this:

Unable to open disk 'Disk_A' on node node4.domain.org
No such device

Check that GPFS is running on the corresponding node (in this example it’s node4.domain.org) and run mmcrfs again.
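
A quick way to do that check from the cluster side, as a sketch:

# Check the GPFS daemon state on all nodes, then start it where it is down
mmgetstate -a
mmstartup -N node4.domain.org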

I hope you have learned something already. But for my notes: first delete the filesystem (mmdelfs gpfsshare -p), second delete the NSD disks (mmdelnsd Disk), then add the disks again (mmcrnsd -F StanzaFile), and finally create a new filesystem (mmcrfs). Take care!
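
Or, as commands only, the same recipe with the names used above:

mmdelfs gpfsshare -p                                # force-delete the broken filesystem
mmdelnsd Disk                                       # repeat for every NSD, check with mmlsnsd
mmcrnsd -F StanzaFile                               # re-register the disks
mmcrfs gpfsshare -F StanzaFile -T /mountpoint ...   # re-create the filesystem as before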

Kubernetes metrics-server not going to READY 1/1 on CentOS 7.X

My Kubernetes dashboard seemed to have lost the metrics at one point. And this is bad, because I do need to monitor the resources, since my cluster is bare metal and I don’t have any external monitoring plugged in. The symptoms are below. I deploy the metrics server and check the status:

root@kube ## > kubectl apply -f \
https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
root@kube ## > kubectl get deployments --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default ubuntu-desktop 1/1 1 1 3d23h
development snowflake 2/2 2 2 4d22h
kube-system coredns 2/2 2 2 39d
kube-system metrics-server 0/1 1 0 2s
kube-system skooner 1/1 1 1 39m
portainer portainer 1/1 1 1 4d20h
production cattle 5/5 5 5 4d22h
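
Before tearing it down, it is worth looking at why the pod never becomes ready. A quick diagnostic sketch (the k8s-app label is the one used in the upstream manifest); in a setup like mine the logs usually complain about kubelet certificate verification, which is exactly what the fix below addresses:

# Why is metrics-server stuck at 0/1? Check the pod, its logs and the APIService status
kubectl -n kube-system describe pod -l k8s-app=metrics-server
kubectl -n kube-system logs deployment/metrics-server
kubectl describe apiservice v1beta1.metrics.k8s.io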

Also, on the dashboard Workload Status the service shows as failed. I’ve waited long enough, but my graphs don’t appear. First I uninstall the metrics server.

root@kube  ## > kubectl delete -f \
https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

serviceaccount "metrics-server" deleted
clusterrole.rbac.authorization.k8s.io "system:aggregated-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "system:metrics-server" deleted
rolebinding.rbac.authorization.k8s.io "metrics-server-auth-reader" deleted
clusterrolebinding.rbac.authorization.k8s.io "metrics-server:system:auth-delegator" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:metrics-server" deleted
service "metrics-server" deleted
deployment.apps "metrics-server" deleted
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted

Then I make a local copy of the metrics server manifest (open the link in the browser, save it as a YAML manifest) and edit it. The args (lines 129 to 140 of the manifest) now look like this:

spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=443
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --kubelet-use-node-status-port
    - --metric-resolution=15s
    - --kubelet-insecure-tls=true
    image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0
    imagePullPolicy: IfNotPresent
    livenessProbe:

The main change is the added --kubelet-insecure-tls=true argument. Of course, be careful with the indentation! Because you know, YAML, 😁😁. Now I deploy the modified manifest

root@kube ## > kubectl apply -f kube-metric-server-new.yaml

and wait for my graphs to start appearing on the dashboard (~15 minutes). Here is the issue thread from GitHub. If I have learned something, it is that one needs to keep local copies of the manifests, just in case 😉😉. BONUS: the documentation about deployments on Kubernetes and the Kubernetes cheatsheet. Because I may have saved you one or two Google searches.
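
If you don’t want to stare at the dashboard while waiting, the command line tells you when it is healthy: the deployment should reach 1/1 and kubectl top should start returning numbers.

kubectl -n kube-system get deployment metrics-server
kubectl top nodes
kubectl top pods --all-namespaces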

Portainer on Kubernetes for CentOS 7.X

I already installed Portainer as a docker container and I’m very happy with it. Is it the same sensation on a Kubernetes cluster? I need to deploy it to find out 😉. The official How-To is here. This is my experience with it.

First I stumble over the prerequisites note. Since I have a bare-metal Kubernetes cluster (that is, something I installed myself) I had forgotten to define the Kubernetes storage classes. Here you have the Kubernetes storage classes documentation. As you can expect, since it is a Kubernetes feature, you can connect your cluster to a wide variety of storage backends, cloud-based or not. I don’t want to speak about what I don’t know, so I’ll simply go ahead and add, from my Kubernetes Dashboard, the YAML for a storage class (sc) and for a persistent volume (pv). Just press the “+” on the web and paste the following YAML:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

This should create the sc called “local-storage”. I didn’t change a comma from the documentation. We can see it by typing kubectl get sc. This is my output:

root@kube ## > kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage (default) kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 13m

Sorry, but there’s no way to format this right 😁😁. We will also need a local persistent volume. This is my YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube-local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /my/storage/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube

I didn’t change much with respect to the Kubernetes local persistent volume documentation. I also add it through the web, although you have probably realized by now how to do it from the command line. Now that we have our storage, we can go ahead and deploy. I try first through helm, but somehow it doesn’t seem to work. Anyway, here’s my output:

root@kube ~ ## > helm install -n portainer portainer portainer/portainer --set persistence.storageClass=kube-local-pv
NAME: portainer
LAST DEPLOYED: Thu Aug 26 14:16:11 2021
NAMESPACE: portainer
STATUS: deployed
REVISION: 1
NOTES:
Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace portainer -o jsonpath="{.spec.ports[0].nodePort}" services portainer)
export NODE_IP=$(kubectl get nodes --namespace portainer -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT

Again, sorry for the editing. The above command is not like in the tutorial due to the changes in helm v3. I guess the Kubernetes world is evolving pretty quickly, so to say. I don’t let helm create the namespace; I create it beforehand from the command line:

kubectl create -f portainer.yaml

My portainer.yaml looks like this:

apiVersion: v1
kind: Namespace
metadata:
  name: portainer
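
For the record, the same namespace can also be created with a one-liner, no manifest needed:

kubectl create namespace portainer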

Of course it will also work if you add that through the dashboard. Anyhow. It didn’t work with helm (the pod was always pending for resources), so I went for a YAML manifest install. First we undo the helm deployment by deleting the namespace and the cluster role binding. We can do that from the dashboard or via the command line. The command line cleanup looks like this:

root@kube ~ ## > kubectl delete namespace portainer
namespace "portainer" deleted
root@kube ~ ## > kubectl delete clusterrolebinding portainer
clusterrolebinding.rbac.authorization.k8s.io "portainer" deleted

Then we deploy as NodePort. This is my output:

root@kube ~ ## > kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
namespace/portainer created
serviceaccount/portainer-sa-clusteradmin created
persistentvolumeclaim/portainer created
clusterrolebinding.rbac.authorization.k8s.io/portainer created
service/portainer created
deployment.apps/portainer created

After a few seconds, I can access my Portainer GUI through the address kube:30777. Everything is pretty similar to the docker version, so ❤️❤️ success!
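
If the GUI does not come up, the pods and the NodePort are easy to check from the command line (30777 being the port used above):

kubectl -n portainer get pods
kubectl -n portainer get svc portainer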

Rootless docker error on CentOS 7: failed to mount overlay: operation not permitted storage-driver=overlay2

While trying rootless docker on my servers, I found a lot of issues. They recommend using an Ubuntu kernel, but I use CentOS 7.X, so I need to stick with it. The prerequisites are fine: I have newuidmap and newgidmap and enough subordinate IDs.
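
If you want to double-check yours, here is a quick sketch, run as the unprivileged user (on CentOS 7 the two binaries come with shadow-utils, as far as I remember):

id -u                                   # must not be 0
command -v newuidmap newgidmap          # both must be present
grep "^user:" /etc/subuid /etc/subgid   # at least 65536 subordinate IDs for the user

With that out of the way, this is how it looks when I run the rootless setup script as the user user.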

user@server ~ $ > dockerd-rootless-setuptool.sh install
[ERROR] Missing system requirements.
[ERROR] Run the following commands to
[ERROR] install the requirements and run this tool again.

########## BEGIN ##########
sudo sh -eux <<EOF
# Set user.max_user_namespaces
cat <<EOT > /etc/sysctl.d/51-rootless.conf
user.max_user_namespaces = 28633
EOT
sysctl --system
# Add subuid entry for user
echo "user:100000:65536" >> /etc/subuid
# Add subgid entry for user
echo "user:100000:65536" >> /etc/subgid
EOF
########## END ##########

We go in as root and copy-paste the block above between the # markers. This is the output, edited.

root@server ~ ## > cut-and-copy-of-the-thing-above
+ cat
+ sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/51-rootless.conf ...
user.max_user_namespaces = 28633
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
+ echo user:100000:65536
+ echo user:100000:65536
root@server ~ ## > ########## END ##########

Time to try again. The result gives no error, but it is not like in the tutorial. Here you have it:

user@server ~ $ > dockerd-rootless-setuptool.sh install
[INFO] systemd not detected, dockerd-rootless.sh
needs to be started manually:
PATH=/usr/bin:/sbin:/usr/sbin:$PATH dockerd-rootless.sh
[INFO] Creating CLI context "rootless"
Successfully created context "rootless"
[INFO] Make sure the following environment variables
are set (or add them to ~/.bashrc):
export PATH=/usr/bin:$PATH
export DOCKER_HOST=unix:///run/user/3201/docker.sock
user@server ~ $ >

So what does it mean to start it manually? After reading this bug report, I decide to try running it with the experimental flag and specifying the storage driver. This is my output, as usual, edited for readability, with my own comments inline.

user@server ~ $ > dockerd-rootless.sh \
--experimental --storage-driver overlay2
+ case "$1" in
+ '[' -w /run/user/USERID ']'
+ '[' -w /home/user ']'
--> some user-dependent messages...
+ exec dockerd --experimental --storage-driver overlay2
INFO[] Starting up
WARN[] Running experimental build
WARN[] Running in rootless mode.
This mode has feature limitations.
INFO[] Running with RootlessKit integration
...more messages here, loading plugins...
INFO[] skip loading plugin "io.containerd.snapshotter.v1.aufs"...
error="aufs is not supported: skip plugin"
type=io.containerd.snapshotter.v1
INFO[] loading plugin "io.containerd.snapshotter.v1.devmapper"...
type=io.containerd.snapshotter.v1
WARN[] failed to load plugin
error="devmapper not configured"

INFO[] loading plugins..
INFO[] skip loading plugin "io.containerd.snapshotter.v1.zfs"...
error="path must be a zfs : skip plugin"
type=io.containerd.snapshotter.v1
WARN[] could not use snapshotter devmapper
in metadata plugin
error="devmapper not configured"

INFO[] metadata content store policy set policy=shared
INFO[] loading a lot of plugins successfully...
...more messages here, loading plugins...
INFO[] serving... address=/run/user/USERID/docker/containerd/sockets
INFO[] serving...
INFO[] containerd successfully booted in 0.033234s
WARN[] Could not set may_detach_mounts kernel parameter
error="error opening may_detach_mounts kernel config file:
open /proc/sys/fs/may_detach_mounts: permission denied"

INFO[] parsed scheme: "unix" module=grpc
...more messages here...
INFO[] ClientConn switching balancer to "pick_first" module=grpc
ERRO[] failed to mount overlay:
operation not permitted storage-driver=overlay2
INFO[] stopping event stream following graceful shutdown
error="context canceled"
module=libcontainerd namespace=plugins.moby
failed to start daemon:
error initializing graphdriver: driver not supported

[rootlesskit:child ] error:
command [/usr/bin/dockerd-rootless.sh
--experimental --storage-driver overlay2]
exited: exit status 1

[rootlesskit:parent] error: child exited: exit status 1

What do I get from the above run? There are warnings on zfs, aufs, and finally overlay2, so it looks like there’s some kind of problem with the storage driver. You can also get an obscure failed to register layer message or an error creating overlay mount. It makes sense, since I’m coming from a fully working root install. I try once more without the storage driver option, and this is the interesting part of the output:

ERRO[] failed to mount overlay: 
operation not permitted storage-driver=overlay2
ERRO[] AUFS cannot be used in non-init user namespace
storage-driver=aufs
ERRO[] failed to mount overlay: operation not permitted
storage-driver=overlay
INFO[] Attempting next endpoint for pull after error:
failed to register layer:
ApplyLayer exit status 1 stdout:
stderr: open /root/.bash_logout: permission denied

So if you don’t give a storage option, it tries them all. Mystery solved, I guess. You can have a look at the available overlayfs documentation (covering overlay and overlay2). In short, the docker daemon running as user doesn’t manage to use the storage drivers. Let’s have a look at the docker storage options. Some documentation first. We know how to change the directory where Docker stores containers and images. On my CentOS 7.X, I see my daemon runs overlay2, and indeed the downloaded images are stored in /var/lib/docker/overlay2. I can change the docker data directory by editing /etc/docker/daemon.json. I add something like this:

{
  "data-root": "/extrahd/docker",
  "storage-driver": "overlay2"
}

I then clean up with docker system prune -a and restart my docker daemon, still as root, to be sure the newly downloaded images end up in /extrahd/docker. As expected 😉. Note that the given location cannot be a GPFS or CIFS mounted folder, or I end up getting all the driver errors again.
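
The sequence, more or less, as a sketch. Careful: prune deletes everything, so be sure you can re-pull your images.

docker system prune -a
systemctl restart docker
docker info | grep -E 'Docker Root Dir|Storage Driver'   # should now show /extrahd/docker and overlay2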

Will this work in rootless mode? In my case, it was not possible until I did the same trick as for root. So one needs to configure the per-user docker daemon. For my user user, the configuration file should be located at

 /home/user/.config/docker/daemon.json 

Remember, of course, that the storage must be writable by the user user 😉😉. I hope you can now run rootless docker on CentOS 7.X as I can! 🤘😊
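
A minimal sketch of that per-user /home/user/.config/docker/daemon.json, assuming /extrahd/docker-user is an example local directory owned by the user (not my real path):

{
  "data-root": "/extrahd/docker-user"
}

Then start the rootless daemon again, the way the setup tool suggested:

PATH=/usr/bin:/sbin:/usr/sbin:$PATH dockerd-rootless.sh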

Bonus: docker storage using a bind mount (not tested), how to control docker with systemd, and the full docker daemon configuration file documentation.

The 3-2-1 backup rule

I’m now reading the book Kubernetes Backup & Recovery for Dummies (Kasten by Veeam special edition) that I got for free after my Cloud Native Days with Kubernetes conference. First I need to say I’m not a cloud native inhabitant, but I hope to be ready to live in the clouds when the time comes. So I can’t comment a lot on the book as a whole, but it’s definitely a very good overview of the backup and recovery problem. There’s one topic that I think goes beyond Kubernetes that I want to note down here.

It’s presented as a timeless rule against any failure scenario, so I guess it’s OK to copy it here. It is called, you guessed it, the 3-2-1 backup rule. This rule answers two questions:

  • How many backup files should I have?
  • Where should I store them?

Answers:

» 3: Have at least three copies of your data.

» 2: Store the copies on two different media.

» 1: Keep one backup copy offsite.

It may sound easy to achieve, but it is not when you have petabytes of data. We do have an HPSS with snapshots (one copy) and a TSM server for the home folders, but we can’t ask everyone to keep an offline copy of their data. I do ask people to follow this rule (at least three copies) for their important documents (papers, etc.). So I’m going to add a zeroth rule here: select what you want to back up. And do not trust the black box on this one. You must back up what you want to keep, just in case everything else fails 😉.

Portainer, a docker GUI in a docker, on CentOS 7.X

The Portainer web. As taken from this tutorial.

It’s been a while since I started with dockers and Kubernetes, but so far I haven’t shown you a docker management solution, only a docker usage cheat sheet. Well, forget about that: now I give you Portainer, a web GUI that will allow you to check your images, volumes, and containers. Of course, running as a docker container itself.

The image above I’ve taken from this Portainer installation tutorial for Ubuntu. It includes a docker installation, but I expect you already have docker running. Anyway, once you have docker, the install is pretty simple. Just like this:

docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
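
If you also want Portainer’s own settings (users, endpoints) to survive a container re-creation, you can run it with a named volume for /data instead. A sketch; the volume and container names are arbitrary:

docker volume create portainer_data
docker run -d -p 9000:9000 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data portainer/portainer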

After that, you should be able to access the web interface (machinename:9000) and finish up by creating the initial admin password. So far I’m very satisfied with the experience, but if you are not, you can check this comparison of docker GUIs. What’s next? Isolation, security and Kubernetes integration. I’ll keep you posted…

On the way down

In my dream, I was going out, maybe in Trento, Lausanne, or some place with a very aggressive geometry, with big slopes and in-between valleys. I was invited to a street party, I believe, and I was looking for the specific spot of it. In Europe some villages have medieval acute slopes, or streets with stairs and not enough space to drive. OK, maybe enough space for a motoretta or similar. The slope I was running down was around 8 meters wide and it was bisected by irregular groups of steps, two, three, five, every 8 meters or so, connecting different plateaus. Between the yellow and white heterogeneous buildings on both sides there were laid cables and strings with paper lanterns, garlands, and festive, colourful light bulbs. The road was really crowded. Most of the people were wearing masks, not surgical, but for a masquerade, or a carnival. The moon was shining above us all, right on top. While wandering, a girl offered me a glass of champagne, which I took, and asked me to follow her, which I did. I don’t know if it was the person I was supposed to meet. I wanted to ask her, but the ambient music was kind of loud. I tried to identify where the music was coming from and whether we were heading there. Not only did I not manage, but I found out there was not a single source of the melody. They were overlapping, oddly enough, in harmony, rock and traditional, classical and jazz.

I’m done with my glass by the time we arrive. “I found him!” she says. “Hey everybody!” I smile, and unknown people wave to me. We are on a small lateral alley with an impossible exit, like one meter above everyone. Some kind of half-built well, somehow, very comfy. There’s a sofa on the ground, some stools, and a table filled with colourful bottles. This time I’m offered a spritz. “Did you see him?” the dark-haired man with the purple mask asks me as soon as I find my spot. “He was asking for you before, saying that your portable thingy is ready.” Then I remember. “The hologram projector. Why did I ask him to meet me here?” The man smiles. “I guess you wanted to feel supported.” I nod. “There he is!”

I head to the man with the leather jacket. Actually, he’s totally dressed as an aviator. I don’t recognise his army badges, though. Halfway there, I decide to offer him a drink also, so I come back to our table to serve us another spritz. The leather jacket man moves up his shades. When I come close, I see that his eyes are compound, like from a fly or something. “Thanks for the drink,” he says. The eyes are not as inhuman as one could think. I wonder what he sees with those. I hesitate to ask him. He gulps the spritz and takes out something from his pocket. “There you have it.” I look at the thing. It looks like a small wrist band with a circular section, or half cuffs. “How does it work?” I ask. He takes it and rotates some parts of it. “Here, on the central part, you have the 2D projector. You can use it to check the matrix status, that is, temperature, rendering, stability, etc. You see?” On the wall the thing seems to project a set of small and shiny progress bars. They mutate to pie charts, and then to numbers, while the pilot manipulates the device. “The next four rings,” he fumbles with some sections of the bracelet, “you may need to exchange from time to time. They are the matrix memories. You can easily burn them if you are not careful. Please be careful!” I nod. “And these two, close to the lock, are the switch. Yes, they are double, to avoid an accidental switching on of the whole theatre. Wait.” I wait. He shows me how to switch the thing off, and then his insect eyes are gone. It was a hologram. “Now it’s all yours. What do you want to do with it?” I tell him I don’t know yet. I look at his now fully human blue eyes, looking for a hint of a lie. “Well, just be careful and don’t overestimate the battery. I recommend you play with it before going to war. Give yourself a new face, or change your colour, things like that.” I smile, thinking about it. “That’s also possible. But remember it’s a hologram, she will find out it’s not solid!” He laughs. I laugh. “Thanks,” I say, and I walk back to my people. Whoever they are.

Bloganiversary!

My averages – weekends were always slow.

Around today, in 2016, I decided to start a new blog. My first post is not exactly a post but a test, and you need to go to August 18th to really find the style that I’m trying to keep. This day also happens to be close to my birthday, so it’s a good moment to take stock.

It’s all clear in my mind, but it’s hard to write it down. So, is it still worth it? Yes, I think I will keep posting at least for another year. Not as frequently as I’d like to – if I could I would post daily – but as much as the business flow and my muse allow me. Did I grow a lot in followers and views over this period? Well, that’s a good one. After a lot of doubts I’ve posted my averages above, and they are quite clear. The daily average is heavily damaged by the weekends, when I have between 20 and 100 visits top. Even if I post 😉. My readers seem to reach this page from the office, most probably, from Monday to Friday. Since 2019, nothing seems to be changing. I have a solid 6K visitors per month, which is quite a nice number, and the search terms, as well as the countries of origin, are consistent week by week. Am I happy with this? Well, I’m not Tim Hockin or Jensen Huang and I don’t think I’m ever going to be anywhere close to them, since I’m not really developing solutions, just adapting them to the setup at my current working place. If I modify something, it’s because it doesn’t work on our OS or with our hardware. Most of the bits posts are more of what I think a blog should show: raw notes to add to an especially obscure HOWTO, or the path I chose to fix an error. The world is a big place and an error may appear on our systems earlier than anywhere else; we try to be the second on everything (the first being the one designing the solution). Therefore my post may appear in a random search as a valid solution for a real problem, bringing in potential readers, at least until the solution is posted in a more specific blog or media.

What about my dragons? Are they different now? Definitely. I believe it’s hard for a casual visitor to read a story that has been going on for months. Of course a casual reader may enjoy today’s chapter, but unless I create a specific section or tag, it’s not so easy, even for me, to grasp the whole story. Going back in time to refresh my memory of what’s going on needs a lot of planning. You don’t need that if a post is unique, maybe part of the same universe, so to say, or if it’s like this one, about a specific topic, more like a column in the news. Anyway, I never had 300 visits on my dragons. I understand, and they are mostly for me. It relaxes me very much to write about them; it helps me clean up my mind, to make peace with my dark thoughts.

That’s all for today. If you have reached this point, thanks for reading it all, and if you have enjoyed this post, press 👍 or follow my YouTube channel (just joking 😁😁😁).

A browser in a browser: a firefox docker

Different logos. Choose yours. Image taken from here.

I don’t trust flight companies. I have the impression that if I look for a flight and I find a good price but I don’t buy it on the spot, next time I come back the price is gone. This may be because of the cookies, or it can be that the server actually saves the searches coming from the IP you are using, so that “for a better service” they can remember what you searched for before. Or it can be that good prices simply fly away 😁😁😁.

To minimize this issue I have taken a radical approach: I’m using a firefox docker to search for the flight, a container that I destroy after my search. If you have followed my blog, you surely have a user able to run a docker container on your OS, whatever OS it is. I’ve tried several solutions, but the best so far is this one:

docker run -d --name=firefox -p 5800:5800 -v /docker/appdata/firefox:/config:rw --shm-size 2g jlesage/firefox

After this is done, open your browser (chrome, konqueror, safari, it doesn’t matter) and type your IP or machine name followed by :5800 to reach the firefox docker instance. You will see a browser in the browser, but it delivers. I now feel safer than running in incognito mode or similar. It’s fast, and it’s dirty. The original firefox docker image is here.
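
And the destroy-after-the-search part, which is the whole point of the exercise, is just this (the container name comes from the --name flag above; the leftover appdata is handled in the EDIT below):

docker stop firefox && docker rm firefox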

If you want a full desktop in your browser, you can also have it; I wrote about it a long time ago. Unfortunately, a docker-as-app solution, when run as a user, throws me a display error. Like this:

(firefox:1): Gtk-WARNING **: 11:35:33.496: 
Locale not supported by C library.
Using the fallback 'C' locale.
Unable to init server:
Broadway display type not supported: 1.2.3.4:0
Error: cannot open display: 1.2.3.4:0

As root, of course, it runs fine, but we are not always root, are we? 😉

EDIT: If you want to completely delete the firefox history, you will also need to delete the appdata. To play it completely safe, do:

docker system prune -a;

/docker/appdata ## > rm -rf firefox/

Careful: system prune will remove ALL your docker stuff 😉