StrongSwan: Encryption not supported

Introduction

IPsec always sounded like a nightmare to me; as a long-time user of OpenVPN I never understood why it has to be so complicated.

But here come GCE, AWS and customers asking me to join on-premise networks to their cloud provider. There's no alternative here but IPsec. If you don't want to do it on Cisco (or similar devices) there's StrongSwan on Linux, but there's a huge pitfall and I wanted to write about it.

Usually, you start having cipher negotiation issues, and StrongSwan logging is, to say the least, not helpful.

Debian tricked me

Today I was connecting a Google Cloud VPN to a Debian-based gateway with StrongSwan and, as expected, I got a cipher issue:

So the first question was: is AES_GCM_16 not supported on my side or on Google's side? The cryptic messages did not help, but I assumed it was on my side, as when it comes from the other side the message is usually "NO_PROPOSAL_CHOSEN".

How do I check supported ciphers?
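My gateway uses the classic ipsec/stroke interface, so the loaded algorithms can be listed like this (a swanctl-based setup would use swanctl --list-algs instead):

ipsec listalgs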

Indeed, I cannot see anything related to GCM, which looks like the root of my issue.

Investigations and resolution

I first wanted to verify that this cipher is supposed to be supported by StrongSwan and found the answer here:

https://wiki.strongswan.org/projects/strongswan/wiki/IKEv2CipherSuites

It needs the "aes" and "gcm" plugins, so let's read more about StrongSwan plugins:

https://wiki.strongswan.org/projects/strongswan/wiki/PluginList

OK, that's interesting. The aes plugin should be fine, but gcm is disabled by default. I need to check what Debian did here and verify that the gcm plugin is enabled.

Debian developers usually store their packaging files on Debian's public GitLab server, which is called Salsa. The strongswan packaging is hosted there, so it was easy to check:

https://salsa.debian.org/debian/strongswan/blob/debian/master/debian/rules

The debian/rules file is in charge of building the package, so this is where you want to check the options passed to the autotools configure script. Here we can clearly see the gcm plugin is explicitly enabled.

I also checked debian/changelog to be sure it was already enabled in the stable version of the package available in Debian Buster, and yes, it was…

So why did I miss it?

I checked the installed strongswan packages and found *libstrongswan*; let's see its content.
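Something along these lines (the plugin files live under /usr/lib/ipsec/plugins/ on Debian):

dpkg -l | grep strongswan
dpkg -L libstrongswan | grep '/plugins/'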

This is very interesting! The aes plugin file is here, but no gcm one. Let's check if it's available (I'll just guess its name) anywhere in the Debian archive.
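Assuming apt-file is installed (apt install apt-file && apt-file update), we can search the archive for the plugin file name I'm guessing:

apt-file search libstrongswan-gcm.so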

I felt stupid here… It is quite common in Debian to separate the "standard" files from the "minimal" ones.
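The guessed name turned out to be libstrongswan-standard-plugins, so (the service may be named strongswan-starter depending on the release):

apt install libstrongswan-standard-plugins
systemctl restart strongswan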

After installing the package and restarting strongswan, the list of supported ciphers is quite different:

And the VPN now works!

DKIM signature in Postfix on Debian

Introduction

DKIM works by plugging a mail filter into Postfix that signs outgoing emails with a private key; here we'll use OpenDKIM for that.

The matching public key is published in the DNS so the receiving SMTP server can verify that the signature in the headers matches the public key.

It helps your emails not being considered as spam.

A friend asked me to set this up on his two servers with two different domains, so we'll be setting up the whole thing here with separate key pairs for each domain and each server, and we'll allow both servers to sign emails for these domains.

Install required packages
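On Debian this boils down to:

apt install opendkim opendkim-tools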

Create public/private key pairs

According to /usr/share/doc/opendkim/README.Debian.gz there is a tool named opendkim-genkey that can help generate the key pairs. This tool is in the opendkim-tools package.

As we want to allow different servers to sign messages with different keys for the same domain, we'll use the originating server's short hostname as the selector. It means the signature will be prefixed with this selector, indicating to the receiving SMTP server which DNS entry should be queried to get the associated public key. By doing so, we'll be able to export a different public key for each server.
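A minimal sketch, assuming the two domains are domain1.com and domain2.com and the selector is this server's short hostname:

SELECTOR="$(hostname -s)"
for DOMAIN in domain1.com domain2.com; do
    mkdir -p "/etc/dkimkeys/${DOMAIN}"
    # -D = output directory, -d = domain, -s = selector
    opendkim-genkey -D "/etc/dkimkeys/${DOMAIN}" -d "${DOMAIN}" -s "${SELECTOR}"
done
chown -R opendkim:opendkim /etc/dkimkeys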

Now you should see the private keys, as well as the public keys as bind9 snippets, in /etc/dkimkeys:
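For example (with a hypothetical server1 selector):

ls -lR /etc/dkimkeys
# domain1.com/server1.private  domain1.com/server1.txt
# domain2.com/server1.private  domain2.com/server1.txt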

Register public key in DNS zones

Now you need to add the public key in your DNS zones. In this example, the primary bind server for both domain1.com and domain2.com is running on the server itself so we can just do:
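Something like this, assuming classic flat zone files (the zone file paths are illustrative):

cat /etc/dkimkeys/domain1.com/server1.txt >> /etc/bind/db.domain1.com
cat /etc/dkimkeys/domain2.com/server1.txt >> /etc/bind/db.domain2.com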

It is very unlikely that this method is suitable for you, but you get the idea, right?

Don't forget to bump the DNS zone serial numbers and reload bind:
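For bind9 that's simply:

rndc reload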

We can now check with dig that our DNS server exposes the public key:
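Still assuming server1 as the selector:

dig +short TXT server1._domainkey.domain1.com @localhost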

Should return something like:
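Something in this spirit (the p= value is a long base64 blob, truncated here):

"v=DKIM1; h=sha256; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOC...IDAQAB"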

Configure OpenDKIM

Now we need to create a KeyTable file to match domain, selector and private key file. We also need a SigningTable to actually ask for signatures to be added to outgoing emails.

In /etc/opendkim.conf, add the following entries at the bottom of the file:
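These are the standard OpenDKIM table directives:

KeyTable        /etc/dkimkeys/KeyTable
SigningTable    refile:/etc/dkimkeys/SigningTable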

Then we'll create the /etc/dkimkeys/KeyTable file.

The file now looks like:
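With the hypothetical server1 selector and the key locations used above:

server1._domainkey.domain1.com domain1.com:server1:/etc/dkimkeys/domain1.com/server1.private
server1._domainkey.domain2.com domain2.com:server1:/etc/dkimkeys/domain2.com/server1.private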

Now we create the /etc/dkimkeys/SigningTable file.

The file should contain:
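Again with the same assumptions (the refile: prefix is what allows the *@domain wildcards):

*@domain1.com server1._domainkey.domain1.com
*@domain2.com server1._domainkey.domain2.com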

OpenDKIM is now configured; restart it:
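On a systemd-based Debian:

systemctl restart opendkim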

Integrate with Postfix

On Debian systems Postfix is chrooted, so there are a few additional steps to get it working correctly.

In /etc/opendkim.conf change the socket path to point inside the Postfix chroot:
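The usual value when Postfix lives in /var/spool/postfix is:

Socket local:/var/spool/postfix/opendkim/opendkim.sock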

Create the proper folder in the Postfix chroot and give it proper permissions:
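For example:

mkdir -p /var/spool/postfix/opendkim
chown opendkim:opendkim /var/spool/postfix/opendkim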

Add Postfix to opendkim group so it can write to the socket:
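On Debian:

adduser postfix opendkim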

Enable filtering in Postfix (the postconf commands will edit /etc/postfix/main.cf):
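A sketch matching the socket path above; the path is resolved relative to the Postfix chroot (the queue directory):

postconf -e milter_default_action=accept
postconf -e smtpd_milters=unix:opendkim/opendkim.sock
postconf -e non_smtpd_milters=unix:opendkim/opendkim.sock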

The missing leading / in the socket path is not a typo!

Restart both services:
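With systemd:

systemctl restart opendkim postfix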

Testing

You can send an email from the server itself using the following command:
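Assuming bsd-mailx or mailutils is installed:

echo "DKIM signature test" | mail -s "DKIM test" your@real.email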

My your@real.email server runs Postfix with Amavis, so I can check the headers of the email I just received and confirm a valid DKIM signature has been seen:
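The interesting header looks something like this (illustrative):

Authentication-Results: mail.example.net (amavisd-new); dkim=pass (2048-bit key) header.d=domain1.com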

Update all DELL PowerEdge BIOS from a PXE Live Ubuntu

Quick notes from what I did today. The starting point was that I was unable to update the iDRAC Express firmware (upload stuck at 95%), and these Express cards don't offer a way to upgrade the other firmwares anyway.

So I PXE-booted the server on an Ubuntu 16.04 live CD, and here's what I had to do to be able to upgrade everything using the "BIN" files (for RedHat) downloaded from the Dell support website.

For the record, I was upgrading the following:

  • R610_BIOS_C6MRW_LN_6.4.0.BIN: PowerEdge R610
  • ESM_Firmware_9GJYW_LN32_2.90_A00.BIN: Embedded iDRAC controller
  • Network_Firmware_35RF5_LN_7.12.19.BIN: Additional 4x 1Gb Broadcom card
  • Network_Firmware_82J79_LN_08.07.26_A00-00.BIN: Embedded Intel 1Gb NICs
  • SAS-RAID_Firmware_9FVJ2_LN_12.10.7-0001_A13.BIN: Embedded PERC H700 RAID controller
  • SAS-Drive_Firmware_M2P11_LN_AS0D_A00.BIN: Seagate ST91000640SS SAS 1TB disks

Disable the CDROM entry in sources.list

sed -i '/^deb cdrom/d' /etc/apt/sources.list

Enable universe repository

sed -i 's/main/main universe/' /etc/apt/sources.list

Enable my HWRaid repository

echo "deb [trusted=yes] http://hwraid.le-vert.net/ubuntu xenial main" >> /etc/apt/sources.list

Install megaclisas-status to check the drive model

apt update
apt install megaclisas-status

Make sure /bin/sh is provided by bash (required by all updates)

dpkg-reconfigure dash

Select “No”

Install the dependencies required by the BIN files (needed by the iDRAC firmware at least).
This requires enabling 32-bit support first.

dpkg --add-architecture i386
apt update
apt install libstdc++6:i386 rpm

Fake the system as being a RedHat one (required by R610 BIOS)

echo "Red Hat Enterprise Linux Server release 6.3 (Santiago)" > /etc/redhat-release

Although the tools say you'll have to reboot between each update, it's not necessary. However, double-check after the reboot that everything has been applied by re-running the BIN files. I had to run the Broadcom NIC update twice: the first attempt did nothing (the old version still showed up after reboot).

PXE boot Debian installer with firmwares

This article is mostly a reminder for me but might be useful to others.

1. Install/prepare TFTP server

First we need a tftp server (one with symlink support is better) and a network bootloader (pxelinux):

apt install tftpd syslinux-common pxelinux

Then we'll link the minimum files required to display a text menu for our PXE entries:

ln -s /usr/lib/PXELINUX/pxelinux.0                /srv/tftp/
ln -s /usr/lib/syslinux/modules/bios/ldlinux.c32  /srv/tftp/
ln -s /usr/lib/syslinux/modules/bios/libutil.c32  /srv/tftp/
ln -s /usr/lib/syslinux/modules/bios/menu.c32     /srv/tftp/

And create a default menu booting normally after a 5-second timeout:

mkdir /srv/tftp/pxelinux.cfg

cat << 'EOF' > /srv/tftp/pxelinux.cfg/default
DEFAULT menu.c32

LABEL Normal boot
LOCALBOOT 0

PROMPT 0
TIMEOUT 50
EOF

Additional keymaps

For Luxembourgish or Swiss people, or whoever may use a different keyboard layout, we'll add the ability to select a keymap (in case you want to edit an existing boot entry).

We need lilo here because it includes a (broken) tool to create syslinux compatible keymaps.

apt install kbd console-data lilo

Patch the broken keytab-lilo tool using the following diff (found at https://bugzilla.syslinux.org/show_bug.cgi?id=68):

patch /usr/sbin/keytab-lilo << 'EOF'
--- orig	2016-07-15 22:14:28.000000000 +0200
+++ new		2018-01-12 21:37:16.349203039 +0100
@@ -44,9 +44,9 @@
     $empty = 1;
 	while (<FILE>) {
 	chop;
-	if (/^(static\s+)?u_short\s+(\S+)_map\[\S*\]\s+=\s+{\s*$/) {
+	if (/^(static\s+)?(u_|unsigned )short\s+(\S+)_map\[\S*\]\s+=\s+{\s*$/) {
 	    die "active at beginning of  map" if defined $current;
-	    $current = $pfx.":".$2;
+	    $current = $pfx.":".$3;
 	    next;
 	}
 	undef $current if /^};\s*$/;
EOF

Create the keymaps you want (French AZERTY here, and Swiss QWERTZ with French variant).
For the French AZERTY I found a few additional remappings here: https://forums.archlinux.fr/viewtopic.php?t=11953

mkdir /srv/tftp/keymaps
keytab-lilo -p 60=46 -p 92=60 -p 124=62 \
            /usr/share/keymaps/i386/qwerty/us.kmap.gz \
            /usr/share/keymaps/i386/azerty/fr-latin1.kmap.gz \
            > /srv/tftp/keymaps/fr.ktl
keytab-lilo /usr/share/keymaps/i386/qwerty/us.kmap.gz \
            /usr/share/keymaps/i386/qwertz/fr_CH-latin1.kmap.gz \
            > /srv/tftp/keymaps/fr-ch.ktl

Uninstall lilo:

apt purge lilo

And add entries in the menu to select an alternative keymap:

ln -s /usr/lib/syslinux/modules/bios/kbdmap.c32  /srv/tftp/
ln -s /usr/lib/syslinux/modules/bios/libcom32.c32  /srv/tftp/
cat << 'EOF' >> /srv/tftp/pxelinux.cfg/default

LABEL Switch to french AZERTY
KERNEL kbdmap.c32
APPEND keymaps/fr.ktl

LABEL Switch to swiss french QWERTZ
KERNEL kbdmap.c32
APPEND keymaps/fr-ch.ktl
EOF

Debian installer with non-free firmware

Extract kernel and initrd from the Debian netboot archive:

mkdir /srv/tftp/debian-stretch-amd64-netinstall
wget -q -O- http://ftp.fr.debian.org/debian/dists/stretch/main/installer-amd64/current/images/netboot/netboot.tar.gz \
  | tar xvz -C /srv/tftp/debian-stretch-amd64-netinstall/ \
            ./debian-installer/amd64/initrd.gz \
            ./debian-installer/amd64/linux \
            --strip-components=3

Download firmware file and append to original initrd:

wget -q -O /srv/tftp/debian-stretch-amd64-netinstall/firmware.cpio.gz \
  http://cdimage.debian.org/cdimage/unofficial/non-free/firmware/stretch/current/firmware.cpio.gz

cat /srv/tftp/debian-stretch-amd64-netinstall/initrd.gz \
    /srv/tftp/debian-stretch-amd64-netinstall/firmware.cpio.gz \
    > /srv/tftp/debian-stretch-amd64-netinstall/initrd_firmware.gz

And add it to the boot menu:

cat << 'EOF' >> /srv/tftp/pxelinux.cfg/default

LABEL Debian Stretch NetInstall (firmware) amd64
KERNEL debian-stretch-amd64-netinstall/linux
APPEND initrd=debian-stretch-amd64-netinstall/initrd_firmware.gz
EOF

Mimic IPv4 docker behavior for IPv6 with Shorewall and NAT

Hi,

Docker IPv6 support is messed up. Instead of sharing a non-routed IPv6 prefix between host and containers, just like it does for IPv4, the docker team decided to implement full IPv6 support, with routing and everything.
The point is: most people just want basic IPv6 inside the container (in my case I'm doing monitoring checks) and would love to have it working out of the box. Honestly, who wants a dedicated routed prefix for their IPv6 containers? Certainly not me.

So here's the trick: we're going to use a chosen prefix between the docker daemon and its host, and we'll use shorewall6 to NAT it when going outside. Basically it'll mimic the IPv4 behavior.

So first, make sure IPv6 is working correctly on the host (actually, while writing this doc, it was not on my sample setup :D):
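A quick sanity check, for instance:

ping6 -c 3 google.com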

Then, define an IPv6 prefix to be used inside docker by editing /etc/docker/daemon.json.

Create the file if it does not exist yet:
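A minimal example using the prefix that gets NATed later in this article (any non-routed /64 of your choice works):

{
  "ipv6": true,
  "fixed-cidr-v6": "2a00:1450::/64"
}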

Restart docker daemon:
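With systemd:

systemctl restart docker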

Now your container should have an IP in that range (and a default gateway):
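For example, using a throwaway busybox container:

docker run --rm busybox ip -6 addr show dev eth0
docker run --rm busybox ip -6 route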

Your host should be able to reach the container too:
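Replace the address with the one shown by the previous command (this one is purely illustrative):

ping6 -c 3 2a00:1450::242:ac11:2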

Here we go… Now all we need to do is make a source NAT of 2a00:1450::/64 when going out of eth0 on the host server.

For this, we'll use shorewall6:
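On Debian:

apt install shorewall6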

And we’ll create a default configuration based on provided examples:
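The Debian package ships example configurations, something like:

cp /usr/share/doc/shorewall6/examples/one-interface/* /etc/shorewall6/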

Default configuration can work but I’ll tweak it a bit.

In /etc/shorewall6/interfaces I renamed the interface zone to loc instead of net, disabled router advertisements (the IPv6 gateway is statically defined in /etc/network/interfaces), and I need to define the docker bridge interface:
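A sketch of what it could look like (format 2, without the legacy BROADCAST column; zone names and options are assumptions to adapt):

# ZONE   INTERFACE   OPTIONS
loc      eth0        tcpflags
doc      docker0     tcpflags,routeback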

In /etc/shorewall6/zones there must be one ipv6 zone for each interface:
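Using the same zone names as above:

# ZONE   TYPE
fw       firewall
loc      ipv6
doc      ipv6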

In /etc/shorewall6/policy I give the host server full access to any network, docker gets access to the local network (but not to its host), and I block everything else:
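A sketch of that policy:

# SOURCE   DEST   POLICY   LOGLEVEL
$FW        all    ACCEPT
doc        loc    ACCEPT
doc        $FW    DROP     info
all        all    DROP     info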

Then, I create a few overrides in /etc/shorewall6/rules to authorize Ping and SSH from the local network:
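Roughly, using Shorewall's standard macros and explicit ports for the rest (ports and zones are assumptions to adapt):

# ACTION        SOURCE   DEST   PROTO   DPORT
Ping(ACCEPT)    loc      $FW
SSH(ACCEPT)     loc      $FW
ACCEPT          doc      $FW    tcp     3306    # MariaDB on the host
ACCEPT          loc      $FW    tcp     5666    # Nagios NRPE
ACCEPT          loc      $FW    udp     161     # SNMP
ACCEPT          doc      $FW    tcp     5666    # NRPE from the monitoring container
ACCEPT          doc      $FW    udp     161     # SNMP from the monitoring container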

As you can see, there are a few more rules: at least one container uses a MariaDB server on the host, so I permit that. I also permit Nagios NRPE and SNMP from the local network to the host (fw) for monitoring purposes, and I accept the same from the containers to the host because there's a monitoring container that will actually monitor its own host.

Now we need to NAT the private IPv6 subnet used by docker, in /etc/shorewall6/snat:
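With the prefix chosen earlier:

# ACTION       SOURCE           DEST
MASQUERADE     2a00:1450::/64   eth0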

Finally we'll enable IP_FORWARDING in */etc/shorewall6/shorewall.conf* by setting it to Yes, and restart shorewall:
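After editing the file:

systemctl restart shorewall6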

And that’s it:
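A final check from a throwaway container (assuming the busybox image ships ping6):

docker run --rm busybox ping6 -c 3 google.com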

Fixing VMware vCenter template customization for Debian Stretch (nic detected as “ether”)

Hello,

I'm a big fan of Foreman, I use it everywhere to spawn my virtual machines (mostly with VMware vCenter or AWS) and then directly apply Puppet classes to them to get a fully configured new host in a few clicks. Maybe I'll write about it one day, let's see.

Anyway, this week's theme was mostly "Let's upgrade from Jessie to Stretch, I'm craving Python 3.5 and the new async/await syntax".
Sadly, it went wrong. I was unable to use my Foreman anymore against ESX 6.0 because, when injecting the customization XML file (used to define the IP settings within the VM through open-vm-tools), the resulting VM had no network configured.
After looking at what happened, I figured out /etc/network/interfaces had been created wrong: instead of using eth0 (yes, I disabled predictable interface names in my template) it was all set up as if the interface was named ether. Uh?

A quick Google search for "debian stretch vmware ether" led me to the following GitHub bug opened against open-vm-tools. Sadly the issue doesn't come from open-vm-tools: it comes from a VMware script not correctly parsing the current ifconfig output (yeah, I added net-tools to my template too).

Here is an extract of the net-tools package NEWS.Debian file:

Wow, that’s a pretty dangerous move you did here….

The script creating the network configuration is actually a piece of Perl crap copied directly from the vCenter server into the VM filesystem. Yeah, that sounds like black magic, but the good news is that it's Perl, so it's fixable.

So I searched for this “Customization.pm” file on my vCenter Windows server and I found it here:
C:\Program Files\VMware\vCenter Server\vpxd\imgcust\linux\imgcust-scripts\Customization.pm

I managed quite easily to understand what was wrong, and I must say that original output parsing was pretty cheap.
Anyway, here’s a better one that just works:

Nothing to restart, this file is copied every time you apply customization to a template. You'll find attached a text version of the patch: vcenter_Customization_pm.diff

Good luck!

Policy Based Routing (PBR) with Shorewall to migrate a server

Hey,

Today I'm doing pure sysadmin work and I've been asked to migrate several servers from an obsolete IP range (192.168.x) to 10.x. Things were quite easy until I reached the internal mail server, which can be used by hundreds of field devices as a relay. Everybody is supposed to use the DNS entry, but I won't trust that.

So my idea is to switch eth0 to the new network and keep a new eth1 in the old one, to keep the service working and be able to log what's still using the obsolete address.
There's just a little problem: if my default gateway is on eth0, any packet entering eth1 from a routed network (it would work for what's directly connected to the legacy local network) will be answered using the default gateway on eth0. That's asymmetric routing and that just doesn't work.

Okay, so how do I solve that? With Shorewall of course! The idea is to tag any packet entering eth1 with a different mark than the ones coming through eth0, and to provide a different routing table for each mark. I'll do this on CentOS today, but it should be basically the same for any Linux system. Shorewall is usually available everywhere, but you can try doing this by hand with "ip" and "iptables". Looks like a lot of pain to me, though.
Having both addresses routed and working is a nice step, but it's pretty useless if I have no way to find out who's still using the obsolete address, so we'll use Shorewall to log these accesses and create a specific rsyslog/logrotate configuration to get a dedicated log file.

First, we'll change the network configuration to have both interfaces up with a default gateway only on the first interface (connected to the new network). The gateway will later be overridden by Shorewall, but it's always saner to have a default configuration working, even with limited features.

So make sure to create proper ifcfg-eth0 and ifcfg-eth1 files in /etc/sysconfig/network-scripts and make sure to have GATEWAY defined only on the new network. You should also check that the server is reachable on its new address, and reachable on the old address from a machine directly connected to the legacy network.

Let's continue with a very basic Shorewall configuration. yum -y install shorewall and then make sure to have the following files in /etc/shorewall:

  • interfaces – List of network adapter handled by Shorewall
  • policy – Default firewall policies between each zone
  • providers – This one is PBR specific, we’ll use this to mark packets
  • rules – Overrides default policies with port/host rules
  • shorewall.conf – Global settings
  • zones – Map interfaces to firewall zones
If you miss one, copy it from /usr/share/shorewall/configfiles/.

So let's do a few adjustments in shorewall.conf first:

IP_FORWARDING=No (this machine SHOULD never be used as a gateway between the legacy and new networks, we're not here to create security flaws ;-))
DISABLE_IPV6=Yes (sadly, there's no IPv6 here so it's better to let Shorewall4 kill the whole stack)
LOGTAGONLY=Yes (changes the way Shorewall generates log prefixes, otherwise ours will be too long and get shortened)

Now define the interfaces in the interfaces file:
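A sketch (format 2, without the legacy BROADCAST column), with loc as the new network on eth0 and old as the legacy one on eth1:

# ZONE   INTERFACE   OPTIONS
loc      eth0        tcpflags
old      eth1        tcpflags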

And map them to IPv4 zones:
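Using the same zone names:

# ZONE   TYPE
fw       firewall
loc      ipv4
old      ipv4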

fw is a default zone meaning "myself".

And we create a default policy allowing the machine itself to reach the legacy and new network zones, and blocking any incoming packets:
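A minimal version of that policy:

# SOURCE   DEST   POLICY   LOGLEVEL
$FW        all    ACCEPT
all        all    DROP     info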

Finally we'll add a set of default rules, to at least be able to SSH into the server again:
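For instance, from the new network only:

# ACTION        SOURCE   DEST
Ping(ACCEPT)    loc      $FW
SSH(ACCEPT)     loc      $FW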

Just like in the policy file, you can use loc,old if you want to permit ping and SSH from the old network too.

I'll also add a few rules to permit mail-related services from the new zone:
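Using the standard SMTP-related macros, for example:

SMTP(ACCEPT)          loc   $FW
Submission(ACCEPT)    loc   $FW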

Okay, now we can enable and start Shorewall:
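On CentOS 7:

systemctl enable shorewall
systemctl start shorewall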

Now we'll ask Shorewall to mark packets differently according to the incoming interface. This will be done in the providers file:
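Something along these lines (gateway addresses are examples to adapt; depending on your Shorewall version you may also want the track option):

# NAME   NUMBER   MARK   DUPLICATE   INTERFACE   GATEWAY
new      1        1      main        eth0        10.0.0.1
old      2        2      main        eth1        192.168.0.1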

The last column is the gateway to use on each network.

Let's permit mail-related traffic from the legacy network, but ask Shorewall to log these packets. Add the following to the rules file:
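For example, with a log level and the tag that produces the MailMigration prefix seen below:

# ACTION                      SOURCE   DEST   PROTO   DPORT
ACCEPT:info:MailMigration     old      $FW    tcp     25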

Reload Shorewall and try to telnet to tcp/25 from a routed network: both IPs are now working!

If you check /var/log/messages you will see logs like:
Jul 24 16:55:39 mailsrv kernel: Shorewall:MailMigration:ACCE IN=eth1 OUT= MAC=XXX SRC=192.168.55.4 DST=192.168.0.10 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=22405 DF PROTO=TCP SPT=39474 DPT=25 WINDOW=29200 RES=0x00 SYN URGP=0 MARK=0x2

You can also check your routing tables with ip route show table:
ip route show table main shows no more default gateway.
ip route show table 1 shows the local route for the eth0 network and the default gateway of the new network. It'll be used for packets tagged as 1.
ip route show table 2 shows the local route for the eth1 network and the default gateway of the legacy network. It'll be used for packets tagged as 2 (note the log above with MARK=0x2).

Your server is now completely accessible from both networks and you can easily monitor the log file to find clients still using the legacy address. But we can make it a lot easier by asking rsyslog to create a separate log file with these specific messages:

Create /etc/rsyslog.d/mailmigration.conf with the following content:
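A minimal property-based filter (the log file name is of course up to you; drop the "& stop" line if you also want the messages in /var/log/messages):

:msg, contains, "MailMigration" /var/log/mailmigration.log
& stop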

And the associated logrotate file /etc/logrotate.d/mailmigration, to avoid having a single never-ending file:
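For example:

/var/log/mailmigration.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}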

If you want to go further with a more automated way of handling this, I'd definitely suggest having a look at the rsyslog AMQP module to publish events to a RabbitMQ and writing a quick Python consumer to parse and notify "someone" (may I suggest calling some API to create an internal support ticket?) using Pika. The "worker.py" example should be enough for testing, just try/except/ch.basic_nack your handler so the message goes back in the queue in case of failure.

Building native Python 3.4 module with Pip on Windows

Hello,

Yesterday I decided to use the Python setproctitle module in a project to rename the Python script process name (for pretty display in netstat, ps…).
The RPM package for CentOS 7 was done very quickly by modifying the current Fedora dev one to get a Python 3.4 flavor for my old CentOS, but I try to keep my code compatible with Windows too (mostly for my colleagues' development purposes).

As usual, I would just go for "pip install setproctitle" on Windows (I would clearly advise to NEVER do that in production, but it's fine for development).
Sadly it failed with the following error:
error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat).

According to Google this error is quite famous, but most people seem to be trying to fix it without having any clue about what is really going on.
The root of this issue is that renaming a process is something really specific and thus platform dependent. If you look at the setproctitle code you'll see it's all C code with specific sections for each family of operating system. So we are having two issues installing this module on Windows:

  • You need a compiler, but unlike on Linux, you need the same compiler as the one the Python team used when building the Python interpreter you have installed
  • You will probably also need the Windows SDK, because setproctitle is very likely to use low-level Windows headers

According to the pip error message when installing the setproctitle module, I need the Visual Studio 10.0 compiler. Okay.
Thanks to Wikipedia, I'm now aware that version 10.0 is actually Visual Studio 2010.
Microsoft confirms this but adds an interesting piece of information: Visual Studio 2010 is commercial software, so I need a free alternative, which is the "Microsoft Windows SDK for Windows 7 and .NET Framework 4", embedding the Visual Studio 2010 compiler.

I'd suggest getting the ISO version instead, because the previous link is an online installer. It may not work anymore next time you need to install it…

A funny thing: you'll be prompted for three different ISO files without any information about what the difference is… So here is the explanation:

  • GRMSDK_EN_DVD.iso: This is the regular X86 Windows running in 32 bits mode
  • GRMSDKIAI_EN_DVD.iso: Intel Itanium 64 bits, you don’t want that
  • GRMSDKX_EN_DVD.iso: X64 version, that’s probably the one you need

If you get the wrong one, the installer will fail with a weird error message saying there's an MSI file missing!

Before trying to install this, uninstall any "Visual Studio 2010" related software, especially the classic Microsoft Visual Studio 2010 Redistributable x86 and x64 packages, which are very likely to be installed already. Otherwise, the SDK will fail to install without any understandable error message (but you're free to give it a try and figure out what's going on in the Windows Installer log file, good luck).

I also had trouble running the installer from a mapped network drive; you can safely extract the ISO content with 7-Zip, but you might have to copy the folder locally before running it (any feedback would be appreciated in the comments if you give it a try).

You may now think you're ready but wait… There's more.
It seems the Windows SDK package installs a broken Visual Studio distribution: KB2519277 (FIX: Visual C++ compilers are removed when you upgrade Visual Studio 2010 […] if Windows SDK v7.1 is installed). According to the title, it's not exactly what we are doing, but you really need that; get VC-Compiler-KB2519277.exe here:

Last but not least, despite Microsoft releasing a fix to repair a broken Visual Studio installation from the SDK package, they still managed to release it broken: it's not working on x64, there's a missing BAT file to set environment variables when running from an x64 shell 😀 No kidding…

Even worse, it has been reported to Microsoft but they closed the issue with no explanation: https://connect.microsoft.com/VisualStudio/feedback/details/510784/vcvarsall-bat-amd64-environment-is-missing

Luckily, some people at StackOverflow fixed the issue by themselves.

I made a batch script so anyone can just run the script and enjoy the fix:

Again, it cannot be run from a network drive (wtf, Windows, really…) so you'll have to create this script on your desktop with a ".bat" extension and run it with administrator privileges using right click.

Now you can go back to pip and enjoy the package building and installing successfully 🙂

PS: Did I mention the setproctitle 1.1.10 module is real good shit? If you're running tons of Python processes, especially network-related ones, you may benefit from a renamed process when using ps or netstat!

A working Microsoft RDP (remote desktop) client

Hey,

Recent Windows Server releases (like 2012) seem to require some additional features the good old "rdesktop" tool does not handle. Here is what happens when connecting:

Autoselected keyboard map en-us
ERROR: CredSSP: Initialize failed, do you have correct kerberos tgt initialized ?
Failed to connect, CredSSP required by server.

Many people around the Internet suggest disabling something on the server, but it means disabling a security feature. Moreover, you might need to use RDP to disable it (ZeroDivisionError) and you may not be allowed to do so. Anyway, shitty answer.

Here is the proper one: https://github.com/FreeRDP/FreeRDP

This client just works but has the same issue as rdesktop: it's highly stupid. For instance, look at the message above and notice "Autoselected keyboard map en-us".
Sorry, what? Using the en_US locale doesn't mean I'm actually sitting in the United States with a regular ANSI QWERTY keyboard. In fact, I'm not, not at all.
Another issue is the screen size, which seems to always be set to 1024×768, which is a pity nowadays when everybody uses at least a "FullHD" screen.

So I made a shell wrapper script implementing dynamic screen size selection at 90% of your current display (configurable) and setting the right keymap according to your keyboard layout and variant (layout=ch, variant=fr for me, which is a French-oriented QWERTZ layout used in Luxembourg, called "Swiss French" by Windows).

It also features a configuration file to override the defaults, and some handy default options to share the clipboard and home drive with the remote target. All you have to do is put saner-xfreerdp in /usr/local/bin/ and use it instead of the real binary.

Get the script here: https://github.com/eLvErDe/saner-xfreerdp

Here is a very simple usage example:

user@host:~$ saner-xfreerdp -u username -a some-srv-01.domain.lan
INFO: Detected active screen on monitor DVI-0 with width=1920 and height=1200
INFO: Will use resized resolution of 1728×1080

INFO: Running xfreerdp +clipboard +home-drive /u:"username" /v:"some-srv-01.domain.lan" /kbd:"Swiss French" /w:1728 /h:1080

[xrdp logs…]
Password:

Debugging “no output” in Nagios or Centreon

You just set up a new check that was running perfectly fine when run by hand, but fails completely after integration into the monitoring software?
Of course, you suspect that the command actually being run is invalid, due to parameters, quotes, escapes or whatever, but you're having a hard time figuring out what exactly was run…

Been there, done that. But here’s a magic trick:
Let’s do some kind of “ps | tail -f | grep” on the monitoring poller itself:

while true; do ps aux | grep check_script_name | grep -v grep; done

Now, trigger a forced check and get the full command on your terminal.
Some quotes might be missing because ps aux doesn't show the argument separators, but I guess that could be worked around with a real script querying /proc/${pid}/cmdline, which contains \0 argument separators…
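A quick sketch of that idea, assuming pgrep is available on the poller:

# Print every argument of the matching check on its own line,
# using the NUL separators from /proc/<pid>/cmdline
pid=$(pgrep -f check_script_name | head -n 1)
tr '\0' '\n' < "/proc/${pid}/cmdline"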