Arista Test Lab (macOS & VirtualBox)

Due to one of my computers dying recently, I needed to set up an Arista test lab again on macOS Monterey. Luckily the setup script provided by Arista does most of the heavy lifting, but a few tweaks are still needed to get everything working as expected.

You can basically follow the setup guide provided by Arista, but below I will outline a few changes/issues I encountered:

  1. The latest version of vEOS that works with the setup script is 4.25. Using any of the later versions of vEOS will result in a “Timeout waiting for SSH.” error. I suspect you could change some vEOS configuration during the setup process to fix the error, but I wasn’t motivated enough to dig further.
  2. I had a newer version of ‘Packer’ installed via brew (the script provided by Arista expects Packer 1.1.3). This meant I needed to change line 51 of packer-veos/create-veos.py to use a slightly different argument: instead of --parallel=false, the argument needs to be -parallel-builds=1
  3. Newer versions of ‘VirtualBox’ (currently using 6.1) no longer support the --iso argument. This means you need to edit packer-veos/virtualbox/vEOS.json and delete all the lines that look like this:

    "export_opts" : [ "--iso" ],


After making the above changes, the setup script should run without error and your Arista lab will be created. However, there are still a few more steps required to make the lab easier to use.

  1. Configure an IP on each of the Arista nodes created by the setup script. This will allow you to SSH into each node from the macOS terminal instead of using the VirtualBox interface. You can check the network diagram in the setup guide to confirm the network interface names and how they interconnect, but I will give an example for the node vEOS-1.

    In the Mac terminal, first confirm the IP address assigned to the interface vboxnet2:

    $ ifconfig vboxnet2
    vboxnet2: flags=8943 mtu 1500
    ether 0a:00:27:00:00:02
    inet 192.168.58.1 netmask 0xffffff00 broadcast 192.168.58.255


    Then log in to the vEOS-1 node via VirtualBox and assign an IP to the Management1 interface:

    > enable
    # config t
    # int ma1
    # ip addr 192.168.58.3/24
    # write


    You should now be able to SSH into the vEOS-1 node using 192.168.58.3 from the Mac terminal.
  2. I also like to assign a password to the ‘admin’ user and start that user in ‘enable’ mode so that I can SCP files from my Mac to the Arista nodes.

    > enable
    # config t
    # username admin privilege 15 secret yourpasswordhere
    # aaa authorization exec default local
    # write

  3. The last step I like to take is to install the Python client for eAPI (pyeapi). Download pyeapi 0.8.4 to your local Mac, then SCP the file to the vEOS-1 node:

    $ scp pyeapi-0.8.4.tar.gz admin@192.168.58.3:/mnt/flash/

    Then log in to the vEOS-1 node via SSH and enter the bash prompt:

    # bash

    You can then install pyeapi with the following commands:

    $ sudo pip install /mnt/flash/pyeapi-0.8.4.tar.gz

    You also need to configure the Arista node to install pyeapi each time it boots by adding the following to /mnt/flash/rc.eos :

    #!/bin/bash
    sudo pip install /mnt/flash/pyeapi-0.8.4.tar.gz


    Note that some pyeapi setup instructions also state that netaddr has to be installed as a dependency for pyeapi; however, netaddr is already installed by default in vEOS 4.25.
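
Before pyeapi can talk to the node, eAPI itself needs to be enabled on the switch, otherwise pyeapi’s connections will be refused. If the lab images don’t already have it enabled, the standard EOS commands are:

```
> enable
# config t
# management api http-commands
# no shutdown
# write
```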

macOS Backups to Raspberry Pi

I recently got a Raspberry Pi to play with, so I figured it was a good time to set up automatic local backups for my important files. I wrote this simple Bash script (local-backup.sh) and added it as a cron job running every 30 minutes.

#!/bin/bash

# Backup source configs
SOURCE_HOME="/Users/yourusernamehere/"
DIRECTORIES=('Desktop' 'Documents' 'Other' 'Scripts')

# Backup destination configs
USER="rpi-username"
HOST="rpi-host"
DESTINATION="/destination/backup/directory/"

# Exit if a previous run is still going
if [ -f /tmp/local-backup.lock ]
then
  echo "Backups cron already running, exiting"
  exit 0
fi
touch /tmp/local-backup.lock

for dir in "${DIRECTORIES[@]}"
do
  echo "${SOURCE_HOME}${dir}"
  if ! rsync -aP "${SOURCE_HOME}${dir}" "${USER}@${HOST}:${DESTINATION}"
  then
    /usr/local/bin/terminal-notifier -title "Backups cron" -message "'$dir' backup failed"
    rm /tmp/local-backup.lock
    exit 1
  fi
done

rm /tmp/local-backup.lock
exit 0

A few things to note:

  • It assumes you are backing up directories found in your macOS user directory, defined using ‘SOURCE_HOME’.
  • The directories you want to back up are defined using ‘DIRECTORIES’.
  • terminal-notifier is used to send failure notifications to the macOS Notification Center. You can install it using ‘brew install terminal-notifier’.
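
One caveat on the locking: checking for the lock file and then creating it leaves a small race window if two runs start at almost the same time. mkdir is atomic, so a variant like the sketch below closes that gap (the lock path is illustrative):

```shell
# mkdir either creates the lock directory or fails if it already
# exists, so the check and the claim happen in a single atomic step
if ! mkdir /tmp/local-backup.lock.d 2>/dev/null; then
  echo "Backups cron already running, exiting"
  exit 0
fi
# Always release the lock when the script exits, even on failure
trap 'rmdir /tmp/local-backup.lock.d' EXIT

echo "backup would run here"
```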

I also recommend creating a new user (without sudo rights) on the Raspberry Pi to connect to with rsync. That way you can use a passphrase-less key pair and limit the fallout if the username and key pair are ever compromised.

  • The rpi-host details are defined in ~/.ssh/config (IdentityFile, Port, HostName, etc.)

Now you just need to add a cron job to run the script every 30 minutes. In Terminal run ‘crontab -e’ and add the following line:

*/30 * * * * /Users/yourusernamehere/local-backup.sh > /Users/yourusernamehere/local-backup.log 2>&1

The cron job also writes the script’s output to local-backup.log to help with any debugging.

The last step is to allow cron Full Disk Access in the macOS ‘Security & Privacy’ settings, otherwise the backup script will fail to run due to permission errors.

macOS Dock Fix

If you’re anything like me, you probably find the default macOS Dock auto hide/reveal to be painfully slow. Thankfully you can fix that by entering the following two lines in Terminal:

defaults write com.apple.dock autohide-delay -float 0 && killall Dock
defaults write com.apple.dock autohide-time-modifier -int 1 && killall Dock

The first line sets the delay before the Dock starts to hide/reveal (0 sec). The second line sets how long the hide/reveal animation runs (1 sec).

You can also easily revert the change and go back to the default values by running the following two lines in Terminal:

defaults delete com.apple.dock autohide-delay;killall Dock
defaults delete com.apple.dock autohide-time-modifier;killall Dock

Homebrew MTR Fix

MTR is a useful tool for diagnosing network issues between two points. It’s also very easy to install on macOS using Homebrew:

$ brew install mtr

However, when you try to run the mtr command, one of the following errors is returned:

mtr: command not found
or
mtr: Failure to start mtr-packet: Invalid argument

This occurs because Homebrew installs mtr in the /usr/local/sbin directory instead of the more commonly used /usr/local/bin directory. To stop the errors from occurring, you just need to add /usr/local/sbin to your shell $PATH variable. Simply add the following line to ~/.bash_profile :

export PATH="$PATH:/usr/local/sbin"
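
If your profile gets sourced more than once (tmux panes, nested shells), the line above will keep appending duplicate entries. A guarded variant only adds the directory when it is missing:

```shell
# Append /usr/local/sbin to PATH only if it is not already present
case ":$PATH:" in
  *:/usr/local/sbin:*) ;;   # already on PATH, nothing to do
  *) export PATH="$PATH:/usr/local/sbin" ;;
esac
```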

Yubikey & gpg-agent hanging

I found that after opening many SSH sessions using a Yubikey for authentication, gpg-agent would bork out and take increasingly long to authenticate. Eventually providing the authentication key took so long that opening new SSH sessions would simply time out. The temporary fix was killing and restarting gpg-agent, but luckily the permanent fix for the issue is very simple. All you need to do is create/edit the file ~/.gnupg/scdaemon.conf and add the following line:

disable-ccid

That’s it! You can now authenticate using your Yubikey as many times as you like without gpg-agent grinding to a halt.

CentOS Security Updates

CentOS is one of my favourite Linux distros and a great option for anyone wanting to run an incredibly stable server. However, there is one area where CentOS is lacking: a default mechanism to view and install security patches. To make matters worse, it actually looks like there is a way to install security updates using yum:

$ sudo yum --security update 

The problem is that the above command doesn’t actually do anything, and unless you specifically research the issue, chances are you would never know. This problem has caused widespread confusion, and many otherwise good Linux resources online are also unaware of the limitation. You can find countless guides on the web which instruct CentOS admins to configure updates using ‘yum --security update‘, but these guides result in a false sense of security where server admins believe updates are occurring which never actually happen.

So why doesn’t ‘yum --security update‘ work? The default CentOS repos do not contain the metadata which specifies whether an update is security-related or not. As a result, ‘yum --security update‘ runs without error, but it never finds any security-related updates and simply reports that the system is fully patched.

This has been confirmed multiple times on the CentOS forums by the moderator TrevorH, e.g.:

There is no security metadata in the CentOS yum repos which means that running yum --security update does nothing useful.

https://forums.centos.org/viewtopic.php?t=70084#p294493

Once you are aware of the issue, it is quite simple to verify it for yourself.

$ sudo yum --security update
Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.trouble-free.net
 * epel: fedora-epel.mirrors.tds.net
 * extras: mirror.cc.columbia.edu
 * updates: mirror.centos.iad1.serverforge.org
<SNIP>
No packages needed for security; 233 packages available

Here you can see that ‘yum --security update‘ states that no security updates are needed. But with a bit more digging we can prove this is incorrect.

$ yum list available | grep bash
bash.x86_64                               4.2.46-34.el7                  base

Checking the CentOS security advisories we can see that the ‘bash.x86_64 4.2.46-34.el7’ update is in fact a security update: https://lists.centos.org/pipermail/centos-cr-announce/2020-April/012412.html

One solution to this problem would be simply to run ‘yum update‘ to update all packages, and given the nature of CentOS that would actually be fairly safe. However, the paranoid sysadmin in me would much prefer to automatically update only security patches. After looking into the various solutions, I decided the most straightforward way of implementing automatic security updates is to use the centos-package-cron GitHub project.

This project works by comparing the CentOS/RHEL security advisories to the packages installed on the server and it is very simple to install and use.

centos-package-cron installation

All you need to do is grab the release RPM from their GitHub page and install it using yum:

$ curl -L "https://github.com/wied03/centos-package-cron/releases/download/releases%2F1.0.10/centos-package-cron-1.0-10.el7.centos.x86_64.rpm" > centos-package-cron-1.0-10.el7.centos.x86_64.rpm
$ sudo yum localinstall centos-package-cron-1.0-10.el7.centos.x86_64.rpm

Using centos-package-cron

Once centos-package-cron is installed, you can get a very detailed report of all outstanding security updates simply by running:

$ centos-package-cron --output stdout --forceold

However, if you want to automate security updates, all we really need is a list of the packages that need a security update. We can get that easily enough by piping the centos-package-cron output through a few filters:

$ centos-package-cron --output stdout --forceold | awk '/Packages:/,/References:/' | grep -o "[^* ]*" | grep -v 'Packages:' | grep -v 'References:' | sort | uniq
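
To see what each stage of that pipeline contributes, you can feed it a mocked fragment of centos-package-cron output (the package lines below are invented for illustration): the awk range keeps everything between ‘Packages:’ and ‘References:’, grep -o splits out the package names, and the remaining greps drop the two marker lines.

```shell
printf '%s\n' \
  'Packages:' \
  '* bash-4.2.46-29.el7_4' \
  '* curl-7.29.0-42.el7' \
  'References:' \
| awk '/Packages:/,/References:/' \
| grep -o "[^* ]*" \
| grep -v 'Packages:' | grep -v 'References:' \
| sort | uniq
# → bash-4.2.46-29.el7_4
# → curl-7.29.0-42.el7
```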

Let’s do some more testing and see what happens when we run the above command on the same server where ‘yum --security update‘ stated that no security patches needed to be installed:

$ sudo centos-package-cron --output stdout --forceold | awk '/Packages:/,/References:/' | grep -o "[^* ]*" | grep -v 'Packages:' | grep -v 'References:' | sort | uniq

bash-4.2.46-29.el7_4
bind-libs-lite-9.9.4-51.el7
bind-license-9.9.4-51.el7
curl-7.29.0-42.el7
expat-2.1.0-10.el7_3
file-5.11-33.el7
file-libs-5.11-33.el7
gettext-0.19.8.1-2.el7
gettext-libs-0.19.8.1-2.el7
httpd-2.4.6-67.el7.centos.5
httpd-tools-2.4.6-67.el7.centos.5
kernel-3.10.0-693.2.2.el7
kernel-3.10.0-693.el7
kernel-headers-3.10.0-693.2.2.el7
kernel-tools-3.10.0-693.2.2.el7
kernel-tools-libs-3.10.0-693.2.2.el7
libcurl-7.29.0-42.el7
libX11-1.6.5-1.el7
libX11-common-1.6.5-1.el7
libxml2-2.9.1-6.el7_2.3
libxml2-python-2.9.1-6.el7_2.3
mariadb-libs-5.5.56-2.el7
nss-3.28.4-12.el7_4
nss-softokn-3.28.3-8.el7_4
nss-softokn-freebl-3.28.3-8.el7_4
nss-sysinit-3.28.4-12.el7_4
nss-tools-3.28.4-12.el7_4
nss-util-3.28.4-3.el7
php-5.4.16-42.el7
php-cli-5.4.16-42.el7
php-common-5.4.16-42.el7
php-fpm-5.4.16-42.el7
polkit-0.112-12.el7_3
python-2.7.5-58.el7
python-libs-2.7.5-58.el7
python-perf-3.10.0-693.2.2.el7
rsyslog-8.24.0-12.el7
shared-mime-info-1.8-3.el7
sqlite-3.7.17-8.el7
sudo-1.8.19p2-11.el7_4
telnet-0.17-64.el7
unzip-6.0-20.el7

As you can see, the results are very different: ‘yum --security update‘ identifies 0 packages with security updates, whereas centos-package-cron finds 41 packages with security updates. You may also have noticed that centos-package-cron correctly identified that bash-4.2.46-29.el7_4 needs to be updated.

Now, if we want completely automated updates, we can set up a daily cron which pipes the above results straight to yum update, like this:

$ sudo centos-package-cron --output stdout --forceold | awk '/Packages:/,/References:/' | grep -o "[^* ]*" | grep -v 'Packages:' | grep -v 'References:' | sort | uniq | xargs -r yum -y update
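
To wire this up as a daily job, a crontab entry for root along these lines would work (the schedule here is illustrative; add it via ‘sudo crontab -e’):

```
# Apply outstanding security updates every day at 03:30
30 3 * * * centos-package-cron --output stdout --forceold | awk '/Packages:/,/References:/' | grep -o "[^* ]*" | grep -v 'Packages:' | grep -v 'References:' | sort | uniq | xargs -r yum -y update
```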

DIY Simple Whitelist API Tutorial Part 3

Now that we have a list of whitelisted IPs, it makes sense to create another endpoint that returns all the whitelisted IPs. Any servers under our control can then send requests to this endpoint to receive the current IP whitelist and add the entries to iptables or other ACLs. This endpoint will be created in the file whitelist-get.php.

The first thing I realised is that I want successful authentication to occur before returning a list of whitelisted IPs. Instead of copy/pasting the existing authentication code into the new file and duplicating it, it is better to move the authentication code into its own file (auth.php) and then require auth.php in both whitelist-url.php and whitelist-get.php.

I also decided that the functionality of whitelist-url.php and whitelist-get.php is quite distinct with no real overlap, so it makes sense to have two user types. The first type is end users who only need to whitelist their IPs, and the second is servers which only need to grab the current list of whitelisted IPs. As such, I have separated their authentication file directory structure, which allows slightly different permissions.

So here is the new auth.php file:

<?php

function login($type) {

	function auth_fail() {
		header('HTTP/1.1 403 Forbidden');
		echo "You are forbidden!\n";
		exit;
	}

	if ( !isset($_GET['user']) || !isset($_GET['auth']) )
	{
		auth_fail();
	}

	$user = $_GET['user'];
	$pass = $_GET['auth'];
	if ( $type == "user" )
	{
		$auth_file = "data/auth/user/{$user}";
	}
	elseif ( $type == "infra" )
	{
		$auth_file = "data/auth/infra/{$user}";
	}
	else
	{
		auth_fail();
	}

	if (!file_exists("$auth_file")) {
		auth_fail();
	}

	$userauth = file_get_contents($auth_file);

	if ( $userauth != $pass ) {
		auth_fail();
	}

	return $user;
}

You can see above that I have placed all the authentication code into the login function, which requires a type to be passed to it. The type is either an end user who needs their IP whitelisted (user) or a server that needs the list of IPs to whitelist (infra). The type is then used to determine where the authentication file resides.

I also used this chance to clean up the duplicated code that returned the forbidden response in multiple locations. The 403 Forbidden response is now its own function, auth_fail(), called as needed.

With the authentication now sorted out the rest is very simple in our whitelist-get.php file:

<?php 

# Include authentication code for login function
require 'auth.php';
$user = login("infra");

$path  = 'data/ip-data/';
$files = array_diff(scandir($path), array('.', '..'));

echo "---Whitelisted IPs---\n";

foreach ($files as $file) {
    echo file_get_contents("{$path}{$file}");
}

echo "----End Whitelist----\n";

After including auth.php and setting the user type to infra, all the code does is grab the saved per-user IP whitelist files from ‘data/ip-data/’ and echo their contents. The only extra touch was adding a ‘header’ and ‘footer’ to the IP list in order to allow some basic validation by the servers pulling this data.
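
On the consuming side, that header/footer pair gives a server a cheap sanity check before it trusts the list. The sketch below mocks the HTTP response body rather than fetching it; in practice you would populate $resp with something like curl -s 'https://exampledomain.com/whitelist-get.php?user=...&auth=...' (domain, user and IPs here are placeholders):

```shell
# Mocked response body from whitelist-get.php
resp='---Whitelisted IPs---
203.0.113.7
198.51.100.2
----End Whitelist----'

first=$(printf '%s\n' "$resp" | head -n 1)
last=$(printf '%s\n' "$resp" | tail -n 1)

# Only use the list if both the header and footer are intact
if [ "$first" = '---Whitelisted IPs---' ] && [ "$last" = '----End Whitelist----' ]; then
  # Strip the first and last lines, leaving just the IPs
  printf '%s\n' "$resp" | sed '1d;$d'
else
  echo "whitelist response failed validation" >&2
  exit 1
fi
```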

And just for good measure I have included the updated version of whitelist-url.php after the authentication code was refactored:

<?php 

# Include authentication code for login function
require 'auth.php';
$user = login("user");

$whitelist_file = "data/ip-data/{$user}";
$current_ip = $_SERVER['REMOTE_ADDR'];
$whitelist_max = 5;

if (file_exists($whitelist_file)) {
	$user_whitelist = file($whitelist_file, FILE_IGNORE_NEW_LINES);
}
else {
	$user_whitelist = [];
}

if (in_array($current_ip, $user_whitelist)) {
    echo "$current_ip already whitelisted\n";
    exit;
}

elseif (count($user_whitelist) < $whitelist_max ) {
	array_unshift($user_whitelist , $current_ip);
}
elseif (count($user_whitelist) == $whitelist_max) {
	array_unshift($user_whitelist , $current_ip);
	array_pop($user_whitelist);
}

$i = 0;
while ($i < count($user_whitelist))
{
	if ($i == 0) {
		file_put_contents($whitelist_file, $user_whitelist[$i] . "\n");
	}
	else {
		file_put_contents($whitelist_file, $user_whitelist[$i] . "\n", FILE_APPEND);   
	}
    $i++;
}

echo "$current_ip added to whitelist\n";

?>

The web API for our whitelisting functionality is now complete. End users can whitelist their IPs as needed and our infrastructure servers can request the list of IPs to whitelist. In the 4th and final part of this tutorial I will show how our infrastructure servers can pull this list of IPs and allow access using iptables and ipset.

DIY Simple Whitelist API Tutorial Part 2

OK, so we authenticated the user in part 1; we can now start the process of adding the user’s IP to their whitelist.

if (file_exists($whitelist_file)) {
	$user_whitelist = file($whitelist_file, FILE_IGNORE_NEW_LINES);
}
else {
	$user_whitelist = [];
}

First we check whether the user already has a file of previously whitelisted IPs. If they do, we simply load the existing IPs into an array; otherwise we create a new empty array.

if (in_array($current_ip, $user_whitelist)) {
	echo "IP Already whitelisted";
	print_r($user_whitelist);
	exit;
}
elseif (count($user_whitelist) < $whitelist_max ) {
	array_unshift($user_whitelist , $current_ip);
}
elseif (count($user_whitelist) == $whitelist_max) {
	array_unshift($user_whitelist , $current_ip);
	array_pop($user_whitelist);
}

Above we create the logic for how IPs are added to the whitelist. If the IP has already been whitelisted then we can simply stop here.

As I mentioned previously, I want to make the whitelisting process as painless as possible for the end user. Because of this I decided that the last 5 user IPs (defined earlier using $whitelist_max) should be whitelisted. This helps reduce the need to whitelist every single time they swap between common wifi networks, or when they want multiple devices on different networks whitelisted at the same time.

The final bit of logic handles the whitelisted IP list when 5 IPs are already whitelisted and a new IP is added. The new IP is added to the beginning of the array and then the final array element is removed.

$i = 0;
while ($i < count($user_whitelist))
{
	if ($i == 0) {
		file_put_contents($whitelist_file, $user_whitelist[$i] . "\n");
	}
	else {
		file_put_contents($whitelist_file, $user_whitelist[$i] . "\n", FILE_APPEND);   
	}
    $i++;
}

print_r($user_whitelist);

All that’s left is to save the new IP list to the user’s whitelist file. We iterate through the array: the first loop iteration overwrites the file and subsequent iterations append to the end of it. Finally, the new $user_whitelist is printed so the user can see their current IP whitelist.

And here is the completed PHP file (whitelist-url.php) that authenticates requests and stores successfully whitelisted IPs:

<?php

if ( !isset($_GET['user']) || !isset($_GET['auth']) ) 
{ 
	header('HTTP/1.0 403 Forbidden');
	echo 'You are forbidden!';
	exit;
}

$user = $_GET['user'];
$pass = $_GET['auth'];
$auth_file = "data/auth/{$user}";
$whitelist_file = "data/ip-data/{$user}";
$current_ip = $_SERVER['REMOTE_ADDR'];
$whitelist_max = 5;

if (!file_exists($auth_file)) {
	header('HTTP/1.0 403 Forbidden');
	echo 'You are forbidden!';
	exit;
}

$userauth = file_get_contents($auth_file);

if ( $userauth != $pass ) {
	header('HTTP/1.0 403 Forbidden');
	echo 'You are forbidden!';
	exit;
}

if (file_exists($whitelist_file)) {
	$user_whitelist = file($whitelist_file, FILE_IGNORE_NEW_LINES);
}
else {
	$user_whitelist = [];
}

if (in_array($current_ip, $user_whitelist)) {
	echo "IP Already whitelisted";
	print_r($user_whitelist);
	exit;
}
elseif (count($user_whitelist) < $whitelist_max ) {
	array_unshift($user_whitelist , $current_ip);
}
elseif (count($user_whitelist) == $whitelist_max) {
	array_unshift($user_whitelist , $current_ip);
	array_pop($user_whitelist);
}

$i = 0;
while ($i < count($user_whitelist))
{
	if ($i == 0) {
		file_put_contents($whitelist_file, $user_whitelist[$i] . "\n");
	}
	else {
		file_put_contents($whitelist_file, $user_whitelist[$i] . "\n", FILE_APPEND);   
	}
    $i++;
}

print_r($user_whitelist);

Our API now authenticates user requests and stores a list of whitelisted IPs for each user in a file. In part 3 of the tutorial I will create a new endpoint that returns the list of whitelisted IPs for each user. This can then be used by any server/service we like to pull these trusted IPs and allow them access.

DIY Simple Whitelist API Tutorial Part 1

A small side project I am working on requires some very simple IP whitelisting in order to restrict access. The side project isn’t particularly sensitive, but placing the restriction allows me to more easily set it and forget it while also being a better net citizen (think custom DNS resolvers). Only a small group of personal contacts will be using the whitelisting API, but they will have dynamic IPs and I want to make the process of whitelisting as painless as possible. While this ‘whitelist API’ is far from best practice or Enterprise ready, for my purposes I believe it is sufficiently secure and adds minimal complexity for the end users.

With the idea of making this API as simple for me to create as it is for people to use, I decided to stick with PHP and avoid using a database. I also wanted users to be able to whitelist themselves simply by visiting a bookmark in their web browser of choice, e.g.:

https://exampledomain.com/whitelist-url.php?user=XYZ&auth=123

User Authentication

So let’s get started. It makes sense to begin with the PHP authorization code, as nothing else should happen until the user has successfully authenticated. Note that if I were writing a proper web application, I would not pass credentials using query strings. However, in this case I wanted to keep it nice and simple for the end users, and I ensure that a valid SSL certificate is installed on the website so that the usernames and passwords are not transmitted in plain text.

if ( !isset($_GET['user']) || !isset($_GET['auth']) ) 
{ 
	header('HTTP/1.0 403 Forbidden');
	echo 'You are forbidden!';
	exit;
}

This is a basic check to make sure the request has set a user and auth value in the query string. If these required authentication details are missing, the request receives a 403 response.

$user = $_GET['user'];
$pass = $_GET['auth'];
$auth_file = "data/auth/{$user}";
$whitelist_file = "data/ip-data/{$user}";
$current_ip = $_SERVER['REMOTE_ADDR'];
$whitelist_max = 5;

Now that we know the required values have been submitted, we can set some variables. As I mentioned earlier, I wanted to avoid using a database, and I did not want to hard-code the authorization details in PHP either. So, to make it quick and nasty, I opted to store the user authorization details in files: the filename corresponds to the username and the file content is the password. A similar scheme is used to save the whitelisted IPs: each user has their own file containing a list of their whitelisted IPs.

To break down the above code: the first two lines capture the submitted username and password. The third and fourth lines define the authorization file and whitelist file that should exist for a valid user. The fifth line captures the IP of the user making the request, and the sixth sets the maximum number of IPs that can be whitelisted for a user at any one time.

if (!file_exists($auth_file)) {
	header('HTTP/1.0 403 Forbidden');
	echo 'You are forbidden!';
	exit;
}

$userauth = file_get_contents($auth_file);

If $auth_file does not exist then the username is invalid and the request fails authentication. Once the username is known to be valid, we read the file contents to get the password.

if ( $userauth != $pass ) {
	header('HTTP/1.0 403 Forbidden');
	echo 'You are forbidden!';
	exit;
}

This checks that the file contents match the password submitted by the user.

This actually ended up being much wordier than I was expecting, so I will end part 1 here. In part 2 of this tutorial I will show how I save the whitelisted IPs.

Copyright © 2024 iGeckoDev
