Deploying OpenBSD 6.3 on AWS

Here are the steps for deploying OpenBSD 6.3 on Amazon Web Services. I use it as an SMTP/IMAP server, and it can also be used as a secure jump server.

Roadmap

  • Create a VM on VirtualBox (VBox) running OpenBSD 6.3
  • Prepare the OpenBSD VBox VM to be deployed on AWS
  • Upload the OpenBSD VBox VM to AWS as volume
  • Snapshot and create AMI from the uploaded volume

Steps

Create a VM on VirtualBox (VBox)

I use the /vbox directory as backend storage for VBox disk images, so first I create a disk image for OpenBSD:

$ vboxmanage createhd --format VHD --filename /vbox/openbsd/obsd-disk0 --size 8196
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 75a6caa8-c6ea-4e36-9768-002944d846e7

Create the VBox VM:

$ vboxmanage createvm --name "openbsd-6.3" --ostype OpenBSD_64 --register
Virtual machine 'openbsd-6.3' is created and registered.
UUID: 2760b9d6-1c35-4783-9090-c0cb5f3b35f4
Settings file: '/home/te/VirtualBox VMs/openbsd-6.3/openbsd-6.3.vbox'

Create SATA controller and attach OpenBSD VM virtual disk to it:

$ vboxmanage storagectl openbsd-6.3 --name "SATA Controller" --add sata --controller IntelAHCI
$ vboxmanage storageattach openbsd-6.3 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium /vbox/openbsd/obsd-disk0.vhd

Create IDE controller and attach OpenBSD installation ISO to it:

$ vboxmanage storagectl openbsd-6.3 --name "IDE Controller" --add ide
$ vboxmanage storageattach openbsd-6.3 --storagectl "IDE Controller" --port 0 --device 0 --type dvddrive --medium /vbox/ISO/openbsd/6.3/amd64/install63.iso

Now set some configuration for the VM to work:

$ vboxmanage modifyvm openbsd-6.3 --ioapic on
$ vboxmanage modifyvm openbsd-6.3 --boot1 dvd --boot2 disk --boot3 none
$ vboxmanage modifyvm openbsd-6.3 --memory 768
$ vboxmanage modifyvm openbsd-6.3 --vram 128
$ vboxmanage modifyvm openbsd-6.3 --cpus 2
$ vboxmanage modifyvm openbsd-6.3 --uart1 0x3F8 4

Notes:

  • It’s important to set the CPU count to 2 for the OpenBSD installer to install the SMP kernel
  • It’s important to set COM1 (UART1) to be able to view the console messages

Review:

$ vboxmanage showvminfo openbsd-6.3
Name: OpenBSD 6.3
Groups: /
Guest OS: OpenBSD (64-bit)
UUID: 2760b9d6-1c35-4783-9090-c0cb5f3b35f4
Config file: /home/te/VirtualBox VMs/openbsd-6.3/openbsd-6.3.vbox
Snapshot folder: /home/te/VirtualBox VMs/openbsd-6.3/Snapshots
Log folder: /home/te/VirtualBox VMs/openbsd-6.3/Logs
Hardware UUID: 2760b9d6-1c35-4783-9090-c0cb5f3b35f4
Memory size: 768MB
Page Fusion: off
VRAM size: 8MB
CPU exec cap: 100%
HPET: off
Chipset: piix3
Firmware: BIOS
Number of CPUs: 1
...
IOAPIC: on
BIOS APIC mode: APIC
Time offset: 0ms
RTC: local time
Hardw. virt.ext: on
Nested Paging: on
Large Pages: off
VT-x VPID: on
...
Storage Controller Name (0): SATA Controller
Storage Controller Type (0): IntelAhci
Storage Controller Instance Number (0): 0
Storage Controller Max Port Count (0): 30
Storage Controller Port Count (0): 30
Storage Controller Bootable (0): on
Storage Controller Name (1): IDE Controller
Storage Controller Type (1): PIIX4
Storage Controller Instance Number (1): 0
Storage Controller Max Port Count (1): 2
Storage Controller Port Count (1): 2
Storage Controller Bootable (1): on
SATA Controller (0, 0): /vbox/openbsd/obsd-disk0.vhd (UUID: 75a6caa8-c6ea-4e36-9768-002944d846e7)
IDE Controller (0, 0): /vbox/ISO/openbsd/6.3/amd64/install63.iso (UUID: bef3fcaf-31c1-47e4-96bc-6596ce0dc07c)
NIC 1: MAC: 0800274874D9, Attachment: NAT, Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 1 Settings: MTU: 0, Socket (send: 64, receive: 64), TCP Window (send:64, receive: 64)
NIC 2: disabled
...
Pointing Device: PS/2 Mouse
Keyboard Device: PS/2 Keyboard
UART 1: I/O base: 0x03f8, IRQ: 4, disconnected
UART 2: disabled
UART 3: disabled
UART 4: disabled
LPT 1: disabled
LPT 2: disabled
...
...

Now start the VM and then follow the OpenBSD installation:

$ vboxmanage startvm openbsd-6.3

Inside the OpenBSD VBox VM

Create the ec2-user account and add the following line to /etc/doas.conf so it can use the doas tool:

permit nopass keepenv ec2-user as root
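
A minimal sketch of creating the account and appending that rule, assuming the default ksh shell:

# useradd -m -s /bin/ksh ec2-user
# echo 'permit nopass keepenv ec2-user as root' >> /etc/doas.conf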

Download the file ec2-init.sh from the following URL: https://raw.githubusercontent.com/ajacoutot/aws-openbsd/master/ec2-init.sh
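
One way to fetch it from inside the VM is with OpenBSD's ftp(1), which can speak HTTPS, e.g.:

# ftp -o /usr/local/libexec/ec2-init \
    https://raw.githubusercontent.com/ajacoutot/aws-openbsd/master/ec2-init.sh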

Make sure ec2-init.sh is installed at /usr/local/libexec/ec2-init, and set the necessary ownership and permissions:

# chmod 0555 /usr/local/libexec/ec2-init
# chown root.bin /usr/local/libexec/ec2-init

In /etc/ttys, replace the line that begins with:

tty00 ...

With:

tty00 "/usr/libexec/getty std.9600" vt220 on secure

Add the following lines to /etc/boot.conf:

stty com0 9600
set tty com0

Create the network configuration file /etc/hostname.xnf0 with mode 0640, containing:

dhcp
!/usr/local/libexec/ec2-init
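
For example, a quick way to create it with the right contents and permissions:

# cat > /etc/hostname.xnf0 <<'EOF'
dhcp
!/usr/local/libexec/ec2-init
EOF
# chmod 0640 /etc/hostname.xnf0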

/usr/local/libexec/ec2-init is a cloud-init helper for OpenBSD, responsible for passing AWS instance information to the OpenBSD instance and setting the hostname, instance-id, SSH public key, etc.
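
For reference, the instance metadata the script consumes can also be inspected by hand from a running instance via the EC2 metadata service, e.g.:

# ftp -o - http://169.254.169.254/latest/meta-data/instance-id
# ftp -o - http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key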

Disallow root and password logins over SSH in /etc/ssh/sshd_config:

PermitRootLogin no
PasswordAuthentication no
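
If sshd is already running inside the VM, the new configuration can be applied with rcctl:

# rcctl restart sshd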

Finally, do any necessary package installation and configuration in the OpenBSD VBox VM; this will be our default image for OpenBSD instances created in AWS.
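
For example, an extra package or two can be pulled in with pkg_add (curl here is just an illustration; install whatever your server role needs):

# pkg_add curl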

Uploading OpenBSD image to AWS

I use Ubuntu 18.04 on my personal laptop; to upload the OpenBSD VBox disk image to AWS, the following software is needed:

$ sudo apt install ec2-api-tools ec2-ami-tools
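
Note that ec2-import-volume expects a VHD image; if your VirtualBox disk ended up in another format (VDI, for instance), it can be converted first, e.g.:

$ vboxmanage clonemedium disk /vbox/openbsd/obsd-disk0.vdi \
   /vbox/openbsd/obsd-disk0.vhd --format VHD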

Then execute the following command to upload the image to AWS:

$ export AWS_KEY="YOUR_AWS_KEY"
$ export AWS_SEC="YOUR_AWS_KEY_SECRET"
$ ec2-import-volume --format vhd --volume-size 12 --region \
   us-east-1 --availability-zone us-east-1c \
   --bucket openbsd-tmp-folder --owner-akid $AWS_KEY \
   --owner-sak $AWS_SEC --aws-access-key $AWS_KEY \
   --aws-secret-key $AWS_SEC /vbox/openbsd/obsd-disk0.vhd

Here “us-east-1” and “us-east-1c” are the desired region and availability zone.

The above command uploads the OpenBSD disk image in chunks to the S3 bucket “openbsd-tmp-folder” and then converts it to an AWS volume of 12 GB. The conversion process can be monitored with the command:

$ ec2-describe-conversion-tasks --aws-access-key $AWS_KEY \
   --aws-secret-key $AWS_SEC

Then, depending on preference, we can log in to the AWS console, create a snapshot from the OpenBSD volume, and choose to make an AMI from that snapshot, or use the following commands to create them:

$ ec2-create-snapshot \
   --aws-access-key $AWS_KEY" \
   --aws-secret-key $AWS_SEC \
   --region us-east-1 \
   <VOLUME-NAME>
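
The snapshot's progress can be checked before registering the AMI, for example:

$ ec2-describe-snapshots \
   --aws-access-key $AWS_KEY \
   --aws-secret-key $AWS_SEC \
   --region us-east-1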

$ ec2-register \
   --name "OpenBSD 6.3 AMI" \
   --aws-access-key $AWS_KEY \
   --aws-secret-key $AWS_SEC \
   --region us-east-1 \
   --architecture x86_64 \
   --root-device-name /dev/sda1 \
   --virtualization-type hvm \
   --snapshot <SNAPSHOT-NAME>

Then launch an instance in AWS from that AMI and log in with the ec2-user key; here is my OpenBSD dmesg:

ip-172-30-2-198$ dmesg
OpenBSD 6.3 (GENERIC.MP) #107: Sat Mar 24 14:21:59 MDT 2018
deraadt@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 1056964608 (1008MB)
avail mem = 1017905152 (970MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xeb01f (11 entries)
bios0: vendor Xen version "4.2.amazon" date 08/24/2006
bios0: Xen HVM domU
acpi0 at bios0: rev 2
acpi0: sleep states S3 S4 S5
acpi0: tables DSDT FACP APIC HPET WAET SSDT SSDT
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 32 bits
acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
ioapic0 at mainbus0: apid 1 pa 0xfec00000, version 11, 48 pins
, remapped to apid 1
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz, 2399.73 MHz
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,ABM,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,MELTDOWN
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 100MHz
acpihpet0 at acpi0: 62500000 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
"ACPI0007" at acpi0 not configured
pvbus0 at mainbus0: Xen 4.2
xen0 at pvbus0: features 0x705, 32 grant table frames, event channel 3
xbf0 at xen0 backend 0 channel 5: disk
scsibus1 at xbf0: 2 targets
sd0 at scsibus1 targ 0 lun 0: <Xen, phy hda 768, 0000> SCSI3 0/direct fixed
sd0: 12288MB, 512 bytes/sector, 25165824 sectors
xnf0 at xen0 backend 0 channel 6: address 0e:ac:b7:ee:8a:2a
"console" at xen0: device/console/0 not configured
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel 82441FX" rev 0x02
pcib0 at pci0 dev 1 function 0 "Intel 82371SB ISA" rev 0x00
pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA, channel 0 wired to compatibility, channel 1 wired to compatibility
pciide0: channel 0 disabled (no drives)
pciide0: channel 1 disabled (no drives)
piixpm0 at pci0 dev 1 function 3 "Intel 82371AB Power" rev 0x01: SMBus disabled
vga1 at pci0 dev 2 function 0 "Cirrus Logic CL-GD5446" rev 0x00
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
xspd0 at pci0 dev 3 function 0 "XenSource Platform Device" rev 0x01
isa0 at pcib0
isadma0 at isa0
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
com0: console
pckbc0 at isa0 port 0x60/5 irq 1 irq 12
pckbd0 at pckbc0 (kbd slot)
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pms0 at pckbc0 (aux slot)
wsmouse0 at pms0 mux 0
pcppi0 at isa0 port 0x61
spkr0 at pcppi0
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
root on sd0a (02307a84259f2d52.a) swap on sd0b dump on sd0b
fd0 at fdc0 drive 0: density unknown
fd1 at fdc0 drive 1: density unknown

Using GlusterFS with Docker swarm cluster

In this blog I will create a 3-node Docker Swarm cluster and use GlusterFS to share volume storage across the Swarm nodes.

Introduction

Swarm mode in Docker creates a cluster of Docker hosts to run containers on. The problem at hand is that if container “A” runs on “node1” with the named volume “voldata”, all data changes applied to “voldata” are saved locally on “node1”. If container “A” is shut down and happens to start again on a different node, say “node3”, and mounts the named volume “voldata” again, that volume will be empty and will not contain the changes made to it while it was mounted on “node1”.

In this example I will not use a named volume; instead I will use shared mount storage among the cluster nodes. Of course, the same approach can be applied to share the storage behind a named volume's folder.
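
For completeness, on newer Docker releases a named volume can also be pointed at a directory on the Gluster mount through the local driver's bind options. A sketch, where the volume name voldata and its path are purely illustrative:

$ sudo mkdir -p /swarm/volumes/voldata
$ sudo docker volume create --driver local \
   --opt type=none --opt o=bind \
   --opt device=/swarm/volumes/voldata voldata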

For this exercise I’m using 3 EC2 instances on AWS, each with one attached EBS volume.

How to get around this?

One way to solve this is to use GlusterFS to replicate volumes across the swarm nodes and make the data available to all nodes at any time. Named volumes will still be local to each Docker host, since GlusterFS takes care of the replication.

Preparation on each server

I will use Ubuntu 16.04 for this exercise.

First we put friendly names in /etc/hosts:

XX.XX.XX.XX    node1
XX.XX.XX.XX    node2
XX.XX.XX.XX    node3

Then we update the system:

$ sudo apt update
$ sudo apt upgrade

Finally we reboot the servers. Then install the necessary packages on all nodes:

$ sudo apt install -y docker.io
$ sudo apt install -y glusterfs-server

Then start the services:

$ sudo systemctl start glusterfs-server
$ sudo systemctl start docker

Create directories for the GlusterFS brick and the shared volume mount:

$ sudo mkdir -p /gluster/data /swarm/volumes

GlusterFS setup

First we prepare the filesystem for the Gluster storage on all nodes:

$ sudo mkfs.xfs /dev/xvdb 
$ sudo mount /dev/xvdb /gluster/data/
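
To make the brick filesystem persistent across reboots, an /etc/fstab entry along these lines can also be added on each node:

/dev/xvdb    /gluster/data    xfs    defaults    0 0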

From node1:

$ sudo gluster peer probe node2
peer probe: success. 
$ sudo gluster peer probe node3
peer probe: success.

Create the volume as a mirror:

$ sudo gluster volume create swarm-vols replica 3 node1:/gluster/data node2:/gluster/data node3:/gluster/data force
volume create: swarm-vols: success: please start the volume to access data

Allow mount connections only from localhost:

$ sudo gluster volume set swarm-vols auth.allow 127.0.0.1
volume set: success

Then start the volume:

$ sudo gluster volume start swarm-vols
volume start: swarm-vols: success
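
The volume layout and status can be reviewed with:

$ sudo gluster volume info swarm-vols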

Then, on each Gluster node, we mount the mirrored GlusterFS volume locally:

$ sudo mount.glusterfs localhost:/swarm-vols /swarm/volumes
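
To have this mount come back after a reboot as well, an /etc/fstab entry like the following can be used on each node:

localhost:/swarm-vols    /swarm/volumes    glusterfs    defaults,_netdev    0 0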

Docker swarm setup

Here I will create 1 manager node and 2 worker nodes.

$ sudo docker swarm init
Swarm initialized: current node (82f5ud4z97q7q74bz9ycwclnd) is now a manager.
 
To add a worker to this swarm, run the following command:
 
    docker swarm join \
    --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h \
    172.31.24.234:2377
 
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Get the token for worker nodes:

$ sudo docker swarm join-token worker
To add a worker to this swarm, run the following command:
 
    docker swarm join \
    --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h \
    172.31.24.234:2377

Then on both worker nodes:

$ sudo docker swarm join --token SWMTKN-1-697xeeiei6wsnsr29ult7num899o5febad143ellqx7mt8avwn-1m7wlh59vunohq45x3g075r2h 172.31.24.234:2377
This node joined a swarm as a worker.

Verify the swarm cluster:

$ sudo docker node ls
ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
6he3dgbanee20h7lul705q196    ip-172-31-27-191  Ready   Active        
82f5ud4z97q7q74bz9ycwclnd *  ip-172-31-24-234  Ready   Active        Leader
c7daeowfoyfua2hy0ueiznbjo    ip-172-31-26-52   Ready   Active

Testing

To test, I will label node1 and node3, create a container on node1, shut it down, and then create it again on node3 with the same volume mount; we should then see that the files created by both containers are shared.

Label swarm nodes:

$ sudo docker node update --label-add nodename=node1 ip-172-31-24-234
ip-172-31-24-234
$ sudo docker node update --label-add nodename=node3 ip-172-31-26-52
ip-172-31-26-52

Check the labels:

$ sudo docker node inspect --pretty ip-172-31-26-52
ID:			c7daeowfoyfua2hy0ueiznbjo
Labels:
 - nodename = node3
Hostname:		ip-172-31-26-52
Joined at:		2017-01-06 22:44:17.323236832 +0000 utc
Status:
 State:			Ready
 Availability:		Active
Platform:
 Operating System:	linux
 Architecture:		x86_64
Resources:
 CPUs:			1
 Memory:		1.952 GiB
Plugins:
  Network:		bridge, host, null, overlay
  Volume:		local
Engine Version:		1.12.1

Create a Docker service on node1 that will create a file in the shared volume:

$ sudo docker service create --name testcon --constraint 'node.labels.nodename == node1' --mount type=bind,source=/swarm/volumes/testvol,target=/mnt/testvol ubuntu:latest /bin/touch /mnt/testvol/testfile1.txt
duvqo3btdrrlwf61g3bu5uaom

Verify service creation:

$ sudo docker service ls
ID            NAME     REPLICAS  IMAGE    COMMAND
duvqo3btdrrl  testcon  0/1       busybox  /bin/bash

Check that it’s running on node1:

$ sudo docker service ps testcon
ID                         NAME           IMAGE          NODE              DESIRED STATE  CURRENT STATE           ERROR
6nw6sm8sak512x24bty7fwxwz  testcon.1      ubuntu:latest  ip-172-31-24-234  Ready          Ready 1 seconds ago     
6ctzew4b3rmpkf4barkp1idhx   \_ testcon.1  ubuntu:latest  ip-172-31-24-234  Shutdown       Complete 1 seconds ago

Also check the volume mounts:

$ sudo docker inspect testcon
[
    {
        "ID": "8lnpmwcv56xwmwavu3gc2aay8",
        "Version": {
            "Index": 26
        },
        "CreatedAt": "2017-01-06T23:03:01.93363267Z",
        "UpdatedAt": "2017-01-06T23:03:01.935557744Z",
        "Spec": {
            "ContainerSpec": {
                "Image": "busybox",
                "Args": [
                    "/bin/bash"
                ],
                "Mounts": [
                    {
                        "Type": "bind",
                        "Source": "/swarm/volumes/testvol",
                        "Target": "/mnt/testvol"
                    }
                ]
            },
            "Resources": {
                "Limits": {},
                "Reservations": {}
            },
            "RestartPolicy": {
                "Condition": "any",
                "MaxAttempts": 0
            },
            "Placement": {
                "Constraints": [
                    "nodename == node1"
                ]
            }
        },
        "ServiceID": "duvqo3btdrrlwf61g3bu5uaom",
        "Slot": 1,
        "Status": {
            "Timestamp": "2017-01-06T23:03:01.935553276Z",
            "State": "allocated",
            "Message": "allocated",
            "ContainerStatus": {}
        },
        "DesiredState": "running"
    }
]

Now shut down the service and create it again on node3.
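
One way to perform the shutdown step is simply to remove the service so its name can be reused:

$ sudo docker service rm testcon

Then create the new service on node3: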

$ sudo docker service create --name testcon --constraint 'node.labels.nodename == node3' --mount type=bind,source=/swarm/volumes/testvol,target=/mnt/testvol ubuntu:latest /bin/touch /mnt/testvol/testfile3.txt
5y99c0bfmc2fywor3lcsvmm9q

Verify it ran on node3:

$ sudo docker service ps testcon
ID                         NAME           IMAGE          NODE             DESIRED STATE  CURRENT STATE           ERROR
5p57xyottput3w34r7fclamd9  testcon.1      ubuntu:latest  ip-172-31-26-52  Ready          Ready 1 seconds ago     
aniesakdmrdyuq8m2ddn3ga9b   \_ testcon.1  ubuntu:latest  ip-172-31-26-52  Shutdown       Complete 2 seconds ago

Now check that the files created by both containers exist in the same volume:

$ ls -l /swarm/volumes/testvol/
total 0
-rw-r--r-- 1 root root 0 Jan  6 23:59 testfile3.txt
-rw-r--r-- 1 root root 0 Jan  6 23:58 testfile1.txt