Installing Docker on our group VM requires careful consideration of the security measures already in place. If rootless Docker is not established on the Docker host, any container will effectively run with root privileges. This isn't much of an issue for veteran sysadmins who know how to restrict access to the Docker daemon, for example with a firewall or a custom Docker registry that only allows vetted and verified container images to run, but in the hands of an inexperienced employee a non-rootless Docker client will happily run whatever code any Docker image contains.
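For reference, rootless mode could be enabled per user roughly as follows; this is a sketch based on the upstream rootless documentation and is not the configuration we ended up using (package names can differ per distribution):
$ sudo apt-get install -y docker-ce-rootless-extras
# set up a per-user daemon and point the client at its socket
$ dockerd-rootless-setuptool.sh install
$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock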
The second major concern regards the iptables chains and rules applied to the group VM acting as the network's firewall. The group VM acts as a router for our internal network: it routes traffic between clients and forwards client traffic to the internet using Network Address Translation (NAT), so that any packet originating from our clients and destined for the internet has its source address translated to that of the internet-facing router, i.e. our group host. Upon starting the Docker daemon (the socket responsible for talking to the Docker client and the containers within), new Docker-specific chains and rules are applied to iptables using the -I flag, i.e. inserted at the top. This is not a major concern if you are aware of it, but the FORWARD chain policy gets set to DROP, requiring manual configuration on any Docker host that also acts as a router.
The Docker documentation supplies the following iptables rule to help re-enable IP forwarding:
$ iptables -I DOCKER-USER -i src_if -o dst_if -j ACCEPT
We've opted to run Docker in rootful (root) mode, not least because swarm mode, which we rely on later, dictates a non-rootless environment. Using the default image registry supplied with Docker, we pull and run the ubuntu/squid container with the following run configuration, publishing the proxy port so that host network traffic reaches the Squid container directly:
$ docker run -d --name squid \
--publish 3128:3128 \
--volume /etc/squid/squid.conf:/etc/squid/squid.conf \
--volume /srv/docker/squid/log:/var/log/squid \
--volume /srv/docker/squid/cache:/var/cache/squid \
-e TZ=Europe/Copenhagen \
ubuntu/squid:latest
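The proxy can then be exercised from a client to verify that it answers; the address is the group VM address used in the NFS and firewall examples later in this section:
# request a page through the proxy and show only the response headers
$ curl -I -x http://192.168.165.1:3128 http://example.com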
The configuration below implements policy routing for the case where the gateway and the Squid host are separate machines:
# IPv4 address of proxy
PROXYIP4=192.168.0.10
# IPv6 address of proxy
PROXYIP6=fe80:dead:beef::10
# interface facing clients
CLIENTIFACE=eth0
# arbitrary mark used to route packets by the firewall. May be anything from 1 to 64.
FWMARK=2
# permit Squid box out to the Internet
iptables -t mangle -A PREROUTING -p tcp --dport 80 -s $PROXYIP4 -j ACCEPT
ip6tables -t mangle -A PREROUTING -p tcp --dport 80 -s $PROXYIP6 -j ACCEPT
# mark everything else on port 80 to be routed to the Squid box
iptables -t mangle -A PREROUTING -i $CLIENTIFACE -p tcp --dport 80 -j MARK --set-mark $FWMARK
iptables -t mangle -A PREROUTING -m mark --mark $FWMARK -j ACCEPT
ip6tables -t mangle -A PREROUTING -i $CLIENTIFACE -p tcp --dport 80 -j MARK --set-mark $FWMARK
ip6tables -t mangle -A PREROUTING -m mark --mark $FWMARK -j ACCEPT
# NP: Ensure that traffic from inside the network is allowed to loop back inside again.
iptables -t filter -A FORWARD -i $CLIENTIFACE -o $CLIENTIFACE -p tcp --dport 80 -j ACCEPT
ip6tables -t filter -A FORWARD -i $CLIENTIFACE -o $CLIENTIFACE -p tcp --dport 80 -j ACCEPT
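The marks by themselves do not move any packets; a policy-routing rule is also needed to send marked traffic towards the proxy host. A minimal IPv4 sketch, where the routing-table number 100 is an arbitrary choice of ours (the IPv6 side would use ip -6 rule/route analogously):
# send packets carrying the firewall mark through the Squid host
ip rule add fwmark $FWMARK table 100
ip route add default via $PROXYIP4 table 100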
There are several pros and cons to using a proxy server. Security-wise, a proxy server helps protect a client's computer. It works as a relay between the browser and the website: the browser doesn't speak to the website directly but goes through the proxy first, so if the website attempts something malicious, it hits the proxy server rather than the client's computer. A proxy server can also give a faster browsing experience on the client's most-used sites, since it keeps a local cache. A proxy server is also useful when managing an office or a school: by routing all employees' or students' browsing through the proxy, an administrator can easily monitor web traffic, since all browsing has to pass through it. A proxy server can also be used to block specific websites, e.g. malicious sites or even social media, to keep employees from visiting them.
What, then, is bad about proxy servers? Not much, but if the provider of the proxy server has malicious intent, it can cause harm to the client. As mentioned earlier, a proxy server keeps a cache for a faster browsing experience and to save bandwidth. The problem is that the cache could also hold private information such as passwords and other details, which the proxy provider could gain access to. For that reason it is important to have a trusted provider, or to run the proxy server in-house.
First we create a folder for each group member on our group VM under /usr/local/share, with the following folder structure:
/usr/local/share
├── nik
│   ├── files
│   ├── etc...
│   └── subfolders
├── emin
│   ├── files
│   ├── etc...
│   └── subfolders
├── saif
│   ├── files
│   ├── etc...
│   └── subfolders
└── shared
    ├── saif-logs.txt
    ├── emin-logs.txt
    └── nik-logs.txt
We want everyone to be able to work within their own folder and to use the shared folder for sharing logs and other files. Each member should only have access to their individual folder, while all of them have access to the shared folder; there, however, they should only be able to modify and delete their own files.
First we make sure the permissions are correct. We start by adding a group and a user for each member, and then assign each member to their own group as well as to the main group.
$ groupadd t8g1-skylab
# repeat the following for each of emin, nik and saif:
$ groupadd emin
$ useradd -g emin emin
$ usermod -a -G t8g1-skylab emin
After adding the groups and assigning the users to them, we assign each directory to a group.
$ chgrp -R emin /usr/local/share/emin      # likewise for nik and saif
$ chgrp -R t8g1-skylab /usr/local/share/shared
After assigning each directory to a group, we can then set the permissions. Removing the permissions for "others" on the personal folders is what keeps the other members out of them.
$ chmod g+w,o-rwx /usr/local/share/emin    # likewise for nik and saif
$ chmod g+w /usr/local/share/shared
Now each member has access to their own directory, and they all have access to the shared directory. We then have to make sure that only the creator of a file in the shared directory can delete or rename it, which is what the sticky bit does:
$ chmod +t /usr/local/share/shared
We then set up an NFS server on our group VM and mount the individual folders and the shared folder on our individual VMs. The directories are exported with rw access, so a user mounting the NFS share who is in the correct group gets read/write access.
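As a sketch of what the export configuration on the group VM could look like (the client subnet 192.168.165.0/24 is assumed from the server address used in the mount command below):
$ sudo apt install nfs-kernel-server
# /etc/exports on the group VM, one line per exported directory
/usr/local/share/shared 192.168.165.0/24(rw,sync,no_subtree_check)
/usr/local/share/emin   192.168.165.0/24(rw,sync,no_subtree_check)
# re-read the exports file
$ sudo exportfs -ra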
To mount the nfs filesystem we need to have the nfs client package installed.
$ sudo apt install nfs-common
Then we mount:
$ sudo mount -t nfs 192.168.165.1:/usr/local/share/(shared_or_emin...) /the_directory_we_want_to_mount_it_on_local_machine
We can now access our own directory and the shared directory from our individual machines.
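To make the mount persist across reboots, an /etc/fstab entry along these lines could be used on each individual VM (the mount point is purely illustrative):
# /etc/fstab
192.168.165.1:/usr/local/share/shared  /mnt/shared  nfs  defaults  0  0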
We've opted to present a static website using Hugo and its official Docker container as a service on the Docker swarm. The service is created and attached to a custom overlay network, a network driver type in Docker that is exclusively available to Docker clients in swarm mode. By default, an ingress network using the same overlay network driver is created at swarm initialization. This network provides a routing mesh between swarm nodes and acts as a load balancer for services created with the replicas option, enabling distributed tasks to run across the swarm nodes. All overlay networks, both the default ingress network and any custom overlay networks, get bridged to the particular host where the Docker daemon is running. This can be seen with the ip address show command at a shell prompt on the host, which shows a "docker_gwbridge" network along with a "docker0" device, the latter being the corresponding bridge for containers not connected to any overlay network.
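For example, the network drivers and the bridge devices can be inspected like this:
# overlays show up with the "overlay" driver and "swarm" scope
$ docker network ls
# the bridge devices the daemon creates on the host
$ ip address show docker_gwbridge
$ ip address show docker0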
A default bridge associated with swarm overlay networks is used and/or created once the swarm is initialized or when a node joins the swarm. We use a custom-created overlay bridge to control the subnet on which containers and services on the swarm communicate. To truly replicate an actual subnet on the network, we statically set a single IP address to which services and containers bind by default, disabled the default IP masquerading option, and disabled inter-container connectivity by issuing this command before initializing the swarm on the group VM:
$ docker network create \
--subnet 10.81.10.0/24 \
--opt com.docker.network.bridge.host_binding_ipv4=10.81.10.25 \
--opt com.docker.network.bridge.name=docker_gwbridge \
--opt com.docker.network.bridge.enable_icc=false \
--opt com.docker.network.bridge.enable_ip_masquerade=false \
docker_gwbridge
Creating this bridge before initializing a swarm allows these custom parameters to be set and forces the Docker client to use this bridge when connecting swarm services and containers on overlay networks to the host. Setting the "[…].bridge.enable_icc" option to false isolates the services and containers on overlay networks from any regular container instances, prohibiting containers not connected to overlay networks from accessing the swarm services. TCP port 2377 (cluster management), TCP and UDP port 7946 (node communication) and UDP port 4789 (overlay traffic) must also be opened in the firewall on all nodes.
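A sketch of the corresponding rules, without the interface and source restrictions a production ruleset would add:
# allow swarm control-plane and data-plane traffic between the nodes
iptables -A INPUT -p tcp --dport 2377 -j ACCEPT   # cluster management
iptables -A INPUT -p tcp --dport 7946 -j ACCEPT   # node communication
iptables -A INPUT -p udp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp --dport 4789 -j ACCEPT   # overlay (VXLAN) traffic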
With the default gateway bridge in place, we create a custom overlay network named "t8g1-squid-overlay" that any swarm node can reach:
$ docker network create \
--driver overlay \
--opt encrypted \
--subnet 10.81.30.0/24 \
--gateway=10.81.30.2 \
--opt com.docker.network.driver.mtu=1500 \
--opt com.docker.network.bridge.enable_ip_masquerade=false \
t8g1-squid-overlay
Both the ingress network and any custom overlay networks encrypt management traffic between the nodes, rotating the AES keys every 12 hours; creating an overlay network with the --opt encrypted flag, as above, additionally encrypts the application data traffic on that network. A Docker swarm also allows management and data traffic to be split onto separate network interfaces by passing the --advertise-addr and --data-path-addr flags for each node when joining the swarm, or at swarm initialization.
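For illustration, using our group VM's internal address for management traffic and the bridge address chosen earlier for the data path (a real deployment would pick dedicated interfaces):
$ docker swarm init \
    --advertise-addr 192.168.165.1 \
    --data-path-addr 10.81.10.25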
Overlay networks such as the ingress network and any custom overlay networks act as a load-balancing proxy that optimizes connections to the services within them. This (a) improves response times for requests to services with replicas, i.e. multiple identical and independent tasks of a single service distributed across the swarm, and (b) enables sysadmins to create custom update policies for services, with rules that keep a certain number of tasks running at all times during the update process, thereby increasing the availability of the service.
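A sketch of how such a replicated service attached to the custom overlay could be created; the replica count, published port and update policy are illustrative, and the image is the one used earlier:
$ docker service create \
    --name squid \
    --network t8g1-squid-overlay \
    --replicas 2 \
    --publish published=3128,target=3128 \
    --update-parallelism 1 \
    --update-delay 10s \
    ubuntu/squid:latest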
Only custom overlay networks, i.e. networks created without the --ingress flag, permit setting the attachable flag, which enables standalone containers to attach to the network.
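For example (the network and container names are illustrative):
$ docker network create --driver overlay --attachable t8g1-attach-demo
$ docker run -d --name probe --network t8g1-attach-demo alpine sleep 3600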
It is recommended to always create separate custom overlay networks for independent services.
As mentioned, simple firewall rules get cluttered by the Docker clients running on each node. Extensive measures must be taken to secure the network, and especially the containers running within it. By default Docker will open any published ports from services and standalone containers towards the external network, and running these services and containers in a non-rootless Docker environment, which is dictated by the Docker engine when running in swarm mode, exposes the integrity of the entire network. Luckily the official Docker documentation is an endless source of considerations and warnings regarding this "feature" and provides several recommendations for securing the environment. Docker creates new iptables chains by default and drops all forwarding on the host, which is an issue for the group VM, which also acts as a router. The following command re-enables forwarding on the group VM:
$ iptables -I DOCKER-USER -i src_if -o dst_if -j ACCEPT
When starting a service or a container created with the flag --publish [host_port]:[container_port], Docker will insert rules in the DOCKER chain that open the specified host port on the interface to which Docker binds, which is 0.0.0.0 by default. This means that Docker will listen for incoming requests on those ports on every interface on the host! A simple but not commonly known fix is to specify the host address along with the port on which to listen for requests: --publish 192.168.165.1:3128:3128.
At first, having such a significant setting enabled by default seems malignant, but it actually offers a very opportune scenario: we can restrict access to the network pre-Docker, blocking any and all traffic not originating from within the network, and let Docker and the running containers or services dictate which ports to open. We consider this way of configuring the firewall optimal given the requirements Docker poses on our network.
The following ruleset, implemented on our group VM, drops all traffic except SSH traffic to the network and outgoing connections from within the internal LAN. It then uses the DOCKER-USER chain to insert custom rules that do the very same thing, only this chain supersedes the DOCKER chain that is initialized once the Docker service is running. This incremental configuration is reminiscent of common UNIX package and service behaviour, where administrators are instructed or discouraged from altering the default configuration files and instead use custom user configurations that the software treats as overruling any default settings, much like how shell profile scripts exist at various levels to provide structured and context-aware behaviour, such as supplying defaults to multiple users while allowing each user to overrule them.
#!/bin/bash
INET_IF=eth0
IPTABLES=/usr/sbin/iptables
#This will purge the firewall rules
$IPTABLES -F
$IPTABLES -t nat -F
$IPTABLES -X
#A new chain "block" accepts new connections originating from within
# the LAN along with any already established/related connections
#Incoming tcp traffic is accepted on the default ssh port 22
#Anything else gets dropped ... but only until a docker
# container/service that listens on a port is started
$IPTABLES -N block
$IPTABLES -A block -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$IPTABLES -A block -m conntrack --ctstate NEW ! -i $INET_IF -j ACCEPT
$IPTABLES -A block -p tcp --dport ssh -j ACCEPT
$IPTABLES -A block -j DROP
#The INPUT and FORWARD chains get redirected to the block chain,
# which is what actually feeds the block chain with traffic
$IPTABLES -A INPUT -j block
$IPTABLES -A FORWARD -j block
#Here we use the "-I" option to skip or circumvent any docker created
# rules and go straight to the block chain
#This rule also re-enables FORWARDING on hosts running docker
# that must also supply routing capabilities for our network
$IPTABLES -I DOCKER-USER -j block
#If external access is actually needed, or if connection tracking isn't
# wanted for packets originating from the DOCKER chain, those rules
# must be prepended as well so they apply before the redirect rule above
#EXAMPLE: iptables -I DOCKER-USER -i $src_if -o $dst_if -j ACCEPT
#Masquerade
$IPTABLES -t nat -A POSTROUTING -o $INET_IF -j MASQUERADE
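Once Docker is running and the script has been applied, the resulting rule order can be checked, for example with:
# show the custom chain and confirm the jump to "block" sits first
$ sudo iptables -L DOCKER-USER -n --line-numbers
$ sudo iptables -L block -n -v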
Using the Docker Compose plugin we can define a set of services to run, making orchestration of services on a swarm resemble the conventional Docker Compose scenarios. We substitute the compose file version "2" with "3", use the command docker stack deploy and point it to our stack's docker compose file, which looks like this:
version: "3.9"
services:
  web:
    image: 127.0.0.1:5000/stackdemo
    build: .
    ports:
      - "8000:8000"
  redis:
    image: redis:alpine
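The stack can then be deployed and inspected roughly like this (the stack name t8g1demo matches the folder mentioned below):
$ docker stack deploy --compose-file docker-compose.yml t8g1demo
$ docker stack services t8g1demo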
As shown, we can keep any configuration files needed in the t8g1demo folder, such as the squid.conf configuration file used to define the allowed networks and the access control lists (ACLs) that allow or prohibit access to various ports through our proxy.
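A minimal sketch of the kind of rules such a squid.conf contains; the network and ports are assumptions for illustration:
# which clients may use the proxy, and to which ports
acl localnet src 192.168.165.0/24
acl Safe_ports port 80 443
http_access deny !Safe_ports
http_access allow localnet
http_access deny all
http_port 3128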
First we must install the Compose Docker plugin. We install the official binaries available from the Docker docs:
# First we download the binaries into a directory the docker engine
# recognizes as a CLI plugin directory
$ DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
$ mkdir -p $DOCKER_CONFIG/cli-plugins
$ curl -SL https://github.com/docker/compose/releases/download/v2.5.0/docker-compose-linux-x86_64 \
  -o $DOCKER_CONFIG/cli-plugins/docker-compose
# Set the file permission for execution
$ chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
# Test the plugin by echoing the version
$ docker compose version