Securing a collocated Raspberry Pi 2 (or 3)

My original HowTo was written in 2013, when I got a free hosted Raspberry Pi B+. I wrote the article mostly for myself, as a reminder of how to set it up. Over the years it has proven quite useful, and not just for me. Whenever I've messed things up with this blog and articles have gone missing, that article is the one that keeps being requested.

With that in mind, I figured I would update it to a 2017 version, using newer packages. I'm leaving the original in place, though I will most likely not update it any longer.

The basics are the same, only package names have changed.

  • BTSync is now Resilio Sync.
  • Syncthing is included as an alternative to Resilio Sync.
  • OwnCloud is now Nextcloud.
  • Docker is used for hosting Nextcloud, and optionally Resilio Sync and/or Syncthing.
  • Logwatch is used instead of Logcheck.
  • PSAD has been removed. IMO it mostly generates noise.
  • The custom monitoring script has been replaced by sar, and a script for generating RRD graphs from it.

I’m assuming you’re using a Raspbian equal to or later than 2017-05-17.

DISCLAIMER: I've followed these instructions myself, first on a local Raspberry Pi to test the commands, and then on a hosted Raspberry Pi. In both cases they've worked flawlessly, and not once have I locked myself out of either box.

That may not always be so! Packages change names and configuration options change, but tutorials rarely follow suit. It's extremely important that you watch the output of the commands and check for errors.

Anyway, these instructions work for me, and are intended for me. I’m sharing them here hoping that someone might find them useful, but don’t blame me if they blow up your Raspberry Pi!

Now that we’ve established the playground, enjoy, and feel free to contact me on Twitter @jimmyselgen

Initial Setup

Enable SSH

Raspbian now disables SSH by default. To enable SSH you need to create an empty file named ssh in the /boot partition.

    $ sudo touch /boot/ssh

After you reboot the machine, you can now ssh into it.

Change the pi users password

Even though we’re not going to use the pi user, the password should still be changed.

    ~$ sudo passwd pi

Setting up a new user

Replace the default 'pi' user with your own (substitute username below with your own username)

~$ sudo adduser username

Configuring sudo

To add your user to the same groups as the pi user (sudo included), run the following

 $ sudo sed -i -e 's/:pi$/:pi,username/g' /etc/group
 $ sudo sed -i -e 's/:pi$/:pi,username/g' /etc/gshadow

This will put your newly created user into the pi user's groups, sudo among them (and keep the pi user there for now)

In a new terminal, open a new connection to the Raspberry Pi and verify that you can use sudo as your new user. If everything works, you're ready to remove the pi user from those groups again.
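A quick way to check, assuming your new user is called username and the Pi is reachable as raspberrypi (both placeholders):

```shell
# In the NEW terminal (keep the old session open as a fallback):
ssh username@raspberrypi

# Then, on the Raspberry Pi, confirm that sudo works:
sudo whoami    # should print: root
groups         # 'sudo' should appear in the list
```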

$ sudo sed -i -e 's/\:pi,username/\:username/g' /etc/group
$ sudo sed -i -e 's/\:pi,username/\:username/g' /etc/gshadow

In theory you can now delete the pi user, but since we'll be restricting logins to specified users only and using SSH key authentication, I don't see the need for it.

Setting up public key authentication

Generate an SSH key

You can skip this step if you already have an SSH key that you use for remote logins.

Generate an SSH key for authentication on your local machine

~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
22:25:25:a6:3b:72:cb:e4:37:ea:46:88:70:e5:fc:10 user@vali
The key's randomart image is:
+--[ RSA 2048]----+
|    o .          |
|   oEo           |
|  .+...          |
|. ..+o           |
|+o= .o. S        |
|o*.o ...         |
| .+ o            |
|  .o .           |
| oo              |
+-----------------+

Copy SSH public key to the Raspberry Pi

Upload your public key with

ssh-copy-id username@hostname

Disable password logins in sshd

Edit the ssh server config

~$ sudo vi /etc/ssh/sshd_config

and change PasswordAuthentication to 'no'. Remember to uncomment the line as well

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no

Whitelisting SSH users

Next we'll set up a list of users allowed to SSH into the Raspberry Pi.

The easiest way to maintain a list of allowed users is a dedicated group, so edit the groups file

~$ sudo vigr

and find the line reading

ssh:x:103:

and append your username right after the :

ssh:x:103:username

After you save the file, you’ll need to update the shadow group file as well

~$ sudo vigr -s

and make the same change (add your username to the ssh group).
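As an aside, if you'd rather not edit the group files by hand, Debian's adduser can add a user to a group, updating both /etc/group and /etc/gshadow, in one step:

```shell
# Equivalent to the manual edits above (replace 'username'):
sudo adduser username ssh
```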

Next we’re going to edit the SSH Server config. Edit /etc/ssh/sshd_config and add the following right at the bottom

AllowGroups ssh

Save the file and exit the editor. Before proceeding we need to test the new setup, since this could potentially lock you out of your Raspberry Pi. To test, enter the following

~$ sudo sshd -t

If the command just exits, without any output, everything should be fine. Now we need to restart the SSH server.

~$ sudo systemctl restart ssh

This will spawn a new SSH server, leaving your current session running. Next try to make a new SSH connection to your Raspberry Pi.
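To confirm the changes took effect, you can also force password authentication from your local machine; with PasswordAuthentication disabled the attempt should be refused (username and raspberrypi are placeholders for your own values):

```shell
# This login attempt should be rejected with
# "Permission denied (publickey)":
ssh -o PubkeyAuthentication=no \
    -o PreferredAuthentications=password \
    username@raspberrypi
```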

Hardware Watchdog

The Raspberry Pi has a hardware watchdog on-chip. A hardware watchdog is basically just a hardware timer register that needs to be “reset” every so often, or it will trigger a reboot.

On a Raspberry Pi B+ the hardware watchdog module is called bcm2708_wdog; on a Raspberry Pi 2/3 it is called bcm2835_wdt. To enable the watchdog, load the bcm2835_wdt module

~$ sudo modprobe bcm2835_wdt
~$ sudo sh -c 'echo "bcm2835_wdt" >> /etc/modules'

This will install the watchdog hardware driver, but by itself it doesn’t do anything. To handle faults/hangs/crashes we need a piece of software to feed the watchdog, and Linux has just the thing.

The watchdog daemon is a standard daemon that watches your system for various parameters, and “stops feeding the dog” if something is amiss. Among the watched parameters are :

  • Is the process table full?
  • Is there enough free memory?
  • Is the average work load too high?
  • Has a file table overflow occurred?

If any of the above tests fail, the watchdog daemon will “stop feeding the dog”, and a reboot will occur shortly after (60 seconds or so)

To install the watchdog daemon

~$ sudo apt-get install watchdog

When it finishes installing, edit the file /etc/watchdog.conf and uncomment the line saying "watchdog-device"

watchdog-device = /dev/watchdog

To detect when the Raspberry Pi is hung (e.g. by a forkbomb), uncomment the lines for monitoring load

max-load-1      = 24
max-load-5      = 18
#max-load-15    = 12

This will make the watchdog daemon reboot your Raspberry Pi whenever the system load reaches the given limits: in the example above, a 1-minute load average of 24 (roughly 24 runnable processes waiting for CPU time), or a 5-minute average of 18.
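To pick limits that make sense for your box, look at its normal load averages first; the thresholds should sit well above what you see during everyday use:

```shell
# 1, 5 and 15 minute load averages:
uptime

# Same numbers straight from the kernel:
cat /proc/loadavg
```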

Add the following to the bottom of the file to make the watchdog check that sshd is running.

pidfile   = /var/run/sshd.pid

Then enable the daemon

~$ sudo update-rc.d watchdog defaults
~$ sudo service watchdog start

Your Raspberry Pi should now reboot automatically whenever it hangs, crashes, or sshd dies.

WARNING I've experimented with various watchdog settings, and both the "ping" and "interface" checks seem to be bugged on the Raspberry Pi, causing it to go into a reboot loop. My initial idea was to monitor eth0 and ping my default gateway, so the Raspberry Pi would reboot whenever network connectivity disappeared, but for now at least that doesn't seem to work. If at all possible, test on a local Raspberry Pi before experimenting with those settings.

Security (or locking down your Raspberry Pi)

Firewall

Safety First!

A good idea when testing firewall rules on a remote machine is to schedule a reboot 5-10 minutes out before loading the new rules. That way, if the script locks you out, the machine will reboot by itself and the bad rules will be flushed.

Schedule a reboot by entering the following command

~$ sudo nohup shutdown -r hh:mm &

Where hh:mm is replaced by a time in the future, e.g. 12:00

To cancel the reboot (if all went well), use

~$ sudo shutdown -c

Rules

While there are easier options, like shorewall, I prefer to maintain my own iptables script. I've copied it in full here, but expect to edit it to your own needs.

#!/usr/bin/env bash
#Prevent lockout
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT

iptables -F

#Allow loopback connections
iptables -A INPUT -i lo -j ACCEPT

#Allow established connections
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

#Drop rapid reconnects, most likely bots
iptables -A INPUT -p tcp -i eth0 -m conntrack --ctstate NEW -m recent --set
iptables -A INPUT -p tcp -i eth0 -m conntrack --ctstate NEW -m recent --update --seconds 30 --hitcount 10 -j DROP

#Allow SSH
iptables -A INPUT -p tcp --dport ssh -j ACCEPT
iptables -A OUTPUT -p tcp --sport ssh -j ACCEPT

#Allow http
iptables -A INPUT -p tcp --dport http -j ACCEPT
iptables -A OUTPUT -p tcp --sport http -j ACCEPT

#Allow https
iptables -A INPUT -p tcp --dport https -j ACCEPT
iptables -A OUTPUT -p tcp --sport https -j ACCEPT

#Silently DROP broadcasts
iptables -A INPUT -i eth0 -d 255.255.255.255 -j DROP

#Log dropped packets
iptables -A INPUT -m limit --limit 2/min -j LOG --log-prefix "DROP: "

#Drop anything else
iptables -A INPUT -j DROP

#Allow output
iptables -A OUTPUT -j ACCEPT

#Change policy
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP

#IPv6 rules

#Prevent lockout 
ip6tables -P INPUT ACCEPT
ip6tables -P OUTPUT ACCEPT

#Flush old rules
ip6tables -F

#Allow connections on loopback interface
ip6tables -A INPUT -i lo -j ACCEPT

#Allow established connections
ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

#Allow SSH
ip6tables -A INPUT -p tcp --dport ssh -j ACCEPT
ip6tables -A OUTPUT -p tcp --sport ssh -j ACCEPT

#Allow http
ip6tables -A INPUT -p tcp --dport http -j ACCEPT
ip6tables -A OUTPUT -p tcp --sport http -j ACCEPT

#Allow https
ip6tables -A INPUT -p tcp --dport https -j ACCEPT
ip6tables -A OUTPUT -p tcp --sport https -j ACCEPT

#Allow ICMPv6
ip6tables -A INPUT -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT

# Allow some other types in the INPUT chain, but rate limit.
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-request -m limit --limit 900/min -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-reply -m limit --limit 900/min -j ACCEPT

# Allow others ICMPv6 types but only if the hop limit field is 255.
ip6tables -A INPUT -p icmpv6 --icmpv6-type router-advertisement -m hl --hl-eq 255 -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type neighbor-solicitation -m hl --hl-eq 255 -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type neighbor-advertisement -m hl --hl-eq 255 -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type redirect -m hl --hl-eq 255 -j ACCEPT

#Log dropped packets
ip6tables -A INPUT -m limit --limit 2/min -j LOG --log-prefix "DROP: "

#Drop everything else
ip6tables -A INPUT -j DROP

#Allow output
ip6tables -A OUTPUT -j ACCEPT 

#Change policy to deny everything by default
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT DROP

Save the script as pi_firewall.sh, set execute permissions on it and run it.

~$ chmod +x pi_firewall.sh
~$ sudo ./pi_firewall.sh
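Before saving anything, you can list the active rules with packet counters; rules that should be matching traffic (SSH, established connections) ought to show non-zero counters after a few minutes:

```shell
# -L lists rules, -n skips DNS lookups, -v shows counters
sudo iptables -L -n -v
sudo ip6tables -L -n -v
```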

Enable firewall at boot

If everything works, save your firewall

~$ sudo sh -c "iptables-save > /etc/iptables.rules"
~$ sudo sh -c "ip6tables-save > /etc/ip6tables.rules"

add pre-up commands to /etc/network/interfaces. Find the eth0 stanza, which looks like this

iface eth0 inet static
    address 92.63.xxx.xxx
    netmask 255.255.255.0
    gateway 92.63.xxx.xxx

right below gateway, like this

iface eth0 inet static
    address 92.63.xxx.xxx
    netmask 255.255.255.0
    gateway 92.63.xxx.xxx
    pre-up iptables-restore < /etc/iptables.rules
    pre-up ip6tables-restore < /etc/ip6tables.rules

Log checking

Ideally you should check your server's logs for problems, attacks, etc. every day, which is where Logwatch comes in. Logwatch is a small tool that scans your logfiles every morning and emails you a summary of whatever problems might have arisen.

To install, simply execute

~$ sudo apt-get install logwatch

Avoid log spam from iptables

With the firewall rules we set up before, every dropped packet goes into /var/log/messages, which will cause Logwatch to alert you that something suspicious has occurred. If you want to know everything that happens on your box, feel free to skip this section.

To avoid triggering Logwatch, we will redirect iptables log entries to /var/log/iptables.log

~$ sudo vi /etc/rsyslog.d/iptables.conf

Add the following lines

:msg,contains,"DROP:" -/var/log/iptables.log
& stop

The first line means "send every message that contains 'DROP:' to /var/log/iptables.log", and the second line tells rsyslog to discard the messages matched by the previous rule, so they don't also land in /var/log/messages. The '-' before the logfile tells rsyslog not to synchronize the logfile on every write. This may cause data loss if the system crashes right after a write, but it saves some CPU power, especially with verbose loggers, and it reduces wear on the SD card.

Save the file, and restart the syslog service.

~$ sudo service rsyslog restart

You can verify that you now have a file named /var/log/iptables.log

➜  ~  ls -l /var/log/iptables.log
-rw-r----- 1 root adm 0 Jul 18 22:29 /var/log/iptables.log

To avoid filling up your SD card, we need to tell logrotate about the new logfile. Edit /etc/logrotate.d/rsyslog and add a line for the iptables.log file. I chose to rotate my iptables log daily, so I added it above /var/log/syslog

/var/log/iptables.log
/var/log/syslog
{
    rotate 7
    daily
    missingok
    notifempty
    delaycompress
    compress
    postrotate
    invoke-rc.d rsyslog rotate > /dev/null
    endscript
}

This will keep a backlog of your iptables.log files for 7 days, gzip compressed, named /var/log/iptables.log.1 through /var/log/iptables.log.7.gz
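logrotate can dry-run a config file, which is a cheap way to catch typos before the nightly run (the -d flag prints what would happen without rotating anything):

```shell
# Debug mode: parse the config and describe the actions,
# but change nothing on disk.
sudo logrotate -d /etc/logrotate.d/rsyslog
```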

Preventing attacks

Fail2ban

Fail2ban is a small daemon that monitors your logfiles for failed login attempts. When it finds a pattern matching its configuration, it automatically adds an iptables rule that bans the offending source IP address for a specified period of time.

From Wikipedia

Fail2ban operates by monitoring log files (e.g. /var/log/pwdfail, /var/log/auth.log, etc.) for selected entries and running scripts based on them. Most commonly this is used to block selected IP addresses that may belong to hosts that are trying to breach the system’s security. It can ban any host IP that makes too many login attempts or performs any other unwanted action within a time frame defined by the administrator.[2] Fail2ban is typically set up to unban a blocked host within a certain period, so as to not “lock out” any genuine connections that may have been temporarily misconfigured.[3] However, an unban time of several minutes is usually enough to stop a network connection being flooded by malicious connections, as well as reducing the likelihood of a successful dictionary attack.

To install, simply execute

~$ sudo apt-get install fail2ban
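Once it is running, fail2ban-client will show the active jails and their ban statistics. On recent Debian-based releases the SSH jail is typically named sshd (older releases used ssh):

```shell
# Overview of all active jails:
sudo fail2ban-client status

# Details for the SSH jail (failed attempts, banned IPs):
sudo fail2ban-client status sshd
```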

Docker

Docker simplifies running services on your machine, and adds an extra layer of security by packaging each service in a container that bundles every library and executable the service needs.

A Docker container can only "see" the resources you've explicitly given it permission to, so should someone find a bug, they will not have access to your entire machine, only to the resources available to that container.

Furthermore, a Docker container can be CPU- and memory-limited, so that it can only use a fixed share of the CPU and a fixed amount of memory.
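As a sketch of what such limits look like (the image name and the numbers here are placeholders, not part of this setup; --cpus requires Docker 1.13 or newer):

```shell
# Cap a container at half a CPU core and 256 MB of RAM:
sudo docker run -d --cpus="0.5" --memory="256m" some/image
```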

We’ll be using docker for hosting Nextcloud, and optionally Resilio Sync. To install docker, simply execute the following

$ curl -sSL https://get.docker.com | sh

Nextcloud

To install Nextcloud, I used the officially sanctioned ARM Docker image. I use /data/docker/nextcloud for storing my data, so first create that directory

$ sudo mkdir -p /data/docker/nextcloud
$ sudo chown -R username /data/docker/nextcloud

Then pull and run the docker image

$ sudo docker pull ownyourbits/nextcloudpi
$ sudo docker run -d -p 443:443 -p 80:80 -v /data/docker/nextcloud:/data --name nextcloud ownyourbits/nextcloudpi $DOMAIN

You should now be able to connect to your Raspberry Pi on port 80 (and 443 if SSL is enabled)
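A quick reachability check from the Pi itself (install curl first if it isn't present; -I fetches only the response headers):

```shell
# An HTTP status line in the output means the
# Nextcloud container is answering:
curl -I http://localhost/
```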

Synchronizing data

When it comes to synchronizing data between your Nextcloud box and your other devices, you can choose from a multitude of services. Most have a free tier, but grow more and more expensive as your data grows.

Personally, I've used a Synology NAS for years, with its included smartphone apps for access to my data on the go. Recently I've been trying to reduce my dependency on the NAS (I still have and use it) and move to tools that let me access not just the data on the NAS, but also data stored on other servers. This lets me add redundancy to data that would otherwise only be stored on a single drive, and effectively expands my available storage beyond what the NAS offers.

Enter the "personal cloud". You have a couple of options here. Personally, I paid $60 for a lifetime license for Resilio Sync (back when it was still called BitTorrent Sync), so that is what I use.

In recent years Syncthing has sprung up as a very viable free alternative, but the lack of iOS support, along with some other annoyances (some minor, a few major), keeps me on Resilio.

I've included a "recipe" for using Syncthing via Docker, but YMMV. It is possible to run both Syncthing and Resilio on the same machine, so if you install both they will run just fine. They can't point to the same directory though, so you'll need separate directories under Nextcloud if you use both!

Resilio Sync

Resilio Sync can be installed either as a system wide installation, or as a Docker container.

System wide Resilio Sync installation

To install Resilio Sync, I follow the official instructions.

Add the repository

$ echo "deb http://linux-packages.resilio.com/resilio-sync/deb resilio-sync non-free" | sudo tee /etc/apt/sources.list.d/resilio-sync.list

Add the key

$ wget -qO - https://linux-packages.resilio.com/resilio-sync/key.asc | sudo apt-key add -

And finally install Resilio Sync

$ sudo apt-get update
$ sudo apt-get install resilio-sync

Running Resilio Sync in a Docker container

While Resilio Sync provides an official Docker image for x86, no such image exists (yet) for the ARM platform, so we have to rely on (and trust) a 3rd party. Luckily LinuxServer.io also makes a Resilio Sync image, so we'll use that.

First we create the config directory

$ sudo mkdir -p /data/docker/resilio

Next it's time to pull the image and start the service. To avoid permission problems, we need Resilio to run with the same credentials as Nextcloud. The Nextcloud Docker image runs as www-data, so we specify its UID and GID for the Resilio container.

On my machine www-data has a UID/GID of 33, but yours may differ. You can check in /etc/passwd and /etc/group

$ grep www-data /etc/passwd /etc/group
/etc/passwd:www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
/etc/group:www-data:x:33:

Armed with the UID/GID we can now launch the Resilio container

$ sudo docker pull lsioarmhf/resilio-sync
$ sudo docker run -d \
  --name=resilio-sync \
  -v /data/docker/resilio:/config \
  -v /data/docker/nextcloud:/sync \
  -e PGID=33 -e PUID=33  \
  -p 127.0.0.1:8888:8888 \
  -p 55555:55555 \
  lsioarmhf/resilio-sync

Resilio Sync configuration

Because this is a remote server, and I don't expect to set up synchronized folders except during initial setup, I prefer not to have the Resilio Sync administration interface listening on a public port.

Instead I forward a local port to the machine via SSH, like this (replace raspberrypi.pcextreme.nl with the hostname/IP address of your Raspberry Pi)

~$ ssh -f raspberrypi.pcextreme.nl -L 8888:localhost:8888 -N

Now you can just connect to http://localhost:8888 and configure your Resilio Sync directories.

Syncthing

For a bit of added security, you can run Syncthing in a Docker container instead of as a system wide installation, though there might be permission problems between Nextcloud and the containerized Syncthing.

While Syncthing provides an official Docker image for x86, no such image exists (yet) for the ARM platform. Instead we have to rely on (and trust) a 3rd party.

I've had good luck with LinuxServer.io, so I'll reuse that here.

First we create the config directory

$ sudo mkdir -p /data/docker/syncthing

Next it's time to pull the image and start the service. To avoid permission problems, we need Syncthing to run with the same credentials as Nextcloud. The Nextcloud Docker image runs as www-data, so we specify its UID and GID for the Syncthing container.

On my machine www-data has a UID/GID of 33, but yours may differ. You can check in /etc/passwd and /etc/group

$ grep www-data /etc/passwd /etc/group
/etc/passwd:www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
/etc/group:www-data:x:33:

Armed with the UID/GID we can now launch the Syncthing container

$ sudo docker pull lsioarmhf/syncthing
$ sudo docker run -d \
 --name=syncthing \
 -v /data/docker/syncthing:/config \
 -v /data/docker/nextcloud:/sync \
 -e PGID=33 -e PUID=33  \
 -p 127.0.0.1:8384:8384 -p 22000:22000 -p 21027:21027/udp \
 lsioarmhf/syncthing

Again, as with Resilio Sync, I've set the Web UI port to only listen for connections from localhost. To connect to it from your local machine, run

$ ssh -L 8384:localhost:8384 <hostname of raspberry pi>

This will forward your local port 8384 to the localhost port on the Raspberry Pi, and you can connect to it via a browser on http://localhost:8384

Monitoring

I use sar for monitoring performance.

To install sar

$ sudo apt install sysstat

By default sar collects statistics every 10 minutes via a cron job. Note that on Debian/Raspbian you may first need to set ENABLED="true" in /etc/default/sysstat before data collection starts. Once it has had some time to collect statistics, you can view them with the sar command.

To view CPU usage

$ sar
Linux 4.9.35-v7+ (ratatosk) 	2017-08-14 	_armv7l_	(4 CPU)

00:00:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
00:05:01        all      6,01      0,00      1,71      0,93      0,00     91,34
00:10:01        all      5,05      0,00      1,65      0,22      0,00     93,07

To view Memory usage

$ sar -r
Linux 4.9.35-v7+ (ratatosk) 	2017-08-14 	_armv7l_	(4 CPU)

00:00:01    kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
00:05:01        42720    951512     95,70     17396    334196   3117628     58,92    508048    356892       272
00:10:01        36972    957260     96,28     17988    337692   3119432     58,96    511220    359084       524
00:15:01        27332    966900     97,25     18568    347180   3123064     59,03    513152    367316      1176
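sar records more than CPU and memory; a couple of other views that are useful on a collocated box:

```shell
# Per-interface network statistics:
sar -n DEV

# Block-device (SD card) activity:
sar -d
```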

I use this script for generating graphs automatically. I've written a more detailed post about setting it up here.

Sending reports via GMail

Before you set up your mail server, you must set your reverse DNS. If GMail cannot resolve your server's IP address to the hostname you claim to be sending from, it will simply refuse to deliver mail from you.

You can set your reverse DNS for your Raspberry Pi here¹

My preferred MTA is Postfix. To install it

~$ sudo apt-get install postfix

Enable aliases

~$ sudo postconf -e "alias_maps = hash:/etc/aliases"

Then edit /etc/aliases²

root: username
username: [email protected]

When you’re done editing /etc/aliases, you need to tell Postfix to reload the alias file

~$ sudo newaliases

The Debian Wiki Postfix page has more configuration options.

And you're done. All mail sent to your local user on the Raspberry Pi should now be forwarded to your GMail account.
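To verify the whole chain, send a test message to root through Postfix's sendmail interface and watch the mail log; it should end up in your GMail inbox:

```shell
# Send a test mail to root (forwarded via /etc/aliases):
printf 'Subject: test from the Pi\n\nIt works.\n' | sendmail root

# Follow the delivery in the mail log:
sudo tail -f /var/log/mail.log
```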

Staying up to Date

Finally done, and hopefully everything is setup and working perfectly. Now let us make sure it stays that way.

Setting up automatic updates

Raspbian (Debian) has a little tool that downloads fresh apt sources every night and emails you if some of your packages need upgrading.

~$ sudo apt-get install cron-apt

With the default configuration, cron-apt runs every night at 04:00 and sends a notification to root. Assuming you've set up your MTA as above, all mail for root will be forwarded to your account.

cron-apt doesn't install anything by default, so you still need to do that manually.
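If you only want mail when something actually needs upgrading, cron-apt's behaviour can be tuned in /etc/cron-apt/config; MAILON is the relevant knob (values include error, upgrade, changes and always, and the default only mails on errors):

```shell
# /etc/cron-apt/config
MAILON="upgrade"
MAILTO="root"
```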

Updating Docker containers

The easiest way to update your Nextcloud container is simply to delete the existing container and recreate it.

$ sudo docker pull ownyourbits/nextcloudpi
$ sudo docker stop nextcloud
$ sudo docker rm nextcloud
$ sudo docker run -d -p 443:443 -p 80:80 -v /data/docker/nextcloud:/data --name nextcloud ownyourbits/nextcloudpi $DOMAIN

If you run Resilio Sync in Docker, the procedure is the same

$ sudo docker pull lsioarmhf/resilio-sync
$ sudo docker stop resilio-sync
$ sudo docker rm resilio-sync
$ sudo docker run -d \
  --name=resilio-sync \
  -v /data/docker/resilio:/config \
  -v /data/docker/nextcloud:/sync \
  -e PGID=33 -e PUID=33  \
  -p 127.0.0.1:8888:8888 \
  -p 55555:55555 \
  lsioarmhf/resilio-sync

Conclusion

Congratulations.

You now have a relatively secure Raspberry Pi running with Resilio Sync and Nextcloud.

Provided you've chosen good passwords, the only remaining attack vector is bugs in the software you've installed, so remember to log in and update your software every time cron-apt sends you an email.

Further Reading

Here is a list of resources I've used while putting together this little guide.

Changelog

  • 2015/08/xx - Initial version ported from version 1

  1. If you’re setting up a server that has no reverse DNS, this might come in handy. ↩︎

  2. Alternatively you can set up a forwarding address in ~/.forward; just enter your email address. ↩︎