This is how I set up automatic snapshots (like Time Machine on Mac) on my backup server (an old Raspberry Pi 1B running Raspberry Pi OS – a.k.a. Raspbian, with an old 1TB USB2 HD attached), using Snapper. Supposedly Snapper can work with modified ext4, but it seems far more tested with Btrfs, so I’m using that. This is more or less for my own reference in case I need to do it again, but should work well enough elsewhere.

You need the btrfs-progs and snapper packages:

sudo apt install btrfs-progs snapper

First, the filesystem needs to be btrfs – either formatted that way from the start, or converted in place from ext3/4 with the tool btrfs-convert. Note that on the current version of Raspbian btrfs-convert is missing, and the version from buster-backports segfaults – I used the Pi version of Ubuntu 20.04 to do the conversion.

I’m mounting my hard drive at /ext, so get the UUID with blkid, then add it to fstab and mount (done this way rather than /dev/sdX etc. as it picks the right partition even when USB devices get plugged/unplugged/similar). Note if you’re using subvolumes in btrfs you’ll have to adjust the fstab entry:

blkid # Copy the relevant UUID

sudo nano /etc/fstab # And add the line, adjusted as required:
UUID=copied-from-blkid    /ext    btrfs    defaults    0    0

sudo mount /ext
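For the subvolume case mentioned above, the fstab entry just gains a subvol option – a sketch, with a hypothetical subvolume named @data:

```
UUID=copied-from-blkid    /ext    btrfs    defaults,subvol=@data    0    0
```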

Note if you want separate configurations for multiple partitions, you can pass the “-c myconfigname” option to all the following snapper commands; I’m not bothering here as I’m only interested in versioning /ext. Also note that apt automatically creates snapshots on the root config before and after making changes – so if you’re using this solely on a data partition you might want a separate config for it anyway. To do the initial snapper setup:

sudo snapper create-config /ext

This creates the /ext/.snapshots directory. If you just want to accept the defaults, you’re done – hourly snapshots will be taken, and old ones automatically cleaned up, keeping the last 10 hourly, 10 daily, 0 weekly, 10 monthly and 10 yearly snapshots.

To view all snapshots:

sudo snapper list

To manually create a snapshot:

sudo snapper create --description "My First Snapshot"

To look at a file in a snapshot, get the ID with list and just look in the .snapshots dir, e.g.:

sudo ls /ext/.snapshots/VERSION_ID/snapshot

To restore a single file from a snapshot, either just copy it out of the relevant .snapshots directory, or use undochange with the IDs from list (the -v is verbose – it lists the changed files):

snapper -v undochange PREID..POSTID /path/to/file
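Since each snapshot is just a directory tree, the “just copy” route is easy to picture. A minimal sketch, simulating the .snapshots layout with plain directories (snapshot ID 5 and the filename are made up for illustration; on a real system the snapshot dirs are btrfs subvolumes created by snapper):

```shell
# Simulate snapper's layout with ordinary directories:
demo=$(mktemp -d)
mkdir -p "$demo/.snapshots/5/snapshot"
echo "old contents" > "$demo/.snapshots/5/snapshot/notes.txt"
echo "new contents" > "$demo/notes.txt"

# "Restore" the file by copying it straight out of snapshot 5:
cp "$demo/.snapshots/5/snapshot/notes.txt" "$demo/notes.txt"
cat "$demo/notes.txt"   # → old contents
```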

To completely revert to a snapshot, just omit the path. Note do NOT do this on the root filesystem, as it can cause really odd things to happen – this isn’t a perfect revert. If you want to do that, change fstab to mount the relevant snapshot directly, or take a snapshot of the snapshot. I haven’t done that, and there seem to be differing opinions on the best way to do it, so look at the various links. For small things though, as I said, just omit the path:

snapper -v undochange PREID..POSTID

To adjust how often snapshots are automatically created, and how many are retained (if using multiple configs, replace root with the config name):

sudo nano /etc/snapper/configs/root
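For reference, the timeline settings in that file look something like the following (these values match the defaults mentioned above, as far as I know; TIMELINE_CREATE turns the automatic snapshots on and off):

```
TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"
```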

See here for a good reference on how things in the config file work – this also includes instructions on how to make it so non-root users can use snapper.

If you want to look at the underlying Btrfs snapshots:

sudo btrfs subvolume list /ext

The automatic snapshots and cleanup are triggered via systemd (joy.). To see them, and how long’s left on each:

systemctl list-timers | grep snapper


Recently I decided to use Ubuntu 20.04 on my new Raspberry Pi 4B, and use Seafile on an external hard drive. This is how I got it working, and some troubleshooting tips I found. The server install instructions are somewhat outdated (particularly things like the prerequisites – it targets Ubuntu 14.04, and is still working with Python 2 when the latest Seafile uses Python 3). It’s worth glancing through anyway, just don’t take it as gospel.

First, it’s important to use the 32 bit version of Ubuntu otherwise you’ll get very weird “Not Found” errors for files that are actually there – this happens when you’re running on the wrong architecture. To fix this you’d have to recompile the whole thing, which I’ve tried and had major problems with before on a Synology NAS. That was a couple of years ago though, and Synology is a pain in the first place, so maybe you’d have better luck.

If you’re putting things on an external drive, Seafile say to use MySQL instead of SQLite – presumably (going by their backup instructions, which tell you to back up the DB first) this is to make sure things don’t get corrupted if the drive unexpectedly disconnects. So install the MySQL server (I’m using MariaDB here, but either should work):

sudo apt install mariadb-server

Choose where you want to store things. Note everything is stored under this directory – other directories will be created under it by the setup script. While it seems like you can move it around afterwards, the setup script hardcodes a couple of things that stop it working and don’t give good error messages, so don’t. I have my external HD mounted at /ext, so:

mkdir /ext/seafile
cd /ext/seafile

Download and extract the latest version (7.1.4 as I’m writing this) – the Buster version is Debian 10, which I believe is closest to Ubuntu 20.04:

wget https://github.com/haiwen/seafile-rpi/releases/download/v7.1.4/seafile-server_7.1.4_pi-buster-stable.tar.gz
tar -zxf seafile-server_7.1.4_pi-buster-stable.tar.gz
mkdir installed
mv *.gz installed/
cd seafile-server-7.1.4

Don’t bother renaming the resulting folder to remove the version – a symlink to the latest one is automatically created in the setup script.

The 7.1.4 package has a bug in where it stores certain Python packages – to fix it create a symlink. Note this is just where Seafile is internally storing packages, it doesn’t require that specific Python version – 20.04 uses 3.8 and that’s fine:

ln -s python3.7 seafile/lib/python3.6

You now need to run the setup script – since I’m using MySQL I’m using that version, otherwise use the other. Note MariaDB (and probably MySQL, but I’m not sure) has a weird thing where the MySQL root user can ONLY be accessed by the system root user. So use sudo here, and chown the resulting files later. When it asks for the root password, just type anything, like 123, unless you’ve specifically set one. For the mysql user password, obviously choose something secure. It’ll also ask for an admin email and password – this is what you’ll log into the server as.

sudo ./setup-seafile-mysql.sh
cd ..
sudo chown -R ubuntu * # ubuntu is default, obviously substitute if required
cd seafile-server-latest

Now, while by the instructions you should technically be ready to start things, with the exact versions I’m using here seahub will fail to start – and annoyingly, by default it doesn’t tell you, or even log, why. If you’re using the same versions as me, here’s how to fix things; otherwise scroll down for start-up and debugging instructions:

PIL and Pillow are Python image libraries compiled specifically for the system they’re used on, and apparently Debian Buster and Ubuntu 20.04 use different versions, which causes an error when starting the server. To fix this:

sudo apt install python3-pil
cd seahub/thirdpart/
mkdir DISABLED
mv PIL DISABLED/
mv Pillow-7.1.2.dist-info/ DISABLED/
cd ../..

Similarly with the Crypto module (which causes an empty response when trying to access the site):

sudo apt install python3-crypto
cd seahub/thirdpart/
mv Crypto DISABLED/
cd ../..

Now, to attempt to start the server:

./seafile.sh start
./seahub.sh start

If things go wrong, particularly when starting seahub as it doesn’t give error messages by default, you can try ./seahub.sh start-fastcgi – this won’t work (Django has discontinued fastcgi support), but it will actually give you error messages if the problem occurs before that point. Otherwise:

nano ../conf/seahub_settings.py

Add the line (watch the capitalisation): DEBUG = True

If you still aren’t getting useful error messages:

nano ../conf/gunicorn.conf.py

Change the line daemon=True to daemon=False – remember to change this back once you work out what’s wrong.

Now with luck Seafile is running locally, but you won’t be able to access the web interface from another computer unless you either change a config setting or install a reverse proxy. If you’re testing locally to check if it’s working, use wget instead of curl as the latter returns an empty response for some reason.

To access from your local network without a reverse proxy, replace 127.0.0.1 with your Pi’s IP in the following:

nano ../conf/gunicorn.conf.py
./seahub.sh restart

You can then access it by going to youripaddress:8000 – you can change the port in the above file but ports <1024 need either to be run as root or using a workaround – using a reverse proxy fixes this.
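A sketch of that edit using sed, done here on a throwaway copy of the file (the bind line shown is the 7.1.x default as far as I know; 0.0.0.0 listens on all interfaces, an alternative to putting in the Pi’s specific IP):

```shell
# Work on a temporary copy -- the real file is ../conf/gunicorn.conf.py:
conf=$(mktemp)
echo 'bind = "127.0.0.1:8000"' > "$conf"

# Swap loopback for all-interfaces so other machines can connect:
sed -i 's/127\.0\.0\.1/0.0.0.0/' "$conf"
cat "$conf"   # → bind = "0.0.0.0:8000"
```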

That’s pretty much it. When using the clients (Seafile Sync Client if you want stuff kept on your computer like Dropbox, Seafile Drive Client if you want stuff to stay on the server), and the iOS/Android clients, make sure to use the whole http://youripaddress:8000 URL for the server.

Running automatically on boot

First make sure the external HD is mounted automatically – add to /etc/fstab (substituting device, mountpoint and filesystem type obviously):

/dev/sda1 /ext ext4 defaults 0 0
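As in the Snapper section above, a UUID-based entry is more robust than /dev/sda1 for an external drive; adding the nofail option also stops the boot hanging if the drive is missing (UUID and filesystem type here are placeholders):

```
UUID=copied-from-blkid    /ext    ext4    defaults,nofail    0    0
```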

Finally, to make seafile and seahub start on boot, follow the instructions here – I changed the user/group to be ubuntu, just for simplicity, but you could use a specific seafile user if you wanted.

Backups

If you want to set up backups, make sure to back up the database first, then the files. I just back up the databases to a subfolder of /ext/seafile, and then back up that whole folder. To do so, keeping backups for a month (after creating the directory):

sudo crontab -e

Add the line (making sure to get the quotes right, and escaping the %):

0 0 * * * sudo mysqldump --all-databases | gzip > /ext/seafile/mariadb-backups/mariadb_`date +"\%d"`.sql.gz
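The \% escaping is needed because cron treats a bare % specially; the shell then sees date +"%d", the day of the month, which is what gives the rolling month of backups – each day’s dump overwrites the one from a month earlier. A quick runnable check of the filename logic:

```shell
# Build the backup filename the same way the cron job does
# (%d is the zero-padded day of month, 01-31):
name="mariadb_$(date +"%d").sql.gz"
echo "$name"   # e.g. mariadb_07.sql.gz
```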

Then just set up the backup of the /ext/seafile directory (I’m backing up to a second, far older, raspberry pi – already set up with ssh-keygen and ssh-copy-id):

# Either using Duplicity:
10 0 * * * duplicity --no-encryption /ext/seafile sftp://pi@myotherraspberrypi/ext/seafile-backups
# OR using just rsync (good if using btrfs snapshots / snapper on target):
10 0 * * * rsync -az /ext/seafile pi@myotherraspberrypi:/ext/seafile-backups

If you have problems with Duplicity not recognising your RSA key, try the following to convert to a compatible format:

ssh-keygen -p -m PEM -f ~/.ssh/id_rsa


Keypairs and Authorized Keys

To generate a keypair on the client computer:

ssh-keygen -t dsa

This creates the files ~/.ssh/id_dsa and id_dsa.pub (you can set a passphrase but then you’d need to type it in every time you wanted to use it). To add this key to the servers you want to log in to:

cat ~/.ssh/id_dsa.pub | ssh servername "cat >> ~/.ssh/authorized_keys"

You can do this on multiple computers and it should just add to the list, not overwrite. Warning – the .ssh directory must already exist on the remote host or it won’t work…
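A more robust variant creates .ssh (with the right permissions) as part of the same command, avoiding that warning. Here’s a sketch of what the remote end should do, simulated against a temporary local directory with a stand-in key:

```shell
# Simulate the remote home directory and a public key:
remote=$(mktemp -d)
pub=$(mktemp)
echo "ssh-rsa AAAA...fake-key user@client" > "$pub"

# What "ssh servername" should run remotely: create .ssh if missing,
# lock down permissions, then append the key.
mkdir -p "$remote/.ssh" && chmod 700 "$remote/.ssh"
cat "$pub" >> "$remote/.ssh/authorized_keys"
chmod 600 "$remote/.ssh/authorized_keys"
```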

Setting Default Username for SSH login

Create the file ~/.ssh/config, with the line:

User usernametouse

Using a SSH alias

To set default options for an alias, eg so you can just type ssh aliasname, add the following to ~/.ssh/config:

host aliasname
hostname servername.com
user username

Type man ssh_config for more available options

Update 2013-04-24:

Stopping Timeouts on Virgin Media

For some reason Virgin kills connections that don’t show enough activity, which causes SSH to hang. To fix it, add the following to ~/.ssh/config:

Host *
ServerAliveCountMax 600
ServerAliveInterval 10

Using a specific .pem certificate to authenticate

To use a certificate to connect, without installing it as an authorized key on the client, (e.g. for Amazon EC2 in a script):

ssh -i /path/to/certificate.pem servername.com
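This can be combined with the alias trick above, so plain ssh myec2 picks up the certificate automatically – a sketch for ~/.ssh/config (the alias, hostname and user are hypothetical):

```
Host myec2
    HostName servername.com
    User ubuntu
    IdentityFile /path/to/certificate.pem
```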

Just some random snippets of SQL I always have to search to find (or spend an inordinate amount of time thinking about). Will be updated as I find them…

Get number of duplicate records:

SELECT COUNT(*) FROM table WHERE id NOT IN (SELECT MAX(id) FROM table GROUP BY col1, col2, col3);

Delete duplicate records:

DELETE FROM table WHERE id NOT IN (SELECT MAX(id) FROM table GROUP BY col1, col2, col3);

(Where col1, col2, col3 are the columns you’re looking for duplicates on – from Stack Overflow, tested with SQLite on ~10000 records, took <2s)
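A quick way to sanity-check both queries is the sqlite3 CLI (assuming it’s installed) against a throwaway in-memory table – the table and columns here are made up:

```shell
# Three ('a',1) rows means two duplicates; count them, delete them,
# then count what's left:
out=$(sqlite3 <<'SQL'
CREATE TABLE t (id INTEGER PRIMARY KEY, col1 TEXT, col2 INTEGER);
INSERT INTO t (col1, col2) VALUES ('a', 1), ('a', 1), ('b', 2), ('a', 1);
SELECT COUNT(*) FROM t WHERE id NOT IN (SELECT MAX(id) FROM t GROUP BY col1, col2);
DELETE FROM t WHERE id NOT IN (SELECT MAX(id) FROM t GROUP BY col1, col2);
SELECT COUNT(*) FROM t;
SQL
)
echo "$out"   # first line: duplicates found (2), second: rows left (2)
```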

GNU Screen is a terminal application for running persistent, multiple sessions inside a single terminal (or, in the most useful case, an ssh connection).

Running Screen:

apt-get install screen: installs screen on Debian
screen: starts a new screen session
screen -r: reattaches to an existing screen session
screen -x: reattaches to an existing screen session where the original session is still attached (happens when screen has not been properly detached)
Ctrl-a d: detaches from a running screen session (note this will lose split screen placement – this is the only way I’ve found to avoid that)

Creating and switching between terminal sessions:

Ctrl-a c: Create new terminal session
Ctrl-a n: Switch to next running terminal session (also Ctrl-a space)
Ctrl-a p: Switch to previous running terminal session
exit: This will close the running terminal session, as usual.

Split Screen:

Ctrl-a |: Split screen vertically (note this one isn’t in all screen versions – it is in the Debian one however)
Ctrl-a S: Split screen horizontally
Ctrl-a Tab: Switch to the next screen
Ctrl-a X: Remove active bit of split screen (the opposite of | and S)

Copy and Paste:

Screen supports copy and paste between terminal sessions. Copying from a vertically split screen in an ssh session (as in, to a different application on your client operating system) can be tricky if the selection extends over multiple lines, as you’ll get the text from the other terminal too – screen’s own copy mode (below) may avoid this, although I’ve yet to try it.

Ctrl-a [: Starts copy mode. Press enter/space to start, enter/space again to end.
Ctrl-a ]: Paste
Ctrl-a <: Copy from a file
Ctrl-a >: Paste to a file

Other:

Ctrl-a ?: Command reference (also see the manual)

Copy mode can also be used to view the scrollback buffer: after Ctrl-a [ you can use the arrow keys to scroll up to its limits – Esc exits without copying.

On login, bash runs scripts in your home directory depending on a number of factors. Note these scripts don’t seem to require being set executable. A summary, as far as I can make out (tested on Debian lenny):

/etc/profile always seems to be run first

/etc/bash.bashrc will be run next for interactive shells (i.e. not scripts or su -c)

If the shell has been started with the command /bin/sh, bash emulates the older sh shell and runs the ~/.profile script

An ssh login (and presumably also a local login) runs ~/.bash_profile, or if that can’t be found, ~/.bash_login or ~/.profile, if they exist (~/.profile being tried last)

Graphical terminal emulators usually run ~/.bashrc

The command su username from a shell runs ~/.bashrc

The command su -c "command" username from a shell or script doesn’t run any of the scripts (except /etc/profile as always)

The command su -l -c "command" username runs ~/.bash_profile (the -l flag to su emulates a login shell, so the other possibilities mentioned above apply if .bash_profile doesn’t exist)
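The login vs non-login behaviour is easy to verify yourself with a sandbox home directory – this runs bash against a temporary HOME containing marker scripts (the echoes stand in for real startup commands):

```shell
# A fake home directory with marker startup files:
sandbox=$(mktemp -d)
echo 'echo profile-ran' > "$sandbox/.bash_profile"
echo 'echo bashrc-ran'  > "$sandbox/.bashrc"

# Plain non-interactive shell: reads neither file, so prints nothing.
plain=$(HOME="$sandbox" bash -c 'true')

# Login shell (-l, what "su -l" gives you): reads ~/.bash_profile.
login=$(HOME="$sandbox" bash -l -c 'true')

echo "plain: [$plain]  login: [$login]"
```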