Collation of all things DSM:
Functionalities
- DDNS client:
Control Panel > External Access > DDNS > Customize Provider
- Connect to VPN:
Control Panel > Network > Network Interface > Create VPN Profile > OpenVPN
- Connect over SMB:
`\\[IPaddr]\[sharedFolderName]`
Setting up key-based SSH access was mildly infuriating. For user `uA` on machine A initiating an SSH connection to user `uB` on machine B, these are the *bare* requirements:

- The home directory of `uB` needs to contain the public key of `uA` in `~/.ssh/authorized_keys`
- `uB`: Write permissions of group & others should be disabled for all directories from the home directory `~` all the way down to `~/.ssh/authorized_keys`, e.g. `755` for directories and `644` for files
- `uA`: All access permissions of group & others for `~/.ssh/id_rsa` should be disabled, e.g. `600`
- `/etc/ssh/sshd_config` on machine B should have `PubkeyAuthentication yes`

The problem with the setup on Synology is that the `~` home directory has full permissions (due to the use of ACLs). Disable this.
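The permission requirements above can be sketched as shell commands. The demo below runs against a scratch directory standing in for `uB`'s home (the scratch paths and the GNU `stat -c` flag are assumptions, not from the original notes):

```shell
# Scratch directory standing in for uB's home on machine B.
demo_home=$(mktemp -d)
mkdir -p "$demo_home/.ssh"
touch "$demo_home/.ssh/authorized_keys"

# Disable group/other write along the whole path to authorized_keys.
chmod 755 "$demo_home"
chmod 700 "$demo_home/.ssh"
chmod 644 "$demo_home/.ssh/authorized_keys"

# On machine A, the private key must be owner-only (shown on a stand-in file).
touch "$demo_home/id_rsa"
chmod 600 "$demo_home/id_rsa"

stat -c '%a %n' "$demo_home/.ssh/authorized_keys" "$demo_home/id_rsa"
```

sshd's `StrictModes` (on by default) is what rejects keys when these bits are loose, which is why the Synology ACL situation breaks things silently.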
Drive swapping
First time replacing hard drives (just a general capacity upgrade), and boy was it easy. Basically:
- Pull out one hard drive enclosure, and the machine will start beeping as a warning of degraded RAID. DS920+ supports hard drive hot-swapping.
- Go to the Storage Manager and click on repair RAID. For a 4TB Toshiba 7200RPM, this takes about 5 hours.
- Once done, the disk access indicators will stop flashing.
In a RAID 10 setup, disks 1 and 2 are mirrored. To test a couple of things:
- Effect on RAID array on plugging back old disk
- Simultaneous rebuilding on both RAID 1 pairs
Update: I believe there is a way to stop a drive gracefully, instead of interrupting the disks directly by pulling them out. Ouch.
rsync
Be careful when using `rsync`; a likely symptom looks something like the following:

```
admin@storage:/$ TARGET="/volume1/backups" sudo rsync -av server:/var/www/html $TARGET/ && sudo chmod -R go-rwx $TARGET/html && sudo chown -R admin:users $TARGET/html
receiving incremental file list
html/data/cache/0/05b98e12e0e75abf6e4535e2723b8940.metadata
html/data/cache/0/05b98e12e0e75abf6e4535e2723b8940.xhtml
html/data/cache/2/2d0bf1ebf1fd09e6119c53db16afc5a6.metadata
html/data/cache/5/5bf47b992ccfc82d60485e8520f302a8.metadata
html/data/cache/5/5bf47b992ccfc82d60485e8520f302a8.xhtml
html/data/cache/6/636893f37a3bd82091f09d0482df99cd.metadata
html/data/cache/6/6dc333cad130fc0d8ef9eebc2c759d83.metadata
html/data/cache/7/73d29a595340e59385d46073fe739729.metadata
html/data/cache/7/73d29a595340e59385d46073fe739729.xhtml
html/data/cache/f/
html/data/cache/f/fb829059507a70bd319c0a4e52e9fc48.media.1920x1066.crop.png
rsync: write failed on "/html/data/cache/f/fb829059507a70bd319c0a4e52e9fc48.media.1920x1066.crop.png": No space left on device (28)
rsync error: no space on remote server (code 41) at receiver.c(423) [receiver=3.1.2]
rsync: [generator] write error: Broken pipe (32)
```
And indeed, the root partition is full:

```
/$ df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/md0                2.3G  2.3G     0 100% /
devtmpfs                3.8G     0  3.8G   0% /dev
tmpfs                   3.9G  236K  3.9G   1% /dev/shm
tmpfs                   3.9G   22M  3.8G   1% /run
tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                   3.9G  1.1M  3.9G   1% /tmp
/dev/mapper/cachedev_0   28T  7.2T   21T  26% /volume1
```
Turns out, a large chunk of space is somehow consumed by `/usr/syno/synoinstall/space-preserve`, taking up 537919488 bytes (513 MiB, found by iteratively searching using `sudo du -hd 1 --exclude=volume1`).
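A sketch of that iterative search: list the largest entries one level down, then descend into the biggest consumer. The demo below runs on a scratch directory (the scratch tree is made up); on the NAS the starting point would be `/` with `--exclude=volume1`, under `sudo`:

```shell
# Scratch tree with one obviously large directory.
scratch=$(mktemp -d)
mkdir -p "$scratch/big" "$scratch/small"
head -c 1048576 /dev/zero > "$scratch/big/blob"    # 1 MiB
head -c 1024    /dev/zero > "$scratch/small/blob"  # 1 KiB

# Largest entries one level down; repeat on the top hit to descend.
du -d 1 -k "$scratch" | sort -rn | head -n 3
```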
Some user on this forum contacted Synology Technical Support, and got back the following response (sic):
Since DSM 7.1. the Operating System will generate /usr/syno/synoinstall/space-preserve on startup. The File is generated to reserve Space on the Systempartition for a Firmwareupdate.
During the Update the File will be removed so that the Space may be used for the Update. If you remove the File it will be regenerated when restarting the Device. However since this is included in the Design of DSM it should not cause any Problem for the Operating System.
We recommend against trying to remove the File manually.
So maybe this is not the real culprit, especially since it is reasonable to expect firmware updates on the order of GBs. And indeed, the main culprit was the `rsync` command itself: in `TARGET="/volume1/backups" sudo rsync ... $TARGET/`, the assignment only applies to the environment of `sudo`, while `$TARGET` on the same command line is expanded by the shell *before* the assignment takes effect. It expands to an empty string, so rsync wrote everything to `/html` on the root partition.

The proper usage is as follows (open a subshell, set the variable, then let the shell expand it in the subsequent commands):

```
$ (TARGET="/volume1/backups"; sudo rsync -av server:/var/www/html $TARGET/ && sudo chmod -R go-rwx $TARGET/html && sudo chown -R admin:users $TARGET/html)
```
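The pitfall can be reproduced in isolation: a child process sees the variable through its environment, but `$TARGET` expanded on the same command line does not (demo only, no Synology paths involved):

```shell
unset TARGET

# The child process receives TARGET through its environment...
TARGET="/tmp/demo" sh -c 'echo "child sees: $TARGET"'
# prints: child sees: /tmp/demo

# ...but $TARGET expanded on the same line is still empty, which is
# exactly how the rsync destination collapsed to "/".
TARGET="/tmp/demo" echo "same line sees: [$TARGET]"
# prints: same line sees: []
```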
Testing network performance
`iperf3` is the obvious candidate, but the question is how to get the tool loaded onto Synology. Two possible methods:

- Install SynoCLI, which packages popular sysadmin tools
- Install the first-party tool with `sudo synogear install`, which loads into a root shell for tests
On the server side, run the server command, with a specified port number:
```
(synogear) root@synologynas:/# iperf3 -sp 7575
```
Then run `iperf3` on the other end, the connecting client. Add the bidirectional flag `--bidir` to test both uplink and downlink.
```
% iperf3 -c 169.254.11.22 -p 7575 -bidir
Connecting to host 169.254.11.22, port 7575
[  5] local 169.254.11.23 port 62147 connected to 169.254.11.22 port 7575
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   114 MBytes   956 Mbits/sec
[  5]   1.00-2.00   sec   112 MBytes   940 Mbits/sec
[  5]   2.00-3.00   sec   112 MBytes   940 Mbits/sec
[  5]   3.00-4.00   sec   112 MBytes   940 Mbits/sec
[  5]   4.00-5.00   sec   112 MBytes   940 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec
[  5]   6.00-7.00   sec   112 MBytes   940 Mbits/sec
[  5]   7.00-8.00   sec   112 MBytes   940 Mbits/sec
[  5]   8.00-9.00   sec   112 MBytes   940 Mbits/sec
[  5]   9.00-10.00  sec   112 MBytes   940 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec                  sender
[  5]   0.00-10.01  sec  1.10 GBytes   940 Mbits/sec                  receiver
```
Here's another test, but with the OWC Thunderbolt 3 to 10GbE adapter, to the Synology 10GbE extension card:
```
% iperf3 -c 169.254.11.22 -p 7575 -bidir
Connecting to host 169.254.11.22, port 7575
[  5] local 169.254.11.23 port 55015 connected to 169.254.11.22 port 7575
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.09 GBytes  9.39 Gbits/sec
[  5]   1.00-2.00   sec  1.09 GBytes  9.41 Gbits/sec
[  5]   2.00-3.00   sec  1.09 GBytes  9.41 Gbits/sec
[  5]   3.00-4.00   sec  1.09 GBytes  9.41 Gbits/sec
[  5]   4.00-5.00   sec  1.09 GBytes  9.40 Gbits/sec
[  5]   5.00-6.00   sec  1.09 GBytes  9.41 Gbits/sec
[  5]   6.00-7.00   sec  1.09 GBytes  9.41 Gbits/sec
[  5]   7.00-8.00   sec  1.09 GBytes  9.41 Gbits/sec
[  5]   8.00-9.00   sec  1.09 GBytes  9.41 Gbits/sec
[  5]   9.00-10.00  sec  1.09 GBytes  9.41 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  10.9 GBytes  9.40 Gbits/sec                  sender
[  5]   0.00-10.00  sec  10.9 GBytes  9.40 Gbits/sec                  receiver
```
Overall throughput with 4x Seagate Ironwolf 4TB + 4x Toshiba N300 4TB was around 400 MBps (3.2 Gbps), so again I/O-bound.
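The conversion above, for reference (disk throughput in MB/s to line rate in Gbit/s):

```shell
# 400 MB/s * 8 bits/byte / 1000 = 3.2 Gbit/s, about a third of the 10GbE link.
awk 'BEGIN { printf "%.1f Gbit/s\n", 400 * 8 / 1000 }'
```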
One-way latencies may be possible, but not particularly straightforward to deploy: Overview, GUI
Wake-on-LAN
Tailscale TUN
DSM 7 has a stronger sandbox architecture; Tailscale needs permission to create a TUN device so that applications other than the Tailscale package itself can make outbound Tailscale connections. Instructions at: https://tailscale.com/kb/1131/synology/
- Under Triggered Tasks, create a user-defined script with root owner and the task `/var/packages/Tailscale/target/bin/tailscale configure-host; synosystemctl restart pkgctl-Tailscale.service`. Set the trigger to Boot-up.
- Run it
See the code here: https://github.com/tailscale/tailscale/blob/main/cmd/tailscale/cli/configure-synology.go
Symptoms:

- No `tailscale0` TUN device when running `ip addr`
- Pings over Tailscale work, i.e. `tailscale ping ...`, but not regular pings, i.e. `ping ...`
This issue is still there as of 2023-03-23.
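A quick check for the first symptom (assumes a Linux host with `iproute2`; `tailscale0` is the standard interface name):

```shell
# Report whether the Tailscale TUN device exists.
if ip addr show tailscale0 >/dev/null 2>&1; then
    echo "tailscale0 present"
else
    echo "tailscale0 missing; re-run the configure-host task and restart the package"
fi
```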
Tailscale relay
Attempt a ping over the Tailscale network. If `tailscale status` shows the connection going over a relay, then tailscaled has issues establishing a peer-to-peer connection. This is not ideal, since you are now dependent on Tailscale's DERP servers to act as relays, which also increases RTT.
Several solutions depending on application profile:
- Enable inbound TCP connections to destination port `443` from anywhere
  - This allows direct communication with Tailscale's DERP servers
- Enable outbound UDP connections from source port `41641` to anywhere
  - This enables outbound WireGuard connections
- Enable outbound UDP connections to destination port `3478`
  - This allows use of the STUN protocol to discover the internal port and public IP address
Do this over Synology's internal firewall (which you should really enable).
See: https://tailscale.com/kb/1082/firewall-ports/#how-can-i-tell-if-my-devices-are-using-a-relay
For `ufw`, these correspond to:

```
sudo ufw allow 443/tcp
sudo ufw allow out 41641/udp
sudo ufw allow out 3478/udp
```
Duplicity
Add community package: https://packages.synocommunity.com/
Install Duplicity from the community list of packages, which has Python 3.10 and GnuPG dependencies.
```
> gpg --gen-key --pinentry-mode loopback
```
- `script.sh`:

```
#!/bin/bash
# Justin, 2023-07-18

# Override the PS4 for debugging
#PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
#set -x;

if [ $# -eq 0 ]; then
    echo "Specify program to execute.";
    exit 0;
elif [ "$1" = "test" ]; then
    ###### START ######
    duplicity \
        --encrypt-key A68371451EE475C9F7EEE21462915F4037235CE4 \
        /volume1/backups/test/local \
        scp://backup/backups/test/remote
    ###### END ######
else
    echo "No commands matched.";
    exit 1;
fi
```
Ensure the associated backup user on the remote server is created, with access only to `homes` (for SSH `authorized_keys` reading) and `backup` (for backups in RAID). Make sure the `backup` user directory and its `.ssh` directory are owned by the user itself, with read access limited to self.
Testing GPG keys: https://stackoverflow.com/questions/11381123/how-to-use-gpg-command-line-to-check-passphrase-is-correct
Might be useful: https://www.beatificabytes.be/unattended-gpg-key-generation-to-sign-synology-packages/
The realization of why SSH is limited to admin users (all files are 0777): https://community.synology.com/enu/forum/1/post/125859?page=1&sort=oldest
DNS
To inject a local resource on an existing namespace domain (e.g. `local.` reserved for mDNS), create the zone with the full domain name, e.g.

```
Primary zone type: Forward zone
Domain name: rojak.local
Master DNS server: {{ SERVER_IP }}

# Resource records
ns.rojak.local    A    {{ SERVER_IP }}
rojak.local       NS   ns.rojak.local
rojak.local       A    {{ INJECTED_IP }}
```
Samba
Final Cut Pro
Straightforward tutorial for configuring Samba to be compatible with Final Cut Pro (FCP). The main cause of the "incompatible SMB location" error is additional Samba VFS module dependencies, specified here. Fix reproduced below.
```
root:~# cat /etc/samba/smb.conf
...
[global]
    vfs objects=catia,fruit,streams_xattr
...
root:~# synopkg restart SMBService
```
Some problems though, e.g. these changes are incompatible with Time Machine backups via SMB.