Synology
= First things to do =
 
== Change IP ==


Control panel -> (connectivity) Network -> tab network interface -> select LAN and click Edit.


== Enable home directories + 2 step verification ==
 
Control panel -> User -> Advanced -> Enable home user service + Enforce 2 step verification
 
== change workgroup ==
 
Control panel -> File services -> SMB/AFP/NFS -> SMB / Enable SMB service : workgroup
 
== change low volume warning threshold ==
Change the % by going into Storage Manager -> Storage -> clicking the ... on the volume you want, then clicking Settings. Scroll down in the Settings to Low Capacity Notification, where you can change the threshold (e.g. from the default 20% to 5%).
 
== Enable SSH ==


Control panel -> (System) Terminal & SNMP enable SSH
Control panel -> (connectivity) Security -> Firewall tab -> create a rule for SSH (encrypted terminal services), enable and save

for root, log in as admin and then sudo sh
(you can then log in to the machine using user root and the default admin password)
to enable your users to access the terminal, vi /etc/passwd and change /sbin/nologin to /bin/ash
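A minimal sketch of that edit, using a hypothetical user ''alice'' and a sample file so you can check the substitution before touching the real /etc/passwd:

```shell
# Sketch: switch one user's login shell from /sbin/nologin to /bin/ash.
# 'alice' and the sample file are placeholders -- verify the result before
# pointing the same sed at the real /etc/passwd (and keep a backup copy).
printf 'alice:x:1026:100::/var/services/homes/alice:/sbin/nologin\n' > /tmp/passwd.sample
sed -i 's|^\(alice:.*\):/sbin/nologin$|\1:/bin/ash|' /tmp/passwd.sample
cat /tmp/passwd.sample
```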


== Create shared folders ==
Once you have created your shared folder, also enable snapshots
 
= Migrating from one Synology NAS to another one =
In general if you are upgrading NAS it is just a question of:
 
# moving the drives to the new NAS and starting it up
# visit https://finds.synology.com
# Selecting the migrate option
# NB it will not keep the original IP address so make sure to set that. You will probably have problems with NetBIOS, so look [https://wiki.edgarbv.com/index.php/Synology#NetBIOS_name_not_recognised_after_changing_IP_address below] to resolve that.
 
HOWEVER there are some catches, so make sure to review https://kb.synology.com/en-global/DSM/tutorial/How_to_migrate_between_Synology_NAS_DSM_6_0_and_later
 
Also the following Youtube video is very worth watching for more information: https://www.youtube.com/watch?v=oGvbNlJaQIg
 
= Copying everything from one volume to another to change RAID type =
Because you cannot just change RAID type in place (https://kb.synology.com/tr-tr/DSM/help/DSM/StorageManager/storage_pool_change_raid_type?version=7), and SHR is a no-brainer if you want to expand from, say, RAID 1 to SHR with multiple disks (of multiple sizes), you need to first add an extra storage pool and then [https://kb.synology.com/en-my/DSM/tutorial/How_do_I_move_data_from_one_volume_to_another copy all the shared folders] over:<blockquote>
=== Move a shared folder to another volume ===
 
# Go to '''Control Panel''' > '''Shared Folder''', select a folder you want to move, and click '''Edit'''.
# From the '''Location''' drop-down menu, select the volume where you want to move the folder and click '''OK'''. This may take some time, depending on the size of the folder.
</blockquote>NB If you have immutable snapshots enabled then the move operation above will not work. You need to go to Start -> Snapshot Replication -> Snapshots -> select the share -> Settings -> disable immutable snapshots, then wait out the remaining immutability period before you can move the share between volumes.
 
NB2. during the move all services are shut down.
 
If you don't want to wait that long, you need to create a new shared folder on the new volume (using the Location menu) and copy everything over using File Station. If you do this, you will not be able to delete the original shared folder through the Edit dialog after you remove the old volume, as its volume is missing; the workaround is to select the folder and press the Delete key on your keyboard.
 
And then you also need to move the packages to the new volume. This can be done in 2 ways - reinstalling the package (which loses all the data) or by [https://www.reddit.com/r/synology/comments/159srwx/migrating_all_files_and_packages_to_another_volume/ migrating] them. The best way to do this is the (well maintained) [https://github.com/007revad/Synology_app_mover/tree/main?tab=readme-ov-file Synology App Mover] script.
 
The script will not move the PostgreSQL database. To do this you need to:

<pre>
cd /volume1/
systemctl stop pgsql                 # stop postgres before copying
tar czvf database.tgz ./@database/   # archive the database directory
mv database.tgz /volume2/
cd /volume2/                         # extract on the new volume
tar xzvf database.tgz
systemctl start pgsql
</pre>
make sure the links are good too:
 
<pre>
find . -iname pgsql
</pre>
 
You should see:
 
'''./volume1/@database/pgsql'''
 
'''./usr/share/pgsql'''
 
'''./var/services/pgsql'''
 
check for errors with <code>tail -f /var/log/postgresql.log</code>
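Each of those pgsql paths should resolve to the @database directory on the volume the data now lives on. A self-contained sketch of what a healthy link looks like (paths under /tmp stand in for the real /usr/share/pgsql and /volume2/@database/pgsql):

```shell
# Sketch: a pgsql-style symlink should resolve to the @database directory
# on the volume you moved the data to. readlink -f shows the final target.
mkdir -p /tmp/demo/volume2/@database/pgsql
ln -sfn /tmp/demo/volume2/@database/pgsql /tmp/demo/pgsql-link
readlink -f /tmp/demo/pgsql-link
```

On the NAS, `readlink -f /usr/share/pgsql` and `readlink -f /var/services/pgsql` should point at the moved directory.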
 
If you were using a shared space in photos and have moved the shared space to volume2 you may then need to:
 
# Go to Synology Photos > Settings > Shared Space.
# Disable the Shared Space.
# Then go back in and Enable Shared Space.
 
The other way of doing it is to create a backup of the packages in Hyper Backup, save that to volume2, remove the volume1 disks and then restore the backup.
 
= Backups =
 
== Prepare for rsync ==
NB the synology can only push rsync files, it can not be set up to pull files from a remote rsync server.


Control panel -> (file sharing) Shared Folder -> create folder to back up to. Give the user you want to rsync to Read/Write access


Control panel -> File services -> enable rsync service
'''deprecated'''
----
Start (top left) -> Backup & replication -> backup services -> enable network backup service


Start (top left) -> Backup & replication -> Backup Destination -> create Local Backup destination -> back up data to local shared folder -> the name you put in there will be the module name or the path you can use later.

then add the section in /etc/rsyncd.conf changing only the path and comment values (of course under the module name)

restart rsync with
  /usr/syno/etc/rc.sysv/S84rsyncd.sh restart


If you don't do this, it will rsync everything without the user name for some reason
=== Synology to synology ===
You backup from the source to the target
ssh into the source machine ('''mind the slashes at the end!''' if you don't use the trailing slash, rsync will create a subdirectory inside the directory you specify)
  rsync -avn /volume1/sourcedirectory/ user@192.168.0.105:/volume1/targetdirectory/
to check if it works. Drop the n to start the actual copy.
== Hyper Backup ==
This will create a backup directory which you can't browse with File Station, as it stores everything in a Hyper Backup format (non-browsable). NB this fails pretty badly over an l2tp VPN connection; you need to use Tailscale (see VPN below).
=== Using Hyper Backup Vault ===
Note: the destination needs the Hyper Backup Vault package installed. Launch Hyper Backup on the source. Use the + bottom left to create a new backup job. Select data backup, then Remote NAS device. Fill in the hostname and then you can select which shared folder it will use as a destination. Note: you cannot use photos or homes. The Directory is the name of the directory it will create on the shared folder on the destination device.
If the firewall is enabled, go to control Panel -> Network -> Firewall -> Edit rules -> Create -> select hyper backup vault as application
=== Using rsync ===
Launch Hyper backup on the source. Use the + bottom left to create a new backup job. Choose rsync. Fill in the data. As username and password you need a username and pass on the target machine. It will then fill the shared folder list with shares available on the target. You cannot backup to the root directory of the target share, so you need something in the directory field. After this it pretty much sets itself up. This will actually copy files, so you can browse them on the target NAS
=== Using Snapshot replication ===
This is supposed to be fairly effective, but not all DSM versions have this package and it needs to be installed on the recipient as well.


== Netgear ReadyNas Ultra setup for rsync to Synology ==
''Here we set up the Netgear to pull data from the Synology''


in the /admin interface first
Services -> Standard File Protocols -> ensure Rsync is enabled

Shares -> share listing -> click on the rsync icon. Scroll up and change default access to 'read only'. Set hosts allowed access to the IP of the Synology (192.168.0.101). Fill in the username and password!

Backup -> Add a new backup job

If you don't fill in the path above it will copy the whole share. If you browse the share you can select a subdir to copy.


Note that the '''Path needs to be EMPTY''' before pressing the 'Test connection' button. It will sometimes work if you fill in NetBackup but you're best off doing the test empty, then typing in the path and then apply bottom right to test the backup job.
This is what the schedule will look like
[[File:backup schedule.png|400px]]
The Daily job
[[File:jobt1.png|400px]]
[[File:job2.png|400px]]
[[File:job3.png|400px]]
[[File:job4.png|400px]]
The Weekly job
[[File:weekly netgear rsync.png|400px]]
The monthly job
[[File:monthly netgear rsync1.png|400px]]
[[File:monthly netgear rsync2.png|400px]]


== useful linkies ==
* Readynas rsync howto PDF
* synology rsync tutorial
* small netbuilder tutorial

== indexing media files ==
If you rsync files into the /volume1/photo or /volume1/video directories the system does not index them. They need to be copied in using Windows or the internal file manager to be indexed automatically.


Warning: indexing can take DAYS!


== Preparing ReadyNAS for rsync towards it ==
   synoservicectl --restart synoindexd


=Streaming=
The Audio Station DS Audio app is terrible and hangs a lot.
As alternatives there are Jellyfin, Airsonic and mStream. So far I like mStream; it's very light (388 MB Docker image) as it's file based.
== Jellyfin ==
very library based and uses quite a bit of CPU - no folder view
== Airsonic ==
Comes in 2 flavours: [https://airsonic.github.io/ airsonic] and [https://github.com/airsonic-advanced/airsonic-advanced airsonic-advanced]. The advanced version is a fork created due to frustration with the glacial pace of development of airsonic. [https://www.reddit.com/r/airsonic/comments/fu4gwd/airsonic_vs_airsonicadvanced/ Reddit rant here]
[https://hub.docker.com/r/airsonicadvanced/airsonic-advanced airsonic-advanced]
== mStream ==
[https://docs.linuxserver.io/images/docker-mstream linuxserver/mstream]
Super lightweight:
* image is 388 MB
* fresh install: CPU 0.85%, RAM 111 MB
* file based
Runs at http://netbiosname:3000
== Gerbera BubbleUpnP ==
[https://www.reddit.com/r/synology/comments/adkjsr/my_solution_to_playing_audio_from_nas_on_my_cell/ information]
== Plex ==
Complaints about it being slow and jumpy - only for local streaming. Also, if you want to use it for internet streaming or downloading, you need to pay.
== Beets ==
re-organise your music collection?
= NetBIOS name not recognised after changing IP address =
The reason is probably that there is a DHCP lease somewhere with the old IP in it. Delete the lease and restart / reload the DHCP server.
You may be able to ping the machine using name. or name.DOMAIN or name.local
You can clear the windows NetBios and IP cache in an elevated (run as administrator) command prompt using the following commands:<blockquote>nbtstat -R
nbtstat -RR
ipconfig /flushdns</blockquote>
=Converting media files=
Photo station converts your videos to flv as standard and to mp4 if you have '''conversion for mobile devices''' set to on under ''control panel -> indexing service.''
It will also convert your image files to thumbnails as standard.
This can take a few days or even weeks if you upload a lot of new stuff.
To speed the photo thumbnail generation up you can edit /usr/syno/etc.defaults/thumb.conf to:
* change the quality of thumbs to 70%
* halve all the thumb sizes
* change the XL thumb size to 400 pixels
[https://forum.synology.com/enu/viewtopic.php?t=95060]
To view the status of the conversion, in /var/spool there are the following files
<pre>
conv_progress_photo
conv_progress_photo.pT5Pu5 
conv_progress_video 
conv_progress_video.CpHdpS 
flv_create.queue 
flv_create.queue.tmp 
thumb_create.queue 
thumb_create.queue.tmp 
</pre>
or
  ps -ef | grep thumb
To see the status of the converter
  sudo synoservicecfg --status synomkthumbd
[https://www.reddit.com/r/synology/comments/7ui7c1/conversion_process_details/]


= More info =
<pre>
     After Shared Photo Library is enabled, the Photo Station settings such as album permission, conversion rule, or other downloading settings will not migrate to or be inherited by Moments.
</pre>
=Installing ipkg=
[https://community.synology.com/enu/forum/1/post/127148 by plexflixler]
Go to the Synology Package Center, click on "Settings" on the top right corner and then click on "package sources".
Add the source "http://www.cphub.net" (you can choose the name freely, i.e. "CPHub")
Now close the settings. In the package center on the left go to the "Community" tab.
Find and install "Easy Bootstrap Installer" from QTip. There is also a GUI version if you prefer, called "iPKGui", also from QTip.
IPKG is now installed. The executables are located in "/opt/bin/". You can SSH to your NAS and use it. However, the directory has not yet been added to the PATH variable, so to use it you would always need to use the full path "/opt/bin/ipkg".
You can add the directory to the PATH variable using the following command:
  export PATH="$PATH:/opt/bin"
However, this would only add the directory to PATH for the current session. To make it permanent (and update the package lists while you're at it):
  sudo /opt/bin/ipkg update
  sudo /opt/bin/nano /etc/profile
Now find the PATH variable. It should look something like this:
  PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin
At the end of this string, just append ":/opt/bin" (don't forget the colon). Then save and close the file.
Note that this will not automatically update your PATH for the current session. To do this, you can run:
  source /etc/profile
To check whether it worked, enter the command:
  echo $PATH | tr ":" "\n" | nl
You should see the entry for "/opt/bin" there.
Now you're all set.
=Installing mlocate=
requires ipkg (see above)
  ipkg install mlocate
Once you have done that, you run
  updatedb
and then you can use the locate command
=Universal search=
For some reason when installing universal search it doesn't add the shares to the index. You have to do this by hand in settings.


=Surveillance Station=


* including hardware, 1 TB of storage and implementation costs; excluding call-out charges and any backups / network configuration
=Docker=
Generally the workflow is:
docker -> add image (from the registry or from a url, eg airsonic/airsonic) -> double click image to create a container -> edit the advanced settings (auto restart on, add volumes, network, etc) -> confirm -> run container -> monitor the container in the container part
== volumes / permanence ==
These are locations on the synology that can be mounted in the container.
When installing docker a new main share: docker is created.
Using add volume you can choose a volume - if it's internal stuff to the container (eg /var/log) you select (or create) the folder(s): /docker/containername/var/log and then use the mount path /var/log to mount that location within the container.
In the docker cli instructions for an image this can be seen as the -v options
So if you are trying to mount your music you would mount /music/ to /music - you need to look out for permissions!
=== Permissions ===
Places to look if you can view the files in the container terminal but the application in the container can't see them:
* for music and video: DLNA in Media Server
* the PUID and PGID env variables
* advanced share permissions (you can check for an advanced-permission problem by allowing Everyone read access in the normal permissions and seeing if the application can find the files then)
* file permissions of the /docker/imagename/ directories themselves
Don't forget to check '''Apply to this folder, sub-folders and files'''!
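A sketch of the blunt file-permission fix, run against a stand-in directory (on the NAS you would point it at the real bind-mounted share, e.g. /volume1/music):

```shell
# Sketch: make a bind-mounted tree readable by everyone so the app inside
# the container can see the files. /tmp/docker/music stands in for a real
# share such as /volume1/music.
umask 022                              # so the demo files get predictable modes
mkdir -p /tmp/docker/music/albums
touch /tmp/docker/music/albums/track.mp3
chmod -R a+rX /tmp/docker/music        # capital X: execute bit on directories only
stat -c '%A' /tmp/docker/music/albums/track.mp3
```

Files end up world-readable and directories traversable, without marking regular files executable.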
== network port forwarding / connecting from outside the host==
You have two options here:
1. network=host: while starting (=creating) a container from an image, you can enable the checkbox "use same network as Docker Host" at the bottom of the "Network" tab in the additional settings. As a result you do not need to map any ports from DSM to the container, as DSM's network interface is used directly. You need to take care of potential port collisions between DSM and the container, of course.
2. network=bridged: map ports from the Docker host (DSM) to the container. You cannot access the IP of the container directly, but you can access the mapped port on the Docker host. Port collisions between DSM and containers are possible here as well, but they are easier to correct since you can just change the Docker host port, which still needs to be mapped to the same Docker port.
In both cases the port can be accessed via dsm:port, though for option 1 this is only true if you did not change the IP INSIDE the container; if you did, it will be container-ip:port.
[https://community.synology.com/enu/forum/17/post/102280 Connect to a docker container from outside the host (same network)]
'''So to have the external port be the same as the container port in bridged mode, edit the container and set the local port to be the same as the container port'''
In the docker cli instructions for an image this can be seen as the -p options
TODO: [https://mariushosting.com/synology-how-to-run-docker-containers-over-https/ How to run docker over https]
== Environment ==
Here you can add extra environment variables, eg TZ / GUID / PUID
You can find your users PID and GID by sshing into the synology and typing
  id
or
  id username
The GUID and PUID are the IDs for which the container itself will run, not docker (which will run as root)
== setting up a docker using cli arguments ==
As an example, [https://docs.linuxserver.io/images/docker-mstream Mstream] from linuxserver (you can find the image in the Docker registry or add it using the URL linuxserver/mstream) has this docker cli:
<pre>
docker run -d \
  --name=mstream \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 3000:3000 \
  -v /path/to/data:/config \
  -v /path/to/music:/music \
  --restart unless-stopped \
  lscr.io/linuxserver/mstream
</pre>
So you have to fill in your own PGID / PUID / timezone in the Environment part of the container. You set the port to be 3000 both inside and outside in a bridged network connection. You select /docker/mstream/config to mount as /config and you select your music library on the synology to mount to /music. Ensure permissions are right and you should see the files in mStream.
== Logging and troubleshooting ==
Double-clicking a container allows you to see the logs in a tab, which will help a lot. You can also access an active terminal in a tab on a running container.
= VPN Connections =
[https://kb.synology.com/en-af/DSM/tutorial/How_connect_Synology_NAS_VPN_server How do I connect my Synology NAS to a VPN server?]
    Go to DSM Control Panel > Network > Network Interface.
    Click Create > Create VPN profile.
    Follow the instructions in this article to set up a VPN connection.
If you select
  Use default gateway on remote network
in the VPN profile you can only connect to the synology from the remote network, not from QuickConnect ([https://kb.synology.com/en-sg/DSM/tutorial/Cannot_connect_Synology_NAS_using_VPN_via_DDNS see this KB article]). However if you do not select this you cannot access the synology from the VPN IP.
I also had to select
  The server or clients are behind the NAT device
for it to work.
Notes:
    We recommend selecting L2TP as the VPN protocol if it is available from VPN service providers. OpenVPN profiles provided by certain VPN service providers may not be compatible with Synology NAS.
    When your Synology NAS is configured as an L2TP or OpenVPN client, you cannot configure it as a VPN server using the same protocol in VPN Server. PPTP does not have this limitation.
[https://kb.synology.com/en-af/DSM/help/DSM/AdminCenter/connection_network_vpnclient?version=7 VPN Connection] Synology knowledge centre
Unfortunately, running Hyper Backup over l2tp results in severe slowdowns and even crashes of the VPN interface; a WireGuard-based connection such as Tailscale works better.
== Tailscale ==
You can connect via [https://tailscale.com/ Tailscale] - this should allow for better hyper backup transfers.
<pre>
Simply install Tailscale on both NAS. Then log in to Tailscale and look at the IPs Tailscale has assigned to each machine and use those when configuring things like HyperBackup. Note that to setup Tailscale properly you need to enable outbound connections (explained in Tailscale docs).
</pre>
Also, when you are in the Tailscale console and the connection has been made, click on the 3 dots to the right of the machine name and disable key expiry. If your key does somehow expire then first re-authenticate the machine and THEN disable key expiry.
https://www.reddit.com/r/synology/comments/16pe5qn/struggling_to_get_hyper_backup_working_on_remote/
https://tailscale.com/kb/1131/synology#enabling-synology-outbound-connections<pre>
Enabling Synology outbound connections
Synology DSM7 introduced tighter restrictions on what packages are allowed to do. If you’re running DSM6, Tailscale runs as root with full permissions and these steps are not required.
By default, Tailscale on Synology with DSM7 only allows inbound connections to your Synology device but outbound Tailscale access from other apps running on your Synology is not enabled.
The reason for this is that the Tailscale package does not have permission to create a TUN device.
To enable TUN, to permit outbound connections from other things running on your Synology:
    Make sure you’re running Tailscale 1.22.2 or later, either from the Synology Package Center or a manually installed *.spk from the Tailscale Packages server.
    In Synology, go to Control Panel > Task Scheduler, click Create, and select Triggered Task.
    Select User-defined script.
    When the Create task window appears, click General.
    In General Settings, enter a task name, select root as the user that the task will run for, and select Boot-up as the event that triggers the task. Ensure the task is enabled.
    Click Task Settings and enter the following for User-defined script.
    /var/packages/Tailscale/target/bin/tailscale configure-host; synosystemctl restart pkgctl-Tailscale.service
    (If you’re curious what it does, you can read the configure-host code.)
    Click OK to save the settings.
    Reboot your Synology. (Alternatively, to avoid a reboot, run the above user-defined script as root on the device to restart the Tailscale package.)
Your TUN settings should now be persisted across reboots of your device.
If the Synology firewall is enabled: adjust the firewall settings
By enabling TUN, Tailscale traffic will be subject to Synology’s built-in firewall.
The firewall is disabled by default. However, if you have it enabled, add an exception for the Tailscale subnet, 100.64.0.0/10. In Main menu > Control Panel > Security > Firewall, add a firewall rule in the default profile that allows traffic from the source IP subnet 100.64.0.0 with subnet mask 255.192.0.0.
</pre><pre>
Do you also have 5001 forwarded since you require GUI access in order to validate the login ?
[...]
Thread on synoforum claims that is not possible e.g. the 5001 is hard coded:
https://www.synoforum.com/threads/hyper-backup-task-remote-connection-issue-due-to-dsm-7-2-resolved.11633/
Your choices appear to be use a VPN for the hyperbackup task or open port 5001 long enough to log in so the task can receive an authentication token.
</pre>Note - you may also need to add a firewall exception for Hyper Backup Vault (see above) as well as allow p2p (in a UniFi UDM Pro, under Settings -> Security -> General -> Detection Sensitivity choose Customise and make sure p2p is not selected). You need tcp:443 and udp:3478 outbound open and udp:41641 inbound open. I never got this to work through a UniFi UDM Pro with Tailscale :( but opening the ports did allow me to use the original IP for Hyper Backup.
https://tailscale.com/kb/1082/firewall-ports
= Upgrading to 10GbE =
Some models can be upgraded: e.g. the DS1522+ can take an E10G22-T1-Mini 10GbE module in its network upgrade slot, alongside its four 1GbE ports: https://www.synology.com/en-global/products/E10G22-T1-Mini

Latest revision as of 04:11, 27 September 2024

First things to do

Change IP

Control panel -> (connectivity) Network -> tab network interface -> select LAN and click Edit.

Enable home directories + 2 step verification

Control panel -> User -> Advanced -> Enable home user service + Enforce 2 step verification

change workgroup

control panel -> file services -> smb/afp/nfs -> smb / enable smb service : workgroup

change low volume warning threshold

change the % by going into my Storage Manager -> Storage -> Clicking the ... on the Volume I was looking for then Clicking Settings. If I scroll down on the Settings there is a Low Capacity Notification where you can change the % to 5% from 20%

Enable SSH

Control panel -> (System) Terminal & SNMP enable SSH

Control panel -> (connectivity) Security -> Firewall tab -> create a rule for SSH (encrypted terminal services), enable and save

for root login as admin and then sudo sh

(you can then log in to the machine using user root and the default admin password)

to enable your users to access the terminal vi /etc/passwd and change /sbin/nologin to /bin/ash

Create shared folders

Once you have created your shared folder, also enable snapshots

Migrating from one Synology NAS to another one

In general if you are upgrading NAS it is just a question of:

  1. moving the drives to the new NAS and starting it up
  2. visit https://finds.synology.com
  3. Selecting the migrate option
  4. NB it will not keep the original IP address so make sure to set that. You will probably have problems with NetBIOS, so look below to resolve that.

HOWEVER there are some catches, so make sure to review https://kb.synology.com/en-global/DSM/tutorial/How_to_migrate_between_Synology_NAS_DSM_6_0_and_later

Also the following Youtube video is very worth watching for more information: https://www.youtube.com/watch?v=oGvbNlJaQIg

Copying everything from one volume to another to change RAID type

Because you cannot just change RAID type (https://kb.synology.com/tr-tr/DSM/help/DSM/StorageManager/storage_pool_change_raid_type?version=7) and SHR is a no-brainer, if you want to expand from, say RAID-1 to SHR with multiple disks (with multiple sizes), you need to first add an extra storage pool, then first copy all the shared folders over:

Move a shared folder to another volume

  1. Go to Control Panel > Shared Folder, select a folder you want to move, and click Edit.
  2. From the Location drop-down menu, select the volume where you want to move the folder and click OK. This may take some time, depending on the size of the folder.

NB If you have immutable snapshots on then the move operation below will not work. You need to go to start -> Snapshot replication -> snapshots -> select share -> settings -> disable immutable snapshots -> wait for that long to be able to move the share between volumes.

NB2. during the move all services are shut down.

If you don't want to wait that long, you need to create a new shared folder, set it to the new volume (using Location menu) and copy stuff over using file station. If you do this, then you will not be able to delete the original shared folder after you remove the disk volume, as it is missing. The solution is to use the 'delete' button on the keyboard.

And then you also need to move the packages to the new volume. This can be done in 2 ways - reinstalling the package (which loses all the data) or by migrating them. The best way to do this is the (well maintained) Synology App Mover script.

The script will not move the postgresql database. To do this you need to

cd /volume1/
systemctl stop pgsql
tar czvf database.tgz ./@database/
mv database.tgz /volume2/
tar xzvf database.tgz
systemctl start pgsql

make sure the links are good too:

find . -iname pgsql

You should see:

./volume1/@database/pgsql

./usr/share/pgsql

./var/services/pgsql

check for errors with tail -f /var/log/postgresql.log

If you were using a shared space in photos and have moved the shared space to volume2 you may then need to:

  1. Go to Synology Photos > Settings > Shared Space.
  2. Disable the Shared Space.
  3. Then go back in and Enable Shared Space.

The other way of doing it is by creating a multibackup in hyperbackup of the packages, saving that to volume2, removing volume1 disks and then restoring the backup.

Backups

Prepare for rsync

NB the synology can only push rsync files, it can not be set up to pull files from a remote rsync server.

Control panel -> (file sharing) Shared Folder -> create folder to back up to. Give the user you want to rsync to Read/Write access

Control panel -> File services -> enable rsync service

depreciated


Start (top left) -> Backup & replication -> backup services -> enable network backup service

Start (top left) -> Backup & replication -> Backup Destination -> create Local Backup destination -> backup up data to local shared folder -> the name you put in there will be the module name or the path you can use later.

then add the section in /etc/rsyncd.conf changing only the path and comment values (of course under the module name)

restart rsync with

/usr/syno/etc/rc.sysv/S84rsyncd.sh restart

If you don't do this, it will rsync everything without the user name for some reason

Synology to synology

You backup from the source to the target

ssh into the source machine (mind the slashes at the end! if you don't use the slash it will create a directory in the directory you specify)

  rsync -avn /volume1/sourcedirectory/ user@192.168.0.105:/volume1/targetdirectory/

to check if it works. Drop the n to start the actual copy.

Hyper Backup

This will create a backup directory which you can't browse with file explorer as it stores everything in a hyper backup format (Non browseable). NB this fails pretty badly using l2tp VPN connection. You need to use Tailscale (see VPN below).

Using Hyper Backup Vault

Note: the destination needs the Hyper Backup Vault package installed. Launch Hyper backup on the source. Use the + bottom left to create a new backup job. Select data backup. Select Remote NAS device. fill in the hostname and then you can select which Shared folder it will use as a destination. Note: you cannot use photos or homes. The Directory is the name of the directory it will make on the shared folder on the destination device.

If the firewall is enabled, go to control Panel -> Network -> Firewall -> Edit rules -> Create -> select hyper backup vault as application

Using rsync

Launch Hyper Backup on the source. Use the + bottom left to create a new backup job. Choose rsync. Fill in the data: as username and password you need an account on the target machine. It will then fill the shared folder list with shares available on the target. You cannot back up to the root directory of the target share, so you need something in the directory field. After this it pretty much sets itself up. This actually copies plain files, so you can browse them on the target NAS.

Using Snapshot replication

This is supposed to be fairly effective, but not all DSM versions have this package and it needs to be installed on the recipient as well.

Netgear ReadyNas Ultra setup for rsync to Synology

Here we set up the Netgear to pull data from the Synology

in the /admin interface first

Services -> Standard File Protocols -> ensure Rsync is enabled

Shares -> share listing -> click on rsync icon. scroll up and change default access to 'read only'. Set hosts allowed access to ip of synology (192.168.0.101). Fill in the username and password!

Backup -> Add a new backup job

If you don't fill in the path above it will copy the whole share. If you browse the share you can select a subdir to copy.

Note that the Path needs to be EMPTY before pressing the 'Test connection' button. It will sometimes work if you fill in NetBackup but you're best off doing the test empty, then typing in the path and then apply bottom right to test the backup job.

The schedule consists of a daily job, a weekly job and a monthly job (screenshots omitted).

useful linkies

Readynas rsync howto PDF

synology rsync tutorial

small netbuilder tutorial

indexing media files

If you rsync files into the /volume1/photo or /volume1/video directories, the system does not index them. They need to be copied in using Windows or the internal file manager to be indexed automatically.

In control panel -> Media library you can re-index the files in the photo directory.

In the video station itself under collection -> settings you can re-index.

As you can't set /video/ itself as a directory to use in the video station, you have to set a symbolic link from /volume1/video/movie to wherever you actually want to keep your directories.
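A minimal sketch of the idea using throwaway paths under /tmp (on the NAS you would create the link as root, pointing /volume1/video/movie at your real movie folder):

```shell
# pretend this is where the movies really live
mkdir -p /tmp/linkdemo/library
touch /tmp/linkdemo/library/film.mkv

# create the link the video station would follow
ln -s /tmp/linkdemo/library /tmp/linkdemo/movie

ls /tmp/linkdemo/movie   # -> film.mkv
```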

Warning: indexing can take DAYS!

Preparing ReadyNAS for rsync towards it

Ensure in Shares that the rsync is enabled on at least one share. Ensure the host you are coming from is allowed and that there is a username / password for it.

Symbolic links

Windows can't handle symbolic links on the Synology, so you have to mount directories with

mount -o bind /volume1/sourcedir /volume1/destdir

as root

and then copy the mount command to /etc/rc.local to make it stick after a reboot
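For example, /etc/rc.local would then contain (paths hypothetical, matching the mount above):

```shell
#!/bin/sh
# recreate the bind mount on every boot
mount -o bind /volume1/sourcedir /volume1/destdir
```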

Prepare the Windows machine for Picasa

SSH to the synology, log in as root

mkdir /volume1/photo/Pictures
mount -o bind /volume1/photo/ /volume1/photo/Pictures

edit /etc/rc.local

insert the mount command above into the file - this is for when the synology restarts

in windows explorer map \\invader\photo to T:

Picasa database file locations

There are 2 directories in %LocalAppData%\Google\, which is the same as C:\Users\razor\AppData\Local\Google\


Copy them over before running Picasa for the first time.

NFS

First make sure in Control Panel -> File Services that NFS is enabled.

Then in Control Panel -> File Sharing -> Edit the share -> create NFS Permissions. You should only have to change the IP/Hostname field.

see here

CIFS

sudo mount -t cifs //192.168.0.101/home/ ~/xx/ -o username=razor,uid=razor,gid=users

To find the NAS on Linux with NetBIOS

You have to enable the Bonjour service on the NAS

Control Panel -> File services -> Enable AFP service, then in the Advanced tab also enable Bonjour service discovery

The linux machine needs avahi running. Check using

  sudo service avahi-daemon status

DLNA

The video players such as a TV (eg on an LG TV under "photos and videos" app) that play directly from the NAS use DLNA. The synology (or DLNA server) creates a database which the player reads.

The settings for the Media Server / DLNA can be found under the Synology start menu -> Media Server. It is quite possible that Synology decides it doesn't like your player much and gives it a device type it's not happy with. Under DMA Compatibility -> Device List you can change the profile to Default profile, which may help.

If the database is somehow damaged, you can rebuild it under control panel -> indexing service and then click re-index. This can take days!

To check the status of the rebuild, ssh in (using admin / pw, then sudo su) and you can check to see what's being rebuilt by issuing

  ls -l /proc/$(pidof synomkthumbd)/fd

To monitor what's happening over some time do (nb it will take some time before you see anything appear!)

   while sleep 30; do ls -l /proc/$(pidof synomkthumbd)/fd | grep volume; done

from mcleanit.ca

If the indexing service seems frozen then restart it with

  synoservicectl --restart synoindexd

Streaming

The Audio Station DS Audio app is terrible and hangs a lot.

As alternatives there are Jellyfin, Airsonic and mStream. So far I like mStream: it's very light (388 MB Docker image) as it's file based.

Jellyfin

very library based and uses quite a bit of CPU - no folder view

Airsonic

Comes in 2 flavours: airsonic and airsonic-advanced. The advanced version is a fork created due to frustration with the glacial pace of development of airsonic. Reddit rant here

airsonic-advanced

mStream

linuxserver/mstream

Super lightweight:

  • image is 388 MB
  • fresh install: CPU 0.85%, RAM 111 MB
  • file based

Access at http://netbiosname:3000

Gerbera BubbleUpnP

information

Plex

Complaints about it being slow and jumpy - only for local streaming; also, if you want to use it for internet streaming or downloading, you need to pay

Beets

re-organise your music collection?

NetBIOS name not recognised after changing IP address

The reason is probably that there is a DHCP lease somewhere with the old IP in it. Delete the lease and restart / reload the DHCP server.

You may be able to ping the machine using name. or name.DOMAIN or name.local

You can clear the Windows NetBIOS and DNS caches in an elevated (run as administrator) command prompt using the following commands:

nbtstat -R

nbtstat -RR

ipconfig /flushdns

Converting media files

Photo Station converts your videos to flv as standard, and to mp4 if you have conversion for mobile devices set to on under control panel -> indexing service.

It will also convert your image files to thumbnails as standard.

This can take a few days or even weeks if you upload a lot of new stuff.

To speed the photo thumbnail generation up you can do the following:

/usr/syno/etc.defaults/thumb.conf

  • change the quality of thumbs to 70%
  • halve all the thumb sizes
  • change the XL thumb size to 400 pixels

[1]

To view the status of the conversion, in /var/spool there are the following files

conv_progress_photo 
conv_progress_photo.pT5Pu5  
conv_progress_video  
conv_progress_video.CpHdpS  
flv_create.queue  
flv_create.queue.tmp  
thumb_create.queue  
thumb_create.queue.tmp  

or

  ps -ef | grep thumb

To see the status of the converter

  sudo synoservicecfg --status synomkthumbd

[2]

More info

How to back up data on Synology NAS to another server this should also work for a synology nas to another synology nas

backup via internet NL forum link, aldus deze pagina, poorten:

Network Backup: 873 TCP

Encrypted Network Backup: 873, 22 TCP

backup nas via rsync

How to encrypt shared folders on Synology NAS (uses AES; untick auto mount so the data stays protected if the NAS is stolen, but then you need to input the password via the web interface after every reboot)

How to make Synology NAS accessible via the Internet

How to secure your Synology NAS server on the Internet

Reports

In the Storage Analyzer settings you can set and see where the Synology saves reports. For some reason the Synology saves old reports you have deleted and so you can't create new reports with the same name without deleting the old files:

When you create a report task, a dedicated folder for this report will be automatically created under the destination folder that you have designated as the storage location for reports. When you delete a report task from the list on Storage Analyzer's homepage, you delete the report's profile only, while its folder still exists. To delete the report's own folder, please go to the designated destination folder > synoreport, and delete the folder with the same name as the report.

Moments

https://www.synology.com/en-global/knowledgebase/DSM/help/SynologyMoments/moments_share_and_search

Enable Shared Photo Library

Shared Photo Library allows you and users with permissions to collaboratively edit the photos and albums in Moments. Please note that only users belonging to the administrative groups can enable this feature.
To enable Shared Photo Library:

    Click the Account icon on the bottom-left corner and select Settings > Shared Photo Library > Enable Shared Photo Library.
    Click Next to confirm and enable Shared Photo Library.
    Select users to grant them the permissions to access Shared Photo Library.
    Click OK to finish. Now you can switch between My Photo Library and Shared Photo Library.

Note:

    The shared folder named /photo is the default path for Shared Photo Library.
    If you have already installed Photo Station, the photos in Photo Station can be displayed after the source of photos is switched to Shared Photo Library in Moments. Please note that the converted thumbnails in Photo Station will not be processed again.
    After Shared Photo Library is enabled, the Photo Station settings such as album permission, conversion rule, or other downloading settings will not migrate to or be inherited by Moments.

Installing ipkg

(by plexflixler) Go to the Synology Package Center, click on "Settings" in the top right corner and then click on "Package Sources".

Add the source "http://www.cphub.net" (you can choose the name freely, i.e. "CPHub")

Now close the settings. In the package center on the left go to the "Community" tab.

Find and install "Easy Bootstrap Installer" from QTip. There is also a GUI version if you prefer, called "iPKGui", also from QTip.

IPKG is now installed. The executables are located in "/opt/bin/". You can SSH to your NAS and use it. However, the directory has not yet been added to the PATH variable, so to use it you would always need to use the full path "/opt/bin/ipkg".

You can add the directory to the PATH variable using the following command:

  export PATH="$PATH:/opt/bin"

However, this only adds the directory to PATH for the current session. To make the change permanent, edit /etc/profile:

  sudo /opt/bin/ipkg update
  sudo /opt/bin/nano /etc/profile

Now find the PATH variable. It should look something like this:

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin

At the end of this string, just append ":/opt/bin" (don't forget the colon). Then save and close the file.

Note that this will not automatically update your PATH for the current session. To do this, you can run:

source /etc/profile

To check whether it worked, enter the command:

echo $PATH | tr ":" "\n" | nl

You should see the entry for "/opt/bin" there.

Now you're all set.

Installing mlocate

requires ipkg (see above)

  ipkg install mlocate

Once you have done that, you run

  updatedb

and then you can use the locate command

Universal search

For some reason when installing universal search it doesn't add the shares to the index. You have to do this by hand in settings.

Surveillance Station

Timelapse

Using Action Rules

Using ffmpeg video stitching with a useful note on getting rid of audio if you're using setpts in ffmpeg

Using Smart Time Lapse which converts videos



Secure document storage from EUR 989,-*

AES encryption secures your files

Quickly installed and configured to your requirements

Data is copied to an extra hard disk in case one breaks

Option to back up encrypted files over the internet

  • includes hardware, 1TB storage and implementation costs; excl. call-out charges and any backups / network settings

Docker

Generally the workflow is:

docker -> add image (from the registry or from a url, eg airsonic/airsonic) -> double click image to create a container -> edit the advanced settings (auto restart on, add volumes, network, etc) -> confirm -> run container -> monitor the container in the container part

volumes / persistence

These are locations on the synology that can be mounted in the container.

When installing Docker, a new top-level share called docker is created.

Using add volume you can choose a volume - if it's internal stuff to the container (eg /var/log) you select (or create) the folder(s): /docker/containername/var/log and then use the mount path /var/log to mount that location within the container.

In the docker cli instructions for an image this can be seen as the -v options

So if you are trying to mount your music, you would mount the NAS folder /music/ to /music in the container - but look out for permissions!

Permissions

For music and video: DLNA in Media Server;

PUID and PGID env variables

Advanced permissions are a place to look if you can view the files in the container terminal but the application in the container can't see them! (You can check for advanced permissions by allowing Everyone read access in the normal permissions and seeing if the application can find them then.)

Also check the file permissions of the /docker/imagename/ directories themselves.

Don't forget to check Apply this folder, sub-folders and files!
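A quick way to see the numeric owner of your files - this is what the container's PUID/PGID must match. A throwaway sketch using a temp file; on the NAS you would run stat against your real library path, e.g. /volume1/music:

```shell
mkdir -p /tmp/permdemo
touch /tmp/permdemo/track.mp3

# print the numeric uid:gid that owns the file
stat -c '%u:%g' /tmp/permdemo/track.mp3

# compare with the ids of the user the container should run as
id -u
id -g
```

If the two don't match, fix ownership with chown -R as root.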

network port forwarding / connecting from outside the host

You have two options here:

1. network=host: while starting (=creating) a container from an image, you can enable the checkbox "use same network as Docker Host" at the bottom of the "network" tab in additional settings. As a result you do not need to map any ports from DSM to the container, as DSM's network interface is used directly. You need to take care of potential port collisions between DSM and the container, of course.

2. network=bridged: map ports from the Docker host (DSM) to the container. You cannot access the IP of the container directly, though; you access the mapped port on the Docker host. Port collisions between DSM and containers are possible here as well, but they are easier to fix, since you can just change the Docker host port while it remains mapped to the same container port.

In both cases the port can be accessed via dsm:port, though for option 1 this is only true if you did not change the IP INSIDE the container; if you did, it will be container-ip:port.

Connect to a docker container from outside the host (same network)

So to have the external port be the same as the container port in bridged mode, edit the container and set the local port to be the same as the container port.

In the docker cli instructions for an image this can be seen as the -p options

TODO: How to run docker over https

Environment

Here you can add extra environment variables, eg TZ / PUID / PGID

You can find your user's UID and GID by SSHing into the Synology and typing

  id

or

  id username

The PUID and PGID are the IDs the process inside the container will run as, not Docker itself (which runs as root)

setting up a docker using cli arguments

As an example, mStream from linuxserver (you can find the image in the Docker registry or add it using the url linuxserver/mstream)

has docker cli

docker run -d \
  --name=mstream \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 3000:3000 \
  -v /path/to/data:/config \
  -v /path/to/music:/music \
  --restart unless-stopped \
  lscr.io/linuxserver/mstream

So you have to fill in your own PUID / PGID / timezone in the Environment part of the container. You set the port to be 3000 both inside and outside on a bridged network connection. You select /docker/mstream/config to mount as /config and you select your music library on the Synology to mount as /music. Ensure the permissions are right and you should see the files in mStream.

Logging and troubleshooting

Double clicking a container allows you to see the logs in a tab, which helps a lot. You can also access a live terminal in a tab on a running container.

VPN Connections

How do I connect my Synology NAS to a VPN server?

   Go to DSM Control Panel > Network > Network Interface.
   Click Create > Create VPN profile.
   Follow the instructions in this article to set up a VPN connection.

If you select

  Use default gateway on remote network 

in the VPN profile, you can only connect to the Synology from the remote network, not via QuickConnect: https://kb.synology.com/en-sg/DSM/tutorial/Cannot_connect_Synology_NAS_using_VPN_via_DDNS. However, if you do not select it, you cannot access the Synology via the VPN IP.

I also had to select

  The server or clients are behind the NAT device

for it to work.

Notes:

   We recommend selecting L2TP as the VPN protocol if it is available from VPN service providers. OpenVPN profiles provided by certain VPN service providers may not be compatible with Synology NAS.
   When your Synology NAS is configured as an L2TP or OpenVPN client, you cannot configure it as a VPN server using the same protocol in VPN Server. PPTP does not have this limitation.

VPN Connection Synology knowledge centre

Unfortunately, running Hyper Backup over L2TP results in severe slowdowns and even crashes of the VPN interface.

Wireguard connections

Tailscale

You can connect via Tailscale - this should allow for better hyper backup transfers.

Simply install Tailscale on both NASes. Then log in to Tailscale, look at the IPs Tailscale has assigned to each machine and use those when configuring things like Hyper Backup. Note that to set up Tailscale properly you need to enable outbound connections (explained in the Tailscale docs).

Also, when you are in the Tailscale console and the connection has been made, click on the 3 dots to the right of the machine name and disable key expiry. If your key does somehow expire, first re-authenticate the machine and THEN disable key expiry.

https://www.reddit.com/r/synology/comments/16pe5qn/struggling_to_get_hyper_backup_working_on_remote/

https://tailscale.com/kb/1131/synology#enabling-synology-outbound-connections

 Enabling Synology outbound connections

Synology DSM7 introduced tighter restrictions on what packages are allowed to do. If you’re running DSM6, Tailscale runs as root with full permissions and these steps are not required.

By default, Tailscale on Synology with DSM7 only allows inbound connections to your Synology device but outbound Tailscale access from other apps running on your Synology is not enabled.

The reason for this is that the Tailscale package does not have permission to create a TUN device.

To enable TUN, to permit outbound connections from other things running on your Synology:

    Make sure you’re running Tailscale 1.22.2 or later, either from the Synology Package Center or a manually installed *.spk from the Tailscale Packages server.

    In Synology, go to Control Panel > Task Scheduler, click Create, and select Triggered Task.

    Select User-defined script.

    When the Create task window appears, click General.

    In General Settings, enter a task name, select root as the user that the task will run for, and select Boot-up as the event that triggers the task. Ensure the task is enabled.

    Click Task Settings and enter the following for User-defined script.

    /var/packages/Tailscale/target/bin/tailscale configure-host; synosystemctl restart pkgctl-Tailscale.service

    (If you’re curious what it does, you can read the configure-host code.)

    Click OK to save the settings.

    Reboot your Synology. (Alternatively, to avoid a reboot, run the above user-defined script as root on the device to restart the Tailscale package.)

Your TUN settings should now be persisted across reboots of your device.
If the Synology firewall is enabled: adjust the firewall settings

By enabling TUN, Tailscale traffic will be subject to Synology’s built-in firewall.

The firewall is disabled by default. However, if you have it enabled, add an exception for the Tailscale subnet, 100.64.0.0/10. In Main menu > Control Panel > Security > Firewall, add a firewall rule in the default profile that allows traffic from the source IP subnet 100.64.0.0 with subnet mask 255.192.0.0.
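As a sanity check on that subnet/mask pair, a throwaway one-liner (assumes python3 is available, as it is on most Linux boxes):

```shell
# a /10 prefix is exactly the 255.192.0.0 netmask the firewall rule asks for
python3 -c "import ipaddress; print(ipaddress.ip_network('100.64.0.0/10').netmask)"
# -> 255.192.0.0
```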
Do you also have 5001 forwarded, since you require GUI access in order to validate the login?

[...]

A thread on synoforum claims that this is not possible, i.e. port 5001 is hard coded:

https://www.synoforum.com/threads/hyper-backup-task-remote-connection-issue-due-to-dsm-7-2-resolved.11633/

Your choices appear to be use a VPN for the hyperbackup task or open port 5001 long enough to log in so the task can receive an authentication token.

Note - you may also need to add a firewall exception for Hyper Backup Vault (see above) as well as allow p2p (in the UDM Pro, uncheck it under Settings - Security - General - Detection Sensitivity: choose Customise and make sure p2p is not selected). You need tcp:443 and udp:3478 outbound open and udp:41641 inbound open. I never got this to work through a UniFi UDM Pro with Tailscale :( but opening the ports did allow me to use the original IP for Hyper Backup

https://tailscale.com/kb/1082/firewall-ports

Upgrading to 10GbE

Some models can be upgraded, eg the DS1522+ can have its 4 1GbE ports supplemented with an E10G22-T1-Mini 10GbE module https://www.synology.com/en-global/products/E10G22-T1-Mini