Keep your firewall rules tight and make sure patches are applied ASAP; Synology doesn't have a great track record when it comes to security. I would personally disable any public access.
Some things I've had to consider when setting these up myself:
Make sure you don't treat the NAS itself as a backup. Keep a third copy of your data, ideally something like an external hard drive you bring to your parents' house or a friend's house, somewhere that's not with you.
Make sure you turn on any corruption scanning that's available. You should have it doing the equivalent of a scrub, where it reads through all of your data and makes sure there are no bit-rot issues.
The same is true for SMART tests. Make sure you're running those on the drives, and that you have some form of alert when they fail so that you can replace them quickly.
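If you'd rather check from the command line over SSH than the DSM UI, smartctl is available on Synology. A minimal sketch, assuming the first drive shows up as /dev/sata1 (device names vary by model, so list them first):

# See what the drives are actually called on your unit.
ls /dev/sata* /dev/sd* 2>/dev/null
# Kick off an extended self-test on the first drive.
sudo smartctl -t long /dev/sata1
# Later: check the result plus the attributes that predict failure.
sudo smartctl -a /dev/sata1 | grep -iE 'result|reallocated|pending|uncorrect'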
You may have trouble doing 6 TB drives in RAID 5. When a drive fails, a rebuild of the array can take a day or two, and that's the time your other drives are most likely to fail. It's probably going to be fine, but be doubly sure you keep a backup in case it happens.
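Synology volumes are regular mdadm arrays under the hood, so if a rebuild ever kicks off you can watch its progress over SSH. A quick sketch (the md device number is a guess; take it from the mdstat output):

# Shows every array, plus rebuild/resync progress and an ETA.
cat /proc/mdstat
# More detail on one specific array.
sudo mdadm --detail /dev/md2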
Assuming this is your plan, I wouldn't worry about using an SSD for caching to make your NAS faster. You're probably going to be more limited by your network than by your disk speed: with a 1 Gbit/s network you can only read/write about 125 MB/s to the NAS, which even a single modern hard drive can roughly keep up with. Use those SSDs for VMs instead.
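The back-of-the-envelope math, plus a quick way to measure what your network actually delivers (iperf3 isn't part of DSM, so running it in a container on the NAS is an assumption):

# 1 Gbit/s ÷ 8 bits per byte = 125 MB/s theoretical ceiling;
# expect roughly 110-118 MB/s in practice after protocol overhead.
iperf3 -s                   # on the NAS side, e.g. in a container
iperf3 -c nas.local -t 10   # on your desktop; hostname is a placeholder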
I've had 2 Synos over the last 8 years or so and they've been fantastic. I started out with just a simple NAS on one of the J models and then got a 918+ when it came out. I run about 30 Docker containers via docker-compose on mine for various homelab-type things (there's a compose sketch after the list below). With 32GB on yours you'll almost certainly want to do something else with it as well, just to get use out of it; normal NAS operations barely use any RAM at all.
A quick glance over my list:
❯ docker ps --format '{{.Names}}'
wx # static web server for weewx output
weewx # process mqtt weather data from mqtt
aw2mqtt # receive weather station data
mosquitto # mqtt!
bitwarden # password vault, it's actually vaultwarden
traefik # docker aware proxy for all the webstuff
loki # logs
minio # s3 storage
lychee # photo sharing
photoprism # also photo sharing
mariadb # smells like mysql
act-runner # runs github actions locally
gitea # web git system
grafana # graphs!
prometheus # data for graphs!
prometheus-blackbox # get data for prom from weirdstuff
node-exporter-fatty # present NAS data to prometheus
promtail-fatty # get logs from NAS for loki
keycloak # authentication, fancy.
unifi-controller # keep the unifi crap in line
dashy # try to be organized, fail
step-ca # local certificate authority, supports acme
speedtest-tracker-att # monitor speedtest for att internet
speedtest-tracker-spectrum # monitor speedtest for spectrum internet
alertmanager # alert me if any of this crap breaks
coredns # fancy local DNS
cloudflare-api # talk to cloudflare to rearrange dns and stuff
cadvisor # provide container data to prometheus
postgres # the other database server
redis # it's just redis.
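For anyone who hasn't done compose on a Synology before, it's just a directory with a docker-compose.yml in it. A minimal sketch with two of the services above, not my actual config (the path and image tags are assumptions):

# /volume1/docker/docker-compose.yml
version: "3.8"
services:
  mosquitto:
    image: eclipse-mosquitto:2
    restart: unless-stopped
    ports:
      - "1883:1883"       # plain MQTT
    volumes:
      - ./mosquitto:/mosquitto
  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    ports:
      - "3000:3000"       # web UI
    volumes:
      - ./grafana:/var/lib/grafana

Then docker-compose up -d from that directory brings the stack up.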
I've had a couple of Synology NASes for about 8 years or so.
A couple of things that come to mind:
32GB of RAM is probably overkill! Also, Synology likes a certain type of error-correcting RAM and will show a warning if you use non-Synology modules. I have 4+4 GB and it's plenty for my use case.
Label your HDDs with the slot they occupy in the NAS. You may have to replace the NAS someday, and apparently the drive order is important.
You turn off the unit by holding the power button (it'll start a safe shutdown procedure rather than just cutting the power like a PC would).
You can install apps, including Docker, which opens up lots of possibilities. I have Jellyfin (media server) as well as Usenet apps.
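If you go the Docker route, a minimal sketch of running Jellyfin (the volume paths are assumptions; point them at your own shares):

docker run -d --name jellyfin \
  --restart unless-stopped \
  -p 8096:8096 \
  -v /volume1/docker/jellyfin/config:/config \
  -v /volume1/media:/media:ro \
  jellyfin/jellyfin:latest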
Take advantage of users and privileges. For example, you should have a "user" just for backups; this lets you limit which shared folders it can access and how much storage it gets, and it'll prevent you or someone else from deleting things by accident.
If you choose to encrypt your shared folders, make sure to save the keys somewhere else.
Make sure you set up an email address so that the drive health monitor can send you periodic health updates as well as warnings when something is failing. I did not do this, and much time passed (as it does). I happened to log in one day to find both drives (RAID 1 config) in an unhealthy state. Fortunately, I was able to plug in an external USB drive and copy everything off before replacing the drives.
If you're feeling paranoid, you may want to consider buying drives from different manufacturers, or at least the same drive from different sources so that they will be from different batches. This can help with correlated drive failures due to a manufacturing defect.
A useful thing I have set up is automated backups of my other cloud services (Google Drive, Dropbox). There is a "no delete" option so that it only adds files that show up but doesn't delete ones that are removed, which is handy as a guard against someone removing all your files from the cloud drive, accidentally or on purpose.
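That option lives in DSM's Cloud Sync UI; if you'd rather script the same behaviour, a sketch using rclone (the gdrive: remote is an assumption you'd set up with rclone config):

# rclone copy only adds and updates at the destination, never deletes,
# which gives you the same "no delete" guarantee.
rclone copy gdrive: /volume1/backup/gdrive --progress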
Some things I wish I'd known before I bought my 923+, or learned while using it:
The built-in reverse proxy sucks if you want to do anything more than the most basic of things. My solution was to run a small Debian VM with nginx and direct all outside traffic to that instead. This worked perfectly. But then again, I have some experience using nginx for this; it may not be the best solution for you.
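For flavour, the kind of nginx config that setup boils down to; the hostname and upstream address are made up, and TLS is left out for brevity:

# /etc/nginx/conf.d/nas-apps.conf on the Debian VM
server {
    listen 80;
    server_name photos.example.home;

    location / {
        proxy_pass http://192.168.1.50:2342;   # app running on the NAS
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}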
It is performant enough to run VMs, but be aware that some specific actions can be very slow. In my case, things like running Debian's apt to upgrade the system, or building a Docker container, were agonizingly slow. I did not dig further, but I suspect that software often forces filesystem syncs, which I can imagine is a slow operation on a device like this: it has a lot of file caching machinery, lots of calculations for the RAID setup, etc. But this is all speculation on my side.
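If you want to test that theory on your own unit, you can compare buffered writes against synced writes inside the VM; a rough sketch:

# Buffered: lands in the page cache first, so it looks fast.
dd if=/dev/zero of=testfile bs=1M count=256
# Synced: forces every block to disk, similar to apt/dpkg's
# fsync-heavy behaviour. Expect this one to be far slower.
dd if=/dev/zero of=testfile bs=1M count=256 oflag=dsync
rm testfile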
In the end I decided I wanted the automated shutdown and wake-up, so I removed all containers and VMs. The goal then becomes having it start up when you want to access a file share, in my case mounting a folder in Linux over NFS. This has some gotchas and I plan to write a blog post about it, but there is already a good one here: https://dj-does.medium.com/nfs-mounts-and-wake-on-lan-25c0c1d55c90
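The basic shape of the wake-then-mount trick, if you want to experiment before reading that post (the MAC address, hostname, and paths are all placeholders):

# Wake the NAS (WOL has to be enabled in DSM first),
# wait until its NFS exports answer, then mount.
wakeonlan 00:11:22:33:44:55
until showmount -e nas.local >/dev/null 2>&1; do sleep 2; done
sudo mount -t nfs nas.local:/volume1/share /mnt/nas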