I am sorry if this is the wrong community to ask in; while I have been on Lemmy for more than a year now, I am still learning my way around, and this seems like a relatively active community in a relevant area.
Right, on to my questions!
I am planning to build a NAS over the summer. At the moment, all of my personal photos are stored on a single mechanical 2TB Seagate drive that is about 4 years old.
I have other media on another drive that is older but larger; all in all, I expect that I have about 8TB of data that I care about.
I work as a 365 admin and was the main Linux admin at my last place of work; I am also a hobby photographer in my spare time.
Currently, I am looking at using either the N4, the N3 or the N5 from Jonsbo; the N4 is a beautiful case!
I am thinking of running four 6TB drives in a software RAID like this:
Linux > mdadm (RAID 5) > LVM > ext4
My thinking is that I will probably need to migrate to new drives every X years or so. With LVM, I can just add a new (larger) external drive to the VG, move the LV from the old drives to the external drive, remove the old RAID from the VG, put in new drives, set up mdadm, add the new RAID to the VG, and move the LV back.
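For what it's worth, the whole stack, and the later migration, is only a handful of commands. This is just a sketch: the device names (/dev/sd[b-f]) and the VG/LV names are placeholders I made up, so don't paste it anywhere near real data:

```shell
# --- Initial setup (sketch; all device/volume names are placeholders) ---
# 1. Build the RAID 5 array from the four 6TB drives.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# 2. Layer LVM on top of the array.
pvcreate /dev/md0
vgcreate nas_vg /dev/md0
lvcreate -l 100%FREE -n media nas_vg

# 3. Format and mount.
mkfs.ext4 /dev/nas_vg/media
mount /dev/nas_vg/media /srv/media

# --- Later migration (the plan described above) ---
# Add a larger external drive to the VG and move the LV off the old RAID.
pvcreate /dev/sdf
vgextend nas_vg /dev/sdf
pvmove /dev/md0 /dev/sdf    # moves all extents off the RAID, online
vgreduce nas_vg /dev/md0    # old RAID can now be torn down and rebuilt
```

pvmove works while the filesystem is mounted, which is the main attraction of this layout.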
Am I overthinking this? This NAS will be my main media machine and will probably see a decent amount of use over the years.
I have thought about setting up OpenMediaVault or TrueNAS as the OS, but having never run them, I wonder if they will be as flexible as I want them to be.
I am currently considering just running Debian and setting this up from the terminal, but I am not a big fan of doing SMB settings in the terminal. I did consider using Cockpit as a web admin tool to monitor the system once it is set up; can I do the SMB config from that?
I am apprehensive about a manual SMB config, as the last time I did it, it was a weird mess for the team who had to use it…
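For what it's worth, the scary part of a manual Samba setup is usually just a short share definition in /etc/samba/smb.conf. A minimal sketch (the share name, path, and group below are made up):

```
[media]
   path = /srv/media
   browseable = yes
   read only = no
   valid users = @nasusers
```

You can sanity-check the file with testparm before restarting smbd, which catches most of the "weird mess" class of mistakes.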
I am more familiar with AMD hardware than Intel, and I am looking at the old AM4 platform, but what I don't know is how much power a home-built NAS will use in standby or when active.
I bought a Synology years ago and it served me well. I bought a newer model that was smaller, two drives.
I like that I don’t need to think about it. It has a simple offsite cloud backup that’s pretty cheap, or you can set up your own. It supports docker and the software packages it supports are good enough for me.
I have been shilly-shallying between buying a Synology or building my own machine, and right now I am probably going for what I have experience with.
Cloud backup sounds cool, but my current thinking is that my ideal system would have two NAS machines: the primary is the one I access in my day to day, and then I run BorgBackup or rsync between them every night to capture all changes. Over time I could get a few external drives and run my backups over sneakernet to my parents' place; low update frequency, sure, but it would be a simple way to do it.
As a cold backup, I do currently have an old Intel NAS that I have had for 10+ years; it was used, and had used disks, when I got it.
I used TrueNAS with all of my random old drives in a media center case, and it works well. I am uneducated in the technicalities of sysadmin etc., but the basic setup was easy for a doofus like me to figure out. It looks like there is way more in there to mess with if you have the know-how.
Do you really want to run this yourself? If the data is that important to you, I'd probably rather invest in something like a Synology NAS. They make sure that updates won't kill your data, everything stays secure, and you don't have to mess with mdadm or LVM yourself. Under the hood, Synology's SHR also uses bog-standard MD and LVM. So even if the NAS dies, you can still read your data on any Linux machine. But you won't have to think about updates potentially breaking anything, and it has a plethora of features around storage management that you can configure with a few clicks instead of messing around with system packages, config files and systemd.
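In case anyone ever needs it, reading SHR disks on a plain Linux box is roughly this (a sketch, assuming mdadm and lvm2 are installed; the VG/LV names shown are the typical Synology ones and may differ on your unit):

```shell
mdadm --assemble --scan        # detect and assemble the MD arrays
vgscan                         # find the LVM volume group on top
vgchange -ay                   # activate its logical volumes
lsblk                          # locate the data LV
mount /dev/vg1/volume_1 /mnt   # typical Synology naming -- check lsblk first
```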
TrueNAS Scale is a good option. ZFS is a very resilient filesystem. I lost a lot of data to a software RAID in the past that didn't checksum the data, and now I have an affinity for ZFS. I believe they have added the ability to grow with larger drives as well: just disconnect drive A, insert a new, larger drive B, and let it resilver; once you've replaced them all, the volume grows. Set it up, see how you like it, and move your data over if you do.
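That grow-by-replacement process is a couple of commands per drive. A sketch (the pool name "tank" and device names are placeholders):

```shell
zpool set autoexpand=on tank   # let the pool claim the extra capacity
zpool replace tank sda sdf     # swap an old drive for a larger one
zpool status tank              # watch the resilver progress
# Repeat for each drive; the pool grows once the last resilver finishes.
```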
You may be different, but given that your current situation is a couple drives sitting on a desk for 4+ years, I wouldn’t worry about expansion so much. I built a nas a while ago and figured I’d upgrade it, and I haven’t. Until it’s full, it’ll keep going.
Also check price/gb before settling on 6TB. That’s small.
I know about TrueNAS, but have never run it on dedicated hardware; at most I have run it in VirtualBox to test it out.
Though to be fair, I have never worked with mdadm either. I did work with ext4 and xfs when I was a Linux admin; the ext4 filesystems I ran were set up on LVM. But it was just VMs, and I never had to consider the hypervisor's RAID, as another team dealt with that.
You might want to check out the self-hosted communities on Lemmy for more info.
If you want to use Cockpit, the 45Drives Cockpit modules make dealing with SMB easier. I think TrueNAS is a better option. If you want more flexibility, then Proxmox VE is a popular choice.
Generally, desktop hardware is surprisingly power efficient, especially with lower-midrange components. Right now my home server is running on an e-waste HP EliteDesk.
For software, I'd really go for a config that uses ZFS over ext4 for the data storage. ZFS is so battle-tested that anything you might find you want or need to fix or change, someone else has already documented the same situation multiple times over. Personally, I went with a config like Apalrd's: Proxmox for a stable host OS with good management and to create the ZFS pool, then a container running Cockpit for creating and managing the shares.
Currently that server has an 800GB Intel Datacenter SSD for boot and VM storage, and 2x 4TB HDDs in a ZFS mirror for NAS storage. With an i5-4590, it's running 6 Minecraft servers via Crafty Controller, Jellyfin, and the Samba shares, and I've spun up other random servers and VMs as desired/needed without trouble. Basically all of the services which run 24/7 are in LXCs, because running Debian VMs on my Debian host seems too redundant for my tastes.
ZFS is damned cool, but it is something I have limited experience with, at the moment I just want something I am familiar with to get something set up.
I will probably get a lab machine within three years or so to learn more about how to deal with ZFS and TrueNAS over time before I feel comfortable running it myself.
You could use an OS like Unraid that handles ZFS for you. You don’t really need to know how ZFS works if you use Unraid since it’s all set up through the web UI. You can always search for how to do things if needed :)
ZFS has bitrot protection, which is very useful for important files. Whenever it writes a block, it computes a checksum for that block and stores it in the metadata. When you read a file back, it can detect if a block is corrupted on the drive it's reading from (the checksum won't match), and it'll silently/automatically repair it using the redundant data on a different drive.
AFAIK none of the other file systems support this. You need to use ZFS RAID rather than mdadm RAID for it to work.
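You can also trigger that checksum verification pool-wide on a schedule with a scrub; a sketch (pool name "tank" is a placeholder):

```shell
zpool scrub tank       # read every block and verify its checksum
zpool status -v tank   # shows repaired bytes and any unrecoverable files
```

Most setups run this from a monthly timer so silent corruption gets caught and repaired before a second copy goes bad.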
Sounds like I need to get another computer in addition to my NAS to do some testing with.
To be honest, I am not 100% locked in on mdadm + LVM + ext4; I am used to LVM and have a decent understanding of it, and basically no understanding of ZFS.
From what I can see, bit rot is not a huge problem for home users. I'll add that to the plus side of using ZFS, and decide what I will do when I get the hardware.