/g/ - Technology






File: 176083844253654.jpg (84 KB, 649x1024)
Dual rack edition

previous: >>107113287

READ THE (temp)WIKI! & help by contributing:
https://igwiki.lyci.de/wiki/Home_server

/hsg/ is about learning and expanding your horizons. Know all about NAS? Learn virtualization. Spun up some VMs? Learn about networking by standing up an OPNsense/pfSense box and configuring some VLANs. There's always more to learn and chances to grow. Think you’re god-tier already? Set up OpenStack and report back.

>What software should I run?
Install Gentoo. Or whatever flavor of *nix is best for the job or most comfy for you. Jellyfin/Emby/Plex to replace Netflix, Nextcloud to replace Google, Ampache/Navidrome to replace Spotify, the list goes on. Look at the awesome self-hosted list and ask.

>Why should I have a home server?
De-botnet your life. Learn something new. Serving applications to yourself, your family, and your frens feels good. Put your tech skills to good use for yourself and those close to you. Store their data with proper availability redundancy and backups and serve it back to them with a /comfy/ easy to use interface.

>Links & resources
Cool stuff to host: https://github.com/awesome-selfhosted/awesome-selfhosted
https://reddit.com/r/datahoarder
https://www.reddit.com/r/homelab/wiki/index
https://wiki.debian.org/FreedomBox/Features
ARM-based SBCs: https://docs.google.com/spreadsheets/d/1PGaVu0sPBEy5GgLM8N-CvHB2FESdlfBOdQKqLziJLhQ
Low-power x86 systems: https://docs.google.com/spreadsheets/d/1LHvT2fRp7I6Hf18LcSzsNnjp10VI-odvwZpQZKv_NCI
SFF cases https://docs.google.com/spreadsheets/d/1AddRvGWJ_f4B6UC7_IftDiVudVc8CJ8sxLUqlxVsCz4/
Cheap disks: https://shucks.top/ https://diskprices.com/
PCIE info: https://files.catbox.moe/id6o0n.pdf
>i226-V NICs are bad for servers
>For more SATA ports, use PCIe SAS HBAs in IT mode
WiFi fixing: pastebin.com/raw/vXJ2PZxn
Cockpit is nice for remote administration

Remember:
RAID protects you from DOWNTIME
BACKUPS protect you from DATA LOSS
>>
File: ehehetilde.jpg (75 KB, 560x578)
>>107156094
look at those racks
>>
https://www.amazon.com/dp/B0F86CWBBC
Got the 4U version of this case. I'm going to be moving my server into this case and upgrading to a better power supply in an attempt to fix my disk stability (iffy read/write issues on absolutely brand new disks). Also getting sick of having to open my case to get my drives in and out.
>>
i buy enterprise gear and disconnect the fans
i am based
you are not.
>>
File: 1336885124031.jpg (17 KB, 223x226)
>>107156094
OH MY-
>>
>>107156125

I've got nothing but bad experiences using backplanes. But I guess my cheaper inter-tech case was just not expensive enough to have well functioning backplanes. I do wish I had quick swap but then again I just add disks, hardly ever remove them
>>
If I have two windows PC's and want to have the same internet browser tabs and data across both devices, would it be possible to save the application data to a directory on a home server or NAS?

Or would that not work and it'd have to be on the same system the software is installed to?
>>
>>107156581
just rdp bro it takes 3 clicks
>>
>>107156266
>then again I just add disks, hardly ever remove them
I came to this conclusion too. I've replaced one disk in 7 years and it was a month after the disk arrived. $500+ hotswap chassis aren't worth it, I have a Fractal Design r5 chassis for one system and my other one is going in a Rackowl 15 bay chassis for $200.
I was thinking about building my own 24 bay chassis with a supermicro backplane, some 3d printing and plywood but even the backplanes go for $150-$200. Plus 24 caddies at $4 each for another $100. That's within spitting distance of just buying a chassis yourself.
>>
File: 84c6j5nzqa0g1.jpg (1.34 MB, 3024x4032)
>>107157013
Same. I don't need a fancy backplane. I don't replace disks often enough for me to need one. Plus it just introduces more problems with thermals, and in turn noise, and adding another point of failure.
I don't plan on getting a rack since my room is small and I can't put it anywhere else. I live with other people and I don't think they'll appreciate a big bulky rack server.
I plan to do something like pic related. Buy those HDD cages and just build a cabinet to house them. Maybe I should get a 3D printer and make it more fancy, but right now I just plan to nail some MDF boards with shelves and holes for fans and call it a day.
>>
File: r5_backplane.png (2.42 MB, 1080x1440)
>>107157121
HDD cages are a good idea. I bought an extra one from my r5 so I can house 13 drives total. The only problem with mine is sometimes they have this buzzing sound in the cages until I nudge my case, then it goes away. I have rubber dampers on each sled so I'm not sure which drive is causing the buzzing but it's for sure one of the metal sleds vibrating against the cage.
I've had picrel in my ever-increasing 3d print backlog for a diy backplane for my r5 case. Maybe the dimensions are the same for your cages and you can have the best of both worlds. FYI, lots of local libraries have free 3d printers nowadays; you just book a time and send in an STL.
https://www.printables.com/model/235313-fractal-define-r5-sata-backplane
>>
Going to rebuild my NAS over the holidays. Has 8 SATA ports, planning on using 20TB drives for the new build. Currently has 4 drives installed, so only room for 4 more drives to start the new build. Plan is to migrate the data from the old drives to new, and replace the old drives with more 20TB drives and some SSDs.

I'm thinking about how to expand this. Should I do two striped drives and then mirror them? Or go RaidZ2? I think adding another pair of striped drives to the mirror would be more beneficial in the long run, and get me the bandwidth and storage capacity that I'm looking for. It's basically three raid0 pair of drives all in a raid1 if that makes sense.

Am I going at this wrong? Need a sanity check. SSDs would occupy the other two SATA ports for cache, metadata, OS, VMs, apps, whatever.
>>
>>107157238
Unfortunately my local library doesn't have 3D printing.
>>
>>107155042
>so what would be the least retarded way to upgrade my zfs pool in proxmox, should I make a new pool altogether or add new larger drives to the pool
Depends on what you have and what you're trying to do. We need more information.
>>
>/r/homeserver is now "sirs i have old pc please give me ideas"

sad.
>>
i powered my server on from my phone :3
>>
>>107157510
eww, downtime
>>
what do i do with this. what's wrong with openssl certs?
>>
>>107157541
for me, it's EasyRSA
or acme-tiny if you have to deal with other people...
>>
>>107156094
MILCC
I
L
C
C
>>
>>107157360
build an 8 disk raidz2.
>>
>>107157648
I just want to use 6 disks mask, and people say resilvering takes forever. I appreciate the extra checksums with Z2, but if I have Z1 on three stripped arrays I think the speed increase would be worth it, I don't need more than 40TB, and want to keep the backups as easy as possible, as well as future growth.
>>
>>107157675
>I don't need more than 40TB
Then why are you planning to use 8 20TB drives?

Do a Raidz1 with three
zfs-send snapshots to a second Raidz1 as backup
Send me the remaining two drives.
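Rough sketch of the first two steps, if it helps. Pool names and the /dev/sdX paths are placeholders; use /dev/disk/by-id/ paths on a real build:

zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
zpool create backup raidz1 /dev/sdd /dev/sde /dev/sdf
# then periodically snapshot and replicate
zfs snapshot -r tank@weekly-2025-46
zfs send -R tank@weekly-2025-46 | zfs receive -F backup/tank
# later runs only send the delta with zfs send -I <old-snap> <new-snap>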
>>
>>107156796
I don't wanna have to have both PCs on at once though
>>
>>107157764
just write it down on a piece of paper then.
>>
>>107156581
>>107157764
This has been a thing since the early 90s
https://en.wikipedia.org/wiki/Roaming_user_profile
It's complete shit
>>
>HDD shortage
FUUUUUUUUCK
>>
>>107157783
What's shit about it?

To be clear I don't need or want other applications or windows settings synced across both devices, I just want the Vivaldi browser profile data to be

Vivaldi has a built in sync function but I'm reading inconsistent info about what kinds of settings or tab stuff it can actually keep synced or not
>>
Jay here.
Go fuck yourself you dirty slut.
>>
>>107157675
>6 disks mask
>Z1 on three stripped arrays
?????
Also if you need more ports and have the room get an hba from fleabay for like $30
>>
Anyone using Asus XG-C100C?
Mine isn't throttling, but I feel like the temps are a bit high (pic related).
Any tips on cooling mods (besides adding a fan to blow over it)?
>>
>>107157753
I literally said 6 drives anon.
>>107157853
it's a pool of three zdevs, each of them is in raid0
Should I do two zdevs mirrored of three drives each in Z1?
>>
>>107157874
>Any tips on cooling mods
a Mellanox CX4
>>
Is there anything wrong with LVM raids? they seem so simple to set up.
>>
>>107156094
sauce on those racks?
>>
File: cat cry.gif (1.64 MB, 221x244)
is memory/hdd prices fucked right now? should I wait to buy anything?
>>
>>107158718
i was about to well actually you, but i just checked ali and amazon for sketchy used drives and those prices are fucked. 30-40 dollars a tb lmao. looks like all my drives became read only for a while
>>
>>107157675
You're being silly. Build a nice RAID and stop dicking around with all of these weird configs. You'll find a use for the space and you'll only resilver if a drive dies.
>>
>>107157884
Right now if you lose any 1 drive you lose your whole pool so practically anything is an improvement. Unless you have an actual use case for that much mirroring just do a plain 6 drive z2. Even 6x z3 will give you over 40% more usable space over a 3x z1 mirrors with similar fault tolerance.
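If you go that route it's a one-liner, something like this (disk paths are placeholders, use /dev/disk/by-id in practice):

zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# 6x 20TB in raidz2 = roughly 80TB usable before overhead, and any 2 drives can die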
>>
>>107156094
who is that semen demon
>>
>>107158718
I don't know about HDDs, but RAM is going to get worse before it gets better. Prices will not peak till end of Q1 or Q2 2026. This doesn't mean it will drop back to today's price shortly after that, it just means prices are expected to slowly trickle down from whatever plateau they reach then. This is the OPTIMISTIC take.
>>
File: 1448831239199.jpg (35 KB, 460x775)
>>107157675
Resilvering only takes forever if you have heavy fragmentation and are using tiny recordsizes. With larger records and metadata special devices your resilvers can typically cruise at a significant portion of the linear write speed of a drive, provided that the pool is largely idle of course (which 99% of /g/ home servers will be most of the time). You're worried about resilver times but are choosing to use a topology that is LESS safe because????

>I appreciate the extra checksums with Z2
>but if I have Z1 on three stripped arrays
What the fuck am I reading?

>I think the speed increase would be worth it
What speed increase? What the fuck are you talking about?

>>107157884
>zdevs
the fuck is a zdev? you mean a vdev?
>Should I do two zdevs mirrored of three drives each in Z1?
This isn't a valid topology.

Is English not your native language or something? Your posts in the last thread were damn near incomprehensible and this is even worse.
>>
>>107158738
>30-40 dollars a tb
It's been like that in Canada for a year now. I'm predicting $60/TB.
>>
File: Fuck.png (313 KB, 556x700)
>>107156094
Decided to build a home server since the web is going to shit. I ordered most of my parts before the price hikes took place.
>fall for the recertified meme on server HDDs, and spend far too long waiting for a "good" drive to pop up at a reasonable price.
>prices continue to rise and drives are not coming back in stock. PANIC!
>reevaluate my server needs, realize I only need it for media storage and a few applications.
>stop looking for 24TB Exos server drives and look into 12TB Ironwolf NAS drives that are quieter and come with a longer warranty.
>all Seagate Ironwolf/Ironwolf-Pro drives are stated to be CMR, yet half of the drives don't list this on the datasheet.
>buy 4 12TB ironwolf drives anyways, because I can't waste anymore time with prices rising and the HDD/SSD shortage on the way.
Kinda feel like I jumped into the home server hobby at the right and wrong time.
>>
>>107157916
Nope, cant do
1. i only have a 4x pcie slot free
2. i dont want to deal with sfp+ connectors
>>
>>107157874
Congrats Anon you've encountered the downside of 10G-BaseT
>>
Hate on me all you want but I just use SnapRAID and MergerFS.
>hurr parity on a cronjob
okay and? This isn't production it's my home server.
>you could lose files if a drive failure happens before a sync
that's why it runs daily

Like it's already saved me from drive failures twice in the 5 years I've been running it.
>>
>>107157874
>>107160799
>besides adding a fan to blow over it
>Aquantia AQC107
Yeah... that's a no.

You can certainly try to strap a heat pipe to something, but the real answer is to get a small fan and zip tie it to the card or some location nearby so that you have airflow across it. It doesn't have to be pretty. It doesn't need to move a lot of air. It just needs to consistently have some degree of airflow across it.

>>107160724
>look into 12TB Ironwolf nas drives, that are quieter and come with a longer warranty.
Ironwolf drives (and all "NAS" drives) are stupid. It's not that their performance sucks (although on many models it does because they're 5900RPM), it's that you're paying extra for a data recovery warranty. If a drive fails, you don't want to rely on data recovery that MIGHT work. You don't even want to have downtime. RAID solutions and having proper backups let you handle this. Warrantying the physical drive and getting a replacement is what you want. The data recovery warranty is completely pointless if you aren't being retarded, and if you are using it on a NAS they can't really recover anything without the rest of the pool anyways.

For what it's worth, most of the certified refurbs have under 2 weeks of power on time. They aren't rebuilds. That's not really a thing on modern helium drives. They're frequently pulled from preassembled systems where clients don't need all of the drives, or need something else for some particular reason. This is less true as you start going down to low capacity drives, but for most drives 12TB and up it'll be true, and for anything past 20TB it's practically a certainty. Yeah, they don't come with the full 5 year warranty. So what. Unless you're cooking them and abusing them constantly, most of them are going to last more than 5 years if they don't blow up in the first couple of months.
>>
>>107160724
>>107160976
Continued.

The bathtub failure curve is kind of bullshit with modern components. It's more like a rapid drop followed by a slowly increasing rise over long periods of time. There's no sudden spike at the end the way the bathtub curve says there is. The point here being that the refurbs will either fail within their warranty because they were mishandled or something, or they'll be indistinguishable from a factory new drive of the same age in the same environment, so the extended warranty is unlikely to matter much.

A home server isn't going to have the kind of vibrations and heat that you'd get in a real datacenter, so they're much less likely to die to begin with. It's one of the reasons why the backblaze data isn't actually that relevant for people here. Backblaze tortures their drives. Sure, if you're torturing them, it's potentially useful data, but if you're pampering them, then who's to say that one of their middling drives doesn't come out on top reliability-wise for your use case? There's zero data provided by them to say one way or the other.

TL;DR don't worry about the warranties too much. Go for price/TB, and maaaybe avoid certain shucks.

>rest of the post
Yep, it sucks right now. At least you got partway in the door before things got stupid. Count your blessings. You could have been trying to do this shit during covid. We were shucking drives to put into production systems. (and yeah, the failure rates on the shucks were still a pain point for the company when I left it last year, but again, that's because those were in an actual datacenter with heat and constant vibration from racks and racks of other drives.)
>>
Where do I find decent refurbished drives in Europe? Market absolutely sucks here and dont really fancy spending near 300 euro on a 14TB drive
>>
File: That's the neat part.jpg (31 KB, 735x393)
>>107161060
>>
File: 1540774580023.jpg (43 KB, 593x796)
>>107160976
>it's that you're paying extra for a data recovery warranty.
I thought it was a full RMA warranty, not a Recovery warranty. I thought the "recovery" badge was just branding. SHIT. That is fucking stupid for a NAS drive.
As for pricing, it was still the cheapest (for what's currently available in bulk) and the most available option (I'm seeing these 12TB drives, NEW, on every site). I intend to pick up 2 more drives on my next paycheck, for backup when shit does hit the fan.

The headache I was mainly running into with the Refurb/Recert sellers came from reading reviews. Whilst some sites could be trusted (thus their inventory being wiped clean), others suffered from selling questionable goods (selling one drive on the site, but shipping out something else to the buyer) and I didn't want to deal with the return bullshit. I simply missed my window to obtain a killer price/TB deal from a trustworthy seller in my price range.

>>107161009
Wow, I'm happy I avoided the shucking-hell of covid.
I'll also keep in mind the info about ignoring most of the BackBlaze data, since it doesn't scale down for home server use.
Thanks.
>>
>>107160976
On the other hand, NAS drives are your only real option if you don't want SMR.
>>
File: Secret Ingredient.png (3.94 MB, 2726x1532)
>>107161222
To backblaze's credit, if they say a drive is complete garbage, it's probably garbage, and if they say a drive is good, it's quite likely that it is in fact good. There's no good reason to think that their data doesn't map reasonably well to less taxing environments, but there's also no reason to think that there won't be some variations.

>I thought it was a full RMA warranty, not a Recovery warranty
You get a replacement drive. You're just typically paying extra compared to a similar capacity enterprise drive for the (useless) recovery warranty on top of a replacement.

>>107161060
>>107161120
Anyways, non-meme answers aside, have you looked at ebay? I have minimal idea of what your import rules are for legitimate purchases, especially bulky items. Euro VAT is fucked, so I know you have to pay that.

Depending on how well you know some people stateside you could pull a light fraud and smuggle some stuff in. I've sent SSDs and processors to Canadians, Euros, and Australians before. I'd turn their packaging insideout wrap them in a nested envolope or two a random thank you/sorry for your loss card, and shove that in a box with some random snacks and ship it USPS declared as digitized tax/death documents + a few dollars worth of American Candy. Not sure how well that'd work for mechanical drives though, but you could possibly still ship stuff as digital data.

If you're buying a lot of drives and have a friend in the US it may unironically be cheaper to take a vacation and bring the drives/other parts back as carry-on. Just say they contain work data if anyone asks, or bring an old piece of shit to put on top of the pile so you can show it and say they're all used junk. I can practically guarantee the customs agent's eyes will glaze over if you start rambling about how much of a pain it is to get those retarded Americans to upload crash dumps from their system so you can diagnose stuff, and my god the food those people eat is so bad and blah blah blah.
>>
>>107161279
Completely false. The enterprise drives like Seagate Exos, WD ultrastar, and toshiba MG0 lines are CMR. Shucks from externals are nearly universally CMR as well. It was the WD red NAS drives that silently became SMR and got them sued.

Maybe true for small capacity drives, but you'd have to actively try to be more wrong for anything over 10TB.
>>
File: file.jpg (45 KB, 1300x424)
Got picrel RT-2KvA for mega cheap off marketplace, works great but it's loud as fuck with the 2 fans always spinning at 100%. Anyone have any idea how to replace them? They are FFB0812SH-R00 and the same version 50mm one, they have 3 wires. i thought the blue one was gonna be a tachometer but i tried with a frequency generator at 150hz (4500 rpm) with no luck, still fan fault
>>
>>107161386
Enterprise drives are just enterprise level NAS drives. You think enterprise drives are getting put in desktops?
>>
>>107161443
Yes I put 6 golds in my desktop homelab
>>
>>107161386
I think they meant that NAS drives are guaranteed to be CMR if you didn't/couldn't buy into the Enterprise market.

>Shucks from externals are nearly universally CMR as well.
Yup. I know Seagate puts any enterprise drive that failed QC into an external instead of a landfill. At one point you'd shuck the drive and find an Exos, but now you'll find an Exos that was rebranded as a Barracuda, and these "Barracuda" drives are sporting the newer HAMR tech.
>>
>>107161120
Quite literally
>>107161367
Dont know anyone from the states so that rules out that entire option.
Yeah buying parts from the states has shit tons of taxes so makes no sense, might as well buy brand new here instead.
Tried looking at eBay but it's either like 10-15 year old drives from big sellers with 1 year warranty max, or random one-off drives from individuals. Doesn't really help that I don't really know what to buy either; I just look up 12TB or 14TB every once in a while and see what pops up
>>
>>107161494
I just keep buying 8TB drives because my power is like 11 cents/kWh and I have 24 bays in my server chassis with only 12 in use. Maybe one day I'll consolidate it down to like 4 22TB drives.
>>
>>107160724
I prefer Western Digital Red drives. Buying from their store gives you points which can give you a discount. In my country used and new HDDs are basically the same price at this point.
I only buy new HDDs when I need them, which is like once a year.
>>
File: Untitled.png (197 KB, 1910x994)
>>107160878
Same.
SnapRAID and mergerFS are the only things I need. I do a daily sync and scrub and status with email verification. I don't need ZFS or any other shit. I just use it as a storage media server. I don't need my files to have redundancy immediately. I don't need to have 100% uptime.

Only thing I really should do is set up a way to automate backups to an external HDD.
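For anyone curious, the daily run is basically just a cron job; rough sketch below (schedule and scrub knobs are arbitrary, and the email part is usually handled by a wrapper like snapraid-runner):

# /etc/cron.d/snapraid - sketch, adjust times and options to taste
30 3 * * * root snapraid sync && snapraid -p 5 -o 14 scrub && snapraid status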
>>
>>107161561
Nice, what options do you use for mergerfs? This is my fstab for it:
fuse.mergerfs moveonenospc=true,minfreespace=100G,fsname=mergerfs,posix_acl=true
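For context, the full line also needs the branch glob and mountpoint in front, roughly like this (the /mnt paths are placeholders for however your disks are mounted):

/mnt/disk*  /mnt/storage  fuse.mergerfs  moveonenospc=true,minfreespace=100G,fsname=mergerfs,posix_acl=true  0 0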
>>
I really need to rework my server badly.
I just have like 2 14TB drives together in a lvm using ext4, wondering how I should go about fixing this?
I dont have anywhere to move my files to so no clue how I would rework it in the first instance
>>
i need more storage capacity
its fine, cause apparently zfs expansion works now
but its not fine because I used 12TB drives and now the price is higher than I paid 2 years ago why why why why why why why why why

and i only have 4 sata ports and dont want a hba for power draw reasons so one more drive and im at capacity
reeeeeeeeeeeeeee
>>
>>107161694
what about these m.2 to 6 sata adapters
>>
>>107161641
I don't know what any of those mean. I use defaults,cache.files=off and minimum free space of 400GB.
>>
>>107161722
You should take a look at the documentation for MergerFS Anon, it's recommended to use the posix_acl flag. Also cache.files defaults to off so you don't need to specify it.
>>
>>107161712
>i dont want a hba
>>
>>107161712
oh and the 6x sata ones are dogshit with jmb chips
if i was pushed i'd consider one but only the 4x
p sure there arent any spare m.2 slots anyway though
>>
>>107161736
>You should take a look at the documentation for MergerFS
Maybe I should, but not today. Only when it breaks. If it isn't broken then why fix it.
>>
>>107161443
>You think enterprise drives are getting put in desktops?
Why wouldn't they be?

I have 4 4TB HGST ultrastars in my main desktop to use as scratch space when working with video files or other bulk shit where I don't feel like writing out multiple TB/day to my SSDs. Half the system builds I've done for people that stream a lot have a random Exos, WD Ultrastar, or Toshiba MG0 in them. They're frequently the cheapest per TB, even when you're buying new. If all you're looking at is amazon pricing, you're paying bezos markups. OEMs or OEM resellers like supermicro and wiredzone have had things for quite cheap at various points in time, and the certified refurbs are cheaper than anything other than shucks (and even then they beat them pretty regularly).

The near max capacities are largely all the same under the sticker, just with the possibility of different QC levels. Why wouldn't you buy the ones with the best binning, especially if they're cheaper/TB? The only reason not to have enterprise mechanicals in your system is if you're going purely SSD, which to be fair is reasonable for most people these days.

>>107161515
Sounds like you're fucked then. My condolences. Maybe there's some websites that sell stuff at more reasonable prices for Euros, but I don't know of them.

Here in the states we have dozens of companies selling new and used drives, a billion ebay listings, and up until recently we had free imports from Canada, Mexico, and China. That was before Mango Mussolini's retarded tariffs, but even with them it's still cheaper than in Europe.
>>
File: 1412914986285.jpg (12 KB, 380x304)
>>107161561
>>107160878
Last I checked, snapraid was readonly during snapshots and while I might be misremembering, I believe it needed to fully traverse the discs to recalculate the parity information. With 20TB+ drives that's more than 24 hours of read only, and that assumes that you have zero other activity. Even assuming it has some capacity to only recalculate parity around regions touched since the last snapshot, going read-only during that is a deal breaker, and not having that degree of redundancy continuously strikes me as a ridiculous constraint.

Snapraid sounds like an alright concept for offline backups on tape or something, but for actual usability it just seems terrible to me. Plus, I know at least 3 people that have had nonrecoverable errors with snapraid. One of them had a drive failure happen during a snapshot, and that completely fucked that drive and all of the parity volumes and he had to regenerate all of his parity drives. It also has no support for cloning, rollbacks or previous versions of files, doesn't improve backups because you still need to traverse file trees with rsync, etc, etc, etc.

I guess I just don't get it. Don't get me wrong, I'm happy that it seems to work for you, but I really cannot see what advantage makes snapraid worth the hassle over zfs or even btrfs.
>>
>>107161515
Buy new, since new drives come with at least a 5 year warranty. Just make sure to register them.
Unless you manage to get a used drive for half the price of a new one I would always suggest to buy new.
>>
>>107161854
lol
thats literally not how it works.
https://www.snapraid.it/faq
go read up on how it works
>>
File: Untitled.png (130 KB, 1044x1188)
>>107161854
>snapraid was readonly during snapshots
Wrong. When you write during a sync it doesn't get synced and has to wait for the next sync. It's why it's recommended to sync at least once a day.
>I believe it needed to fully traverse the discs to recalculate the parity information
Doesn't work like that.
>I know at least 3 people that have had nonrecoverable errors with snapraid. One of them had a drive failure happen during a snapshot, and that completely fucked that drive and all of the parity volumes and he had to regenerate all of his parity drives.
I don't think that's ever happened.
>It also has no support for cloning, rollbacks or previous versions of files
It does.
>>
What's /hsg/ choice for self hosted dropbox replacement? Opencloud or nextcloud? They both look too corporate oriented. I just need it to have a companion android app.
>>
>>107161060
datablocks dot dev is the only one I know
>>
>>107156094
>lust provoking image
>irrelevant time-wasting general
>>
Hey /hsg/, I bought 2 Toshiba 16TB HDDs like 3 years ago (model MG08ACA16TE) and have been using them in RAID1 ever since (using plain ol' md). I would now like to buy more of them to extend the array, but it seems this drive model has become mostly unobtanium in the meantime, most shops where I live no longer have them and the few places where I've seen them, the price has gone up so much that these 16TB drives almost match 20TB drive prices.

Obviously I don't want to pay that much for them, so then to my questions:
1. Would it be alright to add different 16TB drives to the array? I don't think md would complain, I can't see how it would, but am I setting myself up for some weird headaches in the future? For example I'm not sure all 16TB drives are the same size right down to the last byte, could this be a problem? I tried to check spec sheets but I didn't find a capacity number that's precise down to the last byte.
2. Any opinions on the 16TB Toshiba N300 drives instead? These are available at somewhat more reasonable prices where I live.
>>
>>107162102
>would it be alright
yes, but do look for drives with similar speed; writes can only go as fast as the slowest disk.
>any opinion on toshiba
no idea, never used anything other than seagate and wd.
>>
>>107162169
The specs are similar, they're still 7200RPM, 512MB cache, CMR drives, 200-something MB/s sequential speed which is good enough for me as long as there isn't some sort of 'gotcha' I don't know about.
>>
>>107162102
>I'm not sure all 16TB drives are the same size right down to the last byte, could this be a problem?
It could be, but that's why it's good practice to leave about 10MB of padding when you create your partitions. Most ZFS-based NAS software, if not OpenZFS itself, does it by default.
>>
>>107162227
Yeah, that could be a major pain in the ass for me since I don't believe I left any space when I made the array, if the new HDDs are slightly smaller than the ones I already have. I guess it wouldn't be an insurmountable problem though.
>>
>>107162271
you can always just create the array and copy files over. if the target happens to be slightly smaller, then just delete the picture of OP's mom and try again.
>>
>>107162289
Yeah, that's what I'd probably do. Make a new array out of the new drives, copy data over, add old drives to new array. I have backups too so I could just nuke & restore from backups but I think I'd rather not leave myself open with only 1 copy of the data in case something happens. I actually still have a few TBs free on my 16TB mirror now, I mostly want to extend it in order to retire a 10+ year old RAID6 that's using 3TB drives.
>>
What's the easiest solution to set up a library of ebooks on Ubuntu that can sync reading progress between a phone and a jailbroken kindle?
>>
>>107162321
if you're gonna be rebuilding arrays anyway, might as well take the ZFS pill at the same time.
>>
>>107161694
>dont want a hba for power draw reasons
Common ones besides the 9300-16i are only 10-15 watts
>>
>>107162330
smb. Calibre is such garbage software; scratch that, any library management other than folders and proper naming is the definition of garbage.
>>
>>107162350
but they'll prevent higher sleep states
my server idles at 30-40w right now with 3 enterprise disks.

im probably going to get another motherboard anyway so i can download more ram
>>
If I never turn on my home server. My HDDs will never fail and I can wait out the price hikes.
>>
If I never turn off my home server. My HDDs will never fail and I can wait out the price hikes.
>>
>>107162337
I have been considering that but I'm not sure I will. md has been working literally flawlessly for over a decade now and I really, really appreciate how flexible it is with RAID level changes, being able to swap out misbehaving (but not completely failed) drives without a full rebuild and so on. Performance also seems to be far, far superior from what little benchmarking I did for my SSD mirror. The only useful feature ZFS would really have for me is the prevention of silent errors, but with modern HDDs those seem to be more of a ghost story than a real concern.
>>
just run zfs on md raid
>>
>>107162453
>really appreciate how flexible it is with RAID level changes
some of the newer OpenZFS releases have added RAID expansion, so it might already have the level of flexibility you need. and yeah, if performance is much more important than data integrity, then you probably don't want ZFS.
>>
do you guys edit iptables for docker services? I tried securing 53 only for my lan but I don't get this shit at all
>>
>>107162422
brother, hdd failures happen mostly below 10K hours of power on time; after that it's just smooth sailing til the end of time as long as you don't power cycle it too often, and by too often i mean more than 5 times in a year.

>t. ibm system x3100 m5 owner with more than 8 years worth of power on time for the hdd it came with
>>
>>107162544
It's not that performance is more important, it's more like ZFS seems to be an extremely complex and involved beast, potentially with significant RAM requirements for a large array and I'm not sure I would get any practical benefits. I did skip ZFS for performance reasons on my NVMe mirror for VM storage, but for mostly inert data I don't think it would be a problem (though I have been reading bad things about CoW fragmentation and perf degradation when it gets full, not sure how accurate that is or not).

Protection against the dreaded silent errors is the only thing I would really care about, but those seem exceedingly unlikely since modern drives already have their own internal ECC and can recognize on-disk corruption. For the error to be silent, both the data and the ECC information would have to be corrupted at the same time and in such a way that they still match, which seems to be effectively impossible. On the practical side I've been running md RAID5 or 6 for over 10 years with a full verify every single month and not once has it found a single inconsistency in my arrays. There HAVE been read errors, but not silent ones, just 'normal' ones where a drive reports an error and md reconstructs the data from parity.

I'm not sure I really need the entire ZFS machinery for anything or if I'd be better served with keeping things lean & simple as they have been until now. You are right though, RAID expansion has indeed been the killer feature I've been waiting for until now and it's nice to hear that it's finally in.
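For reference, the monthly verify is just the checkarray cron job Debian-likes ship with mdadm, or you can kick one off by hand (md0 is a placeholder for your array):

echo check | sudo tee /sys/block/md0/md/sync_action   # start a full verify
cat /proc/mdstat                                      # watch progress
cat /sys/block/md0/md/mismatch_cnt                    # non-zero means inconsistencies were found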
>>
>>107161438
>>>/diy/ has /mcg/ - Microcontroller General and /ohm/ - Electronics General; you'll probably have better luck there.
>>
File: 1630517331723.jpg (55 KB, 576x432)
but really
how has BTRFS not matured enough to trust even RAID5/6
online dynamic RAID on top is some pipe dream
>>
>>107162758
ZFS can't stop winning I'm afraid.
>>
>>107156094
I made AI of her getting naked
>>
>>107161438
I know normal case fans with three wires the third is pwm and four wires the fourth is tacho. Maybe check and see if the board is outputting a pwm to it or not to verify whether it is pwm or tach, and if it's actually tach measure the actual frequency and waveform (I would imagine it's a sine but who knows) while it's running and then try to replicate that and see if it stops having a conniption.
>>
https://www.ebay.ie/itm/365684817771
Do you guys think these are trustworthy enough? Seller has a lot of these for sale but I don't know what to make of them
Around 80 euro less than buying a new drive
>>
>>107162422
Im using hdds from consoomer junker pcs theyve been seeding since 2023
>>
>>107163078
Nah, on 3 pin wires the 3rd is tacho, PWM fans are only 4 pin and the 4th is of course PWM.
>>
status update: everything is still just working
yep that's the power of having no clue what im doing
>>
>>107162937
hngg please post it
>>
>>107163159
they've been used for 5 years already anon
though the seller is offering a 2 year warranty
are they 160 eur or 200
>>
File: 1762789884509.jpg (153 KB, 737x628)
>>107162663
it can be hairy, but the comfort of knowing what the fuck is going on with your machine's traffic - iptables - is beautiful.

Start with weaning off ufw for regular/non-docker traffic. Then once that's well and good, tackle the docker chains.

I'm the odd man out for this opinion, but ufw glows. Look at these default ufw rules
>>
>>107163159
>Around 80 euro less than buying a new drive
If you buy a new drive you are paying 80 euro for a better warranty.
Up to you.
>>
>>107162351
Good lad.

Works well on desktop, but what do you recommend for server file browsing on Android?

VLC works well for media, but what about other files?
>>
>>107163561
Ah, I found this
https://github.com/egdels/SambaLite

hmu if you know of anything doper
>>
>>107157555
>>107157541
a paid-for CA makes a mockery of the chain of trust principle.
self sign and FUCK anyone who complains. they are wrong and don't deserve to use the internet.
>>
Currently using my old plex/storage server to host game servers (Valheim, Enshrouded etc). It's running Windows 10 IoT and seems fine, I'm monitoring task manager in a separate window. Are there better tools to monitor usage in real time besides Task/Resource monitor built into Windows? I just want to know roughly how many players I can host on these servers before we see performance issues.
>>
>>107163704
run LibreHardwareMonitor
activate its webserver option
then in your Lan visit your server ip:5000 or whatever port you set up and you see all temps and stuff, pic related.
>>
>>107156094
>Dual rack edition
I keked.
>>
File: 4channel.jpg (141 KB, 865x616)
>>107163771
That was incredibly quick and easy, thanks so much anon. Realise I can probably squeeze more out if I switch to Linux (or anything other than W10) but that's a job for another day.
>>
File: 1762792431258.jpg (87 KB, 720x1533)
>>107163561
i use this
https://github.com/wa2c/cifs-documents-provider
it's decent enough if your media processor supports saf/document picker. my music player is neutron, ebook reader is moon+ reader, video player is some old version of mx player from before it was enshittified (1.13.x i think).
also you can directly share from your smb if the app you're using does use document picker.
>inb4 closed source
i refuse to use a shittier piece of software just because it's open source; navigating through vlc is a mess, library management is a mess, and nothing tops moon+ and neutron when it comes to supported file formats.
>>
>>107157874
It's fine anon
That card is designed to go in any random PC without an expectation of any real airflow.
It's really one of the only reasons to buy the Asus version of those cards.
>>
>>107163283
Damn you're right, could have sworn it was the other way.
>>
>>107163561
>server file browsing on Android
I have never found a good solution for this. Almost every piece of file browser software out there that supports streaming files CANNOT view high resolution images. They'd download the file, generate a low res jpeg preview and display that instead. They're SUPPOSED to load the full-res version when you zoom in but they almost NEVER do, like one out of 10 times at best.
If you have something like a 6000px blueprint or schematic all you're getting is a blurry unreadable mess.
I believe this is because all these apps use some common android library for viewing pictures, so in a way this is a problem inherent to android itself.

The only two exceptions are Material Files and Xplore. The former is abandonware with half baked support for remote storage, the latter has an absolutely schizophrenic UI.
>>
>>107157121
I thought this thumbnail was a chair.. i need to stay inside more and quit touching grass
>>
>>107163861
>inb4 closed source
are you saying cifs has some closed source aspects, or just that you will prioritize a good app over foss?

also, how is the media viewing? can you get high res images? this guy is also curious,
>>107164016
>>
>>107161986
>Wrong. When you write during a sync it doesn't get synced and hove to wait for the next sync. It's why it's recommended to sync at least once a day.
It used to throw an error, but this is barely any better. If you write to something continuously, e.g. security camera footage or a database logging events on your cameras, they won't get synced, or if they do, it might be days or even weeks after the fact.
>I don't think that's ever happened.
Believe what you will. The write hole problem is not something you can just handwave away. Snapraid has no clean mechanism to securely handle files being deleted from the same parity stripe between syncs. If you randomly delete or modify files that happen to be in the same stripe, and experience a drive loss, you can end up with stale parity that can no longer reconstruct what was on the failed drive.

>It does.
>>107161861
How? It doesn't support snapshots of the underlying filesystems. For instance. How would I redundantly take snapshots every 5 minutes of a work directory on the NAS, autoprune snapshots with zero changes, autoprune down to hourlies after a day, dailies after a week, and anything over 2 years old? Not everything you store has version control where rollbacks are trivial like git does.

How does one clone or byreference copy a directory for destructive editing? How does it handle parity information of clones since clones and byreference copies are thin provisioned and can enormously inflate the logical space utilization? From what I recall, doing that meant that your parity drive had to be large enough to handle logical utilization.

As I said... it seems like it could theoretically be reasonable as a parity stripe for tape backups or something, but it doesn't handle active systems at anything close to scale. I guess the issues aren't a major concern to you if all you're doing is pirating media, but for any actual work it seems insanely failure prone.
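For comparison, the zfs side of that workflow is roughly this (dataset name is a placeholder; tools like sanoid or zfs-auto-snapshot handle the retention policy so you don't hand-roll the pruning):

zfs snapshot tank/work@$(date +%Y%m%d-%H%M)   # near-instant, run from cron every 5 min
zfs list -t snapshot -r tank/work             # browse versions, or read them from .zfs/snapshot/
zfs rollback tank/work@20251110-1405          # whole-dataset rollback
zfs destroy tank/work@20251110-1405           # pruning is just destroying old snapshots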
>>
Today I learned that configuring a remote logging action and then using that action to set up log rules does not change the rules when changing the linked action.
Good stuff.
>>
File: file.png (81 KB, 755x975)
>>107161438
I found the fan specs and the wire is a rotation detect that gets pulled to ground, i just shorted the lead to ground and it works, nice i guess
>>107162727
ty anyway, honestly for a moment i was sure i was gonna have to get microcontrollers involved
>>
>>107164311
no, i mentioned closed sourced apps after i promote open sourced apps, it pisses some people off for some reason.
>good app over foss
absolutely, i'm a power user anyway, so i could just block the tracker url or block the activity from the manifest by decompiling.
even if it's obfuscated, i still have my trusty openwrt router to block the url from upstream.
>how's media viewing
as you would expect from default file manager, it's shit, this is not the app fault though, thumbnail generation for files on android is shit, especially if you're accessing file over cloud because nothing is generated at all, use mixplorer for better ux.
>>
>>107164368
kek reminds me of all the high energy studying you need to do to use an ORM like sqlalchemy
>>
What domain name should I register for my services frens?
>>
File: domain.jpg (47 KB, 974x273)
>>107164616
>>
>>107164616
AlbanianHorsePorn.horse
>>
>>107156581
Sometimes the simplest solution is that your question is shit.
Just use syncthing between the machines or use a dedicated
>>
>>107164840
DEDICATED WHAT ANON? ANON?!!!! ANOOOOOOOOOOOOON!!!!!!!!!
>>
>>107164968
Dedodated WAM.
>>
Is there any reason to mess with Tailscale if my router already has a wireguard server built in?
>>
I've been toying with the idea of getting a relatively dumb NAS and a mini PC or something to do the actual hosting of services rather than just use one box for everything, anyone try it?
>>
>>107162102
Mixing drives is fine. For future reference when using mdadm, don't pass whole disks. Shave off a little bit with a partition and use the rest of it. That gives you some breathing room. ZFS does that automatically.

The N300 drives are fine. They aren't rated as high, but see the above conversations about abuse. Drives that aren't being tortured generally live a long time unless they're total garbage. You don't have to buy Toshibas. Look for Seagate Exos drives, WD, and don't be afraid of shucks, etc.

>>107162337
seconding this

>>107162453
>>107162686
>performance
ZFS has per-dataset settings you can apply. If you were doing 4k random IO tests with default settings, it was writing 128k records. Even if you had some overlap of multiple 4k blocks in a single 128k record, you're still talking about a 30-fold write increase. Even with 2:1 compression ratios that's still above 10x write amplification. Tune datasets for a given workload and this is a non-issue.
>but with modern HDDs those seem to be more of a ghost story than a real concern.
It is until it isn't. As you scale you'll start getting errors. Remember that just because you haven't been able to /detect/ an error doesn't mean that you haven't had them happen. Sure, flipping a bit in the middle of a random video file is probably not critical, but that doesn't mean that those errors didn't happen. It doesn't mean that they did either. You have no basis to make a claim beyond "I haven't had severe enough errors for things to completely blow up yet," which is a very different statement from "I have had zero errors."

continued
>>
>>107165152

>>107165152
>>107162686
>potentially with significant RAM requirements
RAM requirements are FUD. The large RAM requirements are a mix of old implementations of how things were held in RAM and enterprise users running databases with 8k records with deduplication on, or enterprise NFS cases where they're reading and writing multiple GB per second across dozens/hundreds of clients so its effectively doing random work across the entire pool and needs to keep the spacemappings for petabytes of data in memory constantly.

Don't use dedupe unless you have a very specific reason to do so. It's largely limited to enterprise users running enormous databases or maybe some niche workload where you're duplicating hundreds of VMs or something. I've run pools upwards of half a petabyte on VMs with under 40GB of memory allocated when dealing with nasty failover states. Yeah, you weren't going to saturate 100Gb QSFP28 connections with constraints that tight, but all of the data was accessible and we even did multiple resilvers on that pool while the host was in that state. I've got a friend that ran nearly 200TB off of a raspi just to see what would happen and the CPU was arguably a bigger limiter than the 4GB of RAM on it when used for plex.

continued
>>
File: showme-show.gif (903 KB, 400x200)
>>107162937
>I made AI of her getting naked
>>
>>107165166
>>107165152
Oh, and you can hard limit the RAM usage by ZFS if you want to via module parameters. It's literally typing one line into your terminal to throttle the maximum cache size.
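e.g. to cap the ARC at 8GiB (value is in bytes, tune to taste):

echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
# or persistently, in /etc/modprobe.d/zfs.conf:
options zfs zfs_arc_max=8589934592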

>>107162686
>COW fragmentation
It's a non-issue unless you're redlining your pool capacity with tiny records on rust. There have been enormous improvements to the allocation system in the past 5-10 years (particularly as you approach capacity), and SSDs don't give a shit about it. For bulk media, use large record sizes and then it doesn't exist in any practical sense.

One of the causes of heavy fragmentation was tiny holes left behind by metadata being COWed elsewhere. The holes were small enough that you couldn't fit entire data records into them. This could result in no space for a record being available without breaking it into gang blocks and stuffing it into multiple holes, which added another layer of indirection and multiple random I/Os to retrieve a single record. It's significantly better about mitigating those possibilities now because of things like the embedded SLOG metaslab preventing sync writes from strewing tiny garbage all over the pool, and it batches metadata into larger clusters now. You can also use dedicated metadata special devices which sidestep a lot of the problems entirely as well. You can quite reasonably run pools at upwards of 95% capacity with heavy amounts of random writes without running into serious fragmentation problems now. It sounds like you're mostly doing media storage though, which doesn't typically involve heavy rewriting so it's largely moot.

If you somehow get into a position where fragmentation is a problem, you can prune snapshots and run zfs rewrite. That will reflow data and acts as a defragmentation/rebalance across vdevs as well.

TL;DR 90% of the paranoia you see about ZFS is some mix of extremely outdated information, completely made up, or people failing to RTFM and failing to or improperly adjusting settings.
>>
>>107165063
tailscale traverses nat. if you don't need that then there's no reason.
>>
>>107165152
>For future reference when using mdadm, don't pass whole disks. Shave off a little bit with a partition and use the rest of it.

why?
>>
>>107165095
yes
it's a good idea because it shields the nas from all other local gadgets and machines
a nas shouldn't be accessible by anyone and anything in your network, we are talking both inward and outward traffic
if someone connects to your wifi and can see your nas then you fucked up
just my 2 cents about security and separations of concerns
>>
>>107165324
this anon gets it. the best way to configure your nas is so that nothing on your network can use it. don't plug any ethernet into it. you can also save money if you keep it turned off also.
>>
File: NVMe RAID1 testing.png (2.41 MB, 1252x2756)
>>107165152
>don't pass whole disks
I did make partitions on the drives first, but I just filled up the whole drive with one partition. I wasn't planning on using different model HDDs. These drives no longer being really available came as a surprise, pretty sure the 3TB WD Reds in my decade-old array are still available to this day, I really didn't think these Toshibas would basically disappear in a mere 2-3 years.

>You don't have to buy toshibas.
I'm not looking at Toshibas specifically, those are just the cheapest drives at 16TB at the shops I'm looking at. Good to know that the N300s are alright though.

>Remember that just because you haven't been able to /detect/ and error doesn't mean that you haven't had them happen
It does mean exactly that, though. All my arrays are checked and verified against parity (or against each other if they're mirrors) every month and this has never revealed any corruption. Every single byte is read and verified every month, how can there be undetectable corruption if everything matches?

>If you were doing 4k random IO tests with default settings
I was interested how a Windows VM would perform on the different solutions so what I did was make a Windows VM and then gave it a 1TB sparse, raw disk image on each FS that I tested. I then ran good ol' Crystal Disk Mark on that, from inside the VM. Hardware backing were a pair of NVMe SSDs and pic related were the results.

I don't remember what settings I used, no idea if they were fully default or not but they could very well have been. Still, while mdadm with whatever FS on top and BTRFS all came in very close to each other, ZFS was like 3-4x slower even in sequential access. Are the defaults really so fucking turbotrash that it performs this poorly even in sequential read/write? Frankly I'm not exactly interested in babysitting my FS for every workload out there so it doesn't shit itself, everything else somehow manages to run well with defaults in every scenario, but not ZFS.
>>
How do you guys manage all your ssh keys? Is there a good portable database program i can load them onto a USB key with or should i just resign myself to keeping them on an encrypted USB?
>>
>>107165291
Because HDDs of the "same" size aren't necessarily identical in size down to the very last byte, one "1TB" drive may be a very tiny bit larger or smaller than another "1TB" drive. The different devices you give mdadm have to be the same size, so if you end up needing to use different HDD models (like me right now) then if you buy a model that's even a tiny bit smaller it won't work in the array. If you make a partition that's just a little bit smaller than the full HDD, then you can accommodate for HDD models which are ever so slightly lower capacity.

Now in my case, if I buy a different model and it's the same or larger capacity, it will work fine (the partition size from the smaller HDDs will fit on the bigger ones) but if the new one is slightly smaller then the partition no longer fits and I'll have to nuke the whole thing to make slightly smaller partitions on all my drives. Pain in the ass.
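If you do end up rebuilding, the padding is just making the partition a bit smaller than the disk. Sketch with sgdisk (device names and the array layout are placeholders):

sgdisk -n 1:0:-100M -t 1:fd00 /dev/sdX    # leave ~100MiB unused at the end of each member
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1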
>>
>>107165814
If you absolutely want to go down this rabbit hole, the solution is likely using pgp keys and subkeys and maybe hardware keys (like yubikey) for storage.
>>
>>107166005
yeah you're right, i just made a luks container and dumped them onto a USB.
>>
Anyone use blu ray for backups? I'm schizo about my data and I want to back it up so it can't be wiped. I have:
>backups saved to email
>backups saved via icloud
>backups saved on pc
>backups on 1TB external HDD
>working on printing pictures along with compilation of data

Options 1,2, and 3 I don't consider safe, but I do consider useful. 4 is good but is a single point of failure. 5 is also good but it's clunky and harder to store things like exif. So I want a backup for #4 and blu-ray MDISC sounds really good. Can anyone recommend an mdisc capable reader/burner, and where to find mdiscs?
>>
>>107166091
i just write my backups down on a piece of paper.
>>
>>107156094
left girl looks like sliceofsalami/bel larcombe
>>
>>107157486
Damn. Sounds like you should get out of the hobby and give me all of your ddr4 3200 64gb rdimms
>>
ok is there a better way to raise a GPU off the motherboard than a riser cable ? my 3090s are too fucking thicccc
>>
>>107165291
what >>107165886 said is correct.
If you have the capacity to shrink a file system this is less of an issue, but many file systems don't shrink cleanly. I don't recall offhand if you can shrink ext4 if it's your root filesystem. You can certainly do it if it's not. XFS still doesn't have shrinking support. A good raid system "should" factor in a few megs of padding, but I don't believe mdadm does that out of the box. If it does, that's probably a relatively new addition to the system because I don't recall that being a thing in the past.

>>107165806
>how can there be undetectable corruption if everything matches?
It's not likely, but it's possible. Silent errors generally require multiple errors to occur concurrently. Suppose your system had a brain fart and wrote data to the wrong stripe. The parity information would be correct as long as the write was completed, but mdadm has no way of detecting that it was written to the wrong location. ZFS, btrfs, and bcachefs will detect that kind of error. There are other errors that can result in this, but that's a simple example, albeit improbable.

But lets suppose that you did encounter an error due to a bit flip on a raid5. Is the parity correct, or is the data on disk correct? You can't tell. What about mirrors? How do you tell which side of the mirror is the accurate one? These tie into the more realistic issue which is the write hole problem. The problem with mdadm and various other simple raid solutions is that they're only fine as long as everything is fine. Suppose that you crash during a write. What happens? No idea. Maybe nothing. Maybe something. What happens if you only partially flushed a write to disk? ZFS, btrfs, and bcache address those issues. Basic raid 10/5/6 with mdadm can't because it can't say definitively which version is correct.
>>
>>107166705
>>107165806
Even worse, writing a file and having it corrupt during a crash can corrupt other files. Suppose you crash during a write with 3 disks in raid 5. You are writing a file to disk 1, and updating parity on disk 3. As part of the crash, disk 2 fails. If the write wasn't flushed completely to both disk 1 and disk 3, you cannot recover disk 2, and disk 1 might not even be correct either. You won't even be able to detect an error because your parity state is inconsistent with the data. Again... everything works fine until it doesn't. This is what I mean when I say it becomes more of an issue as you scale. The chances of this happening are tiny at smaller scales, but as you grow they add up. We detected hundreds of bit flip type events across many petabytes every year at my previous job. Most of those would either have not been caught by mdadm, or would have resulted in random guessing to fix. Even shit in the cloud isn't perfect. We've had AWS give us corrupted block storage before, although my details on that are a bit thin because I wasn't the one dealing with it.

>make a Windows VM and then gave it a 1TB sparse, raw disk image on each FS that I tested. I then ran good ol' Crystal Disk Mark on that, from inside the VM. Hardware backing were a pair of NVMe SSDs and pic related were the results.
You need to make sure that your virtual sector alignments on the disk image align with file system record sizes. ZFS is block based, not extent based. There's a whole host of tradeoffs to that, and one of the nastier negative effects is that you need to ensure that your recordsize is appropriate for your workload. It's quite common for ZFS to be slower than ext4/xfs for raw performance, and it frequently makes up for that with better caching and inline compression, but if you're doing massive write amplification because you're flushing records that are larger, and further doubling things because of the ZIL, yeah, it's going to look like shit.
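As a sketch of what matched sizes look like (names and numbers here are assumptions, not a recommendation for your hardware):
zfs create -o recordsize=16K -o compression=lz4 tank/vmstore
qemu-img create -f qcow2 -o cluster_size=16k /tank/vmstore/win.qcow2 1T   # keep the image's cluster size equal to the dataset recordsize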
>>
>>107166717
>>107165806
Your 4k write performance is nearly an order of magnitude worse, and that's probably because of mismatched record sizes. Try it with logbias=throughput on the dataset, and 16 or 32k recordsizes, or something more appropriate to the underlying disk image. Also try it with zvols. zvols are built-in virtual block devices that can be managed directly within ZFS and have sizes/features adjusted dynamically. They (generally) aren't as performant as a basic qcow stored on a dataset with correct tunings, but they have a few upsides.
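Spelled out, those two suggestions are just (pool/zvol names made up):
zfs set logbias=throughput tank/vmstore
zfs create -s -V 1T -o volblocksize=16K tank/win10-disk0   # sparse zvol, shows up as /dev/zvol/tank/win10-disk0 and can be handed to the VM directly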

All that being said
>btrfs only doing 1.5GB/s at Q1T1 1M sequential.
>xfs smashing ext4 at Q1T1 1M sequential.
Something else seems fucky with this test. Those should be very similar out of the box. btrfs should be a touch slower, but not 40% slower. Are these dramless SSDs that are shitting up system memory or something?

>>107166091
I keep tax paperwork and related information in encrypted files on dvd-ram disks in a safety deposit box at the bank. I also store some sentimental stuff that way. I think optical is largely pointless, but I already use the box for other stuff, and I had a handful of them lying around from the early DVR days back when there was stuff worth watching on cable TV.
>>
Am I better off with a Seagate Exos with 100~1000 bad sectors and a 5 year warranty for $160, or a WD Blue with a 3 year warranty for $190 and no listed bad sectors?

Both are 12TB and refurbished/recertified.
>>
>>107167139
>am I better of with a dying drive or a non-dying drive
tough question
>>
>>107167171
I'm very new to all this.

Are bad sectors inherently a sign of a total drive failure, or does it just mean reduced performance and a higher likelihood of failure?
>>
>>107166389
>she's on OF
there goes my no nut november
>>
>>107167192
>Are bad sectors inherently a sign of a total drive failure
not if its being sold with a 5 year warranty
>>
>yeah, I run syslog-ng
.t opnsense
>yeah, i run syslog-ng
.t truenas

You'd think it would be simple to forward that shit to another syslog-ng
But it's all
>Invalid frame header; header=''
and
>SSL error while reading stream; tls_error='error:0A00010B:SSL routines::wrong version number'
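For what it's worth, those two errors usually mean the ends disagree: "Invalid frame header" is a framing mismatch (syslog() driver expecting octet-counted frames vs a plain network() driver), and "wrong version number" is one side speaking plaintext to a TLS port or vice versa. A rough sketch of a matched pair, with made-up IPs and cert paths:
# receiver
source s_remote {
    syslog(ip(0.0.0.0) port(6514) transport("tls")
        tls(key-file("/etc/syslog-ng/key.pem") cert-file("/etc/syslog-ng/cert.pem") peer-verify(optional-untrusted)));
};
# sender (the opnsense/truenas side) - same driver, same transport, same port
destination d_central {
    syslog("192.0.2.10" port(6514) transport("tls")
        tls(ca-dir("/etc/syslog-ng/ca.d") peer-verify(required-trusted)));
};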
>>
>>107166705
>The parity information would be correct as long as the write was completed
Right, but if a program or something is bugged and writes incorrectly, that's not really the same thing as the random, undetectable corruption that was being discussed. I could also hit delete on the wrong thing by mistake, these are the kinds of situations which are solved with backups as far as I'm concerned. In the exceedingly unlikely situation something like this happens, backups have me covered.

>Is the parity correct, or is the data on disk correct?
The drive with the bit flip will report a read error because the data it is reading doesn't match its internal ECC. Modern drives don't write your data raw to the platter. I don't even know if old HDDs ever did that, really. I have encountered drives reporting read errors without bad sectors or any other problems, so I have seen random corruption actually happen, but modern drives detect this and report it; they don't just read garbage and pretend it's OK like you seem to assume.

>>107166782
>Try it with logbias=throughput on the dataset
Those tests are from years ago, I've been using those SSDs for a long time, I'm not going to nuke them just to play around with ZFS.
>Are these dramless SSDs
No, they have 2GB RAM each IIRC
>>
>>107164342
>they won't get synced, or if they do, it might be days or even weeks after the fact.
????
You're supposed to sync every day.
It's simple as. Are all your files mostly WORM, Write Once, Read Many? Or are they something getting constantly changed? If it's the first, SnapRAID is the better fit; if it's the second, use ZFS.
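For reference, "sync every day" is usually just a cron entry along these lines (times and scrub percentages are arbitrary):
# /etc/cron.d/snapraid -- sync parity at 03:00, then scrub 5% of blocks older than 14 days
0 3 * * * root /usr/bin/snapraid sync && /usr/bin/snapraid scrub -p 5 -o 14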
>>
what are some domain registrars that don't just use cuckflare
>>
Will there ever be a moment where prices are normal for every single hardware component?
I haven't seen a point in the last year where prices were stable.

>>107167231
Just don't visit coomer sites, it's not that hard.
>>
>>107166091
Blu-ray degrades in 5-10 years. It's not a good long term backup.
>>
>>107167949
clownfart for what? just run your own dns bro
i use namesilo
>>
>>107167949
porkbun.
>>
>>107168363
what did you call me you faggot? fight me.
>>
File: 1753186156587650.gif (863 KB, 220x123)
863 KB
863 KB GIF
>>107168460
>>
>>107161561
Turn that IPv6 on, anon
>>
>>107168616
make me
>>
>>107168669
Okay let me remote in
>>
>>107168686
no, i was joking. dont hack me
>>
>>107165225
>>107163360
https://files.catbox.moe/qwpui9.mp4
>>
>>107168746
>the subtle tan lines
ai will be the death of humanity
>>
>>107168746
why cant AI do nice nipples?
>>
>>107168746
Could you feed a LLM her actual nudes, then the OP photo, then make it generate the movie based on her nudes?
>>
>>107168746
shes looks fat
>>
File: wew.jpg (54 KB, 800x600)
54 KB
54 KB JPG
>>107168746
>https://files.catbox.moe/qwpui9.mp4
my nigger
>>
>>107168809
>>
>>107167942
>Right, but if a program or something is bugged and writes incorrectly, that's not really the same thing as the random
I said nothing about user applications writing wrongly. That is undetectable in any file system. I'm talking about hardware here. My posts assume that the storage software is functioning correctly, and that the system isn't experiencing corruptions in RAM or the CPU. If that's happening you're completely fucked regardless.

>The drive with the bit flip will report a read error because the data it is reading doesn't match its internal ECC.
>but modern drives detect this and report it, they don't just read garbage and pretend it's OK like you seem to assume.
They don't detect all errors because the basic ECC embedded in the sectors can't reliably handle complex state mismatches. It's robust, but not bulletproof. Furthermore, the drive will report that everything is healthy if you have a bit flip occur in transit to the drive. A bad sata/sas cable/controller can scramble in flight data, or a sickly drive can scramble the data as it's being paved out to disk. As long as it occurs before the ECC bits are calculated and flushed to the drive, the corruption is silent on the physical disk. A drive fucking up and paving things out to the wrong sector isn't detectable within the drive either. Your raid implementation will be able to detect mismatches like this as part of scrubs, but without external checksumming somewhere you can't authoritatively declare one state to be the correct one if you didn't get a read error on any of the drives.
>>
>>107167942
>>107168957
These aren't theoretical events either. I have a WD blue that will start paving shit out to the wrong sectors if you load it with a large I/O queue when it gets hot. Something on it got cooked too hard at one point and it'll just vomit shit out to random nearby sectors. When I last spun it up a few years ago it wouldn't even generate UREs because it was paving out correct sectors, but in the wrong locations. SMART says it's fine. Badblocks, and various other tests can pass if they aren't stressing the drive, but it'll start causing havoc pretty quickly in any system you throw it in.

Even if you did get a read error, you have no clean way of validating that you don't have compound errors. I.e. in a RAID5, a single read error coupled with a silent corruption somewhere else is undetectable because you don't have external checksums to validate things. A RAIDZ1 won't necessarily be able to recover from a URE plus a silent drive corruption, but it'll accurately detect it, and mdadm won't. RAID6 could handle a URE, but would still have a state mismatch if a second silent error occurred. RAIDZ2 would trivially recover from that because it stores checksums in physically separate locations on disk.

You're also completely ignoring the write hole problems. What happens if you crash during a write operation and one drive finishes a write while another one doesn't? You don't necessarily get sector errors when this happens. On a mirror that means you have one side updated and one side not. On raid 5 that means that you have inconsistent erasure encoded parity bits to the data. ZFS can detect that and fix it accurately because external checksums stored outside of that range allow you to try multiple permutations between what you have on disk, but mdadm doesn't have that functionality because it isn't aware of the file structure on disk.
>>
>>107167942
>>107168968
Again, this goes back to what I said earlier. md raid is fine as long as everything is working fine. It's when weird shit starts happening that you rapidly get lost in the woods.

>Those tests are from years ago, I've been using those SSDs for a long time, I'm not going to nuke them just to play around with ZFS.
That's fair. I was more trying to illustrate that you were probably testing with bad configurations. That's honestly the bigger hitch with zfs these days than the lack of incremental expandability. It's very easy to read some guide out there and make settings adjustments that can completely break your performance, and the default settings aren't great for VMs, but if you do configure things reasonably well it's at least close, and can sometimes outperform standard xfs/ext4.

You're talking about expanding your capacity a fair degree. I think you should at least look at ZFS with a clean slate and do some more reading to understand the nastier failure states you can get into with non-CoW systems, but at the end of the day I'm not your mother. Do as you will.
>>
>>107167192
Sectors are written by the manufacturer and can't be reestablished by modern drives. Once the sectors begin to fail, the drive just loses the ability to tell where data begins and data ends over time. The warranty is irrelevant, the person trying to sell you that drive is planning on not honoring it.
>>
I have so many wall wart plugs. How do I fit them in a UPS instead of buying a bigger UPS (and not using some plugs because they get covered anyways)?
>>
>>107167944
>You're supposed to sync everyday.
What use is syncing every day if it skips over files you're actively writing to during a sync and you are writing to those files frequently? Why would you not want that to be handled continuously as part of writing?

>WORM workloads are better using SnapRAID.
I still don't see how. It still sounds like a bunch of asinine convolutions filled with write/erase hole edge cases. The performance sucks. The failure cases are numerous. You can't load thin provisioned snapshots on it because it has no understanding of them if they're mounted, and will balloon any parity information calculated based off of them. It seems alright for offline backups like tape, but using it for important data on a live system seems absurd to me.

The snapraid page spends a great deal of time hyping up that you can partially recover data if you lose more than your parity..... so what? The IRS isn't going to let me pay only part of my taxes because 20% of my paystubs got lost. My wife's clients aren't going to accept losing 20% of their wedding pictures because, oops, oh no, I was a retard and didn't replace my failed drives fast enough and didn't have a real backup.
>>
>>107169107
You're comparing snapraid, a HOME redundancy software VS enterprise shit. You are fucking retarded.
If it doesn't work for your use case then it doesn't work for your use case. For others that's all they need. No one is fucking saying snapRAID is better, but if it fits what people need then they use it.
This is home servers, not business servers. So I don't give a fuck if your wife dies in a drink driving accident, she can use whatever the fuck she wants.
>>
File: pained.jpg (83 KB, 498x506)
83 KB
83 KB JPG
>>107169173
>No one is fucking saying snapRAID is better
>>107167944
>First is better to use SnapRAID
This fucking board I swear to god.

>>107169102
Buy a cheap long PDU and plug that in. Or if things take USB-C get a multiport USB block.
>>
speaking of ups and pdus
i have a 1500va 1000w ups with 6 outlets. is this a good set up so i dont start a fire?

6 outlets:
1 pdu
1 gaming pc
1 NAS
1 proxmox machine with a lot of services
2 monitors

8 outlet pdu:
1 set of speakers
1 network switch
1 powered USB hub
1 DAS enclosure*
2 mini pc*
*only on when I need them

or can i plug my monitors in a pdu? do people do that or do they directly plug monitors in a ups?
>>
>>107169102
You need some "outlet savers". Basically 6" extension cords.
>>
>>107168746
>/hUGEBOOBsg/
>>
>>107169102
I needed to power a modem, router, switch, moca adapter AND the power supply thing for the fiber to coax converter (for tv) in my smart panel, but only had 2 sockets to work with and no room for a power strip with all of that bullshit in there. Luckily everything is 12 volts and uses the same exact barrel jack, so I bought a 12v 10a power supply (sealed laptop style, not the metal cage ones) and some bare barrel jacks and made up a 5-ended fan-out adapter. It's been running undisturbed for 7 years now.
This might be a bit much for your needs, but it's an option.
>>
everyone laughed when i bought a bunch of 256tb ssd's from chink express

WELL WHO IS LAUGHING NOW YOU NIGGER FUCKERS.
>>
>>107166389
bless you anon
brb time to "resilver my raid" if you know what i mean
>>
>>107169747
I am actually laughing at this. This really is in the realm of you just buying shit just to buy shit. What is the homelab usecase where you're planning on using those? Unironically the only use case for those drives would be in instances where you're actually making money.
>>
>>107158738
>>107160573
Glad i bought 8x WD RED 22tb @ $16/TB a month ago.
>>
>>107169815
In Canada? 22TB is around $30/TB at the moment.
>>
File: noice0.png (396 KB, 497x395)
396 KB
396 KB PNG
>>107169815
>>107169849
>tfw 12x 12tb hgst refurbs (2nd hand) 2 months ago for $1400, less than $10/tb
>>
>>107169885
Refurbs cost more because of the shipping. It's like $250 shipping. I guess if you're ordering in bulk then it's cheaper, but buying one by one wouldn't work for me.
>>
Is it normal for shucks to throw CRC errors?
After I set the drives up properly I haven't seen any errors since, but when I first threw them in I was being spammed with them. Not sure why?
>>
>>107170673
Did you double check your cabling and such?
>>
>>107170735
Would an inadequately cooled HBA card throw CRC errors? I'm thinking maybe the fan was fucked and I fixed it without realizing. That's the only thing I can think of.
>>
File: ௵.jpg (21 KB, 372x260)
21 KB
21 KB JPG
I need recommendations for a 2.5G PCIe NIC that works reliably with Linux with ASPM enabled, from cursory research it seems even the Intel chips have problems with power management for some reason.
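Whatever you end up buying, it's worth checking that ASPM actually gets negotiated instead of trusting the spec sheet; a quick sketch (the PCI address is just an example):
lspci -vvv -s 02:00.0 | grep -E 'LnkCap|LnkCtl'   # want "ASPM L1" in LnkCap and "ASPM L1 Enabled" in LnkCtl
cat /sys/module/pcie_aspm/parameters/policy        # kernel-wide policy: default / performance / powersave / powersupersave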
>>
>>107169432
Depending on the actual power consumption of the PCs and monitors, it's likely you either won't be able to power all of this with a single UPS, or you will only get literal seconds of runtime.
Whether people put monitors on a UPS depends on your needs... what's more important to you... not losing your monitor signal, or purely battery runtime? The monitor(s) will severely lower your runtime.
>>
>>107170753
It's possible. CRCs can crop up with bad cabling or because some component is shitting itself.

>>107170963
>ASPM enabled
Why though? 2.5 gig NICs draw a couple of watts at most. Let it burn a few cents a month in electricity. Are you trying to get your CPU into the C3 state or something?

Is there a reason that 10Gb nics like x710 chipset based ones aren't adequate for you?
>>
>>107169102
Yes I've been in this situation. I just bought power strips. It's fine since you're not overloading it
>>
>>107171177
nise dijit :DD
>>
>>107171187
Thank
>>
>>107171010
while hooking up 5 computers to a single UPS is insane, you underestimate just how much runtime a 1000W UPS can provide. Usually 3-6 minutes at full load, with it going up significantly if he sets up scripts to do load shedding (powering down PCs) as soon as it's on battery power. My 300W server can run for an hour on its 1500W UPS.
>>
>>107171088
My home server idles at around 12W and runs a Core i5 7500, I'd like to keep the idle draw as minimal as possible since most of the time it's idling.
I just back up files on it occasionally and run Jellyfin and Immich on it. I also don't want to spend too much on a NIC and nothing else supports 10G anyway.
>>
>>107171323
>My 300W server can run for an hour on its 1500W UPS
if you're not actually using it and the monitor is off, then sure
>>
Is an m90q worth the upgrade over my optiplex 7040 sff? My place is pretty small and i heard the newer intels are a lot more power efficient. Is an m720q also a better choice than a 7040?
>>
>>107171447
No, that's how much power the server is drawing.
>>
>>107171447
Anon do you have any idea just how little power a modern LCD monitor draws?
>>
>>107171509
So it's okay to plug a monitor to the PDU? I'm in the same boat. Two monitors plugged into the UPS and I want to move them to the PDU.
And yeah, 3-6 minute runtime is more than enough for me to shut down my PC safely.
>>
>>107164681
Something slightly more heterosexual perhaps?
>>107164715
I don't like horses
>>
>>107171590
If your UPS isn't shit, the software will report the estimated runtime based on current load. If it is shit, just use Cyberpower's runtime calculator based on load reported by the UPS:

https://www.cyberpowersystems.com/tools/runtimes/
>>
>>107171609
I know my run time. It's 6+ minutes under load. 15 minutes while idle. And 45+ minutes when shutting down my monitors, gaming PC and everything else other than the network switch and my server.
I just want to know if it's okay to plug it in a PDU.
>>
>>107171618
The only thing you shouldn't plug into battery backups is printers and like... high torque motors. Corded drills come to mind. Monitors are nothing.
>>
>>107171624
Thanks, friend.
I only wanted to plug it in the PDU because it's closer than my UPS. Better cable management.
>>
>>107171627
Yeah a modern LCD monitor is nothing to be concerned about for a PDU, fren
>>
Terramaster+TrueNAS or Unas 2? I have a proxmox server, just need the storage/smb for my media server/immich
>>
>>107171690
You're using USB connections? Good luck. Use TrueNAS.
>>
>>107171434
Intel X710 chipsets support the true low power states and they support 2.5/5GbE. You can get them on ebay for like 30 or 40 dollars. The X710 is the only NIC I'm aware of that I know for a fact supports the low power states and 2.5G. There's a bunch of stuff like the Chelsio T520s that support low power, but they're 1 and 10G only. Couple the NIC with a low power transceiver and you should be in business. See
>>107145037
>>https://store.10gtek.com/nvidia-compatible-10gbase-t-sfp-transceiver-up-to-80-meters-cat-6a-40-85-hpe-aruba/p-23474
>BROADCOM BCM84891 PHY
>peaks at around 2 watts on long runs

Bluntly though, even at 70 dollars for the NIC and transceiver combined, unless you pay a fortune for electricity it will take multiple years before the power savings beat buying some random 20 dollar 2.5G NIC and not worrying about the low power states. I live in an area with expensive electricity, and napkin math says a 15 watt difference would take something like 2.5 or 3 years to pay for itself. It's just not worth the headache. I burn several hundred watts on drives and NICs alone, and yearly it's a fraction of what it costs to heat my house from December through February. I can practically guarantee that you'll save more by throwing some insulation in your attic, or popping the trim around your windows and injecting some foam into any leaky areas before putting down barrier tape and reseating the trim.
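The napkin math written out, with assumed numbers (a $50 price gap, 15 W saved, $0.15/kWh):
awk 'BEGIN { kwh = 0.015*24*365; usd = kwh*0.15; printf "%.0f kWh/yr = $%.2f/yr, payback %.1f years\n", kwh, usd, 50/usd }'
# comes out to roughly 131 kWh/yr = $19.71/yr, payback 2.5 years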

I get it. Power saving is great and all, but some shit just doesn't math out to be worth your time.

Something else to chew on. If this is such a low power build and the performance is so non-critical, why do you even need 2.5G? If your usage is intermittent, does it really matter if it takes an extra few minutes to back stuff up? You don't even need 100Mbit to stream jellyfin, so what's wrong with gigabit? I run 40GbE because I'm running half a petabyte of storage and a small cluster for testing builds before pushing shit to prod. But I'm a psycho.
>>
>>107168789
>>107162937
wait, what? name?
>>
>>107171928

>>107166389
>>
File: FunnyFrijole.jpg (35 KB, 600x600)
35 KB
35 KB JPG
>>107156094
Post more server racks.
>>
File: download (1).jpg (250 KB, 768x768)
250 KB
250 KB JPG
>>107172494
>>
I want to build a Proxmox/Opnsense box to try it out and move all my networking shit from my jellyfin box to a dedicated machine.

Found this motherboard on Alibaba and it seems pretty decent? https://www.alibaba.com/x/1l9zTwm?ck=pdp

Any other recommended hardware to look at. I'd prefer a total of 4 storage ports, any configuration of m.2/sata.
At least 4 ethernet ports, preferably 2.5g
Either built in 10gb sfp ports or PCIe slots so i can add a sfp nic myself.
>>
File: 1753871567664419.jpg (105 KB, 915x833)
105 KB
105 KB JPG
>>107172918
>full x16 PCIe slot
>hdmi AND vga header
>SODIMM
>>
just use a switch retard
>>
File: EJJYNY-X0AIOXUX.jpg (32 KB, 360x482)
32 KB
32 KB JPG
>>107173004
Chinesium is truly amazing.

>>107173006
What's the point if it's actually gonna work? Ducttaping some bullshit together that works 57% of the time is half of the fun with this hobby.
>>
>>107172918
holy shit
>>
buying random chinesium motherboards from alibaba isnt "ducttaping some bullshit together"
its just a dice roll
>>
File: 1638198138985.png (15 KB, 500x400)
15 KB
15 KB PNG
>>107156094
not sure if i should ask this question here or in SQG, but im looking for a sata expansion card to upgrade my truenas server.
i have seen many recomend the LSI HBA's but there are also a lot of fake ones.
so im wondering if a card like this would also work bc it seems like a hit or miss:
>https://www.startech.com/en-us/cards-adapters/8p6g-pcie-sata-card
>>
>>107173102
I'd call it horrifying but amazing is one way to put it lol
>>
>>107173227
you'd be better off with LSI hba
>there are also a lot of fake ones
no there isnt
>>
this sounds bright, but anyone have or know of special/common nginx configs to boost security?
>>
>>107173338
boost security how? for what?
give some fucking context you utter moron
>>
>>107172918
>I'd prefer a total of 4 storage ports, any configuration of m.2/sata.
>At least 4 ethernet ports, preferably 2.5g
>Either built in 10gb sfp ports or PCIe slots so i can add a sfp nic myself.
Just get the cheapest AM4 board with a good enough PCIe lane config. CPUs are dirt cheap, boards are dirt cheap second hand and are still less sketchy than these chink ones.
>>
>>107173227
>>107173323
Go on eBay, search for the model you want, and then filter out China. Done.
>>
File: 1619977730243.jpg (138 KB, 1000x875)
138 KB
138 KB JPG
>>107173227
LSI 8i cards have been cloned for so long that triple-knockoff chinkshit may actually be okay now, but that site's pricepoint seems quite expensive, inc. cables
if you search some combination of HBA, JBOD, LSI + "8i" in any store, you might luck out with a used "official" for just 20USD, just make sure to (carefully) crack off the thermal cement for new paste
old card standards also took the SFF-8087 cables that are around 10USD, but make sure you also have compatible power cables for whatever your outs use
perhaps you may also need more than 8i internal ports in the future
>>
>>107173227
no
for that cash get a 9500 lsi hba bigdick arm lowpower card
>>
>>107173547
I actually bought two obviously faked ones for 15$ ~2 years ago because I figured for 15$ why the fuck not. So far they have worked just fine, no errors in the logs or data corruption on the pool but it is a complete throwaway pool so data integrity doesn't concern me.
And don't get me wrong I am not endorsing these knockoffs, they are noticeably worse build quality than real ones and I doubt they will live as long as real ones do i.e. easily two decades, but if you are that tight for cash I guess they work.
>>
>>107173353
The problem is trying to fit it all into a 1U chassis.
>>
i want to build a cheap server at home for NAS, jellyfin, maybe one or two game servers + a windows 10 or 11 VM for light office desktop use
problem is that the pc parts market in my country is dogshit and imports are expensive, but with the 11.11 deals + coupons on aliexpress i can get some stuff
for my use case will i miss much if i go with a ryzen 2600 in place of those old x99 xeons? they have a lot of cores but i'm concerned about the power consumption
also, is a 1050ti good enough to transcode 4k on jellyfin for 2 to 3 clients at the same time?
>>
>>107174002
Better off grabbing an intel cpu and doing the transcoding on the igpu probably.
>>
>>107174002
>will i miss much if i go with a ryzen 2600 in place of those old x99 xeons
ryzen 2700 is 10-20% slower than the best x99 xeons. but its literally only going to be noticeable for transcoding.
>also, is a 1050ti good enough to transcode 4k on jellyfin for 2 to 3 clients at the same time?
probably not.
but it should manage a couple 4k>1080p transcodes just fine.
>>
TRANScoding
-ACK!
>>
>>107174026
i had that in mind too but... which one?
>>107174142
>ryzen 2700 is 10-20% slower than the best x99 xeons
but the xeons chug power like a motherfucker even on idle, i'm saving some money and getting better performance but in the long run i'm spending a lot more on the energy bill
if import taxes weren't so bad i'd just bite the bullet and get an m1 mac mini for this
>4k>1080p transcodes
yeah that's what i meant, 4k to 1080p, not 4k to 4k in another codec, for 4k playback i'll just store the files in h264 or h265 and let the clients handle it
>>
File: 1580312400903.jpg (388 KB, 1500x1000)
388 KB
388 KB JPG
>>107173787
I actually don't know if mine was real too
it had all the debug LEDs, battery pinout, and official-looking print, but it came pre-flashed with a super bare IT mode
if you changed the PCIe arrangement even just for other cards, it would get dementia and hang at device search unless it got re-initialized alone first
>>
>>107174231
If you store everything in 264/265 even something semi old will probably do fine. Anything Coffee Lake or newer should be enough if you're gonna do like 2-3 simultaneous transcode max
>>
most of the fake lsi hbas were just old chipsets remanufactured on new boards with fake screen printing
think 9200 being sold as 9300 series (but way worse)

if you paid like $15 chances are you didnt get a fake at all just an ancient piece of shit.
>>
>>107173915
>PCIe extender cables with 90° angled connector for sideways mounted NICs
>5750GE 8C/16T 35W TDP
>NH-L9a-AM4 or Thermalright AXP90-X36 for CPU cooler
Been there done similar shit.
>>
or just stick with 2U
you don't need 1U density in your house so why bother
>>
For me, it's 4U
>>
>>107175276
4U for storage nodes, 3U for anything else.
>>
File: 1647115990547.jpg (134 KB, 1200x1200)
134 KB
134 KB JPG
>>107173323
>>107173547
a quick search on aliexpress brought me this:
https://aliexpress.com/item/1005005028899772.html

seems like worth trying
>>
>>107175530
You can buy a Fujitsu 9211-8i with cables for ~30 eur on ebay
>>
>>107169432
>1 NAS
>1 proxmox machine with a lot of services
These are the only two machines I'd put on the UPS, and have them setup to gracefully shut down when the UPS detects a power outage.
>>
>>107175530
reviews say it runs hot so you may need decent case airflow or ziptie some small fan to it
seems like nobody removed the heatsink either to check if it was compound, pad, cement or what
>>
>>107156094
Lord give me strength
>>
>>107175878
>>107168746
>>
File: rack.jpg (856 KB, 2064x3496)
856 KB
856 KB JPG
is Btrfs good for your home server?
>>
>>107175559
looks like most sellers are chinese and the eu sellers are 2x the price.
>>107175719
i have a Define R5 case and a sidepanelfan.
was planning to repasting anyway
>>
>>107176151
i personally hate btrfs. ext4 or zfs
>>
>want to build a NAS
>check ebay for cheap X10 motherboards
>find a decent one
>ask seller if they ship with 2 ram sticks or only 1
>says they cant disclose that information
What the fuck. Just fucking tell me.
Anyone got experience with plusboards on ebay? They sound super shady when they can't even provide simple information.
https://www.ebay.ca/itm/306408547094
I would buy an X11 board but RAM prices are almost the same price as a used board.
>>
>>107176151
unlike certain other filesystems, it does not murder your wife
>>
File: fs.jpg (247 KB, 1032x1210)
247 KB
247 KB JPG
>>107176241
yep me too
>>
>>107176401
Anon, it's pretty obvious how many sticks you get. Max stick capacity for basic (i.e. non-registered) DIMMs was 8GB for DDR3.
>>
>>107176177
I've ordered one from Mr Lin and his Wie Bo technology mall 2012
Dude got a 99.2% rating
It should be arriving by the end of the week and if it doesn't work, eh its 30eur piece
>>
File: full-3229818210.jpg (30 KB, 656x679)
30 KB
30 KB JPG
>>107176401
>What the fuck. Just fucking tell me.
No.
>>
>>107176666
So even though I don't need 16GB RAM I still need to buy their 16GB version to guarantee 2 sticks? Fucking gaaaaaaaaaaaaaay.
>>
>>107176798
well, i will be ordering from Ali soon so hopefully i dont get chinked
>>
>>107173227
artofserver on youtube has good videos on how to spot the difference between fake and real lsi cards.
>>
>>107172918
This is high art. Put it in the Louvre. The shitpost in physical form.
>>
>>107177108
>m-muh fake LSI cards
There's no such thing. This faggot only made that video because he has a monetary incentive to do so. Look at the price of his HBA and look at the "fake" cards.
Companies would tell the chinks to produce X amount of HBA cards for them. But of course the chinks make a lot more, usually because they want to overproduce to cover the ones that didn't meet the standard. And once they meet the demands of the company that told them to do it, they now have excess to sell. It's literally using the same parts and design as 'genuine' HBA cards.
Don't fucking fall for the meme.
>>
What are some *actually useful* containers I can run on my server? No *arr shit. I got MeTube running, and I was considering starting to document my shit after 8 years with Wiki.js, but what else?
>>
Does FreeBSD play well with Dell PowerEdge tower servers? I want a big box to hold a ZFS array and be my home server.
>>
>>107177521
BSD works fine with 99.99% of enterprise server hardware. Where it falls apart is when you put it on consumer hardware that's less than 3 years old.
>>
>>107177471
NextCloud
SyncThing
Transmission or Deluge (bittorrent)
PiHole (adblocking DNS)
>>
>>107177542
The .1% seems to be SuperMicro servers. I ran into a nasty SATA driver bug on FreeBSD once there. Dell disk arrays and controllers are better made so that shouldn't be an issue.
>>
>>107177471
Depends on what you consider useful.
I have one for streaming videos. I have a separate one for streaming music. I have a manga reader. I also have a pdf and ebook reader. Wiki.js is nice as I also use it to document what I am doing, so in the off chance I have to nuke everything and start over I can just follow the steps I did. Also I have one that detects when my UPS kicks in and, after it reaches a certain % of battery, starts shutting things down.
>>
>>107177563
Does modern enterprise equipment even have SATA controllers? I know on EPYC that shit's handled by the CPU itself.
>>
>>107177635
>modern
That's a pretty wide range of equipment and how it's configured anon. BSD is like anti-linux, you have one option for every tool, so no variety, but everything should also just work.
>>
>>107177651
Yeah sorry I should have specified: Rackmount servers. EPYC and high core count Xeon.
>>
>>107177670
I would think that it will work, don't see why not, not heard of any issues in general with BSD to be honest. Largest gripe I see about it is the lack of options for tools/apps. Pretty sure that's what drove IXsystems away from it. I think TrueNAS is actually running on Debian now if memory serves.
>>
>>107177635
Dell still has PERC RAID cards in most models. I don't know if "JBOD mode" passes them through to a mobo/CPU controller or if the PERC does that itself.
>>
>>107177707
>Pretty sure that's what drove IXsystems away from it
It was the fact that they kept finding portability bugs in tools and needing to pay developers to fix them, iirc. If your applications work on FreeBSD, it's still fine.
>>
Yeah there's a reason BSD is used for a very narrow set of services, because the most iconic duo in computing is BSD and Needless Hassle.
>>
>>107177471
I use authentik for sso, gitlab for my C++ projects, and bind9 to use my own domain name
>>
>try to set up podman auto update
>struggle with it for 10 minutes
>remember I can just make a timer to run a bash script that does the manual commands
The simplest methods really are the best, aren't they?
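Something like this is probably what that script ends up being, assuming the containers run from "podman generate systemd --new" style units so a unit restart actually recreates them from the freshly pulled image (container names are examples):
#!/bin/bash
# nightly refresh, fired from a systemd timer or cron
set -euo pipefail
for c in jellyfin immich; do
    podman pull "$(podman container inspect -f '{{.ImageName}}' "$c")"
    systemctl --user restart "container-$c.service"   # --new units recreate the container on restart
done
podman image prune -f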
>>
I just realized that there's no simple service/bot to wake up hosts via LAN. Does /hsg/ happen to know one?
>>
>>107179320
You're not gonna believe it but I found this while looking at selfh.st earlier today: https://github.com/Trugamr/wol
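And if you don't need a web UI at all, the CLI tools already cover it (MAC/broadcast/interface below are examples):
wakeonlan -i 192.168.1.255 aa:bb:cc:dd:ee:ff
etherwake -i eth0 aa:bb:cc:dd:ee:ff    # alternative that sends from a specific interface, needs root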
>>
>mini PC with two NVMe slots
https://www.amazon.com/JMT-ADT-R43SR-Extension-PCI-Express-Graphics/dp/B0DK6ZP1B3?th=1
Is there a more elegant way to get multiple SATA rust drives onto this thing other than cobbling together a bunch of adapter cables and running custom power to things?
It's a custom build either way, just using the tiny motherboard as a launching pad for a build in a larger case.
>>
>>107179357
very nice anon, thanks
unfortunately I already started working on something similar but with an API exposed so I can integrate it to my other workflows
>>
I have bitched about the UK HDD market being bad a few times, but today the place I camp out for drives at has seen a really dramatic drop in stock. People are buying up all the shitty drives that I didn't really want to use. I think the news about a potential hard drive shortage has hit and people are buying bottom of the barrel stuff. Not sure if it's legitimate but the news will change buying behaviours anyway.
I got 5/6 of the 8TB drives I wanted. I've been keeping an eye out for that 6th but at this point I think I'm going to be grateful for what I have and start my server setup (OS and software side, hardware is already in the case).
>>
>>107156094
BIG TITTY RACKS
>>
>>107176241
>zfs
So how do you use this shit file system without it constantly waking up your drives
Do not tell me to keep my drives running 24/7
>>
>>107168746
god i love big fattitties
>>
>>107179929
disable striping
>>
>>107176417
BOOBY
O
O
B
>>
>>107176151
NEEDS BIGGER TUMMY
[spoiler]would[/spoiler]
>>
I was looking at the prebuilt NAS options and they all look way too overpriced.
However, has anyone had any good luck with Orico brand NAS?
UGREEN seems like another option.

I currently have 22TB of data mostly movies/tv/anime that I wish to move from my old dell optiplex + external Drive to something more contained with grow.

Usage would be self-hosting jellyfin, and maybe hosting a game server for a small group of friends 6-8 max.
>>
>>107179929
>Do not tell me to keep my drives running 24/7
You're supposed to leave it on 24/7. If you don't need access to your files 24/7 then you don't need to use ZFS.
>>
>>107180191
I tried both, And Synology too.
Orico - The NAS was so flimsy. I thought I would break something when I had to replace drives. The software sucks too. Died in 2 years.
UGREEN - Was also flimsy and cheaply built. Died in a year.
Synology - More robust than the other two. Software is shit though.
Out of all three if I had to pick I would go Synology > UGREEN > Orico
>>
>>107180191
>>107180235
Coworkers agree with Synology, but I run my own custom on TrueNas
>>
>>107180223
kill yourself retard
>>
>>107180338
Yup, this is the way. After trying all three NAS brands I realized it was easier to build it myself. I believe Synology NAS are also stopping consumers from putting in any HDD and making you get their branded HDDs. Not sure how that turned out, but when they announced that I ditched them.
>>
>>107180191
diy

>>107180405
pretty sure EU throatfucked them for that and you can still use any drives
>>
>>107180223
So avoid all benefits of ZFS if you don't need to be accessing your files 24/7. Yeah no, you are a retard.
>>
>>107179929
ZFS is designed for servers. Choose something else if keeping your spinny boyes spinning 24/7 isn't an option.
>>
>>107180402
>>107180440
>waaaaah
>i need real time protection for my files
>except not really
lol
k
>>
>>107180496
How about your face to my fist, howdoya like them apples? Huh punk!?
>>
>>107179929
>Do not tell me to keep my drives running 24/7
Spinning drives up and down many times a day is worse than just letting them run. You can adjust the timeout periods in ZFS if you want to power down drives and let them park, but it's retarded to do that. You'll burn through drives faster, and that means you'll end up spending more money in the long run.
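If you're going to do it anyway, a sketch with hdparm, assuming the drive actually honors the standby timer (-S values 241-251 step in 30 minute increments; the device path is an example):
hdparm -S 244 /dev/sdb   # 244 = 4 x 30 min = 2 hours of idle before spindown
hdparm -C /dev/sdb       # check whether it's currently active/idle or in standby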
>>
>>107180678
This is just old wives tale bullshit. Modern drives are designed for hundreds of thousands of cycles.
>>
>>107180710
No it isn't. There's literally articles about this.
>>
>>107180678
stop spreading this retarded bullshit
yes technically it puts more wear on the motors but unless you're spinning them up and down literally every 5 minutes for months it wont have any noticeable effect on drive lifetimes.
>>
>>107180744
Journalists would never lie, of course. Eat the bugs, live in the pod.
>>
>>107180744
theres articles on how the world is flat and how the kgb did 9/11 it doesnt make either of those things true
>>
>>107180748
shut up pajeet
>>
Finally got everything setup and working.
Before I had Nextcloud snap running through its own Apache exposed on 443 and a handful of docker services I accessed locally.
Now I have nginx proxy manager running on 443.
I bought a domain through Cloudflare and have it all set up so I can easily add any service I want to remotely access in a few clicks.
At the moment I only have NextCloud and Immich remotely accessible.
I'm thinking of exposing Overseerr and Audiobookshelf as well, but I might set up a VPN because they're not as critical and I'm mildly concerned about their security.

Cloudflare is nice because it was really easy to set up rules to disallow connections from outside my country and it's really quick.
>>
>>107180748
lol
lmao
sure thing, kiddo.
hope they replace your dead drive in 2 months
:)
>>
>>107180790
i accept your concession
>>
>>107180790
I'm white and I'm right.

>>107180814
notice neither of you retards post any source to back up your myths
because there arent any.
>>
>>107180875
https://www.usenix.org/legacy/event/fast07/tech/full_papers/pinheiro/pinheiro_html/index_bak.html
>Power Cycles. The power cycles indicator counts the number of times a drive is powered up and down. In a server-class deployment, in which drives are powered continuously, we do not expect to reach high enough power cycle counts to see any effects on failure rates. Our results find that for drives aged up to two years, this is true, there is no significant correlation between failures and high power cycles count. But for drives 3 years and older, higher power cycle counts can increase the absolute failure rate by over 2%. We believe this is due more to our population mix than to aging effects. Moreover, this correlation could be the effect (not the cause) of troubled machines that require many repair iterations and thus many power cycles to be fixed.
>Our results find that for drives aged up to two years, this is true, there is no significant correlation between failures and high power cycles count.
>But for drives 3 years and older, higher power cycle counts can increase the absolute failure rate by over 2%. We believe this is due more to our population mix than to aging effects. Moreover, this correlation could be the effect (not the cause) of troubled machines that require many repair iterations and thus many power cycles to be fixed.
Unless you actually bought a brand new enterprise HDD, you don't have to worry about power cycling it. But if you bought a used one, and most likely already had 3+ years on it, then good fucking luck.
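If you want to know where your own drives sit, the relevant SMART attributes are easy to pull (device path is an example):
smartctl -A /dev/sda | grep -Ei 'power_cycle_count|start_stop_count|load_cycle_count'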
>>
>>107180957
honestly beats the forum post from some random retard i was expecting to be linked
still its 20 years old and not really conclusive

>Unless you actually bought a brand new enterprise HDD
no of course not i dont have money to burn
all my disks were bought with <200 hours on them. some of them are marked Dell EMC, you can't get more enterprise than that.
>>
>>107180990
>here's some proof
>no not like that
Sandeep Vijay please touch grass and stop fighting against basic laws of physics.
power cycling electrical moving parts = bad, no matter how much you cry here.
>>
>>107181104
you are wrong and no amount of namecalling will change that
try reading your own quote next time
>>
>>107180957
>We believe this is due more to our population mix than to aging effects.
Retard didn't even read his own quote. How big is that dent in your head?
>>
>>107180748
>yes technically it puts more wear on the motors
It's mostly the bearings. Heavy acceleration applies inconsistent forces on them and can lead to inconsistent wear. This is a very common problem in industrial equipment. Just because something is stable at high RPM doesn't mean that it can accelerate perfectly cleanly. There's also thermal stressors to consider. A few restarts isn't going to make a difference one way or the other, but even a dozen starts a day is nearly 5000 a year.

>but unless you're spinning them up and down literally every 5 minutes for months
That's largely what happens in a lot of retarded homelab setups. People will set the spindown timeout to something asinine like 5 or 10 minutes, and then spin their drives up 20 times a day without realizing it. If you're going to do this stupid shit, set your timeout period to several hours.

>>107180710
They're designed to spin continuously. They can handle thousands of restarts, but spinups, particularly from cold, are hard on them.

We actually tested this at my last job on a limited number of racks that were purely for backups. You could poke holes in the data and methodologies if you felt like it. The data was nowhere near good enough at handling all variables for journal publication. Nevertheless, the trends were that the drives in those racks that were allowed to power down frequently experienced higher failure rates. The first year saw little difference, but the year over year failure rates climbed substantially faster. Did we test this with 20,000 drives and dozens of models from all of the manufacturers? No. We tested it on several hundred. Did we account for all possible discrepancies for things like temperature and vibration? No, but we did try to make things fair. I'm no longer there, so I can't tell you if the rates after years 4 and 5 matched the 24/7 duty ones in years 6-8, but I wouldn't be surprised.

For how little electricity helium drives use when idle, it's not worth it.
>>
Anyone running Romm?
https://github.com/rommapp/romm
>>
>>107181323
>We actually tested this at my last job on a limited number of racks that were purely for backups. You could poke holes in the data and methodologies if you felt like it. The data was nowhere near good enough at handling all variables for journal publication. Nevertheless, the trends were that the drives in those racks that were allowed to power down frequently experienced higher failure rates. The first year saw little difference, but the year over year failure rates climbed substantially faster. Did we test this with 20,000 drives and dozens of models from all of the manufacturers? No. We tested it on several hundred. Did we account for all possible discrepancies for things like temperature and vibration? No, but we did try to make things fair. I'm no longer there, so I can't tell you if the rates after years 4 and 5 matched the 24/7 duty ones in years 6-8, but I wouldn't be surprised.
You're talking about a job that is probably constantly waking and sleeping the drives all day every day. I'm talking about a home server that is asleep for at least 10 hours a day (because nobody is accessing the server in their sleep)
>>
>>107181323
>It's mostly the bearings
i meant bearings
>That's largely what happens in a lot of retarded homelab setups
not my problem
>set your timeout period to several hours.
at that point you might as well not bother

its all academic anyway since this only goes for toshiba and other shitter drives without low rpm idle

>>107181352
>(because nobody is accessing the server in their sleep
except, y'know, if you're running torrents and start seeding. or something else running on your server decides it needs to read from or write to your array. and if you're using striping your entire array is getting spun up.
its asinine to suggest that just because the user is sleeping the server is inactive. why keep it on at all? save half(+) the power bill and only have the server powered when you are using it.

>>107180808
>apache
>nginx proxy manager
>I bought a domain through Cloud Flare
i know this is ragebait, fuck you
>>
>>107181545
Nextcloud uses Apache by default, NPM is nice and simple and suits 99% of people self-hosting, Cloudflare is cheap, fast, secure and easy
>>
>>107181571
apache smell
npm i seriously dont understand how people use. it was fucking awful when i tried it, even worse than traefik. maybe they just don't know about caddy docker proxy.
cloudflare locks you into using only them for dns. they're cheap sure but an extra dollar a year is worth it not to have cloudflare dictate what you can or cant do. other registrars are also fast and secure.
>>
>>107181640
What's wrong with NPM? Simple web UI, takes two seconds to set up proxies and handles certs in two clicks. It's extremely convenient.
>>
>>107181352
>You're talking about a job that is probably constantly waking and sleeping the drives all day every day. I'm talking about a home server that is asleep for at least 10 hours a day (because nobody is accessing the server in their sleep)
You're wrong on both fronts here.

1) Torrents will randomly spin up drives at any hour of the day.
2) Our workload was periodic. It was nightly backups. Funnily enough, our workload was actually more forgiving than your typical homelabber setup because we started the drives at most 7 times a day. Backup jobs were staggered across half hour intervals in a 3 hour window. If things went quickly, the drives had time to power down. If memory serves it averaged out to around 4 or 5 spinups a day, which is a fraction of the several dozen you can achieve with a 10 minute interval and intermittent activity over an 8-10 hour window.

>>107181545
>at that point you might as well not bother
My point exactly. I guess it could make sense if you are only touching the server every week or two, but just power it down entirely at that point.
>>
>>107181704
>but just power it down entirely at that point.
Yeah the problem with that is that my server access is not consistent. Turning off the server fully is just a pain.

Also, leaving the drives on isn't just a power problem, it's excess heat, excess noise, increased dust intake (due to fans, which also turn off on sleep), etc.

It's too convenient to leave the server on and put the drives to sleep.
>>
>>107181704
>1) Torrents will randomly spin up drives at any hour of the day.
Also I should mention, anything that constantly wakes up drives (torrents, plex, etc.) lives on nvme cache drives.
This is not a problem for me.
>>
>>107177728
>>107177707
>IXsystems
They were responsible for the PCBSD distro, which was how I came to get in contact with FreeBSD. Lumina was also an interesting project and came before everyone decided to make their own DE. It was a very good distro, a shame they killed it.


