  • Agreed. The nonstandard port helps too. Most script kiddies aren’t going to know your service even exists.

    Take it another step further and remove the default backend on your reverse proxy so that requests to anything but the correct DNS name are dropped (bots are just probing IPs), and you basically don’t have to worry at all. Just make sure to keep your reverse proxy up to date.

    The reverse proxy ends up enabling security through obscurity, which shouldn’t be your only line of defence, but it is an effective first line of defence, especially for anyone who isn’t facing nation-state-level attackers.

    Adding basic auth to your reverse proxy endpoints extends that a whole lot further. Form-based logins on your apps might be a lot prettier, but it’s a lot harder to probe for what’s running behind your proxy when every single URI just returns 401. I trust my reverse proxy doing basic auth a lot more than I trust some PHP login form.
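
    For nginx, a minimal sketch of both ideas (dropping requests for unknown hostnames, and basic auth in front of a backend) might look like this; the hostname, ports, and paths are placeholders, and ssl_reject_handshake assumes nginx 1.19.4 or newer:

    ```nginx
    # Default server: refuse TLS handshakes for any name you haven't configured,
    # so bots probing the bare IP never learn what's behind the proxy.
    server {
        listen 443 ssl default_server;
        ssl_reject_handshake on;
    }

    server {
        listen 443 ssl;
        server_name app.example.com;  # placeholder name

        ssl_certificate     /etc/nginx/certs/app.crt;
        ssl_certificate_key /etc/nginx/certs/app.key;

        location / {
            auth_basic           "Restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;  # e.g. created with htpasswd -c
            proxy_pass http://127.0.0.1:8080;           # placeholder backend
        }
    }
    ```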

    I always see posts on Lemmy about elaborate VPN setups as the only way to access internal services, but it seems like awful overkill to me.

    A VPN is still needed for some things that are inherently insecure or that just should never be exposed to the outside, but if it is a web service with authentication required, a reverse proxy is plenty of security for a home lab.


  • You are paying for reasonably well-polished software, which makes them a very good choice for non-technical people.

    They have one-click module installs for a lot of the things that self-hosted people would want to run. If you want Plex, a OneDrive clone, photo sync on your phone, etc., just click a button and they handle installing and most of the maintenance of running that software for you. Obviously these are available on other open source NAS appliances now too, so this isn’t much of a differentiator for them anymore, but they were one of the first to do this.

    I use them for their NVR; there are open source alternatives, but they aren’t nearly as polished, user friendly, or feature rich.

    Their backup solution is also reasonably good for some home lab and small business use cases. If you have a VMware lab at home, for instance, it can connect to your vCenter and do incremental backups of your VMs. There is an agent for Windows machines as well, so you can keep laptops/desktops backed up.

    For businesses there are backup options for Office 365/Google Workspace that keep backups of your email/calendar/OneDrive/SharePoint/etc. So there are a lot of capabilities there that aren’t really well covered by open source tools right now.

    I run my own self-built NAS for mass storage because anything over two drives is way too expensive from Synology and I specifically wanted ZFS, but the two-drive units were priced low enough to buy just for the software. If you want a set-and-forget NAS they were a pretty good solution.

    If their drives are reasonably priced maybe they will still be an okay choice for some people, but we all know the point of this is for them to make more money, so that is unlikely. There are alternatives like QNAP, but unless you specifically need one of their software components, either build it yourself or grab one of the open source NAS distros.


  • If your NAS has enough resources, the happy(ish) medium is to use your NAS hardware as a hypervisor. The NAS OS can run on the bare hardware or in its own VM, and the containers can have their own VMs as needed.

    Then you don’t have to take down your NAS when you need to reboot your container VMs, and you get a little extra security separation between any externally facing services and any potentially sensitive data on the NAS.

    There are lots of performance trade-offs there, but I tend to want to keep my NAS on more stable OS versions, while the other workloads can be more bleeding edge/experimental as needed. It is a good mix if you have the resources, and having a hypervisor for testing VMs is always useful.


  • If you are just using a self-signed server certificate, anyone can connect to your services. Many browsers/applications will fail to connect or give a warning, but it can be easily bypassed.

    Unless you are talking about mutual TLS authentication (aka mTLS or two-way SSL). With mutual TLS, in addition to the server key+cert you also have a client key+cert for your client, and you set up your web server/reverse proxy to only allow connections from clients that can prove they have that client key.

    So in the context of this thread, mTLS is a great way to protect your externally exposed services. Mutual TLS should be just as strong a protection as a VPN, and in fact many VPNs use mutual TLS to authenticate clients (i.e. if you have an OpenVPN file with certs in it instead of a pre-shared key), so they are doing the exact same thing. Why not skip all of the extra VPN steps and set up mTLS directly to your services?

    mTLS prevents any web requests from getting through before the client has authenticated, but it can be a little complicated to set up. In reality, basic auth at the reverse proxy with a sufficiently strong password is just as good, and is much easier to set up and use.

    Here are a couple of relevant links for nginx. Traefik and many other reverse proxies can do the same.

    How To Implement Two Way SSL With Nginx

    Apply Mutual TLS over kubernetes/nginx ingress controller
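
    As a concrete sketch of the nginx side (the cert paths and names here are placeholders, not taken from the links above):

    ```nginx
    server {
        listen 443 ssl;
        server_name app.example.com;  # placeholder name

        ssl_certificate         /etc/nginx/certs/server.crt;
        ssl_certificate_key     /etc/nginx/certs/server.key;

        # mTLS: only clients presenting a cert signed by this CA get through
        ssl_client_certificate  /etc/nginx/certs/client-ca.crt;
        ssl_verify_client       on;

        location / {
            proxy_pass http://127.0.0.1:8080;  # placeholder backend
        }
    }
    ```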


  • The biggest question is: are you looking for Dolby Vision support?

    There is no open source implementation of Dolby Vision or HDR10+, so if you want to use those formats you are limited to Android/Apple/Amazon streaming boxes.

    If you want to avoid the ads from those devices, then apart from sideloading APKs to replace the home screen or something, the only way to get Dolby Vision with Kodi/standard Linux is to buy a CoreELEC-supported streaming device and flash it with CoreELEC.

    List of supported devices here

    CoreELEC is Kodi-based, so it limits your player choice, but there are plugins for Plex/Jellyfin if you want to pull from those as back ends.

    Personally, it is a lot easier to just grab the latest gen Onn 4K Pro from Walmart for $50 and deal with the Google TV ads (I never leave my streaming app anyways). The only downside with the Onn is the lack of Dolby TrueHD/DTS-HD Master Audio output, but it handles AV1, and more Dolby Vision profiles than the Shield does, at a much cheaper price. It also handles HDR10+, which the Shield doesn’t, but that format isn’t nearly as common and many of the big TV brands don’t support it anyways.


  • I’ve got about 30 Z-Wave devices, and at first the idea of the 900MHz mesh network sounded like a really solid solution. After running them for a few years now, if I were doing it again I would go with wifi devices instead.

    I can see some advantages to the mesh in a house lacking wifi coverage. However, I would guess most people implementing Zigbee/Z-Wave probably have a pretty robust wifi setup, and if your phone doesn’t have great signal across the entire house, a light switch inside a metal box in the wall is going to fare worse.

    Z-Wave is rather slow because it is designed for reliability, not speed. Not that it needs to be fast, but when rebooting the controller it can take a while for all of the devices to be discovered, and if a device goes missing things break down quickly: the entire network can become unresponsive even if there is another path in the mesh. Nothing worse than hitting one of your automations and everything hangs, leaving you in the dark, because one outlet three rooms over is acting up.

    It does have some advantages. Devices can be tied to each other (i.e. a switch tied to a light) and they will keep working even without your hub being up and running (I think the Z-Wave controller can even be down).

    Z-Wave/Zigbee also guarantee some level of compatibility/standardization: a light switch is a light switch, and it doesn’t matter which brand you get.

    On the security front, Z-Wave has encryption options, but they slow down the network considerably. Instead of just sending out a message onto the network, a device has to negotiate the encrypted connection each time it wants to send a message, with a couple of back-and-forth packets. You can turn it on per device, and because of the drawbacks the recommendation tends to be to only encrypt important things like locks and door controls, which isn’t great for security.

    For Z-Wave, 900MHz is (sometimes) an advantage. 900MHz can be pretty busy in densely populated areas, but so can 2.4GHz for Zigbee/wifi. If you have an older house with metal boxes for switches and plaster walls, the mesh and the penetration range of 900MHz may be an advantage.

    In reality, though, I couldn’t bridge reliably to my garage about thirty feet away, and doing so made me hit Z-Wave’s four-hop limit, so I couldn’t use that bridge to connect any additional devices further out. With wifi devices, connecting back to the house with a wifi bridge, a buried Ethernet cable, etc. can extend the network much more reliably. I haven’t tried any of the latest generations of Z-Wave devices, which are supposed to have higher range.

    The main problem with wifi devices is that they are often tied to the cloud, though a good number of them can be controlled over just your LAN. Each brand tends to have its own APIs/protocols, so you need to verify compatibility with your smart hub before investing.

    So if you go the wifi route, make sure your devices are compatible and specifically check that they can be controlled without a cloud connection. It is especially good to look for devices like Shelly that allow flashing your own firmware or have standardized connection methods in their stock firmware (Shelly supports MQTT out of the box).
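
    As a quick illustration of what that standardization buys you, a Gen1 Shelly relay can be switched with nothing but an MQTT publish; the broker address and device ID below are hypothetical:

    ```sh
    # Toggle relay 0 on a Gen1 Shelly over plain MQTT, no cloud involved
    # (192.168.1.10 is a hypothetical broker, shelly1-AABBCC a hypothetical device ID)
    mosquitto_pub -h 192.168.1.10 -t "shellies/shelly1-AABBCC/relay/0/command" -m "toggle"
    ```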


  • Like most have said, it is best to stay away from ZFS deduplication. Especially if your data set is media, the chances of an entire ZFS block being identical to any other are small, unless you somehow have multiple copies of the same content.

    Imagine two MP3s with the exact same music content but slightly different artist metadata, making one file just a little longer or shorter at the beginning. Even if the file spans multiple blocks, ZFS won’t be able to deduplicate a single one: a small offset shifting the rest of the file is enough to throw off the block checksums across every block in the file.

    To contrast with ZFS, enterprise backup/NAS appliances with deduplication usually do a lot more than block-level checks. They usually scan with sliding window sizes/offsets to find more duplicate data.

    There are still some use cases where ZFS dedup can help, like multiple full backups of VMs: a VM image has a fixed size, so the offset issue above doesn’t apply. But beware that enabling deduplication for even a single ZFS filesystem affects the entire pool, including ZFS filesystems that have deduplication disabled. The deduplication table is global to the pool, and once you have turned it on you really can’t get rid of it. If you get into a situation where you don’t have enough memory to keep the deduplication table in memory, ZFS will grind to a halt, and the only way to completely remove deduplication is to copy all of your data to a new ZFS pool.
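
    If you do go down that road, a rough sketch of the moving parts, using a hypothetical pool "tank" and dataset "tank/vm-backups":

    ```sh
    # Estimate first: simulate the dedup table and ratio without enabling anything
    zdb -S tank

    # Enable dedup only on the dataset holding the duplicate-heavy data
    # (the dedup table itself is still global to the pool)
    zfs set dedup=on tank/vm-backups

    # Watch how big the dedup table grows; it needs to stay resident in RAM/ARC
    zpool status -D tank
    ```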

    If you think this feature would still be useful for you, you might want to wait for the 2.3 release (which isn’t too far off) and its new fast dedup feature, which fixes, or at least mitigates, a lot of the major issues with ZFS dedup.

    More info on the fast dedup feature here https://github.com/openzfs/zfs/discussions/15896



  • I assume you are powering the dock? Many docks require external power before they will pass video.

    Does the screen on the deck shut off or stay active?

    If the screen stays active that means that it isn’t detecting an HDMI signal through the dock at all.

    If the screen shuts off but you get no video through the receiver, try hitting the power button once to put the deck to sleep, wait a few seconds, then turn it back on (while plugged in). Even the official dock has issues getting the deck to switch to the external output, but putting the deck to sleep and waking it back up gets it sorted out.

    If that still doesn’t do it, plug directly into your TV to narrow down the problem (this removes the receiver as a variable). Next try a different HDMI cable, and as a last resort try a different dock. If you know someone else with their own deck, you can try theirs to rule out a hardware failure on yours.



  • Contrary to a lot of posts that I have seen, I would say ZFS isn’t pointless with a single drive. Even if you can’t repair corruption with a single drive, knowing something is corrupt in the first place is even more important (you have backups to restore it from, right?).

    And ZFS still has a lot of features that are useful regardless, like snapshots, compression, reflinks, and send/receive, and COW means no concerns about data loss during a crash.

    Btrfs can do all of this too, and I believe it is better on low-memory systems, but since you have ZFS on your NAS, keeping them the same unlocks a lot of possibilities.

    E.g. if you keep your T110 II running ZFS, you can use tools like syncoid to periodically push snapshots from the Optiplex to the T110.

    That way your Optiplex can be a workhorse, and your NAS can keep the backup+periodic snapshots of the important data.

    I don’t have any experience with TrueNAS in particular, but it looks like syncoid works with it. You might need to make sure that pool versions/flags match for send/receive to work.
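
    A minimal sketch of that replication, with hypothetical dataset names and a hypothetical host "truenas":

    ```sh
    # Push incremental snapshots of the Optiplex dataset to the NAS over SSH
    # (syncoid ships with the sanoid package and handles snapshot bookkeeping)
    syncoid tank/data root@truenas:backup/optiplex-data

    # A cron entry makes it periodic, e.g. hourly:
    # 0 * * * * /usr/sbin/syncoid tank/data root@truenas:backup/optiplex-data
    ```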

    Alternatively, keep that data on an NFS mount. The SSD in the Optiplex would then just hold the base OS and wouldn’t have any data that can’t be thrown away. The disadvantage is that your Optiplex now relies on a lot more to keep running (networking and the NAS must be online all the time).

    If you need HA for the VMs you likely need distributed storage for the VMs to run on. No point in building an HA VM solution if it just moves the single point of failure to your NAS.

    Personally I like Harvester, but the minimum requirements are probably beyond what your hardware can handle.

    Since you are already on TrueNAS Scale, have you looked at using TrueNAS Scale on the Optiplex with replication tasks for backups?


  • If you are accessing your files through Dolphin on your Linux device, this change has no effect on you. In that case Synology is just sharing files; it doesn’t know or care what kind of files they are.

    This change mostly affects people who were using the Synology video app to stream videos. I assume Plex is much more common on Synology, and I don’t believe anything changed with Plex’s h265 support.

    If you were using the built-in Synology video app and have objections to Plex, give Jellyfin a try. It should handle h265 and doesn’t require a purchase like Plex does to unlock features like mobile apps.

    Linux isn’t dropping any codecs and should be able to handle almost any media you throw at it. Codec support depends on what app you are using, and most Linux apps use ffmpeg to do the decoding. As far as I know Debian hasn’t dropped support for h265, but even if it did, you could always compile your own ffmpeg libraries with it re-enabled.

    How can I most easily search my NAS for files needing the removed codecs?

    The mediainfo command is one of the easiest ways to do this on the command line. It can tell you what video/audio codecs are used in a file.
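
    For example, a rough sketch for finding h265 files (the media path and extensions are placeholders):

    ```sh
    # Print files whose video stream is HEVC (h.265), using mediainfo's template output
    find /volume1/media -type f \( -name '*.mkv' -o -name '*.mp4' \) -print0 |
      while IFS= read -r -d '' f; do
        [ "$(mediainfo --Inform='Video;%Format%' "$f")" = "HEVC" ] && printf '%s\n' "$f"
      done
    ```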

    With Linux and Synology DSM both dropping codecs, I am considering just taking the storage hit to convert to h.264 or another format. What would you recommend?

    To answer this you need to know the least common denominator of supported codecs across everything you want to play back on. If you are only worried about playing this back on your Linux machine with your 1080s, then you fully support h265 already and you should not convert anything. Any conversion between codecs is lossy, so it is best to leave them as they are or else you will lose quality.

    If you have other hardware that can’t support h265, h264 is probably the next best. Almost any hardware in the last 15 years should easily handle h264.

    When it comes to thumbnails for a remote filesystem like this, are they generated and stored on my PC, or will the PC save them to the folder on the NAS where other programs could use them?

    They are generated locally; Dolphin stores them in ~/.cache/thumbnails on your local system, so other programs on the NAS won’t see them.
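
    If you ever want to check on or reclaim that space, a quick sketch:

    ```sh
    # See how much space Dolphin's local thumbnail cache is using
    du -sh ~/.cache/thumbnails

    # Safe to clear; thumbnails are regenerated on demand
    rm -rf ~/.cache/thumbnails/*
    ```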




  • The switches don’t have to control the lights they are wired to. I have Inovelli Z-Wave switches, and on these you can disable local control of the relay, so the switch can still send out commands/scenes on the network while its relay stays permanently on.

    Then you would put a relay unit in the electrical box of the lights, or, if you have enough room, in with the switches, and set up the switches to control their respective sets of lights.

    There might even be a switch out there that lets you disconnect the relay from the buttons on the switch but still control the relay over the network, which would cut down on the device count.




  • I believe so. The package descriptions for most of the ZFS packages in Ubuntu mention OpenZFS, so it certainly appears that way.

    You can still create pools that are compatible with Oracle Solaris; you just have to set the pool version to 28 or older when you create them, and obviously never upgrade it. That will prevent you from using any of the newer features that have been added since the fork.
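
    A minimal sketch, with a hypothetical pool name and disks:

    ```sh
    # Create a pool pinned to version 28, the last on-disk format before the fork,
    # so it stays importable on Oracle Solaris (pool and disk names are placeholders)
    zpool create -o version=28 tank mirror sda sdb

    # Never run "zpool upgrade tank" on it, or Solaris compatibility is lost
    ```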


  • Well, worse than that: Oracle closed-sourced ZFS, so OpenZFS was forced to become a fork, and the two are no longer compatible with each other.

    As for the GPL: the CDDL license that ZFS uses made sure that code contributions attribute copyright to the project owner, which means they can change the license as they please without having to track down contributors.

    You would think that with their investments in Oracle Linux and btrfs they would welcome that license change, but apparently they need excuses to keep putting money into Solaris and their Oracle ZFS appliances instead.