• 0 Posts
  • 20 Comments
Joined 2 years ago
Cake day: July 7th, 2023

  • Okay, let me just clarify some stuff here because your language has been confusing.

    You’re using a “VPN”, but on a local network. When you say “VPN”, people assume you mean you’re connecting a client to a remote location. That’s super confusing.

    For what you’re trying to do, you don’t even need WG unless you mean to use your DNS server from elsewhere.

    Please clarify these two things, but I think you’re just overcomplicating a simple ad-blocking DNS server setup, right?


    1. This is the most complex way of simply sharing files between containers I’ve ever heard of. That sure sounds like bad advice to me. Do you have a link to that?

    All I’m saying is that if you’re sharing files between two containers, giving them each their own volume and using the network to shuttle files between them is not best practice. One volume, two containers: both mount the same volume and skip the network entirely.
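    As a sketch, the shared-volume pattern is just this in a Compose file (service and volume names here are made up for illustration, not from your setup):

    ```yaml
    # docker-compose.yml sketch: one named volume, two containers, no network share
    services:
      app:
        image: nginx:alpine
        volumes:
          - shared-data:/data    # writer and reader see the same files
      worker:
        image: alpine
        command: sh -c "ls /data && sleep infinity"
        volumes:
          - shared-data:/data    # same volume, second container

    volumes:
      shared-data:
    ```

    Anything `app` writes under /data is immediately visible to `worker`, no Samba or network hop involved.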

    2. Samba maps users in its own DB to users that exist on its host. If you’re running it in a container, it’s likely just going to default to running as root. So if you start a brand-new Samba server, you need a fresh user to get started, right? So you create a user called ‘johndoe’ with uid=1100 and give it a password. Now, that user is ONLY a Samba user; it doesn’t get created as an OS user. So if your default OS user is ‘ubuntu’ with uid=1000, you’re going to have permission issues between files created by these users, because 1100 is not equal to 1000.

    To solve for this, you create a user mapping in the Samba configs that says “Hey, johndoe in Samba is actually the ubuntu user on the OS”, and that’s how it solves for permissions. Here’s an example issue that is similar to yours to give you more context. You can start reading from there to solve for your specific use-case.

    If you choose NOT to fix the user mapping, you’re going to have to keep going back to this volume and chown’ing all the files and folders to make sure whichever user you’re connecting with via samba can actually read/write files.
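    For reference, the mapping itself is only a couple of lines in smb.conf plus a map file (user names here are the hypothetical ‘ubuntu’/‘johndoe’ pair from above, so adjust to your own users):

    ```
    # /etc/samba/smb.conf (global section)
    [global]
       username map = /etc/samba/usermap.txt
    ```

    ```
    # /etc/samba/usermap.txt
    # format: <unix user> = <samba login name(s)>
    ubuntu = johndoe
    ```

    With that in place, clients authenticating as ‘johndoe’ read and write files as uid 1000, and the chown merry-go-round stops.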


  • Ah, okay. If this is Android, just set up your Unbound host IP under ‘Private DNS’ on your phone then.

    Note: this will cause issues once you leave your home network unless your WG tunnel is available from outside. Set the secondary DNS to Mullvad or another secure DNS provider if that’s the case, and you shouldn’t have issues once you leave the house.

    Depending on your router, you can also just set a static DHCP reservation for your phone only that sets these DNS servers for you without affecting all other DHCP devices.
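    If your router happens to run dnsmasq (OpenWrt and plenty of stock firmwares do), the per-host version is roughly this; the MAC address, tag name, and IPs below are assumptions you’d swap for your own, with a public resolver (Quad9 here) as the away-from-home fallback:

    ```
    # dnsmasq sketch: give ONE phone a fixed lease and custom DNS servers,
    # without changing the DHCP options every other device gets
    dhcp-host=aa:bb:cc:dd:ee:ff,set:phone,192.168.1.42
    dhcp-option=tag:phone,option:dns-server,192.168.1.10,9.9.9.9
    ```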


  • The biggest thing I’m seeing here is the creation of a bottleneck for your network services, and potential for catastrophic failure. Here’s where I foresee problems:

    1. Running everything from a single HDD(?) is going to throw your entire home and network into disarray if it fails. Consider at least adding a second drive for RAID1 if you can.
    2. You’re going to run into I/O issues with the imbalance of the services you’re cramming all together.
    3. You don’t mention backups. I’d definitely work that out first. Some of these services can take their own, but what about the bulk data volumes?
    4. You don’t mention the specs of the host, but I’d make sure you have swap equal to RAM here if you’re not worried about disk space. This will just help prevent hard kernel I/O stalls or OOM kills if it comes to that.
    5. Move network services first, storage second, nice-to-haves last.
    6. Make sure to enable any hardware offloading for network if available to you.
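    On point 4, a quick sketch of sizing swap to match RAM. This only prints the commands (they need root to actually run), and /swapfile is an assumed path; on some filesystems `dd` is safer than `fallocate` for swap files:

    ```shell
    # Read total RAM from /proc/meminfo and print matching swap-file commands
    ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    echo "sudo fallocate -l ${ram_kb}K /swapfile"
    echo "sudo chmod 600 /swapfile"
    echo "sudo mkswap /swapfile && sudo swapon /swapfile"
    ```

    Add a matching line to /etc/fstab afterwards so it survives reboots.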

  • I’m…totally lost here. You’re trying to use two different VPNs on your local network? If you want your Unbound device to be a VPN exit node for your network, why wouldn’t you just set up routes to make it your default gateway?

    Using two different VPN tunnels like this is just going to cause routing issues all over the place if you’re unfamiliar with how to set up the routing to begin with.

    Maybe explain what your intended use is here to help us understand what you’re trying to accomplish.
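    For what it’s worth, “make it your default gateway” on a Linux client is a single routes stanza, e.g. in netplan (every address below is hypothetical; 192.168.1.50 stands in for the Unbound/WireGuard box):

    ```yaml
    # netplan sketch: send all of this client's traffic through the tunnel box
    network:
      version: 2
      ethernets:
        eth0:
          addresses: [192.168.1.20/24]
          routes:
            - to: default
              via: 192.168.1.50   # the box running Unbound + the tunnel
          nameservers:
            addresses: [192.168.1.50]
    ```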


  • Two things:

    1. This is the most inefficient way of sharing files between containers. Use the same volume mount between containers if you just want both to have access to the same files.
    2. In order for SMB to work properly, and not cause file access violations, you need to have unique users for auth that map to a UID on the filesystem. If the files and folders you’re mounting are owned by root with uid=0, and SMB maps to another user you’ve created with uid=1000, then your SMB user won’t be able to read or write anything.

    It may be easier to explain exactly what you’re trying to achieve here so someone can offer a better way of setting this up for you.
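    Point 2 in one runnable illustration: permission checks compare numeric uids on disk, not usernames. Nothing Samba-specific here, just a temp file to show what the check looks like:

    ```shell
    # A freshly created file is owned by whoever created it; SMB write access
    # depends on that numeric owner uid matching the uid your SMB user maps to.
    tmp=$(mktemp)
    owner_uid=$(stat -c '%u' "$tmp")
    my_uid=$(id -u)
    echo "owner uid: $owner_uid / my uid: $my_uid"
    [ "$owner_uid" = "$my_uid" ] && echo "uids match: read/write works"
    rm -f "$tmp"
    ```

    When those two numbers differ (your root-owned files vs. the uid=1000 user), that’s exactly where the access violations come from.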





  • Only some models of Synology units have the ability to run containers, so check that first.

    Otherwise, you COULD try and install the deps from the Synocommunity packages, but they get messy pretty quickly due to architecture limitations per package (one package may only work on select models). You can browse those and their architecture targets on the synocommunity site to make sure what you need will be available. If you can’t go the container route, I’d definitely read up on packaging your own app using the synocommunity guides, even if keeping it private.




  • There’s a huge list of reasons why this is not going to work, or not work well.

    I’ll stick to the biggest issue though, which is that OpenWRT expects exclusive control over the wireless chipset, and you’re trying to run it in a VM on who-knows-what hypervisor settings. Even if nothing else on the host machine uses the Wi-Fi adapter, OpenWRT has specific builds and kernel patches for specific drivers and specific hardware combinations. If it doesn’t see exactly what it’s expecting, it’s not going to work.

    Now…even if you DID manage to get it to seemingly work, it will constantly crash or panic when you engage the wireless chipset under a hypervisor, because it’s going to hit faults where it expected exclusive control of and access to the hardware.

    I know this because this is how it works, they say so in their own docs, and you can see people report this exact same thing over and over again. It’s not going to be a good time.

    If you want to just use software portions for network services or whatever, that shouldn’t cause issues, but again, doing it through a VM is like dressing a Yugo up as a Ferrari and expecting the same performance.



  • I’ve not run such things on Apple hardware, so I can’t speak to the functionality, but you’d definitely be able to do it cheaper with PC hardware.

    The problem with this kind of setup is going to be heat. There are definitely cheaper mini PCs, but I wouldn’t think they have the space for this much memory AND a GPU, so you’d be looking at an AMD APU/NPU combo, maybe. You could easily build something about the size of a game console that does this for maybe $1.5k.