

Good that those things are taught in some places. I can only speak from my own experience in high school - we were required to have laptops for school but were never taught how to be safe online.
Some people put their whole lives on the internet and never once stop to think whether it’s a good idea. Then again, online safety and security are rarely taught or communicated, at least in the West, maybe by design.
Forgejo is a FOSS fork. Gitea, while still free and open source for the time being, is now run by a for-profit corporation.
“Forge-yo” difficult to say?
Between basically every process being done on paper, and most of the civil servants having no idea what an operating system is, I’m sure this will go great.
It’s kinda standard but Pihole is how I got into the general realm of home labbing.
The trend of shutting China out of the West started with Obama’s “Pivot to Asia.” At this point the only disagreement among US ruling elites is whether China or Russia is the primary threat.
Political means more than just parties and institutions of government. Society and the economy are inherently political. Who owns what is produced, and the tools used to produce it, is inherently political. Therefore software development, just like any other type of work or economic interaction, is political.
I like btop. It’s pretty. I just use it for checking resource usage, I rarely have the need to kill a process or anything else one may do with a system monitor.
The switch to Forgejo is super easy: if you don’t mind everything still being called “Gitea,” you can just swap out the Docker image and carry on.
I just switched recently, maybe around version 1.19.
Forgejo is also working on federation, which should give it an advantage moving forward. They’re also sticking with Gitea as an upstream, so reasonable changes Gitea makes should find their way into Forgejo pretty quickly.
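For the Docker route, the swap is usually a one-line change in your compose file. A minimal sketch (the tag and volume path are examples, check Forgejo’s releases for the current version):

```yaml
services:
  server:
    # was: image: gitea/gitea:1.19
    image: codeberg.org/forgejo/forgejo:1.19
    volumes:
      - ./data:/data
```

Then `docker compose pull && docker compose up -d` and you’re done.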
Without more info it’ll be hard to help.
I got it working in principle, but the Raspberry Pi I wanted to host it on isn’t powerful enough to handle the necessary computing.
Copyright expires long after unprofitable content has been all but lost forever, something like 100 years after the death of the original creator. It used to be a far shorter period, but US corporations with big profitable IP holdings keep bribing lawmakers to extend it, and force its enforcement outside of the US as well. The concept of being able to sell copyrights is also quite silly if you ask me.
So unfortunate Gutenberg and similar libraries can only have really old stuff as things stand.
Corporations will never offer such archives, as they’re a money losing proposition. In some cases IP and copyright law is even such that content can’t be realistically archived and provided.
You could rsync between directories shared on the local network, like a Samba share or similar. It’s a bit slower than SSH, but for regular incremental backups you probably won’t notice any difference, especially when it’s supposed to run in the background on a schedule.
Alternatively, use a passwordless SSH key, as already suggested.
You can also write an rsync command, or at least a shell script, that copies all of your desired directories with one command rather than one per file.
I tried migrating my personal services to Docker Swarm a while back. I have a Raspberry Pi as a 24/7 machine but some services could use a bit more power so I thought I’d try Swarm. The idea being that additional machines which are on sometimes could pick up some of the load.
Two weeks later I gave up and rolled everything back to running specific services or instances on specific machines. Making sure the right data is available on all machines all the time, plus handling the networking between dependencies and, in some cases, specifying which service should prefer which machine, was far too complex and messy.
That said, if you want to learn Docker Swarm or Kubernetes and distributed filesystems, I can’t think of a better way.
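For anyone who does go down that road, the “which service prefers which machine” part is done with placement constraints in the stack file. A sketch, with made-up service and label names:

```yaml
services:
  heavy-app:
    image: example/heavy-app:latest   # hypothetical image
    deploy:
      placement:
        constraints:
          # label a node first: docker node update --label-add power=high <node>
          - node.labels.power == high
```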
I’d run it with Docker. The official documentation looks sufficient to get it up and running. I’d add a database backup to the stack as well, and save those backups to a separate machine.
A Pi 4 draws maybe 5 W most of the time, so 24/7 operation works out to roughly 44 kWh per year. That’s your running cost, not counting the Pi itself, your internet connection, and any time you spend on maintenance.
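What I mean by adding a backup to the stack is something like this, assuming the app is backed by Postgres (image names, credentials, and the dump schedule are all placeholders):

```yaml
services:
  app:
    image: example/app:latest        # whatever you're deploying
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - dbdata:/var/lib/postgresql/data
  backup:
    image: postgres:16
    environment:
      PGPASSWORD: changeme
    # daily pg_dump into a host directory; $$ escapes $ for compose
    entrypoint: >
      sh -c 'while true; do
        pg_dump -h db -U postgres postgres > /backups/dump-$$(date +%F).sql;
        sleep 86400;
      done'
    volumes:
      - ./backups:/backups
volumes:
  dbdata:
```

Then rsync `./backups` to the separate machine on a schedule.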
I didn’t even look to see if the one I linked was a fork. I’m glad it works!
A cool thing about Dockerfiles is that they’re usually architecture agnostic. I think the one I linked is as well, meaning that the architecture is only locked in when the image is built for a specific one. In this case the repo owner probably only built it for arm machines, but a build for x86_64 should work as well.
Building images is easy enough. It’s pretty similar to how you’d install or compile software directly on the host. Just write a Dockerfile that runs the hide.me install script. I found this repo and image which may work for you as is or as a starting point.
When you run the image as a container you can set it up as the network gateway; just find a tutorial on setting up a WireGuard container and replace WireGuard with your hide.me container.
In terms of kill switches you’d have to see how other people have done it, but it’s not impossible.
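To give an idea of the Dockerfile approach: this is only a sketch, assuming you drop the vendor’s install script into the build context yourself; the base image, script name, and binary name are all assumptions, not hide.me’s actual layout.

```dockerfile
# Sketch: wrap a CLI VPN client's install script in an image.
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y ca-certificates iproute2 \
 && rm -rf /var/lib/apt/lists/*
COPY install.sh /tmp/install.sh   # the vendor's install script, supplied by you
RUN sh /tmp/install.sh
ENTRYPOINT ["vpn-client"]         # hypothetical binary name
```

Since nothing here pins an architecture, `docker buildx build --platform linux/arm64,linux/amd64 .` should produce images for both the Pi and an x86_64 box.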
I started my Linux journey with a Raspberry Pi and Debian based PiOS four years ago and I haven’t felt the need to mess with that. Since then I have added other machines running other distros, but the Pi running PiOS is always on and always reliable.
Some BIOSes have a built-in update function that can update from a BIOS file stored on a connected flash drive.