So they’ll just start shoveling wood into coal power plants in 2030 and unfurl the “Mission Accomplished” banner.
From my reading, Hudson’s Superimperialism is more an extension of Lenin’s Imperialism, based on how material conditions had evolved over the intervening fifty years and the lessons learned from (at initial publication) the first generation or so of US dollar hegemony. To simplify it maybe too much, it adds a monetary dimension to the already established framework of finance capital as the driving force behind imperialism.
Superimperialism is indeed the same English term often used for Kautsky’s Überimperialismus hypothesis. Yet apart from the initial parallel of a global cartel, i.e. dollar hegemony, I don’t see much of Kautsky’s thinking represented in Hudson’s work, but I’m also not terribly familiar with Überimperialismus.
For an actual explanation of what happened in 1971, economically and monetarily at least, go ahead and read Michael Hudson’s Superimperialism and Global Fracture. Superimperialism was so prescient at its original publication that the US government itself used the book and its theory as a manual on how to be better superimperialists, hiring Hudson as a consultant right back around 1971.
I won’t comment on the fascist economics presented in the linked website.
Between basically every process being done on paper, and most of the civil servants having no idea what an operating system is, I’m sure this will go great.
It’s kinda standard, but Pi-hole is how I got into the general realm of home labbing.
Political means more than just parties and institutions of government. Society and the economy are inherently political. Who owns what is produced, and the tools used to produce it, is inherently political. Therefore software development, just like any other type of work or economic interaction, is political.
I like btop. It’s pretty. I just use it for checking resource usage, I rarely have the need to kill a process or anything else one may do with a system monitor.
You could rsync to directories shared on the local network, like a Samba share or similar. It’s a bit slower than ssh, but for regular incremental backups you probably won’t notice any difference, especially when it’s supposed to run in the background on a schedule.
Alternatively use a non-password protected ssh key, as already suggested.
You can also write an rsync command, or at least a shell script, that copies all of your desired directories in one go rather than running one command per directory.
I tried migrating my personal services to Docker Swarm a while back. I have a Raspberry Pi as a 24/7 machine but some services could use a bit more power so I thought I’d try Swarm. The idea being that additional machines which are on sometimes could pick up some of the load.
Two weeks later I gave up and rolled everything back to running specific services or instances on specific machines. Making sure the right data is available on all machines all the time, plus the networking between dependencies and in some cases specifying which service should prefer which machine was far too complex and messy.
That said, if you want to learn Docker Swarm or Kubernetes and distributed filesystems, I can’t think of a better way.
I’d run it with Docker. The official documentation looks sufficient to get it up and running. I’d add a database backup to the stack as well, and save those backups to a separate machine.
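As a rough sketch of what that stack could look like with Compose, here's one shape for it; the app image, credentials, and paths are placeholders, not anything from the project’s docs:

```yaml
services:
  app:
    image: example/app:latest        # placeholder for the actual service image
    depends_on: [db]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - dbdata:/var/lib/postgresql/data
  db-backup:
    image: postgres:16               # reuse the image just for pg_dump
    environment:
      PGPASSWORD: changeme
    entrypoint: >
      sh -c 'while true; do
        pg_dump -h db -U postgres postgres > /backups/dump-$$(date +%F).sql;
        sleep 86400; done'
    volumes:
      - ./backups:/backups           # rsync/copy this dir to the separate machine
volumes:
  dbdata:
```

The backup sidecar just dumps the database daily into a bind-mounted directory, which you can then ship off-box however you like.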
A Pi 4 draws maybe 5 W most of the time. Running it 24/7 at 5 W works out to roughly 44 kWh per year, and that’s your main running cost, not counting the price of the Pi itself, your internet connection, and any time you spend on maintenance.
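The arithmetic, if you want to plug in your own wattage and tariff (the 0.30/kWh rate below is just an example figure):

```shell
# annual energy = watts * 24 h * 365 days / 1000 = kWh per year
awk 'BEGIN {
    watts = 5
    kwh   = watts * 24 * 365 / 1000   # 43.8 kWh/year
    rate  = 0.30                      # example electricity price per kWh
    printf "%.1f kWh/year, %.2f per year\n", kwh, kwh * rate
}'
```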
I didn’t even look to see if the one I linked was a fork. I’m glad it works!
A cool thing about Dockerfiles is that they’re usually architecture agnostic. I think the one I linked is as well, meaning that the architecture is only locked in when the image is built for a specific one. In this case the repo owner probably only built it for arm machines, but a build for x86_64 should work as well.
Building images is easy enough. It’s pretty similar to how you’d install or compile software directly on the host. Just write a Dockerfile that runs the hide.me install script. I found this repo and image which may work for you as is or as a starting point.
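For a rough idea, a bare-bones Dockerfile along those lines might look like this; the base image, package list, script name, and entrypoint are all guesses rather than anything from hide.me’s docs, so treat it as a starting point:

```Dockerfile
FROM debian:bookworm-slim

# Basic tooling a VPN client typically needs (assumed; adjust as required)
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl iproute2 \
    && rm -rf /var/lib/apt/lists/*

# Hypothetical: copy in and run hide.me's install script
COPY install.sh /tmp/install.sh
RUN sh /tmp/install.sh

# Placeholder entrypoint; use whatever binary the install script provides
ENTRYPOINT ["hide.me"]
```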
When you run the image as a container you can set it up as the network gateway, just find a tutorial on how to set up a Wireguard container and replace Wireguard with your hide.me container.
In terms of kill switches you’d have to see how other people have done it, but it’s not impossible.
I started my Linux journey with a Raspberry Pi and Debian based PiOS four years ago and I haven’t felt the need to mess with that. Since then I have added other machines running other distros, but the Pi running PiOS is always on and always reliable.
The French had a pretty good way of shutting up insufferable rich asshats.
Memory is fine. I ran a couple disk checks as well and it’s also fine. I was also using two SSDs during the process with no difference in the problems experienced.
The rules are purposefully vague and interpreted to fit the particular political motives of the day.
The sites I’m thinking of never had their IPs completely blocked, the DNS entries for the domains were just removed. If you were to switch to a non-EU or self-hosted DNS server you’d get to the site.
But the domains in question are generally ones the US/EU/NATO propaganda machine has told people are bad, so there’s no outrage when they’re blocked; in many cases there are even cheers.
I linked the specific wiki page section in an edit to the main post. It’s in the troubleshooting part at the end.
I didn’t try the i8k module, but from a bit of searching it looks like the issue was more apparent around Linux kernel 4.15, a few years ago. I also don’t have any specific complaints about temperature control. The fans only ramp up in the 70–80 °C range, which seems quite reasonable.
The RAM is fine (Memtest ran 4 times without faults), and cooling seems to work well enough. Storage is ok and I used two different SSDs through this whole process and saw the same problems on both.
I tried the previous known-good kernel options on the Manjaro install and it seems to be OK now. According to the Arch Wiki, the Intel 8th Gen mobile CPUs and especially their iGPUs are known to be a little problematic on Linux, so the kernel parameters that disable certain power-saving features are basically non-optional. It’s weird though that it works now and didn’t on the Tumbleweed reinstall.
Doesn’t matter which team nominated him. A spook is a spook, anything they touch is gonna get spookified (not that any product from Silicon Valley isn’t already a fancy surveillance and propaganda system).