Before nginx was a thing, I worked with a guy who forked Apache httpd and wrote this blog in C, like, literally embedded the HTML and CSS inside the server, so when he made a typo or added another post he had to recompile the source. The performance was out of this world.
There are a lot of solutions like that in Rust. You basically compile the template into your code.
yeah, templates can be parsed at compile time but these frameworks are not embedding whole fucking prerendered static pages/assets
They are nowadays. Compiling assets and static data into Rust and delivering the virtual DOM via WebSocket to the browser is the new cool kid in the corner.
Have a look at dioxus
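The core trick those frameworks build on is just baking the asset into the binary at compile time. A minimal std-only sketch of the idea (not how Dioxus itself works; the index.html file, the port, and the hand-rolled HTTP response are assumptions for illustration):

```rust
// Minimal sketch: embed a prerendered page into the executable at compile
// time and serve it, in the spirit of "compile the template into your code".
// `index.html` is assumed to sit next to this source file.
use std::io::Write;
use std::net::TcpListener;

// Contents are baked into the binary; changing the page means recompiling.
static PAGE: &str = include_str!("index.html");

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // Not a real HTTP server: we ignore the request and always answer 200.
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: {}\r\n\r\n{}",
            PAGE.len(),
            PAGE
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}
```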
This reminds me of one of my older projects. I wanted to learn more about network communications, so I started working on a simple P2P chat app. It wasn’t anything fancy, but I really enjoyed working on it. One challenge I faced was that, at the time, I didn’t know how to listen for user input while handling network communication simultaneously. So, after I had managed to get multiple TCP sockets working on one thread, I thought, why not open another socket for HTTP communication? That way, I could incorporate a fancy web UI instead of just a CLI interface.
So, I wrote a simple HTTP server, which, in hindsight, might not have been necessary.
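For what it's worth, the "multiple sockets on one thread" part can be done with plain non-blocking std sockets and a polling loop. A rough sketch, not the poster's actual code; the ports, the peer/HTTP split, and the busy-poll-with-sleep pacing are all assumptions (a real version would reach for mio/epoll or async):

```rust
// Sketch: one thread juggling a chat-peer listener and an HTTP listener
// using non-blocking sockets. Ports and behaviour are illustrative only.
use std::io::{ErrorKind, Read, Write};
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let peer_listener = TcpListener::bind("127.0.0.1:4000")?; // chat peers
    let http_listener = TcpListener::bind("127.0.0.1:8080")?; // web UI
    peer_listener.set_nonblocking(true)?;
    http_listener.set_nonblocking(true)?;

    let mut peers: Vec<TcpStream> = Vec::new();

    loop {
        // Accept new chat peers without blocking the loop.
        match peer_listener.accept() {
            Ok((stream, _)) => {
                stream.set_nonblocking(true)?;
                peers.push(stream);
            }
            Err(e) if e.kind() == ErrorKind::WouldBlock => {}
            Err(e) => return Err(e),
        }

        // Answer web UI requests on the same thread.
        match http_listener.accept() {
            Ok((mut stream, _)) => {
                let _ = stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok");
            }
            Err(e) if e.kind() == ErrorKind::WouldBlock => {}
            Err(e) => return Err(e),
        }

        // Drain whatever the peers sent since the last pass.
        let mut buf = [0u8; 1024];
        for peer in &mut peers {
            if let Ok(n) = peer.read(&mut buf) {
                if n > 0 {
                    print!("{}", String::from_utf8_lossy(&buf[..n]));
                }
            }
        }

        std::thread::sleep(Duration::from_millis(50)); // crude pacing
    }
}
```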
my website’s backend is made with bash, it calls make for every request and it probably has hundreds of remote arbitrary code execution bugs that will get me pwned someday, it’s great
edit: to clarify, it uses a rust program i made to expose the bash scripts as http endpoints, i’m not crazy enough to implement http in bash
it behaves like a static file server, but if a file has the others-execute permission bit set it executes the file instead of reading it
it’s surprisingly nice for prototyping since you can just write a cli program and it’s automatically available over http too
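A hedged sketch of what that dispatcher might look like: serve a file from a document root, but if its others-execute bit (o+x) is set, run it and return its stdout instead of its contents. The ./site root, the port, the skipped make step, and the complete absence of path sanitisation are assumptions, and that last one is roughly where the arbitrary-code-execution bugs come from:

```rust
// Sketch only: static file server that executes o+x files instead of
// reading them. No path sanitisation, no error handling beyond 200s.
use std::fs;
use std::io::{BufRead, BufReader, Write};
use std::net::TcpListener;
use std::os::unix::fs::PermissionsExt;
use std::process::Command;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // Read just the request line, e.g. "GET /hello HTTP/1.1".
        let mut line = String::new();
        BufReader::new(&stream).read_line(&mut line)?;
        let path = line.split_whitespace().nth(1).unwrap_or("/");
        let local = format!("./site{}", path); // assumed document root

        let body = match fs::metadata(&local) {
            // o+x set: execute the file and capture its stdout.
            Ok(meta) if meta.permissions().mode() & 0o001 != 0 => Command::new(&local)
                .output()
                .map(|o| o.stdout)
                .unwrap_or_else(|e| e.to_string().into_bytes()),
            // Otherwise behave like a static file server.
            Ok(_) => fs::read(&local).unwrap_or_else(|e| e.to_string().into_bytes()),
            Err(e) => e.to_string().into_bytes(),
        };

        let header = format!("HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n", body.len());
        stream.write_all(header.as_bytes())?;
        stream.write_all(&body)?;
    }
    Ok(())
}
```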
who hurt you?
i thought it was neat how php lets you write your website’s logic with the same directory tree pattern that clients consume it from, but i didn’t want to learn php so i made my own, worse version
These wounds appear to be self-inflicted.
I designed a chip architecture that runs bash code on silicon.
I reimplemented x86 assembly in purely bash script.
`set -e`, please, for the love of god, `set -e`
For my own sanity, I choose to believe you’re lying
This is false, you also need vim and tmux
Idk about you but I use echo and sed to edit my files.
Let’s just get this out of the way
Microsoft Word is the only text editor I need.
Just don’t call it with `/bin/sh`. Because that’s POSIX shell, not bash.
but effectively it’s bash, I think. `/bin/sh` is a symlink to bash on every system I know of… Edit: I feel corrected, thanks for the information; all the systems I used had a symlink to bash. Also it was not intended to recommend using bash functionality when having a `#!/bin/sh` shebang. As someone else pointed out, the recommendation would be `#!/bin/bash`, or `#!/bin/sh` if you know that you’re not using bash-specific functionality.
Still don’t do this. If you use bash-specific syntax with that shebang, that’s a bashism and causes issues for people using zsh, for example. Or with Debian/*buntu, which use dash as the default /bin/sh. Just use `#!/bin/bash`, or `#!/usr/bin/env bash` if you’re funny.
`#!/bin/bash` doesn’t work on NixOS since bash is in the nix store somewhere; `#!/usr/bin/env bash` resolves the correct location regardless of where bash is.
Are there any distros with `/usr/bin/env` in a different spot? I still believe that’s the best approach for getting bash.
All POSIX-compliant distros need `/usr/bin/env`.
I do think a simple symlink is superior to a tool parsing stuff. A shame POSIX chose this approach.
There’s still the issue that a POSIX shell can be on a non-POSIX system and vice versa. And certification versus actual practice. Btw, isn’t there only one POSIX-certified Linux distro? Was it SUSE?
What if, get this, we put the bash scripts in yaml. And then put it in kubernetes.