• 0 Posts
  • 197 Comments
Joined 1 year ago
Cake day: July 23rd, 2023




  • It’s very misleading to say “paying for software is stupid” and not consider the total cost of ownership. TCO includes things like infrastructure and maintenance. As an exec, I am constantly faced with two choices: free software that might do what I want or paid software that sort of does what I want. At face value, you would immediately tell me to get the free stuff. That’s where you miss TCO.

    (Read the last paragraph if you think the business lens is bullshit)

    Every FOSS solution I run requires me to deploy and maintain it. I only have so many hours in the day, so at some threshold I have to hire more and more people to deploy and maintain. Integrating? That’s on me too, because I’m using free software, so now I need a resource to glue things together. My “free” option actually costs a portion of my engineering resources. I’m also on the hook for failures. Running my own ERP? I need support staff on-call to handle outages.

    Every paid solution I run costs money and can still require some of those things. Let’s ignore paid licenses and just focus on things I can completely outsource. This means I’m no longer on the hook for deployment and maintenance, so if I can show the cost of the paid software is less than my TCO, it’s a better deal (there’s a toy version of that math at the end of this comment). If I have a good relationship with the vendor, I might be able to delegate my integration needs to their product pipeline. I might be able to purchase a support contract that’s cheaper than running my own.

    At some point every company will outgrow certain software. It’s a constant reevaluation of the cost of paid software vs the TCO of free, plus whether I need to spend resources making either do something it doesn’t. A managed telemetry stack like Sumo or New Relic lets me scale quickly and cheaply until I have the revenue to build an in-house team to instrument fucking everything.

    The exact same logic applies to my time. I could run free everything. That comes with a higher TCO (usually). I say this as someone who has rebuilt dotfiles repos on the dot every three years and has been running Linux since you could get it in a book at B Dalton at the indoor shopping mall, so my tolerance for personal TCO is very high. However, I don’t change my own oil. It’s free! I could do it myself! I don’t want to. I buy certain things, like software, in my personal life because the TCO of FOSS is higher than I want to pay. I have outgrown Windows and Mac, so there’s some level of required cost in Linux for me. I pay for some things like storage and routing solutions even though I could build and deploy and maintain all of that myself. Sometimes I just want my shit to work and not have to do it myself.
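    A rough back-of-the-envelope version of that paid-vs-free comparison. Every number below is invented purely for illustration, not a claim about real FOSS or vendor pricing:

    ```python
    # Toy TCO comparison -- all figures are made up for illustration.
    HOURLY_ENG_COST = 85  # assumed fully-loaded cost of one engineering hour

    def tco_self_hosted(deploy_hours, maintain_hours_per_month, oncall_hours_per_month, months=12):
        """'Free' software still costs engineering time to deploy, maintain, and support."""
        return HOURLY_ENG_COST * (
            deploy_hours + (maintain_hours_per_month + oncall_hours_per_month) * months
        )

    def tco_paid(license_per_month, integration_hours, months=12):
        """Paid/outsourced software: license fees plus whatever glue work the vendor won't do."""
        return license_per_month * months + HOURLY_ENG_COST * integration_hours

    free = tco_self_hosted(deploy_hours=80, maintain_hours_per_month=20, oncall_hours_per_month=10)
    paid = tco_paid(license_per_month=1500, integration_hours=40)

    print(f"self-hosted 'free' TCO: ${free:,}")  # $37,400 with these made-up numbers
    print(f"paid/outsourced TCO:    ${paid:,}")  # $21,400
    print("paid wins" if paid < free else "free wins")
    ```

    Flip the invented numbers around (tiny license fee, trivial maintenance) and the free option wins; the only point is that the comparison has to include the engineering hours, not just the license line item.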




  • Let’s assume you’re arguing in good faith here so we can understand why land deeds and URLs are completely different.

    Deeds are managed by a central authority. There are agreed-upon ways to view and search those deeds. There is a single authority to update or remove deeds. The items a deed refers to are also controlled by a single authority, and changing them has a single process.

    URLs are registered (loosely) with a central authority, but the similarities end there. I can impersonate a URL on a network (even across large chunks of the internet if I’m able to confuse DNS in a large enough attack). So just because you’ve bought the domain referenced in the blockchain and set up some name servers doesn’t mean any consumer of the blockchain, or even of the internet, is guaranteed to hit your instance of the domain. All a URL is is a reference to something, so let’s assume for a minute we can have a global reference. What’s behind it? Again, completely uncontrolled. For now it could be your NFT; what happens if I am your hosting provider and destroy your instance? Move your hardware? What’s to prevent you, the owner of the assumed global reference, from changing what that uniform resource locator is actually locating? (There’s a quick sketch of this at the end of this comment.)

    Land deeds and URLs are not analogous. Land and the content served at a URL are not analogous. Let’s look at NFTs quickly to see if we can actually do something about this!

    Since we have a write-once, read-only database, why not store the full thing in the DB? Well, first you have to agree on a representation. It has to be unchanging, so we can’t use a URL. It can’t ever duplicate, so realistically hashing is out (unless our hash provides a bijection, which is just a fancy way of saying use the fucking object itself). Assuming we’re only talking about digital artifacts (attempting to digitize a physical asset is a form of hashing, meaning we get collisions, so you can’t prove ownership), we’re now in an arms race for you to register all of your assets and their serialization methods before I brute force everything. Oh, and this needs to live everywhere so it can be public, so you need peta-many petabyte drives. But wait! Now we’re burning the sun in power just to show you have ownership of 10 and I have ownership of 01. Fuck me, that’s dumb.
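    To make the URL problem concrete, here’s a minimal sketch. The record layout, address, and URL are all hypothetical, not any real NFT standard; the point is that the chain stores a reference while whoever controls the host controls what it resolves to:

    ```python
    import hashlib

    # Hypothetical token record: the chain only ever holds this little dict.
    nft_record = {
        "token_id": 42,
        "owner": "0xdeadbeef",                    # made-up address
        "uri": "https://example.com/ape.png",     # a string, not the artwork
    }

    def serve(host_storage, uri):
        """Stand-in for whatever the hosting provider returns for that URI today."""
        return host_storage[uri]

    storage = {"https://example.com/ape.png": b"original image bytes"}
    before = hashlib.sha256(serve(storage, nft_record["uri"])).hexdigest()

    # The host (or anyone who can poison DNS for enough consumers) swaps the content.
    storage["https://example.com/ape.png"] = b"something else entirely"
    after = hashlib.sha256(serve(storage, nft_record["uri"])).hexdigest()

    print(before != after)   # True: what the URL locates has changed
    print(nft_record)        # unchanged: the "immutable" record never noticed
    ```

    Pinning the content instead of the location means putting its hash (or the bytes themselves) on-chain, which is exactly the storage/collision/registration arms race described above.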



  • There is literally no way to opt out of Google’s data collection if you are going to use their products. Using another frontend shifts the data profile, but it still exists and provides value to them. It’s reasonable to say it’s a bad thing. It’s unreasonable to say there are no other ways. I grew up in a public library, and I can still get most of the information I need from a public library without Google products (things I can’t get usually come through inter-library loan or direct connections with subject matter experts at, say, a maker space). This seems to be less of an “I’m against invasive corporations” and more of an “I don’t like the solutions available to avoid invasive corporations.”



  • I pay for YouTube Family. I consume a lot of YouTube and I want to support the creators I watch. At its current price point, YouTube Family is reasonable: several households in my family get ad-free YouTube for what works out to a pretty low cost per household.

    If the price goes up much (e.g. if I were paying the single price of $11 per household), if the creators I really enjoy continue to get pushed out or change content because of shitty ad rules, or if they pull the whole “must be in the same household” bullshit, I would drop it in a heartbeat, just like I’ve dropped most streaming providers. Streaming has become cable, and YouTube has been shooting itself in the foot by forcibly changing content for advertisers. I come to the platform for content, not advertisers.





  • Speaking from 10+ YoE developing metrics, dashboards, uptime, all that shit, and another 5+ on top of that at an exec level managing all of it, this is bullshit. There is a disconnect between the automated systems that tell us something is down and the people that want to tell the outside world something is down. If you are a small company, there’s a decent chance you’ve launched your product without proper alerting and monitoring, so you have to manually manage outages. If you are GitHub or AWS size, you know exactly when shit hits the fan, because you have contracts that depend on that and you’re going to need some justification for downtime. Assuming a healthy environment, you’re doing a blameless postmortem, but you’ve done millions of those at that scale, and part of resolving them is making sure you know before it happens again. Internally you know when there is an outage; exposing that externally is always about making yourself look good, not customer experience.

    What you’re describing is the incident management process. That also doesn’t require management input, because you’re not going to wait for some fucking suit to respond to a Slack message. Your alarms have severities that give you agency (there’s a sketch of that at the end of this comment). Again, at a small business, sure, you might not have that, but at large scale, especially with anyone holding anything like a SOC 2, you have procedures in place and you’re stopping the bleeding. You will have some level of leadership that steps in and translates what the individual contributors are doing into business speak; that doesn’t prevent you from telling your customers shit is fucked up.

    The only time a company actually needs to properly evaluate what’s going on before announcing is a security incident. There’s a huge difference between “my honeypot blew up” and “the database in this region is fucked so customers can’t write anything to it; they probably can’t use our product.” My honeypot blowing up might be an indication I’m fucked, or it might just mean the attackers blew up the honeypot and nothing else. Can’t send traffic to a region? There’s literally no way the customer can either, so why am I not telling them?

    I read your response as either someone who knows nothing about the field or someone on the business side who doesn’t actually understand how single panes of glass work. If that’s not the case, I apologize. This is a huge pet peeve for basically anyone in the SRE/DevOps space who consumes these shitty status pages.
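    Here’s roughly what “severities give you agency” looks like; the severity scale, action names, and routing table are all hypothetical, not any specific vendor’s API:

    ```python
    from dataclasses import dataclass

    # Hypothetical severity-driven routing: the runbook fires automatically,
    # no suit has to approve paging the on-call or opening the incident.

    @dataclass
    class Alert:
        name: str
        severity: str  # "sev1" .. "sev4", made-up scale

    RUNBOOK = {
        "sev1": ["page_oncall", "open_incident", "draft_public_status_update"],
        "sev2": ["page_oncall", "open_incident"],
        "sev3": ["notify_team_channel"],
        "sev4": ["log_for_weekly_review"],
    }

    def handle(alert: Alert) -> list[str]:
        """Return the actions the alert triggers on its own, with no human gate."""
        return RUNBOOK.get(alert.severity, ["log_for_weekly_review"])

    print(handle(Alert("regional-db-write-errors", "sev1")))
    # ['page_oncall', 'open_incident', 'draft_public_status_update']
    ```

    The telling part is that last action: drafting the public update can be automated just as easily as paging someone. Whether it actually goes out is a policy choice, not a technical limitation.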


  • This is a common problem. The same thing happens with AWS outages too. Business people get to manually flip the switches here; the public status page is completely divorced from proper monitoring. An internal alert triggers, engineers start looking at it, and only when someone approves publishing the outage does it actually appear on the status page (rough sketch below). Outages for places like GitHub and AWS are tied to SLAs that are tied to payouts or discounts for huge customers, so there’s an immense incentive not to declare an outage even though everything is on fire. I have yelled at AWS, GitHub, Azure, and a few smaller vendors for this exact bullshit. One time we had a Textract outage for over six hours before AWS finally decided to declare one. We were fucking screaming at our TAM by the end, because no one in our collective networks could use it, but they refused to declare an outage.
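    A minimal sketch of that gap, with invented names; the only point is the human approval sitting between detection and the public page:

    ```python
    from datetime import datetime, timezone

    # Invented sketch: monitoring already knows, but the public page only moves
    # when somebody on the business side signs off.

    class StatusPage:
        def __init__(self):
            self.public_state = "All systems operational"

        def publish(self, incident, approved_by=None):
            if approved_by is None:
                return self.public_state  # still green; the SLA clock isn't running
            self.public_state = f"Degraded: {incident['summary']}"
            return self.public_state

    incident = {
        "summary": "document-OCR API failing region-wide",  # illustrative, not real vendor telemetry
        "detected_at": datetime.now(timezone.utc),
    }

    page = StatusPage()
    print(page.publish(incident))                           # engineers are paged; customers see green
    print(page.publish(incident, approved_by="VP, Comms"))  # hours later, the page finally flips
    ```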