• 0 Posts
  • 296 Comments
Joined 10 months ago
Cake day: November 22nd, 2023






  • Another Millennial here, so take that how you will, but I agree. I think Gen Z is very tech literate, but only in specific areas, and that literacy doesn’t necessarily carry over to the broader competencies we mean when we say “tech savvy” - especially when you start talking about job skills.

    I think Boomers especially see anybody who can work a smartphone as some sort of computer wizard, while the truth is that Gen Z grew up with it and were immersed in the tech, so of course they’re good with it. What they didn’t grow up with was having to type on a physical keyboard and monkey around with the finer points of how a computer works just to get it to do the thing, so of course they’re not as skilled at it.


  • Because we’re talking pattern recognition levels of learning. At best, they’re the equivalent of parrots mimicking human speech. They take inputs and output data based on the statistical averages from their training sets - collaging pieces of their training into what they think is the right answer. And I use the word think here loosely, as this is the exact same process that the Gaussian blur tool in Photoshop uses.

    This matters in the context of the fact that these companies are trying to profit off of the output of these programs. If somebody with an eidetic memory tried to sell pieces of works they’ve consumed as their own - or even somebody copy-pasting bits from CliffsNotes - they should get in trouble, and the same goes for these companies.

    Given A and B, we can understand C. But an LLM will only ever give you AB, A(b), and B(a). And they’ve even been caught spitting out A and B wholesale, proving that they retain their training data and will regurgitate copyrighted material in its entirety.
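    To make the “statistical averages” point concrete, here’s a toy sketch of my own (an illustration, not how any production LLM is actually built): a bigram model trained on two sentences can only recombine pairs it saw, or regurgitate the training text verbatim.

```python
# Toy bigram "language model" - invented for illustration only.
# It learns which word statistically follows which, then generates
# text by sampling those continuations. With a tiny training set it
# can only collage the training sentences back together.
import random

def train_bigrams(text):
    """Map each word to the list of words that followed it in training."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the table, picking a previously seen continuation at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat . the dog sat on the rug ."
table = train_bigrams(corpus)

# Every output is stitched from word pairs seen in training - it can
# emit "the dog sat on the mat" (a collage) or a training sentence
# verbatim, but never a pair it hasn't seen.
print(generate(table, "the", 6))
```

    Every word of the output comes straight out of the training set - the model has no notion of cats, dogs, or mats, only of which token followed which.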



  • The argument that these models learn in a way that’s similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.

    And these things don’t learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I’ve gotten to the point where I can guess which model of image generator was used based on the same repeated mistakes they make every time. Take a look at any generated image, and you won’t be able to identify where the light source is because the shadows come from all different directions. These things don’t understand the concept of a shadow or lighting; they just know that statistically lighter pixels are followed by darker pixels of the same hue, and that some places have collections of lighter pixels.

    I recently heard about an AI that scientists had trained to identify pictures of wolves, and it was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn’t looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (i.e., snow) to determine whether a picture was of a wolf.
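    The wolf story is a textbook case of shortcut learning. Here’s a deliberately tiny sketch of the same failure mode - the data, the brightness-threshold “model,” and all the numbers are invented for illustration:

```python
# Shortcut learning in miniature - fake data and a fake "model",
# invented purely to illustrate the wolf/snow story above.
# "Images" are 4-pixel brightness lists; every wolf was photographed
# against bright snow, every husky against a darker background.

def mean_brightness(img):
    return sum(img) / len(img)

def train_threshold(images, labels):
    """Pick the brightness cutoff that best separates the two classes."""
    best_cut, best_acc = 0.0, 0.0
    for cut in [i / 10 for i in range(11)]:
        preds = [mean_brightness(img) > cut for img in images]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

wolves = [[0.9, 0.8, 0.9, 0.7], [0.8, 0.9, 0.8, 0.9]]    # snowy = bright
huskies = [[0.2, 0.3, 0.1, 0.4], [0.1, 0.2, 0.3, 0.2]]   # no snow = dark
images = wolves + huskies
labels = [True, True, False, False]  # True = wolf

cut = train_threshold(images, labels)  # perfect accuracy on training data

# But the "model" never looked at the animal - a husky photographed
# in snow gets labeled a wolf:
husky_in_snow = [0.9, 0.8, 0.9, 0.8]
print(mean_brightness(husky_in_snow) > cut)  # True
```

    The classifier scores perfectly on its training set while having learned nothing about wolves - exactly the trap the researchers found.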



  • Yep, they literally cannot work any other way than as a Ponzi scheme. The people “earning” want to take more money out of the system than they put in, and the company is taking money out as well - to keep the game running, pay the employees, and turn a profit. So the money flowing in from new suckers has to substantially exceed the money being paid out.

    Eventually, somebody is gonna be left holding an empty bag.
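    The arithmetic is easy to sketch. Assuming made-up rates - every player expects 1.2x their deposit back and the operator skims 10% of every deposit - the required inflow of new money compounds every round:

```python
# Back-of-the-envelope Ponzi math - the rates are invented for
# illustration. If every player expects 1.2x their deposit back and
# the operator skims 10% of every new deposit, then each round's
# payouts can only come from an ever-larger pool of new money.

EXPECTED_RETURN = 1.2  # what last round's players want back, per dollar in
OPERATOR_CUT = 0.10    # the company's skim on every new deposit

def inflow_needed(prev_inflow):
    """New deposits required to cover the previous round's payouts."""
    owed = prev_inflow * EXPECTED_RETURN   # what players expect to withdraw
    return owed / (1 - OPERATOR_CUT)       # gross deposits needed after the skim

inflow = 1000.0
for round_no in range(1, 6):
    inflow = inflow_needed(inflow)
    print(round_no, round(inflow))  # grows ~33% every round
```

    Required inflows grow geometrically, so the moment recruitment stalls, the last round of buyers is the one left holding the bag.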


  • I’m reminded of a comment I saw once where somebody was saying how when they were young, they were told that AI would do the miserable jobs so that people would have more time to make art and poetry, while today the AI makes art and poetry so that we can work longer hours at the miserable jobs.

    And the AI bros say that this is just a necessary step towards automating away the crappy jobs, but it’s not like they’ll stop automating everything else if/when AI reaches that point. The AI will still continue to automate away the human experience of art and culture for the rich. They’re not going to suddenly decide to implement Luxury Gay Space Communism at that point. They’ll just cram everybody into Kowloon-style ghettos.


  • So the way Tumblr works is that your account is basically a blog, with your home page on the site being populated with posts from the accounts that you follow. You can reblog posts onto your own account and comment on them to create individual conversation threads like this one. At one point, there was a bug in the edit post system that let you edit the entirety of a post when you reblogged it, including what other people had said previously, and even the original post. This would only affect your specific reblog of it, of course, but you could edit a post to say something completely different from the original and create a completely unrelated comment chain.
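    A rough sketch of why the bug only affected your own reblog (my own illustration, not Tumblr’s actual data model): each reblog carries its own copy of the post and thread, so edits to that copy never propagate back.

```python
# Why the edit bug only changed one reblog - a toy model, not
# Tumblr's actual data structures. Each reblog snapshots the post
# and its comment thread, so edits to the snapshot stay local.
import copy

original = {"author": "alice", "body": "original post", "thread": []}

# Reblogging copies the post (and any prior comments) into a new object.
reblog = copy.deepcopy(original)
reblog["thread"].append({"author": "bob", "comment": "nice post"})

# The buggy editor let you rewrite anything in the snapshot,
# including the original post's text...
reblog["body"] = "something completely different"

# ...but only on that one reblog; the original is untouched.
print(original["body"])  # original post
print(reblog["body"])    # something completely different
```

    Because each reblog owns a deep copy, two people reblogging the same post could end up hosting completely unrelated conversations.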




  • I just bought a Forester a few months ago, and my two stipulations on the cars I was looking at were all-wheel drive, because I live in snow country, and a model year no newer than 2018 (IIRC), because that was the year car companies largely switched from manual controls to a 16-inch screen with everything, including climate control, accessed from an app.

    When I was talking to the guy at the dealership I bought it from and mentioned how much I disliked the new screens, he outright said, “Yeah, a lot of people don’t like them.”



  • Tell me you don’t know what you’re talking about without telling me.

    You don’t need to hit the bullet with a bullet. You just hit it with a shotgun blast or grenade, either destroying it outright or blowing it off course enough that it loses its energy and becomes ineffective. We literally do this all the time on tanks and Humvees. It’s called a hardkill APS (active protection system). The Russians had one working in the 70s. Modern ones are capable of detecting incoming tank rounds moving between 700-1700 m/s, identifying which will hit the vehicle, and blowing them out of the air once they reach 10-15 meters away - all in a span of milliseconds. It’s standard equipment on Israel’s MBT, and Germany, the US, and the UK have all field-tested various systems and are considering making hardkill systems standard for the next generation of tanks and IFVs. Multiple companies across multiple countries make them as upgrade kits. Germany already produces vehicles with a standard hardkill APS for its export market.

    This isn’t crazy sci-fi technology. It’s just rocket science.
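    For a sense of scale, here’s a back-of-the-envelope calculation using the round speeds and intercept distances above - the reaction window works out to a handful of milliseconds:

```python
# Back-of-the-envelope timing for a hardkill intercept, using the
# round speeds (700-1700 m/s) and intercept distances (10-15 m)
# quoted above. Nothing here models a real APS.

def time_to_impact(distance_m, speed_mps):
    """Seconds for a round at constant speed to cover the last stretch."""
    return distance_m / speed_mps

fastest = time_to_impact(10, 1700)  # fastest round, closest intercept
slowest = time_to_impact(15, 700)   # slowest round, farthest intercept

print(f"reaction window: {fastest * 1000:.1f} to {slowest * 1000:.1f} ms")
# prints: reaction window: 5.9 to 21.4 ms
```

    Tight, but well within reach of radar and a fixed-function fire-control computer - which is why these systems have been feasible since the analog era.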