A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.

  • Naval Ravikant
  • 0 Posts
  • 43 Comments
Joined 1 month ago
Cake day: January 30th, 2025

  • By free will I mean the ability to have done otherwise. This, I argue, is an illusion. Whatever the reason is that makes one choose A rather than B will make them choose A over and over again, no matter how many times we rewind the universe and try again. Whatever compelled you to make that choice remains unchanged, and you’d choose the same thing every time. There’s no freedom in that.

    I also don’t see a reason why humans would be unique in that sense. If we have free will then what leads you to believe that other animals don’t? If they can live normal lives without free will, then surely we can too, right?

    I don’t know where our curiosity or the desire to help the less fortunate comes from. Genes and environmental factors, most likely. That’s why cultural differences exist too. If we all just freely chose our likes and not-likes, then it’s a bit odd that people living in the same country have similar preferences while people on the other side of the world differ significantly.

    Also, have you read about split-brain experiments? When the corpus callosum is severed, preventing the two brain hemispheres from communicating with each other, we can, with some clever tricks, interview each hemisphere separately. The finding is that they tend to have vastly different preferences. Which hemisphere is “you”?






  • That’s not my argument at all. I never said an algorithm is AI just because it has many steps. The key difference isn’t complexity - it’s the nature of what the algorithm does. A Tic-Tac-Toe AI can be extremely simple yet still counts as AI, whereas something like a game physics engine is extremely complex yet doesn’t simulate intelligence, just physics. Bubble sort follows a fixed sequence with no decision-making. A chess engine, on the other hand, evaluates different moves, predicts outcomes, and optimizes decisions based on a strategy. That’s not just ‘many steps’ - it’s a process of selecting the best action based on the current situation, as in the sketch below. If you think my argument is about complexity rather than decision-making, you’ve misunderstood my point.
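
    To make the distinction concrete, here’s a rough sketch in Python (the function names are hypothetical and only illustrate the contrast between a fixed sequence of steps and picking an action by evaluating candidates):

        def bubble_sort(items):
            # Fixed sequence: the same comparisons run regardless of any goal.
            items = list(items)
            for i in range(len(items)):
                for j in range(len(items) - 1 - i):
                    if items[j] > items[j + 1]:
                        items[j], items[j + 1] = items[j + 1], items[j]
            return items

        def choose_move(state, legal_moves, evaluate):
            # Decision-making: score each candidate move against the current
            # state and pick whichever looks best - the pattern a game AI uses.
            return max(legal_moves, key=lambda move: evaluate(state, move))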







  • pretending LLMs are AI

    LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.

    However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.


  • Third, it would need free will.

    I strongly disagree there. I argue that not even humans have free will, yet we’re generally intelligent, so I don’t see why AGI would need it either. In fact, I don’t even know what true free will would look like. There are only two reasons why anyone does anything: either you want to or you have to. There’s obviously no freedom in having to do something, but you can’t choose your wants and not-wants either. You helplessly have the beliefs and preferences that you do. You didn’t choose them, and you can’t choose to not have them either.