A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.

  • Naval Ravikant
  • 0 Posts
  • 41 Comments
Joined 1 month ago
Cake day: January 30th, 2025

  • That’s not my argument at all. I never said an algorithm is AI just because it has many steps. The key difference isn’t complexity - it’s the nature of what the algorithm does. A Tic-Tac-Toe AI can be extremely simple yet still count as AI, whereas something like a game physics engine is extremely complex yet simulates physics, not intelligence. Bubble sort follows a fixed sequence with no decision-making. A chess engine, on the other hand, evaluates different moves, predicts outcomes, and optimizes decisions based on a strategy. That’s not just ‘many steps’ - it’s a process of selecting the best action given the current situation. If you think my argument is about complexity rather than decision-making, you’ve misunderstood my point.
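    To make the contrast concrete, here is a minimal sketch of that kind of decision-making: minimax for Tic-Tac-Toe. Unlike bubble sort, which performs the same fixed sequence of comparisons regardless of outcome, this evaluates each candidate move, predicts the result by recursion, and selects the best action for the current board. All names here are illustrative, not taken from any particular engine.

    ```python
    # Winning lines on a 3x3 board stored as a flat list of 9 cells.
    WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        """Return 'X' or 'O' if that player has three in a row, else None."""
        for a, b, c in WIN_LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move) from `player`'s view: +1 win, -1 loss, 0 draw."""
        w = winner(board)
        if w is not None:
            return (1 if w == player else -1), None
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            return 0, None  # board full: draw
        best_score, best_move = -2, None
        opponent = 'O' if player == 'X' else 'X'
        for m in moves:
            board[m] = player
            score, _ = minimax(board, opponent)  # predict the opponent's reply
            board[m] = ' '                       # undo the trial move
            score = -score                       # opponent's gain is our loss
            if score > best_score:
                best_score, best_move = score, m
        return best_score, best_move

    # X has two in a row (cells 0 and 1): the search picks the winning cell, index 2.
    board = list('XX O O   ')
    score, move = minimax(board, 'X')
    ```

    The point isn’t the line count - it’s that the output depends on an evaluation of alternatives, which is exactly the property bubble sort lacks.
    
    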


  • pretending LLMs are AI

    LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to that of humans.

    However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.


  • Third, it would need free will.

    I strongly disagree there. I argue that not even humans have free will, yet we’re generally intelligent, so I don’t see why AGI would need it either. In fact, I don’t even know what true free will would look like. There are only two reasons why anyone does anything: either you want to or you have to. There’s obviously no freedom in having to do something, but you can’t choose your wants and not-wants either. You helplessly have the beliefs and preferences that you do. You didn’t choose them, and you can’t choose not to have them either.