“We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input.”

  • Shurimal@kbin.social · 10 months ago

    What we know about drones is that they have cameras that can discern individuals from 10 km altitude.

    What we suspect is that the US has Hubble-sized spy satellites that can do almost the same. There were a lot of classified military STS (Space Shuttle) missions.

    What is theoretically possible is that US drones and spy sats can function as very large arrays (we already do this with astronomical telescopes, via interferometry) to dramatically increase spatial resolution.
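A back-of-the-envelope sketch of why aperture size (or array baseline) drives resolution, using the Rayleigh criterion. All the numbers here are illustrative assumptions (550 nm visible light, a Hubble-class 2.4 m mirror, a ~250 km orbit, and a hypothetical 100 m baseline between two platforms), not claims about any actual system:

```python
def diffraction_limit_rad(wavelength_m, aperture_m):
    # Rayleigh criterion: smallest resolvable angle for a circular aperture
    return 1.22 * wavelength_m / aperture_m

def ground_resolution_m(wavelength_m, aperture_m, altitude_m):
    # Small-angle approximation: smallest feature resolvable on the ground
    return diffraction_limit_rad(wavelength_m, aperture_m) * altitude_m

GREEN = 550e-9  # visible light, ~550 nm

# Single Hubble-sized 2.4 m mirror in low Earth orbit (~250 km)
single = ground_resolution_m(GREEN, 2.4, 250e3)

# Hypothetical two-platform array flying 100 m apart: in the ideal
# interferometric case, resolution scales with the baseline, as if the
# aperture were as wide as the separation
array = ground_resolution_m(GREEN, 100.0, 250e3)

print(f"2.4 m mirror:   {single:.3f} m")  # roughly 7 cm
print(f"100 m baseline: {array * 1000:.2f} mm")
```

The point of the sketch is the ratio: widening the effective aperture from 2.4 m to 100 m shrinks the resolvable feature size by the same factor of ~40, which is why combining platforms into an array is attractive.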

    • SeducingCamel@lemm.ee · 10 months ago

      Oh, I wonder if that's how the picture Trump tweeted of that rocket launch site was taken; people didn't think it was physically possible for a satellite to have that resolution.

      • Shurimal@kbin.social · 10 months ago

        It all comes down to the size of the mirror/lens: the bigger, the better, up to a point. The biggest problem is air currents and pockets of different air density refracting light and distorting the image. That's what the laser beams are for in photos of astronomical observatories: they create a reference light spot (an artificial guide star) that can be used to calibrate the adaptive optics to current atmospheric conditions, reducing distortion.
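To illustrate the "up to a point" part: without adaptive optics, atmospheric turbulence caps angular resolution at roughly wavelength / r0, where r0 (the Fried parameter) is typically on the order of 10 cm at a good site. Past that, a bigger mirror gains nothing until the distortion is corrected. A minimal sketch, with 550 nm light and r0 = 10 cm as illustrative assumptions:

```python
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def effective_resolution_arcsec(aperture_m, wavelength_m=550e-9, fried_r0_m=0.10):
    """Best angular resolution through turbulence, without adaptive optics.

    Turbulence limits resolution to about wavelength / r0 (the 'seeing'),
    as if the aperture were only r0 wide, no matter how big the mirror is.
    """
    diffraction = 1.22 * wavelength_m / aperture_m  # Rayleigh criterion
    seeing = wavelength_m / fried_r0_m              # turbulence floor
    return max(diffraction, seeing) / ARCSEC

for d in (0.1, 1.0, 8.0):
    print(f"D = {d:>4} m  ->  {effective_resolution_arcsec(d):.3f} arcsec")
```

Running this shows the resolution flatlining around ~1 arcsecond once the mirror exceeds r0; the laser guide star exists precisely so adaptive optics can measure and cancel the turbulence and recover the diffraction limit of the full aperture.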