• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: November 21, 2024

  • It’s certainly a convenient place to lay the blame. Makes it real easy to tell flattering narratives. No need to examine what role the party played, since clearly they’re doing what they must to get their candidate elected. Why should they carry any blame? They voted for Harris, after all! Surely, pouring millions of dollars into candidates who don’t resonate with people, and who are unwilling to push the needle against the direction conservatives are pulling it, has nothing to do with their consistent messaging that people should just settle for the options they’re presenting the country with?

    Clearly, the DNC has some serious misunderstanding of the electorate, given its choices over the past quarter century. But something tells me they’re gonna roll the dice on “we’re your only option” again and act surprised when that isn’t enough to garner support rather than lukewarm acceptance. Maybe if they really hammer how little they’ll do to offset the damage Republicans have done over the past half century, and tell us they just want to get back to “working across the aisle” on “business as usual”, it’ll actually work this time!




  • chaonaut@lemmy.4d2.org to Lemmy Shitpost@lemmy.world · 5 tomatoes · edited 3 months ago

    Because there’s an extra system of measurement hiding in the middle. There’s the inches, feet, and yards system (with the familiar 12:1 and 3:1 ratios we know and love), and the rods, chains, furlongs, and miles system. The latter’s conversion rates are generally “nice”: 4 rods to 1 chain, 10 chains to 1 furlong, and 8 furlongs to 1 mile.

    So where do we get 5,280, with prime factors of 2^5, 3, 5, and 11? Because a chain is 22 yards long. Why? Because somewhere along the line, inches, feet, and yards shrank to a smaller standard, and the nice round 5 yards per rod became 5 and 1/2 yards per rod. Instead of a mile containing 4,800 feet (with quarters, twelfths, and hundredths of a mile all being nice round numbers of feet), it contained an extra 480 feet, because the new feet were 1/11th smaller than the old ones.
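    A quick sanity check of that arithmetic (a Python sketch; the variable names are mine, not from any standard):

```python
# Sanity check: the rod/chain/furlong ladder really does yield 5,280 feet
# per mile once the rod becomes 5 1/2 (smaller) yards.
yards_per_rod = 5.5
rods_per_chain = 4
chains_per_furlong = 10
furlongs_per_mile = 8
feet_per_yard = 3

yards_per_chain = yards_per_rod * rods_per_chain                      # 22.0
yards_per_mile = yards_per_chain * chains_per_furlong * furlongs_per_mile
feet_per_mile = yards_per_mile * feet_per_yard                        # 5280.0

print(yards_per_chain, feet_per_mile)  # 22.0 5280.0

# With the old 5-yard rod, the same ladder gives the "nicer" 4,800 feet:
old_feet_per_mile = 5 * rods_per_chain * chains_per_furlong * furlongs_per_mile * feet_per_yard
print(old_feet_per_mile)  # 4800
```

    And 5280 − 4800 = 480, the extra feet mentioned above.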






  • I mean, I argue that we aren’t anywhere near AGI. Maybe we have a better chatbot and autocomplete than we did 20 years ago, but calling that AI? It doesn’t really track, does it? With how bad they are at navigating novel situations? With how much time, energy, and data it takes to eke out just a tiny bit more model fitness? Sure, these tools are pretty amazing for what they are, but general intelligences, they are not.


  • It’s questionable to measure these things as being reflective of AI, because what AI is changes based on whatever piece of tech is being hawked as AI, because we’re really bad at defining what intelligence is and isn’t. You want to claim LLMs as AI? Go ahead, but you also adopt the problems of LLMs as the problems of AI. Defining AI, and thus its metrics, is a moving target. When we can’t agree on what it is, we can’t agree on what it can do.


  • I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we’ve been talking about with AI, and we’ve accepted LLMs’ features and limitations as what AI is. If LLMs are prone to filling in with whatever closest fits the model without regard to accuracy, then by accepting LLMs as what we mean by AI, AI fits to its model without regard to accuracy.


  • Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty of even qualifying human intelligence, saying that a given metric captures how well a thing is an AI isn’t really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS as a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that’s an awful long way off from talking about AI itself (unless we’ve bought into the marketing hype).