• 0 Posts
• 76 Comments
Joined 1 year ago
Cake day: March 3, 2024

  • ignirtoq@fedia.io to Technology@beehaw.org · The rise of Whatever · 2 days ago

    The thing is it’s been like that forever. Good products made by small- to medium-sized businesses have always attracted buyouts where the new owner basically converts the good reputation of the original into money through cutting corners, laying off critical workers, and other strategies that slowly (or quickly) make the product worse. Eventually the formerly good product gets bad enough that there’s space in the market for an entrepreneur to introduce a new good product, and the cycle repeats.

    I think what’s different now is that, since this has gone on unabated for 70+ years, economic inequality has grown to the point that the people with good ideas for products can’t afford to become entrepreneurs anymore. The market openings are there, but the people that made everything so bad now have all the money. So the cycle is broken not by good products staying good, but by bad products having no replacements.


  • The first statement is not even wholly true. While training does take more power, executing the model (called “inference”) still takes much, much more power than non-AI search algorithms, or really any traditional computational algorithm besides bogosort.
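
    (For the unfamiliar: bogosort is the canonical joke algorithm. It “sorts” by shuffling at random until the list happens to be in order, for a factorial expected runtime. A minimal Python sketch:)

    ```python
    import random

    def bogosort(items: list) -> list:
        # Shuffle until sorted; expected cost is O(n * n!) shuffles.
        while any(a > b for a, b in zip(items, items[1:])):
            random.shuffle(items)
        return items

    print(bogosort([3, 1, 2]))  # fine at n=3; hopeless beyond ~10 elements
    ```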

    Big Tech weren’t doing the best they possibly could transitioning to green energy, but they were making substantial progress before LLMs exploded on the scene because the value proposition was there: traditional algorithms were efficient enough that the PR gain from doing the green energy transition offset the cost.

    Now Big Tech have for some reason decided that LLMs represent the biggest game of gambling ever. The first to find the breakthrough to AGI will win it all and completely take over all IT markets, so they need to consume as much energy as they can get away with to maximize the probability that their engineers are the ones to make that breakthrough.




  • My point is that this kind of pseudo-intelligence has never existed on Earth before, so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence without encountering anything that would put selective pressure against this heuristic.

    Human language is old. Way older than the written word. Our brains have evolved specialized regions for language processing, so evolution has clearly had time to operate while language has existed.

    And LLMs are not the first sophisticated AI that’s been around. We’ve had AI for decades, and really good AI for a while. But people don’t anthropomorphize other kinds of AI nearly as much as LLMs. Sure, they ascribe some human-like intelligence to any sophisticated technology, and some people in history have claimed some technology or another is alive or sentient. But with LLMs we’re seeing a larger portion of the population believing that than we’ve ever seen in human behavior before.


  • My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.

    I actually think this may explain some earlier reports of weird behavior by AI researchers as well. I seem to recall reports of Google researchers believing they had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Kool-Aid, but because he fell prey to this neural heuristic that’s in all of us.




  • Even more surprising: the droplets didn’t evaporate quickly, as thermodynamics would predict.

    “According to the curvature and size of the droplets, they should have been evaporating,” says Patel. “But they were not; they remained stable for extended periods.”

    With a material that could potentially defy the laws of physics on their hands, Lee and Patel sent their design off to a collaborator to see if their results were replicable.

    I really don’t like the repeated use of the phrase “defy the laws of physics.” That’s an extraordinary claim, and it needs extraordinary proof; yet the researchers already propose a mechanism by which the droplets remained stable under existing physical laws, namely that they were being replenished from the nanopores inside the material as fast as evaporation was pulling water out of them.
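
    For context (my gloss, not from the article): the thermodynamic prediction at issue is the standard Kelvin equation, which says the equilibrium vapor pressure over a convex droplet rises as its radius shrinks, so small droplets should evaporate even in air that is saturated with respect to a flat surface:

    $$\ln\frac{p}{p_{\text{sat}}} = \frac{2\gamma V_m}{rRT}$$

    where $\gamma$ is the surface tension, $V_m$ the molar volume of water, $r$ the droplet radius, $R$ the gas constant, and $T$ the temperature. The replenishment mechanism doesn’t defy this; it just supplies water from the nanopores as fast as the equation removes it.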

    I recognize the researchers themselves aren’t using the phrase; it’s Penn’s press office trying to further drum up interest in the research. But it’s a bad framing. You can make it sound interesting without resorting to clickbait techniques like “did our awesome engineers just break the laws of physics??” Hell, the research is interesting enough on its own; passive water collection from the air is revolutionary! No need for editorializing!


  • The main issue is that nobody is going to want to create new content when they get paid nothing or almost nothing for doing so.

    This is the whole reason copyright is supposed to exist. Content creators get exclusive control over the content they create for the duration of the copyright, so they can make a living off of work that then enriches society. And for the further benefit of society, after 14 years this copyright ends and the works become public domain, where anyone can create derivative works whose copyrights go to their own creators, and the cycle continues, further enriching society.

    Large companies first perverted this by getting Congress to extend the duration of copyright to truly absurd levels so they could continue to extract wealth from works they had to spend very little to maintain (mostly lawyers to enforce their copyrights). Since only they could create derivative works for 100(!) years, they did not have to compete with other creators in society, giving themselves a monopoly on what became cultural icons. Now corporate America has found a way to subvert creation itself, but it requires access to effectively all copyrighted works everywhere simultaneously. So now they just ignore copyright, since it is impeding their wealth accumulation.

    And so now the creative engine that copyright is supposed to foster dies, taking with it the social enrichment it was designed to facilitate. People won’t stop making art or creating what would otherwise be copyrighted works, but when they can’t make a living on it, they have to turn it into a hobby and spend the bulk of their time and energy on work that will put food on the table.





  • People are making fun of the waffling and the apparent indecision and are missing the point. Trump isn’t flailing and trying to figure out how to actually make things work. He’s doing exactly what he intended: he’s holding the US economy for ransom and building a power base among the billionaires.

    He used the poor and ignorant to get control of the public institutions, and now he’s using that power to get control over the private institutions (for-profit companies). He’s building a carbon copy of Russia with himself in the role of Putin. He’s almost there, and it’s taken him 2 months to do it.


  • The author hits on exactly what’s happening with the comparison to carcinisation: crustacean evolution converges on a crab-like form because that form is the optimum for the environmental stresses.

    As tiramichu said in their comment, digital platforms are converging to the same form because they’re optimizing for the same metric. But the reason they’re all optimizing that metric is because their monetization is advertising.

    In the golden days of digital platforms, i.e. the 2010s, everything was venture capital funded. A quality product was the first goal, and monetization would come “eventually.” All of the platforms operated this way. Advertising was discussed as one potential monetization, but others were on the table, too, like the “freemium” model that seemed to work well for Google: provide a basic tier for free that was great in its own right, and then have premium features that power users had to pay for. No one had detailed data for what worked and what didn’t, and how well each model works for a given market, because everything was so new. There were a few one-off success stories, many wild failures from the dotcom crash, but no clear paths to reliable, successful revenue streams.

    Lots of products now do operate on the freemium model, but more and more platforms have moved, and are still moving, to advertising, ultimately because the venture capital firms that initially funded them retain strong control over them and have more long-term interest in money than in a good product. The data is now out there that the advertising model makes so, so much more money than a freemium model ever could in basically any market. So VCs want advertising, so everything is TikTok.



  • The open availability of cutting-edge models creates a multiplier effect, enabling startups, researchers, and developers to build upon sophisticated AI technology without massive capital expenditure. This has accelerated China’s AI capabilities at a pace that has shocked Western observers.

    Didn’t a Google engineer put out a memo about this around the time Facebook’s original LLM weights leaked? They compared the rate of development of corporate AI groups to the open source community and found there was no possible way the corporate model could keep up if there were even a small investment in the open development model. The open source community was solving, in weeks, open problems the big companies couldn’t solve in years. I guess China was paying attention.




  • It’s not disingenuous. There are multiple definitions of “offline” being used here, and just because some people aren’t using yours doesn’t mean they’re ignorant or arguing in bad faith.

    Your definition of “offline” encompasses just the executable code. So under that definition, sure, it’s offline. But I wouldn’t call an application “offline” if it requires an internet connection for any core feature, and I call saving my document a core feature of a word processor. Since I wouldn’t call it “offline,” I’m not sure what I would call it, but something closer to “local” or “native,” to distinguish it from a cloud-based application with a browser or other frontend.
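
    To make the distinction concrete, a hypothetical sketch (the function names and endpoint are mine, purely illustrative): a “local”/“native” save works with the network unplugged, while a cloud-backed save hard-fails offline because the document only lives behind an HTTP API.

    ```python
    import json
    from pathlib import Path

    import requests  # third-party; only the cloud-backed variant needs it

    def save_local(doc: dict, path: Path) -> None:
        # “Local”/“native”: the core feature needs no network at all.
        path.write_text(json.dumps(doc))

    def save_cloud(doc: dict, api_url: str, token: str) -> None:
        # Cloud-backed: the same core feature fails without a connection,
        # because the document only exists behind the remote API.
        resp = requests.post(
            api_url,
            json=doc,
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
    ```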