Freedom is the right to tell people what they do not want to hear.
Someone want to explain to a muggle in plain english what this does and how it’s different from simply using a VPN?
Seems better that they live within their own community with like-minded people than within the general population. Win-win.
Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute‑force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.
I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it just treats us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.
That would by definition mean it’s not superintelligent.
Superintelligence doesn’t imply ethics. It could just as easily be a completely unconscious system that’s simply very, very good at crunching data.
If you’re genuinely interested in what “artificial superintelligence” (ASI) means, you can just look it up. Zuckerberg didn’t invent the term - it’s been around for decades, popularized lately by Nick Bostrom’s book Superintelligence.
The usual framing goes like this: Artificial General Intelligence (AGI) is an AI system with human-level intelligence. Push it beyond human level and you’re talking about Artificial Superintelligence - an AI with cognitive abilities that surpass our own. Nothing mysterious about it.
“Study my brain. I’m sorry,” Tisch quoted Tamura as having written in the note. The commissioner noted that Tamura had fatally shot himself in the chest.
Didn’t shoot himself in the head to preserve the brain. Reminds me of the “Texas Tower Shooter” Charles Whitman.
In his note, Whitman went on to request an autopsy be performed on his remains after he was dead to determine if there had been a biological cause for his actions and for his continuing and increasingly intense headaches.
During the autopsy, Dr. Chenar reported that he discovered a pecan-sized brain tumor, above the red nucleus, in the white matter below the gray center thalamus, which he identified as an astrocytoma with slight necrosis.
I’ve heard a neuroscientist discuss this and conclude that the tumor could very well have been the cause of his behavior.
Deleted / Wrong thread.
Judging by the comments here, I’m getting the impression that people would rather provide a selfie or ID.
No reason other than that it’s geographically closer to my actual location, so I thought the speed would be faster.
The EU is about to do the exact same thing. Norway is the place to be. That’s where I went - at least according to my IP address.
FUD has nothing to do with what this is about.
And nothing of value was lost.
Sure, if privacy is worth nothing to you but I wouldn’t speak for the rest of the UK and EU.
My feed right now.
No disagreement there. While it’s possible that Trump himself is not guilty of any wrongdoing in this particular case - though he also might be - he sure acts like someone who is. And if he’s not protecting himself, then he’s protecting other powerful people around him who may have dirt on him, which they can use as leverage to stop him from throwing them under the bus without taking himself down in the process.
But that’s a bit beside the point. My original argument was about refraining from accusing him of being a child rapist on insufficient evidence, no matter how much it might serve someone’s political agenda or how satisfying it might feel to finally see him face consequences. If there’s undeniable proof that he is guilty of what he’s being accused of here, then by all means he should be prosecuted. But I’m advocating for due process. These are extremely serious accusations that should not be spread as facts when there’s no way to know - no matter who we’re talking about.
It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.
It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.
Now that I think of it, the Finnish term for this is “purkkapatentti”, which translates to “chewing gum patent”.