• 4 Posts
  • 127 Comments
Joined 2 years ago
Cake day: July 1st, 2023




  • I think there may be more opportunity for success here than your argument seems to suggest.

    I agree with the focus on inequality. The sense that society is fundamentally unfair has a corrosive, radicalising effect on politics. People can react to it in very different ways, from redistribution to out-group scapegoating, but the underlying motivation is the same: people see that there is vast wealth in our society and they’re still struggling.

    Where I may disagree is on this point: most people are non-ideological. Not everyone, but a healthy majority. They aren’t focused on the philosophical roots of a candidate’s policies. They care that the candidate

    1. Sees, likes, and cares about them and their group
    2. Has a vision that gives them hope for something better

    Many people can find that in candidates with a variety of ideological positions. The overlap between people who supported Bernie after the Great Recession and went on to support Trump is bigger than one would expect.

    So the equation is much less zero-sum. You don’t lose one reactionary for every radical you bring into your camp. There really aren’t that many committed radicals or reactionaries.

    The most toxic message today is economic moderation: “Hey, it’s not so bad. Things could be a lot worse.” This is the zero-sum relationship. You can’t keep both the people who are doing well and like how things work and the people who are struggling and want the life they deserve. The material divide isn’t left versus right; it’s status quo versus change. There’s a lot more room for flexibility in the change camp.




  • I’ve listened to a couple interviews with the author about this book, and I have not found them persuasive. I can accept that there’s a possibility that artificial super intelligence (ASI) could occur soonish, and is likely to occur eventually. I can accept that such an ASI could choose to do something that kills everyone, and that it would be extremely difficult to stop it.

    I see no reason to accept the two other arguments necessary for the title claim. First, that any ASI must necessarily choose to kill everyone. The paper-clip scenario is the basic shape of the arguments presented. I think it’s probably impossible to predict what an ASI would want, and very unlikely that it would be so simple-minded as to convert the solar system into paper clips. It’s a weird proposal that an ASI must be both incomprehensibly capable and simultaneously brainless.

    Second, that the alignment problem cannot be solved before the superintelligence problem on current trajectories. Again, this may be true, but I do not think it’s a given that current AI techniques are sufficient for human-level, let alone super-human, intelligence.

    Overall, the problem is that the author argues the risk is a certainty. I don’t know what the real risk is, but I do not believe it is 100%. Perhaps it’s a rhetorical concession, an overstatement to scare people into accepting his proposals. Whatever the reason, I’m sympathetic to the actual proposals: that we need better monitoring and safety controls on AI research and hardware, including a moratorium if necessary. The risk isn’t 100%, but it isn’t 0% either.






  • They do, but I’m a little surprised by how well they’ve positioned themselves on this one. It seems to me that the most likely scenario is that the Republicans will give nothing on principle, the shutdown will run until November when the premiums increase, and the country will see that the Republicans would rather close the government for two months than spare them a doubling or tripling of their healthcare costs.

    And all the while, Trump trashes the government in an attempt to retaliate, without really understanding that the government provides services that people, his voters included, depend on. I’m not sure “the Democrats made me do it” will save him with anyone other than his cult members.

    I am cautiously optimistic.







  • I’m not really qualified to evaluate the merits here, but as a science-interested layman, I’d be glad to see an alternative to dark matter and dark energy. Setting aside the technical arguments, that approach smells like questioning the observations whenever your theory doesn’t match them.

    I skimmed the paper for testable predictions, and nothing stood out to me. Fitting existing observations is a good place to start, but if the only prediction is that nobody will ever find dark matter or dark energy, things may remain undecided forever.