US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing views of a nationally representative sample (5,410) of the general public to a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally rather than harm them (15 percent).
The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.
All it took was for us to destroy our economy using it to figure that out!
I don’t believe AI will ever be more than essentially a parlor trick that fools you into thinking it’s intelligent, when it’s really just a more advanced tool, like Excel compared to pen and paper or an abacus.
The real threat will be people who fool themselves into thinking it’s more than that and that its word is law, like a deity’s. Or worse, the people who do understand that but, like the various religious and political leaders who used religion to manipulate people, the new AI Popes will try to do the same manipulation with AI.
“I don’t believe AI will ever be more than essentially a parlor trick that fools you into thinking it’s intelligent.”
So in other words, it will achieve human-level intellect.
For once, most Americans are right.
They’re right. What happens to the workers when they’re no longer required? The horses faced a similar issue at the advent of the combustion engine. The solution? Considerably fewer horses.
Most people in the early ’90s didn’t have, or think they needed, a computer.
It’s just going to help industry provide inferior services and make more profit. Like AI doctors.
So far AI has only aggravated me by interrupting my own online activities.
First thing I do is disable it
I wish it was optional. When I do a search, the AI response is right at the top. If I want AI advice, I’ll go ask AI. I don’t use a search engine to get answers from AI!
I imagine you could filter it with uBlock right?
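For what it’s worth, uBlock Origin does support hiding page elements via cosmetic filters (`site.com##selector`). The filter below is only a sketch: the `.ai-overview` class name is a hypothetical placeholder, and you’d use uBlock’s element picker to find the real container on whichever search engine you use.

```
! Hypothetical cosmetic filter: hide a search page's AI answer box.
! ".ai-overview" is a placeholder class name; use uBlock's element
! picker to find the actual container on your search engine.
google.com##.ai-overview
```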
The new Cascadeur update just killed in-betweening jobs, if it’s as good as the trailer. But, uh, I think this is a case where AI is good: like, yeah, jobs lost, but the time saved is wild for indie animators.
https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo
Try this voice AI demo on your phone, then imagine if it can create images and video.
This, in my opinion, changes every system of information gathering that we have, and will usher in an era of geniuses who grew up with access to the answer to their every question in a granular, pictorial video response. If you want, for example, to learn how white blood cells work, you ask your chatbot for a video, and you can then tell it to put in different types of bacteria to see the response. It’s going to make a lot of the systems we have now obsolete.
This presumes trust in its accuracy.
A very high bar.
Removing the need to do any research is just removing another exercise for the brain. Perfectly crafted AI educational videos might be closer to mental junk food than anything.
Same was said about calculators.
I don’t disagree though. Calculators are pretty discrete, and their functions are well defined.
Assuming AI can be trusted to be accurate at some point, it will reduce cognitive load, freeing capacity that can be utilized for even higher thinking.
Holy shit, that AI chat is too good.
you can’t learn from chatbots though. how can you trust that the material is accurate? any time I’ve asked a chatbot about subject matter that I’m well versed in, they make massive mistakes.
All you’re proving is “we can learn badly faster!” or worse, we can spread misinformation faster.
This is another level, thanks for sharing!
People aren’t very smart, have trouble understanding new things, and fear change, so of course they express negative opinions.
Most Americans would have said the same about electricity, computers, the internet, mobile phones…
Maybe that’s because every time a new AI feature rolls out, the product it’s improving gets substantially worse.
Maybe that’s because they’re using AI to replace people, and the AI does a worse job.
Meanwhile, the people are also out of work.
Lose-lose.
Even if you’re not “out of work”, your work becomes more chaotic and less fulfilling in the name of productivity.
When I started 20 years ago, you could round out a long day with a few hours of mindless data entry or whatever. Not anymore.
A few years ago I could talk to people or maybe even write a nice email communicating a complex topic. Now chatGPT writes the email and I check it.
It’s just shit honestly. I’d rather weave baskets and die at 40 years old of a tooth infection than spend an additional 30 years wallowing in self loathing and despair.
30 years ago I did a few months of 70 hour work weeks, 40 doing data entry in the day, then another 30 stocking grocery shelves in the evening - very different kinds of work and each was kind of a “vacation” from the other. Still got old quick, but it paid off the previous couple of months’ travel / touring with no income.
It didn’t even need to take someone’s job. A summary of an article or paper with hallucinated information isn’t replacing anyone, but it’s definitely making search results worse.
Maybe it’s because the American public are shortsighted idiots who don’t understand that future outcomes are based on present decisions.
“Everyone else is an idiot but me, I’m the smartest.”
lmao ok guy
60 million Americans just went to the polls 4 months ago homie. It ain’t about me.
Yeah maybe if your present decisions were smarter you would be even smarter in the future and could agree with his incredibly smart argument. Make better present decisions.
I think they have a point in this respect, though. AI doesn’t really think; it doesn’t come up with new ideas or new innovations. It’s just a way of automating existing mental tasks.
It’s not sci-fi AI. It’s not going to elevate us to a utopian society, because it doesn’t have the intelligence required for something like that, and I can’t see how a large language model ever will. I think the technology will be useful, but hardly revolutionary.
LLMs can’t reliably deliver what they promise, and AGI based on them won’t happen. So what are you talking about?
Maybe if a service isn’t ready to be used by the public you shouldn’t put it in every product you make.
Shut up nerd
The first thing seen at the top of WhatsApp now is an AI query bar. Who the fuck needs anything related to AI on WhatsApp?
Android Messages and Facebook Messenger have also pushed AI in as “something you can chat with.”
I’m not here to talk to your fucking chatbot I’m here to talk to my friends and family.
Who the fuck needs anything related to AI on WhatsApp?

Lots of people. I need it because it’s how my clients at work prefer to communicate with me, and also how all my family members and friends communicate.
Right?! It’s literally just a messenger, honestly, all I expect from it is that it’s an easy and reliable way of sending messages to my contacts. Anything else is questionable.
There are exactly 0 good reasons to use WhatsApp anyway…
Yes, there are. You just have to live in one of the many, many countries in the world where the overwhelming majority of the population uses WhatsApp as their communication app. Like my country, where not only friends and family but also businesses and government entities use WhatsApp as their messaging app. I have at least a couple hundred reasons to use WhatsApp, including all my friends, all my family members, and all my clients at work. Do I like it? Not really. Do I have a choice? No. Just like I don’t have a choice about not using Gmail, because that’s the email provider the company I work for decided to go with.
SMS works fine in any country.
And you can isolate your business requirements from your personal life.
I have 47 good reasons. Those 47 reasons are the people in my contact list who have WhatsApp and use it as their primary method of communicating.
SMS works fine.
No it doesn’t. It’s slow, can’t send files, can’t send video or images, doesn’t have read receipts or away notifications. Why would I use an inferior tool?
Why do you even care anyway?
Meta directly opposes the collective interests and human rights of all working class people, so I think the better question is how come you don’t care.
There are many good reasons to not use WhatsApp. You’ve already correctly identified 47 of them.
Well, now I’m sure it will.
It’s not really a matter of opinion at this point. What is available has little if any benefit to anyone who isn’t trying to justify rock bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.
Everyone gains from progress. We’ve had the same discussion over and over again. When the first sewing machines came along, when the steam engine was invented, when the internet became a thing. Some people will lose their job every time progress is made. But being against progress for that reason is just stupid.
The current drive behind AI is not progress, it’s locking knowledge behind a paywall.
As soon as one company perfects their AI, it will draw everyone to use it, marketing it as ‘time saver’ so you don’t have to do anything (including browsing the web, which is in decline even now). Just ask and you shall receive everything.
Once everyone gets hooked and there’s no competition left, they will own the population. News, purchase recommendations, learning: everything we do to work on our cognitive abilities will be sold through a single vendor.
Suddenly you own the minds of many people who can’t think for themselves or search for knowledge on their own… and that’s already happening.
And it’s not the progress I was hoping to see in my lifetime.
Man it must be so cool going through life this retarded. Everything is fine, so many more things are probably interesting….lucky
What progress are you talking about?
being against progress for that reason is just stupid.
Under the current economic model, being against progress is just self-preservation.
Yes, we could all benefit from AI in some glorious future that doesn’t see the AI displaced workers turned into toys for the rich, or forgotten refuse in slums.
I’m not sure at this point. The sewing machine was just automated stitching. This is more like photography versus landscape painters, only worse.
With creative AI, most visual art skills basically went from “I’m going to pay $20K and wait 30 days for the project” to “I’m going to pay $100 for AI to do this.” Soon doctors, therapists, and teachers will be looking down the barrel: “Why pay $150 for one therapy session when I can have an AI friend for $20 a month?”
In the past you were able to train yourself to use a sewing machine, or learn how to operate cameras and develop photos. Now I don’t even have any idea where it goes.

Machine stitching is objectively worse than hand stitching, but… it’s good enough and so much more efficient, so that’s how things are done now; it has become the norm.
AI is changing the landscape of our society. It’s only “destroying” society if that’s your definition of change.
But fact is, AI makes every aspect where it’s being used a lot more productive and easier. And that has to be a good thing in the long run. It always has.
Instead of holding against progress (which is impossible to do for long) you should embrace it and go from there.
The worry is deeper than just changes in production. Not all progress is good; think of the broken branches of evolution.
The fact that we don’t teach kids how to write by hand anymore has already taken a lot of childhood development, and later brain development and memory improvement, out of the running.
With AI, drawing, writing, and music have now become a single-sentence prompt. So why keep all those things? Why literally waste time developing a skill that you cannot sell? Sure, for fun…
And you bring up efficiency. Efficiency is just a buzzword that big companies use to replace human labor. How much more efficient is a bank where you have four machines and one human teller? Or a fast food restaurant where the only front-of-house employee just delivers the food to the counter, and you can only place an order with a computer?
There is a point where our monkey brains can’t compete and won’t be able to exist without human-to-human stuff. But I don’t need to worry: in two years we won’t be able to differentiate between AI and humans, and we can just fake that connection for the rest of our efficient lives.
I’m not against improving stuff, but where this is focused won’t help us in the long run…

Are you a trust fund kid or something?
AI makes every aspect where it’s being used a lot more productive and easier.
AI makes every aspect where it’s being used well a lot more productive and easier.
AI used poorly makes it a lot easier to produce near worthless garbage, which effectively wastes the consumers’ time much more than any “productivity gained” on the producer side.
Everyone gains from progress.
It’s only true in the long-term. In the short-term (at least some) people do lose jobs, money, and stability unfortunately
And as someone who has extensively set up such systems on their home server… yeah, it’s a great Google Home replacement, nothing more. It’s beyond useless in Power Automate, which I use (unwillingly) at my job. Copilot can’t even parse and match items from two lists. Despite my company trying its damn best to encourage “our own” AI (ChatGPT Enterprise), nobody I have talked with has found a use.
AI search is occasionally faster and easier than slogging through the source material that the AI was trained on. The source material for programming is pretty weak itself, so there’s an issue.
I think AI has a lot of untapped potential, and it’s going to be a VERY long time before people who don’t know how to articulate what they want will be able to communicate it to an AI.
A lot of programming today gets value from the programmers guessing (correctly) what their employers really want, while ignoring the asks that are impractical / counterproductive.
You’re using it wrong then. These tools are so incredibly useful in software development and scientific work. Chatgpt has saved me countless hours. I’m using it every day. And every colleague I talk to agrees 100%.
I’ve found it primarily useless to harmful in my software development, making debugging its poorly structured code the main place my time is spent. What sort of software and language do you use it for?
Then you must know something the rest of us don’t. I’ve found it marginally useful, but it leads me down useless rabbit holes more than it helps.
I’m about 50/50 between helpful results and “nope, that’s not it, either” out of the various AI tools I have used.
I think it very much depends on what you’re trying to do with it. As a student, or a fresh-grad employee in a typical field, it’s probably much more helpful, because you are working well-trodden ground.
As a PhD or other leading-edge researcher, possibly in a field without a lot of publications, you’re screwed as far as the really inventive stuff goes, but… if you’ve read “Surely You’re Joking, Mr. Feynman!” there’s a bit in there where the Manhattan Project researchers (definitely breaking new ground at the time) needed basic stuff, like gears, for what they were doing.

The gear catalogs of the day told them a lot of what they needed to know. Per the text: if you’re making something that needs gears, pick your gears from the catalog, but avoid the largest and smallest of each family/table. Those are there because the next size up or down runs into some kind of engineering problem, so stay away from the edges and you should have much more reliable results. That’s an engineer’s shortcut for using thousands, maybe millions, of man-years of prior gear research, development, and engineering, and getting the desired results just by referencing a catalog.
I’ll admit my local model has given me some insight, but when researching more about something, I often find the source it likely spat it out from. Now that’s helpful, but I feel as though, if my normal search experience weren’t so polluted with AI-written regurgitation of the next result down, I would’ve found the nice primary source anyway. One example was a code block that computes the moment of inertia about each rotational axis of a body. You can try searching for sources and compare what it puts out.
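For context, here’s a minimal sketch of the kind of code block described: the moment of inertia of a body, modeled as point masses, about each coordinate axis. The function name and the point-mass representation are my own illustration, not necessarily what the model actually produced.

```python
# Moments of inertia about the x, y, and z axes for a rigid body
# approximated by point masses (the diagonal of the inertia tensor):
#   I_xx = sum m*(y^2 + z^2)
#   I_yy = sum m*(x^2 + z^2)
#   I_zz = sum m*(x^2 + y^2)

def moments_of_inertia(points):
    """points: iterable of (mass, x, y, z) tuples; returns (Ixx, Iyy, Izz)."""
    ixx = iyy = izz = 0.0
    for m, x, y, z in points:
        ixx += m * (y * y + z * z)
        iyy += m * (x * x + z * z)
        izz += m * (x * x + y * y)
    return ixx, iyy, izz

# Two unit masses on the x-axis at +/-1: no inertia about x,
# but 2.0 about y and about z.
print(moments_of_inertia([(1.0, 1.0, 0.0, 0.0), (1.0, -1.0, 0.0, 0.0)]))
```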
If you have more insight into which tools (especially ones I can run locally) would improve my impression, I would love to hear it. However, my opinion remains that AI has been a net negative on the internet as a whole (spam, bots, scams, etc.) so far, and it certainly has not, and probably will not, live up to the hype forecast by their CEOs.
Also, if you have access to Power Automate or at least generally know how it works: Copilot can only add nodes, seemingly in the general order you specify, but does not connect the dataflow between the nodes (the hardest part) at all. Sometimes it will parse the dataflow connections and return what you were searching for (i.e., a specific formula used in a large dataflow), but little of that seems to need AI.
I think a lot depends on where “on the curve” you are working, too. If you’re out past the bleeding edge doing new stuff, ChatGPT is (obviously) going to be pretty useless. But, if you just want a particular method or tool that has been done (and published) many times before, yeah, it can help you find that pretty quickly.
I remember doing my master’s thesis in 1989; it took me months of research and journals delivered via inter-library loan before I found mention of other projects doing essentially what I was doing. With today’s research landscape, that multi-month delay should be compressed to a couple of hours, frequently less.
If you haven’t read “Melancholy Elephants,” it’s a great reference point for what we’re getting into with modern access to everything.
If you were too lazy to read three Google search results before, yes… AI is amazing in that it shows you something you ask for without making you dig as deep as you used to have to.
I rarely get a result from ChatGPT that I couldn’t have skimmed for myself in about twice to five times the time.
I frequently get results from ChatGPT that are just as useless as what I find reading through my first three Google results.
If it was marketed and used for what it’s actually good at this wouldn’t be an issue. We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people’s jobs easier and achieve better results. I understand its uses and that it’s not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.
This. It seems like they have tried to shoehorn AI into just about everything but what it is good at.
The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.
This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These AI Silicon Valley fanatics are the same ones saying that basic social welfare programs are naive and un-implementable - so why would they suddenly change their entire perspective on life?
we will be no closer to some kind of 8-hour workweek utopia.
If you haven’t read this, it’s short and worth the time. The short work week utopia is one of two possible outcomes imagined: https://marshallbrain.com/manna1
This vision of the AI making everything easier always leaves out the part where nobody has a job as a result.
Sure you can relax on a beach, you have all the time in the world now that you are unemployed. The disconnect is mind boggling.
Universal Basic Income: it’s either that or just kill all the unnecessary poor people.
Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.
For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.
And if you run the AI locally, you can also be free of paying a subscription to a big AI company.
Maybe pedantic, but:
Everyone seems to think CEOs are the problem. They are not. They report to and get broad instruction from the board. The board can fire the CEO. If you got rid of a CEO, the board will just hire a replacement.
And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.
The problems are systemic, not individual.
Shareholders only care about the value of their shares increasing. It’s a productive arrangement, up to a point, but we’ve gotten too good at ignoring and externalizing the human, environmental, and long term costs in pursuit of ever increasing shareholder value.
CEOs are the figurehead; they are virtually bound by law to act sociopathically, in the interests of their shareholders over everyone else. Carl Icahn also has an interesting take on a particularly upsetting emergent property of our system of CEO selection: https://dealbreaker.com/2007/10/icahn-explains-why-are-there-so-many-idiots-running-shit
We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.
That’s an opinion, and one I share in the vast majority of cases, but there’s a lot of artwork that AI really can do “good enough” for the purpose, and we should be freeing up the human artists to do the more creative work. Writers: if AI is turning out acceptable copy (which in my experience is almost never, so far, but hypothetically, eventually), why use human writers to do that? And so on down the line.
The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, not being realistic about how long it’s really going to take to achieve the things they’re hyping.
“Artificial Intelligence” has been 5-10 years off for 40 years. We have seen amazing progress in the past 5 years compared to the previous 35, but it’s likely to be 35 more before half the things being touted as “here today” are actually working at a positive ROI. There are going to be more than a few examples like the “smart grocery store” where you just put things in your basket and walk out and get charged “appropriately,” supposedly based on AI surveillance, but really mostly powered by low-cost labor somewhere else on the planet.