  • So the two biggest examples I am currently aware of are Google’s AI for protein folding (AlphaFold) and a startup using one to optimize rocket engine geometry, but AI models in general can be highly efficient when focused on niche tasks. As far as I understand it, they’re still very similar in underlying function to LLMs, but the approach is far less scattershot, which makes them dramatically more efficient.

    A good way to think of it: even the earliest versions of ChatGPT, or the simplest local models, are all equally good at actually talking. But language has a ton of secondary requirements, like understanding context, remembering things, and the fact that not every grammatically valid banana is a useful one. So an LLM has to actually be a TON of things at once, while an AI designed for a specific technical task only has to be good at that one thing.

    Extension: the problem is our models are not good at talking to each other, because they don’t ‘think’; they just optimize an output from an input and a set of rules, so they share no common rules or internal framework. We can’t, say, take an efficient rocket-engine-design AI and plug it into an efficient basic chatbot and have that chatbot talk knowledgeably about rockets. Instead we have to try to make the chatbot memorise a ton about rockets (and everything else), which it was never initially designed to do, and that leads to immense bloat.
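    For a concrete sense of what “optimize an output using an input and a set of rules” means, here’s a toy sketch. The “nozzle” objective and every number in it are invented purely for illustration:

```python
# Toy illustration of "optimize an output from an input and a set of rules":
# naive finite-difference gradient descent on a made-up objective. The point
# is that the optimizer finds a good number without "knowing" anything about
# rockets; swap in a different objective and nothing else changes.
def optimize(objective, x, lr=0.01, steps=2000):
    for _ in range(steps):
        # Approximate the slope at x with a tiny finite difference.
        grad = (objective(x + 1e-6) - objective(x - 1e-6)) / 2e-6
        x -= lr * grad  # step downhill
    return x

# Pretend "thrust loss" is minimized at a nozzle area of 3.0 (made up).
best = optimize(lambda area: (area - 3.0) ** 2, x=0.5)
print(round(best, 3))  # 3.0
```

    It converges on 3.0, but there is no internal representation of “rocket” anywhere that another model could plug into.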




  • So you’re getting a lot of downvotes and I want to try and give an informative answer.

    It’s worth noting that most (if not all) of the people talking about AI being super close to exponential improvement and takeover own or work for companies heavily invested in AI. There’s talk of (and examples of) AI ‘lying’, hiding its capabilities, or being willing to murder a human to achieve a goal after promising not to. These are not examples of deceit; they simply showcase that an LLM has no understanding of what words mean, or even are. To it they are just tokens to be processed, and the words ‘I promise’ hold exactly the same level of importance as ‘Llama dandruff’.
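    To make that concrete, here’s a deliberately simplified toy tokenizer (real LLMs use subword schemes like BPE, but the point stands: the model only ever sees opaque integer IDs):

```python
# Toy word-level tokenizer (NOT any real LLM's tokenizer). Each new word
# gets an arbitrary integer ID, first come first served.
vocab = {}

def toy_tokenize(text):
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # arbitrary ID, no meaning attached
        ids.append(vocab[word])
    return ids

a = toy_tokenize("i promise")       # [0, 1]
b = toy_tokenize("llama dandruff")  # [2, 3]
# Both phrases become plain lists of integers; nothing marks a promise as
# special before the number-crunching even begins.
```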

    I also don’t want to disparage the field as a whole. There are some truly incredible expert systems, basically small specialized models using a much less shotgun approach to learning than LLMs, that can achieve remarkable things with performance requirements modest enough to run on home hardware. These systems are absolutely already changing the world, but since they’re all very narrowly focused and industry- or scientific-field-specific, they don’t grab headlines like LLMs do.














  • To be fair to the traffic engineers timing the lights: we have verifiable proof that adding roads can sometimes actually increase traffic (and that removing them can decrease it), a result known as Braess’s paradox. This information is not always taken into account at a high level, so they get stuck trying to fix an innately flawed system.
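    The standard textbook illustration of this (often called Braess’s paradox) can be checked with a few lines of arithmetic. The network and numbers below are the classic hypothetical example, not a real road system:

```python
# Braess's paradox: the standard 4000-driver textbook network.
# Route 1: S->A takes (drivers on it)/100 minutes, then A->E takes 45.
# Route 2: S->B takes 45 minutes, then B->E takes (drivers on it)/100.
DRIVERS = 4000

# Without a shortcut, drivers split evenly across the two symmetric routes.
cost_before = (DRIVERS / 2) / 100 + 45            # 20 + 45 = 65 minutes

# Add a free A->B shortcut. Each driver now individually prefers
# S->A->B->E, so at equilibrium everyone piles onto both /100 links.
cost_after = DRIVERS / 100 + 0 + DRIVERS / 100    # 40 + 0 + 40 = 80 minutes

print(cost_before, cost_after)  # 65.0 80.0 -- the new road slowed everyone
```

    Each driver is acting rationally at every step, yet the extra road makes every single trip 15 minutes slower.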




  • I’ve said this before and I’ll say it again: what Linux needs is a straightforward setup. Yes, Mint is normally super easy to install, but it can also randomly just not work due to what is often a very simple issue, yet one obscure enough that the inexperienced (like me) will take hours or even days of trying different solutions until they find it. I love how light Linux is, but an extra half a gigabyte in the installer to include fixes for the most common issues out of the box would pull in way more people than it would push away.