So… with all this openclaw stuff, I was wondering: what’s the FOSS status for something to run locally? Can I get my own locally run agent that I can ask to perform simple tasks (go and find this, download that, summarize an article)? I’m just kinda curious about all of this.

Thanks!

  • cecilkorik@lemmy.ca · 11 points · 2 days ago

    Absolutely. There are tons of open-licensed, open-weight models (open weights being the rough equivalent of open source for AI models) capable of what is called “tool usage”. The key thing to understand is that they’re never quite perfect, and they don’t all “use tools” equally effectively or in the same way as each other. This is common to LLMs, and it is critical to understand that at the end of the day they are just text generators; they do not “use tools” themselves. They generate specific structured text that triggers some other piece of software, typically called a harness (but also sometimes a client or frontend), to call those tools on your system. Openclaw is an example of such a harness (and not a great or particularly safe one in my opinion, but if you want to be a lunatic and give an AI model free rein, it seems to be the best choice). You can use commercial harnesses too, by configuring or tricking them into connecting to a local model instead of their commercial one, although I don’t recommend this for a variety of reasons. If you really want to use claude code itself, people have done it, but I don’t find it works very well, since all of its prompts and tool calling are optimized for Claude models. Besides OpenClaw, other popular harnesses for local models include OpenCode (as close as you’re going to get to claude for local models) and Cursor; even Ollama has its own CLI harness now. Personally I use OpenCode a lot, but I’m starting to lean towards pi-mono (it’s just called pi, but that’s ungoogleable). It’s very minimal and modular, intentionally easy to customize with plugins and skills you can automatically install, so you can make it exactly as safe, capable, or visual as you wish.
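    To make the “they’re just text generators” point concrete, here’s a minimal Python sketch of what a harness does with a model’s output. The JSON tool-call format and the read_file tool here are made up for illustration; every real model/harness pair has its own convention:

```python
import json
import os
import tempfile

# A toy harness loop. The model only ever produces TEXT; the harness
# decides whether that text is a tool call and, if so, whether to
# actually execute it on your system.

def read_file(path):
    """The only tool this toy harness is willing to run."""
    with open(path) as f:
        return f.read()

TOOLS = {"read_file": read_file}

def handle_model_output(text):
    """If the model's output parses as a tool call, dispatch it;
    otherwise treat it as plain prose for the user."""
    try:
        call = json.loads(text)
    except json.JSONDecodeError:
        return text  # not a tool call, just show it to the user
    name = call.get("tool")
    if name not in TOOLS:
        return f"refused: unknown tool {name!r}"
    return TOOLS[name](**call.get("arguments", {}))

# Simulate one model turn: write a scratch file, then pretend the
# model emitted a tool call asking to read it back.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello from disk")
    scratch = f.name

model_output = json.dumps({"tool": "read_file", "arguments": {"path": scratch}})
result = handle_model_output(model_output)
os.unlink(scratch)
```

    The point of the sketch is the refusal branch: the safety of the whole setup lives in the harness, not the model, which is exactly why harness choice matters so much.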

    As a minor diversion, we should also discuss what a “tool” is. In this context, there are some common basic tools that most tool-use models will understand some variation of out of the box. Things like editing files, running command-line tools, opening documents, and searching the web are built-in capabilities that pretty much any model advertising “tool use” or “tool calling” will support, although some agents can use them more capably and effectively than others. Just as some people know the Linux command line fluently and can operate their whole system with it, while others only know basic commands like ls or cat and need a GUI or guidance for anything more complex, AI models vary too; some (the latest models in particular) are incredibly capable with even just their basic built-in tools. However, they’re not limited to what’s built in, since, as I said, they can accept guidance on what to use and how to use it. You can guide them explicitly if you happen to be fluent in their tools, but there are roughly two competing approaches for giving them that guidance automatically. The first is MCP (Model Context Protocol), a separate server the harness can connect to that provides structured listings of different kinds of tools, what they do, and how they work, basically letting the model connect to a huge variety of APIs in almost any software or service. Some harnesses have MCP support built in. The other approach is called “skills”, which seems (to me) a more sensible and flexible way of giving the model enough understanding to expand the set of tools it can use. Again, providing skills is usually handled by the harness you’re using.
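    To give a feel for the MCP side, here’s a rough sketch of the kind of structured tool listing such a server provides, and how a harness might flatten it into the model’s context. The listing shape is illustrative rather than the actual wire format (the real protocol is JSON-RPC), and the search_web tool is a made-up example:

```python
import json

# Roughly the shape of a tool listing an MCP-style server hands back.
# The harness turns this into plain text in the model's context
# window, which is all the model ever actually sees.
tool_listing = {
    "tools": [
        {
            "name": "search_web",
            "description": "Search the web and return the top results.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }
    ]
}

def listing_to_prompt(listing):
    """Flatten a tool listing into prompt text. The model 'learns'
    a tool simply by reading a description of it."""
    lines = [
        f"- {t['name']}: {t['description']} "
        f"(arguments: {json.dumps(t['inputSchema']['properties'])})"
        for t in listing["tools"]
    ]
    return "You can call these tools:\n" + "\n".join(lines)

prompt_fragment = listing_to_prompt(tool_listing)
```

    “Skills” end up in the context the same way; the difference is mostly in how the guidance is packaged and discovered, not in what the model receives.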

    To make this a little less abstract, put it in the perspective of Claude: Anthropic provides several different Claude models, like Haiku, Sonnet, and Opus. These are the text-generation models, and they have been trained to produce a particular tool-usage format, though Opus tends to have more built-in capability than something like Haiku. Regardless of which model you choose (and you can switch at any time), you’ll be using a harness, typically “claude code”, the CLI tool most people use to interact with Claude in an agentic, tool-calling capacity.

    On the open and local side of the landscape, we unfortunately don’t have anything quite as fast or capable as claude code, but we can do surprisingly well considering we’re running small local models on consumer hardware, not massive data-center farms being enticingly given away, or rented for pennies on the dollar of what they actually cost these companies, in the hopes that marketshare capture and vendor lock-in lead to future profits.

    Here are some pretty capable tool-use models I would recommend (most should be available for download through ollama and other sources like huggingface):

    • gemma4 (the latest and greatest hotness, MIT licensed using TurboQuant to deliver pretty incredible capability, performance and results even with limited VRAM)
    • qwen3.5 (from Alibaba, a consistent and traditional leader in open models so far with good capability and modest performance)
    • qwen3-coder-next (a pretty huge coding-focused model you might struggle to run unless you have a very beefy system and GPU)
    • glm4.7-flash (a modestly capable and reasonably fast option)
    • devstral-small-2 (an older, not-so-small variant from Mistral, the French open-weight AI lab, if you’re looking for one of the few non-Chinese, non-US options, which are few and far between)
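    If you go the ollama route, tool calling happens over its local HTTP API. Here’s a rough sketch of what a tool-enabled chat request body looks like (not actually sent anywhere here; the fetch_url tool is a made-up example, and the tools field follows the OpenAI-style function schema that ollama’s chat endpoint accepts):

```python
import json

# Sketch of a tool-enabled request for a locally running Ollama
# server (normally POSTed to http://localhost:11434/api/chat).
# Model name taken from the list above; fetch_url is hypothetical.
payload = {
    "model": "qwen3.5",
    "messages": [
        {"role": "user", "content": "Summarize this article: https://example.com/article"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "fetch_url",
                "description": "Download a page and return its text.",
                "parameters": {
                    "type": "object",
                    "properties": {"url": {"type": "string"}},
                    "required": ["url"],
                },
            },
        }
    ],
    "stream": False,  # get one complete JSON response, not a token stream
}

body = json.dumps(payload)
```

    If the model decides to use the tool, the response comes back with a tool-call message instead of prose, and it’s your harness’s job to run the tool and feed the result back in as another message.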
    • iturnedintoanewt@lemmy.world (OP) · 2 points · 23 hours ago

      Thank you very much for this reply. I don’t need to stick to claude or openclaw at all, and I definitely don’t want to give any model free rein over my data. They’re just the ones I’ve seen mentioned the most, I guess. But I’d like to be able to run it all locally, and only on command. Your answer is exactly what I needed. I’m gonna study the options you provided carefully, and I might go from there. Again, thanks!

      • cecilkorik@lemmy.ca · 1 point · 20 hours ago

        In that case I’d definitely recommend taking a look at pi. It’s a fairly minimal and controllable starting point where you’re in the driver’s seat at all times, and most “features” are opt-in and handled responsibly. And since it’s extensible, you can use plugins like the ones here to add more protections against undesired actions if you want. If that turns out to be too minimal and you eventually realize you want something a little more like OpenClaw, you might want to look into Hermes-Agent, which has similar comprehensiveness to OpenClaw but seems to be a lot more responsibly designed. I don’t have any personal experience with it, but it seems to be what most of the “security-thoughtful AI keeners” (which feels like a bit of a contradiction, but people seem to be having some success with it) are using these days.

    • PetteriPano@lemmy.world · 2 points · 2 days ago (edited)

      Gemma4 doesn’t TurboQuant, but it is leaner on the KV cache.

      edit: looks like there are forks that do TurboQuant already