
  • Did he implement two different variations? OP said he used two different tools, not that his solutions were any different.

    That said… how so?

    There are many ways two brute force approaches might vary.

    A naive search and a search with optimizations that narrow the search area (e.g., because certain criteria are known and thus don’t need to be iterated over) can both be brute force solutions.

    You could also just change the search order to get a different variation. In this case, we have customer, price, meat, cheese, and we need to build a combination of those to get our solution; the way you construct that can also vary.
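
    For a concrete (if toy-sized) illustration, here are two sketches that are both brute force but construct the search differently. The lists are trimmed and the single rule is a stand-in borrowed from the “no ham with swiss” constraint discussed elsewhere in this thread; the real puzzle has far more constraints.

    ```python
    from itertools import permutations

    # Toy-sized stand-in for the puzzle: the real one has seven customers and
    # nineteen constraints; these lists and the single rule are placeholders.
    customers = ["Carol", "Henrietta", "Lula"]
    prices    = ["$3.75", "$5.75", "$6.75"]
    cheeses   = ["Colby Jack", "Swiss", "Havarti"]
    meats     = ["Ham", "Salami", "Bologna"]

    def ok(rows):
        # placeholder constraint set; the real puzzle would have many of these
        return all(not (cheese == "Swiss" and meat == "Ham")
                   for _, _, cheese, meat in rows)

    # Variation 1: naive brute force - build every complete assignment, then test it.
    def solve_naive():
        for p in permutations(prices):
            for c in permutations(cheeses):
                for m in permutations(meats):
                    rows = list(zip(customers, p, c, m))
                    if ok(rows):
                        return rows
        return None

    # Variation 2: still brute force, different construction - assign one customer
    # at a time and abandon a branch as soon as the partial assignment fails.
    def solve_incremental(rows=None, p=None, c=None, m=None):
        if rows is None:
            rows, p, c, m = [], list(prices), list(cheeses), list(meats)
        if not ok(rows):
            return None                      # prune this branch early
        if len(rows) == len(customers):
            return rows                      # complete, valid assignment
        cust = customers[len(rows)]
        for pi in p:
            for ci in c:
                for mi in m:
                    found = solve_incremental(
                        rows + [(cust, pi, ci, mi)],
                        [x for x in p if x != pi],
                        [x for x in c if x != ci],
                        [x for x in m if x != mi],
                    )
                    if found:
                        return found
        return None

    print(solve_naive())
    print(solve_incremental())
    ```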


  • The comparison to your SO’s approach is a bit sloppy. He didn’t reason out a solution himself; he wrote a program to solve the puzzle.

    How do you define “reasoning?” Maybe your definition is different than mine. My experience is that there is a certain amount of reasoning going on, even with non-reasoning LLMs. Being able to answer “What is the capital of the state that has Houston in it?” for example, is something I would classify as very basic reasoning. And now, LLM-powered chat bots are much more capable.

    All that “reasoning” or “thinking” really is, though, is a way to get additional semantic connections in place without:

    • giving an answer in the wrong format
    • filling up context with noise

    There are limits to how well these chat bots can reason. One of those limits is specifically related to the context size. As the context becomes larger, the model’s capabilities become worse. By asking it to show all its work, you exacerbated that weakness.

    That still doesn’t mean LLM-powered chat bots can’t reason, just that there are limits.

    I used to do puzzle books with these sorts of problems when I was younger, and they always came with multiple sets of grids with row and column labels filled out to facilitate the elimination approach. I don’t know that most people would think “Hey, it would be helpful to build a grid for each way of setting up these constraints.” One grid, sure, but I don’t think one grid is sufficient for this sort of problem.

    I don’t think I am - or that most people are, for that matter - capable of reasoning through all of the necessary steps in my head and aloud, without use of, at minimum, pencil and paper. I hope you wouldn’t say that I and most people aren’t capable of reasoning as a result.

    I just asked Claude Opus 4 (with extended thinking and analysis tools enabled) “Given the freedom to use any tools of your choice, design an approach to solve this problem. Feel free to use any tools that are necessary, including writing code in a language of your choice.” followed by pasting your problem text. It wrote and ran some code to solve it, gave me an answer that violated constraint 7, and when I pointed that out, ran some more code and gave me a correct solution on its second attempt.

    You are absolutely correct. I apologize for the error. Let me revise the solution to ensure constraint 7 is satisfied.

    Thank you for catching that error. Here is the corrected solution that satisfies all nineteen constraints:

    Final Solution:

    | Customer  | Price | Cheese     | Meat        |
    | --------- | ----- | ---------- | ----------- |
    | Carol     | $5.75 | Colby Jack | Ham         |
    | Darrell   | $8.75 | Gouda      | Corned Beef |
    | Henrietta | $6.75 | Swiss      | Salami      |
    | Kenneth   | $7.75 | Provolone  | Prosciutto  |
    | Lula      | $3.75 | Havarti    | Bologna     |
    | Omar      | $4.75 | Cheddar    | Roast Beef  |
    | Peter     | $2.75 | Muenster   | Liverwurst  |

    The key correction was recognizing that since ham cannot be paired with swiss cheese (constraint 7), and Henrietta must have swiss cheese based on the process of elimination, Henrietta cannot have ham. Through careful analysis of the remaining constraints, I determined that Carol has ham with her colby jack cheese, which satisfies all requirements.

    This solution demonstrates the importance of thorough verification when solving complex constraint satisfaction problems. Even when a solution appears complete, each constraint must be individually validated to ensure accuracy.

    This all took 5-10 minutes - and most of that time was spent verifying its solutions - so a third of the time your SO took.
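
    Incidentally, that verification step is easy to script. Here is a minimal sketch against the table above; only the “no ham with swiss” rule (constraint 7) comes from this thread, and the second check is a hypothetical placeholder.

    ```python
    # Check a proposed solution constraint by constraint. Only the
    # "ham can't be paired with swiss" rule (constraint 7) comes from the
    # thread; the second check is a hypothetical placeholder.
    solution = {
        "Carol":     {"price": 5.75, "cheese": "Colby Jack", "meat": "Ham"},
        "Darrell":   {"price": 8.75, "cheese": "Gouda",      "meat": "Corned Beef"},
        "Henrietta": {"price": 6.75, "cheese": "Swiss",      "meat": "Salami"},
        "Kenneth":   {"price": 7.75, "cheese": "Provolone",  "meat": "Prosciutto"},
        "Lula":      {"price": 3.75, "cheese": "Havarti",    "meat": "Bologna"},
        "Omar":      {"price": 4.75, "cheese": "Cheddar",    "meat": "Roast Beef"},
        "Peter":     {"price": 2.75, "cheese": "Muenster",   "meat": "Liverwurst"},
    }

    constraints = {
        "7: ham is not paired with swiss":
            lambda s: all(not (r["cheese"] == "Swiss" and r["meat"] == "Ham")
                          for r in s.values()),
        "placeholder: every price is used exactly once":
            lambda s: len({r["price"] for r in s.values()}) == len(s),
    }

    for name, check in constraints.items():
        print("PASS" if check(solution) else "FAIL", "-", name)
    ```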

    LLMs, even those with image analysis abilities, are lacking when it comes to spatial awareness, so your critique regarding using a grid to implement a systematic elimination approach is valid.




  • No offense taken, but thanks for the comment! If someone was offended and they saw your comment, I think it would probably help.

    I thought it was like the way one’s brain is wired that causes them to have slightly different perception than the rest.

    I’m no expert, either, but this is a solid explanation IMO. It’s why autistic people are prone to sensory overload; their brains don’t filter out noise (like the hum of the refrigerator, the sounds of people chewing, or background conversations) the way that most allistic people’s brains do. It could also very well have been the reason the woman from your post was confused, or at least a contributing factor - particularly if she was trying to figure out why allistic people did something.


  • Sorry, that’s incorrect.

    Autism is commonly comorbid with mental health disorders (aka “mental illnesses”) like anxiety, depression, ADHD, etc., as well as with intellectual developmental disorders, but autism is still considered, at worst, a neurodevelopmental disorder, regardless of where an individual falls on the spectrum.

    Both the DSM-5 and the ICD-11 are in agreement about this, for what that’s worth, but you could also just do a search for “Is autism a mental illness?” on DuckDuckGo, Kagi, Searx, Bing, Google, or whatever, if you want to confirm.



  • Copyright applies to unfinished works, too. There are many reasons copyright might not protect an unfinished work, but those same reasons apply to finished works as well.

    If someone steals your physical drawing, that’s theft. If they take a picture of it, then use the picture - or your picture + modifications - without your permission, particularly in a commercial work, then that’s copyright infringement, but not theft. If they steal your physical drawing and then take a picture and so on, then it’s both theft and copyright infringement.

    Most likely this wasn’t considered copyright infringement because the allegedly copied art isn’t copyrightable, e.g., game mechanics; or the plaintiff didn’t own the copyrights themselves and thus couldn’t sue (possibly the art was still copyrighted by the original artists, having never been purchased; possibly it was stock assets that were re-purchased by the defendant). There are any number of reasons. However, “the work wasn’t published” isn’t one of them.

    On the other hand, it’s quite likely they were able to sue for theft of trade secrets for that very reason. And they might have chosen to do that simply because proving copyright infringement is much more difficult.






  • hedgehog@ttrpg.network to Comic Strips@lemmy.world · The Witch’s Curse · 13 days ago

    The witch turned the creep into a woman and the spell was complete by the time she flew away. Unfortunately, like many women, the creep was born with the body of a man (she’s AMAB). Maybe the witch could have changed her body, too, but that would have made things far too easy, given that the point of the curse was to teach her empathy.



  • To be clear, I agree that the line you quoted is almost assuredly incorrect. If they changed it to “thousands of deepfake apps powered by open source technology” then I’d still be dubious, simply because it seems weird that there would be thousands of unique apps that all do the same thing, but that would at least be plausible. Most likely they misread something like https://techxplore.com/news/2025-05-downloadable-deepfake-image-generators.html, took “model variant” (which in this context generally means a LoRA) to mean “app,” and jumped too hard on the “everything is an open source app” bandwagon.

    I did some research - browsing https://github.com/topics/deepfakes (which has 153 total repos listed, many of which are focused on deepfake detection), searching DDG, clicking through to related apps from GitHub repos, etc.

    In terms of actual open source deepfake apps, let’s take “app” to mean, at minimum, a piece of software you can run locally (assuming you have access to typical consumer-targeted hardware - generally at least an Nvidia desktop GPU), counted regardless of whether you have to write custom code to use it (so long as the code is included), use the CLI, hit an API, use a GUI app, a web browser, or a phone app. Considering only apps that have, as a primary use case, the capability to create deepfakes by face-swapping videos, there are nonetheless several:

    • Roop
    • Roop Unleashed
    • Rope
    • Rope Live
    • VisoMaster
    • DeepFaceLab
    • DeepFaceLive
    • Reactor UI
    • inswapper
    • REFace
    • Refacer
    • Faceswap
    • deepfakes_faceswap
    • SimSwap

    If you included forks of all those repos, then you’d definitely get into the thousands.
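
    If you wanted to put an actual number on that rather than eyeball it, the GitHub API exposes a forks_count field per repository. A rough sketch follows; the repo slugs below are from memory and may not all be exact, and unauthenticated requests are rate-limited to 60 per hour.

    ```python
    # Sum fork counts for a few of the repos above via the public GitHub API.
    # Repo slugs are from memory and may not all be exact.
    import json
    import urllib.request

    repos = [
        "s0md3v/roop",
        "iperov/DeepFaceLab",
        "iperov/DeepFaceLive",
        "deepfakes/faceswap",
    ]

    total = 0
    for slug in repos:
        req = urllib.request.Request(
            f"https://api.github.com/repos/{slug}",
            headers={"Accept": "application/vnd.github+json",
                     "User-Agent": "fork-counter-example"},
        )
        with urllib.request.urlopen(req) as resp:
            forks = json.load(resp)["forks_count"]
        print(f"{slug}: {forks} forks")
        total += forks

    print("Total forks across the sampled repos:", total)
    ```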

    If you count video generation applications that can imitate people using, at minimum, img2img plus one LoRA, or two LoRAs, then these would be included as well:

    • Wan2GP
    • HunyuanVideoGP
    • FramePack Studio
    • FramePack eichi

    And if you count the tools that integrate those, then these probably all count:

    • ComfyUI
    • Invoke AI
    • SwarmUI
    • SDNext
    • Automatic1111 SD WebUI
    • Fooocus
    • SD WebUI Forge
    • MetaStable
    • EasyDiffusion
    • StabilityMatrix
    • MochiDiffusion

    If the potential criminals use easier ready-made (commercial) web-services instead of buying a RTX 5090, learning ComfyUI, dealing with the steep learning curve etc, we’d know we have to primarily fight those apps and services, not necessarily the generative AI tools.

    This is the part where, to be able to answer that, someone would need to go and actually test out the deepfake apps and compare their outputs. I know that they get used for deepfakes because I’ve seen the outputs, but as far as I know, every single major platform - e.g., Kling, Veo, Runway, Sora - has safeguards in place to prevent nudity and sexual content. I’d be very surprised if they were being used en masse for this.

    In terms of the SaaS apps used by people seeking to create nonconsensual, sexually explicit deepfakes… my guess is those are actually not really part of the figure that’s being referenced in this article. It really seems like they’re talking about doing video gen with LoRAs rather than doing face swaps.


  • Without searching for them myself to confirm, it’s plausible, especially if you take it to mean “apps leveraging open source AI technology.”

    There are a ton of open source AI repos, many of which provide video-related capabilities. The number of truly open source AI models is very small, but “open weight” AI models are commonly referred to as open source, and from the perspective of building your app, fine-tuning the model, or creating LoRAs for it, open weight is good enough.

    Some LoRAs come with details on the training data set, so even if the base model is only open weight, the LoRA can still be open source.

    Until recently, Civitai had LoRAs for famous people, e.g., Emma Watson, and apparently for regular people as well. There was a post here last week, I think (or maybe in some other community), linking to a 404 Media article about those being taken down thanks to credit card processors drawing a line in the sand at deepfake imagery.

    ComfyUI is a self-hostable AI platform (and there are also many hosts that offer it) that lets you build a workflow from multiple nodes, each of which generally integrates some open source AI tech that was released separately. For example, there are nodes that add the capability to perform the following (there’s a small scripting sketch after the list):

    • image generation with Stable Diffusion, Flux, HiDream, etc.
    • TTS with KokoroTTS, Piper, F5 TTS, etc.
    • video generation with AnimateDiff, Cog, Wan2.1, Hunyuan, FramePack, FantasyTalking, Float
    • video modification, e.g., LatentSync, which takes a video and lipsyncs it to a provided audio file
    • image manipulation, e.g., ControlNet, img2img, inpainting, outpainting, or even specific tasks like “remove the background” or “change the face to this other face”
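
    Those workflows don’t have to be driven through the UI, either. Here’s a minimal sketch of queueing one on a local ComfyUI instance; it assumes you’ve already exported the node graph from the editor in API format as workflow_api.json (the exact menu item varies by version) and that ComfyUI is listening on its default port, 8188.

    ```python
    # Queue a previously exported workflow on a locally running ComfyUI instance.
    # Assumes workflow_api.json was exported from the editor in API format and
    # that ComfyUI is listening on the default port, 8188.
    import json
    import urllib.request

    with open("workflow_api.json") as f:
        workflow = json.load(f)

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # The response includes a prompt_id you can use to poll /history for outputs.
    print(result)
    ```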

    If you think of a deepfake as just a video of a recognizable person doing a thing, you can create a deepfake by:

    • taking an existing video and swapping the face in each frame
    • faceswap-specific video approaches, e.g., Roop
    • an image to video workflow, e.g., with Wan: “the person dances.” You can expand the options available with Wan by using LoRAs.
    • a text to video workflow, where you use a LoRA for that person
    • an image+audio to video workflow, e.g., with FantasyTalking/Float, creating a lipsync to an audio file you provide
    • a video+audio to video workflow with LatentSync to make it look like they said something different, particularly using a TTS (like F5 TTS) that does voice cloning to generate the new audio

    My suspicion is that most of the AI apps that are available online are just repackaging these open source technologies, but are not open source themselves. There are certainly some open source ones, of course, though the ones I know of are more generic and not deepfake-specific (ComfyUI, SwarmUI, Invoke AI, Automatic1111, Forge, Fooocus, n8n, FramePack Studio, FramePack Eichi, Wan2GP, etc.).

    This isn’t a licensing issue, as many open source projects are licensed under MIT or Apache licenses, which don’t require you to open source derivative products. Even if they used the GPL, sharing source wouldn’t be required for a SaaS web app. Only the AGPL would protect against that, and even then, only the changes to the AGPL-licensed library would need to be shared; the front-end app could still be proprietary.

    The other issue could be that they don’t know what “app” means. If you think of a LoRA as an app, then the sentence might be accurate. I don’t know for sure that there were thousands of LoRAs of specific people that published their training data, but I wouldn’t be surprised if that were the case.