Been banned for AI-Slop on a few subs here on Lemmy as well as on Reddit.

I always provide a good amount of technical detail in my posts, and I try to be as transparent and communicative as I can about the details. My projects are very complicated and I try to document them well.

My project is pretty cryptography-heavy… sharing my efforts is an attempt to show transparency, but that openness gets used against the project when it's dismissed as AI slop (undermining the spirit of Kerckhoffs's principle).

It's 2026 and most developers are using AI. I have used it to create things like formal proofs and verification.

My project aims to be a secure messaging app. I have all the bells and whistles there along with documentation… but if the conversation can't move past "it's AI-generated", then it seems the cryptography/cybersecurity/privacy community isn't aligned with the fact that using AI is now common practice for developers of all levels.

AI is a tool. You can't (and shouldn't) "trust" AI to do anything without oversight. AI does not replace the due diligence that has always been needed. I don't "trust" my hammer to bash in a nail… I "use" the hammer. AI is no different: you are responsible for how it's used.

I've busted my ass on my project only for it to be called AI slop. I think that's completely fine when it comes from folks in the community; cryptography is a serious subject and my ideas and implementation SHOULD/MUST be scrutinised… but it's simply ignorant if mods are banning me over the quality of my work, considering the level of transparency and my engagement in discussions about it.

It's a bit reductive to call it slop. I think I try harder than most in providing links, code and documentation. Of course I used AI… and it's clearer for it. (You can find more detail on my profile.)

I am of course sour from being banned, but am I wrong to think my code isn't AI slop? Some parts of my project are clearly lazy UI, but I'm not sharing on some UI/UX/design sub. The cryptography module has unit tests and formal verification. If that is AI slop and can result in me being banned, I simply don't have faith in that community to be objective about where AI can contribute.

While it's understandable that people don't want to review AI slop, I think the cryptography/cybersecurity community needs to get on board with the idea of using AI to help review such code. Am I wrong? Is the future of cryptography still people performing manual review of the breathtaking volumes of AI code?

    • xoron@programming.devOP

      AI slop is easy to generate, but there needs to be a recognition that at some point AI-generated code is no longer slop. The failure to recognise that is what seems to have got me banned.

      • toebert@piefed.social

        Even if that were true (and in some rare cases it probably is) the machine is trained on stolen data, ignoring all licensing or companies selling people’s contributions without their approval - and that’s just the tip of the iceberg.

        To call it slop is a great way to discredit it and to not support an unethical business/technology.

        • xoron@programming.devOP

          To call it slop just undermines the time and effort I put into the project. It's not just code; I put effort towards testing and documentation. But sure… if you want to believe you're poking holes in big tech's practices here.

          • Luiz Cavalcanti@lemmy.world

            To focus on the supposed time you put into it ignores the massive and numerous problems people here pointed out in many responses.

            I get you are hurt by being banned, I really do and I probably disagree with it. But the problem with this “tool” (as you repeatedly frame it, trying to make it neutral) is not centered on you or any other individual. I’m sorry, sincerely.

      • baod_rate@programming.dev

        at some point ai-generated code is no longer slop

        That point is when you have a human expert validate that the ai-generated code is correct. If the community as a whole has given up on doing that, it does not retroactively make all LLM-generated code not-slop, it just means slop is the norm.

        And obviously from the reception you’ve gotten, even that has yet to occur.

  • graynk@discuss.tchncs.de

    Cryptography is notoriously easy to get wrong. If you don’t know enough about it - you should not offload it to the hallucination machine, because you will not be able to verify it properly, and those who can - will not bother to.

    This is not what a real audit looks like and it should not be presented as such. This “audit” is, in fact, slop.

    Auditor: Security Analysis (Automated + Manual Review)

    Do you not see the problem in this line?

    The implementation uses real cryptographic primitives

    Or this?

    • xoron@programming.devOP

      Perfect. You get it. You understand that generating an AI audit is wild!

      https://www.reddit.com/r/CyberSecurityAdvice/comments/1su8lir/security_audit_feedback_from_radically_open

      The AI audit came after a long back-and-forth with the various communities that asked for an audit. Of course they asked for a professional one… but those who ask must know that professional audits are prohibitively expensive, especially for a solo vibecoding dev like myself.

      I also understand that people would prefer a project with a team of experts… sorry to break it to you, but a team of experts is not going to hire itself onto an unfunded project like this.

      While the security audit, unit tests, formal proofs and verification are not good enough when done with AI, my hope was that they could serve as a starting point for anyone like ROS to perform an actual review. I can't offer more transparency than open source, documentation and discussion.

      • graynk@discuss.tchncs.de

        of course they asked for a professional one… but those that ask, must know that they are all prohibitively expensive. especially for a solo vibecoding dev like myself

        then… vibe-code something else?.. why do you think that you should be making something you are not an expert in, that can potentially put your users into danger and make you liable for it? if it’s a learning project - great, go wild. but if it’s intended to be used, then sorry - this is just an irresponsible approach that should not be entertained by anyone. I get that you have “positive intentions” but pick some other venue that you can get right. or contribute to an existing project (being mindful of contribution guidelines).

        • xoron@programming.devOP

          I vibecode a lot of things. My project is not inherently dangerous; people can use any software irresponsibly. In my project and all my communications about it, I make it clear to users to use it cautiously and that it's presented for testing and demo purposes. That's mentioned in all of my posts, and I also have terms and conditions within my projects that explain as much.

          Nobody is being tricked into sharing sensitive information… in fact I made a proactive attempt to create something that doesn't need any personal information.

          Don't tell me what I should and shouldn't be coding. I put time and effort into testing and verifying. This is the issue with mentioning AI: it undermines all other efforts. It's the low-hanging fruit of criticism.

          • solomonschuler@lemmy.zip

            “I vibecoded a lot of things, my projects is not inherently dangerous”

            Except it is dangerous. The fact that you declare yourself a vibe coder implies to me that you don't know what's going on in the system you're developing. Correct me if I'm wrong, but everything you know about your project from this point on is strictly a function of what generative AI is outputting. Do you see how malicious that is, when all your knowledge comes from a single source and you believe it?

            The reason I'm pointing this out is that these AI models make mistakes due to text compression. Much like JPEG, when you compress a file, data is lost. The so-called "hallucinations" of AI models are caused by this data loss from compressing text.

            Now, whether engineers are trying to optimise that or not does not matter; it is a fundamentally flawed system, and if you use it as a framework for developing codebases, you wouldn't be able to tell whether the code it generates actually implements the logic or is fit for commercial use. You involuntarily agree with the code it generates and don't question anything about it.

            This doesn't even take into account the errors I've observed that are abstract for humans to understand - things like time complexity, or code relating to physical hardware or microcontrollers, where what the AI generates functionally does nothing. Someone who cannot unit test something abstract like time complexity until it runs wouldn't know.

            What I'm trying to point out is that you have to have a level of skepticism about the code it generates. Learn the subject (in whatever you do); that allows you to question what it generates, and use it to verify the code you write rather than having it write code for you.

            • xoron@programming.devOP

              This generally seems to allude to my due diligence, and to whether it's low-effort AI.

              It's skepticism that makes me put attention towards docs and various details.

              For example: I tried to get a security audit. I can't get one for free, so I created one with AI. I'd like to be clear that I understand how my apps work and was able to articulate that to the AI, to the best of my ability, to generate the security audit. I was exhausted by the experience of creating the audit with AI, and it provided me with good information and advice. I stand by the feedback there: it isn't ready for production.

              In all my posts on all platforms I'm sure to mention that it isn't production-ready (the same for the repos on GitHub)… but the general aim is to create something secure.

          • graynk@discuss.tchncs.de

            then what is the point of it existing, if it can't be used seriously? why should people spend their time on it, when there isn't a solid base to build on? if you want to do something useful - contribute to an existing project. if you just wanna hack away at something - sure, do that, just don't be surprised if other people happen to hate it when you try to present it as a serious project. nobody would bat an eye if you presented it as "I wanted to try and implement the Signal protocol, this is what I've learned and how far I've gotten".

          • lichtmetzger@discuss.tchncs.de

            my project is not inherently dangerous.

            It is not “your” project - it was generated by a glorified chatbot. Since you lack the experience to judge its output, I cannot trust you to verify the security of the project.

  • spectrums_coherence@piefed.social

    the cryptography module has unit tests and formal verification.

    I suspect your formal proof refers to the following files: https://github.com/positive-intentions/signal-protocol/tree/staging/formal-proofs

    It contains 6 files, each with less than 100 lines of code, and the claim seems to be that they prove almost the entire security of the Signal protocol.

    There are three possibilities here: (1) the formal proof community has advanced so much without me knowing, (2) your AI produced complete garbage, or (3) your AI made groundbreaking advancements in formal methods. My best-known state of the art is Signal* from Project Everest: it involves tens of components and years of work by top academics and proof engineers.

    Each file there, like fstar/Impl.Signal.Core.fst, would already be longer than your entire proof; even just the hints provided to the SMT solvers in fstar/Impl.Signal.Core.fst.hints are longer than your entire proof.

    So I am interested in what technique you applied to achieve almost the same effect as this monumental project with less than 5% of the code.


    You have also claimed there is support for Rocq, Lean, and F*, and the code is here https://github.com/positive-intentions/signal-protocol/tree/staging/signal-protocol-core/proofs

    I looked into the Rocq and Lean parts of the proof, and there is no proof: all the "correctness" claims are declared as axioms, which are not proven.
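    (For readers unfamiliar with the distinction, here is a minimal Lean sketch - a trivial arithmetic property as a stand-in, not anything from the repository. An `axiom` is accepted without proof, a `theorem` must actually be proven, and `#print axioms` reveals what a result silently assumes:)

    ```lean
    -- An axiom is accepted on faith: nothing is checked, nothing is proven.
    axiom claimed_correctness : ∀ n : Nat, n + 0 = n

    -- A theorem of the same shape must carry an actual proof term.
    theorem proven_correctness : ∀ n : Nat, n + 0 = n :=
      fun n => Nat.add_zero n

    -- Lists the unproven assumptions a result depends on.
    #print axioms proven_correctness
    ```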


    So far, I have sat down and read your code, and I feel it is either a major breakthrough or a complete waste of my time (I am unfortunately leaning towards the latter). I would be furious if my students or colleagues handed me work of this quality, and I imagine all the experts reading your code will likely feel the same.

    I am not angry because your work involves LLMs (I don't like that, but I won't be angry about it), but because you disrespected my time and effort in reviewing your code by presenting work that is far from your claims. In turn, I also cannot provide constructive and technical feedback to you, as the technical part of your project seems hollow to me. IMO, disrespecting the time of your peers is a very good reason to ban people from a community.

    Academia is currently being flooded with AI; much of it is used by competent individuals, so AI errors are able to hide in obscure parts of the process. For the first time, academia needs to deal with a large number of submissions that are not in good faith, and that is frustrating for us volunteering reviewers. Your readers, who are also volunteering their time to help you improve, will likely feel the same.

    AI is just a tool, that is, you will get as much expertise out of it as you put into it. Like a computer, it will make producing work easier and faster, but it cannot help you build anything you do not understand yourself.

    I am glad you are interested in crypto and verification. But making a meaningful contribution will take honest effort, as opposed to just prompting a couple of so-called artificial "intelligences".

  • Pamasich@kbin.earth

    In my opinion, slop is slop. AI tends to result in slop, but it doesn’t have to. But to ensure it’s not slop, one has to put in effort and time. Which kind of defeats the purpose of using AI in the first place. So I think it’s obvious why most people default to AI involvement = slop.

    • xoron@programming.devOP

      AI involvement = slop

      That's the part that seems disconnected from reality. I'm sure there are still people cranking out code manually, but let's be real: it isn't the norm anymore.

      In cybersec there is more scrutiny than in most fields against the use of AI… but I simply can't believe that the folks at WhatsApp, Signal or SimpleX are not using AI in their daily workflow.

  • thedeadwalking4242@lemmy.world

    I’ve read some other comments and wanted to add.

    You cannot use an LLM to verify its own work

    They have no ability to think. Any intelligence they have is extremely limited. They are mostly automatic copy-and-paste machines: they pull code from their training data and online sources and attempt to compose them.

    Using an LLM to verify its own work is like asking a criminal to run their own trial.

    That's just not how any of this works. I think you should take a step back from the LLM and really start evaluating your work more critically. There is more to software than "it works!"

    • xoron@programming.devOP

      I started off with a version I created manually without AI. I know how to do this old-school (I tried); that was a different kind of slop.

      https://github.com/positive-intentions/chat

      I use AI in a way I think is appropriate. I check as much as I can myself too. I post online about the details and ask questions. I can iterate with AI. I may be naive to think I know how to inspect what is created, so I share it online. I'm not sharing slop; this is the best I can do. Of course there are countless points of improvement, but there are only so many hours in the day.

      You're sharing a valid opinion, but it's difficult for me to quantify my efforts. I'm sure you don't think I just asked the AI something basic (e.g. "verify this code is correct").

      • thedeadwalking4242@lemmy.world

        If you can't write it manually and have it not be slop, then you can't program with an LLM effectively.

        It doesn't matter how much instruction you give an LLM, it fundamentally cannot evaluate itself; to ensure it's evaluating correctly, either you or someone else needs to evaluate it. These are not deterministic machines and will "lie" to reach their goals. And I put that in quotes because it's not really lying, that's too much personification.

        These things are not good for literally anything beyond minor transformation or boilerplate.

        Trust me. If you actually spend the time learning to write well-crafted software by hand, you will save time and get a better result. LLM-based coding is an anti-pattern.

        You're not getting pushback because "programmers are upset their jobs are getting stolen"; you're getting pushback because you're falling for LLM company propaganda. LLMs just are not there yet.

        If more than 20% of your code is written by an LLM, you're using it wrong.

        • xoron@programming.devOP

          Here is the open source version I created without AI: https://github.com/positive-intentions/chat

          It's fairly ugly and not user friendly, but the core mechanics of secure encrypted communication are demonstrated and documented. It was clear after creating that version that open source was worthless. With or without AI, slop has always been around… for better or worse, I was creating slop before it was cool.

          I then created the newer version of the messaging app with AI (it isn't fully open source but works in a similar way): https://p2p.positive-intentions.com/iframe.html?globals=&id=demo-p2p-messaging--p-2-p-messaging&viewMode=story

          Having done it manually and then with AI, I can clearly compare why the closed-source version is more appealing to users. It's not just a nicer UI; it's better documented.

          You're making the assumption that if I didn't have AI, I wouldn't be able to work on my project. I'm naive enough to think that isn't true. The documentation and code might not be of the same quality, but I'm sure I could still crank out code the old-fashioned way.

          • thedeadwalking4242@lemmy.world

            Slop is slop, AI slop < human slop.

            All you're doing is advertising how much you don't respect your work or the work of others.

            Code is the app, code is the functionality. If the code is crap then so is the functionality.

            You've made a tool with a loose handle, an axe with a crack in its head. It's gonna fly off and hurt someone.

            Just because you are a crappy programmer doesn’t mean everyone else is.

            There is nothing wrong with open source. Your open-source code just sucks.

            Idk what to tell you besides to take a long critical look at yourself and your work.

  • toebert@piefed.social

    I don’t think everything is getting called ai slop, but I would say if any part of your project is ai slop (like your “lazy uis”) I’d also immediately lose trust in the entirety of the project, especially if it’s intended to be around security. I do think most projects that use AI for code generation are slop though, I’ve seen far fewer examples of good use (i.e. where the output looks human written because the operator reviewed and refactored every part of it, or where it was used to write small parts of functions rather than entire functionalities)

    Your last sentence I think provides a great argument for why people here (and more and more broadly in engineering) hate on ai generated code in general. It produces such vast quantities of code (and often unnecessarily) that it becomes infeasible for a human to review it, immediately requiring us to place trust in the machine to both generate it and review it, and to continue maintaining it while the human operator probably does not even have full understanding of what’s changing. A machine, that we all know hallucinates and generates often low quality garbage, including severe security vulnerabilities by design. According to GitHub, your project has millions of lines of changes on a weekly basis in the earlier days, that does scream slop to me.

    Last, AI is more and more hated due to the increasing number of horrible impacts it has on our world; personally I'd not support AI-generated projects on that principle alone.

    • xoron@programming.devOP

      The recent post that got me banned was a copy of this post here:

      https://www.reddit.com/r/cybersecurityai/comments/1sxvrmu/browserbased_file_encryption_no_install_or/

      I make a point in all my posts to be clear about the caveats. I'm not promoting this to replace anything. Details to find out more are there, along with advice not to use it for sensitive data.

      For my messaging app, the caveats are similarly mentioned: https://positive-intentions.com/docs/technical/p2p-messaging-technical-breakdown

      My projects are research and development projects, which I make sure to make clear when I post about them. I'm fairly consistent with advice around cautious use… knowing full well that it will deter people. I'm proactively seeking criticism in order to improve them.

      It produces such vast quantities of code (and often unnecessarily) that it becomes infeasible for a human to review it, immediately requiring us to place trust in the machine to both generate it and review it, and to continue maintaining it while the human operator probably does not even have full understanding of what’s changing.

      Bingo!… You're framing it as a negative, understandably, but unless I'm mistaken, that's the way it's going to have to go. Software development, broadly speaking (for better or worse), is going to be AI-generated. The tooling and methodologies have to keep up.

      horrible impacts it has on our world

      That's pretty vague; I'm sure it does some good too. AI is a tool. It's easy to talk about how AI is impacting people badly. Personally I've been unemployed for the past few months. It's a horrible experience to go through countless interviews thinking I aced them, but still come up with a rejection because the field has become so competitive. But I don't blame AI for that. It's a tool that I need to learn how to use. Perhaps others use it better than me.

      • toebert@piefed.social

        I don’t know the context around you getting banned, unless there’s some specific rules you violated. I am not in support of that, but it’s also not the focus of my message.

        I disagree with development having to go that way. If anything, the hatred towards ai is a sign that it’s actively not sought after, or at least not with LLMs. If they managed to develop actual AI that is on par with senior engineers, maybe? But we don’t have that. What we have is faulty and inherently flawed. Why would we have to push ahead forcefully with it…?

        I didn’t include a list of why ai is harmful as the post was already long, but displacing workers is just 1 point.

        • massive waste of resources (water, electricity) for tasks which can already be achieved without AI at a fraction of the compute cost (think search engines as an example). Also consider the environmental impact in a society where a lot of our power still comes from burning fossil fuels.
        • a war on consumer hardware (all compute components "sold out" for 1-2 years ahead, making everything expensive for average people)
        • destruction of the workforce pipeline (even if only junior roles get displaced by AI, we will simply not have a pipeline of new staff to step in once seniors have had enough; in any industry this is catastrophic, especially when the machine doing this is not actually able to fully replace staff)
        • building a dependence on closed-source, subscription-based tooling, or ending up locked out of your own codebase because it's infeasible to work on it without the tooling once you've started
        • theft of intellectual property, ignoring all licensing for training data, or companies selling individual contributions
        • the entire thing being funded by imaginary money propped up by a circle of loans driving us towards yet another financial collapse across the modern world

        I’m sure there are even more.

        Not all of these are the fault of the technology, but I’m more than happy to throw the entire technology and everything around it under the bus if it means it makes it easier for people to unite against these companies - which I think it does.

        Saying “it’s a tool and provides value” is like saying “force feeding chickens in a tiny cage” is a tool that provides value. True? Yes. Valid? No.

  • CorrectAlias@piefed.blahaj.zone

    I avoid slop code like yours because typically the user of the slop generator has no real idea of how things actually work, the slop is over-“engineered”, and it’s likely full of security issues. Further, it also wastes tons of resources just for poorly written slop.

    I especially wouldn’t ever touch your cryptographic slop.

  • luciole (they/them)@beehaw.org

    No matter how hard you pet your LLM, this project is not your work. LLM output attribution is a gray zone by design. Your assumption that vibe coding has overtaken software development is a big red flag imho. I wonder where you’ve acquired this belief. If you’ve been banned from multiple communities already I recommend you reflect upon this.

  • entwine@programming.dev

    I think you need to speak to a mental health specialist, because AI psychosis can be really destructive. We all have problems, but using chat bots to make us feel better is dangerous for you and those around you, even if it feels good in the moment. These bots are designed to tell you exactly what you want to hear so that you become addicted to them.

    I’m going to guess you didn’t accomplish much as a software engineer before AI? The personal deficiencies at the core of that are still there even if you use AI to tell you otherwise. I won’t speculate what those deficiencies are, but I just want you to engage in some honest introspection. Absolutely nobody will trust someone like you to handle such a sensitive topic like cryptography. Stop wasting your short time on this earth on something so stupid. Go make literally anything else.

    • xoron@programming.devOP

      Wow, that's deep analysis and advice. I generally think I do well.

      I work on my project and cryptography because it's interesting. I worked with cryptography long before AI… but like a "regular" developer on a side project, I'm going to use AI.

      I actively seek advice about the code in my project. I only share my work after I've put in what I think is enough time and effort. It clearly isn't enough that the project "works". In cybersec it's important for code to be audited or reviewed, and that fundamentally isn't an option for a project like mine unless I share something that gets described as "AI slop". That feedback is fine. It's important that it's open source.

      It might not be fun for most, but this is something I work on because it's enjoyable to me. It's open source for transparency and criticism. I just want to take "AI" off the table as a criticism, because I can't quantify my involvement… which is understandably a wild thing to ask, so I try to approach it with caution.

      I work on several projects that interest me. Many, but not all, are open source. They exist because I woke up one day and decided I wanted to create something.

      • entwine@programming.dev

        i generally think i do well.

        What are some of your engineering or research accomplishments? Where is your linkedin or github profile showing projects before ~2022?

        i worked with cryptography long before AI

        What kind of work did you do with cryptography? It couldn’t have been much if you don’t see what’s wrong with what you’re doing. “I set up LetsEncrypt on a web server” doesn’t count as experience.

        Any answer you provide to these questions is worthless unless you're willing to reveal your identity here. That's the only way to build any credibility, and without credibility nobody should trust you with something like this.

        this is something i work on because its enjoyable to me

        No, this is something you’re working on because you’re hoping to make money from it. I remember you posting about this project some months ago and you mentioned as much. If it isn’t AI psychosis, then it’s a grift and you’re a snake oil salesman. Idk what you’re expecting to hear? This is a programming community; it’s probably the last place you’ll get positive feedback for this obvious trainwreck.

        • xoron@programming.devOP

          The only project relevant here is: https://positive-intentions.com/

          The parts I want open source are on GitHub. My project wasn't always open source; I created it without AI agents, then open sourced it thinking it would gain more trust with users… and it did, but a key observation is that there are folks like yourself who will never be satisfied. If open source code, docs and my communication aren't enough… I have no delusion that identifying myself would benefit the project in any way. It's simply a vector by which people will highlight why I'm not qualified to work on the project.

          Criticism in cybersec is common and expected; my ideas should be challenged. But the code is right there. Feel free to ignore any details you think might not be up to your quality standard. You linked my previous post, which is more technical about how my app works. You can ask for further clarity on those details… but your criticism of previous posts suggests to me that you don't actually want clarity, because you already have the references to find out more.

          The project is enjoyable for me; it's why I still work on it. Would it be wild for me to want to make money from it? I'm trying to be more transparent about my process. The post here highlights my AI usage and how I'm using it to create high-effort work. "High-effort" is hardly quantifiable, but I see many responses along the lines of "AI can't be trusted to do things perfectly"… as if I don't also agree with that. You linked my previous post, which I would hope made it clear that my AI prompt wasn't "create me a messaging app".

          A key and worrying observation is that mentioning that I use AI is the only thing that makes a difference in feedback about the project (as per the subject of this post). You can see that my previous post was significantly better received compared to this current post. That is the project where I'm using AI… because, duh, it is a game changer.

          The point I'm making in the OP still stands: people can't see past the project's AI involvement once I mention it. Human effort has never been easy to quantify… the best you've got is story points, and that's hardly meaningful.

          • entwine@programming.dev

            Are you even reading the criticisms people are making, or just asking an LLM to generate responses for you? Or have you developed brain damage as a result of so much LLM usage? Those are the most charitable explanations for your behavior here, because you seem to be incapable of understanding the criticisms. This has nothing to do with “effort”.

            When I said you should seek a mental health professional, that wasn’t a joke. Some people might say that as a joke, but I wasn’t. AI psychosis is a serious thing we don’t fully understand yet. From my perspective, you’re just an annoying Lemmy user/possible troll, which is easy to ignore. But if you’re not just trolling, then you’re probably damaging your health and/or relationships for a very stupid reason.

            • xoron@programming.devOP

              So now you're saying AI psychosis is a serious thing we don't understand… but you seem to be convinced that you're qualified to diagnose it?

              Unlike many others responding to this thread, you've seen my work and even linked a comprehensive post about how my project works. Your pushback here doesn't contain any substance. You call it AI psychosis so you can avoid giving the project actual attention.

              Ultimately I don't think my project is interesting to you. If I'm seen as an annoying Lemmy user/troll, I encourage you to block me.

        • Senal@programming.dev

          The rest of your reply aside, I do disagree with one point in particular.

          Where is your linkedin or github profile showing projects before ~2022?

          A github public profile and linkedin history are not reliable indicators of comparative programming competence.

          I.e., it's entirely possible to be a competent programmer and also not want to participate in self-marketing or promotion.

          They are sometimes indicative of the soft skills that go along with being a programmer.

  • mlatu@moist.catsweat.com

    using AI is now common practice for developers of all levels

    is not a fact.

    but that one person standing in front of their (in part) dice-rolled "work" is not a welcome sight - that is one.

    any dev would much rather brown their own greenfields than help you regreen your AI-brownies…

  • tabular@lemmy.world

    Was the AI you're using trained like most: scraping the internet and disregarding the licenses of code?

    • xoron@programming.devOP

      I used opencode (various models) and Cursor (Claude, Composer).

      How these models are trained is arguably not ethical. The disregard of code licences is not something I can influence.

  • farbidden_lands@quokk.au

    Unless you invented some new form of encryption, why are you generating so much AI slop?

    Just reuse human-made cryptography libraries that are battle-tested. Then you won't have to do disastrous things like putting AI to work reviewing your AI slop.
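    (For illustration, a minimal sketch of what leaning on a battle-tested library looks like - here libsodium via libsodium-wrappers, with names and flow chosen just for the example:)

    ```js
    // Sketch only: authenticated secret-key encryption with libsodium,
    // where primitive choices, nonce sizes, etc. come from the library.
    const sodium = require('libsodium-wrappers');

    async function demo() {
      await sodium.ready;

      const key = sodium.crypto_secretbox_keygen();
      const nonce = sodium.randombytes_buf(sodium.crypto_secretbox_NONCEBYTES);
      const ciphertext = sodium.crypto_secretbox_easy('hello', nonce, key);

      // Decryption fails loudly if the ciphertext was tampered with.
      const plaintext = sodium.crypto_secretbox_open_easy(ciphertext, nonce, key, 'text');
      console.log(plaintext); // "hello"
    }

    demo();
    ```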

    You know that it lies, gaslights, writes or deletes production databases, tests etc as it pleases.

  • thedeadwalking4242@lemmy.world

    Look, I'm all for using LLMs for tedious or straightforward transformations of easily verifiable logic. The issue is that LLMs are sycophantic by nature, and we are seeing a lot of newly freed "geniuses" who have promised "no no no. You see! I know the secret to using them for good!"

    It's like the one ring. If you start using it for anything beyond reformatting, anything that requires critical thinking, you've already trapped yourself.

    You’ll feel like your work is quality when it isn’t.

    Personally I still think the quality of LLM code is crap for pretty much anything; it's much better done by a well-seasoned developer, which is harder to come by than people think. An LLM can help in some narrow cases, but not many.

  • hendrik@palaver.p3x.de

    It's a broad topic. Every time I see some new AI-coded project linked in the selfhosted community, it's kinda shit… I've had hallucinated installation instructions. Very exaggerated claims of what it's supposed to do… Sometimes it looks okay, but some buttons don't do anything, and then I look at the code and everything is more of a stub. Some projects have ridiculous security issues, like someone finding a master key buried in the code, and of course none of the "developers" ever noticed because no one ever had a look at the code…

    You're somewhere in the same territory. Maybe you're the one who gets it applied properly. But once I notice the tell-tale signs of vibe-coding, I'm going to start looking at it with the prejudice shaped by my prior experience. And I tend to be right most of the time.

    But with that said, I don’t think it’s healthy to have a war over it, ban people and yell at each other. Most I want is transparency. I think all software projects should just disclose if and how they use AI, to what extent. And the users can make up their mind.

    And with cryptography code… Isn’t that a bit dangerous? From my own experience, AI models tend to learn a lot of example code and the standard documentation of libraries… Wikipedia articles and such… And then generate responses closer to that, than completely new thoughts… But(!) all these examples, tutorials and boilerplate code use a lot of shortcuts to explain it in simpler terms. Shortcuts that weaken security. And I wouldn’t be surprised if your AI is then going ahead to reproduce that, and casually forget about the steps to prepare the numbers and follow up on the next steps if that wasn’t ever in the Wikipedia example code. And I’ve seen a lot of wrong advice on StackOverflow and Reddit, so you better hope it also didn’t internalize that. There’s some fairly common myths about security or cryptography details out there. And I never know if your average Claude learned more from Reddit discussions, or from computer science technical literature… And you probably used Claude to skip reading the computer science books as well (and have a really close look at the code), or you probably would have just typed it down yourself. So I’d expect your software to be roughly as sound as newbie code, up to the average of projects that’s out there on GitHub, which your AI has probably learned from. Not any better than that.
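    (To make that kind of shortcut concrete - a hypothetical snippet, not taken from this project: tutorial code often hard-codes or reuses an IV, and with AES-GCM, nonce reuse under the same key is catastrophic, yet a model trained on such examples can happily reproduce it.)

    ```js
    // A common tutorial shortcut: one hard-coded IV reused for every message.
    // Reusing a GCM nonce under the same key leaks keystream and breaks the
    // authentication guarantee -- exactly the kind of pattern a model can copy.
    const iv = new Uint8Array(12); // all zeros, never rotated: do not do this

    async function encrypt(key, plaintext) {
      return crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintext);
    }

    // Safer: a fresh random IV per message, transmitted alongside the ciphertext.
    // const iv = crypto.getRandomValues(new Uint8Array(12));
    ```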

    • xoron@programming.devOP

      Most I want is transparency.

      I agree with all you're saying, especially this, which is why I entertain the idea of open source at all. What does transparency look like to you? Code? Documentation? Open discussion? Transparency is undermined when I'm trying to talk about something clearly complicated in order to seek feedback.

      cryptography code… Isn’t that a bit dangerous?

      In software dev we have things like unit tests (you already know that)… but when diving into cryptography we also have formal proofs and verification we can use. It doesn't need AI to extract an abstraction from the code implementation to run verification on. The tooling there is common practice, and if we question whether AI is using it properly, we bring into question whether the tooling itself is good enough.

      • security audit
      • unit tests
      • formal proof
      • formal verification
      • documentation

      Individually, they could each easily be AI slop, but combined I hope they can serve as a starting point for a proper review. I don't mean a proper review from you either… I was seeking a review from orgs that specialise in such reviews.
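      (As a rough illustration of what the different layers actually check - a minimal sketch using WebCrypto in Node, not the project's actual tests: a unit test like this only asserts behaviour for the specific inputs it runs, such as one encrypt/decrypt round trip, whereas a formal proof has to argue about all inputs.)

      ```js
      // Minimal sketch (not the project's tests): an AES-GCM round-trip check
      // using WebCrypto in Node. It only verifies the inputs it actually runs.
      import assert from 'node:assert/strict';

      const key = await crypto.subtle.generateKey(
        { name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']
      );
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const message = new TextEncoder().encode('hello');

      const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, message);
      const decrypted = await crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, ciphertext);

      // decrypt(encrypt(m)) must equal m for this one message and key.
      assert.deepEqual(new Uint8Array(decrypted), message);
      ```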

      https://www.reddit.com/r/CyberSecurityAdvice/comments/1su8lir/security_audit_feedback_from_radically_open

      You make a lot of assumptions about how I code and what I understand about my project. Enumerating what I've done and plan to do wouldn't do it any justice… but I will say this project is the result of a long-term effort. I created the project without AI originally. The idea is unique around client-managed cryptography (https://github.com/positive-intentions/chat)… ultimately it was clear that open source is dead, and so I've started introducing less transparency into the project as I introduce a closed-source UI. I still keep the cryptography-related modules open for transparency (whatever that's worth when people see that AI was involved).

      I wouldn't put my project out there if I didn't have faith in the implementation. I have actively sought feedback and received good advice, from which I iterated and improved. It's particularly concerning if I'm being banned from communities for posting slop.

      • hendrik@palaver.p3x.de

        diving into cryptography we have formals proofs and verification we can use

        Did you do formal proofs or verification? I had a quick look at the repos and I can’t find them.

          • hendrik@palaver.p3x.de

            Uh, sorry, your code is a bit difficult to read. There seems to be one implementation in the 'src' directory, which is referenced in your ProVerif pi code. But then there's another one(?) in the 'signal-protocol-core' directory, which seems to be the one that's actually built?

            And how did you arrive at those proverif files? Do they come from your Rust code? How? And how do you make sure they relate to your code? I mean for all I know they could contain some correct design, while your code does something else… I’m not really an expert at this, but they seem (to me) just to appear in some commit but I don’t really get how it relates to the Rust code. Or how it came to be.

            And then it's a bit difficult for me to tell whether your chat uses the cryptography code from the 'cryptography' repository or the one from the 'signal-protocol' repository. It seems to load both?! But your own AI security audit flagged a lot of issues with your 'cryptography' repository. I can't tell if that's still up-to-date information, but there was some report with mostly exclamation marks and red crosses in it, and a recommendation not to do it this way.

            While at it, I had a look at the browser’s developer console, and you have a lot of JavaScript warnings and errors there. Which I guess isn’t good?! And another sidenote: If I were you and developing a secure and private messenger, I’d skip all the requests to Google fonts, AWS, JSdelivr, third party JS CDN, analytics… It directly connects to Youtube and another analytics service which gets broad permissions. The infrastructure isn’t entirely controlled by you, for example the signalling server is the default free one. All of that isn’t great for privacy. Plus your content security policy has way too many asterisks in it with external domains and domains you control but there’s debugging stuff on there. And I don’t think you even put further restrictions on what JavaScript can be loaded or injected, other than the CSP?!

            And hax just translates code and is supposed to do a bit of type-checking and see if your code generates things with the correct length. It doesn't currently do any theorems or verification regarding the cryptography, does it? I'm not sure where to look.

            Sorry, I'm not exactly a security researcher… Maybe my layman's audit is shit… But I think there's quite some stuff going on which pretty much renders any verification of a single component irrelevant. I could be wrong though. But I'd still be interested to hear how the code relates to the ProVerif files, and what kind of assurance there is that they're the same.

            • xoron@programming.devOP

              Hi, thanks for taking a look. Sorry for the delay in responding; I wanted the heat on this post to settle down a bit.

              I originally started with src, but when it came to formal verification and proofs, I came to the conclusion that you can't simply point the tooling at a single folder; the various functions are better separated to make them easier to document.

              Unlike the formal verification with tools like hax, the formal proofs are only loosely related to the code. There isn't a direct relation between the ProVerif files and the code itself; if I change the code, I should also adjust the ProVerif. I documented it on the website to help me keep track of the functionality.

              https://positive-intentions.com/docs/technical/signal-protocol-formal-verification/proverif https://www.reddit.com/r/cryptography/comments/1evdby4/comment/liwyn3o/

              Regarding how the cryptography is loaded, I'm using module federation. The signal protocol is imported into the cryptography module (so the app doesn't need to load the signal-protocol project explicitly), and that cryptography module is itself loaded into the p2p-framework repository so that I can automate the handling of p2p authentication.
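              (Roughly, the nesting looks like the sketch below - a hypothetical webpack config with made-up names and URLs to show the shape of the chain, not the project's actual build config.)

              ```js
              // Hypothetical webpack config for the intermediate "cryptography" module:
              // it consumes the signal-protocol remote and exposes its own API upward,
              // so the app and the p2p framework never import signal-protocol directly.
              const { ModuleFederationPlugin } = require('webpack').container;

              module.exports = {
                plugins: [
                  new ModuleFederationPlugin({
                    name: 'cryptography',
                    filename: 'remoteEntry.js',
                    remotes: {
                      // made-up URL; the real remote location would differ
                      signalProtocol: 'signalProtocol@https://example.com/signal/remoteEntry.js',
                    },
                    exposes: {
                      // imported by the p2p framework the same way
                      './crypto': './src/crypto',
                    },
                  }),
                ],
              };
              ```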

              That AI audit, as critical as it is of my implementation, is the best source of truth for my project. There is simply not going to be a third-party audit, so it is intended to be objective, and I think I signpost clearly enough that it's AI-generated. I need to clean up the exclamation marks and emojis, but the information there should all be correct.

              There are indeed a lot of debug messages logged. It's worth repeating that the project is still a work in progress and far from finished; I'm sharing it now because it seems to be in a reasonable state. I understand people can have high expectations around perfection… this is not that kind of project. Perfection would be a waste of my time at this stage.

              The CSP headers there are all deliberate, to support things like gifs and Simple Analytics. They could do with a bit of a clean-up, and with taking ownership of things like fonts… it's been on the todo list for a while but I didn't prioritise it. Thanks for raising it… I'll see about cleaning it up.

              The hax extraction is doing the abstraction to axioms, and you're right that the axioms aren't proven… this is something I'm actively investigating.

              Thanks for your time and attention on the project. Sorry if I've misled you to believe the project is more mature than it is… it is, however, a genuine attempt to create something safe and secure.

              • hendrik@palaver.p3x.de

                Thanks. Sadly I can't even get the latest version to work. It does find the other peer and loads the chat interface, but doesn't open a data channel, so it says "not connected" and shows an error popup every time I try to send a message. And I've spent enough time debugging it for now.

                Just some general words of wisdom: I think software projects are first and foremost about focus. I don't really know what you're trying to do here. If it's writing a cryptography library, I think your focus is about right. You first need to lay down the design properly. Make sure you factor in advanced tech like formal proofs from the start. After that you need to write the actual code, and then also make sure it aligns with your testing. I mean, it's fairly common to make mistakes while writing computer code, or have bugs… and any of those could render your more formal methods useless. For example, like that one time when some Debian package always used the same random number as a seed… the algorithms were 100% correct, just used in a wrong way, so most of the encryption was futile. Things like that require an equal amount of focus, if not more: since we already know how the double ratchet works, the important part is to implement it correctly and use it correctly. That deserves a massive amount of focus (and effort). It's also the major part of a security audit of a software project as a whole.

                We also have things like side-channel attacks, which aren't covered. But I think that's a minor thing with what we're looking at.

                And if you're trying to develop a chat app, your focus probably needs to be aimed at making it work first. Make it connect reliably and across a multitude of devices; cryptography is pretty much dispensable at that step. Then focus on the UX. Make sure it's not vulnerable to having any subsequent encryption simply bypassed because, for example, you don't have script nonces and everyone in the chat can inject JavaScript around your entire encryption.
                Think about metadata and whether your software product wants to address that. You could be doing encrypted messages while all kinds of third parties know who is talking to whom… Make sure you do what your users expect!

                And I think that's also the reason for some of the downvotes here. You have a narrow focus on the formal proof of your encryption algorithm, while your audience probably expects a working chat app. For all they care it could be entirely unencrypted in the alpha version, and encryption could come in a later version. We as users need something that works in the first place. We want to know what happens to our metadata, and whether there are security vulnerabilities in the software. And once all of that is in place, then we start to worry about the specifics of the end-to-end encryption.

                It's probably also related to the AI-slop argument. I don't really know what shaped your focus, but it must look to your audience like you're deep in some singular rabbit hole, because you write about formal proofs a lot. And then there's this huge disparity with what your audience assumes you're doing, or what you have to show off. Just my opinion, but it's kinda like that for me: you write about how great AI-assisted coding is and where it led you, but then I try to use your software and it doesn't even connect. And that really shapes my first impression of it all, in a very negative way. I mean… if we hadn't talked, I would have just assumed your cryptography is on the same level as your code for the peer connections. And that wasn't a good first impression.

            • hendrik@palaver.p3x.de

              @[email protected] Does the currently deployed version on chat.positive-intentions.com work? I tried to connect and try some more. But somehow it doesn’t ever connect. I’m following the procedure in the Youtube video. It reloads something on the page intermittently but never connects to the other browser.

              And already after opening the page, it says: “My peer ID is: xy”
              But then immediately “peer disconnected” and “peer closed: undefined”. Even before I do anything. Is it supposed to say that?

              I tried several combinations of Chromium 147 and LibreWolf 150. And whatever Vanadium is on my phone. I tried phone-computer and two different browsers on the same computer. Is that an issue? Other PeerJS applications work just fine.

              And does the QR scanner work? It opens the camera and scans the QR code just fine, but then reloads and doesn’t put any ID into the field?! So I guess that’s broken and I need to copy-paste it?

              Edit: Your file demo seems to work better. It at least gets to the point where it tries to open a connection. For some reason it also fails (ICE failed, your TURN server appears to be broken, see about:webrtc for more details). But at least that demo gets far enough to listen to connections and try to initialize them.