• 0 Posts
  • 200 Comments
Joined 3 years ago
Cake day: July 2nd, 2023




  • Hi! Firstly, thank you for using /dev/urandom as the proper source for random bytes.

    Regarding the static H1-H4 issue, does your repo have any sort of unit tests that can verify the expected behavior? I’m aware that testing isn’t exactly the most pressing thing when it comes to trying to overcome ISP- and national-level blocking. But by the same token, those very users may be relying on this software to keep a narrow security profile.
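    As a sketch of what I mean, a regression test for this could be quite small. The `generate_awg_params` name and the dict shape below are my assumptions, not your project’s actual API; the point is only that two independently generated parameter sets should essentially never share all four H values:

```python
# Hypothetical regression-test sketch for the static H1-H4 issue.
# generate_awg_params() is a stand-in for whatever function produces
# the obfuscation parameters; it is not the project's real API.
import os


def generate_awg_params() -> dict:
    """Stand-in generator: derive H1-H4 from urandom-backed bytes."""
    h = [int.from_bytes(os.urandom(4), "little") for _ in range(4)]
    return {"H1": h[0], "H2": h[1], "H3": h[2], "H4": h[3]}


def test_h_values_are_not_static():
    a = generate_awg_params()
    b = generate_awg_params()
    # If H1-H4 were hardcoded, every run would produce identical values,
    # and this assertion would (essentially always) catch it.
    assert any(a[k] != b[k] for k in ("H1", "H2", "H3", "H4"))
```

    A test like this would have flagged the hardcoded values long before any user in the field hit them.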

    To be abundantly clear, I’m very glad that this exists, that it doesn’t reinvent the WireGuard wheel, and that you’re actively fixing bug reports as they come in. What I’m asking is whether there are procedural safeguards to proactively catch this class of issue before it shows up in the field, or whether any are planned for the future.


  • Ok, I’m curious as to the DPI claims. Fortunately, AmneziaWG describes how it differs from WG here: https://docs.amnezia.org/documentation/amnezia-wg/

    In brief, the packet format of conventional WireGuard is retained, but randomized shifts and decoy data are added to give the packets the appearance of either an unknown protocol or of well-established chatty protocols (eg QUIC, SIP). That is indeed clever, and their claims seem to be narrow and accurate: for a rule-based DPI system, no general rule can be written to target a protocol that shape-shifts its headers like this.

    However, it remains possible that an advanced form of statistical analysis or MitM-based inspection could discover the likely presence of Amnezia-obfuscated WireGuard packets, even if they remain undecryptable. This stems from the fact that the obfuscation is still bounded within certain limits, such as adding no more than 64 bytes to plain WireGuard init packets. That said, doing so would require long timescales to gather statistically meaningful data, and it is not the sort of thing a larger ISP can implement at scale. Instead, this type of vulnerability would be used against particularized targets, to determine whether covert communication is happening, rather than to decrypt the contents of said communication.
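    To illustrate why the bound matters, here is a toy sketch (not Amnezia’s actual implementation; the 148-byte figure is the size of a plain WireGuard handshake initiation, and the 64-byte junk cap is the limit mentioned above). Even after obfuscation, packet sizes still cluster in a narrow band, which is exactly the kind of statistical fingerprint a patient observer could look for:

```python
# Toy illustration only: prepend a bounded random junk prefix to a
# WireGuard handshake-initiation packet. Because the junk is capped,
# all obfuscated packets still land in a narrow, fingerprintable band.
import os
import random

WG_INIT_LEN = 148  # length of a plain WireGuard handshake initiation


def obfuscate_init(packet: bytes, max_junk: int = 64) -> bytes:
    junk = os.urandom(random.randint(1, max_junk))
    return junk + packet


sizes = {len(obfuscate_init(bytes(WG_INIT_LEN))) for _ in range(1000)}
# Every obfuscated init packet falls in [149, 212] bytes.
assert min(sizes) >= WG_INIT_LEN + 1 and max(sizes) <= WG_INIT_LEN + 64
```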

    For the sysadmins following along, the threat of data exfiltration is addressed as normal: prohibit unknown outbound ports or suspicious outbound destinations. You are filtering outbound traffic, right?



  • I’m of the opinion that hashtags are one of the most egalitarian things recently devised, because they require no advance arrangements to use, can be created by anyone, can be adopted by everyone, and are amplified solely by their enduring usage. Whether a hashtag comes into vogue or is abandoned for something else is very much a popularity contest; or maybe the specific community just isn’t as large as imagined. So for any given hashtag, I’d say just try it and see if it sticks. The Internet Police will not issue citations for improper hashtag use.

    As for the underlying exercise of inviting LinkedIn people to break into your homelab, I’m not sure I see their incentive to do so. Why would unsolicited people (as in, not the AI bots) have any interest in doing so? If they had the chops to break into a network, why expend that time and effort for bragging rights, when instead that sort of work is billable?

    As a general rule, I’m not thrilled when there’s an implicit assumption that other people’s labor is being valued at $0.00/hr. There’s a fine line where it might be OK to ask an expert for a bit of help or advice, but the premise of your request is to get pentest professionals to do work for no compensation, and it’s not even for a charitable, educational, or otherwise enriching purpose. Why should they?

    I’m reminded of the email exchange referenced in this blog post, where an “unbreakable” encryption scheme is presented to an audience of highly capable cryptographers, and they proceed to demolish the scheme as being wholly broken, because the person who presented it could not take no for an answer. Do not be like this person.



  • That is an opinion, but certainly isn’t settled law in any jurisdiction. Indeed, the answer to whether some, all, or none of an LLM’s output is ever copyrightable and under what terms is the billion dollar question.

    A project that incorporates code with a shaky legal foundation will find it tough to convince others to contribute, if their contributions might one day prove to have been in vain. The right answer would be to extricate such code upon discovery, as OpenBSD had to do when the IPFilter license turned out to be incompatible with the project.



  • without always accounting for development speed, cross-platform consistency, ecosystem maturity, plugin/runtime complexity, UI flexibility, and the fact that some apps are doing much more than others

    From the perspective of a user, why would they care about development speed? A user, by sheer definition of wanting to use the software, can only use software that is already developed. If it’s not actually developed yet… they can’t use it. So either they see the software at the end of the development cycle, or they never see it at all. Development speed simply isn’t relevant to a user at that point. (exception: video games, but I’m not aware of any desktop game developed using a web framework)

    As for platform consistency, again, why would the user care? Unless each user is actually running the same software on multiple platforms (ie a Windows user at work, Arch at home, and BSD at their side-gig), this is a hard sell to get users to care. A single-platform user might never see what the same software looks like on any other platform. Even mobile apps necessarily differ in ways that matter, so consistency is already gone there.

    What I’m getting at is that the concerns of developers will not always be equally concerning to users. For users to care would be to concern themselves with things outside of their control; why would they do that?



  • Was this question also posted a few weeks ago?

    In any case, what exactly are the requirements here? You mentioned an encrypted journaling app, but also gave an example of burning a handwritten sheet. Do you need to recover the text after it is written, or can it simply be discarded into the void once it’s been fully written out?

    If encryption is to protect the document while it’s still a draft, then obviously that won’t work for handwritten pages.


  • 128 MB (1024 Mb) of RAM, 32 MB (256 Mb) of Flash

    FYI, RAM and flash sold to consumers are always denominated in Bytes (big B); it’s only RAM manufacturers (and EEPROMs) that use the bit (small b) designation for storage volume, I think. If you’re using both to avoid any confusion, I would suggest the following instead: 128 MByte. No one will ever confuse that with megabits, and it’s the same style used for data transfer rates, which do still use bits: Mbit/sec.
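    For what it’s worth, the figures in the quoted spec line do convert consistently between the two units:

```python
# Sanity-check the byte/bit figures quoted above: 1 Byte = 8 bits.
def mbytes_to_mbits(mbytes: int) -> int:
    return mbytes * 8


assert mbytes_to_mbits(128) == 1024  # 128 MB of RAM  = 1024 Mb
assert mbytes_to_mbits(32) == 256    # 32 MB of flash = 256 Mb
```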

    I wish you the best of luck in your search.


  • The only way I’m able to reconcile the author’s title and article to any applicability to software engineers (ostensibly the primary audience in this community) is to assume that the author wants software engineers to be involved further “upstream” of the software product development process.

    Code review answers: “Should this be part of my product?” That’s a judgment call, and it’s a fundamentally different question than “does it work.”

    No, but yes. Against the assertion from the title, bug-finding is very much a potential answer to “does this bug belong in the codebase?”. After all, some bugs aren’t bugs; they’re features! Snide remarks aside, I’m not sure that a code review is the time to be making broader choices about product architecture or market viability. Those should already have been done-and-settled a good while ago.

    Do software engineers make zero judgement calls? Quite the opposite! Engineers are tasked with pulling out the right tool from the toolbox to achieve the given objective. Exactly how and which tools are used is precisely a judgement call: the benefit of experience and wisdom will lean towards certain tools and away from others. But a different group of engineers with different experiences may choose differently. Such judgement calls are made in the here-and-now, and I’m not exactly keen on going back in time to berate engineers for not using tech that didn’t yet exist for them.

    If the author is asking for engineer involvement earlier, well before a code review, then that’s admirable and does in fact happen. That’s what software architects spend their time doing, in constant (and sometimes acrimonious) negotiation with non-engineering staff such as the marketing/sales team.

    That said, some architectural problems only become apparent when the rubber meets the road, when the broader team is engaged to implement the design. And if a problem is found during their draft work or during code review, that’s precisely the right time to have found that issue, given the process described above where the architects settle on the design in advance.

    If that outcome is not desirable, as the author indicates, then it’s the process that must change. And I agree in that regard. But does that necessarily change the objective of what “code review” means? I don’t think so, because the process change would be adding architectural review ahead of implementation.

    If we’re splitting hairs about whether a broad “review” procedure does or doesn’t include “review of code”, then that’s a terminological spat. But ultimately, any product can only be as good as its process allows. See aviation for examples of excellent process that makes flying as safe as it is.

    Making the process better is obviously a positive, but it’s counterbalanced by the cost to do so, the overhead, and whether it’s worthwhile for the given product. Again, see aviation, where procedural hurdles do in fact prevent certain experimental innovations from ever existing, but also prevent fatal scenarios that fortunately no longer happen.

    In closing, I’m not entirely sure what the author wants to change. A rebrand for “code reviews”? Just doing something different so that it feels like we’re “meeting the crisis” that is AI? That’s not exactly what I would do to address the conundrums presented by the rapid, near-uncontrolled adoption of LLMs.


  • It’s hard for me to agree with this premise. Specifically, the notion that companies will abdicate having their own space, in the form of a mobile app and UI. The author seems to suggest that the future will be API-driven, as more people want to “do things” rather than “go somewhere”. That is to say, if I may further summarize the author’s claims, the future of mobile computing is less about creating a digital storefront to invite potential customers into, and more about being as transactional as possible.

    And while it is exceedingly enticing for me to think that one day, we could have a way to instantly cancel a Netflix or Comcast subscription, without the need to interact with any service agent, skipping over the upsell or retention attempts, and getting straight to the point, that just seems too far-fetched and anti-capitalist to actually happen in the near future.

    Why, at this particular moment in history, when corporations seek to own more capital, would they abandon their digital storefronts? At the moment, they have sole control over that space, and the present abandonment of anti-trust enforcement means they can force people into their storefronts against their will. In an environment where arbitration agreements are forced upon consumers, why would large companies want mobile apps that don’t hold their customers hostage? Having an open API to do the same thing as their app is tantamount to freeing the consumer.

    And that’s precisely why I can’t see why they would do that. I don’t like it, but that’s the present reality. But even more to the point, abandoning apps would be bending the knee to AI companies like Google or OpenAI, since it establishes the AI agents as the kingmakers. What sort of a Game of Thrones is this?

    For each app that exists now, their corporate owner is a king in their own kingdom. In this supposed new world, those kings are now mere nobles that pay tithe to their new emperor, from the treasuries of their kingdoms. An entertaining fiction, yes. But as a non-fiction? I might pick a different book.


  • but what if for example an issue is internal to a WIP feature?

    I forgot to answer this. The question is always: will this materially impact the deliverable? Will the customer be unhappy if they hit this bug?

    If the WIP feature isn’t declared to be fully working yet, then sure, let it onto the branch and create a ticket to fix this particular bug. But a closed loop requires making that ticket, as a reminder to follow up later, when the feature is almost complete.

    If instead the bug would be catastrophic but is exceptionally rare, then that’s a tough call. But that’s precisely why the call should involve more people, not less. A single person making a tough call is always a risky endeavor. Better to get more people’s input and hopefully make a collective choice. Also, humans too often play the blame-game if there isn’t a joint, transparent decision making process.

    But where would all these people convene to make a collective choice? How about during code review?


  • people that work on the same things can need each other['s] changes to move on

    If this is such a regular occurrence, then the overarching design of the code is either: 1) not amenable to parallelized team coding at all, or 2) the design has not properly divided the complexity into chunks that can be worked on independently.

    I find that the latter is more common than the former. That is to say, there almost always exists a better design philosophy that would have allowed more developers to work without stepping on each other’s toes. Consider a small group designing an operating system. Yes, there have to be some very deep discussions about the overall design objectives at the beginning, but once the project is rolling, the people building the filesystem won’t get in the way of the UI people. And even the filesystem people can divide themselves into logical units, with some working on the actual storage of bits while others work on implementing system calls.

    And even when a design has no choice but to have two people working in lock-step – quite a rarity – there are ways to deal with this. Pair programming is the most obvious, since it avoids the problem of having to swap changes with each other.

    I’ve seen pair programming done well, but it was always out of choice (such as to train interns) rather than being a necessary mandate from the design. Generally, I would reject designs that cannot be logically split into person-sized quantities of work. After all, software engineering is ultimately going to be performed using humans; the AIs and LLMs can figure out their own procedures on their own, if they’re as good as the pundits say (I’m doubtful).

    TL;DR: a design that requires lock-step development with other engineers probably is a bad design


  • Ah, I see that OP added more details while I was still writing mine. Specifically, the detail about having only a group of 5 fairly-experienced engineers.

    In that case, the question still has to focus on what is an acceptable risk and how risk decisions are made. After all, that’s the other half of code reviews: first is to identify something that doesn’t work, and second is to assess if it’s impactful or worth fixing.

    As I said before, different projects have different definitions of acceptability. A startup is more amenable to shipping some rather ugly code, if its success criterion is simply to have a working proof of concept for VCs to gawk at. But a military contractor that is financially on the hook for broken code would need to be risk-averse. Such a contractor might impose a two-person rule (ie all code must have been looked at by at least two pairs of eyeballs, the first being the author and the second being someone competent to review it).

    In your scenario, you need to identify: 1) what your success criteria are, 2) what sort of bugs could threaten those criteria, and 3) which person or persons can determine that a bug falls into the must-fix category.

    On that note, I’ve worked in organizations that extended the two-person rule into a two-person sign-off: if during review, both persons find a bug but also agree that the bug won’t impact the success criteria, they can sign off on it and it’ll go in.

    Separately, I’ve been in an organization that allows anyone to voice a negative opinion during a code review, and that will block the code from merging until either that person is suitably convinced that their objections are ameliorated, or until a manager’s manager steps in and makes the risk decision themselves.

    And there’s probably all levels in between those two. Maybe somewhere has a 3-person sign-off rule. Or there’s a place that only allows people with 2+ years of experience to block code from merging. But that’s the rub: the process should match how much risk is acceptable for the project.

    Boeing, the maker of the 737 MAX jetliner that had a faulty MCAS behavior, probably should use a more conservative process than, say, a tech startup that makes IoT devices. But even a tech startup could be on the hook for millions if its devices mishandle data in contravention of data protection laws like the EU’s GDPR or California’s CCPA. So sometimes certain parts of a codebase will be compartmentalized and subjected to higher scrutiny, because some bugs are big enough to end the organization.


  • litchralee@sh.itjust.workstoProgramming@programming.devYour thoughts on Code Reviews

    With regards to the given list, I think #2 would be the most forgiving, in the sense that #1 suggests that code reviews are viewed solely negatively and are punishable if undertaken. But that minor quibble aside, I have some questions about what each of these would even look like.

    For example, #3 seems to be that code can be committed and pushed, and then review sought after-the-fact, but any results of the code review would not be binding for the original commit author to fix, nor apparently tracked for being fixed later. If that’s a correct description, I would describe that as the procedurally worst of the bunch, since it expends the effort to do reviews but then has such an open-loop process that the results of the review can be swept under the rug.

    On the note of procedure, it is always preferable to have closed loops, where defects are: a) found, b) described, c) triaged, d) assigned or deferred, e) eventually fixed, and f) verified and closed out. At least with your examples #1 and #2, they don’t even bother to undertake any of the steps for a closed loop. But #3 is the worst because it stops right after the first step.
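    That closed loop can be sketched as a simple state progression; the names below mirror steps a) through f) and are not any real issue tracker’s API:

```python
# Hedged sketch of the closed-loop defect lifecycle described above.
# Each defect should walk every state; stopping after FOUND is the
# open-loop failure mode of example #3.
from enum import Enum, auto


class DefectState(Enum):
    FOUND = auto()
    DESCRIBED = auto()
    TRIAGED = auto()
    ASSIGNED_OR_DEFERRED = auto()
    FIXED = auto()
    VERIFIED_AND_CLOSED = auto()


# Example #3 stops here, sweeping the review results under the rug:
open_loop = [DefectState.FOUND]
# A closed loop carries each defect all the way through:
closed_loop = list(DefectState)
assert closed_loop[-1] is DefectState.VERIFIED_AND_CLOSED
```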

    Your example #4 is a bit better, since in order to keep to a specified timeframe, the issues found during review have to at least be recorded in some fashion, which can satisfy the closed-loop steps, however abbreviated. If the project timeline doesn’t allow for getting all the code review issues fixed, then that’s a Won’t Fix and can be perfectly reasonable. The issue has been described and a risk decision (hopefully) was made to ship with the issue anyway. All things in life are a balance of expected risk to expected benefit. Ideally, only the trivial issues would be marked as Won’t Fix.

    But still, this means that #4 will eventually accumulate coding debt, probably quite quickly. And as with all debt, interest will accrue until it is either paid down or the organization succumbs to code insolvency, paralyzed because the dust under the rug is so large that it jams the door shut. I hope you’ll allow me to use two analogies in tandem.

    Finally, there is #5 which is the only one that prevents merging code that still has known issues. No doubt, code that is merged can still have further bugs that weren’t immediately obvious during code review. But the benefit is that #5 maintains a standard of functionality on the main branch. Whereas #4 would wilfully allow the main branch to deteriorate, in the name of expediency.

    No large organization can permit any single commit to halt all forward progress on the project, so it becomes imperative to keep the main branch healthy. At a minimum, that means the branch can be built. A bit higher would be to check for specific functionality as part of automated checks that run alongside the code review. Again, #4 would allow breaking changes onto the branch due to expediency, whereas #5 will block breaking changes until either addressed, abandoned, or a risk decision is made and communicated to everyone working on the project to merge the code anyway.

    TL;DR: software engineering processes seek to keep as many people working and out of each other’s way as possible, but it necessarily requires following steps that might seem like red-tape and TPS reports