Whether you’re really passionate about RPC, MQTT, Matrix or Wayland, tell us more about the protocols or open standards you have strong opinions on!

  • Björn Tantau@swg-empire.de

    RSS. It’s still around but slowly dying out. I feel like it only gets added to new websites because the programmers like it.

    • mesamune@lemmy.world

      There’s quite a few sites that still use it, and existing ones in the Fediverse have it built in (which is really cool). But you’re right, the general public has no concept of having something download and queue up on a service rather than just going to the site. And the RSS clients are all over the place with quality…

    • Static_Rocket@lemmy.world

      WebSub (formerly PubSubHubbub). Should have been a proper replacement for RSS with push support instead of polling. Too bad the docs were awful and adopting it as an end user was so difficult that it never caught on.
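
      For anyone curious how light the protocol actually is, here’s a minimal sketch of the subscriber side in Python (the URLs are hypothetical and it assumes the third-party requests package; a real callback must be publicly reachable and must echo back the hub.challenge the hub sends it):

          import requests

          def discover(feed_url):
              # WebSub discovery: the feed advertises its hub (and its
              # canonical "self" URL) via HTTP Link headers, or via
              # <link> elements inside the feed itself.
              resp = requests.get(feed_url)
              hub = resp.links.get("hub", {}).get("url")
              topic = resp.links.get("self", {}).get("url", feed_url)
              return hub, topic

          def subscribe(hub, topic, callback):
              # Ask the hub to start POSTing new entries to our callback.
              # The hub then GETs the callback with a hub.challenge that
              # the subscriber must echo back to verify the request.
              return requests.post(hub, data={
                  "hub.mode": "subscribe",
                  "hub.topic": topic,
                  "hub.callback": callback,
              })

          hub, topic = discover("https://example.com/feed.xml")
          if hub:
              subscribe(hub, topic, "https://my-reader.example/callback")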

          • mark@programming.dev

            Oh neat! I didn’t know this existed. By any chance, do you know of any RSS readers that have implemented it?

            • smpl@discuss.tchncs.de

              No, I’m sorry, I pull my feeds manually using a barebones reader. I’m guessing your best bet is one of the web-based readers, as it would require a client with a TCP port that’s reachable from the web. I have never seen a feed that provided the rssCloud feature, though.

      • kevincox@lemmy.ml

        I wouldn’t say that it never caught on. I run a feed reader and ~6% of feeds have WebSub. Most of these are probably wordpress.com blogs which include it by default.

        YouTube also sort of supports it, but they don’t really follow the standard so I don’t think it counts.

        But the nice thing about WebSub is that it is sort of an invisible upgrade to the existing feed (or any other HTTP URI) so it just works when blogs enable it.

        Most major feed reader services support it. One problem is that you need a stable URL to receive the notifications. So it is hard to make work with client-side readers. But I don’t think there is really a way around this other than holding a connection open to every feed you follow. So I would say that it does its job well. I don’t really see a need to get to 100% adoption or whatever. If you have a simple static-site blog that updates every month or so I don’t think it is a big deal that it doesn’t support WebSub.

      • folkrav@lemmy.ca

        How so? Outside of very niche stuff and podcasts, I just don’t seem to see it used that often.

        • TechNom (nobody)@programming.dev

          Most websites still use standard back ends with RSS support, and even static site generators include it. The only difficulty is user discovery.

          • folkrav@lemmy.ca

            Yeah… It always being there hardly makes it a “renaissance”, no?

  • aarroyoc@lemuria.es

    IPv6. The lack of IPv4 addresses is a problem, especially in poorer countries. But lots of servers and ISPs still don’t support it natively. And what’s worse, lots of sysadmins don’t want to learn it.

    • PlexSheep@infosec.pub

      My university recently had Internet problems, where the DHCP server only leased out IPv6 addresses. For two days, we could all see which sites implemented IPv6 and which didn’t.

      Many big corporate sites like GitHub or Discord apparently don’t. Small stuff like my personal website or https://suikagame.com do.
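
      You can reproduce the experiment without waiting for a DHCP outage; a quick Python sketch (the host list is just an example) that checks which names publish AAAA records:

          import socket

          def has_ipv6(host):
              # getaddrinfo restricted to AF_INET6 raises gaierror
              # if the name has no AAAA (IPv6) records.
              try:
                  socket.getaddrinfo(host, 443, family=socket.AF_INET6)
                  return True
              except socket.gaierror:
                  return False

          for host in ["github.com", "discord.com", "suikagame.com"]:
              print(host, "-", "IPv6" if has_ipv6(host) else "IPv4 only")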

    • vzq@lemmy.blahaj.zone

      Lots of really large sites are horribly misconfigured. I had intermittent issues because one of the edge hosts in Netflix’s round-robin DNS did not do MTU discovery properly.

      • Alk@lemmy.world

        My ISP decided to put me behind a CGNAT, which broke access to my network from outside it, and wanted to charge me $5 a month to get around that. It’s not easy for a layman to work around, but it is possible. More than anything it just pissed me off that I’d have to pay for something that a day earlier was free.

          • Alk@lemmy.world

            Set up a reverse proxy on another machine (like one of those free Oracle Cloud instances). I can’t go into detail because I don’t know exactly how. I think Cloudflare also has free options for that. Either way it’s annoying.

            • ChilledPeppers@lemmy.world

              Cloudflare Tunnel, and its alternatives such as localXpose, although the privacy is probably questionable, and many of them require a domain.

        • cmnybo@discuss.tchncs.de

          NAT is not for security, that’s what the firewall is for. Nobody can access your IPv6 network unless you allow access through the firewall.

              • ReversalHatchery@beehaw.org

                If computers connect to others through the internet, the IPv6 address can reveal how many computers there are on the local network, whether certain traffic to different destinations is coming from the same computer, and even when one of the computers has gone offline and then resumed from sleep/hibernation.
                To me their comment means they want to avoid that, and I agree, I want to avoid that too. To fix these, I would need to configure NAT on my router for IPv6.

                Yes, IPv6 address privacy extensions help somewhat, but

                • computers won’t use a different v6 address for every distinct destination; they will just start using a new one from time to time
                • computers won’t stop using the old v6 address immediately after wakeup

                With v4 addresses these did not really matter, because everything was being sent from the same public IP, and an outside observer could only see what a “network” was doing collectively. But with v6 an address identifies a computer, across websites/services. Even if it’s just for a “short” time, even if the address is randomized.

                • frezik@midwest.social

                  If you want privacy, you need some kind of VPN or onion routing. Even if everything you list were correct, the difference between IPv4 and 6 for privacy would be marginal.

        • lemmyvore@feddit.nl

          You’re thinking of a firewall. NAT is just the thing that makes a connection appear to come from an IP on the internet when it’s really coming from your router, and it’s not needed with IPv6. But you would not see any difference with IPv6 without it.

          • Dave.@aussie.zone

            You’re thinking of a firewall. NAT is just the thing that makes a connection appear to come from…

            That connection only “appears to come from” the router if I explicitly put a rule in my NAT table directing it to my computer behind the router doing the NAT-ing.

            Otherwise, all connections through NAT are started as internal->external network requests, and the state table in NAT keeps track of which internal IP is talking to which external IP and directs traffic as necessary.

            So OP is correct, it does apply a measure of security. Port scanning someone behind NAT isn’t possible; you just end up port scanning their crappy ISP-provided NAT router, unless they have specifically opened up some ports and directed them to their internal IP address.

            Compare this to IPv6, where you get a slice of the public address space for your devices and they are all directly addressable. In that case your crappy ISP router is also a “proper” firewall. Strangely enough, it is usually a “stateful” firewall with default deny-all rules that tracks network connections and looks and performs almost exactly like the NAT version, just without address translation.
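
            To picture that state table, a toy sketch in Python (illustrative only; a real NAT tracks full address/port tuples and timeouts):

                # Outbound traffic creates a mapping; unsolicited inbound
                # traffic matches nothing and is dropped.
                table = {}  # public port -> (internal ip, internal port)
                next_port = 40000

                def outbound(src_ip, src_port):
                    global next_port
                    public_port = next_port
                    next_port += 1
                    table[public_port] = (src_ip, src_port)
                    # Packet leaves rewritten as (router ip, public_port)
                    return public_port

                def inbound(dst_port):
                    # Forwarded only if state (or an explicit rule) exists.
                    return table.get(dst_port, "dropped: no state")

                p = outbound("192.168.1.10", 55001)
                print(inbound(p))   # ('192.168.1.10', 55001) -> forwarded
                print(inbound(22))  # dropped: no state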

            • Domi@lemmy.secnd.me

              So OP is correct, it does apply a measure of security. Port scanning someone behind NAT isn’t possible; you just end up port scanning their crappy ISP-provided NAT router, unless they have specifically opened up some ports and directed them to their internal IP address.

              You end up just port scanning their crappy router on IPv6 as well because ports that are not opened are stuck at the firewall either way, no matter if you use IPv4 or IPv6.

              Just because every device gets a public IP does not mean that IP is publicly accessible.

              An advantage that IPv6 has against port scanning is the absurdly large network sizes. For example, my ISP gives me a /56 prefix, that is 4,722,366,482,869,645,213,696 IPv6 addresses. Good luck finding the used ones with the port open you need.

              Even with just a /64 prefix you get 18,446,744,073,709,551,616 addresses, way outside the feasibility of port scanning.
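
              The arithmetic, for reference: a /N prefix leaves 128 - N host bits, so:

                  print(2 ** (128 - 56))  # /56 -> 4722366482869645213696
                  print(2 ** (128 - 64))  # /64 -> 18446744073709551616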

            • KillingTimeItself@lemmy.dbzer0.com

              Compare this to IPv6, where you get a slice of the public address space for your devices and they are all directly addressable. In that case your crappy ISP router is also a “proper” firewall. Strangely enough, it is usually a “stateful” firewall with default deny-all rules that tracks network connections and looks and performs almost exactly like the NAT version, just without address translation.

              Realistically, it wouldn’t surprise me if ISPs started NATing residential IPv6 networks just for the simplicity, while still allowing end users to assign their own IPs if they pleased. Given the surge in shitty IoT devices, that’s probably a good thing for most people. Though a firewall would accomplish this as well.

        • frezik@midwest.social

          No. Stop spreading that myth. NAT does fuck all for security. If you want a border gateway, you can just have a border gateway.

    • folkrav@lemmy.ca

      Say this to my very large Canadian ISP who still doesn’t support IPv6 for residential customers. Last I checked, adoption in Canada was still under 50%.

      • calcopiritus@lemmy.world

        50%?? I fucking wish. In Spain we are at 5%. I finally got IPv6 in my phone this year, but I want it in my home, which is still only available as IPv4 even if they’re the same ISP.

  • Dessalines@lemmy.ml

    Markdown. It’s only in tech spaces that it’s preferred, but it should be used everywhere. You can even write full books and academic papers in markdown (maybe with only a few extensions like LaTeX / MathJax).

    Instead, in a lot of fields, people are passing around variants of Microsoft Word documents with weird formatting and no standardization around headings, quotes, and comments.

    • xigoi@lemmy.sdf.org

      Markdown is terrible as a standard because every parser works differently and when you try to standardize it (CommonMark, etc.), you find out that there are a bajillion edge cases, leading to an extremely bloated specification.
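
      You can see the fragmentation for yourself by feeding the same edge case to two parsers and comparing the HTML; a sketch assuming the third-party markdown and mistune packages are installed:

          # Two popular Python Markdown parsers; on edge cases such as
          # lazy list continuations their output can differ.
          import markdown
          import mistune

          sample = "- item\n  lazy continuation?\n\nsetext?\n---"
          print(markdown.markdown(sample))
          print(mistune.html(sample))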

      • MajorHavoc@programming.dev

        Agreed in principle, but in practice, I find it’s rarely a problem.

        While editing, we pick an export tool for all editors and stick to it.

        Once the document is stable, we export it to HTML or PDF and it’ll be stable forever.

        • TechNom (nobody)@programming.dev

          Commonmark leaves some stuff like tables unspecified. That creates the need for another layer like GFM or mistletoe. Standardization is not a strong point for markdown.

          • Dessalines@lemmy.ml

            I believe CommonMark tries to specify a minimal baseline spec and doesn’t try to expand beyond that. It can be frustrating because we’d like to see tables, superscripts, spoilers, and other things standardized, but I can see why they’d want to keep things minimal.

            • TechNom (nobody)@programming.dev

              Asciidoc is a good example of why everything should be standardized. While markdown has multiple implementations, any given document is tied to just one of them. Asciidoc has just one implementation for now, but once the standard is ready, you should be able to switch implementations seamlessly.

        • xigoi@lemmy.sdf.org

          Have you read the CommonMark specification? It’s very complex for a language that’s supposed to be lightweight.

          • frezik@midwest.social

            What’s the alternative? We either have everything specified well, or we’ll have a million slightly incompatible implementations. I’ll take the big specification. At least it’s not HTML5.

            • xigoi@lemmy.sdf.org

              An alternative would be a language with a simpler syntax. Something like XML, but less verbose.

              • frezik@midwest.social

                And then we’ll be back to a hundred slightly incompatible versions. You need detailed specifications to avoid that. Why not stick to markdown?

    • southsamurai@sh.itjust.works

      Man, I’ve written three novels plus assorted shorter form stories in markdown.

      There’s a learning curve, but once you get going, it’s so fluid. The problem is that when it comes time to format for release, you have to convert to something else, and not every word processor can handle markdown. It’s extra work, but worth it, imo.

      • Handles@leminal.space

        Just set up pandoc and Bob’s your uncle. It’ll convert markdown to anything. You’ll never have to open another word processor.
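
        For example, a sketch (file names are hypothetical; pandoc infers the output format from the extension, and PDF output additionally needs a LaTeX engine installed):

            import subprocess

            # Convert one Markdown manuscript to several release formats.
            # Assumes pandoc is installed and on the PATH.
            for fmt in ("epub", "docx", "pdf"):
                subprocess.run(["pandoc", "novel.md", "-o", f"novel.{fmt}"],
                               check=True)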

        • southsamurai@sh.itjust.works

          Nice! Thanks for the tip!

          Edit: holy shit, how have I never run across that before? That’s a brilliant program right there.

          • Handles@leminal.space

            Pandoc + [your markdown editor of choice] is magic. Some editors even come with Pandoc as a dependency so you can export to more or less anything from the GUI. I think GhostWriter and Zettlr at least (I honestly can’t be sure, I’ve changed editors so often and now I just have some Pandoc conversion scripts in my file manager menu).

        • southsamurai@sh.itjust.works

          Because it isn’t doc or docx.

          Publishers are pissy about such things. Even self-publishing (which is what I do now), the various outlets still have limits on what they will accept. Amazon accepts something like three file formats, including their own, and PDF isn’t on the list.

          I could just use PDF for giving books away directly, but even then, epub is usually a better pick in terms of readability: it’s the standard for actual books, and ereaders tend to display it better than PDFs. Most people reading books as files would be using something that gives a better experience with epub than with PDF.

    • warmaster@lemmy.world

      Depends on the type of book, since you need HTML for any non-default styles. That raises the bar: you need a bit of web dev knowledge, which removes the biggest benefit of markdown: simplicity / ease of use.

    • Cyclohexane@lemmy.mlOPM

      Markdown is awesome, I agree! I did not realize you could extend markdown with anything other than html. The html extension is quite nice to do anything that markdown doesn’t support natively, but I wish there was an easier way to extend markdown. Maybe the ones you listed are what I need.

      • Dessalines@lemmy.ml

        Hedgedoc / HackMD support a good number of extensions out of the box. I think Typora and Obsidian do too (but those are not open source).

      • Dessalines@lemmy.ml

        My main wishlist item for markdown is a better live collaborative editor. Hedgedoc works, but it’s showing its age, and they don’t seem to be getting close to releasing v2.

        Etherpad also has a markdown extension, but it doesn’t import / export that well.

  • x3i@kbin.social

    Unified Push.

    Unbelievable that we have to rely on Google and co. for something as essential as push messages! Even among the open source community, adoption is surprisingly limited.

    • TechNom (nobody)@programming.dev

      Nobody knows about UnifiedPush. Last time I checked, their Linux D-Bus distributor also wasn’t ready. There has to be a unified push to get it adopted.

    • kevincox@lemmy.ml

      Fuck Unified Push. Just use the Web Push standard. https://www.rfc-editor.org/rfc/rfc8030

      It is what is used for browser push messages, is already widely supported, is compatible with existing push infrastructure and users, and is end-to-end encrypted. IDK why Unified Push felt the need to create a new protocol when a perfectly good one already existed.

      Although there is no “client side” spec. The Unified Push client side could be useful. But they should throw away their custom backend protocol and just use Web Push.
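
      The server side really is small; a minimal sketch using the third-party pywebpush library (the subscription dict is illustrative and would normally come from the browser’s PushManager):

          from pywebpush import webpush

          subscription = {
              "endpoint": "https://push.example.com/send/abc123",
              "keys": {"p256dh": "<client public key>", "auth": "<secret>"},
          }

          # The payload is end-to-end encrypted to the client's keys before
          # being POSTed to the push service (RFC 8030 + RFC 8291), and the
          # request is signed with the sender's VAPID key.
          webpush(
              subscription_info=subscription,
              data="new message",
              vapid_private_key="private_key.pem",
              vapid_claims={"sub": "mailto:admin@example.com"},
          )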

    • JasonDJ@lemmy.zip

      Because SecOps still thinks NAT is security, and NetOps is decidedly against carrying around that stupid tradition.

      • pastermil@sh.itjust.works

        I hear you on this! Took me a whole day to get my router to delegate IPv6 properly. I’m sure that had it been better adopted, I wouldn’t be having such a hard time.

      • calcopiritus@lemmy.world

        In the world of computers, why would remembering numbers be the stop for new technologies?

        Do you remember anyone’s public key? Certificate?

        I don’t even remember domain (most) names, just Google them or save them as bookmarks or something.

        The reason IPv4 still exists is that ISPs benefit from its scarcity. Big ISPs already paid a lot of money to own IPv4 addresses; if they switched to IPv6, that investment would be worthless.

        Try selling static IPv6 addresses the way they do now with IPv4. People would laugh at them and just get a free IPv6 address from an ISP that wants new users and doesn’t charge for it.

        The longer ISPs delay the adoption of IPv6, the longer they can milk IPv4 scarcity.

          • calcopiritus@lemmy.world

            IPv6 addresses are practically endless, therefore their value is practically 0. ISPs justify charging extra for static IPv4 because IPv4 addresses do have a value.

            If ISPs charge for static IPv6, then one of them could just give that service for free (while keeping the rest of the prices the same as their competitors). That would get them more customers while costing them nothing.

            EDIT: I can’t give you an example of an ISP that offers free static IPv6 because there are no ISPs in my country that offer IPv6.

            • frezik@midwest.social

              For that matter, you should be getting an entire /60 at a minimum. Probably more like /56.

      • KillingTimeItself@lemmy.dbzer0.com

        damn if only we had a service that like, obfuscated and abstracted these hard to remember IPs that aren’t very user friendly, and turned them into something more usable. That would be cool i think. Someone should make that.

  • smileyhead@discuss.tchncs.de

    • IPv6, needed for the modern Internet not to collapse; it would make many other important things easier: becoming an ISP, self-hosting, building P2P networks, etc.
    • GNU Taler, a payment protocol; just look at it go: https://101010.pl/@didek/111934952208145427, or just imagine building a payment terminal out of a Raspberry Pi
    • Matrix, to unify chat, conference and calling apps
    • some self-arranging darknet protocol becoming the norm, like I2P, GNUNet or Yggdrasil, so we have a backup when mass Internet blockages happen
    • Secret300@sh.itjust.works

      I really hope Matrix gets native VoIP. I saw it was in beta like 2 years ago, but I haven’t kept up with it since. I’d also really like voice channels like Discord’s so my friends and I can replace Discord, but it seems like Matrix isn’t interested in being a Discord replacement.

      • ducklingone@lemmy.today

        Matrix can be configured to have VoIP. I have it set up on my server, though I haven’t tried it in a group voice chat setting yet, only 1 on 1.

    • rottingleaf@lemmy.zip

      Matrix I have doubts about. The idea of Tox was nicer, but the implementation quality and the scandal at some point didn’t help.

      Tox felt more playable: piping files over it, running a remote shell over it (I know, bad associations, but still), or even using it for VPN. I think there were clients that allowed such things, and the protocol allows it.

      EDIT: I mean, it’s still alive, I just don’t see it claiming the place of a FOSS replacement for old Skype as it once did.

      GNUNet - all you people mentioning it: do you have peers? I tried to set it up a few weeks ago and couldn’t get peers.

      Yggdrasil - feels cool.

      I2P - not intended for that, I think.

      • KillingTimeItself@lemmy.dbzer0.com

        I2P - not intended for that, I think.

        to be clear, I2P is not really intended for anything, it’s used for everything. It supports all kinds of things, and there are people doing all kinds of things on it. Though i could see potential technological limitations being a problem.

      • Cosmiss@beehaw.org

        What scandal did Matrix have? I only just tried out Matrix like a month ago and am unaware of anything like that.

      • smileyhead@discuss.tchncs.de

        About Tox: I am not a fan of mixing up the universal delivery of packets with applications. Piping files or using it as a VPN feels like something that would be better done with a proper full network, not mixed in with chat.

        • rottingleaf@lemmy.zip

          I, on the contrary, think it’s cool for things to be universal, layered and reusable for different tasks.

  • shrugal@lemm.ee

    Do Not Track

    Such a simple solution for the cookie banner issue. But it prevented websites from tricking users into allowing them to gather their data, so it had to go.
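
    The entire mechanism was a single request header; a sketch in Python:

        import urllib.request

        # Do Not Track was just this header on every request. Honoring it
        # was entirely voluntary for the server, which is why it died.
        req = urllib.request.Request("https://example.com",
                                     headers={"DNT": "1"})
        with urllib.request.urlopen(req) as resp:
            print(resp.status)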

    • jkrtn@lemmy.ml

      Nobody was going to honor that. That’s just giving them an extra bit of data to track you with.

          • qaz@lemmy.world

            Those cookie banners were introduced because of an EU law and are seen all over the world

            • Tanoh@lemmy.world

              Most of those cookie banners are not even needed; you only need them for tracking cookies, not login and session cookies. But of course everyone decided it is just easier to nag all the users with a big splash screen.

              A lot of them are not even doing it right either: you are not allowed to hint to the user that “accept all” is the “correct” choice by giving it a different color than the others, and saying no to everything should be as easy as accepting everything, yet often it isn’t.

              Basically, cookie banners are usually not needed, and when they are, they are most often incorrectly designed (not by accident).

              • words_number@programming.dev

                But of course everyone decided it is just easier to nag all the users with a big splash screen.

                Nope, the thing is, you’ll very rarely find a website that only uses technically necessary session/login cookies. The reason every fucking website, yes, even the one from the barber shop around the corner, has a humongous cookie banner is that every fucking website helps google and other corporations to track users across the whole internet for no reason.

            • jkrtn@lemmy.ml

              Yes, seen by people visiting EU websites or companies with an EU presence. And because whether or not they assign a cookie is easily verifiable by the person on the other end.

  • RotatingParts@lemmy.ml

    RSS (RDF Site Summary or Really Simple Syndication). It is in use a fair amount, but it is usually buried. Many people don’t know it exists, and because of that I am afraid it will one day go away.

    I find it a great, simple way to stay up to date across multiple web sites the way I want to (on my terms, not theirs). By the way, it works on Lemmy too :)
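
    For example, with the third-party feedparser package (the community URL follows Lemmy’s /feeds/c/<name>.xml pattern):

        import feedparser

        # Every Lemmy community exposes an RSS feed.
        feed = feedparser.parse("https://lemmy.ml/feeds/c/linux.xml?sort=New")
        for entry in feed.entries[:5]:
            print(entry.title, "-", entry.link)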

    • kevincox@lemmy.ml

      Honestly there is rarely a blog I want to follow that doesn’t have it. I do think it would be great to have more readers using it so that it becomes more significant, but for my reading it is actually pretty great.

    • Southern Wolf@pawb.social

      Markdown really should have more widespread support than it does. It’s just the right mix between plain text and an office document; I took my college notes in it, in fact, because of how fast it was to format things. But as far as I know, there’s no default program on any of the major OSes or distros for viewing it.

      Maybe it’s just due to a lack of formatting standards or something, but regardless, I wish it were used and supported more.

      • vrighter@discuss.tchncs.de

        Markdown is standardized? I haven’t found two parsers that parse the same file the same way for anything but the most trivial documents.

        • Southern Wolf@pawb.social

          That’s what I mean by a lack of a standard for markdown. There needs to be at least a core standard for things like bolding and italics that is universal across implementations. Then if a program wants to add onto it, that’s fine. But just having the core parts standardized would help a lot.

          • Norah - She/They@lemmy.blahaj.zone

            There are some pseudo-standards for it. Github-flavoured markdown is probably the biggest of them. Then you get things like Obsidian-flavoured markdown that is based off of Github’s.

    • duncesplayed@lemmy.one

      Heads up for anyone (like me) who isn’t already familiar with SimpleX: unfortunately its name makes it impossible to search for unless you already know what it is. I was only able to track it down after a couple of frustrating minutes, once I added “linux” to the search on a lark.

      Anyway, it’s a chat protocol.

    • Handles@leminal.space

      I came here to say Matrix, but I’m not gonna lie: if XMPP had gotten the traction it deserved, we wouldn’t need Matrix.

      • lemmyreader@lemmy.ml

        You’re going off-topic from the OP question :-) But to answer your new question : I do not trust Matrix enough when it comes to privacy. I know that this link is old but still. https://disroot.org/en/blog/matrix-closure

        Then again I do not trust Signal that much either but sometimes compromises need to be made to get things done. With XMPP the end user can host their own server if they wish to, without meta data going to a centralized point. And video calls via XMPP and Conversations were a pleasure to use when I used it during the Covid-19 pandemic.

  • saigot@lemmy.ca

    IoT devices shouldn’t connect to WiFi. Z-Wave or Zigbee is much better suited to IoT stuff, but it seems to mostly get adopted in very limited, locked-down proprietary shit like Hue lights.

    • zarenki@lemmy.ml

      There’s only one case I’ve found where Wi-Fi use seems acceptable in IoT: ESPHome. It’s open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to whatever remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.

      I still wouldn’t call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power efficient, though ESP32 isn’t exactly suited for a battery-powered device that’s expected to run 24/7 regardless.

    • F04118F@feddit.nl

      Yes but at least Hue (and IKEA and LIDL and many other brands’) lights work well with open Zigbee coordinators, like deconz and ZHA in Home Assistant.

      I wish there were more Zigbee and Zwave and less WiFi IoT devices too. I don’t even have a Zwave coordinator because I never found anything I wanted with Zwave support.

  • ѕєχυαℓ ρσℓутσρє@lemmy.sdf.org

    LaTeX. As someone in academia, I absolutely love it. It has some issues like package incompatibility, but it’s far far better than anything else I’ve used. It’s basically ubiquitous in academia, and I wish it were the case everywhere else as well.

      • Urist@lemmy.ml

        The Typst compiler is open source. It is the open core of the web app and we will develop and maintain it in cooperation with the community

        Try Typst now!

        Create a free account to join the public beta.

        Beta software marketing with “free accounts” and an open core compiler for a (probably) future paid web service tells me all I need to know.

        Even though LaTeX has issues, not being an online service is not one of them.

        • boredsquirrel@slrpnk.net

          They host a proprietary service that does all the stuff, but the compiler and spec are completely FOSS. So you need to create your own implementations, which is not hard.

          I don’t think they will close-source the compiler. And that’s basically everything that’s needed?

          I have 0 problems with people creating a fancy proprietary implementation to get people hooked. I will never use an online editor, but why care?

          • Urist@lemmy.ml

            Learning LaTeX and working around its quirks seems like a much better time investment than sidegrading to something that lives on premises set by a proprietary commercial project. If someone saw LaTeX and said “I want to make some version of this that is better”, without ulterior motives, they would probably just work on improving LaTeX (which a whole lot of people do).

            Fancy does not mean better, and often is in many ways worse than plain old boring.

            • boredsquirrel@slrpnk.net

              You know Overleaf is a thing, right?

              Many projects need to be rewritten from scratch, I think. But I also think an easier markup language for LaTeX could be possible, keeping all the nice templates etc.

              • Urist@lemmy.ml

                From the LaTeX project:

                The experience gained from the production and maintenance of LaTeX2e (the version you have been using for many years) had a major influence on our goals for future development and on new code which is now integrated into LaTeX.

                A while ago we made the decision to drop the idea of a separate LaTeX3 format that would exist in parallel to LaTeX2e, but instead decided to gradually modernize LaTeX to keep it competitive in today’s world while maintaining compatibility methods for older documents.

                I think this decision was pretty much a good one.

                Overleaf does not modernize LaTeX in meaningful ways. It only adds cloud functionality and glossy appearance that you can get on dedicated editors anyways.

                • boredsquirrel@slrpnk.net

                  No, but Overleaf is just a proprietary fancy editor, like the Typst one. Meanwhile Typst is just as usable for building editors too.

                  I don’t see any arguments against Typst, really. I use Markdown all the time and find it best, but lacking. Then there’s LaTeX, which I honestly don’t want to learn, as it must be a pain to write.

                  Now in Typst, you can write academic papers etc. just as well. All you need is free software, with good backing and modern tooling (Rust, cargo), so it runs everywhere. It’s pretty cool!

            • boredsquirrel@slrpnk.net

              And it isn’t :D The compiler produces PDFs, which can be read with anything. The spec is open, so you can write the code with any editor.

              It just needs integration; we’ll see if I can add syntax highlighting to Kate.

    • embed_me@programming.dev

      It’s not a standard, but it’s still interesting software, so I’ll post this here:

      Joking aside, I love and hate it. Its paradigm is almost like using the C preprocessor to build a really awkward Turing-machine. TeX/LaTeX does a great job of what it was intended to do; it applies high quality typesetting rules to complex material and produces really good results. I love the output I can get with it and I will be eternally grateful that Donald Knuth decided to tackle this problem. And despite my complaints below, that gratitude is genuine. Being able to redefine something in a context-sensitive way, or to be able to rely on semantics to produce spacing appropriate to an operator vs a variable etc; these are beautiful things.

      The problem is, at least once a day I’m left wishing I could just write a callable routine in a normal language with variables, types, arrays, loops and so on. You can implement all those things in TeX, but TeX doesn’t have a normal notion of strings, numbers or arrays, so it is rare that you can do a complicated thing in an efficient way, with readable code. So as a language, TeX frequently leads to cargo-cult programming. I’m not aware that you can invoke reflection after a page is output, to see what decisions on glue and breaks were made; but at the same time you can’t conditionally include something that is dependent on those decisions, since the decision will depend on what is included. This leads to some horrible conditionals combined with compiling twice, and the results are not always deterministic. Sometimes I find it’s quicker to work around things like that by writing an external program that modifies the resulting PDF output, but that seems perverse.

      At the same time, there’s really nothing else out there that comes close to doing what LaTeX does, and if you have the patience, the quality of documents it can produce is essentially unbounded. The legacy of encodings, category codes, parameter limits, stack limits etc. just makes it very hard for package writers, and consumes a great deal of time for a lot of people. But maybe I am picky about things that a saner person would just live with.

      A lot of very talented people have written a lot of very complex packages to save the user from these esoteric details, and as a result LaTeX is alive and well, and 99% of the time you can get the results you want, using off-the-shelf parts. The remaining 1% of the time, getting the result you want requires a level of expertise that is unreasonable to expect of users. (For comparison, I wrote an optimising C compiler and generally found it far easier to make that work as expected, than some of the things I’ve tried, and failed, to do properly in LaTeX. I now have a rule; if getting some weird alignment to work takes me more than an hour, I just fake it with a postscript file, an image, or write an external program to generate it longhand, in order to save my sanity.)

      I think (and certainly hope) that LaTeX is here to stay, in much the same way that C and assembly language are. As time moves forward I think we’ll see more and more abstractions and fewer people dealing with the internals. But I will be forever grateful to the people who are experts in TeX, and who keep providing us with incredible packages.
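
      To make the macro-expansion point above concrete, a small sketch in LaTeX2e: even a basic counting loop has to be phrased as counter plumbing and conditional expansion rather than a normal for-loop:

          \documentclass{article}
          \newcounter{row}
          % The plain-TeX \loop...\ifnum...\repeat idiom: keep expanding
          % the body while the condition after \ifnum holds.
          \newcommand{\rows}[1]{%
            \setcounter{row}{0}%
            \loop\ifnum\value{row}<#1
              \stepcounter{row}%
              Row \arabic{row}.\par
            \repeat}
          \begin{document}
          \rows{3}% typesets: Row 1. / Row 2. / Row 3.
          \end{document}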

    • folkrav@lemmy.ca

      I honestly just use it for my resume with a template I found, so my knowledge is extremely basic, but I really do love the concept that I can “compile” and actually see the source of my document’s formatting.

      • zagaberoo@beehaw.org

        Nope and yep. It’s an incredible tool, but it’s got a vim-sized learning curve to really leverage it, plus other significant drawbacks. Still my beloved one-and-only when I can get away with it, but it’s a bit of a masochistic acquired taste for sure.

        Template tweaking, as I imagine academia heavily relies on, is really the closest to practical it gets. You do still get beautiful results; it’s just hard to express yourself arbitrarily without really committing to the bit.

          • TechNom (nobody)@programming.dev

            Markdown and LaTeX are meant for entirely different purposes. It’s somewhat analogous to HTML vs PDF. While it’s possible to write books with Markdown, it’s a vastly inferior solution compared to latex or typst (for fixed format docs like books).

        • embed_me@programming.dev

          It’s got a vim-sized learning curve to really leverage it

          As a regular vim user, I have to say. Vim makes sense after you put some effort into learning it. I can’t say the same about latex.

    • oldfart@lemm.ee

      I wrote my master’s in LaTeX, and while I appreciated the structuredness and the fact that I could use vim, it was so quirky. Having to spend half an hour to fix a non-obvious compile error, more than once, was a big distraction. I’m sure it gets better when you use it more, but I don’t think I have ever used it since. I’m not in academia, and I don’t need to solve compile problems when creating an invoice or writing a letter to the local government.

    • Handles@leminal.space

      It’s basically ubiquitous in academia

      You mean STEM. In the humanities we do just fine without, tyvm.

    • Caveman@lemmy.world

      I personally feel like the standard should be an extended markdown that allows LaTeX code.

  • barbara@lemmy.ml

    Matrix… it’s on such a good path I can’t complain. Adoption could be faster but it’s alright.

    I2P, although I have no idea whether there’s a very good reason for its lack of adoption.

    • Preflight_Tomato@lemm.ee

      I second Matrix, though I’ve been waiting a while for e2ee direct P2P (the Dendrite project) to be worked on. Having something like that, truly decentralized while secure and hiding metadata where possible, would be a dream.

      • timbuck2themoon@sh.itjust.works

        Apparently Dendrite is just in maintenance mode due to insufficient funds. It was what I set up on a test instance because it is lighter, etc. Go figure.

          • timbuck2themoon@sh.itjust.works

            Yeah, I’ve been following that. It seemed at the time that the project didn’t implement nearly as many of the specs as Dendrite, which was itself still lagging Synapse.

            Might take another look though. I really did want to use it since it was written in Rust; it seemed it should probably be more performant, everything else being equal.

  • Rikj000@discuss.tchncs.de

    • Communication: Matrix
    • Browsing: I2P
    • Communities: ActivityPub / Mastodon
    • Software forge: Forgejo + ForgeFed
    • OS: Linux
    • Money: Monero

    Since they meet at least one of, if not all of, the following:

    • Decentralized / federated
    • Censorship resistant
    • Privacy respecting
    • Open source