I just spun up Lemmy on my Kubernetes cluster with nginx-unprivileged and ingress-nginx. All is well so far! I’m thinking about posting the Kustomization manifests and continuing to maintain and publish OCI artifacts for each Lemmy version release.

  • magus@l.tta.wtf · 1 year ago

    👋 I’m not using Kustomize, just throwing Deployment manifests and such at the cluster manually. It works pretty nicely, though I had some trouble setting up the custom nginx configuration to proxy requests in - I ended up running a new nginx instance and pointing the Ingress at that rather than at the Lemmy pods directly. Maybe there’s a more elegant solution I’m missing?
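    For anyone wanting to replicate this, here’s a rough sketch of what that intermediate nginx config could look like, loosely based on the standard nginx config from the Lemmy docs. The Service names (`lemmy`, `lemmy-ui`) and ports are assumptions, not taken from this thread; the config would live in a ConfigMap mounted into the nginx-unprivileged pod:

    ```nginx
    # Sketch only: upstream names and ports are placeholders based on the
    # standard Lemmy setup (backend "lemmy" on 8536, frontend "lemmy-ui" on 1234).
    server {
        listen 8080;

        location / {
            # Default to the frontend
            set $proxpass "http://lemmy-ui:1234";

            # ActivityPub clients negotiate via the Accept header;
            # send those requests (and all POSTs) to the backend instead.
            if ($http_accept = "application/activity+json") {
                set $proxpass "http://lemmy:8536";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
                set $proxpass "http://lemmy:8536";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy:8536";
            }

            proxy_pass $proxpass;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    The Ingress then targets the Service in front of this nginx rather than the Lemmy pods directly.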

  • dudeami0@lemmy.dudeami.win · 2 years ago

    I’m currently running the instance I’m responding from on Kubernetes. I published a Helm chart, and others are working on charts too. I feel that being able to quickly deploy an instance on Kubernetes will help a lot of smaller instances pop up, and it will eventually be a good method of handling larger instances once horizontal scaling is figured out.

      • Andreas@feddit.dk · 2 years ago

        Saved this comment. It claims that the Lemmy frontend and backend are stateless and can be scaled arbitrarily, as can the web server. The media server (pict-rs) and the Postgres database are the limitations to scaling. I’m working on deploying Lemmy with external object storage to solve media storage scaling, and there are probably some database experts figuring out Postgres optimization and scaling as well. None of the instances are big enough to run into serious issues with vertical scaling yet, so this won’t be a problem for a while.

        • blazarious@mylem.me · 1 year ago

          I’ve got my pict-rs backed by S3-compatible object storage, so that should scale well.
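          In case it helps anyone, a sketch of how that can be wired up as container environment variables. The variable names follow pict-rs’s `PICTRS__STORE__*` convention; the endpoint, bucket, and Secret names here are placeholders, not my actual setup:

          ```yaml
          # Sketch: pict-rs object-storage settings as env vars on the container.
          # Endpoint, bucket, region, and Secret names are hypothetical.
          env:
            - name: PICTRS__STORE__TYPE
              value: object_storage
            - name: PICTRS__STORE__ENDPOINT
              value: https://s3.example.com
            - name: PICTRS__STORE__BUCKET_NAME
              value: pictrs-media
            - name: PICTRS__STORE__REGION
              value: us-east-1
            - name: PICTRS__STORE__ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: pictrs-s3
                  key: access-key
            - name: PICTRS__STORE__SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: pictrs-s3
                  key: secret-key
          ```

          Keeping the credentials in a Secret rather than inline means the manifests can be published safely.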

          I had some issues with the image server, though. At one point I had multiple instances of it running at the same time, so that may have been the cause.

    • gabe565@lemmy.cook.gg · 2 years ago

      Yep, I’m still working on a Helm chart. Currently, each service is deployed with the bjw-s app-template Helm chart, but I’d like to combine it all into a single chart.

      The hardest part was getting ingress-nginx to pass ActivityPub requests to the backend, but we settled on a hack that seems to work well. We had to add the following configuration snippet to the frontend’s ingress annotations:

      nginx.ingress.kubernetes.io/configuration-snippet: |
        if ($http_accept = "application/activity+json") {
          set $proxy_upstream_name "lemmy-lemmy-8536";
        }
        if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
          set $proxy_upstream_name "lemmy-lemmy-8536";
        }
        if ($request_method = POST) {
          set $proxy_upstream_name "lemmy-lemmy-8536";
        }
      

      The value of $proxy_upstream_name is $NAMESPACE-$SERVICE-$PORT.
      I tested this pretty thoroughly and haven’t been able to break it so far, but please let me know if anybody has a better solution!
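      To make that concrete, here’s a hedged sketch of a full frontend Ingress with the annotation in place. The namespace `lemmy`, backend Service `lemmy` on port 8536, frontend Service `lemmy-ui` on port 1234, and hostname are all assumptions for illustration:

      ```yaml
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: lemmy-ui
        namespace: lemmy
        annotations:
          # "lemmy-lemmy-8536" = $NAMESPACE-$SERVICE-$PORT for the backend Service
          nginx.ingress.kubernetes.io/configuration-snippet: |
            if ($http_accept = "application/activity+json") {
              set $proxy_upstream_name "lemmy-lemmy-8536";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
              set $proxy_upstream_name "lemmy-lemmy-8536";
            }
            if ($request_method = POST) {
              set $proxy_upstream_name "lemmy-lemmy-8536";
            }
      spec:
        ingressClassName: nginx
        rules:
          - host: lemmy.example.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: lemmy-ui  # frontend; ActivityPub traffic is rerouted by the snippet above
                      port:
                        number: 1234
      ```

      Note that the backend Service (here `lemmy` on 8536) still has to be reachable by the ingress controller for the $proxy_upstream_name override to resolve.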

      • anthr76@lemmy.kutara.io (OP) · 2 years ago

        Firstly, awesome to hear you’re using the bjw-s app-template Helm chart. He’s my good friend and former coworker :)

        I’m also doing what @[email protected] is doing.

        While I don’t consider this complete yet, I’ve posted how I’m doing things so far here.

        • gabe565@lemmy.cook.gg · 2 years ago

          That’s awesome! I love his Helm chart. It’s the most impressive Helm library I’ve ever seen. I maintain a bunch of charts and I exclusively use his library chart :)

          I just mentioned this in a response to @[email protected], but I feel like deploying a separate nginx is probably cleaner; I just didn’t want another SPOF that I could break at some point in the future.