Interests: programming, video games, anime, music composition

I used to be on kbin as [email protected] before it broke down.

  • 1 Post
  • 73 Comments
Joined 1 year ago
Cake day: November 27th, 2023

  • [email protected] to [email protected] · celeste rule · ↑9 ↓1 · 3 days ago

    I suggest using H264 instead of H265 for better compatibility. The video doesn’t play in my browser, and I suspect the codec is why: the audio works, but the video is just black. (It plays fine in another player like VLC, of course.)


  • Below, I’ve quoted a comment I wrote last year on kbin (RIP) about what I keep in my journal.

    I started keeping a daily journal about 10 years ago. It’s helpful for tracking what I worked on as well as various health issues. I skim through it once a week before talking to my therapist and read all entries from the past year when I need to prepare documentation for my annual performance review at work. I’ll grep through the whole thing occasionally when I’m trying to remember when some particular event was. (I don’t do that very often, but it is handy when I need it!)

    I typically track:

    • current date for the entry (both in the file and as the file name)
    • date and time I wrote the entry
    • when I went to bed
    • when I woke up
    • health issues (if any)
    • what I worked on (professionally and for my hobbies)
    • places I went (if anywhere)
    • significant conversations (particularly if there’s something I need to follow up on)
    • what I’m watching/reading/playing/etc.
    • anything else that seems noteworthy

    I keep my journal in plain text files named like YYYY-MM-DD.txt. Right now it’s all in one big folder. I have it in version control and back it up to various places occasionally. Eventually I’ll probably split it into one folder per year.

    I started doing this after someone came up to talk to me and I realized that I’d recognized him from a particular place a few years earlier but could not for the life of me remember his name!

    A notable change since then is that I’ve augmented the journal with a set of weekly “time card” files where I jot down a few words about what I’m doing each day as I do it – super useful for preparing summaries for my boss on what I got done each week, and it’s helped reduce some of my anxiety/depression problems. I keep that as a set of conceptually related but separate files. To be clear, I make those for my own use; work doesn’t require it, and I don’t share them verbatim with anyone. They’re just another tool to help me remember the things I want to make sure I don’t forget.


  • It’s not a particular protocol right now, but it would be a URI that refers to a specific resource. A protocol could also be defined – e.g. a restricted subset of HTTPS that returns JSON objects following a defined schema or something like that – but the point really is that I want to be able to refer to a thread not a webpage. I don’t think that’s a silly thing to want to be able to do.

    Right now, I can only effectively link to a post or thread as rendered by a specific interface – e.g. for me, this thread is https://old.reddthat.com/post/30710789 using reddthat’s mlmym interface. That’s probably not how most users would like to view the thread if I link it to them. Any software that recognizes the new URI scheme could understand that I mean a particular thread rather than how it’s rendered by a particular web app, and go fetch it and render it appropriately in their client. (If current clients try to be clever about HTTP links, it becomes ambiguous whether I mean the thread as rendered into a webpage in a specific way or whether I actually meant the thread itself but had to refer to it indirectly; that causes problems too.)

    I don’t think lemmy:// is necessarily the best prefix – especially if mbin, piefed, etc. get on board – just that I would like functionality like that very much, and that something like a lemmy URI scheme (or whatever we can get people to agree on) might be a good way to accomplish it.
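    To make the idea concrete, here is a minimal sketch in C of how a client might resolve such a URI. The lemmy:// scheme and the /api/v3 endpoint shape are assumptions for illustration, not an agreed-upon standard; resolve_lemmy_uri is a made-up helper name.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical: turn a "lemmy://instance/post/id" style URI into the
       HTTPS API endpoint a client might actually fetch. Both the scheme
       and the /api/v3 path shape are assumptions for the sake of example. */
    static int resolve_lemmy_uri(const char* uri, char* out, size_t out_len)
    {
        const char* prefix = "lemmy://";
        const char* rest;
        const char* slash;

        if(strncmp(uri, prefix, strlen(prefix)) != 0)
            return -1;                   /* not a lemmy URI */

        rest = uri + strlen(prefix);     /* e.g. "reddthat.com/post/30710789" */
        slash = strchr(rest, '/');
        if(!slash)
            return -1;                   /* no path component after the host */

        /* host is rest..slash; the path (e.g. /post/30710789) follows it */
        snprintf(out, out_len, "https://%.*s/api/v3%s",
                 (int)(slash - rest), rest, slash);
        return 0;
    }

    int main(void)
    {
        char buf[256];
        if(resolve_lemmy_uri("lemmy://reddthat.com/post/30710789", buf, sizeof buf) == 0)
            printf("%s\n", buf);  /* prints https://reddthat.com/api/v3/post/30710789 */
        return 0;
    }
    ```

    The point is only that the client, not the server’s web UI, decides how to render the resource it resolves.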


  • Not that I’m opposed, but I’m not sure it’s practical to make a fediverse-wide link that’s resolvable between platforms, since there are so many differences, little incompatibilities, and developers who don’t directly interact with each other – or even know the others exist!

    Even if it isn’t though, it would be nice to be able to do something like lemmy://(rest of regular url) to indicate data from a lemmy(-compatible) server that should be viewable by all other lemmy clients without leaving your particular client and having to open some other website.





  • Try adding some prints to stderr throughout my earlier test program, then, and see if you can find where it stops giving you output. Does output work before curl_easy_init? After it? Somewhere later on?

    Note that I did update the program to add the line with CURLOPT_ERRORBUFFER – that’s not strictly needed, but might provide more debug info if something goes wrong later in the program. (Forgot to add the setup line initially despite writing the rest of it… 🤦‍♂️️)

    You could also try adding curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); to get it to explain more details about what it’s doing internally if you can get it to print output at all.
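    For what I mean by checkpoint prints: something like the sketch below, which bisects where a program goes silent. The CHECKPOINT macro here is a hypothetical helper of my own, not part of libcurl; flushing stderr after each print matters so the marker shows up even if the program dies right after it.

    ```c
    #include <stdio.h>

    /* Hypothetical helper: print the current file/line to stderr and flush
       immediately, so the marker appears even if the program crashes next. */
    #define CHECKPOINT() do { \
            fprintf(stderr, "reached %s:%d\n", __FILE__, __LINE__); \
            fflush(stderr); \
        } while(0)

    int main(void)
    {
        CHECKPOINT();   /* does output work at all? */
        /* curl_easy_init() would go here */
        CHECKPOINT();   /* did we get past init? */
        /* curl_easy_perform() would go here */
        CHECKPOINT();   /* did the transfer return? */
        return 0;
    }
    ```

    Whichever marker fails to appear tells you roughly where things stop.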



  • As a sanity check, does this work?

    #include <curl/curl.h>
    #include <stdio.h>
    #include <stdlib.h>
    
    size_t save_to_disk(char* ptr, size_t size, size_t nmemb, void* user_data)
    {
        /* according to curl's docs size is always 1 */
        
        FILE* fp = (FILE*)user_data;
        fprintf(stderr, "got %zu bytes\n", nmemb);  /* nmemb is size_t, so use %zu */
        return fwrite(ptr, size, nmemb, fp);
    }
    
    int main(int argc, char* argv[])
    {
        char errbuf[CURL_ERROR_SIZE];
        FILE* fp = NULL;
        CURLcode res;
        
        CURL* curl = curl_easy_init();
        
        if(!curl)
        {
            fprintf(stderr, "Failed to initialize curl\n");
            return EXIT_FAILURE;
        }
        
        fp = fopen("output.data", "wb");
        if(!fp)
        {
            fprintf(stderr, "Failed to open file for writing!\n");
            curl_easy_cleanup(curl);
            return EXIT_FAILURE;
        }
        
        curl_easy_setopt(curl, CURLOPT_URL, "https://www.wikipedia.org");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, save_to_disk);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, fp);
        curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
        
        errbuf[0] = 0;  /* set error buffer to empty string */
        res = curl_easy_perform(curl);
        
        if(fp)
        {
            fclose(fp);
            fp = NULL;
        }
        
        curl_easy_cleanup(curl);  /* release the easy handle before exiting */
        
        if(res != CURLE_OK)
        {
            fprintf(stderr, "error code   : %d\n", res);
            fprintf(stderr, "error buffer : %s\n", errbuf);
            fprintf(stderr, "easy_strerror: %s\n", curl_easy_strerror(res));
            
            return EXIT_FAILURE;
        }
        else
        {
            fprintf(stderr, "\nDone\n");
            return EXIT_SUCCESS;
        }
    }
    

    That should write a file called output.data with the HTML from https://www.wikipedia.org and print out the number of bytes each time the write callback receives data for processing.

    On my machine, it prints the following when it works successfully (byte counts may vary for you):

    got 13716 bytes
    got 16320 bytes
    got 2732 bytes
    got 16320 bytes
    got 16320 bytes
    got 128 bytes
    got 16320 bytes
    got 16320 bytes
    got 1822 bytes
    
    Done
    

    If I change the URL to nonsense instead to make it fail, it prints text like this on my system:

    error code   : 6
    error buffer : Could not resolve host: nonsense
    easy_strerror: Couldn't resolve host name
    

    Edit: corrected missing line in source (i.e. added line with CURLOPT_ERRORBUFFER which is needed to get extra info in the error buffer on failure, of course)

    Edit 2: tweaks to wording to try to be more clear








  • Most of the posts here are webcomics made by pmjv, the poster of this thread – though other people can and sometimes do post their own art. pmjv makes a surrealist webcomic (see Analogue Nowhere) that incorporates various themes and imagery related to Unix-derived and Unix-adjacent operating systems.

    Note that the imagery isn’t drawn just from Linux (which famously has a penguin mascot), but also from OpenBSD (started by Theo de Raadt, with a pufferfish mascot), Plan 9 (whose bunny mascot is called Glenda), etc.

    There are elements from other fandoms (e.g. Cirno, who appears in some comics, is from Touhou and is associated with ⑨ because of an old joke – which seems to have become entangled with Plan 9 in pmjv’s mind), influences from tech politics, and whatever other crazy things are bouncing around in pmjv’s head.

    It’s good surrealist fun, generally.




  • Back before kbin fell off the internet it had a really neat experimental “collections” feature that would let you make named groups of communities. Collections could be used either privately or made public so other people could subscribe to your curated feed on a topic. The owner could update the collection as needed (e.g. adding or removing communities/magazines as they changed over time).

    It’s one of the kbin features I miss most on lemmy.

    Does anyone know if mbin ever got a copy of that? I know they forked off before it was added to kbin, but I don’t know if it ever got integrated later. (I don’t see it from a quick glance at moist and fedia, but I haven’t dug into the dev history.)