@hoppolito - eviltoast
  • 0 Posts
  • 96 Comments
Joined 1 year ago
Cake day: November 23rd, 2024

  • I mean there are better tools for this, such as TMSU or tagfs, which I think are better approaches to having (part of) your file system displayed in a non-hierarchical way - or rather in a dynamic hierarchy (a quick sketch of what that looks like is below).

    But the other poster is also right in that your system file system, i.e. root and other operating-system-critical paths, does not benefit from such an approach. I think asking for such a thing sounds a bit like an XY problem and requires listing the actual problems you want an alternative approach to solve first.
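
    To give a rough idea of the tag-based side of this, here is a minimal sketch of driving TMSU from Python (assuming tmsu is installed and on your PATH; the file paths and tag names are purely made up for illustration):

    ```python
    import subprocess

    def tmsu(*args: str) -> str:
        # Thin wrapper around the tmsu CLI; raises if the command fails.
        return subprocess.run(["tmsu", *args], check=True,
                              capture_output=True, text=True).stdout

    tmsu("init")  # one-time: create a tag database in the current directory

    # Tag some files (illustrative paths and tags).
    tmsu("tag", "notes/invoice-2024.pdf", "finance", "2024")
    tmsu("tag", "notes/photo.jpg", "family", "2024")

    # Query files by tag instead of by directory location.
    print(tmsu("files", "2024"))                    # everything tagged 2024
    print(tmsu("files", "finance", "and", "2024"))  # both tags

    # 'tmsu mount <dir>' exposes the same queries as a browsable virtual
    # file system, which is the 'dynamic hierarchy' part.
    ```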


  • Additionally, while technically imbued with ‘meaning’, even the number 420 itself is somewhat meaningless and was originally used to delineate those who knew from those who didn’t. It’s just that it got famous enough that by now almost all of us know.

    In that sense I would argue it served more or less the same function as 67.




  • I think Isaac is a great game, but it’s also from right around the time when one of my greatest gaming pet peeves began - merging the concepts of roguelite and roguelike and calling them all the latter.

    Roguelite was such a linguistic stroke of genius: it differentiated these games from the classic genre - turn-based top-down movement, no-knowledge procedural levels, permadeath, a craft/fight/plunder core loop - while still being self-describing as something which keeps some of those elements yet is lighter to digest.

    …and then we just discarded the term and chose to call them all roguelike with, for me, no discernible advantage.


  • As far as I know that’s often what is done, but it’s a surprisingly hard problem to solve ‘completely’, for two reasons:

    1. The more obvious one - how do you define quality? With the amount of data LLMs require as input and produce as output, you have to automate these quality checks, and in one way or another it comes back around to some system having to define this score and judge against it.

      There are many different benchmarks out there nowadays, but it’s still virtually impossible to just have ‘a’ quality score for such a complex task.

    2. Perhaps the less obvious one - you generally don’t want to ‘overfit’ your model to whatever quality scoring system you set up. If you fit too closely to it, your model typically won’t be generally useful anymore; it will just output things which exactly satisfy the scoring criteria and nothing else.

      If it reached a theoretical perfect score, it would just end up being a replication of the quality score itself (a toy sketch of this effect is below).
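
    To make that second point concrete, here is a deliberately silly toy sketch - not how real training works, just the Goodhart’s-law effect: if the automated ‘quality score’ only counts a few keyword matches, an optimizer pointed at that score ends up emitting the keywords and nothing else.

    ```python
    import random

    KEYWORDS = {"helpful", "accurate", "concise"}  # made-up proxy for 'quality'
    VOCAB = list(KEYWORDS) + ["the", "cat", "sat", "on", "a", "mat", "today"]

    def quality_score(text: str) -> int:
        # Naive automated judge: one point per quality keyword present.
        return sum(1 for word in KEYWORDS if word in text.split())

    def optimize_for_score(steps: int = 2000) -> str:
        # Randomly sample six-word 'answers' and keep the best-scoring one.
        best, best_score = "", -1
        for _ in range(steps):
            candidate = " ".join(random.choices(VOCAB, k=6))
            score = quality_score(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best

    print(optimize_for_score())
    # Typically prints something like 'concise helpful accurate on a mat':
    # a perfect score for this judge, useless as an actual answer.
    ```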


  • Luanti and Minecraft are two distinct, if similar-looking things.

    Luanti is an open-source voxel game engine which allows running a wide variety of different ‘games’ on it (including two which mimic Minecraft very closely, like the above-mentioned Mineclonia).

    Minecraft is the closed-source game owned by Mojang.

    The two don’t interact, and servers for one are completely unrelated to the other as well.

    So, to answer the question - yes, they still need a Minecraft license if they want to play Minecraft. But this is disconnected from having a Luanti server, for which you don’t need any license but which will in turn only let you play Luanti content, not Minecraft.








  • I think you really nailed the crux of the matter.

    With the ‘autocomplete-like’ nature of current LLMs the issue is precisely that you can never be sure of any answer’s validity. Some approaches try to address this by showing ‘sources’ next to the answer, but that doesn’t mean the sources’ findings actually match the generated text, and it’s not a given that the sources themselves are reputable - so you’re back to perusing them yourself to make sure anyway.

    If there were a meter of certainty next to the answers, this would be much more meaningful for serious use-cases, but of course by design such a thing seems impossible to implement with the current approaches (more on that below).

    I will say that in my personal (hobby) projects I have found a few good uses for letting the models spit out some guesses, e.g. for the causes of a programming bug or for directions to research, but I am just not sold that the weight of all the costs (cognitive, social, and of course environmental) is worth it for that alone.
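
    For what it’s worth, the closest thing to a ‘certainty meter’ I’m aware of is aggregating the per-token log-probabilities many APIs can return - but that only measures how confident the model is in its own wording, not whether the answer is actually true, which is exactly the problem above. A minimal sketch of that proxy (the numbers are made up, and there are many ways to aggregate):

    ```python
    import math

    def answer_confidence(token_logprobs: list[float]) -> float:
        # Geometric-mean probability of the generated tokens:
        # exp(average log-probability). Ranges over (0, 1]; higher means
        # the model found its own continuation more 'expected'.
        return math.exp(sum(token_logprobs) / len(token_logprobs))

    # Made-up log-probabilities for two hypothetical answers.
    fluent_but_maybe_wrong = [-0.05, -0.10, -0.02, -0.08]
    hedged_and_uncertain   = [-1.20, -0.90, -2.10, -1.50]

    print(round(answer_confidence(fluent_but_maybe_wrong), 2))  # ~0.94
    print(round(answer_confidence(hedged_and_uncertain), 2))    # ~0.24
    ```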




  • I’ve been exclusively reading my fiction books (all EPUBs) on Readest and absolutely love it. Recently I also started using it for my nonfiction books and articles (mostly PDFs) as an experiment, and it’s workable but still a little rougher around the edges.

    You can highlight and annotate, and export all annotations for a book once you are done; for the latter I have set up a small pipeline to directly import them into my reference management software (sketched below).

    It works pretty well with local storage (though I don’t believe it auto-imports new files by default), and I’ve additionally been using their free hosted offering to sync my reading progress. It’s neat and free up to 500 MB of books, but you’re right that I would also prefer a bring-your-own-storage solution, perhaps in the future.

    The paid upgrades are mostly for AI stuff and translations which I don’t really concern myself with.
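
    In case the annotation pipeline sounds fancier than it is: the sketch below assumes a JSON export with a ‘text’ and an optional ‘note’ field per highlight (Readest’s actual export format may differ) and just turns it into a plain-text notes file that my reference manager can attach.

    ```python
    import json
    from pathlib import Path

    def annotations_to_notes(export_path: str, out_path: str) -> None:
        # Assumed structure: a list of {"text": ..., "note": ...} objects.
        # Adjust the field names to whatever the real export contains.
        annotations = json.loads(Path(export_path).read_text(encoding="utf-8"))
        lines = []
        for ann in annotations:
            lines.append(f"> {ann.get('text', '').strip()}")
            if ann.get("note"):
                lines.append(f"  {ann['note'].strip()}")
            lines.append("")  # blank line between entries
        Path(out_path).write_text("\n".join(lines), encoding="utf-8")

    # Example: annotations_to_notes("book-highlights.json", "book-notes.txt")
    ```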



  • “Open source/selfhost projects 100% keep track of how many people star a repo, what MRs are submitted, and even usage/install data.”

    I feel it is important to make a distinction here, though:

    GitHub - the for-profit, non-FOSS, Microsoft-owned platform - keeps track of the ‘stars of a repo’, not the open-source self-host projects themselves. If somebody hosts their repo on Codeberg, sr.ht, their own infrastructure or even GitLab, there’s generally very little to no algorithmic number-crunching involved. Same goes for MRs/PRs.

    Additionally - as far as I know - very few fully FOSS programs have extensive usage/install telemetry, and even fewer make it opt-out. Tracking which can’t be disabled at all is something I’ve essentially never heard of in that space, because every time someone goes in that direction the public reaction is usually very strong (see e.g. Audacity).