

Exactly that, yeah. Thank you for the link.
I am also ‘Andrew’, the admin of this server. I’ll try to remember to only use this account for posting stuff.


It’s straightforward enough in back-end code to just reject a query if parameters are missing, but I don’t think there’s a way to define a schema that then gets used to auto-generate the documentation and validate the requests. If the request isn’t validated, then the back-end never sees it.
For something like https://freamon.github.io/piefed-api/#/Misc/get_api_alpha_search, the docs show that ‘q’ and ‘type_’ are required, and everything else is optional. The schema definition looks like:
/api/alpha/search:
  get:
    parameters:
      - in: query
        name: q
        schema:
          type: string
        required: true
      - in: query
        name: type_
        schema:
          type: string
          enum:
            - Communities
            - Posts
            - Users
            - Url
        required: true
      - in: query
        name: limit
        schema:
          type: integer
        required: false
‘required’ is a simple boolean for each individual field - you can say every field is required, or that no fields are, but I haven’t come across a way to say that at least one field is required.
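For what it’s worth, JSON Schema’s anyOf can express ‘at least one of’, but as far as I know only for a request body, not for independent query parameters - so it doesn’t help a GET endpoint. A sketch, with illustrative field names:

```yaml
# Works inside a POST requestBody schema, not for query parameters:
requestBody:
  content:
    application/json:
      schema:
        type: object
        properties:
          post_id:
            type: integer
          user_id:
            type: integer
          community_id:
            type: integer
        anyOf:
          - required: [post_id]
          - required: [user_id]
          - required: [community_id]
```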


PieFed has a similar API endpoint. It used to be scoped, but was changed at the request of app developers. It’s how people browse sites by ‘New Comments’, and - for a GET request - it’s not really possible to document and validate that an endpoint needs to have at least one of something (i.e. that none of ‘post_id’, ‘user_id’, or ‘community_id’ is individually required, but there needs to be one of them).
It’s unlikely that these crawlers will discover PieFed’s API, but I guess it’s no surprise that they’ve moved on from basic HTML crawling to probing APIs. In the meantime, I’ve added some basic protection to the back-end for anonymous, unscoped requests to PieFed’s endpoint.
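The kind of back-end guard described is simple enough to do by hand - a framework-agnostic sketch (the function name is mine; the parameter names are from the endpoint above):

```python
# Reject a GET request unless at least one scoping parameter is present --
# the 'at least one of' constraint that OpenAPI's per-field 'required'
# boolean can't express for query parameters.
def check_one_of(args, one_of=("post_id", "user_id", "community_id")):
    """Return an error message, or None if at least one scoping param is present."""
    if not any(args.get(name) for name in one_of):
        return "provide at least one of: " + ", ".join(one_of)
    return None
```

The handler would return a 400 with that message before doing any database work.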


The best way to provide ! links that work for the most people is just to type them out as plain text, not as a hyperlink to anything.
So, these communities can be found at:
!roughromanmemes@piefed.social
!politicalcompassmemes@piefed.social
!inhabitedbeauty@piefed.social
!historyartifacts@piefed.social
!noncredibledefense@piefed.social
Also, from what I can tell, they haven’t been moved using PieFed’s community migration facility (which squishes the old remote community into a new local one and retains the history, e.g. what happened with !casualconversation@piefed.social ). These are just brand new communities, starting from scratch.


I’ll just remove the ‘freamon’ one when the auto-generated one is up to date.
The manually-generated one had 5 missing routes, which I’ve since added.
The auto-generated one at crust has about 48 missing routes. It’s the right approach, and I’ll help out with it when I can, but - for now at least - it makes no sense to redirect people to it (either automatically or via a comment).
Some thoughts for @wjs018@piefed.social
/site/instance_chooser probably doesn’t need to be a route. It’s just the data format returned by /site/instance_chooser_search. As a route, it’s returning the instance info for the site you’re querying, so if you want to keep it as a route, it should probably be called /site/instance_info or something.
In the query for /site/instance_chooser_search, nsfw and newbie are both booleans. With the rest of the API, these are sent as ‘true’ or ‘false’, but they are ‘yes’ and ‘no’ for this route.
The newbie query should probably be newbie_friendly
In the response, monthsmonitored should probably be months_monitored
There’s no way to exclude communities from the response to /topic/list and /feed/list: if you don’t put ‘include_communities’ in the query, it defaults to True, but if you put ‘include_communities=false’ in the query it ends up being True as well (because the string ‘include_communities’ is present in the data).
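The usual fix is to parse the raw string explicitly rather than relying on its truthiness - a sketch (the function name is mine), which would also let the ‘yes’/‘no’ and ‘true’/‘false’ forms be handled uniformly:

```python
# The trap described above: any non-empty string is truthy, so checking the
# raw query value directly always comes out True -- bool("false") is True.
TRUTHY = {"true", "1", "yes", "on"}
FALSY = {"false", "0", "no", "off"}

def parse_bool(args: dict, name: str, default: bool = True) -> bool:
    raw = args.get(name)
    if raw is None:
        return default          # parameter absent: use the documented default
    value = raw.strip().lower()
    if value in TRUTHY:
        return True
    if value in FALSY:
        return False
    return default              # or reject the request as malformed
```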


I’m too sophisticated to watch movies, so I only really know of this line because it was sampled in a Guns n’ Roses song.


For this particular issue, perhaps not.
But the fact that both you and I are using PieFed instances and are participating in a comment chain started by a hexbear user demonstrates that there isn’t much ‘hard-coding’ against other sites.
The commit that @davel@lemmy.ml referenced is for initial database setup. It’s not an unreasonable default list to populate the ‘banned_instances’ table, and is trivial for admins to change after setup is complete.


I watched Jurassic World: Rebirth the other day (it’s alright). It’s such an odd franchise - one that seems to have lost faith in its own premise. There’s this meta assumption that audiences are bored with dinosaurs (I’m not), and that the solution to this imagined problem is to mutate them (it really isn’t, it’s invariably just silly).
I also don’t care that dinos couldn’t really survive in the modern climate - that’s what the whole ‘suspension of disbelief’ thing is for.


Your link is broken.
It’s probably not worth editing (vs. deleting), 'cos the video is also linked to in !games@lemmy.world, !gaming@lemmy.ml and again in this community too (a minute after you).


It doesn’t, no. The bot gets its data from another bot, at https://lemmyverse.net/, which only crawls Lemmy and MBIN instances at the mo.
We’d either need to send a PR to get that bot to crawl PieFed instances too, or just replicate the functionality from the same machine that runs ‘tcbot’. Communities would also need to provide their ‘active users / month’. It’s just the subscriber count currently, but hopefully that shouldn’t be too much of a problem.


I’m now concerned that I’ve unfairly brought PieFed into all this. It’s not my project, and it will continue to thrive irrespective of how much I do or don’t contribute to it.
I do, however, think that cm0002’s current project is doomed. The idea that the admins of Lemmy instances of any significant size will defed from ML on the promise that one person will continue to be willing and able to replicate missing content, presumably forever, is not one I can foresee succeeding. If the admins of lemmy.ml weren’t also the devs, then maybe, but otherwise no.
It was this approach that I was attempting to criticize, not any fundamental political disagreements.


I mean, sure. If they recognize what’s happened. We can hope that every person who sees a post just visits the original source, or I can keep hoping that just one person stops needlessly fragmenting everything. Whatever’s easier, I suppose.


Maybe we both do. First link I found: https://lemm.ee/post/66363651 - OC content, no attribution, 198 votes and 3 comments that could’ve gone to the creator of that meme, but didn’t. There’s a comment from OP in this thread saying that they just use the built-in cross-post functionality (crediting the author would require extra steps).


On the off-chance someone creates some original content, it’s likely that they’ll want their name to be associated with it (as the post author), and I assume that they’re motivated to do it by the potential engagement they’ll receive (in terms of replies and upvotes).
Should they commit the crime of submitting the post to a lemmy.ml community though, then cm0002’s bot will come along, and post it elsewhere. The attribution will be lost, and the post will be newer and possibly on a bigger server, so is likely to get more engagement.
It’s not like everybody who posts to a ML community is a tankie or whatever, but they lose out because of some spat they have nothing to do with.


I’d be wary of getting a conversation node from anybody other than the original author (as described in the second approach).
There’s a reason why, if you want to resolve a missing post in Lemmy, etc, you have to use the fedi-link to retrieve it from its source, not just from any other instance that has a copy (because, like the “context owner”, they could be lying).
For Group-based apps, conversation backfill is mostly an issue for new instances, who might have a community’s posts (from its outbox), but will be missing old comments. Comments can be automatically and recursively retrieved when they are replied to or upvoted by a remote actor, but fetching from the source (as you arguably should do) is complicated by instances closing (there’s still loads of comments from feddit.de and kbin.social out there - it will be much worse when lemm.ee disappears). So perhaps Lemmy could also benefit from post authors being considered the trusted owner of any comments they receive.
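The recursive retrieval described could be sketched roughly like this (my own illustrative code, not PieFed’s or Lemmy’s): walk a comment’s inReplyTo chain back to the post, where `fetch` stands in for an HTTP GET of the object’s canonical id URL with `Accept: application/activity+json` - i.e. always fetching each object from its own origin rather than trusting a third-party instance’s copy.

```python
# Backfill a comment chain by following inReplyTo links to the top-level post.
# `fetch` takes an ActivityPub object id (URL) and returns the object as a dict;
# fetching from each object's own instance is what makes the chain trustworthy.
def backfill_thread(comment_id, fetch):
    chain = []
    current = comment_id
    while current is not None:
        obj = fetch(current)             # fetch from the object's origin server
        chain.append(obj)
        current = obj.get("inReplyTo")   # None once we reach the top-level post
    return chain
```

This is also where closed instances bite: if an object’s origin (feddit.de, kbin.social, eventually lemm.ee) is gone, the fetch fails and the chain can’t be completed from the source.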


Sure, okay, let’s say I’m demoralised and depressed about all that too. My potential for unhappiness knows no limit.
It doesn’t change the fact that I find the ways in which you’re perpetuating your endless feud with Lemmy’s developers to be overly disruptive. As a compromise, would you at least consider packing in the reposting of questions asked to asklemmy@lemmy.ml to the community on world?
That’s not using existing functionality. It’s not a question for which you’re genuinely seeking an answer. The pollution from mindless crossposting of URL-based posts can be mitigated somewhat, but it’s a harder problem to solve for text-based ones. Also, traffic from lemmy.world is more than all the traffic from every other instance combined, so there’s no value in needlessly adding to it.
Thanks.


PieFed’s API has in very large part been added by me. There’s more to do before the likes of Boost could provide the same experience as with a Lemmy backend, but every time I look at this site and see your behaviour, I find it a bit demoralising and depressing.
I understand what you’re trying to do, with all this cross-post spam, and content-theft, and obsessive crawling through modlogs, but I highly doubt you’ll achieve your goals.
One thing you have achieved though, is to actively discourage me from Fediverse development. So, yeah, well done: this is one reason why it will indeed be a ‘hot minute’ before Boost can use PieFed, but considering that you seem to have rejected the entirely viable option of just ignoring lemmy.ml and getting on with your life, I don’t really believe you’ll ever want to move platforms anyways.


What is the update delay for Fediseer?
I don’t know. It’s not something I’m familiar with - it might just default to saying ‘closed’ if it doesn’t have the data.
It’s interesting that the obvious bot accounts on those instances were set up in mid-March last year, so I’m guessing these are somebody’s army that they’ve used before, and that they overplayed their hand when they turned it on the DonaldJMusk person. The admins can reasonably be blamed for setting up instances with open registrations and no protections and then forgetting about them, but I’d be wary of blaming them for being behind the attack directly. The ‘nicole’ person is unlikely to have used their own instance - it’s probably just someone with the same MO as whoever owns the bots, finding and exploiting vulnerable instances.
Speaking of being needlessly destructive with stupid bots, these duplicates of other users’ posts don’t even register as cross-posts anymore (due to image proxying).