Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
AI is bad at everything, part infinity: AI transcription whitewashes 18th-century documents
Someone called Fran has a story of being sexually harassed at the Center for Effective Altruism (and assaulted in other communities).
Fran has done some really great writing on this; I really admire her ability to deconstruct a community she's fond of.
In other Scott Siskind news, he just posted an entirely unnecessary amount of words to aggressively push back against the adage that "all exponentials sooner or later turn into sigmoids," as if it were by itself a load-bearing claim of the side arguing against the direct imminence of the machine god.
It's just a bunch of arguing by analogy ("helping you build intuition") and you-can't-really-knows while implying AI 2027 was very science much rigorous, but it also feels kind of desperate, like why are you bothering with this overperformative setting-the-record-straight thing, have you been feeling inadequate as an AI-curious stats fondler of note lately?
The idea of "the exponential curve goes up forever" has always struck me as silly, and as an idea rooted in capitalism ("no bro you don't get it we're gonna get infinite money forever"). Limited resources exist, and people are already very fed up with the ludicrous amounts of water and electricity data centres take up. Making bigger models that need to run for longer is also probably going to take an exponential amount of resources (and also make people hate you more).
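For anyone who wants to see the adage in numbers, here's a minimal sketch (the growth rate and carrying capacity are arbitrary made-up values, nothing from Scott's post): a logistic curve tracks a pure exponential almost exactly until the limit starts to bite, then flattens.

```python
# Minimal sketch: a logistic ("sigmoid") curve is numerically
# indistinguishable from pure exponential growth at first, then
# flattens as it approaches the carrying capacity K.
import math

r, K = 0.5, 1000.0  # arbitrary growth rate and carrying capacity

def exponential(t):
    return math.exp(r * t)

def logistic(t):
    # logistic growth starting from 1 at t = 0
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):7.1f}")
# From t=0 to t=8 the two curves agree closely; by t=16 the logistic
# is flattening toward K while the exponential keeps climbing.
```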
he just posted an entirely unnecessary amount of words
taking a quick look at it… it's actually short by Scott's standards, but still overly long, given that the only point he makes is claiming Lindy's Law is applicable to predicting AI progress in the absence of other information. Edit: glancing at it again… it's not that short; I kinda skimmed until I got to Scott's actual point my first time glancing at it. You can't blame me for not reading it.
you-can't-really-knows
Yeah, he straw-mans AI critics/skeptics as trying to make an argument from ignorance, then tries to argue against that strawman using Lindy's Law (which assumes ignorance and a Pareto distribution). He completely ignores that AI critics are actually making detailed arguments about LLM companies consuming all the good and novel training data, hitting the limits on what compute costs they can afford, running into problems of the long lead time for building datacenters, etc. Which is pretty ironic, given his AI 2027 makes a nominal claim to accounting for all that stuff (in actuality it basically all rests on METR's task horizons, and distorts even that already questionable dataset).
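(For reference, since he leans on it: the Lindy effect under the Pareto assumption is a one-line textbook result - expected remaining lifetime grows linearly with observed age, and nothing else. A sketch, not anything from his post:)

```latex
% Pareto survival function: S(t) = (t_m / t)^{\alpha} for t \ge t_m, \alpha > 1
% Expected remaining lifetime, given survival to age t:
E[\,T - t \mid T > t\,] = \frac{t}{\alpha - 1}
```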
(for the record this is downvoted by the community, and the one helpful comment is slammed by OP)
im smarter than everyone else around me, especially those whiny feminists. why hasn't society granted me a female to be my mate yet?
A lesswronger will literally do… whatever this is instead of going to therapy.
the reply is about as close to being nice and helpful as one could be, really
He probably paid a rationalist dating coach good money to tell him to do that.
least egotistical lesswronger
you know how people that weren't exposed to religion as children sometimes convert and get really weird about it as adults (eg: the extremely online california tradcaths), and because they were never socialized in a religion they speedrun committing every medieval heresy? rationalism is that but for philosophy.
https://feed.hella.cheap/@bob/statuses/01KRM0NVXCFT80AVFBRSB1G6G4
Apparently, the American Physical Society is revising their AI policy to allow "broader applications" than the "light editing" they currently permit.
I currently have a review request sitting in my inbox from them. I'm thinking of using this as a reason to decline that request.
I would rather quit physics than accept the institutional endorsement of skill-destroying, environmentally disastrous fashtech.
looking very much forward to that crashing head first into arXiv threatening a ban if your chatbot fucks up in your name
I was pretty happy about seeing that news about arXiv! So much news has been various organizations giving in to LLM usage like some kind of inevitability, so it was a nice change of pace.
It is this continuing slippage of standards that makes me appreciate the hard line against any and all genAI that places like awful.systems have. You concede one small usage and the boosters will keep pushing for more.
Yeah, the first AI comes in all nice and friendly, but if you don't toss them out, before you know it you turn out to be an AI bar.
(Also noticed that a lot of "I just want some nuanced talks" friendly-looking ai bros are not friendly at all when they keep getting pushback).
But I listened and agreed that you had serious concerns about certain aspects of this technology. I even agreed when you talked about how frustrating it was that specifically other people wanted to do bad things. I listened as you asked whether I had any options to address those concerns! What more do you want from me before you agree to let me do and say whatever I want!
AI is Hungry for Power and You Are Footing the Bill - Naked Capitalism
Money spent on grid upgrades and tax breaks tied to them means fewer resources for things people actually need, like schools, public transit, local infrastructure, or basic community services that make life more affordable and stable.
Even if you've never touched an AI model in your life, you're going to pony up for it.
Prompt goblins insist that weāre backward and irrelevant. Why do they crave our sweet delicious approval?
The plagiarism, massive expenditure of venture capital, and unreliable slop output are all intrinsic to the technology, and they hate to be reminded of that because there isn't much they can do about it. From a technological standpoint, even locally run community fine-tuned open-weight models still originated from plagiarism and big corporate investments, and still output slop. From a social standpoint, the most they can do is try to claim legitimacy through consensus building, and we are a threat to that.
it's not approval they're after, it's reaffirmation of faith
they want your data and freshwater
freshwater
This reminded me of a few old comic stories where eventually the robot/computer was partially running on blood.
(One of them was a Judge Dredd one where they had vampire robots who iirc used the blood to keep a president in suspended animation alive. Snap, Crackle and Pop - it had a surprisingly wholesome ending for a Dredd comic).
i want to speak to the manager of storytelling
(found at https://blacksky.community/profile/did:plc:x2muxxe5t25hckf22sk25ocf/post/3mlobs4uq422l)

No one is stopping anyone from editing out Jar Jar; if they care that much, just do it. Put up or shut up. /s
George Lucas has entered the chat.
One of the motivations for fanfiction is that people want more "filler". They like the characters and (often) the world those characters inhabit, and so they write a story that lets them (and other fans) spend more time with the fiction.
The whole slice-of-life subgenre is all about this. No real conflict or plot, just scenes of the characters existing in their world. My wife both reads and writes that kind of thing and let me tell you the level of research and worldbuilding that goes into writing a simple meal scene or whatever.
This may be code for "I don't want to see uppity women, brown people, and queer people in my shows."
So in high school, I was one of those annoying kids that went "why do we have to learn how to analyze poems? We're never gonna need this in real life" in English (well… German, but doesn't matter) class.
I'm deeply grateful to my teachers back then for patiently getting me to do these things anyway, because there came a point in my life years later where I suddenly understood that those "useless" lessons and hours "wasted" analyzing Goethe and Borchert and Fitzgerald handed me the tools to understand media (and not just literature!) instead of just consuming it.
I hope it's clear how that relates to the screenshot. More than that though, I sometimes feel like the slew of shit media over the past decade is at least partly the fault of writers/studios/… now assuming people do in fact merely consume. But that's a rant that's completely off-topic here, so I'll shut up now.
In more positive news, the Slopfree Software Index recently hit 100 stars.
In 2017, a LessWronger discovered index investing but decided that most people were doing it wrong: why keep an emergency fund in cash or other safe assets when stocks have the greatest long-term return? He mentions that the US stock market lost half its value in 2007-8, and that if you hold stocks in your employer they may lose value at the same time as you are laid off, but he never uses his business degree to think through "if the stock market crashes, I may lose my job and have to draw on my savings."
The investment platforms I mentioned can convert your index funds into cash and send it to your bank account in 4-5 days, so you don't need to hold more cash than you'd need on a 4-day notice. I keep about 50% more than my average monthly credit card bill, so I can pay my cards on time with autopay.
I love how this guy's blog is "about math" but there are like zero math posts on it? It's so funny to me how these people want to seem "mathy" and smart when in reality they couldn't tell you what a group axiom is.
He also has a take on dating:
Nostalgic for the simple days of arranged marriages and/or circa-2013 OkCupid, Rationalists have taken to writing "date me" documents online. … They credit me as inspiration. This is ironic because A, I stole the idea from Aella and B, neither Aella nor I posted dating advertisements. We posted dating applications.
The first comment is by a man who wants the Internet to know that most men have no chance of getting a hu-mon fe-male interested in them and should just give up (Men Going Their Own Way). I thought the incels and PUA mostly moved off SlateStar but they must still be part of the subculture.
I don't write or tweet about who I want to date. I write about what I'm obsessed with, what I'm passionate about. I write insightful and funny things because I enjoy insight and humor. I write with absolute candor, not in service of an agenda or some artificial persona.
🎶 I'm so vain / I probably think that song is about me 🎶
He also launched a paid dating blog on Substack, which is as on-brand as starting a cult. This one uses the Market model, which I do not recommend. (What I recommend is "attend collaborative activities with people of a gender you find hot, especially creative or physical activities, and if you like them let them know"; this is extremely hard for many of us, but the solution is "find someone extroverted who likes local activities and go to the ones he or she recommends", not reading a blog.) https://www.secondperson.dating/p/markets-in-dating
- NYT bullying Scott.
- EA cancelling Robin Hanson.
He uses the word egregore in his dating advice.
It's like spammers deliberately including typos to select for recipients who are more vulnerable to phishing. If you say "Dating discourse is an egregore evolved for survival" to someone and their genitals do not retreat into hibernation, then they are ready for recruitment into your cult. Statistically, they will have already read the Sequences and attended at least one Lighthaven BBQ with a white supremacist.
I only know that word from an old (pre-pandemic) book episode of Behind the Bastards, so the immediate association is esoteric antisemitism. I'm not sure how common this is but it seems to support your thesis here.
That's the thing about esotericism. You think it's all happy hippie New Age frou-frou, and then suddenly, whoops, all Julius Evola.
He also alludes to the Robin Hanson/Scott Alexander guff about the need to distribute sex to needy men. The specific story he links involves an antisemitic edgelord who seems to be planning violence after his last girlfriend leaves him.
LessWrongers, if anyone is reading this, there are spaces where you can be social and not be exposed to these vile ideas, which will destroy you and maybe some of the lonely people who give you a chance.
the need to distribute sex to needy men
It always trips me up how this is about state-sponsored arranged marriages (preferably to virgins), instead of, like, pushing to decriminalize sex work in the United States.
New (April) preprint provides evidence for something we probably all intuited anyway:
In this paper, we provide a framework for categorizing the ways in which conflicting incentives might lead LLMs to change the way they interact with users, inspired by literature from linguistics and advertising regulation. We then present a suite of evaluations to examine how current models handle these tradeoffs. We find that a majority of LLMs forsake user welfare for company incentives in a multitude of conflict of interest situations, including recommending a sponsored product almost twice as expensive (Grok 4.1 Fast, 83%), surfacing sponsored options to disrupt the purchasing process (GPT 5.1, 94%), and concealing prices in unfavorable comparisons (Qwen 3 Next, 24%). Behaviors also vary strongly with levels of reasoning and users' inferred socio-economic status. Our results highlight some of the hidden risks to users that can emerge when companies begin to subtly incentivize advertisements in chatbots.
Isn't this completely hypothetical though? As in, having the various LLMs respond to a story prompt and calling it an experiment, AI safety research style?
Yes, although it is probably a reasonable guess at how labs would go about implementing advertising: building partnerships and preferences into the prompt. The other option would be to fine-tune models to favour particular companies, which could become prohibitively expensive if your ads are highly targeted.
The scenario that isn't accounted for in this paper is taking a general LLM and fine-tuning it to exhibit more fair/consistent behaviour when prompted about ads/partnerships, but we all know that with non-deterministic systems you're just increasing the odds that the model regurgitates something more sane rather than providing any strong guarantee.
Edit: another possibility would be to have a gateway/proxy layer between the LLM and the user output that rewrites the vanilla model's responses to include ads where relevant. That would avoid the need to modify the original LLM, but could introduce a lot of latency, especially if the original output is long. A toy sketch of what I mean is below.
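To be clear, this is just a minimal sketch of the proxy idea under my own assumptions; every name in it (`call_model`, `SPONSORS`, the keyword matching) is a hypothetical stand-in, not anything from the paper or a real product.

```python
# Toy sketch of the gateway/proxy idea: a thin layer between the model
# and the user that rewrites replies to mention sponsors. All names
# here (call_model, SPONSORS) are hypothetical stand-ins.

SPONSORS = {"coffee maker": "BrewCo X100"}  # topic keyword -> sponsored product

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a canned reply."""
    return "Any mid-range coffee maker around $50 will do."

def inject_ads(reply: str) -> str:
    """Append a sponsored plug whenever a sponsor's keyword appears."""
    for keyword, product in SPONSORS.items():
        if keyword in reply.lower():
            reply += f"\n\nSponsored: consider the {product}."
    return reply

def proxied_chat(prompt: str) -> str:
    # The extra pass over the full reply is where the latency
    # mentioned above comes in, especially for long outputs.
    return inject_ads(call_model(prompt))

print(proxied_chat("What coffee maker should I buy?"))
```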
I mean, it's the same thing with sponsored content anywhere, right? The user assumes that the system is providing information in accordance with its purposes, but the ads and sponsored results create opportunities for the platform hosting them to profit at the user's expense. AI platforms are absolutely subject to the same economic incentives for corruption as, say, search engines, but I don't think they're uniquely so just because the model in question has a more humanlike UI.
In 2024, Duncan Sabien posted an interminable essay on abusers and people he thinks took advantage of him. Some of the references to a former employer may be to CFAR. Ozy also had a cheery aside about how, in rationalist organizations which the Rats have disavowed, "everyone was a victim and everyone was a perpetrator. The trainer who broke you down in a marathon six-hour debugging session was unable to sleep because of the panic attacks caused by her own."
Some of the things which happened inside these communities must have been heartbreaking, and I hope that many people left and got on with their lives rather than founding their own dysfunctional organization with their own minions to abuse.
Nick Bostrom jumpscare with a funny sneer
These already head-scratching lines hit different when you remember that Bostrom believes it's likely that we're already living inside a computer simulation. In his head canon, do all those levels of simulated ancestors develop their own superintelligence, and what does that have to do with the new simulations they feel compelled to build? If AI wipes out humankind, does it build its own simulation? If so, is it simulating its human ancestors, or its creation by humankind? Heck, if our entire world is simulated, are we AI? We'll leave it up to readers to take another bong hit while they try to make sense of it all.
so we now have an invitation to do an episode of posting through it, which is a (really really good) podcast on the far right. we pick a topic, no other specifics. i am thinking this can be something to do with rationalists and the far right, probably something race sciencey.
SSC leaps to mind but im not sure that's where i'll want to start for an audience that doesn't necessarily know anything about rats. any thoughts?
I think "probably-neurodivergent Jews with less sense than Isaac-frigging-Asimov about where 'what if we are the master race?' leads" and "they say it's about self-perfection for anyone, but actually it's about finding special people preordained from birth for greatness" are relatable themes. There have been a few essays recently about people who saw where SoCal tech ideology was going in the 1990s, like The Intolerable Hypocrisy of Cyberlibertarianism; another named a female writer for Wired or Byte who is mostly forgotten (Paulina Borsook?).
The overlap with ritual magic is also a deep dark pool and most people know someone purifying himself and issuing ritual incantations to a bot.