solarbird: (korra-on-the-air)
2025-07-17 08:43 am

sometimes, I think of ponies

Have you ever noticed that every projection about “AGI” and “superintelligence” has an “and then a miracle occurs” step?

I have.

I shouldn’t say every projection – there are many out there, and I haven’t seen them all. But every one I’ve personally seen has this step. Somewhere, sometime, fairly soon, generative AI will create something that triggers a quantum leap in capability. What will it be? NOTHING MERE HUMANS CAN UNDERSTAND! Oh, sometimes they’ll make up something – a new kind of transistor, a new encoding language (like sure, that’ll do it), whatever. Sometimes they just don’t say. Whatever it is, it happens, and then we’re off to the hyperintelligent AGI post-singularity tiems.

But the thing is … the thing is … for Generative AI to create a Magic Something that Changes Everything – to have this miracle – you have to already have hyperintelligent AGI. Since you don’t… well…

…that’s why it’s a miracle. Whether they realise it or not.

I’m not sure which is worse – that they do realise it, and know they’re bullshitting billions of dollars away from productive society to build up impossible wealth before the climate change they’re helping make worse fucks everything, so they can live like feudal kings from their bunkers; or that they don’t, and are spirit dancing, wanking off technofappic dreams of creating a God who will save the world with its AI magic – a short-term longtermism, burning away the rest of the carbon budget in a Hail Mary that absolutely will not connect.

Both possibilities are equally batshit insane, I know that much. To paraphrase a friend who knows far more about the maths of this than I, all the generative AI “compute” in the universe isn’t going to find fast solutions to PSPACE-HARD problems. It’s just not.

And so, sometimes, sometimes, sometimes, I think of…

…I think of putting a short reading/watching list out there, a list that I hesitate to put together in public, because the “what the actual fuck” energies are so strong – so strong – that I can’t see how anyone could take it seriously. And yet…

…so much of the AI fantasia happening right now is summed up by three entirely accessible works.

Every AI-fantasia idea, particularly the ideas most on the batshit side…

…they’re all right here. And it’s all fiction. All of it. Some of it is science-shaped; none of it is science.

But Alice, you know, we’re all mad here. So… why not.

Let’s go.

1: Colossus: The Forbin Project (1970)

This is the “bad end” you see so much in “projections” about AI progression. A new one of these timelines just dropped – they have a whole website you can play with. I’m not linking to it because why would I, holy shit, I don’t need to spread their crazy. But there’s a point in the timeline/story that they have you read – I think it’s in 2027 – where you can make a critical choice. It’s literally a one-selection choose-your-own-path adventure!

The “good” choice takes you to galactic civilisation managed by friendly hyperintelligent AGI.

The “bad” choice is literally the plot of The Forbin Project with an even grimmer ending. No, really. The beats are very much the same. It’s just The Forbin Project with more death.

Well. And a bioweapon. Nukes are so messy, and affect so much more than mere flesh.

2: Blindsight, by Peter Watts (2006)

This rather interesting – if bleak – novel presents a model of cognition which lays out an intriguing thought experiment, even if it … did not sit well with what I freely admit is my severely limited understanding of cognition.

(It doesn’t help that it directly contradicts known facts about the cognition of self-awareness in various animals, and did so even when it was published. That doesn’t make it a worse thought experiment, however. Or a worse novel.)

It got shortlisted – deservedly – for a bunch of awards. But that’s not why it’s here. It’s here because its model of cognition is functionally the one used by those who think generative AI and LLMs can be hyperintelligent – or even functionally intelligent at all.

And it’s wrong. As a model, it’s just wrong.

Finally, we get to the “what.” entry:

3: Friendship is Optimal, by Iceman (2012)

Friendship is Optimal is obviously the most obscure of these works, but also, I think maybe the most important. It made a big splash in MLP fandom, before landing like an absolute hand grenade in the nascent generative AI community when it broke containment. Maybe not in all of that latter community – but certainly in the parts of which I was aware. So much so, in fact, that it made waves even beyond that – which is when I heard of it, and how I read it.

And yes… it’s My Little Pony fanfic.

Sorta.

It’s that, but really it’s more an explicit AI takeoff story, one which is absolutely about creating a benevolent hyperintelligent Goddess AI construct who can, will, and does remake the world, destroying the old one behind her.

Sound familiar?

These three works include every idea behind every crazy line of thought I’ve seen out of the Silicon Valley AI crowd. These three works right here. A novel or a movie (take your choice, the movie’s quite good, I understand the novel is as well), a second novel, and a frankly remarkable piece of fanfic.

For Musk’s crowd in particular? It’s all about the model presented in Friendship is Optimal, except, you know, totally white supremacist. They’re even kinda following the Hofvarpnir Studios playbook from the story, but with less “licensed property game” and a lot more “Billionaire corporate fascism means you don’t have to pay employees anymore, you can just take all the money yourself.”

…which is not the kind of sentence I ever thought I’d write, but here we are.

You can see why I’m hesitant to publish this reading list, but I also hope you can see why I want to.

If you read Friendship is Optimal, and then go look at Longtermism… I think you definitely will.

So what’re we left with, then?

Some parts of this technology are actually useful. Some of it. Much less than supports the valuations, but there’s real use here. If you have 100,000 untagged, undescribed images and AI analysis gives 90% of them reasonable descriptions, that’s a substantial value add. Some of the production tools are good – some of them are very good, or will be, once it stops being obvious that “oh look, you’ve used AI tools on this.” Some of the medical imaging and diagnostic tools show real promise – though it’s always important to keep in mind that antique technologies like “Expert Systems” seemed just as promising, in the lab.

Regardless, there’s real value to be found in those sorts of applications. These tasks are where it can do good. There are many more than I’ve listed, of course.

But AGI? Hyperintelligence? The underlying core of this boom, the one that says you won’t have to employ anyone anymore, just rake in the money and live like kings?

That entire project is either:

A knowing mass fraud, inflating a bubble like nobody’s seen in a century – one that, instead of breaking a monetary system, might well finish off any hopes for a stable climate in an Enron-like cycle of AI-generated noise followed by AI-generated summarisation of that noise, which no one reads and serves no purpose and adds no value but costs oh, oh so very much electricity and oh, oh, oh so very much money;

A power play unlike anything since the fall of the western Roman empire, when the Church functionally substituted itself for the Roman government – operating in parallel to it until the latter finally collapsed – all in service of setting up a God’s Kingdom on Earth to bring back Jesus; only in this case, it’s setting up the techbro billionaires as a new nobility, manipulating the hoi polloi from above with propaganda and disinformation sifted through their “AI” interlocutors;

Or an absolute psychotic break by said billionaires and fellow travellers so utterly unwilling and utterly unable to deal with the realities of climate change that they’ll do anything – anything – to pretend they don’t have to, including burning down the world in the service of somehow provoking a miracle that transcends maths and physics in the hope that some day, some way, before it’s too late, their God AI will emerge and make sure everything ends up better… in the long term.

Maybe, even, it’s a mix of all three.

And here I thought my reading list was the scary part.

Silly me.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (fascist sons o bitches)
2025-03-09 10:04 am

mission unfortunately accomplished

Fuck, I hate the new generation of blog comment spambots.

See – some years ago, there was an XKCD comic about training spambots to make more and more accurate and relevant comments that ended with “What will you do when spammers train their bots to make automated constructive and helpful comments?” and “MISSION. FUCKING. ACCOMPLISHED.”

it was really pretty funny at the time

But now with LLMs it’s real easy to make spambots make reasonable comments that are actually talking about your blog posts. They’re on topic, they’re cogent, they’re positive – of course – and the ad is in the URL attached to the commenter name, thus far universally shilling one or another sort of AI tool.

Turns out, on-topic and cogent botposts are still just noise of emptiness, since, after all…

…the ad is the only reason they’re there…

…and you can’t not know that…

…so…

…it’s all clockwork and empty, and…

…sorry, Randall. Turns out it’s not Mission. Fucking. Accomplished. It’s more Mission. Unfortunately. Accomplished, and the punchline this time is that the mission itself kinda sucked.

I do think that as this ramps up – which it absolutely will, I mean, it’s incredibly obvious that it will, I don’t even know how you write spam filters against this – Federated comments from bot-disallowing instances will be the only thing keeping blog comments usable at all. Fortunately for me, that’s where most comments on this blog come from these days, so… maybe ActivityPub will save this, too. You never know.

But holy shit, once those spam-blockers stop mattering and LLM-generated boring-but-cogent comments start taking over, there’ll be no way to have even remotely reasonable blog comment sections on their own.

Not ones with actual people, anyway.

I was so excited about what’s now called LLMs in undergrad, too. The first time my incredibly primitive toy version talked back to me was like being struck by lightning in the best possible way, walking around in the lab shrieking “IT WORKS! IT’S ALIIIIIIIIIVE!” Then when I did my project demo, the head of the department was so disturbed by it that he literally left the room. Just walked out. I was expecting a bunch of Q&A and chat about the models, and all the commentary I’d written about the kind of data you’d actually need vs. the bullshit 450-ish word database with made-up numbers I was using, and talk about how to connect actual meaning to all this word-probability chewing – you know, the actual hard part that the LLM people just decided to leave out – but instead he just freaked and left.

And now, well, here we are. Instead of incredibly cool game engines and entirely new computer control systems, it’s all just… “what can we fuck up today?”

Y’know, I kinda liked Dr. M – the guy who walked out – even if I did make fun of him at times.

Because, well, honestly…

…maybe he was the smart one about all this after all.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (molly-thats-not-good-green)
2024-12-02 09:20 am

Explaining tariffs to gamers, and don’t rent OR buy from NZXT

Linus Tech Tips dropped a video on Sunday explaining tariffs to gamers, particularly gamer bros. They’re expecting some blowback over “politics” (pre-responding: look, bro, these numbers don’t lie), and while they’re definitely getting some, it’s not nearly as bad as I would’ve expected.

It’s good educational content, particularly when showing how the need to maintain not just a profit margin (in absolute dollars) but a percentage markup (in percent increase over cost) causes multiplicative, not merely additive, price increases in the face of tariffs.
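If you want the multiplicative effect in one screenful, here’s a back-of-the-envelope sketch – Python, with entirely made-up numbers; this is just the arithmetic, not anything from the video:

def retail_price(import_cost, tariff_rate, markup_rate):
    # The seller keeps a fixed percentage markup over landed cost.
    landed_cost = import_cost * (1 + tariff_rate)
    return landed_cost * (1 + markup_rate)

base = retail_price(100.00, 0.00, 0.30)  # $130.00 retail, no tariff
hit = retail_price(100.00, 0.25, 0.30)   # $162.50 retail, 25% tariff
print(f"${hit - base:.2f}")              # $32.50

The tariff on that $100 part was $25, but the customer pays $32.50 more, because the 30% markup applies on top of the tariffed cost. Every percentage-based step downstream – distributor, retailer – multiplies the increase again.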

In other computer news, Gamers Nexus’s video on NZXT’s shriekingly horrific terms, margins, and abusive and sometimes outright fraudulent handling of its “rental gaming PC” business has been tearing through the space. I’ve never seen Steve so angry. TLDR: don’t rent computers for gaming. Or at all, really.

Seriously, don’t, unless you’re a business or other organisation that has specialised needs and you’ve actually worked out the numbers. That space is entirely different anyway and if you’re not already in it, you’re not in it. For my readers? Just don’t.

But even within the terrible consumer rental space, NZXT’s rental programme is special. It’s literally worse than illegal payday loan scams. Meaningfully. And GN have the numbers to prove it! I’m not sure which is my favourite part – that one, or the part where NZXT’s dumping buttloads of money to have tiktok influencers lie specifically to kids.

It’s a big ad blitz, and you know that on some level they’re putting this out there now specifically to build up mindspace in the face of big price hikes coming on PCs, thanks to Trump’s tariffs. Particularly with teenagers, who are least likely to know better.

It’s absolute fire.

Or, well, should be.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (korra-on-the-air)
2024-10-25 11:30 am

Jeff Bezos and the shameful betrayal

Jeff Bezos ordered the Washington Post editorial board to withhold their endorsement of Harris for President in order to suck up to the fascist, serial rapist, Hitler-admiring convicted felon running against her.

That was happening right as Junior Spaceboy’s distant-third space launch company was meeting with Trump. Sure, go ahead. Tell me that’s a coincidence.

Here’s how to tell the Washington Post what you think about that, and how you’re never going to be a subscriber again.

Here’s where to tell Amazon, the corporation, what you think of that, and how you can tell them that, say, you’ll never be a member of Amazon Prime again, and will divert all possible purchases away from Amazon.

Maybe add that you’ll never trust anything owned or operated by Jeff Bezos again.

Given the nature of the betrayal to the Republic, I suggest that colourful language is appropriate.

10 days remain.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (korra-on-the-air)
2024-08-23 03:34 pm

disinformation creep or disinformation explosion?

Google’s dropping an easy-to-use AI photo editor with the Pixel 9 phone that’s going to cause some… real adventures… in disinformation.

You might give “No one’s ready for this” a read, because, well… it’s going to be incredibly easy for entirely untrained people to make floods of disinformation now, ones without so many of the obvious tells, because – like with the best propaganda and disinformation – most of what you’ll be seeing is real.

Just some parts won’t be. Just the parts that matter.

73 days remain.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: From moongazeponies on deviantart (pony-pinkie-hax)
2024-08-21 06:52 pm

i would not buy ecovacs devices

Hm, ecovacs feels no need to patch this bluetooth exploit that lets you p0wn their robot vacuums and lawnmowers and do things like turn on their built-in cameras and microphones, and have them connect to random servers directly themselves.

Their reasoning is that since it’s bluetooth, you need to be near the device – but, uh, that’s also true for hax0ring your local wifi? So I don’t see how that’s actually better.

Anyway – neat! Maybe don’t buy their stuff.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (shego-rule?-you?)
2024-07-01 02:05 am

this is good, not bad

It’s hilarious to me that this is being presented as a bad thing:

Germany has too many solar panels, and it’s pushed energy prices into negative territory.

NO, it does NOT. Power export and increased electrification will solve the “not making money” part and it will be awesome.

Somewhere I had – but misplaced – an “oh noes” article about the price of used electric cars starting to come down, which was also being presented as a bad thing, and once again I am asking why is this bad, affordable electrics are good, you absolute turnip of a journalist.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (korra-on-the-air)
2024-06-12 02:44 am

on thursday, at tesla…

Apparently the big stockholder election at Tesla is tomorrow, seeing whether they’ll approve Elon’s demand for an absolutely obscene amount of money, and honestly all I can picture is

and holy hell I hope they tell Actual Dr. Evil to take his demand for $56 billion and go fuck himself with it.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (korra-on-the-air)
2024-05-24 08:30 am

what drives me mad about LLMs in particular

Here’s what’s driving me crazy, okay? Here’s what’s driving me crazy about all this LLM shit in particular.

I made a baby version of this myself. Years ago, in UNDERGRAD, on a MAINFRAME, in FORTRAN IV, writing an English parsing and construction AI in a language that didn’t even have STRINGS.

I had to make up the language statistical data and did so over an hilariously minimal domain because I was making it all up and even that much was a lot of work. But I knew what it would take to get the real data I’d need and I couldn’t get that. I was thinking about what it would take to scan every library on campus and thinking that would maybe be enough to start. Maybe.

So I just made up probabilities over a tiny subset of language – I think it knew like 150 words and how they all related or could relate to each other.

It was, obviously, a silly little toy.
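(If you want the flavour of it: the core trick, in spirit only – the original was FORTRAN IV on a mainframe and is long gone, and everything below is invented for illustration – looked something like this, in modern Python:

import random

# Hand-built transition probabilities over a tiny vocabulary –
# the same general shape as my made-up 150-word table.
transitions = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the": [("cat", 0.5), ("dog", 0.5)],
    "a": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.7), ("sat", 0.3)],
    "sat": [("quietly", 0.4), ("<end>", 0.6)],
    "ran": [("away", 0.5), ("<end>", 0.5)],
    "quietly": [("<end>", 1.0)],
    "away": [("<end>", 1.0)],
}

def babble():
    word, out = "<start>", []
    while True:
        words, weights = zip(*transitions[word])
        word = random.choices(words, weights=weights)[0]
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(babble())  # e.g. “the cat sat quietly”

That was the whole trick then, and – with unimaginably better tables – it’s still the heart of the trick now.)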

And even then, when I showed it off, and had my limited little conversations with my very stupid bots, the chair of the department freaked out so bad that he walked out on me, because oh my god the machine was conversing with me.

I mean it. He left the room.

Later – presumably when he recovered his composure – he said it was truly extraordinary and gave me the best possible mark, but he clearly didn’t really want to talk to me about it any further. Not that I really tried much – it was my last quarter and I was on my way out.

And that was just when it was my little toy.

But even then

E. VEN. THEN.

Even then, as dumb as it was, as limited as it was, it was shriekingly obvious that there had to be fundamental connections to actual understanding for it to have any actual intelligence at all. That you COULD NOT DO IT without that.

Not if you wanted it to actually fucking work.

Not if you wanted its output to have anything to do with reality.

Not if you wanted it to actually fucking think.

(I did try. I had some baby approaches to that, too. They were hopelessly inadequate except for – maybe – establishing a framework in terms of how to figure it out.)

It is a hard problem

and it is an obvious problem.

But now you’ve got these jackholes, these goddamn Blindsight cultists, these AGI spirit dancers and these EA “longtermers” becoming increasingly aware that they are not, in fact, here for the long term, and so in their panic have not only decided that knowledge is an entirely acontextual mechanical process, but that thinking isn’t actually real and that the actual physical universe doesn’t matter, because if you just throw enough stolen words on the fire somehow somewhere MAGIC WILL HAPPEN yielding STEP THREE: PROFIT and they’ll get to dry-hump their confirmation-bias god-computer all the way to Line Goes Up Forever Heaven, and they’re absolutely going to keep pushing this insane calliope of jumped-up spreadsheets until they get there…

…no matter how many people are consumed in their grasping desperation.

They’re pushing this stuff into health care.

Into HEALTH CARE.

Insurance companies are using it to deny claims. Doctors are being told to use it for their own notes and diagnoses…

…and it thinks Godzilla plays baseball, that a fictional character invented a real-life piano key tie, and that you can improve your pizza sauce by adding glue.

And they know that.

And they do not fucking care.

That’s the thing that drives me crazy.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (pindar-most-unpleasant)
2024-02-21 03:59 pm

chatGPT, metacognition, and Blindsight

ChatGPT had a “meltdown” today, variously described as that, as “going crazy,” and so on. It can’t “go crazy” – there’s no mind behind it to go crazy. There is no one there, no there there, and most of all no metacognition there at all.

That last bit’s really important, and it reminds me of [our lord Jesus] [no, no, that’s Eddie Izzard again, stop it] and it reminds me of a generative-language (“AI”) spew article that our ever-worse internet search capability served me as the top non-paid hit when I was looking up a word I didn’t know.

It’s hard to describe how bizarre the article was, as it free-floated from one definition to another, as if in a fever dream, completely without rhyme or reason. One definition was a slang insult; the other was fairly technical in nature. At no point did the two meanings intersect, and yet, here this was being described as a short article explaining the meaning of the word in question.

Clearly, whatever training data was in use weighted the two definitions equally, and so, they were not just equal but the same to the text generator, and so, they were blended blindly together.

Anything – anyone – with metacognition – the ability to think about what they’re thinking – would understand immediately what was wrong here. But ChatGPT (or OpenAI or whatever was used to generate this trainwreck) didn’t, and so generated it just the same, and kept going for the length required of it by whatever script some operator ran to crank out the word slurry necessary to serve some ads.
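You can reproduce the flavour of that article in a dozen lines. This is a cartoon – it is not a claim about how any real model is built – but it’s what “equal weighting, no metacognition” gets you:

import random

# Two senses of one word, equally weighted, and nothing anywhere
# tracking which sense is in play.
sense_slang = [
    "It's a schoolyard insult for someone clumsy.",
    "You'd mostly hear it shouted across a playground.",
]
sense_technical = [
    "It denotes a load-bearing fastener used in bridge decks.",
    "Installation torque is specified by the manufacturer.",
]

def write_article(sentences=6):
    # Each sentence is drawn independently – no memory, no sense of
    # "which meaning am I explaining?" – so the senses blend blindly.
    pool = sense_slang + sense_technical
    return " ".join(random.choice(pool) for _ in range(sentences))

print(write_article())

Run it a few times and you get exactly that fever-dream drift: insult, fastener, insult, torque spec. No thread, because there’s nothing there to hold one.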

A few years ago, a set of novels made a big splash with the proposition that self-awareness was in fact a liability, and that true intelligence was not self-aware – that metacognition was a hindrance, not an aid. In some cases this was an attempt to work with the Fermi paradox, because such an intelligence would have no need for or interest in communicating with anyone or anything else. The most widely discussed of these was Blindsight, which I thought was a bit of a shame, since to my thinking it was the least interesting of the three I read.

I really disliked it. Not for the tacked-on vampire plot (though as a novel I felt that weakened it badly), not for the dreary conclusion, not even for the thought experiment, but for essentially the reason we’re seeing right now.

ChatGPT is a Blindsight intelligence. Certainly a primitive example – a far, far simpler one than in the novel – but like those in that it’s completely lacking in self-awareness, running entirely on external inputs with no metacognition.

And this kind of half-baked melange of text – this sad underbaked word pudding – is the result.

The ability to tell apart equally-weighted but completely orthogonal meanings behind language is what metacognition gets you, and self-awareness is what happens when you get metacognition.

Some of us have understood this from the start, which is why, when I was doing this kind of work as an undergraduate – Google replicated my results circa 2006, independently, not in any inappropriate way; I never published, because I didn’t have the massive library of data I knew I needed, I just made up a tiny model of it – I was focused hard on the question: how can such a system consider actual meaning?

I considered it core to the entire concept.

My solution involved branching hierarchies of knowledge and almost certainly wasn’t enough to solve it, but it was a start, and good enough for my project. I also played about in my head with contextualising those words with external data in the form of visual, audio, and tactile information, but had absolutely no ability or support to bring it forward.

It freaked out the head of the maths department quite enough as it was. He was genuinely disturbed.
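(To give the flavour of the idea – and this is a from-memory cartoon, flattened into sets, absolutely not the actual project – even the crudest version of attaching words to knowledge changes things. All names below are invented:

senses = {
    "wrench (the insult)": {"slang", "clumsy", "rude", "playground"},
    "wrench (the tool)": {"bolt", "torque", "fastener", "garage"},
}

def pick_sense(context):
    # The sense whose knowledge branch best overlaps the context wins.
    return max(senses, key=lambda s: len(senses[s] & context))

print(pick_sense({"torque", "bolt", "rusted"}))  # wrench (the tool)

The toy isn’t adequate – mine wasn’t either – but a system with any representation of meaning at all has something to notice with when two senses don’t belong in the same paragraph.)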

If whatever engine had rendered that stupid article had any actual concept of real knowledge behind the words, then it would’ve been able to detect what it was doing wrong. Properly trained, it would’ve stopped doing it – or more likely never started doing it at all.

Sure, you could code around this particular case, and many others like it. But catching this kind of error in general would mean the system would have to be able to think about what it was thinking.

And that means it stops being a Blindsight-style intelligence.

I always kinda hated that book. I’m almost glad for our current misadventures in “artificial intelligence,” just because they’ve finally given me such good examples as to why.

Posted via Solarbird{y|z|yz}, Collected.

solarbird: (cascadia dance dance revolution)
2024-01-18 08:51 pm

WaPo story on electric trucks and trucker reactions

Hmm, EVs starting to win over truckers. That’s definitely good. From the Washington Post:

For truckers driving EVs, there’s no going back
By Shannon Osaka – January 18, 2024

[T]he drivers operating them say they love driving electric. Marty Boots, a 66-year-old driver for Schneider in South El Monte, Calif., appreciates the lightness and the smoothness of his Freightliner eCascadia semi-truck. “Diesel was like a college wrestler,” he said. “And the electric is like a ballet dancer.”


Boots, who also trains other drivers on how to optimize the battery in the electric truck, said some drivers were hesitant when first trying out the technology. But once they try it, he said, most are sold. “You get back into diesel and it’s like, ‘What’s wrong with this thing?’” he said. “Why is it making so much noise? Why is it so hard to steer?”

“Everyone who has had an EV has no aspirations to go back to diesel at this point,” said Khari Burton, who drives an electric Volvo VNR in the Los Angeles area for transport company IMC. “We talk about it and it’s all positivity. I really enjoy the smoothness … and just the quietness as well.”

Posted via Solarbird{y|z|yz}, Collected.