solarbird: (pindar-most-unpleasant)
[personal profile] solarbird

ChatGPT had a “meltdown” today, variously described as that, as “going crazy,” and so on. It can’t “go crazy,” there’s no mind behind it to go crazy. There is no one there, no there there, and most of all no metacognition there at all.

That last bit’s really important, and it reminds me of [our lord Jesus] [no, no, that’s Eddie Izzard again, stop it] and it reminds me of a generative-language (“AI”) spew article that our ever-worse internet search capability served me as the top non-paid hit when I was looking up a word I didn’t know.

It’s hard to describe how bizarre the article was, as it free-floated from one definition to another, as if in a fever dream, completely without rhyme or reason. One definition was a slang insult; the other was fairly technical in nature. At no point did the two meanings intersect, and yet here it was, presented as a short article explaining the meaning of the word in question.

Clearly, whatever training data was in use weighted the two definitions equally; to the text generator they were not just equal but the same, and so they were blended blindly together.
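A toy sketch of what that blind blending looks like – this is emphatically not how ChatGPT actually works, just a hypothetical first-order Markov chain over two made-up, equally weighted “senses” of the same word. The generator tracks only word-to-word transitions, never which sense it is in, so at shared words like “a” or “for” it can hop between definitions without noticing:

```python
import random

# Two disjoint invented "senses" of one headword, given equal weight.
sense_a = "a rude name for a foolish person".split()
sense_b = "a technical term for a threaded fastener".split()

# Build one first-order Markov chain over both senses at once;
# nothing records which sense a transition came from.
transitions = {}
for sense in (sense_a, sense_b):
    for prev, nxt in zip(sense, sense[1:]):
        transitions.setdefault(prev, []).append(nxt)

def generate(start, length, rng):
    """Walk the chain blindly; at words shared between senses
    ('a', 'for') the walk can cross from one definition into
    the other, producing a blended non-definition."""
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("a", 8, random.Random(0)))
```

Runs of this happily emit things like “a rude name for a threaded fastener” – both definitions are in there, equally weighted, and the generator has no level above the word stream from which to notice the incoherence.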

Anything – anyone – with metacognition – the ability to think about what they’re thinking – would understand immediately what was wrong here. But ChatGPT (or OpenAI or whatever was used to generate this trainwreck) didn’t, and so generated it just the same, and kept going for the amount of length required of it by whatever script some operator ran to crank out the word slurry necessary to serve some ads.

A few years ago, a set of novels made a big splash with the proposition that self-awareness was in fact a liability, and that true intelligence was not self-aware, that metacognition was a hindrance, not an aid. It was in some cases an attempt to work with the Fermi paradox, because such an intelligence would have no need for or interest in communicating with anyone or anything else. The most widely discussed of these was Blindsight, which I thought was a bit of a shame since to my thinking it was the least interesting of the three I read.

I really disliked it. Not for the tacked-on vampires plot (though as a novel I felt that weakened it badly), not for the dreary conclusion, not even for the thought experiment, but for essentially the reason we’re seeing right now.

ChatGPT is a Blindsight intelligence. Certainly a primitive example – a far, far simpler one than in the novel – but like them in that it’s completely lacking in self-awareness, acting purely on weighted external inputs with no metacognition.

And this kind of half-baked melange of text – this sad underbaked word pudding – is the result.

The ability to tell apart equally-weighted but completely orthogonal meanings behind language is what metacognition gets you, and self-awareness is what happens when you get metacognition.

Some of us have understood this from the start, which is why, when I was doing this kind of work as an undergraduate – Google replicated my results circa 2006, not inappropriately; I never published because I didn’t have the massive library of data I knew I needed, so I just made up a tiny model of it – I was focused hard on the question: how can such a system consider actual meaning?

I considered it core to the entire concept.

My solution involved branching hierarchies of knowledge and almost certainly wasn’t enough to solve it, but it was a start, and good enough for my project. I also played about in my head with contextualising those words with external data in the form of visual, audio, and tactile information, but had absolutely no ability or support to bring it forward.

It freaked out the head of the maths department quite enough as it was. He was genuinely disturbed.

If whatever engine had rendered that stupid article had any actual concept of real knowledge behind the words, then it would’ve been able to detect what it was doing wrong. Properly trained, it would’ve stopped doing it – or more likely never started doing it at all.

Sure, you could code around this particular case, and many others like it, but to stop it happening in general, the system would have to be able to think about what it was thinking.

And that means it stops being a Blindsight-style intelligence.

I always kinda hated that book. I’m almost glad for our current misadventures in “artificial intelligence,” just because they’ve finally given me such good examples as to why.

Posted via Solarbird{y|z|yz}, Collected.

Date: 2024-02-22 01:12 am (UTC)
canyonwalker: Sullivan, a male golden eagle at UC Davis Raptor Center (Golden Eagle)
From: [personal profile] canyonwalker
Thanks for an interesting blog and introducing me to the term "metacognition". It describes, for me, a missing piece I've struggled with in working with people to develop their expertise in various areas. Here I'd define metacognition broadly as an awareness of the scope and requirements of the problem domain. More specifically it's the ability to think a level above (since "meta-" means up or above) any one particular approach to solving a problem. People with good metacognitive ability will try one approach, gauge how well it's working within the scope and requirements of the problem, and pause to gather more data and/or change approaches if it's not working well enough. People with weak metacognitive ability demand that expertise be distilled down to step-by-step instructions, which they'll then dutifully follow even as they step off a proverbial cliff.

Date: 2024-02-22 05:20 am (UTC)
kathmandu: Close-up of pussywillow catkins. (Default)
From: [personal profile] kathmandu
"ChatGPT had a “meltdown” today, variously described as that, as “going crazy,” and so on. It can’t “go crazy,” there’s no mind behind it to go crazy."

My first thought relates to the reports that ChatGPT had such good results because there were underpaid 3rd-world workers manually editing the results and preventing the worst ones from being shown.

So what are the odds there was a major holiday, or a major internet outage, in the relevant 3rd-world country on the day ChatGPT showed a sudden absence of sense?
