Date: 2023-04-09 06:01 am (UTC)
graydon2: (Default)
From: [personal profile] graydon2
Yeah, I've been using it to teach me stuff, and the mode of interaction that seems to emerge is one in which it says somewhat interesting and almost-right stuff mixed with nonsense, and then I learn by working my way through all the logical inconsistencies and factual errors and correcting it. It's like having a weird Platonic dialogue with a well-read person who didn't really get the meaning of what they were reading, and then teaching myself what I wanted to know via interrogation of their remarks and contrast with other material? Idk. It's weird.

Date: 2023-04-22 06:34 pm (UTC)
graydon2: (Default)
From: [personal profile] graydon2
I don't have a philosophically coherent or even definite answer to this, except to say that it still seems to have educational value to me, as a form of dialogue. Maybe this is because being roughly right about generalities is enough to help me learn. Maybe it is because no subject is ever learned in a vacuum, always with reference to existing knowledge, so the line between "factually wrong" and "logically inconsistent" is blurrier in practice, and factual errors can often be seen via their inconsistency. Maybe it is because I simply am not relying on it for factual specificity in places it's wrong (I'm not blindly following its advice on anything).

Thinking more generally, it might be that it doesn't feel so deadly because real human teachers are themselves often a bit wrong? They remember that something is important but not exactly how. Or they're wrong about some critical / essential thing: they remember the gist of it but flip the polarity of some detail?

I guess it just seems weirdly human / tolerable in the types of mistakes it makes, and I've lived with humans already.

Even its lack of verbal hedging when it's actually entering territory it's likely to get wrong (places where "good" / reliable expert humans will hedge) isn't without precedent. People call it "mansplaining as a service" for a reason: it's similar to the experience of learning from "a well-read but overconfident human" -- maybe the typical self-assured autodidact. Less valuable than an actual expert, but not useless! Especially for brainstorming, summarizing, or confirming your understanding of something by providing examples.

Version 4 is also much better about facts, and much more likely to flatly tell me I'm wrong when I try to coerce it into stating something false. I really have no idea how they represent knowledge, so this is all very trial and error. A fascinating time to be alive -- it has the feeling of google coming online, like a whole new entity in the noosphere with whole new forms of interaction. I do feel a little dread!

Date: 2023-04-12 06:36 pm (UTC)
joshua0: (Default)
From: [personal profile] joshua0
Wait so what *is* choreographic programming? Is there a foundational paper I should read? ChatGPT's description makes it sound suspiciously close to, say, the digital logic programming model of "everything is happening everywhere at once"... and that has piqued my interest.

Date: 2023-04-18 03:00 pm (UTC)
ikeepaleopard: (Default)
From: [personal profile] ikeepaleopard
One thing I've had a really hard time understanding, probably because it's kind of arbitrary, is what stuff it knows about, even vaguely or incorrectly. Like, I know it read the entire internet, but presumably part of being a model is dimensionality reduction, and e.g. it doesn't know who I am. In particular, I'm surprised that it knows about this area, which has some papers about it but, correct me if I'm wrong, isn't a big thing.

Powered by Dreamwidth Studios