Yeah I've been using it to teach me stuff and the mode of interaction that seems to emerge is one in which it says somewhat interesting and almost right stuff mixed with nonsense and then I learn by working my way through all the logical inconsistencies and factual errors and correcting it. It's like having a weird Platonic dialogue with a well-read person who didn't really get the meaning of what they were reading, and then teaching myself what I wanted to know via interrogation of their remarks and contrast with other material? Idk. It's weird.
Logical inconsistencies are one thing, but if it's a topic you're just learning, how do you know when you're encountering factual errors?

I don't have a philosophically coherent or even definite answer to this, except to say that it still seems to have educational value to me, as a form of dialogue. Maybe this is because being right enough about generalities is enough to help me learn. Maybe it is because no subject is ever learned in a vacuum, always with reference to existing knowledge, so the line between "factually wrong" and "logically inconsistent" is blurrier in practice, and factual errors can often be seen via their inconsistency. Maybe it is because I simply am not relying on it for factual specificity in places it's wrong (I'm not blindly following its advice on anything).
Thinking more generally, it might be that it doesn't feel so deadly because real human teachers are themselves often a bit wrong? They remember that something's important but not exactly how. Or they're wrong about some critical / essential thing: they remember the gist but flip the polarity of some detail.
I guess it just seems weirdly human / tolerable in the types of mistakes it makes, and I've lived with humans already.
Even its lack of verbal hedging when it's actually entering territory it's likely to get wrong (places where "good" / reliable expert humans will hedge) isn't without precedent. People call it "mansplaining as a service" for a reason: it's similar to the experience of learning from "a well-read but overconfident human" -- maybe the typical self-assured autodidact. Less valuable than an actual expert, but not useless! Especially for brainstorming, summarizing, or confirming your understanding of something by providing examples.
Version 4 is also much better about facts, and much more likely to flatly tell me I'm wrong when I try to coerce it into stating something false. I really have no idea how they represent knowledge, so this is all very trial and error. A fascinating time to be alive -- it has the feeling of Google coming online, like a whole new entity in the noosphere with whole new forms of interaction. I do feel a little dread!
Wait so what *is* choreographic programming? Is there a foundational paper I should read? ChatGPT's description makes it sound suspiciously close to, say, the digital logic programming model of "everything is happening everywhere at once"... and that has piqued my interest.
I'm glad you asked! A go-to reference is Fabrizio Montesi's dissertation. We have a new draft paper that's our first foray into this space -- check the related work section for a bunch of pointers.
One thing I've had a really hard time understanding, probably because it's kind of arbitrary, is what stuff it knows about -- even vaguely or incorrectly. Like, I know it read the entire internet, but presumably part of being a model is dimensionality reduction, and e.g. it doesn't know who I am. In particular, I'm surprised that it knows about this area, which has some papers about it but -- correct me if I'm wrong -- isn't a big thing.
Yeah, choreographic programming is a pretty niche thing, but there is a somewhat-accurate Wikipedia article, and ChatGPT had a little bit of a clue.
Someone at RC tried asking a different LLM, Claude, about choreographic programming, and the results were worse -- Claude decided that choreographic programming meant something like live coding: "Choreographic programming refers to a software development approach in which the programming process is viewed as a choreographed 'dance' between the developer and the programming environment or tools." This answer is basically bullshit.
When asked next about endpoint projection, Claude doubled down on the bullshit: "Endpoint projection is a technique used in choreographic programming. It means that the programming environment provides the programmer with a projection or preview of the end result or output of the program, even as the programmer is interacting with the system and creating the program." But what's interesting to me is that this is a totally plausible bullshit definition of EPP that's consistent with its previous bullshit definition of choreographic programming. It's like Calvin's dad.
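For contrast, here's a tiny sketch of what the real terms mean -- in made-up Python, not the API of any actual choreographic language: a choreography is a single global program saying who sends what to whom, and endpoint projection mechanically extracts each participant's local program from it. (The `Comm` and `project` names here are purely illustrative.)

```python
from dataclasses import dataclass

# One step of a choreography: a communication between two roles.
@dataclass
class Comm:
    sender: str
    receiver: str
    label: str

# The choreography itself: a GLOBAL description of a toy protocol.
buyer_seller = [
    Comm("Buyer", "Seller", "title"),
    Comm("Seller", "Buyer", "price"),
    Comm("Buyer", "Seller", "ok"),
]

def project(choreography, role):
    """Endpoint projection: one participant's local view.

    Each global communication becomes a local send for the sender,
    a local recv for the receiver, and nothing for anyone else.
    """
    local = []
    for step in choreography:
        if step.sender == role:
            local.append(f"send {step.label} to {step.receiver}")
        elif step.receiver == role:
            local.append(f"recv {step.label} from {step.sender}")
    return local

for role in ("Buyer", "Seller"):
    print(role, "=>", project(buyer_seller, role))
```

Projecting gives Buyer the local program [send title, recv price, send ok] and Seller its mirror image -- so EPP is about compiling one global protocol into per-participant code, not about previewing a program's output.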