I don't have a philosophically coherent or even definite answer to this, except to say that it still seems to have educational value to me, as a form of dialogue. Maybe this is because being right enough about generalities is enough to help me learn. Maybe it is because no subject is ever learned in a vacuum; it's always learned with reference to existing knowledge, so the line between "factually wrong" and "logically inconsistent" is blurrier in practice, and factual errors can often be spotted via their inconsistency. Maybe it is because I simply am not relying on it for factual specificity in places it's wrong (I'm not blindly following its advice on anything).
Thinking more generally, it might be that it doesn't feel so deadly because real human teachers are themselves often a bit wrong? They remember that something is important but not exactly how it works. Or they're wrong about some critical / essential thing: they remember the gist but flip the polarity of some detail.
I guess it just seems weirdly human / tolerable in the types of mistakes it makes, and I've lived with humans already.
Even its lack of verbal hedging when it's actually entering territory it's likely to get wrong (places where "good" / reliable expert humans will hedge) isn't without precedent. People call it "mansplaining as a service" for a reason: it's similar to the experience of learning from "a well-read but overconfident human" -- maybe the typical self-assured autodidact. Less valuable than an actual expert, but not useless! Especially for brainstorming, summarizing, confirming your understanding of something by providing examples.
Version 4 is also much better about facts, and much more likely to flatly tell me I'm wrong when I try to coerce it into stating something false. I really have no idea how these models represent knowledge, so this is all very trial and error. A fascinating time to be alive -- it has the feeling of Google coming online, like a whole new entity in the noosphere with whole new forms of interaction. I do feel a little dread!