I think this one conversation summarizes what is so fucking irritating about this thing: I am supposed to believe that it wrote that code.

No siree: no RAG, no trickery with training a model to transform the code while maintaining an identical expression graph. It just goes from word-salading all over the place on a natural-language task to outputting 100 lines of coherent code.

Although that does suggest a new dunk on computer touchers of the AI-enthusiast kind: you can point at that and say that coding clearly does not require any logical reasoning.

(Also, as usual with AI, it is not always that good; sometimes it fucks up the code, too.)

  • Kg. Madee Ⅱ.@mathstodon.xyz

@HedyL @diz I kinda wonder if this would work better if it were just worded the other way round: “must be supervised always.”
If I understand correctly, LLMs have difficulty encoding negations (not, un-, …)

Edit: or maybe not, seeing that it already did this transformation in the introduction and still lets the dog escape on the very first turn.