I think this one conversation sums up what is so fucking irritating about this thing: I am supposed to believe that it wrote that code.

No siree, no RAG, no trickery like training a model to transform the code while maintaining an identical expression graph (a toy sketch of what that would look like is at the end of this comment): it just goes from word-salading all over the place on a natural-language task to outputting 100 lines of coherent code.

Although that does suggest a new dunk on computer touchers of the AI-enthusiast kind: you can point at that and say that coding clearly does not require any logical reasoning.

(Also, as usual with AI, it is not always that good; sometimes it fucks up the code, too.)
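
For the curious, here's a toy sketch of the kind of expression-graph-preserving transform I mean (entirely my own; the names and the `ast` approach are illustrative, not anything a vendor has claimed to do): rename the identifiers so the surface text changes while the graph structure stays identical.

```python
import ast

class Rename(ast.NodeTransformer):
    """Rename identifiers; every node keeps its place in the expression graph."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Swap the identifier text; the node's role in the graph is unchanged.
        node.id = self.mapping.get(node.id, node.id)
        return node

tree = ast.parse("total = price * qty + tax")
Rename({"total": "r", "price": "a", "qty": "b", "tax": "c"}).visit(tree)
print(ast.unparse(tree))  # r = a * b + c -- different text, same expression graph
```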

  • diz@awful.systemsOP · 4 hours ago

    It's not about moats; it's about the open-source community (whose code the models were trained on) coming out with pitchforks.

    You are way overselling coding agents.

    Re-creating some open-source project with similar functionality is literally the only way a coding agent can pretend to be a programmer.

    I tried the latest models for code and they are in fact capable of shitting out a thousand lines of working code at a time, which obviously can only be obtained via plagiarism, since they are also incapable of writing the most trivial code for a novel situation (example below). And the neat thing about plagiarism is that once you start you can keep going, since there's more compatible code where it came from.
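
    To be concrete, here's the sort of trivial-but-novel task I have in mind (a made-up example of my own, not from any benchmark): any human programmer writes this in a minute, but the exact spec is obscure enough that a verbatim copy is unlikely to be sitting in the training data.

    ```python
    def alternating_digit_score(n: int) -> int:
        """Sum the digits at even positions, subtract the digits at odd positions."""
        digits = [int(d) for d in str(abs(n))]
        return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))

    assert alternating_digit_score(1234) == -2  # 1 - 2 + 3 - 4
    ```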