• fuck_u_spez_in_particular@lemmy.world
    8 days ago

    but it can be a very helpful assistant.

    It can, but once things get even slightly more complex, simply being a fast typist is usually more efficient and produces better code.

    I guess it really depends on your aspirations for code quality and on complexity (yes, it’s good at generating boilerplate). If it’s a one-time-use script I don’t particularly care about and it can be described quickly in a prompt, I’ll use it (something like the throwaway sketch below).
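
    Just to illustrate the kind of thing I mean by a one-time-use script (this is a hypothetical example, not something I actually generated): a quick bulk-rename that I’ll run once and never maintain.

    ```python
    # Hypothetical throwaway script: rename *.JPG files in the current
    # directory to lowercase .jpg. Run once, then delete.
    from pathlib import Path

    for path in Path(".").glob("*.JPG"):
        path.rename(path.with_suffix(".jpg"))
    ```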

    Working on a big codebase, it doesn’t even occur to me to ask an AI; you simply can’t feed it enough context for it to generate genuinely meaningful code…
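
    A rough back-of-the-envelope illustration of what I mean (the line counts, tokens-per-line, and context-window size here are assumptions, not measurements): even a mid-sized codebase blows well past a typical model context window.

    ```python
    # Rough estimate: can a whole codebase fit in a model's context window?
    # All numbers below are illustrative assumptions.
    lines_of_code = 500_000      # a mid-sized production codebase
    tokens_per_line = 10         # rough average for source code
    codebase_tokens = lines_of_code * tokens_per_line  # ~5,000,000 tokens

    context_window = 128_000     # a typical large-model context window
    print(f"codebase: ~{codebase_tokens:,} tokens, window: {context_window:,} tokens")
    print(f"fits? {codebase_tokens <= context_window}")  # False, by a wide margin
    ```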

    • Lovable Sidekick@lemmy.world
      8 days ago

      I actually don’t write code professionally anymore, so I’m going on what my friend says. According to him, he uses ChatGPT every day and it’s a big help. Once he told it to refactor some code and it used a really novel approach he wouldn’t have thought of. He showed it to another dev, who said the same thing: huh, that’s a weird way to do it, but it worked. In general, though, you really can’t just tell an AI “Create an accounting system” or whatever and expect coherent, working code without thoroughly vetting it.

      • fuck_u_spez_in_particular@lemmy.world
        8 days ago

        Ugh, I tried the Gemini model and I’m not too happy with the code it came up with; there are a lot of intricacies and concepts the model doesn’t grasp well enough, IMO. That said, I’ll keep reevaluating this continuously; converting large chunks of code often works OK…