And you think the rest of the input data going into the training of these models is otherwise good quality? I don’t think so. This is a very specific and fascinating observation imo.
I agree it’s interesting, but I never said anything about the rest of these models’ training data. I’m pointing out that in this specific instance GIGO applies because the model was intentionally trained on code with poor security practices. Mostly I’m highlighting that code riddled with security vulnerabilities can’t inherently be “good code”.