Why?
It makes sense to try to give users an idea of how robust a project is, but the exact details of the tools involved in its creation rarely add much to that. It gets a little weird with LLMs because they allow someone with no programming skill to create software that appears to work, and that is the part that ought to be disclosed: “I don’t know what I’m doing and I asked a robot to make this” does indicate unreliable code. A skilled developer having an LLM fill in some extra test cases, on the other hand, can only make the project more robust.