Working on a big codebase, it doesn’t even occur to me to ask an AI — you just can’t feed it enough context for it to generate genuinely meaningful code…
That’s not a hard limit; Google’s models, for example, can handle a 2-million-token context window.
https://ai.google.dev/gemini-api/docs/long-context
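For what it’s worth, here is a minimal sketch of what feeding a large chunk of a codebase into that window can look like, assuming the google-generativeai Python SDK and a long-context model such as gemini-1.5-pro (the model name and file layout are illustrative assumptions, not something from this thread):

    # Sketch: concatenate part of a codebase and send it to a long-context Gemini model.
    import pathlib
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # assumed: key comes from your own setup

    # Gather source files into one prompt. A 2M-token window fits far more code
    # than smaller models, but you still have to decide what to include.
    source_files = sorted(pathlib.Path("src").rglob("*.py"))  # hypothetical layout
    context = "\n\n".join(
        f"# File: {p}\n{p.read_text(encoding='utf-8')}" for p in source_files
    )

    model = genai.GenerativeModel("gemini-1.5-pro")  # assumed long-context model name
    response = model.generate_content(
        context + "\n\nExplain how the request pipeline in this codebase fits together."
    )
    print(response.text)

Whether the model actually makes good use of all that context is a separate question, of course.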
Ughh, I tried the Gemini model and I’m not too happy with the code it came up with; there are a lot of intricacies and concepts the model doesn’t grasp well enough IMO. That said, I’ll keep reevaluating this — converting large chunks of code often works OK…