The Perchance dev noted that the new image model “is still training” and “will steadily improve in quality and gain new knowledge over the next few months.” Some users felt this wording suggested possible self-improvement or ongoing learning. However, in standard practice, most AI models do not learn or evolve after deployment unless manually retrained by the developers.
Some questioned whether the model is truly learning over time or whether this statement is simply a placeholder for continued dev-side maintenance.
Goal of the Question: To clarify based on common AI practice:
Is this model capable of continuous or micro-scale learning post-deployment?
If so, can users contribute indirectly (e.g., through use, rating, or feedback) to guide its development?
Or is all improvement strictly the result of developer-side retraining and version updates?
Understanding this helps the community know whether their activity matters in shaping the model—or if they should wait for official updates. It also opens the door to potential participatory development, if such feedback loops are supported.
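For anyone unsure what “developer-side retraining and version updates” looks like in practice: a deployed model is normally served with frozen weights, so user prompts only read those weights, and quality improves only when the devs train a new checkpoint offline and swap it in. Below is a minimal PyTorch sketch of that serving pattern. The tiny model class and the checkpoint filename are hypothetical, purely for illustration; this is not Perchance's or FLUX's actual code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a deployed image model; the real FLUX-based
# model is far larger, but the serving pattern is the same.
class TinyImageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))

    def forward(self, x):
        return self.net(x)

model = TinyImageModel()
model.eval()                      # inference mode; no training behavior
for p in model.parameters():
    p.requires_grad_(False)       # weights are fixed during serving

before = {k: v.clone() for k, v in model.state_dict().items()}

# Serving many user requests only *reads* the weights.
with torch.no_grad():
    for _ in range(1000):
        _ = model(torch.randn(1, 16))

after = model.state_dict()
unchanged = all(torch.equal(before[k], after[k]) for k in before)
print("weights changed by serving requests:", not unchanged)  # -> False

# "Improvement" arrives when the devs publish a retrained checkpoint, e.g.:
# model.load_state_dict(torch.load("image_model_v2.pt"))  # hypothetical file
```

Under this standard setup, user activity can still matter indirectly, but only if the devs collect prompts, ratings, or feedback and fold them into the next offline training run.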
The model is trained by the devs. Continuous learning from user experience is something for 2026, I think, or maybe 2027.
Do you mean the devs of FLUX or the devs of Perchance? Because they are obviously not the same. I am not sure about Perchance's resources, but I would guess they are not enough to really train big new models.
That’s a very optimistic answer to a very big hydra. How do you figure?
That’s what CEOs of major AI companies like OpenAI say on YouTube and X. All but one of them are optimistic about continuously learning AIs by mid-2026, and the other guy says “within 5 years”.