I’ve noticed that certain terms or tags are causing rendering issues with the new model. The outputs are highly unstable and inconsistent—beyond what I would consider normal variation.
This doesn’t appear to be due to new interpretation logic or prompt strategy shifts. Instead, many of these generations look glitched, underprocessed, washed out, or as if rendering was stopped prematurely. Saturation is often low, and overall image quality is degraded.
I suspect that some of these tags may be acting like “stop codons”, halting generation early—possibly similar in effect to using guidance_scale = 1.
From my testing, the problematic tags seem to fall into two groups:
Furry-related terms: furry, fursona, anthro, etc.
Illustration-related terms: drawing, line work, cel shading, etc.
It’s possible these tags are being masked or diluted when mixed with stronger or more stable tags, which may explain why some prompts still produce acceptable or mixed results. However, when several of these unstable tags are combined, generation almost always fails—suggesting a kind of cumulative destabilization effect.
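To check the cumulative effect systematically, one approach is to render the same seed across every combination of the suspect tags and compare the outputs side by side. A minimal sketch of the prompt grid (the tag list and base prompt here are placeholders, and the actual generation call is left to whatever backend you use):

```python
from itertools import combinations

# Hypothetical suspect tags drawn from the two groups above.
SUSPECT_TAGS = ["furry", "anthro", "drawing", "cel shading"]

def build_prompt_grid(base_prompt, tags):
    """Return (tag_subset, prompt) pairs for every non-empty combination,
    so single tags and cumulative mixes can be compared directly."""
    grid = []
    for r in range(1, len(tags) + 1):
        for subset in combinations(tags, r):
            grid.append((subset, base_prompt + ", " + ", ".join(subset)))
    return grid

grid = build_prompt_grid("portrait of a fox character", SUSPECT_TAGS)
# 4 single tags + 6 pairs + 4 triples + 1 quadruple = 15 prompts.
```

If failures cluster on the larger subsets but not the singletons, that would support the cumulative-destabilization reading over a single bad tag.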
By contrast, photography and painting-style tags remain mostly unaffected and render normally.
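Since “washed out” is subjective, one way to back the report with numbers is to compute the mean HSV saturation of each output and compare the affected tag groups against the photography/painting baseline. A small stdlib-only sketch (the image is represented as a plain list of RGB tuples; in practice you would read pixels with an imaging library such as Pillow):

```python
import colorsys

def mean_saturation(pixels):
    """Average HSV saturation of an image given as (r, g, b) tuples in 0-255.
    Low values correspond to the desaturated, washed-out look described above."""
    if not pixels:
        return 0.0
    total = 0.0
    for r, g, b in pixels:
        # colorsys expects channels in 0-1 and returns (hue, saturation, value).
        _, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        total += s
    return total / len(pixels)

# Pure red is fully saturated; pure gray has zero saturation.
mean_saturation([(255, 0, 0)])      # -> 1.0
mean_saturation([(128, 128, 128)])  # -> 0.0
```

Averaging this metric over a batch of renders per tag group would turn “mostly unaffected” versus “degraded” into a concrete comparison.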
No. If this is about selectivity or guesswork, then you also missed the last line of the dev log about the update.
I feel like I’m doing my part, and I feel even better when things actually improve. Maybe it’s a coincidence, but I found a way to brighten my moody mind.
It’s much more meaningful than telling people things will get better in… how many months?
It’s never about computer science.