• @[email protected]
      1 year ago

      That’s not concept mixing, and it’s not proper origami either (paper doesn’t fold like that). The AI knows “realistic swan” and “origami swan”, meaning it has a gradient from “realistic” to “origami” which, crucially, changes only the style, not the subject. It also knows “realistic human”, so follow that gradient down and you get “origami human”. It’s the same capability that lets it draw a realistic Mickey Mouse.
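      The “gradient” idea can be sketched in a few lines of numpy. This is a toy illustration, not any model’s actual internals: the vectors are random stand-ins for text-encoder embeddings, and `lerp` is the simplest possible interpolation between two conditionings.

      ```python
      import numpy as np

      def lerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
          """Linear interpolation: t=0 gives a, t=1 gives b."""
          return (1.0 - t) * a + t * b

      # Stand-in embeddings; in a real pipeline these would come from the
      # text encoder (e.g. CLIP). The values here are purely illustrative.
      rng = np.random.default_rng(0)
      realistic_swan = rng.normal(size=768)
      origami_swan = rng.normal(size=768)

      # The style gradient: a family of conditionings between the two prompts.
      half_origami_swan = lerp(realistic_swan, origami_swan, 0.5)

      # The same style offset carried over to a different subject:
      style_offset = origami_swan - realistic_swan
      realistic_human = rng.normal(size=768)
      origami_human = realistic_human + style_offset
      ```

      The point of the last two lines is that the style direction is (roughly) subject-independent, which is why “origami human” works even if no such image was in the training data.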

      The model understanding two different subjects, say “swan” and “human”, doesn’t mean it has a gradient between the two, much less a usable one. It might be able to match up the legs and blend those a bit because the anatomy roughly lines up, and a beak is a protrusion it might try to match with the nose. Wings and arms? Well, it has probably seen pictures of angels, and now we’re nowhere close to a proper chimera. There’s a model specialised on chimeras (gods is that ponycat cute), but when you flick through the examples you’ll see that it’s quite limited unless you happen to get lucky: you often get properties of both chimera ingredients, but they’re not connected in any reasonable way. Which is at least different from the behaviour of base SDXL, which is far more prone to bail out and simply put the ingredients next to each other. If you want it to blend things reliably, you’ll have to train a specialised model on appropriate input data, like e.g. this one.
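      One toy way to see why naively blending two subjects isn’t guaranteed to land anywhere meaningful: CLIP-style encoders produce normalised embeddings, and the plain average of two distinct unit vectors has norm below one, i.e. it falls off the sphere the encoder actually populates. This is a deliberately simplified numpy sketch with random stand-in vectors, not a claim about any specific model.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      def unit(v: np.ndarray) -> np.ndarray:
          """Normalise a vector to length 1, as CLIP-style encoders do."""
          return v / np.linalg.norm(v)

      # Two unrelated "subject" embeddings (random stand-ins).
      swan = unit(rng.normal(size=768))
      human = unit(rng.normal(size=768))

      # The naive 50/50 blend: shorter than 1, so it sits off the
      # unit sphere where all real embeddings live.
      midpoint = 0.5 * (swan + human)
      print(np.linalg.norm(midpoint))
      ```

      Off-manifold conditionings aren’t automatically garbage, but there’s no guarantee the model has learned anything sensible there, which matches the “ingredients next to each other” failure mode above.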