It's a known problem with generative adversarial networks (GANs). The cause is complex, but it comes down to how these systems train two networks against each other: a generator produces outputs and a discriminator judges them, and each iteration pushes the generator toward whatever the discriminator currently accepts. There are flaws in how this feedback loop works, so aberrations can creep into the new material it generates.
For the technically curious, this link explains one such failure, known as mode collapse, though it is not exactly light reading: https://www.geeksforgeeks.org/modal-collapse-in-gans/
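To make the idea concrete, here is a toy caricature of mode collapse. It is not a real GAN (there is no neural network and no gradient training); it just shows the shape of the failure: the real data has two clusters, but a "generator" that greedily emits whatever single output best fools the current "discriminator" ends up covering only one of them.

```python
import random

random.seed(0)

# Real data: two modes, clustered around -2.0 and +2.0.
real_data = [random.gauss(-2.0, 0.1) for _ in range(500)] + \
            [random.gauss(+2.0, 0.1) for _ in range(500)]

def discriminator_score(x):
    """How 'real' a sample looks: negative distance to the
    nearest real data point (higher = more convincing)."""
    return max(-abs(x - r) for r in real_data)

# 'Generator': pick the single candidate the discriminator likes
# best, then emit only that value -- every sample is identical.
candidates = [i / 10.0 for i in range(-40, 41)]
best = max(candidates, key=discriminator_score)
fake_samples = [best] * 1000

# The real data spans two modes; the generated samples hit only one.
modes_covered = {round(s) for s in fake_samples}
print(modes_covered)
```

Real GANs collapse for subtler reasons (the generator keeps chasing whichever region the discriminator is currently weakest on), but the symptom is the same: low diversity in the output, so rare details like correct finger counts get lost.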
Also, if the user does not explicitly prompt 'no six fingers!', the system may treat six, or three, or twelve fingers as perfectly acceptable!