AI image generators took the creative world by storm last year, unleashing a wave of stunning surreal images created from text prompts. But some subjects still trip up their algorithms. Apparently that includes cycling.
The CyclingTips website observed the hype around the new technology and decided to test it out by asking several AI models to render cycling-related scenes. If you want to avoid a horror show, close your eyes now. Some of the results look like the aftermath of a particularly gruesome Tour de France crash (see our guide on how to use DALL-E 2 to learn how the tech works).
Ever since text-to-image rendering made great strides forward with beta versions of diffusion models such as DALL-E 2, Stable Diffusion and Midjourney, impressive AI-generated images have caught the attention of CyclingTips' writer, and he had to try the technology for himself.
"What would these AI platforms create if we fed them a series of cycling-related requests? And would any of them be suitable for use on CyclingTips?" he wondered. He used several AI models, including Stable Diffusion and the free browser-based app Craiyon, and wrote text prompts asking for images of different aspects of cycling, from "a cyclist climbing a mountain" to "a showroom full of bikes."
The results proved to be "enchanting, entertaining, and in some cases more than just a little spooky." Yep, it seems AI image generators struggle with bikes even more than they do with cereal boxes.
The site was pleased with some of the results for "black and white, still movie, sore face, cyclist, bicycling", suggesting they might look great on the cover of an indie rock album. Stability AI's DreamStudio was less successful with the prompt "The tired cyclist collapsed on the bike with sweat on his forehead with dark sky and lightning." This poor man is going to need some serious physiotherapy.
So what went wrong? One thing that can cause problems is that most of the requested images are landscape format, while most AI renderers can currently only produce square images. Stable Diffusion allows different aspect ratios to be selected, but because of the images it was trained on, it doesn't usually do a very good job on images that are wider than they are tall. It tends to fill the extra space with duplicates, giving you two of whatever you asked for.
Some phrasing can also produce strange results. It seems that some generators interpret "close-up" as referring to the closeness between subjects in a composition rather than the shot type. And the AI doesn't know what looks true or realistic. For example, it doesn't know how many wheels a bicycle should have. The training data undoubtedly included images in which one wheel of a bicycle was hidden behind another, or not all of a person's fingers were visible.
Some glitches and flaws in AI-generated images can be fixed in editing software, but in this case most of them, like the mangled bikes depicted, are simply wrong from the ground up. Check out this comparison of the best AI image generators to learn more about how the different tools stack up.