… and I’m not sad about it.

The rise of large language models and the tools built upon them, like DALL-E Mini, Stable Diffusion, and others, is being touted as the herald of a new age of capabilities in AI tools and products. On a micro scale, they are already changing my day-to-day work life in a profoundly mundane way. Welcome to the era of webinars and slide decks augmented with AI art.

Slide made from DALL-E Mini output: “an AI learns to paint intelligence”

I’m sure I’m not alone in feeling the fatigue of hunting for the perfect image for a presentation and dreading the time it will take to find it! So, I decided to throw a couple of prompts into DALL-E Mini and see if anything useful came back. It turns out that only about 25% of the images were usable, but since each run of a prompt produces nine images, that still gave me more than enough content to work with. I’m not going to replace all the stock images in my presentations with AI-generated ones tomorrow, but I have little doubt that these tools will settle into a well-used spot in my toolbox. Beyond being tailored to the language I’m expressing, AI-generated images also add distinctiveness to my presentations because they are created from my own words.

While these image generators are incredibly useful for quickly producing visual content, the no-cost, publicly available web services for AI image generation come with some limitations. A few that I have encountered so far are:

  • Image resolution. Currently, the images created by the Stable Diffusion and DALL-E Mini web services are not high enough resolution to display well on anything larger than a high-resolution desktop monitor.
  • Service time and availability. Running a prompt on the image generation services can take up to 10 minutes, so iterating on an image without alternative tasks to switch to while the model runs can waste your time. At times of heavy usage, you may not be able to access the models at all. (If you have the hardware, running a model like Stable Diffusion locally can sidestep both issues; see the sketch after this list.)
  • Prompt construction skills. Like any creative tool, knowing how to manipulate it to translate ideas from your brain to the world is key. It can take some time to understand how to phrase and shape your language to produce the results you want. Set aside time to experiment and enjoy the unexpected results along the way.
  • Ownership is uncertain. While the license for DALL-E allows the use of generated images within the bounds of local laws, the ownership of an AI-generated image (particularly one whose prompt references copyrighted IP) is not settled. Commercial sites like Getty Images are not permitting the sale of AI-generated images to avoid exposure to any future changes in the law, but for now, using them in presentations or social posts is safe and legal.
  • Hands are weird. Something about human hands makes them come out oddly in a lot of the images I have created.
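
If you want to work around the web services’ queue times and resolution limits, one option is to run Stable Diffusion on your own hardware. Below is a minimal sketch using the Hugging Face diffusers library; the model ID, prompt, and settings are illustrative assumptions, and it presumes you have a CUDA-capable GPU with the diffusers and torch packages installed.

```python
# Minimal sketch: generating slide images locally with Stable Diffusion via
# the Hugging Face diffusers library (model ID and settings are illustrative).
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (downloads on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit on consumer GPUs
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

prompt = "an AI learns to paint intelligence, digital art"

# Each call can return several candidate images; generate a few variations
# so you can pick the usable ones, much like the web service's grid of nine.
result = pipe(prompt, num_images_per_prompt=4, num_inference_steps=30)

for i, image in enumerate(result.images):
    image.save(f"slide_image_{i}.png")
```

Running locally also lets you iterate on prompts without waiting in a shared queue, though generation speed still depends on your GPU.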

Even with those limitations, AI-generated images can be a fantastic way to quickly add useful visual material to presentations, reports, webinars, or articles. And it’s not only for single images! My colleague Jeremy Vale has used DALL-E Mini to add AI-art animation to our talk on synthetic data at this year’s Forrester Data Strategy & Insights conference. We’ll give you a teaser with our title: “The Value Of Tilting At Windmills.” Now get out there and start augmenting your creative process with AI today!

I’ll leave you with a suitable output from DALL-E Mini: “making machine learning easy.”