How AI Will Change Photography As We Know It

This was originally published on my Substack on April 11, 2023.

Hi everyone! I hope you’ve all been well since my last post. I spent a good part of March traveling, with time in Las Vegas for WPPI, the Wedding and Portrait Photographers International conference, and in Los Angeles for Outer Limits, a blockchain technology conference. Each offered a glimpse into how advances in technology are going to impact the visual arts, whether that comes in the form of camera-to-cloud technology for real-time editing or visual art being used to empower non-profit work that helps people in underserved communities.

Coming back home, I’ve spent some time playing with two of the most popular AI image generation tools out right now, Midjourney and Adobe Firefly, and posting my results on Instagram. Full disclosure: I’m currently an Adobe Express Ambassador, but my opinions here are completely my own. Each of these tools offers a different experience when it comes to generating images, and each goes about getting there in its own way.

I’ll start with Midjourney, since that was the first tool I used when experimenting with AI. Midjourney generates images from text prompts entered in the chat app Discord. You describe, in as much detail as you can think of, the type of image you want to see, and in general Midjourney will create an image that is a convincing match for what you entered. By adding various parameters to the end of an entry, you can specify which version of the model you want to use, the amount of detail in the image, and other factors that shape the results.
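To give a rough sense of what an entry looks like in practice, here’s an illustrative sketch (the subject and parameter values are placeholders of my own, not from an actual shoot). In Discord you type the /imagine command followed by the description:

/imagine prompt: portrait of a singer on a rooftop at golden hour, 85mm lens, shallow depth of field --ar 2:3 --v 5 --s 250

In that sketch, --ar sets the aspect ratio, --v selects the model version, and --s (stylize) controls how strongly Midjourney layers its own aesthetic over your description.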

What makes Midjourney special is that it can not only generate images from prompts alone, but also take your own images as input data. Essentially, you can use it to manipulate and edit your own photos. I’ve tested this with photos I’ve shot of friends, and through repeated generations I was able to get some astounding results. The header image was generated from a portrait I shot, and Midjourney created a photorealistic variant of an image that never existed in that shoot.

In the above image, I was able to take a photo of a friend shot in the mountains outside Seattle and move the model into downtown Seattle at golden hour. It’s incredible to see how Midjourney took the supplied photos and created photorealistic, accurate images. For photographers, I think this can be a powerful tool for producing alternative takes on images they create, or for breathing fresh life into older work with options that weren’t previously available. For reference, here are the sample images I used to create the images above.

Midjourney Images
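For a rough sketch of how an image prompt is structured (the URL below is just a placeholder standing in for an uploaded photo), you paste a link to your source image at the front of the prompt and let the text describe the change you want:

/imagine prompt: https://example.com/mountain-portrait.jpg the same person standing on a downtown Seattle street at golden hour, photorealistic --ar 2:3 --v 5

Midjourney then blends the supplied photo with the written description, which is how I moved the model from the mountains into the city.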

What I learned about writing prompts in Midjourney is something I brought into Adobe Firefly. Firefly draws on Adobe’s massive stock library to create images, so while you can’t use your own images the way you can in Midjourney, the pool of imagery it works from is enormous.

As another text-to-image generator, and because of how it’s currently set up, highly detailed prompts help create better results. By being very specific about what I want, I’ve been able to generate photoshoot concepts that I’d like to execute.
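As an illustration of what “very specific” means here (this is a made-up prompt of my own, not the one behind the images below), a Firefly entry might read:

editorial portrait of a violinist in a flowing red dress, alone in an empty concert hall, dramatic window light, 35mm, shallow depth of field, cinematic color grade

The more concrete the details about wardrobe, location, lens, and light, the closer the results tend to land to the concept in my head.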

Adobe Firefly Images

Each of the above images was created in Firefly and is photorealistic, outside of a few things that I’ll get to shortly. Firefly not only helps illustrate a concept I might have, but also gives various interpretations of it. That makes it a good jumping-off point for creating unique images in real life, as well as for exploring options I might not have initially thought of when writing the prompt. Firefly excels in this area, and being able to use a successful generation to influence a new set of photos is incredibly useful.

While both of these tools are very powerful and have their own strengths, neither of these apps is perfect. There are two places where both fall short right now when generating portraits, and that’s in the eyes and the hands. If you’ve followed news about image-based AI tools like DALL-E or Midjourney, the running joke about hands looking like a scene from Everything Everywhere All at Once is very true. I’ve gotten results where the hands seem to have been sourced from three or four different images at once and then loosely stitched together. Eyes have the same problem: they sometimes appear to be pulled from multiple images, so the portrait as a whole feels off. Both issues tend to get fixed after a few rounds of refinement.


Overall, Midjourney and Firefly represent a shift in how we create and refine photography. Whether it’s refining images we’ve already created or coming up with concepts for future shoots, both of these tools will transform how we create visually, and as they evolve they will continue to impact photography.