You Can Now Edit ChatGPT Images, Photoshop-Style

In recent months, there's been an explosion of AI art generators. These clever tools can create images that look like photographs, paintings, or even something straight out of your imagination. They're constantly improving, with some features even popping up in familiar programs like Microsoft Paint.

One exciting update comes to DALL-E, an AI image model available to paying subscribers of ChatGPT Plus. Now, you can edit specific parts of an image, just like you would with Photoshop. No more starting from scratch if you just want to tweak a detail! Simply show DALL-E the area you want to change, give it new instructions, and the rest of the image stays the same.

This solves a major hurdle for AI art: each image (and video) is one-of-a-kind, even if you give the program the same starting prompt. That makes it difficult to create a series of consistent images or refine an idea step-by-step. While AI art generators based on diffusion models are impressive, they still have some kinks to iron out, which we'll explore further.

Editing images in ChatGPT

Imagine this: you're a ChatGPT Plus member, scrolling through your phone or browsing the web. Suddenly, a craving for a specific image strikes you. Maybe it's a comical cartoon detective dog, trench coat billowing, hot on the case in a neon-lit cyberpunk alley. Or perhaps you envision a vast, rolling landscape dotted with lonely hills, a solitary figure silhouetted against a gathering storm. Whatever your fancy, ChatGPT Plus has you covered.

With a simple prompt, the app conjures your desired image within seconds. But that's not all! Feeling the need to tweak your masterpiece? No problem. Just click on the generated picture, then tap the "Select" button in the top right corner (look for the little pen icon). A handy slider in the top left corner lets you adjust the size of your editing tool. Then, simply draw over the area you want to change, and watch your vision come to life.


This is a game-changer! Now you can edit specific parts of an image without affecting the rest. Before, if you wanted to tweak something small, the whole picture would be redone, often looking quite different from the original.

Here's how it works: you choose the area you want to adjust, and then tell the AI what changes you want. Just like with other AI art tools, the more details you give, the better the results. Imagine asking for a character to smile brighter, or a building to be painted a different color – your wish is the AI's command!

In my tinkering with ChatGPT and DALL-E, I got a sense of deja vu. It felt similar to some editing tricks we've seen before, like the Magic Eraser on Google Photos. Both tools seem to work by cleverly filling in the empty spaces based on what's already there, leaving the rest of the image alone.
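That fill-in-the-selection principle can be sketched in a few lines. This is a guess at the general idea, not OpenAI's actual implementation: a freshly generated image is composited onto the original so that only the pixels under the user's mask change, and everything outside the selection stays untouched.

```python
import numpy as np

def composite_edit(original, generated, mask):
    """Take pixels from the newly generated image only where the
    user's mask is True; everywhere else, keep the original pixels
    byte-for-byte (a sketch of the principle, not OpenAI's code)."""
    # mask[..., None] broadcasts the 2-D mask across the RGB channels
    return np.where(mask[..., None], generated, original)

# Toy 2x2 RGB image: "edit" only the top-left pixel
original = np.zeros((2, 2, 3), dtype=np.uint8)        # all black
generated = np.full((2, 2, 3), 255, dtype=np.uint8)   # all white
mask = np.array([[True, False],
                 [False, False]])

edited = composite_edit(original, generated, mask)
# Only the masked pixel took on the new value; the rest is unchanged.
```

The wonky borders I saw likely come from blending the two images along the mask's edge, which is where this kind of compositing is hardest to hide.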

The selection tool isn't the most sophisticated, and I ran into some issues with the borders and edges of objects coming out a bit wonky. That isn't too surprising considering how much freedom you get when choosing what to edit. Overall, the editing features worked decently most of the time, but they weren't perfect. It's clear that OpenAI, the company behind these tools, is still working on making them more reliable.

Where AI art hits its limits

Eager to test the limits of the new editing tool, I threw a variety of tasks its way. It handled simple edits with ease, like swapping the color of a dog frolicking in a meadow or nudging it to a different spot. However, more complex adjustments proved trickier. Shrinking a towering figure on a castle wall resulted in him vanishing into a confusing mess of pixels, suggesting the AI struggled to seamlessly integrate the change.

The tool also faltered in fantastical scenarios. My attempt to add a sleek car to a cyberpunk scene went awry, leaving the cityscape stubbornly car-free. In another castle scene, I asked for a majestic green dragon breathing fire to be turned menacingly red. But after some whirring and processing, the program simply banished the dragon altogether, leaving the castle strangely bereft of its mythical inhabitant.


DALL-E's editing feature is brand new, and while it's impressive, OpenAI isn't saying it can replace human image editors just yet. It's clear there's still a learning curve. These mistakes actually highlight some of the challenges AI-generated art faces.

DALL-E excels at arranging pixels to create a decent image of, say, a castle. After all, it's been trained on millions of them! But here's the catch: AI doesn't truly understand what a castle is. It doesn't grasp concepts like geometry or physical space. That's why my castles sprout turrets from thin air. This is a common issue in AI art, especially when it comes to buildings, furniture, or anything that requires a deeper understanding of form and function.


These models are essentially fancy probability machines. They can create impressive images, but they don't actually grasp the meaning behind what they're generating. This is why you might see strange things happening in OpenAI's Sora videos, like people disappearing out of thin air. The AI is just really good at manipulating pixels, not at understanding or tracking objects in the scene. You might have also heard about AI having trouble generating images of interracial couples. This is because the training data they use likely contains more images of same-race couples, reflecting real-world biases. So, the AI simply follows the pattern it's been shown.
