This video shows each edit of the image, from the initial AI generation to the final result. The primary editing method was inpainting: selecting a region of the image, writing a prompt describing the desired change, and adjusting the generation settings to suit each situation. GIMP was also used, mainly for changing colours or moving body parts.
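The core idea behind inpainting can be illustrated with a simple compositing step: the diffusion model regenerates only the masked region, and the result is blended back into the untouched parts of the original. This is a minimal sketch using NumPy arrays in place of real images; actual inpainting tools do far more (the regeneration itself is a full diffusion process), but the mask-based blend is the part that keeps the rest of the image intact.

```python
import numpy as np

def composite_inpaint(original, generated, mask):
    """Blend a newly generated region into the original image.

    original, generated: float arrays of shape (H, W, 3), values in [0, 1]
    mask: float array of shape (H, W, 1), 1.0 inside the selected region
    """
    return mask * generated + (1.0 - mask) * original

# Toy 2x2 "images": the mask keeps the left column from the original
original = np.zeros((2, 2, 3))
generated = np.ones((2, 2, 3))
mask = np.array([[[0.0], [1.0]],
                 [[0.0], [1.0]]])

result = composite_inpaint(original, generated, mask)
print(result[0, 0, 0], result[0, 1, 0])  # 0.0 1.0
```

Only the masked column takes the generated values; everything outside the selection stays exactly as it was, which is why inpainting edits can be stacked one after another without degrading the rest of the picture.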
During the process, I saved the image after each edit, then assembled the saved frames into this video using a tool called Flowframes to interpolate between them. Each edit involved a lot of experimentation with prompts, settings, and repeated generations to get the desired effect.
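Flowframes itself uses AI interpolation models (such as RIFE) to synthesise motion-aware in-between frames, which is well beyond a simple blend. Purely to illustrate the concept of filling in frames between two saved edit stages, here is a naive linear crossfade sketch; it is not how Flowframes works internally, just the simplest possible stand-in.

```python
import numpy as np

def crossfade_frames(frame_a, frame_b, n_intermediate):
    """Generate intermediate frames by linear blending.

    A naive stand-in for real frame interpolation: each intermediate
    frame is a weighted mix of the two saved edit stages.
    """
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)
        frames.append((1.0 - t) * frame_a + t * frame_b)
    return frames

# Two toy frames: all-black before the edit, all-white after
a = np.zeros((2, 2, 3))
b = np.ones((2, 2, 3))
mids = crossfade_frames(a, b, 3)
print(len(mids), mids[1][0, 0, 0])  # 3 0.5
```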
Before I even began prompting for this image, I had envisioned this scene: a moth pony on the right staring in awe at a lamp on a brown wooden table in a dark room. The final image matches that vision closely, which demonstrates that, with the proper usage and application, AI can serve as a tool for expressing ideas that are entirely of human origin.
The model used for AI generation was the Pony Soup V2 Remix model, a custom merge I created from several AI models (mostly Pony Diffusion) that is optimized for coherent SFW pony image generation. Keep in mind that this video does not show the process of the initial generation, which itself is built upon many hours of prompt engineering and finely tuned generation parameters specific to this model and use case.
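The basic operation behind a checkpoint merge is a weighted average of the source models' weights. This is an illustrative sketch using plain Python dicts with scalar values; real merges operate on full Stable Diffusion state dicts (large tensors per layer) and often apply different weights per block, but the underlying arithmetic is the same.

```python
def merge_checkpoints(state_dicts, weights):
    """Weighted average of model weights: the basic operation behind
    a checkpoint merge. Illustrative only; real merging tools work on
    full state dicts of tensors, not scalar toy values."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for sd, w in zip(state_dicts, weights))
    return merged

# Toy example: two "models" with a single scalar parameter each
model_a = {"layer.weight": 1.0}
model_b = {"layer.weight": 3.0}
merged = merge_checkpoints([model_a, model_b], [0.7, 0.3])
print(round(merged["layer.weight"], 6))  # 1.6
```

A 70/30 split like the hypothetical one above biases the merge toward the first model's behaviour while still pulling in traits from the second, which is how a merge can be steered toward a particular style or subject coverage.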