Generation methods, UI

Poll results: Which method/UI are you using for generating images?

Forge/ReForge
25.93% 7 votes
ComfyUI
25.93% 7 votes
Online generators (CivitAI, SeArt.ai, Tensor.art, Bing, etc.)
25.93% 7 votes
Automatic1111
11.11% 3 votes
Other
11.11% 3 votes

27 votes cast so far.

tyto4tme4l

Something of an artist
I’m curious which method/UI people are using for image generation. Is A1111 still widely used? Is ComfyUI more popular than Forge? How many are generating locally compared to online?
Thoryn

Latter Liaison
I voted for Automatic1111 as it’s what I used at the time of voting, but today I switched to Forge (very much because of @tyto4tme4l mentioning the speed difference) - and in my case as well, it’s much, much faster: gens that took me 10-15 minutes on Automatic1111 take me 10-25 seconds with Forge.
The UI for the most part also looks the same to me, since I wasn’t familiar with all of the functionality in either yet, so it was an easy switch. ComfyUI’s UI looks very interesting though.
Right now I’m hoping Stability Matrix gets some, you know, stability, and can install and run properly on my system, so it’s easier to handle different UIs, models, LoRAs etc. Will definitely look at it again once I move on to Linux, but for now I am very happy with basic Forge.
Thoryn

Latter Liaison
I can’t say enough how extremely happy I am with Forge so far - so damn snappy compared to Automatic1111.
However, I’m curious how ReForge differs from Forge. Tried googling it but didn’t get any relevant results.
Tempted to test out ComfyUI once I’ve learned the basics and need to get a more proper workflow, but preliminary reading suggests that it generally needs beefier hardware than e.g. Forge, so it’s not a high priority for me at the moment.
Since Windows 10 hits EOL this year, I’ll be moving over to Linux soon-ish (once I can afford a new m.2 to put it on, so I have a fallback period on Windows just in case), hopefully I have better luck with Stability Matrix on there.
Lord Waite

Tempted to test out ComfyUI once I’ve learned the basics and need to get a more proper workflow, but preliminary reading suggests that it generally needs beefier hardware than e.g. Forge, so it’s not a high priority for me at the moment.
Not really the case. The original idea behind Forge was actually to take some of the better generation code ComfyUI uses and bring it over to A1111, so it’s going to be fairly comparable.
I am running all of this on Linux personally, incidentally, though I have yet to try out Stability Matrix.
And basically at one point, it looked like Forge wasn’t going to continue being updated, so ReForge forked it, IIRC. Forge has been updated since then.
Thoryn

Latter Liaison
@Lord Waite
Aha, good to know then - I should check the repos for Forge and ReForge every now and then to see if Forge has gone stale.
I’m still very new to this, so am wondering if there’s any obvious or maybe not-so-obvious beginner tips anyone has.
E.g. I’ve seen some new people be very surprised when they found out web UIs have a place where you can drop .png images that still have their metadata, and send them to other parts of the UI (text to image, image to image, inpainting…)
Anything that would be recommended to look into ASAP?
Personally I’m thinking of looking into scripts for more dynamic prompting (reading from a file at various places in the prompt, so large jobs while AFK give more varied results), and some X/Y charting/plotting for comparisons (to compare e.g. models, LoRAs, sampling methods, steps…).
I also need to look into ways to make output look more unified in style, since one of my long-term goals is to make series of images telling a longer story. Guessing ComfyUI would be good for that, from what little I have read so far.
Lord Waite

It’s one of these areas where there are so many different things you can look into that it’s hard to be sure what to go into.
X/Y charting is easier on the A1111/Forge side of things, and can definitely be useful. Bear in mind that different sampling methods need different numbers of steps to get good pictures. Also, some vary more between number of steps, which can be both a good and bad thing.
While, unfortunately, this is easier when you have fast generations, I find it very helpful to just set a fixed seed and change things one at a time and generate to see how they affect things, and just change the seed manually. When you have random seeds on every generation, it can be hard to know whether the change you made or the seed caused the changes in the picture.
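The fixed-seed, one-variable-at-a-time approach is essentially what an X/Y chart automates. A rough sketch of the idea (the `generate(...)` call is a hypothetical placeholder for whatever your UI or API actually runs):

```python
from itertools import product

# Hold the seed fixed and vary one axis at a time, so differences between
# images come from the parameters you changed, not from a new random seed.
SEED = 123456
samplers = ["Euler a", "DPM++ 2M Karras"]  # X axis
steps = [20, 30, 40]                       # Y axis

grid = [(sampler, n) for sampler, n in product(samplers, steps)]
for sampler, n in grid:
    # generate(prompt, sampler=sampler, steps=n, seed=SEED)  # placeholder
    print(f"cell: sampler={sampler}, steps={n}, seed={SEED}")
```

Each grid cell shares the seed, so comparing cells isolates the effect of the sampler/step change - the same principle as manually re-rolling with a fixed seed.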
A1111/Forge/etc. and ComfyUI store different information in the metadata. With the former, it’s prompt/generation information, and with the latter, it is the entire workflow used as a json (which does include prompt/generation info). In fact, you can actually open a picture generated with ComfyUI as a workflow (which will probably make more sense when using ComfyUI).
As far as dynamic prompting goes, there is an extension for it on A1111/Forge/etc…, and custom nodes on ComfyUI for it, both of which use the same library, so how it’s done will be pretty similar either way. Civitai has a large number of wildcard files you can download for dynamic prompting (as well as ComfyUI workflows).
There’s some information on the A1111 extensions page:
https://github.com/adieyal/sd-dynamic-prompts
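The core of wildcard-style dynamic prompting is easy to sketch. This is a simplified illustration of the idea (a seeded re-implementation, not the extension’s actual code; the `__name__` token syntax matches the wildcard convention the extension uses):

```python
import random
import re

def expand_wildcards(prompt: str, wildcards: dict, seed: int) -> str:
    """Replace each __name__ token with a random entry from that wildcard
    list; the same seed always produces the same expansion."""
    rng = random.Random(seed)
    def pick(match):
        return rng.choice(wildcards[match.group(1)])
    return re.sub(r"__(\w+)__", pick, prompt)

wildcards = {"hair": ["red mane", "blue mane"], "place": ["forest", "beach"]}
# Same seed -> same prompt; change the seed to vary large AFK batches.
print(expand_wildcards("pony, __hair__, __place__", wildcards, seed=1))
```

Seeding the RNG is what makes a run reproducible - worth keeping in mind when comparing results later in the thread.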
It’s best to get an idea of the different types and families of models, and bear in mind they often need to be prompted differently. (Pony and Illustrious based models are the most likely ones for pony-related things. Both are based on XL, but other things are too. There’s also Flux and 3.5; 2.1/1.5/1.4 are all rather old, and there’s other stuff out there.)
Pony, in general, isn’t great at unified style; there are a lot of style LoRAs for it, and a lot of models based off of it that do have a better unified style. There are some words that tend to trigger certain styles, though. The Purplesmart.ai Discord has a list of styles people have found. You get a feel for words and phrases and how they change things after a while, too. Illustrious is more the sort of model where you can mention artists’ names and have it change styles, which opens up its own can of worms…
ControlNet and IPAdapter are probably good to look into, as well as img2img and inpainting.
If you do get into ComfyUI, check their examples page:
https://comfyanonymous.github.io/ComfyUI_examples/
Even if you don’t, glancing through there might give you an idea of things that are possible. (Also, if you install ComfyUI, make sure to also install the ComfyUI Manager, as that lets you easily install custom nodes, and there are a million custom nodes.)
AIPonyAnon

@Thoryn
I have not had great success with any of the dynamic prompt nodes that I’ve tried in ComfyUI. They did the job of randomly choosing, but had no reproducibility, which made them useless for testing artist tag combinations.
Lord Waite

Reproducibility could definitely be better. You pretty much want to make sure you’ve got a node showing what the output was, and copy it down somewhere if you want to keep it…
AIPonyAnon

@Thoryn
The seed never seemed to determine which combination was chosen, and even if I connected a text output to display it, it wouldn’t save the metadata properly if I did more than one image gen at once.
@Lord Waite
Yeah, it’s a pain. I have access to an unreleased node that does work properly in that it directly maps seed value to selection. I’ll try to remember to post it here once it’s out.
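For the curious, a deterministic seed-to-selection mapping could look something like this (an illustrative sketch of the behavior described, not the unreleased node’s actual code):

```python
from itertools import combinations

def combo_for_seed(tags, k, seed):
    """Deterministically map a seed to one k-tag combination.
    The same seed always yields the same selection, so runs are reproducible."""
    combos = list(combinations(sorted(tags), k))
    return combos[seed % len(combos)]

artists = ["artist_a", "artist_b", "artist_c", "artist_d"]
print(combo_for_seed(artists, 2, seed=42))
```

Because the seed indexes directly into the (sorted, hence stable) list of combinations, re-running with the same seed reproduces the exact artist-tag pair - the property the stock dynamic-prompt nodes were missing.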