Introducing Better Forge: Spin Up New Stable Diffusion Pods Quicker Than Before
Our very own Madiator2011 has done it again with the release of Better Forge, a streamlined template that lets you spin up a Stable Diffusion instance with a minimum of fuss. One consistent piece of feedback from RunPod users is how long it takes to start an image generation pod for the first time, especially in Community Cloud, where machines may not have data-center-grade bandwidth. The primary cause of this long wait is that models are often "baked into" the Docker image, which drastically increases the image's size, leading to a long download before you can start working in the pod – for a model that you may not even want to use. Better Forge installs Forge with a total payload of approximately 12 GB when all is said, done, and unpacked.
If you'd like to try out Madiator's previous work, Better ComfyUI, that is also still available here!
Better Forge comes with the following perks:
- Supports network storage and custom extension installation (remember, saving models to network storage is the fastest way to get up and running ASAP in Secure Cloud!)
- Comes with API access enabled
- Flux support (must be installed manually by the user after the fact due to licensing)
You can get started with Better Forge today by going to its page in the template explorer, clicking Deploy, selecting a GPU spec, and off you go.
Quick Start Guide
On the Deploy Pod page, select a GPU spec, choose the Better Forge template, and click Deploy On Demand.
When you click Connect, you'll see two options on different ports: the Forge web UI on port 7860, and the Web Terminal.
This template is Bring Your Own Model, so you'll need to download a model yourself. One of the preferred ways to do so is the model downloader by ashleykleynhans, which you can run in the Web Terminal. Enter the model's download URL (right-click the download link on CivitAI and click Copy), along with an API key, which you can create under the CivitAI user page, and the downloader will fetch the model for you.
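If you'd rather script the download yourself instead of using the downloader, here's a minimal sketch. It assumes CivitAI's download endpoint accepts your API key as a `token` query parameter, and the destination directory below is a hypothetical Forge checkpoint path – adjust it to your pod's actual layout.

```python
"""Sketch: pulling a CivitAI checkpoint into the pod by hand.

Assumptions (not from the article): the `?token=` query parameter for the
API key, and the /workspace checkpoint path. Verify both for your setup.
"""
import os
import urllib.request


def build_download_url(model_url: str, api_key: str) -> str:
    """Append the CivitAI API key as a token query parameter."""
    sep = "&" if "?" in model_url else "?"
    return f"{model_url}{sep}token={api_key}"


def download_model(model_url: str, api_key: str, dest_dir: str) -> str:
    """Download the checkpoint into dest_dir; returns the saved path."""
    url = build_download_url(model_url, api_key)
    # Placeholder filename derived from the URL; the real downloader
    # reads the proper name from the response headers.
    name = model_url.rstrip("/").split("/")[-1] + ".safetensors"
    dest = os.path.join(dest_dir, name)
    urllib.request.urlretrieve(url, dest)  # network call; run inside the pod
    return dest


if __name__ == "__main__":
    # Hypothetical values - substitute your own copied link, key, and path.
    path = download_model(
        "https://civitai.com/api/download/models/128713",
        os.environ["CIVITAI_API_KEY"],
        "/workspace/stable-diffusion-webui-forge/models/Stable-diffusion",
    )
    print("saved to", path)
```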
Once the download completes, connect to the pod on port 7860, select the model in the Checkpoint dropdown, enter a prompt, click Generate, and there you have it.
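Since the template ships with API access enabled, you can also generate images programmatically instead of through the web UI. Forge exposes an AUTOMATIC1111-compatible REST API, so the sketch below targets the conventional `/sdapi/v1/txt2img` route; the pod URL is a hypothetical placeholder – use the proxy address shown under Connect for your own pod.

```python
"""Sketch: generating an image through the pod's txt2img API.

The /sdapi/v1/txt2img route follows the AUTOMATIC1111 API convention that
Forge inherits; the pod URL below is a placeholder, not a real endpoint.
"""
import base64
import json
import urllib.request


def build_txt2img_payload(prompt: str, steps: int = 20,
                          width: int = 512, height: int = 512) -> dict:
    """Minimal request body for the txt2img endpoint."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}


def txt2img(base_url: str, payload: dict) -> bytes:
    """POST the payload and return the first generated image as PNG bytes."""
    req = urllib.request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The API returns generated images as a list of base64 strings.
    return base64.b64decode(body["images"][0])


if __name__ == "__main__":
    # Hypothetical pod address - replace with your pod's proxy URL.
    png = txt2img(
        "https://<pod-id>-7860.proxy.runpod.net",
        build_txt2img_payload("a watercolor fox in a misty forest"),
    )
    with open("out.png", "wb") as f:
        f.write(png)
```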
Questions? Feel free to pop on our Discord and ask for help!