motion_diffusion_model@daanelson

A diffusion model for generating human motion video from a text prompt

bladerunner@roshansood

SDXL Blade Runner by Stylize

genstruct-7b@hamelsmu

Genstruct 7B is an instruction-generation model, designed to create valid instructions given a raw text corpus. This enables the creation of new, partially synthetic instruction finetuning datasets from any raw-text corpus.

latent-diffusion-text2img@cjwbw

Text-to-image generation with latent diffusion

sdxl-soviet-propaganda@davidbarker

SDXL fine-tuned on Soviet propaganda posters

stable-diffusion-videos@wcarle

Generate videos by interpolating the latent space of Stable Diffusion

rivers-stable-diffusion-upscaler@lucataco

RiversHaveWings Stable Diffusion Upscaler

uform-gen2-qwen-500m@cjwbw

Pocket-Sized Multimodal AI For Content Understanding and Generation

loose-control@chigozienri

Depth ControlNet, but fuzzier

upscaler@alexgenovese

GFPGAN: a practical algorithm for real-world face restoration

fin-llama-33b@tomasmcm

Source: bavest/fin-llama-33b ✦ Quant: TheBloke/fin-llama-33B-AWQ ✦ Efficient Finetuning of Quantized LLMs for Finance

controlnet-v1-1-multi@zylim0702

CLIP Interrogator with ControlNet SDXL for Canny and ControlNet v1.1 for the other conditioning modes

visual-style-prompting-controlnet@camenduru

Visual Style Prompting with Swapping Self-Attention

pixart-sigma-900m@lucataco

PixArt Sigma 900M is a text-to-image generation model based on the PixArt Sigma architecture

faster-diffusion@cjwbw

Rethinking the Role of UNet Encoder in Diffusion Models

big-medium-images@kevin-coyle

Generates images in the Big Medium style

debvc@cudanexus

Deep Exemplar-based Video Colorization: colorizes and restores old images and film footage

logos@profdl

Trained on logo designs, black logos on white backgrounds

sdxl-ww2@hunterkamerman

A fine-tuned SDXL LoRA trained on images from World War 2