oot_segmentation@viktorfa

Produces segmentations only, for further downstream processing

tarot-style@roskideluge

SDXL fine-tuned on the Rider-Waite-Smith tarot deck

dreamshaper-v8@pagebrain

T4 GPU, negative embeddings, img2img, inpainting, safety checker, KarrasDPM, pruned fp16 safetensor
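
Since this entry lists img2img and inpainting support, here is a minimal sketch of how such a model might be invoked with the Replicate Python client. The input field names (prompt, negative_prompt, image, strength) are assumptions modeled on common Stable Diffusion schemas, not this model's documented API.

```python
# Hypothetical img2img call to dreamshaper-v8 via the Replicate Python client.
# Input field names are assumptions based on typical Stable Diffusion inputs.
import replicate

output = replicate.run(
    "pagebrain/dreamshaper-v8",
    input={
        "prompt": "a watercolor portrait of a fox",
        "negative_prompt": "blurry, low quality",
        "image": open("input.png", "rb"),  # source image for img2img
        "strength": 0.6,                   # how far to move away from the input image
    },
)
print(output)
```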

starling-lm-7b-alpha@tomasmcm

Source: berkeley-nest/Starling-LM-7B-alpha ✦ Quant: TheBloke/Starling-LM-7B-alpha-AWQ ✦ An open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF)
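
As a companion to the entry above, the sketch below shows how a hosted text-generation model like this one could be called through the Replicate Python client. The input field names (prompt, max_new_tokens, temperature) are assumptions, not a confirmed schema for this deployment.

```python
# Hypothetical text-generation call to the Starling-LM-7B-alpha deployment.
# Parameter names are assumptions; check the model's input schema before use.
import replicate

output = replicate.run(
    "tomasmcm/starling-lm-7b-alpha",
    input={
        "prompt": "Explain reinforcement learning from AI feedback in two sentences.",
        "max_new_tokens": 128,
        "temperature": 0.7,
    },
)
# Some language models return a list of streamed tokens, others a single string.
print("".join(output) if isinstance(output, list) else output)
```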

deliberate-v6@asiryan

Deliberate V6 Model (Text2Img, Img2Img and Inpainting)

clip-guided-diffusion@cjwbw

CLIP-guided diffusion model for image generation

yi-34b-200k@01-ai

The Yi series models are large language models trained from scratch by developers at 01.AI.

style-clip-draw@pschaldenbrand

Styled text-to-drawing synthesis method.

chatglm2-6b-int4@nomagick

ChatGLM2-6B: An Open Bilingual Chat LLM | open-source bilingual dialogue language model (int4)

clip-interrogator@lucataco

CLIP Interrogator (for faster inference)

conceptual-image-to-image@vivalapanda

Conceptual image-to-image model for Stable Diffusion 2.0

ccpl@jarrentwu1031

Contrastive Coherence Preserving Loss for Versatile Style Transfer

zone@tmcdepix

Replicate implementation of "ZONE: Zero-Shot Instruction-Guided Local Editing" by Shanglin Li et al.

deliberate-v2-img2img@mcai

Generate a new image from an input image with Deliberate v2

clip-guided-diffusion@afiaka87

Generate image from text by guiding a denoising diffusion model. Inference is somewhat slow.

woolitize-diffusion@misbahsy

Diffusion model to generate Woolitize images

hololive-style-bert-vits2@zsxkib

🎙️Hololive text-to-speech and voice-to-voice (Japanese🇯🇵 + English🇬🇧)

sdxl-toro-gen7@toro-org

AI model where the $TORO meme is born. http://torocoin.top

emoji-me@martintmv-git

RealVisXL_V3.0, img-to-emoji

sdxl-victorian-illustrations@davidbarker

SDXL trained on illustrations from the Victorian era