Stability AI Releases Stable Diffusion 3.5
Stability AI has released Stable Diffusion 3.5, the latest version of its open-source text-to-image diffusion model. The update brings better image quality, faster generation times, and several new features.
Improved Image Quality
Stable Diffusion 3.5 generates higher-quality images thanks to a number of under-the-hood improvements. The model captures fine detail and texture more faithfully and produces more realistic, coherent results.
Faster Generation Times
Stable Diffusion 3.5 is also faster than previous versions. The model can generate an image in as little as 10 seconds, making it more practical for interactive, near-real-time applications.
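For a concrete sense of how generation time trades off against sampling steps, here is a minimal text-to-image sketch using the Hugging Face diffusers library; the model identifier, step count, and guidance scale are illustrative assumptions, not official settings.

    # Minimal text-to-image sketch with Hugging Face diffusers.
    # The model id and sampling settings below are assumptions for illustration.
    import torch
    from diffusers import StableDiffusion3Pipeline

    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large",  # assumed model id
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    # Fewer inference steps mean faster generation, at some cost in fine detail.
    image = pipe(
        prompt="a photograph of a lighthouse at sunset",
        num_inference_steps=28,
        guidance_scale=4.5,
    ).images[0]
    image.save("lighthouse.png")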
New Features
Stable Diffusion 3.5 includes several new features:
Inpainting:
The model can now be used to inpaint images, filling in missing or damaged areas with generated content.
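As a rough illustration of an inpainting workflow, the sketch below uses diffusers' AutoPipelineForInpainting; the model identifier and file names are assumptions, and the release's official inpainting interface may differ.

    # Sketch of inpainting: regenerate only the masked region of an image.
    # Model id and file names are assumptions for illustration.
    import torch
    from diffusers import AutoPipelineForInpainting
    from diffusers.utils import load_image

    pipe = AutoPipelineForInpainting.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large",  # assumed model id
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    init_image = load_image("photo.png")  # image with an area to replace
    mask_image = load_image("mask.png")   # white = region to regenerate

    result = pipe(
        prompt="a stone bridge over the river",
        image=init_image,
        mask_image=mask_image,
    ).images[0]
    result.save("inpainted.png")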
Outpainting:
The model can now be used to extend images beyond their original boundaries, generating new content that is consistent with the existing image.
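Outpainting is commonly implemented as inpainting on a padded canvas: the original image is pasted onto a larger canvas and the new border region is masked for generation. The sketch below illustrates that approach; the padding size, file names, and model identifier are assumptions.

    # Sketch of outpainting by padding the canvas and inpainting the border.
    # Padding size, file names, and model id are assumptions for illustration.
    import torch
    from PIL import Image
    from diffusers import AutoPipelineForInpainting

    pipe = AutoPipelineForInpainting.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large",  # assumed model id
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    original = Image.open("photo.png").convert("RGB")
    pad = 256  # extend the image by 256 pixels on the left and right

    # Larger canvas with the original pasted in the centre.
    canvas = Image.new("RGB", (original.width + 2 * pad, original.height), "black")
    canvas.paste(original, (pad, 0))

    # Mask: white where new content should be generated, black over the original.
    mask = Image.new("L", canvas.size, 255)
    mask.paste(Image.new("L", original.size, 0), (pad, 0))

    result = pipe(
        prompt="a wide landscape continuing the scene",
        image=canvas,
        mask_image=mask,
    ).images[0]
    result.save("outpainted.png")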
Depth2Img:
The model can condition image generation on a depth map, or on depth estimated from an input image, producing new images that preserve the spatial structure of the original.
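To illustrate the depth-conditioned workflow, the sketch below uses diffusers' StableDiffusionDepth2ImgPipeline with the Stable Diffusion 2 depth checkpoint, which is the depth-to-image interface available in that library; the equivalent pipeline for this release may differ, and the file names are assumptions.

    # Sketch of depth-conditioned image-to-image generation.
    # Uses the Stable Diffusion 2 depth pipeline as an illustration of the
    # workflow; file names are assumptions.
    import torch
    from diffusers import StableDiffusionDepth2ImgPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-depth",
        torch_dtype=torch.float16,
    ).to("cuda")

    init_image = load_image("room.png")  # assumed input photo

    # A depth map is estimated from the input image and used to condition
    # generation, so the output keeps the original's spatial layout.
    result = pipe(
        prompt="a cozy cabin interior, warm lighting",
        image=init_image,
        strength=0.7,
    ).images[0]
    result.save("depth2img.png")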
Availability
Stable Diffusion 3.5 is available now on the Stability AI website and can be used for free for non-commercial purposes.
Conclusion
Stable Diffusion 3.5 is a significant update to the Stable Diffusion model. The improvements in image quality and generation speed, together with the new features, make it a more powerful and versatile tool for image generation.