In a major breakthrough, Meta has announced 3D Gen – an AI model capable of creating textured 3D assets directly from text prompts in under a minute. The new tool represents a significant advance over existing text-to-3D solutions, producing results up to 10 times faster, according to Meta.

So how did they create this technology? Meta shares the details in a recently published research paper. As the abstract outlines, 3D Gen combines two of the company's prior models – Meta 3D AssetGen, which generates a textured 3D shape from text, and Meta 3D TextureGen, which refines and regenerates textures. Working in tandem, the two models interpret a user's description and quickly produce a high-quality, photorealistic 3D asset.
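The two-stage flow described above can be sketched in pseudocode. This is a purely illustrative mock – Meta has not released a public API for 3D Gen, so every function name and data shape here is an assumption standing in for the real models:

```python
# Hypothetical sketch of 3D Gen's two-stage pipeline.
# All names (asset_gen, texture_gen, the dict layout) are illustrative
# placeholders, not Meta's actual interface.

def asset_gen(prompt: str) -> dict:
    """Stage 1 (stands in for Meta 3D AssetGen): text -> 3D shape
    with an initial texture."""
    return {"prompt": prompt, "mesh": "placeholder_mesh",
            "texture": "initial_texture"}

def texture_gen(asset: dict, prompt: str) -> dict:
    """Stage 2 (stands in for Meta 3D TextureGen): regenerate a
    higher-quality texture for the existing mesh."""
    refined = dict(asset)
    refined["texture"] = f"refined texture for: {prompt}"
    return refined

def three_d_gen(prompt: str) -> dict:
    asset = asset_gen(prompt)          # stage 1: generate the shape
    return texture_gen(asset, prompt)  # stage 2: refine the texture

result = three_d_gen("a bronze dragon statue")
```

The key design point the paper highlights is this separation of concerns: geometry is produced once, while texturing is a distinct stage that can be re-run independently.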


But the capabilities don’t stop there. 3D Gen also allows retexturing of previously generated or artist-created 3D assets on the fly: provide additional text instructions and the AI updates the textures while leaving the geometry intact. This level of customization opens the door to wide-ranging creative expression.
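That retexturing workflow can be mocked up in the same spirit. Again, this is a self-contained illustration under assumed names – no such public interface exists – meant only to show the idea of regenerating a surface texture without touching the underlying mesh:

```python
# Hypothetical illustration of on-the-fly retexturing.
# The retexture() function and asset dict layout are assumptions,
# not Meta's actual API.

def retexture(asset: dict, style_prompt: str) -> dict:
    """Keep the existing geometry; regenerate only the surface texture
    according to a new text instruction."""
    updated = dict(asset)  # copy so the original asset is untouched
    updated["texture"] = f"texture in style: {style_prompt}"
    return updated

# An artist-made asset with a plain texture...
statue = {"mesh": "artist_made_mesh", "texture": "plain stone"}

# ...retextured from a new prompt, geometry unchanged.
gold_statue = retexture(statue, "weathered gold")
```

Because only the texture stage is re-run, this kind of edit can be much cheaper than regenerating the whole asset from scratch.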

In testing, 3D Gen proved highly impressive. Meta reports the model delivers “high prompt fidelity and high-quality 3D shapes and textures.” More striking still, human evaluators preferred 3D Gen’s outputs over rival solutions nearly 70% of the time – no small feat in the competitive 3D AI space.

While the technology remains in development for now, Digital Trends notes 3D Gen’s breakthroughs could revolutionize many creative fields. From next-gen game assets to photorealistic VR worlds, the applications are vast. Keep an eye out for more developments from Meta as this innovative new AI gets put to work across entertainment, design and beyond.
