Transform Images to Editable 3D Models
PartPacker uses NVIDIA's breakthrough AI technology to convert single 2D images into interconnected, editable 3D models. Generate part-based 3D models perfect for content creation, 3D printing, and advanced research applications.
Upload any image above to generate an editable 3D model with PartPacker's AI technology
Trusted by Researchers & Creators
PartPacker represents a breakthrough in 3D generation technology, enabling novel workflows that were previously impossible with traditional single-mesh models.
Why Choose PartPacker?
Dual-volume packing strategy for part-based generation
Individual part manipulation for editing and animations
Export to STL/3MF formats for multi-material 3D printing
Pre-trained models available on Hugging Face Hub
3D Model Generation
Transform any 2D image into editable 3D components using advanced AI technology
Revolutionary 3D Generation
PartPacker introduces breakthrough technology in 3D model generation, offering capabilities that transform how we create and interact with three-dimensional content.
Single Image Input
PartPacker needs only a single 2D RGB image to generate a complete 3D model, unlike approaches that require captures from multiple viewpoints.
Part-Based Generation
Revolutionary dual-volume packing strategy creates interconnected parts that can be individually manipulated.
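One way to picture the dual-volume idea: parts are distributed across two volumes so that touching parts never share a volume, which keeps their surfaces cleanly separable. The toy sketch below illustrates that intuition as graph 2-coloring; the part names and contact graph are made up for illustration and are not PartPacker's actual data structures.

```python
# Toy illustration of the dual-volume packing intuition: assign parts to
# two volumes so that no two touching parts share a volume (graph 2-coloring).
# Part names and contacts are invented for this example.
from collections import deque

def assign_two_volumes(parts, contacts):
    """Greedy BFS 2-coloring; returns {part: 0 or 1}, or None if impossible."""
    neighbors = {p: set() for p in parts}
    for a, b in contacts:
        neighbors[a].add(b)
        neighbors[b].add(a)
    volume = {}
    for start in parts:
        if start in volume:
            continue
        volume[start] = 0
        queue = deque([start])
        while queue:
            p = queue.popleft()
            for q in neighbors[p]:
                if q not in volume:
                    volume[q] = 1 - volume[p]  # opposite volume from its neighbor
                    queue.append(q)
                elif volume[q] == volume[p]:
                    return None                # odd contact cycle: two volumes not enough
    return volume

parts = ["body", "lid", "handle"]
contacts = [("body", "lid"), ("lid", "handle")]
print(assign_two_volumes(parts, contacts))  # {'body': 0, 'lid': 1, 'handle': 0}
```

In PartPacker itself the two volumes are latent representations produced by the network, not an explicit post-hoc coloring; the sketch only conveys why two volumes suffice to keep adjacent parts apart.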
Diffusion Transformer
Advanced DiT architecture ensures high-quality 3D model output with superior detail and accuracy.
Editable Components
Generated models feature separable parts perfect for animations, modifications, and interactive applications.
3D Printing Ready
Export to STL/3MF formats optimized for multi-material 3D printing applications.
Research & Development
Enables novel 3D generation methods for academic research and professional development.
How PartPacker Works
Upload Image
Simply upload any 2D RGB image (518×518 resolution recommended) to begin the PartPacker generation process.
AI Processing
Our Diffusion Transformer architecture analyzes your image and generates interconnected 3D parts using dual-volume packing.
Download & Edit
Receive your editable 3D model with separable components, ready for animation, modification, or 3D printing.
Use Cases & Applications
Content Creation
Generate editable 3D assets for games, films, and interactive media from concept art or reference images.
3D Printing
Create printable models with separable components for multi-material printing and assembly.
Research & Development
Advance 3D generation research with PartPacker's innovative part-based approach.
Frequently Asked Questions
Get answers to common questions about PartPacker's 3D generation technology, system requirements, and usage guidelines.
What makes PartPacker different from other 3D generation methods?
PartPacker uses a dual-volume packing strategy that generates interconnected, separable 3D parts from a single image. Unlike traditional methods that produce single-mesh models, PartPacker enables individual part manipulation for editing and animation, making it well suited to applications like 3D printing and content creation.
What are the system requirements?
PartPacker requires a CUDA-enabled GPU, with 16 GB+ of VRAM recommended for optimal performance. The system supports Python 3.8+ and PyTorch 2.0+. If you don't have high-end hardware, you can use the online demo through the Hugging Face Spaces interface above.
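The version floors mentioned above can be checked programmatically before installation. A minimal stdlib-only sketch (the PyTorch check simply reports failure if PyTorch is not installed):

```python
# Minimal environment check for the stated floors: Python 3.8+ and PyTorch 2.0+.
import sys

def version_tuple(s):
    """'2.0.1+cu118' -> (2, 0, 1); ignores local build suffixes."""
    return tuple(int(p) for p in s.split("+")[0].split(".") if p.isdigit())

def check_python(minimum=(3, 8)):
    return sys.version_info[:2] >= minimum

def check_torch(minimum=(2, 0)):
    try:
        import torch
    except ImportError:
        return False  # PyTorch not installed
    # Require both a new-enough PyTorch and a usable CUDA device.
    return version_tuple(torch.__version__)[:2] >= minimum and torch.cuda.is_available()

print("Python OK:", check_python())
print("PyTorch + CUDA OK:", check_torch())
```

Note that `torch.cuda.is_available()` only confirms a CUDA device exists; it does not verify the 16 GB VRAM recommendation.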
What image formats and resolutions are supported?
PartPacker works with standard RGB images, with 518×518 input resolution recommended for optimal results. Common formats such as JPEG and PNG are supported. A single view is sufficient; no multiple viewpoints are required.
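For preprocessing, an input image can be scaled to fit the recommended 518×518 canvas while preserving its aspect ratio. A stdlib-only sketch of the arithmetic — the 518×518 target comes from the text above, but the pad-to-square convention is an assumption, not necessarily what PartPacker does internally:

```python
# Compute an aspect-preserving resize that fits inside a 518x518 canvas,
# plus the padding needed to center it. Padding to a square is an assumed
# convention for this sketch, not a documented PartPacker behavior.
def fit_to_canvas(width, height, target=518):
    scale = target / max(width, height)        # bring the longest side to the target
    new_w, new_h = round(width * scale), round(height * scale)
    pad_x, pad_y = (target - new_w) // 2, (target - new_h) // 2
    return new_w, new_h, pad_x, pad_y

# A 1024x768 photo scales to 518x388 and is centered with vertical padding.
print(fit_to_canvas(1024, 768))  # (518, 388, 0, 65)
```

The resulting dimensions and offsets can then be fed to any image library's resize-and-paste routines.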
Can I use PartPacker commercially?
PartPacker is currently available for non-commercial use only, as specified in its license. For commercial applications, please contact NVIDIA Research for licensing information. Generated models can be exported to STL/3MF formats for 3D printing and other applications within the license terms.
How do I access PartPacker?
You can access PartPacker through several channels: the live demo on Hugging Face Spaces (embedded above), the GitHub repository at github.com/NVlabs/PartPacker, or the pre-trained models on the Hugging Face Hub. The documentation includes data-processing scripts for mesh conversion.
Can I 3D print the generated models?
Yes. PartPacker exports generated models to STL and 3MF formats, which are well suited to multi-material 3D printing. Because the models are part-based, they fit naturally into workflows that require separable components printed in different materials.
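Since STL stores nothing but triangles, per-part export boils down to writing each component's triangle list into its own file. A stdlib-only sketch of the ASCII STL layout — in practice a mesh library would handle this, and the single triangle used below is placeholder geometry:

```python
# Write one ASCII STL file per part -- the per-component export idea behind
# separable, multi-material printing. Geometry here is a placeholder.
def write_ascii_stl(path, name, triangles):
    """triangles: list of ((x,y,z), (x,y,z), (x,y,z)) vertex tuples."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            # Normal left as zero; most slicers recompute it from the vertices.
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One file per separable part, e.g. part_0.stl, part_1.stl, ...
write_ascii_stl("part_0.stl", "part_0", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

3MF, by contrast, is a zipped XML container that can hold several objects and material assignments in a single file, which is why it is the more natural target for multi-material jobs.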
Still Have Questions?
Our team is here to help you get the most out of PartPacker's 3D generation capabilities. Reach out for technical support, research collaboration, or licensing inquiries.
Technical Support
Get help with installation, usage, and troubleshooting
Research Collaboration
Explore opportunities for academic and industry partnerships
Commercial Licensing
Discuss commercial usage and licensing options
Ready to Transform Your Images?
Join researchers, creators, and developers worldwide who are pushing the boundaries of 3D generation technology with PartPacker's revolutionary approach.
Developer Resources & Downloads
GitHub Repository
Access the complete PartPacker source code, documentation, and installation guides.
Hugging Face Hub
Download pre-trained PartPacker models and explore model cards with detailed information.
Live Demo
Try PartPacker instantly in your browser with our interactive Hugging Face Spaces demo.
Research Paper
Read the technical details and methodology behind PartPacker's dual-volume packing strategy.
Quick Installation Guide
Install Dependencies
Python 3.8+, PyTorch 2.0+, CUDA support
Download Models
Pre-trained VAE and Flow models from Hugging Face
Run PartPacker
Generate your first 3D model from a 2D image
# Download pre-trained models
wget https://huggingface.co/nvidia/PartPacker/resolve/main/vae.pt
wget https://huggingface.co/nvidia/PartPacker/resolve/main/flow.pt
# Generate 3D model
python generate.py --input image.jpg
PartPacker
Revolutionary 3D generation technology by NVIDIA Research
© 2024 NVIDIA Corporation. PartPacker is available for non-commercial use.