HY 3D 2.1

The HY 3D 2.1 workflow transforms a single 2D image into a detailed 3D model using Tencent's Hunyuan3D 2.1 model. It chains a series of specialized nodes to carry the image through to a 3D mesh. The process begins with the 'ImageOnlyCheckpointLoader' and 'LoadImage' nodes, which load the Hunyuan3D checkpoint and the source image. The 'Hunyuan3Dv2Conditioning' node converts the encoded image into the conditioning that guides generation, the 'EmptyLatentHunyuan3Dv2' node creates the empty latent volume the model will fill in, and the 'ModelSamplingAuraFlow' node adjusts the model's sampling schedule for the generation step.
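The wiring described above can be sketched in ComfyUI's API prompt format, where each node is keyed by an ID and each link is a `[source_node_id, output_index]` pair. This is an illustrative sketch only: the node IDs, the checkpoint filename, the intermediate CLIP-vision encode step, and the exact input names and values are assumptions, so check the node definitions in your own ComfyUI install before relying on them.

```python
# Hedged sketch of the front half of the HY 3D 2.1 workflow in ComfyUI's
# API prompt format: {node_id: {"class_type": ..., "inputs": {...}}}.
# Filenames, input names, and numeric values below are assumptions.

prompt = {
    "1": {"class_type": "ImageOnlyCheckpointLoader",
          # assumed checkpoint filename
          "inputs": {"ckpt_name": "hunyuan3d-dit-v2-1.safetensors"}},
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    # An image-encoding step (not named in the text) is assumed to sit
    # between LoadImage and the conditioning node.
    "3": {"class_type": "CLIPVisionEncode",
          "inputs": {"clip_vision": ["1", 1], "image": ["2", 0]}},
    "4": {"class_type": "Hunyuan3Dv2Conditioning",
          "inputs": {"clip_vision_output": ["3", 0]}},
    "5": {"class_type": "ModelSamplingAuraFlow",  # adjusts sampling shift
          "inputs": {"model": ["1", 0], "shift": 1.0}},
    "6": {"class_type": "EmptyLatentHunyuan3Dv2",
          "inputs": {"resolution": 3072, "batch_size": 1}},
}

# Scalar inputs are literals; node-to-node links are [node_id, output_index].
for node in prompt.values():
    assert "class_type" in node and "inputs" in node
```

In a live setup this dictionary would typically be POSTed as JSON to a running ComfyUI server's prompt endpoint, but the graph structure itself is the point here.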

The workflow continues with the 'KSampler' node, which iteratively denoises the latent under the image conditioning. The 'VAEDecodeHunyuan3D' node decodes the sampled latent into a voxel representation, which the 'VoxelToMesh' node converts into a triangle mesh. Finally, the 'SaveGLB' node exports the 3D model in GLB format, ready for use in game engines, DCC tools, and web viewers. This workflow is particularly useful for artists and developers who want to create 3D content from existing 2D assets, offering a streamlined process that minimizes manual modeling effort.
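The voxel-to-mesh step can be illustrated with a minimal sketch. The function below is a naive "exposed cube face" mesher written for clarity, not the surface-extraction algorithm (e.g. marching cubes) that a production node like 'VoxelToMesh' would likely use; the function name and data layout are this sketch's own.

```python
# Minimal sketch of voxel-to-mesh conversion: emit two triangles for every
# voxel face that borders empty space, sharing corner vertices between faces.
# Illustrative only -- a real VoxelToMesh node would use a smoother algorithm.

def voxels_to_mesh(grid):
    """grid: iterable of (x, y, z) occupied voxel coordinates.
    Returns (vertices, faces): corner points and triangle index triples."""
    occupied = set(grid)
    # For each outward direction, the 4 corner offsets of that cube face,
    # listed in a consistent winding order.
    FACE_CORNERS = {
        (1, 0, 0):  [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)],
        (-1, 0, 0): [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)],
        (0, 1, 0):  [(0, 1, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0)],
        (0, -1, 0): [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)],
        (0, 0, 1):  [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
        (0, 0, -1): [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0)],
    }
    vertices, faces, index = [], [], {}

    def vid(point):
        # Deduplicate vertices shared by adjacent faces.
        if point not in index:
            index[point] = len(vertices)
            vertices.append(point)
        return index[point]

    for (x, y, z) in occupied:
        for (dx, dy, dz), corners in FACE_CORNERS.items():
            if (x + dx, y + dy, z + dz) in occupied:
                continue  # interior face, not part of the surface
            quad = [vid((x + cx, y + cy, z + cz)) for cx, cy, cz in corners]
            # Split the quad into two triangles.
            faces.append((quad[0], quad[1], quad[2]))
            faces.append((quad[0], quad[2], quad[3]))
    return vertices, faces

verts, faces = voxels_to_mesh({(0, 0, 0)})
print(len(verts), len(faces))  # a single cube: 8 vertices, 12 triangles
```

The resulting vertex and face lists are exactly the kind of data a GLB exporter such as 'SaveGLB' packages into a portable binary glTF file.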