Researchers have developed a new technique to create realistic 3D models with textures from small 2D images. This approach could change how 3D content is created.
A team from Simon Fraser University and the City University of Hong Kong introduced a method called "Object Images" (Omages) that can efficiently generate 3D models with UV textures from 2D images. The technique condenses the geometry, appearance, and surface structure of a 3D object into a 64x64 pixel image.
This compact representation allows complex 3D shapes to be converted into a more manageable 2D format. Complete 3D models can then be reconstructed from these images.
(Image: Object Images enable 3D models from tiny images)
Each Object Image contains 12 channels encoding information about the object, such as geometry, texture, and normal maps. Processing this single 2D image yields a full 3D model with PBR materials.
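Conceptually, such a representation can be thought of as a 64x64 image with 12 channels, where groups of channels store surface positions, normals, and material properties. The sketch below illustrates this idea; the specific channel layout and names are assumptions for illustration and may differ from the paper's actual format.

```python
import numpy as np

# Assumed channel layout (for illustration only; the paper's exact
# ordering may differ): 3 position + 3 normal + 3 albedo + 3 material.
H = W = 64
omage = np.random.rand(H, W, 12).astype(np.float32)

positions = omage[..., 0:3]   # XYZ surface points (geometry image)
normals   = omage[..., 3:6]   # surface normal map
albedo    = omage[..., 6:9]   # base color for PBR shading
material  = omage[..., 9:12]  # e.g. occlusion/roughness/metallic

# Reconstructing geometry: each texel maps to one 3D surface point,
# so the whole image unrolls into a point cloud of H*W points.
points = positions.reshape(-1, 3)
print(points.shape)  # (4096, 3)
```

In practice, an occupancy mask would mark which texels actually lie on the object's surface; here every texel is treated as occupied for simplicity.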
To test their method, the researchers trained an AI model on the Amazon Berkeley Objects dataset, which includes about 8,000 high-quality 3D models. The team reports that the quality of the generated 3D shapes and textures is comparable to current 3D generation models.
However, the approach has some limitations: the generated 3D models still contain gaps, training requires high-quality 3D data, and the 64x64 pixel resolution caps the achievable quality of the final results.
Despite these challenges, the researchers view their work as an important step toward efficient generation of high-quality 3D assets. They plan to address the existing limitations of the method in future work.
More information and the code will soon be available on the project's website.