Repository: panmari/stanford-shapenet-renderer
Branch: master
Commit: 5ddf7e9e13e1
Files: 3
Total size: 11.3 KB
Directory structure:
gitextract_22t_ndew/
├── LICENSE
├── README.md
└── render_blender.py
================================================
FILE CONTENTS
================================================
================================================
FILE: LICENSE
================================================
The MIT License (MIT)
Copyright (c) 2016 Panmari
Copyright (c) 2020 Markus Völk
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# Stanford Shapenet Renderer
A little helper script to render .obj files (such as those from the Stanford ShapeNet database) with Blender.
Tested on Linux, but should also work on other operating systems.
By default, this script generates 30 images by rotating the camera around the object.
Additionally, depth, albedo, normal and ID maps are dumped for every image.
Tested with Blender 2.9
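For a model at `some/chair/model.obj` rendered with the defaults, the files written for the first view look roughly like this (the script names outputs after the model's parent folder, here `chair`, and Blender's File Output nodes append the frame number `0001`):

```
/tmp/chair/chair_r_000.png
/tmp/chair/chair_r_000_depth0001.png
/tmp/chair/chair_r_000_normal0001.png
/tmp/chair/chair_r_000_albedo0001.png
/tmp/chair/chair_r_000_id0001.png
```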
## Example invocation
To render a single `.obj` file, run:

```
blender --background --python render_blender.py -- --output_folder /tmp path_to_model.obj
```
To get raw values that are easiest for further use, use `--format OPEN_EXR`. If the .obj file references any materials defined in a `.mtl` file, that file is assumed to be in the same folder under the same name.
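For example, combining the format flags defined in `render_blender.py`:

```
blender --background --python render_blender.py -- --format OPEN_EXR --color_depth 16 --output_folder /tmp path_to_model.obj
```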
## Batch rendering
To render a whole batch, you can use, e.g., the Unix tool `find`:

```
find . -name '*.obj' -exec blender --background --python render_blender.py -- --output_folder /tmp {} \;
```
To speed up the process, you can also use `xargs` with its `-P` argument to run multiple Blender instances in parallel:

```
find . -name '*.obj' -print0 | xargs -0 -n1 -P3 -I {} blender --background --python render_blender.py -- --output_folder /tmp {}
```
## Example images
Here is one chair model rendered from 30 different views:

Or a teapot with all available outputs:

================================================
FILE: render_blender.py
================================================
# A simple script that uses Blender to render views of a single object by rotating the camera around it.
# Also produces depth, normal, albedo and ID maps at the same time.
#
# Tested with Blender 2.9
#
# Example:
# blender --background --python render_blender.py -- --views 10 /path/to/my.obj
#
import argparse, sys, os, math
import bpy
parser = argparse.ArgumentParser(description='Renders given obj file by rotating a camera around it.')
parser.add_argument('--views', type=int, default=30,
                    help='number of views to be rendered')
parser.add_argument('obj', type=str,
                    help='Path to the obj file to be rendered.')
parser.add_argument('--output_folder', type=str, default='/tmp',
                    help='The path the output will be dumped to.')
parser.add_argument('--scale', type=float, default=1,
                    help='Scaling factor applied to model. Depends on size of mesh.')
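# Note: argparse treats type=bool as truthy for any non-empty string, so the two
# flags below are effectively always True unless their defaults are edited here.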
parser.add_argument('--remove_doubles', type=bool, default=True,
                    help='Remove double vertices to improve mesh quality.')
parser.add_argument('--edge_split', type=bool, default=True,
                    help='Adds edge split filter.')
parser.add_argument('--depth_scale', type=float, default=1.4,
                    help='Scaling that is applied to depth. Depends on size of mesh. Try out various values until you get a good result. Ignored if format is OPEN_EXR.')
parser.add_argument('--color_depth', type=str, default='8',
                    help='Number of bits per channel used for output. Either 8 or 16.')
parser.add_argument('--format', type=str, default='PNG',
                    help='Format of files generated. Either PNG or OPEN_EXR')
parser.add_argument('--resolution', type=int, default=600,
                    help='Resolution of the images.')
parser.add_argument('--engine', type=str, default='BLENDER_EEVEE',
                    help='Blender internal engine for rendering. E.g. CYCLES, BLENDER_EEVEE, ...')
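# Blender consumes everything before '--' on its command line; only the arguments
# after it are passed through to this script.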
argv = sys.argv[sys.argv.index("--") + 1:]
args = parser.parse_args(argv)
# Set up rendering
context = bpy.context
scene = bpy.context.scene
render = bpy.context.scene.render
render.engine = args.engine
render.image_settings.color_mode = 'RGBA' # ('RGB', 'RGBA', ...)
render.image_settings.color_depth = args.color_depth # ('8', '16')
render.image_settings.file_format = args.format # ('PNG', 'OPEN_EXR', 'JPEG', ...)
render.resolution_x = args.resolution
render.resolution_y = args.resolution
render.resolution_percentage = 100
render.film_transparent = True
scene.use_nodes = True
scene.view_layers["View Layer"].use_pass_normal = True
scene.view_layers["View Layer"].use_pass_diffuse_color = True
scene.view_layers["View Layer"].use_pass_object_index = True
nodes = bpy.context.scene.node_tree.nodes
links = bpy.context.scene.node_tree.links
# Clear default nodes
for n in nodes:
    nodes.remove(n)
# Create input render layer node
render_layers = nodes.new('CompositorNodeRLayers')
# Create depth output nodes
depth_file_output = nodes.new(type="CompositorNodeOutputFile")
depth_file_output.label = 'Depth Output'
depth_file_output.base_path = ''
depth_file_output.file_slots[0].use_node_format = True
depth_file_output.format.file_format = args.format
depth_file_output.format.color_depth = args.color_depth
if args.format == 'OPEN_EXR':
    links.new(render_layers.outputs['Depth'], depth_file_output.inputs[0])
else:
    depth_file_output.format.color_mode = "BW"
    # Remap as other types cannot represent the full range of depth.
    map_node = nodes.new(type="CompositorNodeMapValue")
    # Size is chosen kind of arbitrarily, try out until you're satisfied with resulting depth map.
    map_node.offset = [-0.7]
    map_node.size = [args.depth_scale]
    map_node.use_min = True
    map_node.min = [0]
    links.new(render_layers.outputs['Depth'], map_node.inputs[0])
    links.new(map_node.outputs[0], depth_file_output.inputs[0])
# Create normal output nodes
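# The Normal pass is in [-1, 1]; the two MixRGB nodes below remap it to [0, 1]
# for image output via normal * 0.5 + 0.5.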
scale_node = nodes.new(type="CompositorNodeMixRGB")
scale_node.blend_type = 'MULTIPLY'
# scale_node.use_alpha = True
scale_node.inputs[2].default_value = (0.5, 0.5, 0.5, 1)
links.new(render_layers.outputs['Normal'], scale_node.inputs[1])
bias_node = nodes.new(type="CompositorNodeMixRGB")
bias_node.blend_type = 'ADD'
# bias_node.use_alpha = True
bias_node.inputs[2].default_value = (0.5, 0.5, 0.5, 0)
links.new(scale_node.outputs[0], bias_node.inputs[1])
normal_file_output = nodes.new(type="CompositorNodeOutputFile")
normal_file_output.label = 'Normal Output'
normal_file_output.base_path = ''
normal_file_output.file_slots[0].use_node_format = True
normal_file_output.format.file_format = args.format
links.new(bias_node.outputs[0], normal_file_output.inputs[0])
# Create albedo output nodes
alpha_albedo = nodes.new(type="CompositorNodeSetAlpha")
links.new(render_layers.outputs['DiffCol'], alpha_albedo.inputs['Image'])
links.new(render_layers.outputs['Alpha'], alpha_albedo.inputs['Alpha'])
albedo_file_output = nodes.new(type="CompositorNodeOutputFile")
albedo_file_output.label = 'Albedo Output'
albedo_file_output.base_path = ''
albedo_file_output.file_slots[0].use_node_format = True
albedo_file_output.format.file_format = args.format
albedo_file_output.format.color_mode = 'RGBA'
albedo_file_output.format.color_depth = args.color_depth
links.new(alpha_albedo.outputs['Image'], albedo_file_output.inputs[0])
# Create id map output nodes
id_file_output = nodes.new(type="CompositorNodeOutputFile")
id_file_output.label = 'ID Output'
id_file_output.base_path = ''
id_file_output.file_slots[0].use_node_format = True
id_file_output.format.file_format = args.format
id_file_output.format.color_depth = args.color_depth
if args.format == 'OPEN_EXR':
    links.new(render_layers.outputs['IndexOB'], id_file_output.inputs[0])
else:
    id_file_output.format.color_mode = 'BW'
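    # Map the integer object index into [0, 1] by dividing by 2**color_depth,
    # so each index becomes a distinct gray level in the saved image.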
    divide_node = nodes.new(type='CompositorNodeMath')
    divide_node.operation = 'DIVIDE'
    divide_node.use_clamp = False
    divide_node.inputs[1].default_value = 2**int(args.color_depth)
    links.new(render_layers.outputs['IndexOB'], divide_node.inputs[0])
    links.new(divide_node.outputs[0], id_file_output.inputs[0])
# Delete default cube
context.active_object.select_set(True)
bpy.ops.object.delete()
# Import textured mesh
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.import_scene.obj(filepath=args.obj)
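# Note: bpy.ops.import_scene.obj matches Blender 2.9x; newer Blender releases
# move OBJ import to a different operator (bpy.ops.wm.obj_import).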
obj = bpy.context.selected_objects[0]
context.view_layer.objects.active = obj
# Tone down specular highlights on the imported materials
for slot in obj.material_slots:
    node = slot.material.node_tree.nodes['Principled BSDF']
    node.inputs['Specular'].default_value = 0.05
if args.scale != 1:
    bpy.ops.transform.resize(value=(args.scale, args.scale, args.scale))
    bpy.ops.object.transform_apply(scale=True)
if args.remove_doubles:
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.remove_doubles()
    bpy.ops.object.mode_set(mode='OBJECT')
if args.edge_split:
    bpy.ops.object.modifier_add(type='EDGE_SPLIT')
    context.object.modifiers["EdgeSplit"].split_angle = 1.32645
    bpy.ops.object.modifier_apply(modifier="EdgeSplit")
# Set object IDs
obj.pass_index = 1
# Make light just directional, disable shadows.
light = bpy.data.lights['Light']
light.type = 'SUN'
light.use_shadow = False
# Specular shading could be disabled by setting specular_factor to 0; it is kept at 1.0 here.
light.specular_factor = 1.0
light.energy = 10.0
# Add another light source so stuff facing away from light is not completely dark
bpy.ops.object.light_add(type='SUN')
light2 = bpy.data.lights['Sun']
light2.use_shadow = False
light2.specular_factor = 1.0
light2.energy = 0.015
bpy.data.objects['Sun'].rotation_euler = bpy.data.objects['Light'].rotation_euler
# rotation_euler is in radians, so rotate the fill light by pi to face the opposite direction.
bpy.data.objects['Sun'].rotation_euler[0] += math.radians(180)
# Place camera
cam = scene.objects['Camera']
cam.location = (0, 1, 0.6)
cam.data.lens = 35
cam.data.sensor_width = 32
cam_constraint = cam.constraints.new(type='TRACK_TO')
cam_constraint.track_axis = 'TRACK_NEGATIVE_Z'
cam_constraint.up_axis = 'UP_Y'
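# Parent the camera to an empty at the origin: rotating the empty orbits the camera,
# while the TRACK_TO constraint keeps it aimed at the object.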
cam_empty = bpy.data.objects.new("Empty", None)
cam_empty.location = (0, 0, 0)
cam.parent = cam_empty
scene.collection.objects.link(cam_empty)
context.view_layer.objects.active = cam_empty
cam_constraint.target = cam_empty
stepsize = 360.0 / args.views
rotation_mode = 'XYZ'
model_identifier = os.path.split(os.path.split(args.obj)[0])[1]
fp = os.path.join(os.path.abspath(args.output_folder), model_identifier, model_identifier)
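# Blender appends the file extension to the still render; the File Output nodes
# additionally append the current frame number (0001) to their paths.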
for i in range(0, args.views):
    print("Rotation {}, {}".format((stepsize * i), math.radians(stepsize * i)))
    render_file_path = fp + '_r_{0:03d}'.format(int(i * stepsize))
    scene.render.filepath = render_file_path
    depth_file_output.file_slots[0].path = render_file_path + "_depth"
    normal_file_output.file_slots[0].path = render_file_path + "_normal"
    albedo_file_output.file_slots[0].path = render_file_path + "_albedo"
    id_file_output.file_slots[0].path = render_file_path + "_id"
    bpy.ops.render.render(write_still=True)  # render still
    cam_empty.rotation_euler[2] += math.radians(stepsize)
# For debugging the workflow
#bpy.ops.wm.save_as_mainfile(filepath='debug.blend')