Documentation for Starfish

Starfish is a Python library for automatically creating synthetic, labeled image data using Blender.

This library extends Blender’s powerful Python scripting ability, providing utilities that make it easy to generate long sequences of synthetic images with intuitive parameters. It was designed for people who, like me, know Python much better than they know Blender.

These sequences can be smoothly interpolated from waypoint to waypoint, much like a traditional keyframe-based animation. They can also be exhaustive or random, containing images with various object poses, backgrounds, and lighting conditions. The intended use for these sequences is the generation of training and evaluation data for machine learning tasks where annotated real data may be difficult to obtain.


Installation

Though Starfish has only been tested with Blender 2.82, it should work with any version 2.8+. Please open an issue on GitHub if it doesn’t.


  1. Identify the location of your Blender scripts directory. This can be done by opening Blender, clicking on the ‘Scripting’ tab, and entering bpy.utils.script_path_user() in the Python console at the bottom. Generally, on Linux, the default location is ~/.config/blender/[VERSION]/scripts. From now on, this path will be referred to as [SCRIPTS_DIR].

  2. Create the addon modules directory, if it does not exist already: mkdir -p [SCRIPTS_DIR]/addons/modules

  3. Install the library to Blender: pip install git+ --no-deps --target [SCRIPTS_DIR]/addons/modules. Starfish does not require any additional packages besides what is already bundled with Blender, which is why --no-deps can be used.

Starfish can also be pip-installed normally without Blender for testing purposes or for independent usage of certain modules.


Running Scripts in Blender

The easiest way to experiment with the library is by opening Blender, navigating to the Scripting tab, and hitting the plus button to create a new script. You can then import starfish, write some code, and hit Alt+P to see what it does.

Once you’re ready to execute a longer-running script, you can write it outside Blender and then execute it using blender file.blend --background --python (or blender file.blend -b -P for short).

Generating Images


At the core of Starfish is the Frame class, which represents a single image of a single object. A frame is defined by 6 parameters:

frame = starfish.Frame(
    position=(0, 0, 0),
    distance=100,
    pose=mathutils.Euler([0, math.pi, 0]),
    lighting=mathutils.Euler([0, 0, 0]),
    offset=(0.3, 0.7),
    background=mathutils.Euler([math.pi / 2, 0, 0])
)
See the starfish.Frame documentation for more details about what each parameter means. Once you have a frame object, you use it to ‘set up’ your scene:
frame.setup(scene, obj, camera, sun)

This moves all the objects so that the image that the camera sees matches up with the parameters in the frame object. At this point, you can render the frame using bpy.ops.render.render. You can also dump metadata about a frame into JSON format using the Frame.dumps method.


Of course, Starfish wouldn’t be very useful without the ability to create multiple frames at once. The Sequence class is essentially just a list of frames, but with several classmethod constructors for generating these sequences of frames in different ways. For example, Sequence.interpolated generates ‘animated’ sequences that smoothly interpolate between keyframes, and Sequence.exhaustive generates long sequences that contain every possible combination of the parameters given.
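Conceptually, an exhaustive sequence is just the Cartesian product of the parameter lists it is given. The sketch below illustrates that idea in plain Python; it is not Starfish’s implementation, and exhaustive_combinations is a hypothetical helper name:

```python
# Conceptual sketch of what an 'exhaustive' sequence contains: one entry per
# combination of the given parameter values (not starfish's actual code).
import itertools

def exhaustive_combinations(**param_lists):
    """Yield one dict of parameters per combination, in product order."""
    names = list(param_lists)
    for values in itertools.product(*(param_lists[n] for n in names)):
        yield dict(zip(names, values))

combos = list(exhaustive_combinations(
    distance=[10, 20, 30],
    offset=[(0.25, 0.25), (0.75, 0.75)],
))
# 3 distances x 2 offsets -> 6 parameter combinations
```

This is why exhaustive sequences get long quickly: the frame count is the product of the lengths of all the parameter lists.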

Once you’ve created a sequence, you can iterate through its frames like so:

seq = starfish.Sequence...

for frame in seq:
    frame.setup(scene, obj, camera, sun)
    bpy.ops.render.render()
The Sequence.bake method also provides an easy way to ‘preview’ sequences that you’re working on in Blender. See Sequence for more detail.


The utils module provides a few more functions that may be useful for core image generation, such as random_rotations or uniform_sphere.
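To illustrate the idea behind a function like uniform_sphere, here is one standard way to sample points uniformly over the surface of a unit sphere: normalize i.i.d. Gaussian vectors. This is a sketch of the technique, not Starfish’s source, and sample_unit_sphere is a hypothetical name:

```python
# Sketch: uniform sampling on the unit sphere via normalized Gaussian vectors.
# Because the multivariate normal is rotationally symmetric, the normalized
# directions are uniformly distributed over the sphere's surface.
import numpy as np

def sample_unit_sphere(n, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    points = rng.normal(size=(n, 3))
    return points / np.linalg.norm(points, axis=1, keepdims=True)

points = sample_unit_sphere(1000)  # (1000, 3) array of unit vectors
```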


Starfish also contains an annotation module that provides utility functions related to annotating generated data.

  • One common type of annotation generated for computer vision tasks is some sort of segmentation mask (e.g. using the ID Mask Node) where having perfectly uniform colors is important. Unfortunately, I’ve often encountered an issue in Blender where the output colors differ slightly: for example, instead of the background being solid rgb(0, 0, 0) black, it will actually be a random mix of rgb(0, 0, 1), rgb(1, 1, 0), etc. The normalize_mask_colors function can be used to clean up such images.

  • Once a mask has been cleaned up, get_bounding_boxes_from_mask and get_centroids_from_mask can be used to get the bounding boxes and centroids of segmented areas, respectively.

  • Another common type of annotation is keypoints: e.g. where particular 3D points on the object appear in the 2D image. generate_keypoints can be used to automatically generate evenly distributed 3D keypoints from an object’s mesh; project_keypoints_onto_image can then take these keypoints and map them to 2D image locations after rendering a particular frame.
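The first two steps above can be sketched in plain numpy. This mirrors what normalize_mask_colors and the bounding-box/centroid helpers do conceptually, but it is not the library’s implementation; snap_to_nearest_color and bbox_and_centroid are hypothetical names:

```python
# Conceptual sketch: snap each pixel to its nearest allowed color, then
# extract the bounding box and centroid of one color's region.
import numpy as np

def snap_to_nearest_color(image, colors):
    """Replace each RGB pixel with the closest color from `colors`."""
    colors = np.asarray(colors, dtype=float)                       # (k, 3)
    dists = np.linalg.norm(image[..., None, :].astype(float) - colors, axis=-1)
    return colors[np.argmin(dists, axis=-1)].astype(np.uint8)

def bbox_and_centroid(mask, color):
    """Return ((ymin, xmin, ymax, xmax), (yc, xc)) for pixels matching color."""
    ys, xs = np.nonzero(np.all(mask == color, axis=-1))
    return (ys.min(), xs.min(), ys.max(), xs.max()), (ys.mean(), xs.mean())

# A noisy 4x4 mask: near-black background with a near-white 2x2 object
mask = np.zeros((4, 4, 3), dtype=np.uint8)
mask[0, 0] = (1, 1, 0)                # slightly-off background pixel
mask[1:3, 1:3] = (254, 255, 255)      # slightly-off object pixels
clean = snap_to_nearest_color(mask, [(0, 0, 0), (255, 255, 255)])
bbox, centroid = bbox_and_centroid(clean, (255, 255, 255))
# bbox -> (1, 1, 2, 2); centroid -> (1.5, 1.5)
```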

Example Script

All together, here is what an image generation script might look like:

IMPORTANT NOTE: This script is just for demonstrating the various capabilities of Starfish, and is not meant
to be run as-is. If you try to run this script without modifications, it will probably not work, unless you have
your Blend file set up with the exact same scenes, objects, and compositing nodes. Even then, it will immediately
start rendering several long sequences and writing files to disk, with the files from each sequence overwriting
the files from the previous one.
import time
import bpy
import numpy as np
from mathutils import Euler
from starfish import Sequence
from starfish.utils import random_rotations
from starfish.annotation import normalize_mask_colors, get_centroids_from_mask, get_bounding_boxes_from_mask

# create a standard sequence of random configurations...
seq1 = Sequence.standard(
    pose=random_rotations(100),
    distance=np.linspace(10, 50, num=100)
)
# ...or an exhaustive sequence of combinations...
seq2 = Sequence.exhaustive(
    distance=[10, 20, 30],
    offset=[(0.25, 0.25), (0.25, 0.75), (0.75, 0.25), (0.75, 0.75)],
    pose=[Euler((0, 0, 0)), Euler((np.pi, 0, 0))]
)
# ...or an interpolated sequence between keyframes...
seq3 = Sequence.interpolated(
    waypoints=Sequence.standard(distance=[10, 20], pose=[Euler((0, 0, 0)), Euler((0, np.pi, np.pi))]),
    counts=[100]
)

for seq in [seq1, seq2, seq3]:
    # render loop
    for i, frame in enumerate(seq):
        # non-starfish Blender stuff: e.g. setting file output paths['Real'].node_tree.nodes['File Output'].file_slots[0].path = f'real_{i}.png'['Mask'].node_tree.nodes['File Output'].file_slots[0].path = f'mask_{i}.png'

        # set up and render (object/camera/sun names depend on your .blend file)
        scene =['Real']
        frame.setup(scene,['Object'],['Camera'],['Sun'])
        bpy.ops.render.render(scene='Real')

        scene =['Mask']
        frame.setup(scene,['Object'],['Camera'],['Sun'])
        bpy.ops.render.render(scene='Mask')

        # postprocessing
        label_map = {'object': (255, 255, 255), 'background': (0, 0, 0)}
        clean_mask = normalize_mask_colors(f'mask_{i}.png', label_map.values())
        del label_map['background']
        bboxes = get_bounding_boxes_from_mask(clean_mask, label_map)
        centroids = get_centroids_from_mask(clean_mask, label_map)

        # add some extra metadata
        frame.timestamp = int(time.time() * 1000)
        frame.sequence_name = '1000 random poses'
        frame.tags = ['front_view', 'left_view', 'right_view']
        frame.bboxes = bboxes
        frame.centroids = centroids

        # save metadata to JSON
        with open(f'meta_{i}.json', 'w') as f:
            f.write(frame.dumps())