# Tracer Customization

> **For Advanced Users**
> This page is for advanced users. For Classification and Object Detection tasks, the standard ClassificationTracer / ObjectDetectionTracer / ObjectDetection3DTracer is usually sufficient as-is. If customization is needed, please contact support@adansons.ai first.

This page explains how to customize Tracers for special use cases.
## Object Detection Standard Support

Since v0.3.0, Object Detection 2D/3D is supported by the standard ObjectDetectionTracer and ObjectDetection3DTracer. See Tracing + Evaluation for details.
## When a Custom Tracer Is Needed

Defining a custom Tracer may be necessary in the following cases:

- Model architectures not supported by the standard Tracers
- Models with special input/output formats
- Tasks not yet covered by the standard Tracers, such as Semantic Segmentation
- Collecting additional metadata
## Basic Structure

Custom Tracers are defined by inheriting from task-specific base classes.

```python
from ml_debugger.tracer.object_detection.object_detection_torchtracer import (
    ObjectDetectionTorchTracer,
)


class CustomTracer(ObjectDetectionTorchTracer):
    def _parse_and_save_io_data(
        self,
        model_input,
        model_output,
        ground_truth,
        **kwargs,
    ):
        """Parse and save model input/output."""
        # Custom implementation
        pass
```
## The _parse_and_save_io_data Method

This method parses the model's inference results and saves them to the database.

### Arguments

| Argument | Description |
|---|---|
| `model_input` | Input tensor to the model |
| `model_output` | Model output |
| `ground_truth` | Ground truth labels (in Tracing mode) |
| `**kwargs` | Additional arguments (passed from `__call__`) |
## Getting Internal Features

You can retrieve the outputs of the target_layers specified during Tracer initialization.

```python
# Example target_layers specification
tracer = CustomTracer(
    model=model,
    model_name="custom_model",
    version_name="v1",
    target_layers={
        "cls_logits": "head.classification_head",
        "bbox_regression": "head.regression_head",
    },
)

# Retrieve inside the _parse_and_save_io_data method
cls_logits = self.get_hooked_features("cls_logits")
bbox_regression = self.get_hooked_features("bbox_regression")
```
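Layer hooking of this kind is typically built on forward hooks: a callback is registered on each target layer, and the latest output is stored under its alias. The torch-free sketch below illustrates that general pattern; `Layer` and `FeatureHookRegistry` are hypothetical stand-ins for illustration only, not part of ml_debugger.

```python
class Layer:
    """Minimal stand-in for a model layer with forward-hook support."""

    def __init__(self, name):
        self.name = name
        self._hooks = []

    def register_forward_hook(self, fn):
        self._hooks.append(fn)

    def forward(self, x):
        out = [v * 2 for v in x]  # dummy computation
        for fn in self._hooks:
            fn(self, x, out)  # hooks see (module, input, output)
        return out


class FeatureHookRegistry:
    """Stores the latest output of each hooked layer under an alias."""

    def __init__(self):
        self._features = {}

    def attach(self, alias, layer):
        def hook(module, inp, out):
            self._features[alias] = out

        layer.register_forward_hook(hook)

    def get_hooked_features(self, alias):
        return self._features[alias]


head = Layer("head.classification_head")
registry = FeatureHookRegistry()
registry.attach("cls_logits", head)

head.forward([1, 2, 3])
print(registry.get_hooked_features("cls_logits"))  # -> [2, 4, 6]
```

In PyTorch, the same role is played by `torch.nn.Module.register_forward_hook`, which the real Tracer presumably uses internally.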
## Custom Tracer Example for Object Detection

Below is a complete example of a custom Tracer for an SSD model.

```python
from __future__ import annotations

from typing import Any

import torch

from ml_debugger.tracer.object_detection.object_detection_torchtracer import (
    ObjectDetectionTorchTracer,
)


class SSDTracer(ObjectDetectionTorchTracer):
    def _parse_and_save_io_data(
        self,
        model_input: torch.Tensor,
        model_output: list[dict[str, torch.Tensor]],
        ground_truth: list[list[dict[str, torch.Tensor]]],
        filenames: list[str],
        dataset_type: str,
    ) -> None:
        """Parse and save SSD model input/output."""
        batch_size = model_input.size(0)

        # Get internal features
        batch_bbox_regression = self.get_hooked_features("bbox_regression")
        batch_cls_logits = self.get_hooked_features("cls_logits")

        for i in range(batch_size):
            # Get post-NMS indices
            keep_idxs = model_output[i]["keep_index"].numpy()
            img_bbox_regression = batch_bbox_regression[i].numpy()
            img_cls_logits = batch_cls_logits[i].numpy()

            # Save each predicted BBox
            for j, box in enumerate(model_output[i]["boxes"]):
                keep_idx = keep_idxs[j]

                # Flatten internal features
                bbox_regression = img_bbox_regression[keep_idx].flatten().tolist()
                bbox_regression_shape = list(img_bbox_regression[keep_idx].shape)
                cls_logits = img_cls_logits[keep_idx].flatten().tolist()
                cls_logits_shape = list(img_cls_logits[keep_idx].shape)

                # Save the predicted BBox
                self._save_extracted_feature(
                    input_id=filenames[i],
                    input_tensor=model_input[i],
                    pred_bbox_id=j,
                    pred_top_left_x=box[0],
                    pred_top_left_y=box[1],
                    pred_bottom_right_x=box[2],
                    pred_bottom_right_y=box[3],
                    pred_class_id=model_output[i]["labels"][j],
                    pred_score=model_output[i]["scores"][j],
                    dataset_type=dataset_type,
                    bbox_regression=bbox_regression,
                    bbox_regression_shape=bbox_regression_shape,
                    cls_logits=cls_logits,
                    cls_logits_shape=cls_logits_shape,
                )

            # Save ground truth BBoxes (bbox is [x, y, w, h], converted to corners)
            for k, gt_info in enumerate(ground_truth[i]):
                self._save_ground_truth(
                    input_id=filenames[i],
                    input_tensor=model_input[i],
                    gt_bbox_id=k,
                    gt_top_left_x=gt_info["bbox"][0],
                    gt_top_left_y=gt_info["bbox"][1],
                    gt_bottom_right_x=gt_info["bbox"][0] + gt_info["bbox"][2],
                    gt_bottom_right_y=gt_info["bbox"][1] + gt_info["bbox"][3],
                    gt_class_id=gt_info["category_id"],
                    dataset_type=dataset_type,
                    src_img_width=gt_info["src_img_width"],
                    src_img_height=gt_info["src_img_height"],
                )

    def __call__(
        self,
        model_input: Any,
        ground_truth: Any,
        filenames: list[str],
        dataset_type: str,
    ) -> Any:
        """__call__ with custom arguments."""
        return super().__call__(
            model_input,
            ground_truth,
            filenames,
            dataset_type,
        )
```
## Save Methods

### _save_extracted_feature

Saves prediction data and internal features.

```python
self._save_extracted_feature(
    input_id="image_001",         # Data identifier
    input_tensor=model_input[i],  # Input tensor (saved as a hash)
    # Task-specific columns
    pred_class_id=predicted_class,
    pred_score=confidence_score,
    # Custom columns (internal features)
    custom_feature=feature_vector.tolist(),
    custom_feature_shape=list(feature_vector.shape),
)
```
### _save_ground_truth

Saves ground truth data.

```python
self._save_ground_truth(
    input_id="image_001",
    input_tensor=model_input[i],
    gt_class_id=true_class,
    dataset_type="train",
)
```
## Specifying target_layers

Specify the layers from which to extract internal features.

```python
tracer = CustomTracer(
    model=model,
    model_name="model",
    version_name="v1",
    target_layers={
        # key: alias (used as the column name)
        # value: path to the layer (accessible as model.xxx)
        "fc_output": "fc",
        "conv_features": "backbone.layer4",
    },
)
```
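The layer paths are dotted attribute paths resolved against the model (accessible as `model.xxx`). How such a path can be resolved is sketched below with a `getattr` chain; `resolve_layer` and the stand-in classes are illustrative only, not part of ml_debugger.

```python
from functools import reduce


def resolve_layer(model, path):
    """Resolve a dotted attribute path like 'backbone.layer4' on a model."""
    return reduce(getattr, path.split("."), model)


# Minimal stand-in objects for demonstration
class Layer4:
    pass


class Backbone:
    def __init__(self):
        self.layer4 = Layer4()


class Model:
    def __init__(self):
        self.backbone = Backbone()


model = Model()
layer = resolve_layer(model, "backbone.layer4")
print(type(layer).__name__)  # Layer4
```

PyTorch modules offer the same lookup natively via `torch.nn.Module.get_submodule("backbone.layer4")`.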
## additional_fields / additional_label_fields

Options for recording arbitrary additional information for predictions and annotations.

| Parameter | Target | Description |
|---|---|---|
| `additional_fields` | Predictions | Records additional metadata for each prediction |
| `additional_label_fields` | Annotations | Records additional metadata for each ground truth label |
### Specification at Initialization

Specify additional fields in List[dict] format when initializing the Tracer. Each dict can have the following keys.

| Key | Required | Description |
|---|---|---|
| `name` | Yes | Field name |
| `type` | No | Python type (str, int, float, etc.) |
| `nullable` | No | Allow null values (default: True) |

```python
from ml_debugger.tracer.classification import ClassificationTorchTracer

tracer = ClassificationTorchTracer(
    model=model,
    model_name="my_model",
    version_name="v1",
    # Fields to add to predictions
    additional_fields=[
        {"name": "camera_id", "type": str},
        {"name": "lighting_condition", "type": str},
        {"name": "temperature", "type": float, "nullable": True},
    ],
    # Fields to add to annotations
    additional_label_fields=[
        {"name": "annotator_id", "type": str},
        {"name": "confidence_level", "type": int},
    ],
)
```
### Specifying Values During Tracing

Pass additional field values as lists in **kwargs to the __call__ method. Each list's length must match the batch size.

```python
# Batch processing (example with batch_size=4)
output = tracer(
    model_input=images,   # shape: (4, C, H, W)
    ground_truth=labels,  # shape: (4,)
    input_ids=input_ids,  # ["img_001", "img_002", "img_003", "img_004"]
    dataset_type="train",
    # Pass additional_fields / additional_label_fields values as lists
    camera_id=["CAM_001", "CAM_002", "CAM_001", "CAM_003"],
    lighting_condition=["daylight", "night", "daylight", "indoor"],
    temperature=[25.5, 18.0, 26.0, 22.5],
    annotator_id=["annotator_A", "annotator_A", "annotator_B", "annotator_A"],
    confidence_level=[3, 2, 3, 1],
)
```
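Since every additional-field list must match the batch size, a mismatched length is an easy mistake in batch pipelines. The small helper below, which could run before invoking the tracer, is a hypothetical sketch and not part of ml_debugger:

```python
def check_additional_fields(batch_size, **fields):
    """Raise if any additional-field list length differs from the batch size."""
    for name, values in fields.items():
        if len(values) != batch_size:
            raise ValueError(
                f"additional field '{name}' has {len(values)} values, "
                f"expected {batch_size}"
            )


# Matching lengths pass silently
check_additional_fields(
    4,
    camera_id=["CAM_001", "CAM_002", "CAM_001", "CAM_003"],
    temperature=[25.5, 18.0, 26.0, 22.5],
)

# A mismatched length raises ValueError
try:
    check_additional_fields(4, camera_id=["CAM_001", "CAM_002"])
except ValueError as e:
    print(e)  # additional field 'camera_id' has 2 values, expected 4
```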
### Dynamic Field Addition

If you pass fields that were not defined at initialization, the schema is updated automatically. However, it is recommended to declare types in advance via additional_fields.
### Usage Example

Additional fields let you break down evaluation results by condition.

```python
# Get the evaluation result
result = evaluator.get_result(result_name="my_result")

# Conditional analysis with custom views

# Error distribution by lighting condition
lighting_view = result.get_view(
    groupby=["lighting_condition", "category"],
    adjustby="lighting_condition",
)

# Error distribution by camera
camera_view = result.get_view(
    groupby=["camera_id", "category"],
    adjustby="camera_id",
)
```
### Use Cases

- Shooting conditions: camera ID, lighting, weather, time of day
- Data source: collection location, device type
- Annotation quality: annotator, confidence level, review status
- Object Detection: object size, presence of occlusion
## Notes

- **Internal Feature Format**
  - Features must be flattened to a 1D list before saving
  - Saving the shape information as well is recommended so the feature can be restored later
- **input_tensor**
  - Hashed and saved to verify data uniqueness
  - A warning is raised if a different input_tensor exists for the same input_id
- **Performance**
  - Use batch processing for efficient saving
  - A large number of custom columns affects processing speed
- **Standard Evaluation Methods May Not Be Supported**
  - When using a custom Tracer, standard evaluation methods such as `Evaluator.request_evaluation()` may not be supported
  - In this case, export data using `tracer.export()` and send it to Adansons for custom evaluation
  - Please contact support@adansons.ai for details
## Support

If you encounter issues with your custom Tracer implementation, please contact support@adansons.ai.