Overview
The cml:lidar_box tag allows users to create a LiDAR annotation job for bounding boxes.
The cml:lidar_segmentation tag allows users to create a LiDAR annotation job for point cloud semantic segmentation.
Building a LiDAR Job
The following CML contains the possible parameters for a LiDAR annotation job:
<cml:lidar_box ontology="true" name="annotation" base-url="{{base_url}}"
validates="required" rotate-mode="yaw" range-indicators="20:0xFFFF00,30:0xFFA500,40:0x00FF00"
color-mode="elevation" project-async="false" project-rect="false" show-grid="false"
default-add-mode="DRAG"/>
<cml:lidar_segmentation ontology="true" name="annotation" base-url="{{base_url}}"
validates="required" range-indicators="20:0xFFFF00,30:0xFFA500,40:0x00FF00"
color-mode="elevation" />
Parameters
Below are the parameters available for the cml:lidar_box and cml:lidar_segmentation tags. Some are required in the element; others can be left out.
- name (Required) - The results header where the results links will be stored
- base-url (Required) - URL pointing to the base folder containing the point cloud data
- color-mode (Optional) - Defines the point cloud color mode
  - Options: 'speed', 'elevation', 'reflection', 'elevation:[x, y][z, w]', 'reflection:[x, y][z, w]', where x and y define the elevation/reflection range and z and w define the color ramp proportion range; [z, w] is optional. For example, 'elevation:[0,5]', 'elevation:[0,2][0.25,1]', 'reflection:[0.25,1]', and 'reflection:[0.25,1][0.25,1]' are all acceptable
- color-mode (new) (Optional) - Defines a preset color mode; if color_config is provided, it will replace color_mode
  - Options: 'Intensity:0:#0000ff,1:#00ffff', 'Elevation:0:#0000ff,1:#00ffff'. The options string can be obtained by double-clicking the color mode custom label while holding the ALT key
- rotate-mode (Optional) - Defines the tool rotation mode
  - Options: 'yaw' - only allows rotation of the bounding box in the plane parallel to the ground
- range-indicators (Optional) - Defines circular ranges around the point cloud sensor center
  - Options: 'x, y, z, ...' or 'x:colorInHex, y:colorInHex, ...', where x, y, z define the distances and colorInHex defines the circle color. If colorInHex is not provided, the default color red is used
- project-async (Optional) - United Annotation - Allows 2D annotations to be updated independently
  - Options: true or false
- project-rect (Optional) - United Annotation - Projects 3D annotations into 2D annotations
  - Options: true or false
- auto_save (Optional) - Flag to enable or disable autosave (frame switch, interpolate, delete from all frames)
  - Options: true or false
- tracking_mode (Optional) - Flag to enable or disable tool-tracked events
  - Options: true or false
- validate_from (Optional) - URL pointing to an annotation to be used as ground truth for in-tool validation
  - The format must match the output of the LiDAR annotation tool (JSON in a hosted URL)
- review-data (Currently not supported)
- show-grid (Optional) - Shows a grid in 3D space
- default-add-mode (Optional) - Defines the behavior when adding a cuboid. With 'DRAG', users can change the size of the cuboid as they add it; with 'CLICK', the size is fixed
- label_config (Optional) - Can be used to define additional 3D cuboid attributes
- label_config_2d (Optional) - Can be used to define additional 2D shape attributes
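To illustrate the range-indicators format, the following Python sketch parses a value like the one in the CML sample above into (distance, color) pairs. parse_range_indicators is a hypothetical helper written for this document, not part of the platform; the default-red behavior follows the documented rule.

```python
# Hypothetical helper illustrating the range-indicators format
# ('x, y, z, ...' or 'x:colorInHex, y:colorInHex, ...').
def parse_range_indicators(value):
    """Parse a range-indicators string into (distance, color) pairs.

    Per the documented behavior, the color defaults to red (0xFF0000)
    when colorInHex is omitted for a distance.
    """
    indicators = []
    for part in value.split(","):
        part = part.strip()
        if ":" in part:
            dist, color = part.split(":", 1)
            indicators.append((float(dist), int(color, 16)))
        else:
            indicators.append((float(part), 0xFF0000))
    return indicators

print(parse_range_indicators("20:0xFFFF00,30:0xFFA500,40"))
# [(20.0, 16776960), (30.0, 16753920), (40.0, 16711680)]
```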
Ontology
LiDAR annotation for bounding boxes and point cloud semantic segmentation share the same ontology structure. The Ontology Manager allows job owners to create and edit the ontology within a LiDAR annotation job. LiDAR annotation jobs require an ontology to launch. When the CML for a LiDAR annotation job is saved, the Ontology Manager link will appear at the top of the Design page.
Ontology Manager Best Practices
- The ontology is limited to 1,000 classes; however, as a best practice, we recommend not exceeding 16 classes in a job so that contributors can understand and distinguish the different classes.
- Choose from 16 pre-selected colors or upload custom colors as hex codes via the CSV ontology upload.
- If you uploaded model predictions as JSON, the predicted classes should also be added to the ontology.
Upload Data
We first need to convert client data to the schema supported by our platform. Since there is no standard format in the industry, we work with each client to understand their format and provide conversion scripts for each request.
For more information on secure hosting, check out this article. Below are example files on how to structure source data.
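Conversion details depend on the client's source format, but a common case is KITTI-style binary frames (N x 4 float32 rows of x, y, z, intensity). The sketch below assumes that layout; load_bin_frame is a hypothetical helper written for this document, not one of the platform's conversion scripts.

```python
import numpy as np

def load_bin_frame(path):
    """Read a point cloud frame from a .bin file.

    Assumes KITTI-style layout: flat float32 values forming
    N rows of (x, y, z, intensity). Other client formats need
    their own conversion logic.
    """
    points = np.fromfile(path, dtype=np.float32)
    return points.reshape(-1, 4)
```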
Results
LiDAR bounding box annotation:
{
"baseUrl": "https://cf-83774kd99dl.s3.amazonaws.com/Q12944/", // base_url for the scene
"frames": [
{
"frameId": 0, // frame number, starts from 0
"frameUrl": "/points/pc_001950.bin", // file path for the frame
"items" : [ // objects in the frame
{
"id": "13f222fd-065c-4745-b441-44dd25566cbb", // object uuid
"category": "Car",
"number": 8, // object number shown in the tool. If the template is set to per-category numbering, it starts from 1 for each category; if set to global, it starts from 1 for all objects in a frame
"position": { // x,y,z position of center of cuboid, note this is in the coordinate system provided by the customer
"x": 66.49787120373375,
"y": -37.28758690422451,
"z": -4.426572264322248
},
"rotation": { // rotation of the object around the cuboid center, in radians; rotates the lidar coordinate +X to the annotation object's +X. Clockwise rotation is negative, counter-clockwise positive
"x": 0,
"y": 0,
"z": -1.5804235113355598
},
"dimension": { // full dimensions of the cuboid, centered on the position above
"x": 1.86, // the meaning of X, Y, Z depends on the definition of the source data and the point format xyz setup mentioned above. By default, X is length, Y is width, Z is height. The unit also depends on the source data; for autonomous driving it is normally meters
"y": 4.43,
"z": 1.86
},
"locked": null, // not used, please ignore
"interpolated": true, // true if the cuboid was interpolated and never manually adjusted; false otherwise. If the value is false, the frame is a key frame for the object
"labels": null, // stores the attribute form, usually a JSON string of key-value pairs, e.g. '{ "attribute1": "value1", "attribute2": "value2" }'
"isEmpty": false, // for specific client to indicate if the cuboid is an empty cuboid, ignore if not needed
"pointCount": 120 // count of points inside the cuboid
},
...
],
"isValid" : true, // marks whether the frame is valid
"images" : [
{
"image" : "/image_00/image_001950.png", //file path to image
"items" : [ //array of 2D annotations
{
"id": "13f222fd-065c-4745-b441-44dd25566cbb", //UUID of object, matches UUID of cuboid annotation
"number": 1, //object instance
"category": "Car", //object class
"type": "RECT", // format if proj_rect is set to rectangle (or enabled if in GAP Stage)
"position": { // top left corner of box
"x": 7.678934984761854,
"y": 151.025760731091
},
"dimension": { //full width and height of the rectangle in relation to the position
"x": 311.2317472514253,
"y": 184.7346955567628
},
"labels": "{\"testing\":\"Yes\"}", // stores the attribute form, usually a JSON string of key-value pairs, e.g. '{ "attribute1": "value1", "attribute2": "value2" }'
"isManual": true // true means a labeler adjusted the 2D annotation (analogous to interpolated == false for a 3D cuboid)
},
...
],
"relations" : [ //list of linked objects
{
"id": "6711aebb-e9db-4917-8f6d-5ca2a01861c9", //relationship uuid
"relation": "stopping", //relationship category
"type": "cube", //type of annotations being related
"from": "13f222fd-065c-4745-b441-44dd25566cbb",//uuid of object beginning the relation
"to": "72162541-bcac-4fd4-ae43-e0460b6d4c16" //uuid of object ending the relation
}
]
}
]
},
...
]
}
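To illustrate how the bounding-box report above can be consumed, here is a minimal Python sketch that collects, for each object, its key frames (cuboids with interpolated == false, i.e. manually adjusted) and its yaw in degrees. summarize_cuboids is a hypothetical helper; the field names follow the sample output.

```python
import math

def summarize_cuboids(report):
    """Group cuboid annotations by object id across frames."""
    summary = {}
    for frame in report["frames"]:
        for item in frame["items"]:
            entry = summary.setdefault(
                item["id"], {"category": item["category"], "key_frames": []}
            )
            # interpolated == false marks a key frame (manually adjusted cuboid)
            if not item.get("interpolated", False):
                entry["key_frames"].append(frame["frameId"])
            # yaw around +Z, converted from radians to degrees
            entry["yaw_deg"] = math.degrees(item["rotation"]["z"])
    return summary
```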
LiDAR point cloud semantics segmentation:
{
"auditId": "c4b332f4-bdda-48e0-a395-a8a814f87fa2.157.audit", // for QA
"results": [
{
"frameId": 0, // frame number, starts from 0
"frameUrl": "/haomo_3d/segmentation/CDXYC20210930/61506756316f405f23528861/point_cloud/bin_v1/1626944760095249.bin", // file path for data
"totalPointCount": 183031, // number of points in the frame
"items": [ // objects in the frame
{
"id": "128a61d8-87a9-4db2-a174-629c3ce9db92", // object uuid
"category": "lane marking",
"number": 1, // object number
"points": [ // indices of the object's points in the point cloud, starting from 0
50657,
56142,
88959,
106743,
123995,
131539,
130796,
134074
],
"labels": "{\"ef-ontology\":\"\u8f66\u9053\u7ebf\",\"vecline_type\":\"Road_Edge\",\"occlusion_edge\":0,\"edge_type\":\"Physical\",\"current\":true,\"edge_index\":-1}", // object attributes; a JSON string, or null if the attribute form is not configured
"type": "polyline", // object type is "points" or "polyline"; if empty, it is "points"
"pointCount": 8 // number of points in the object
},
...
]
},
...
]
}
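As a sketch of post-processing the segmentation report above, the following Python converts one frame into a dense per-point label array, with unlabeled points set to -1. frame_to_labels and the category list are hypothetical; the field names follow the sample output.

```python
import numpy as np

def frame_to_labels(frame, categories):
    """Build a dense per-point label array for one segmentation frame.

    Each item's "points" list holds zero-based indices into the frame's
    point cloud; points not covered by any item are labeled -1.
    """
    labels = np.full(frame["totalPointCount"], -1, dtype=np.int32)
    for item in frame["items"]:
        idx = np.asarray(item["points"], dtype=np.int64)
        labels[idx] = categories.index(item["category"])
    return labels
```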
Note: This report may take a while to generate and download due to the large size of its data files. However, the download will still be much faster than running scripts to scrape the results.
Additional Reference
Training Guide for LiDAR annotators: https://paper.dropbox.com/doc/LiDAR-Training-Guide--BisDqY2Udj8s4krcDy34907DAg-70CDkK5Ar77QUyR9C8ucO
Guide to Workflows for project managers: https://success.appen.com/hc/en-us/articles/360029503852-Guide-to-Workflows