Tensor
Classes for converting point cloud data (NumPy) to PyTorch tensors.
ToTensor
Recursively convert NumPy arrays, scalars, and container structures to PyTorch tensors.
This transform is designed to work on:
- Individual values:
    - `torch.Tensor`: returned as-is.
    - `int` → `LongTensor([value])`.
    - `float` → `FloatTensor([value])`.
    - `str`: returned as-is (strings are not converted).
    - `np.ndarray` with:
        - boolean dtype → `torch.from_numpy(arr)` (bool tensor),
        - integer dtype → `torch.from_numpy(arr).long()`,
        - float dtype → `torch.from_numpy(arr).float()`.
- Containers:
    - `Mapping` (e.g. `dict`): converts each value recursively, preserving keys.
    - `Sequence` (e.g. `list`, `tuple`): converts each element recursively and returns a Python `list` of tensors/converted items.
Any unsupported type will raise a `TypeError`.
Typical usage is at the end of a preprocessing pipeline, to convert a nested sample dictionary (coords, features, labels) from NumPy to tensors.
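The dispatch rules above can be sketched as follows. This is a minimal illustration of the documented behavior, not the library's actual implementation; internal details are assumed:

```python
from collections.abc import Mapping, Sequence

import numpy as np
import torch


class ToTensor:
    """Sketch of the documented NumPy -> PyTorch conversion rules."""

    def __call__(self, data):
        if isinstance(data, torch.Tensor):
            return data                          # tensors pass through as-is
        if isinstance(data, str):
            return data                          # strings are not converted
        if isinstance(data, int):
            return torch.LongTensor([data])      # int -> LongTensor([value])
        if isinstance(data, float):
            return torch.FloatTensor([data])     # float -> FloatTensor([value])
        if isinstance(data, np.ndarray):
            if data.dtype == bool:
                return torch.from_numpy(data)          # bool tensor
            if np.issubdtype(data.dtype, np.integer):
                return torch.from_numpy(data).long()   # integer -> long
            if np.issubdtype(data.dtype, np.floating):
                return torch.from_numpy(data).float()  # float -> float32
        if isinstance(data, Mapping):
            return {k: self(v) for k, v in data.items()}  # keys preserved
        if isinstance(data, Sequence):
            return [self(v) for v in data]       # any sequence -> Python list
        raise TypeError(f"ToTensor does not support type {type(data)}")
```

Applied to a nested sample dictionary, the container structure survives while the leaves become tensors.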
__call__(data)
Convert input data (and nested contents) to PyTorch tensors.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data` | `Any` | Arbitrary input to convert. Can be a scalar, NumPy array, tensor, mapping (e.g. `dict`), or sequence (e.g. `list`/`tuple`). | required |
Returns:
| Type | Description |
|---|---|
| `Any` | Converted object where all supported leaves are PyTorch tensors, and the original container structure (dict/list) is preserved. |
Raises:
| Type | Description |
|---|---|
| `TypeError` | If an unsupported type is encountered. |
FinalFeatures
Assemble a final feature tensor and manage bookkeeping fields in a sample dict.
This transform is typically used at the end of a preprocessing pipeline to:
- Build a unified feature array under the key `"feat"` by selecting and concatenating one or more existing fields from `data_dict`.
- Optionally remove some intermediate fields (e.g., `"norm"`, auxiliary features) to save memory.
- Optionally add offset fields (`"offset"`, `"fps_offset"`) that are useful when batching variable-length point clouds.
Behavior:
- Feature construction:
    - If `feat` is a string, `data_dict["feat"]` is set to `data_dict[feat]`.
    - If `feat` is a list/tuple of strings, the corresponding arrays are concatenated along the last dimension: `feat = np.concatenate([data_dict[name] for name in feat], axis=-1)`.
    - All specified feature names must exist in `data_dict`.
- Field removal:
    - If `remove` is a string, that key is deleted from `data_dict`.
    - If `remove` is a list/tuple of strings, each corresponding key is deleted.
    - All specified keys must exist in `data_dict`.
- Offsets:
    - If `add_offset` is True and `"offset"` is not already present, then `data_dict["offset"] = len(data_dict["coord"])`. This is often interpreted as the number of points in this sample.
    - If `add_fps_offset` is True and `"fps_offset"` is not present but `"fps_index"` exists, then `data_dict["fps_offset"] = len(data_dict["fps_index"])`. This is typically the number of FPS (farthest-point sampling) indices for this sample.
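The behavior above can be sketched as follows. This is an illustrative reimplementation of the documented rules, not the library's source; constructor signature and internals are assumed:

```python
import numpy as np


class FinalFeatures:
    """Sketch of the documented feature-assembly and bookkeeping rules."""

    def __init__(self, feat="coord", remove="norm",
                 add_offset=True, add_fps_offset=True):
        # Normalize str -> single-element list for uniform handling.
        self.feat = [feat] if isinstance(feat, str) else list(feat)
        self.remove = [remove] if isinstance(remove, str) else list(remove)
        self.add_offset = add_offset
        self.add_fps_offset = add_fps_offset

    def __call__(self, data_dict):
        # Feature construction: single field, or concatenation along the
        # last dimension when several field names are given.
        if len(self.feat) == 1:
            data_dict["feat"] = data_dict[self.feat[0]]
        else:
            data_dict["feat"] = np.concatenate(
                [data_dict[name] for name in self.feat], axis=-1
            )
        # Field removal: every listed key must exist in data_dict.
        for key in self.remove:
            del data_dict[key]
        # Offsets: number of points, and number of FPS indices if present.
        if self.add_offset and "offset" not in data_dict:
            data_dict["offset"] = len(data_dict["coord"])
        if (self.add_fps_offset and "fps_offset" not in data_dict
                and "fps_index" in data_dict):
            data_dict["fps_offset"] = len(data_dict["fps_index"])
        return data_dict
```

For example, `FinalFeatures(feat=["coord", "norm"], remove="norm")` builds a 6-dimensional feature from 3D coordinates plus normals, then drops the now-redundant `"norm"` field.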
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `feat` | `str` or sequence of `str` | Name(s) of fields in `data_dict` to select and concatenate into the final `"feat"` array. | `'coord'` |
| `remove` | `str` or sequence of `str` | Name(s) of fields to delete from `data_dict` after feature construction. | `'norm'` |
| `add_offset` | `bool` | If True and `"offset"` is not already present, set `data_dict["offset"] = len(data_dict["coord"])`. | `True` |
| `add_fps_offset` | `bool` | If True, `"fps_offset"` is set to `len(data_dict["fps_index"])` when `"fps_index"` exists and `"fps_offset"` is absent. | `True` |
__call__(data_dict)
Construct the final feature array and update bookkeeping fields.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `data_dict` | `dict` | Sample dictionary containing at least the keys referenced by `feat` and `remove`. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| dict | `dict` | The same dictionary, with a new `"feat"` entry, the requested fields removed, and `"offset"`/`"fps_offset"` added as configured. |