easy_vision.python.core
- easy_vision.python.core.anchor_generators
- easy_vision.python.core.anchor_generators.anchor_generator
- easy_vision.python.core.anchor_generators.grid_anchor_generator
- easy_vision.python.core.anchor_generators.multiple_grid_anchor_generator
- easy_vision.python.core.anchor_generators.multiscale_grid_anchor_generator
- easy_vision.python.core.anchor_generators.temporal_grid_anchor_generator
- easy_vision.python.core.anchor_generators.yolo_anchor_generator
- easy_vision.python.core.backbones
- easy_vision.python.core.backbones.alexnet_backbone
- easy_vision.python.core.backbones.backbone
- easy_vision.python.core.backbones.c3d_backbone
- easy_vision.python.core.backbones.cifar_backbone
- easy_vision.python.core.backbones.custom_backbone
- easy_vision.python.core.backbones.custom_layers
- easy_vision.python.core.backbones.darknet_backbone
- easy_vision.python.core.backbones.efficientnet_backbone
- easy_vision.python.core.backbones.i3d
- easy_vision.python.core.backbones.inception_backbone
- easy_vision.python.core.backbones.mobilenet_backbone
- easy_vision.python.core.backbones.net_utils
- easy_vision.python.core.backbones.resnet_3d_backbone
- easy_vision.python.core.backbones.resnet_backbone
- easy_vision.python.core.backbones.resnext_3d_backbone
- easy_vision.python.core.backbones.text_resnet15_backbone
- easy_vision.python.core.backbones.vgg_backbone
- easy_vision.python.core.backbones.vgg_bai_backbone
- easy_vision.python.core.backbones.vgg_reduce_fc
- easy_vision.python.core.backbones.xception_backbone
- easy_vision.python.core.box_coders
- easy_vision.python.core.box_coders.faster_rcnn_box_coder
- easy_vision.python.core.box_coders.keypoint_box_coder
- easy_vision.python.core.box_coders.mean_stddev_box_coder
- easy_vision.python.core.box_coders.rc3d_box_coder
- easy_vision.python.core.box_coders.square_box_coder
- easy_vision.python.core.box_coders.yolo_box_coder
- easy_vision.python.core.decoders
- easy_vision.python.core.detection_predictors
- easy_vision.python.core.detection_predictors.heads
- easy_vision.python.core.detection_predictors.heads.box_head
- easy_vision.python.core.detection_predictors.heads.class_head
- easy_vision.python.core.detection_predictors.heads.head
- easy_vision.python.core.detection_predictors.heads.keypoint_head
- easy_vision.python.core.detection_predictors.heads.mask_head
- easy_vision.python.core.detection_predictors.convolutional_box_predictor
- easy_vision.python.core.detection_predictors.mask_rcnn_box_predictor
- easy_vision.python.core.detection_predictors.mask_rcnn_mask_predictor
- easy_vision.python.core.detection_predictors.predictor
- easy_vision.python.core.detection_predictors.rfcn_box_predictor
- easy_vision.python.core.detection_predictors.text_resnet_keypoint_predictor
- easy_vision.python.core.detection_predictors.yolo_box_predictor
- easy_vision.python.core.encoders
- easy_vision.python.core.matchers
- easy_vision.python.core.ops
- easy_vision.python.core.ops.attention_ops
- easy_vision.python.core.ops.box_coder
- easy_vision.python.core.ops.box_list
- easy_vision.python.core.ops.box_list_ops
- easy_vision.python.core.ops.common_layers
- easy_vision.python.core.ops.common_ops
- easy_vision.python.core.ops.embedding_layer
- easy_vision.python.core.ops.keypoint_ops
- easy_vision.python.core.ops.normalization
- easy_vision.python.core.ops.post_processing
- easy_vision.python.core.ops.region_similarity_calculator
- easy_vision.python.core.ops.rnn_ops
- easy_vision.python.core.ops.shape_utils
- easy_vision.python.core.ops.static_shape
- easy_vision.python.core.ops.target_assigner
- easy_vision.python.core.ops.text_net_utils
- easy_vision.python.core.ops.transform_ops
- easy_vision.python.core.ops.ts_list
- easy_vision.python.core.ops.ts_list_ops
- easy_vision.python.core.optical_flow
- easy_vision.python.core.preprocessing
- easy_vision.python.core.preprocessing.autoaugment
- easy_vision.python.core.preprocessing.cifarnet_preprocessing
- easy_vision.python.core.preprocessing.classification_preprocess
- easy_vision.python.core.preprocessing.common_preprocess
- easy_vision.python.core.preprocessing.deeplab_preprocess
- easy_vision.python.core.preprocessing.efficientnet_preprocessing
- easy_vision.python.core.preprocessing.inception_preprocessing
- easy_vision.python.core.preprocessing.lenet_preprocessing
- easy_vision.python.core.preprocessing.preprocessing_factory
- easy_vision.python.core.preprocessing.preprocessor
- easy_vision.python.core.preprocessing.preprocessor_cache
- easy_vision.python.core.preprocessing.ssd_preprocess
- easy_vision.python.core.preprocessing.text_preprocess
- easy_vision.python.core.preprocessing.vgg_preprocessing
- easy_vision.python.core.preprocessing.video_preprocess
- easy_vision.python.core.sampler
- easy_vision.python.core.transformer
easy_vision.python.core.feature_map_generators
Functions to generate a list of feature maps based on image features.
Provides several feature map generators that can be used to build object detection feature extractors.
Object detection feature extractors are usually built by stacking two components: a base feature extractor such as Inception V3 and a feature map generator. Feature map generators build on the base feature extractor and produce a list of final feature maps.
easy_vision.python.core.feature_map_generators.fpn_top_down_feature_maps(image_features, depth, extra_conv_layers=0, retina_net=False, use_depthwise=False, resize_method=0, scope=None)
Generates FPN feature maps for Feature Pyramid Networks.
See https://arxiv.org/abs/1612.03144 for details.
Parameters: - image_features – list of image feature tensors. Spatial resolutions of successive tensors must reduce exactly by a factor of 2.
- depth – depth of output feature maps.
- extra_conv_layers – whether to add extra convolution layers via strided max pooling.
- retina_net – whether to add more extra convolution layers via strided convolutions, as in RetinaNet.
- use_depthwise – whether to use separable convolutions.
- resize_method – image resize method; defaults to bilinear.
- scope – A scope name to wrap this op under.
Returns: feature_maps – an OrderedDict mapping keys (feature map names) to FPN feature map tensors, where each tensor has a shape corresponding to the matching entry in image_features.
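A minimal usage sketch in TF 1.x style (the backbone tensors C3, C4, C5 and their shapes are illustrative placeholders, not part of the documented API):

```python
import tensorflow as tf
from easy_vision.python.core import feature_map_generators

# Three backbone feature maps whose spatial resolution halves at each level.
C3 = tf.placeholder(tf.float32, [1, 64, 64, 256])
C4 = tf.placeholder(tf.float32, [1, 32, 32, 512])
C5 = tf.placeholder(tf.float32, [1, 16, 16, 1024])

fpn_maps = feature_map_generators.fpn_top_down_feature_maps(
    image_features=[C3, C4, C5],
    depth=256)  # every FPN level is projected to 256 channels
# fpn_maps is an OrderedDict; each value keeps the spatial size of the
# corresponding input level, with `depth` output channels.
for name, fmap in fpn_maps.items():
  print(name, fmap.shape)
```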
easy_vision.python.core.feature_map_generators.get_depth_fn(depth_multiplier, min_depth)
Builds a callable to compute the depth (output channels) of conv filters.
Parameters: - depth_multiplier – a multiplier for the nominal depth.
- min_depth – a lower bound on the depth of filters.
Returns: A callable that takes in a nominal depth and returns the depth to use.
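The returned callable scales the nominal depth and clamps it at min_depth. A plausible equivalent, shown only as an illustration of the documented behavior (the library implementation may differ in rounding details):

```python
def get_depth_fn(depth_multiplier, min_depth):
  # Scale the nominal depth and never go below min_depth.
  def depth_fn(depth):
    return max(int(depth * depth_multiplier), min_depth)
  return depth_fn

depth_fn = get_depth_fn(depth_multiplier=0.5, min_depth=16)
print(depth_fn(256))  # 128
print(depth_fn(24))   # 16 (clamped to min_depth)
```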
easy_vision.python.core.feature_map_generators.multi_resolution_feature_maps(feature_map_layout, depth_multiplier, min_depth, insert_1x1_conv, image_features, pool_residual=False)
Generates multi-resolution feature maps from input image features.
Generates multi-scale feature maps for detection as in the SSD papers by Liu et al: https://arxiv.org/pdf/1512.02325v2.pdf, See Sec 2.1.
More specifically, it performs the following two tasks:
1. If a layer name is provided in the configuration, returns that layer as a feature map.
2. If a layer name is left as an empty string, constructs a new feature map based on the spatial shape and depth configuration. Note that the current implementation only supports generating new layers using convolutions of stride 2, resulting in a spatial resolution reduction by a factor of 2. The convolution kernel size is set to 3 by default and can be customized by the caller.
An example of the configuration for Inception V3:
{'from_layer': ['Mixed_5d', 'Mixed_6e', 'Mixed_7c', '', '', ''], 'layer_depth': [-1, -1, -1, 512, 256, 128]}
Parameters: - feature_map_layout – Dictionary of specifications for the feature map layouts in the following format (Inception V2/V3 respectively):
{'from_layer': ['Mixed_3c', 'Mixed_4c', 'Mixed_5c', '', '', ''], 'layer_depth': [-1, -1, -1, 512, 256, 128]}
or
{'from_layer': ['Mixed_5d', 'Mixed_6e', 'Mixed_7c', '', '', ''], 'layer_depth': [-1, -1, -1, 512, 256, 128]}
If 'from_layer' is specified, the named feature map is used directly as a box predictor layer and its depth is inferred from that feature map (instead of using the provided 'layer_depth' value); in this case the convention is to set 'layer_depth' to -1 for clarity. Otherwise, if 'from_layer' is an empty string, the box predictor layer is built from the previous layer using convolution operations. Note that the current implementation only supports generating new layers using convolutions of stride 2 (resulting in a spatial resolution reduction by a factor of 2) and will be extended to a more flexible design. The convolution kernel size is 3 by default and can be customized via the 'conv_kernel_size' parameter (similarly, 'conv_kernel_size' should be set to -1 if 'from_layer' is specified). The created convolution operation is a normal 2D convolution by default, or a depthwise convolution followed by a 1x1 convolution if 'use_depthwise' is set to True.
- depth_multiplier – Depth multiplier for convolutional layers.
- min_depth – Minimum depth for convolutional layers.
- insert_1x1_conv – A boolean indicating whether an additional 1x1 convolution should be inserted before shrinking the feature map.
- image_features – A dictionary of handles to activation tensors from the base feature extractor.
- pool_residual – Whether to add an average pooling layer followed by a residual connection between subsequent feature maps when their channel depths match. For example, with 'layer_depth': [-1, 512, 256, 256], a pooling and residual layer is added between the third and fourth feature maps. This option works best with a Weight Shared Convolutional Box Predictor when all feature maps have the same channel depth, to encourage more consistent features across multi-scale feature maps.
Returns: feature_maps – an OrderedDict mapping keys (feature map names) to tensors, where each tensor has shape [batch, height_i, width_i, depth_i].
Raises: ValueError – if the number of entries in 'from_layer' and 'layer_depth' do not match, or if a generated layer does not have the same resolution as specified.
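A sketch of an SSD-style call using the Inception V3 layout from the docstring; the backbone tensor names and shapes below are placeholders:

```python
import tensorflow as tf
from easy_vision.python.core import feature_map_generators

feature_map_layout = {
    'from_layer': ['Mixed_5d', 'Mixed_6e', 'Mixed_7c', '', '', ''],
    'layer_depth': [-1, -1, -1, 512, 256, 128],
}
image_features = {
    'Mixed_5d': tf.placeholder(tf.float32, [1, 35, 35, 288]),
    'Mixed_6e': tf.placeholder(tf.float32, [1, 17, 17, 768]),
    'Mixed_7c': tf.placeholder(tf.float32, [1, 8, 8, 2048]),
}
feature_maps = feature_map_generators.multi_resolution_feature_maps(
    feature_map_layout=feature_map_layout,
    depth_multiplier=1.0,
    min_depth=16,
    insert_1x1_conv=True,
    image_features=image_features)
# The first three maps reuse the named backbone layers; the remaining three
# are new stride-2 layers with 512, 256 and 128 channels.
```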
easy_vision.python.core.feature_map_generators.pooling_pyramid_feature_maps(base_feature_map_depth, num_layers, image_features, replace_pool_with_conv=False)
Generates pooling pyramid feature maps.
The pooling pyramid feature maps are motivated by multi_resolution_feature_maps. The main differences are that this generator is simpler and has fewer free parameters. More specifically:
- Instead of using convolutions to shrink the feature map, it uses max pooling, which eliminates the convolution parameters entirely.
- By pooling features from the larger map down to a single cell, it generates features in the same feature space.
- Instead of making box predictions independently from individual maps, it shares the same classifier across the different feature maps, reducing the "mis-calibration" across scales.
See go/ppn-detection for more details.
Parameters: - base_feature_map_depth – Depth of the base feature before the max pooling.
- num_layers – Number of layers used to make predictions. They are pooled from the base feature.
- image_features – A dictionary of handles to activation tensors from the feature extractor.
- replace_pool_with_conv – Whether or not to replace pooling operations with convolutions in the PPN. Default is False.
Returns: feature_maps – an OrderedDict mapping keys (feature map names) to tensors, where each tensor has shape [batch, height_i, width_i, depth_i].
Raises: ValueError – if image_features does not contain exactly one entry.
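A brief sketch; pooling_pyramid_feature_maps expects exactly one entry in image_features (the dictionary key and tensor shape here are illustrative, not prescribed by the documented API):

```python
import tensorflow as tf
from easy_vision.python.core import feature_map_generators

base_map = tf.placeholder(tf.float32, [1, 19, 19, 512])
ppn_maps = feature_map_generators.pooling_pyramid_feature_maps(
    base_feature_map_depth=1024,   # depth of the base feature before pooling
    num_layers=6,                  # number of prediction layers pooled from it
    image_features={'image_features': base_map},
    replace_pool_with_conv=False)
```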
easy_vision.python.core.feature_map_generators.yolo_feature_maps(image_features, use_pan=False, use_spp=False, use_sam=False, fixed_features_output_dim=0)
Generates multi-scale feature maps for YOLO detection.
Parameters: - image_features – a list of feature maps; spatial resolutions of successive tensors must reduce exactly by a factor of 2.
- use_pan – whether to use a path aggregation network (PAN) structure.
- use_spp – whether to use a spatial pyramid pooling (SPP) structure.
- use_sam – whether to use a convolutional spatial attention module (SAM); SAM is added to the last yolo_block of each feature map branch.
Returns: feature map list – a list of pyramid feature maps, in the same order as the input features.
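Illustrative call for a YOLOv4-style head with PAN and SPP enabled (placeholder tensors; names and shapes are not part of the documented API):

```python
import tensorflow as tf
from easy_vision.python.core import feature_map_generators

# Feature maps at strides 8, 16 and 32; resolutions halve level to level.
P3 = tf.placeholder(tf.float32, [1, 52, 52, 256])
P4 = tf.placeholder(tf.float32, [1, 26, 26, 512])
P5 = tf.placeholder(tf.float32, [1, 13, 13, 1024])

yolo_maps = feature_map_generators.yolo_feature_maps(
    image_features=[P3, P4, P5],
    use_pan=True,   # path aggregation (extra bottom-up fusion)
    use_spp=True,   # spatial pyramid pooling on the deepest map
    use_sam=False)
# yolo_maps is a list ordered the same way as image_features.
```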
easy_vision.python.core.learning_schedules
Library of common learning rate schedules.
easy_vision.python.core.learning_schedules.cosine_decay_with_warmup(global_step, learning_rate_base, total_steps, warmup_learning_rate=0.0, warmup_steps=0, hold_base_rate_steps=0)
Cosine decay schedule with warm-up period.
Implements the cosine annealing learning rate described in Loshchilov and Hutter, "SGDR: Stochastic Gradient Descent with Warm Restarts", ICLR 2017. https://arxiv.org/abs/1608.03983
In this schedule, the learning rate grows linearly from warmup_learning_rate to learning_rate_base for warmup_steps, then transitions to a cosine decay schedule.
Parameters: - global_step – int64 (scalar) tensor representing global step.
- learning_rate_base – base learning rate.
- total_steps – total number of training steps.
- warmup_learning_rate – initial learning rate for warm up.
- warmup_steps – number of warmup steps.
- hold_base_rate_steps – Optional number of steps to hold base learning rate before decaying.
Returns: a (scalar) float tensor representing learning rate.
Raises: ValueError
– if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
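The schedule can be visualized with a small NumPy sketch of the same math (linear warmup, optional hold, then cosine decay). This re-statement is for illustration only and may differ in detail from the library implementation:

```python
import numpy as np

def cosine_decay_with_warmup_np(step, learning_rate_base, total_steps,
                                warmup_learning_rate=0.0, warmup_steps=0,
                                hold_base_rate_steps=0):
  if step < warmup_steps:
    # Linear ramp from warmup_learning_rate to learning_rate_base.
    slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
    return warmup_learning_rate + slope * step
  if step < warmup_steps + hold_base_rate_steps:
    return learning_rate_base
  # Cosine decay over the remaining steps.
  progress = (step - warmup_steps - hold_base_rate_steps) / float(
      total_steps - warmup_steps - hold_base_rate_steps)
  return 0.5 * learning_rate_base * (1.0 + np.cos(np.pi * progress))

print(cosine_decay_with_warmup_np(0, 0.1, 1000, 0.01, 100))     # 0.01
print(cosine_decay_with_warmup_np(100, 0.1, 1000, 0.01, 100))   # 0.1
print(cosine_decay_with_warmup_np(1000, 0.1, 1000, 0.01, 100))  # ~0.0
```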
easy_vision.python.core.learning_schedules.exponential_decay_with_burnin(global_step, learning_rate_base, learning_rate_decay_steps, learning_rate_decay_factor, burnin_learning_rate=0.0, burnin_steps=0, min_learning_rate=0.0, staircase=True)
Exponential decay schedule with burn-in period.
In this schedule, learning rate is fixed at burnin_learning_rate for a fixed period, before transitioning to a regular exponential decay schedule.
Parameters: - global_step – int tensor representing global step.
- learning_rate_base – base learning rate.
- learning_rate_decay_steps – steps to take between decaying the learning rate. Note that this includes the number of burn-in steps.
- learning_rate_decay_factor – multiplicative factor by which to decay learning rate.
- burnin_learning_rate – initial learning rate during burn-in period. If 0.0 (which is the default), then the burn-in learning rate is simply set to learning_rate_base.
- burnin_steps – number of steps to use burnin learning rate.
- min_learning_rate – the minimum learning rate.
- staircase – whether to use staircase decay.
Returns: a (scalar) float tensor representing the learning rate.
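Typical usage inside a TF 1.x training setup; this is a hedged sketch, with all numeric values chosen purely for illustration:

```python
import tensorflow as tf
from easy_vision.python.core import learning_schedules

global_step = tf.train.get_or_create_global_step()
learning_rate = learning_schedules.exponential_decay_with_burnin(
    global_step,
    learning_rate_base=0.004,
    learning_rate_decay_steps=10000,
    learning_rate_decay_factor=0.95,
    burnin_learning_rate=0.001,   # fixed rate for the first burnin_steps
    burnin_steps=2000,
    min_learning_rate=1e-6,
    staircase=True)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
```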
easy_vision.python.core.learning_schedules.manual_stepping(global_step, boundaries, rates, warmup=False)
Manually stepped learning rate schedule.
This function provides fine grained control over learning rates. One must specify a sequence of learning rates as well as a set of integer steps at which the current learning rate must transition to the next. For example, if boundaries = [5, 10] and rates = [.1, .01, .001], then the learning rate returned by this function is .1 for global_step=0,…,4, .01 for global_step=5…9, and .001 for global_step=10 and onward.
Parameters: - global_step – int64 (scalar) tensor representing global step.
- boundaries – a list of global steps at which to switch learning rates. This list is assumed to consist of increasing positive integers.
- rates – a list of (float) learning rates corresponding to intervals between the boundaries. The length of this list must be exactly len(boundaries) + 1.
- warmup – Whether to linearly interpolate learning rate for steps in [0, boundaries[0]].
Returns: a (scalar) float tensor representing learning rate
Raises: ValueError
– if one of the following checks fails: 1. boundaries is a strictly increasing list of positive integers 2. len(rates) == len(boundaries) + 1 3. boundaries[0] != 0
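The worked example from the description, re-stated as a plain-Python lookup for clarity (illustration only, not the library code):

```python
def manual_stepping_py(step, boundaries, rates):
  # rates[i] applies while step < boundaries[i]; rates[-1] applies afterwards.
  for boundary, rate in zip(boundaries, rates):
    if step < boundary:
      return rate
  return rates[-1]

boundaries = [5, 10]
rates = [0.1, 0.01, 0.001]
print([manual_stepping_py(s, boundaries, rates) for s in range(12)])
# 0.1 for steps 0..4, 0.01 for steps 5..9, 0.001 from step 10 onward
```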
easy_vision.python.core.learning_schedules.transformer_policy(global_step, learning_rate, d_model, warmup_steps, step_scaling_rate=1.0, max_lr=None, coefficient=1.0, dtype=tf.float32)
Transformer learning rate policy from https://arxiv.org/pdf/1706.03762.pdf with a hat (max_lr), also called the "noam" learning rate decay scheme.
Parameters: - global_step – global step TensorFlow tensor (ignored for this policy).
- learning_rate (float) – initial learning rate to use.
- d_model (int) – model dimensionality.
- warmup_steps (int) – number of warm-up steps.
- step_scaling_rate (float) – scaling rate applied to the step count.
- max_lr (float) – maximal learning rate, i.e. hat.
- coefficient (float) – optimizer adjustment. Recommended 0.002 if using “Adam” else 1.0.
- dtype – dtype for this policy.
Returns: learning rate at step global_step.
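The underlying "noam" formula from Vaswani et al., sketched in NumPy with the optional cap (max_lr); the library's exact scaling by learning_rate and coefficient is an assumption here and may differ:

```python
import numpy as np

def noam_lr(step, learning_rate, d_model, warmup_steps, max_lr=None,
            coefficient=1.0, step_scaling_rate=1.0):
  step = max(1.0, step * step_scaling_rate)
  lr = coefficient * learning_rate * d_model ** -0.5 * min(
      step ** -0.5, step * warmup_steps ** -1.5)
  if max_lr is not None:
    lr = min(lr, max_lr)   # the "hat": never exceed max_lr
  return lr

print(noam_lr(step=4000, learning_rate=2.0, d_model=512, warmup_steps=4000))
```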
easy_vision.python.core.losses
Classification and regression loss functions for object detection.
- Localization losses:
- WeightedL2LocalizationLoss
- WeightedSmoothL1LocalizationLoss
- WeightedIOULocalizationLoss
- Classification losses:
- WeightedSigmoidClassificationLoss
- WeightedSoftmaxClassificationLoss
- WeightedSoftmaxClassificationAgainstLogitsLoss
- BootstrappedSigmoidClassificationLoss
class easy_vision.python.core.losses.BootstrappedSigmoidClassificationLoss(alpha, bootstrap_type='soft')
Bases: easy_vision.python.core.losses.Loss
Bootstrapped sigmoid cross entropy classification loss function.
This loss uses a convex combination of training labels and the current model’s predictions as training targets in the classification loss. The idea is that as the model improves over time, its predictions can be trusted more and we can use these predictions to mitigate the damage of noisy/incorrect labels, because incorrect labels are likely to be eventually highly inconsistent with other stimuli predicted to have the same label by the model.
In “soft” bootstrapping, we use all predicted class probabilities, whereas in “hard” bootstrapping, we use the single class favored by the model.
See also Training Deep Neural Networks On Noisy Labels with Bootstrapping by Reed et al. (ICLR 2015).
__init__(alpha, bootstrap_type='soft')
Constructor.
Parameters: - alpha – a float32 scalar tensor between 0 and 1 representing interpolation weight
- bootstrap_type – set to either ‘hard’ or ‘soft’ (default)
Raises: AssertionError
– if bootstrap_type is not either ‘hard’ or ‘soft’
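The bootstrapped target can be written in a few lines of NumPy following the Reed et al. formulation; this is an illustration of the idea, not the library code:

```python
import numpy as np

def sigmoid(x):
  return 1.0 / (1.0 + np.exp(-x))

def bootstrapped_sigmoid_xent(logits, labels, alpha, bootstrap_type='soft'):
  p = sigmoid(logits)
  if bootstrap_type == 'soft':
    targets = alpha * labels + (1.0 - alpha) * p          # predicted probabilities
  else:
    targets = alpha * labels + (1.0 - alpha) * (p > 0.5)  # hard: model's favored class
  # Standard sigmoid cross entropy against the bootstrapped targets.
  return -(targets * np.log(p + 1e-8) + (1 - targets) * np.log(1 - p + 1e-8))

print(bootstrapped_sigmoid_xent(np.array([2.0, -1.0]), np.array([1.0, 1.0]), alpha=0.95))
```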
class easy_vision.python.core.losses.HardExampleMiner(num_hard_examples=64, iou_threshold=0.7, loss_type='both', cls_loss_weight=0.05, loc_loss_weight=0.06, max_negatives_per_positive=None, min_negatives_per_image=0)
Bases: object
Hard example mining for regions in a list of images.
Implements hard example mining to select a subset of regions to be back-propagated. For each image, selects the regions with highest losses, subject to the condition that a newly selected region cannot have an IOU > iou_threshold with any of the previously selected regions. This can be achieved by re-using a greedy non-maximum suppression algorithm. A constraint on the number of negatives mined per positive region can also be enforced.
Reference papers: “Training Region-based Object Detectors with Online Hard Example Mining” (CVPR 2016) by Srivastava et al., and “SSD: Single Shot MultiBox Detector” (ECCV 2016) by Liu et al.
__init__(num_hard_examples=64, iou_threshold=0.7, loss_type='both', cls_loss_weight=0.05, loc_loss_weight=0.06, max_negatives_per_positive=None, min_negatives_per_image=0)
Constructor.
The hard example mining implemented by this class can replicate the behavior in the two aforementioned papers (Srivastava et al., and Liu et al). To replicate the A2 paper (Srivastava et al), num_hard_examples is set to a fixed parameter (64 by default) and iou_threshold is set to 0.7 for running non-max-suppression on the predicted boxes prior to hard mining. To replicate the SSD paper (Liu et al), num_hard_examples should be set to None, max_negatives_per_positive should be 3, and iou_threshold should be 1.0 (to effectively turn off NMS); both configurations are written out after the parameter list below.
Parameters: - num_hard_examples – maximum number of hard examples to be selected per image (prior to enforcing max negative to positive ratio constraint). If set to None, all examples obtained after NMS are considered.
- iou_threshold – minimum intersection over union for an example to be discarded during NMS.
- loss_type – use only classification losses (‘cls’, default), localization losses (‘loc’) or both losses (‘both’). In the last case, cls_loss_weight and loc_loss_weight are used to compute weighted sum of the two losses.
- cls_loss_weight – weight for classification loss.
- loc_loss_weight – weight for location loss.
- max_negatives_per_positive – maximum number of negatives to retain for each positive anchor. By default this is None, meaning that no prespecified negative:positive ratio is enforced. Note also that this value can be a float (and will be converted to a float even if passed in otherwise).
- min_negatives_per_image – minimum number of negative anchors to sample for a given image. Setting this to a positive number allows sampling negatives in an image without any positive anchors, and thus avoids biasing towards at least one detection per image.
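The two configurations described above, written out (assuming the class is importable from easy_vision.python.core.losses as documented):

```python
from easy_vision.python.core.losses import HardExampleMiner

# OHEM-style mining (Srivastava et al.): fixed budget of 64 hard examples,
# NMS at IOU 0.7 before mining.
ohem_miner = HardExampleMiner(num_hard_examples=64, iou_threshold=0.7)

# SSD-style mining (Liu et al.): keep all examples after NMS is effectively
# disabled, and enforce a 3:1 negative:positive ratio.
ssd_miner = HardExampleMiner(num_hard_examples=None,
                             iou_threshold=1.0,
                             max_negatives_per_positive=3)
```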
class easy_vision.python.core.losses.Loss
Bases: object
Abstract base class for loss functions.
class easy_vision.python.core.losses.SigmoidFocalClassificationLoss(gamma=2.0, alpha=0.25, label_smoothing=0.0)
Bases: easy_vision.python.core.losses.Loss
Sigmoid focal cross entropy loss.
Focal loss down-weights well-classified examples and focuses on the hard examples. See https://arxiv.org/pdf/1708.02002.pdf for the loss definition.
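The focal modulation itself is compact; below is a NumPy sketch of the standard sigmoid focal loss (Lin et al.), given for reference rather than as the library implementation:

```python
import numpy as np

def sigmoid_focal_loss(logits, labels, gamma=2.0, alpha=0.25):
  p = 1.0 / (1.0 + np.exp(-logits))
  # p_t is the probability assigned to the true class.
  p_t = labels * p + (1 - labels) * (1 - p)
  alpha_t = labels * alpha + (1 - labels) * (1 - alpha)
  ce = -np.log(p_t + 1e-8)
  # (1 - p_t)^gamma down-weights well-classified examples.
  return alpha_t * (1 - p_t) ** gamma * ce

print(sigmoid_focal_loss(np.array([4.0, -4.0, 0.0]), np.array([1.0, 1.0, 0.0])))
```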
class easy_vision.python.core.losses.WeightedIOULocalizationLoss
Bases: easy_vision.python.core.losses.Loss
IOU localization loss function.
Computes the IOU for corresponding pairs of predicted/groundtruth boxes and assigns each pair a loss of 1 - IOU. A weighted sum over all pairs is then returned as the total loss.
class easy_vision.python.core.losses.WeightedL2LocalizationLoss
Bases: easy_vision.python.core.losses.Loss
L2 localization loss function with anchorwise output support.
Loss[b,a] = .5 * ||weights[b,a] * (prediction[b,a,:] - target[b,a,:])||^2
class easy_vision.python.core.losses.WeightedSigmoidClassificationLoss(label_smoothing=0.0)
Bases: easy_vision.python.core.losses.Loss
Sigmoid cross entropy classification loss function.
class easy_vision.python.core.losses.WeightedSmoothL1LocalizationLoss(delta=1.0)
Bases: easy_vision.python.core.losses.Loss
Smooth L1 localization loss function, a.k.a. Huber loss.
The smooth L1 loss is defined elementwise as 0.5 * x^2 if |x| <= delta and 0.5 * delta^2 + delta * (|x| - delta) otherwise, where x is the difference between prediction and target.
See also Equation (3) in the Fast R-CNN paper by Ross Girshick (ICCV 2015)
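Elementwise, this is just a Huber penalty; a NumPy sketch of that definition (not the library code, which also applies per-anchor weights):

```python
import numpy as np

def smooth_l1(x, delta=1.0):
  # Quadratic (0.5 * x^2) inside [-delta, delta], linear with slope delta outside.
  abs_x = np.abs(x)
  return np.where(abs_x <= delta,
                  0.5 * abs_x ** 2,
                  delta * (abs_x - 0.5 * delta))

print(smooth_l1(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
# [1.5, 0.125, 0.0, 0.125, 1.5]
```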
class easy_vision.python.core.losses.WeightedSoftmaxClassificationAgainstLogitsLoss(logit_scale=1.0)
Bases: easy_vision.python.core.losses.Loss
Softmax loss function against logits.
Targets are expected to be provided in logits space instead of “one hot” or “probability distribution” space.
class easy_vision.python.core.losses.WeightedSoftmaxClassificationLoss(logit_scale=1.0, label_smoothing=0.0)
Bases: easy_vision.python.core.losses.Loss
Softmax loss function.
easy_vision.python.core.standard_fields
Contains classes specifying naming conventions used for object detection.
Specifies:
- InputDataFields: standard fields used by reader/preprocessor/batcher.
- DetectionResultFields: standard fields returned by the object detector.
- BoxListFields: standard fields used by BoxList.
- TfExampleFields: standard fields for the tf.Example data format (go/tf-example).
class easy_vision.python.core.standard_fields.BoxListFields
Bases: object
Naming conventions for BoxLists.
- boxes – bounding box coordinates.
- classes – classes per bounding box.
- scores – scores per bounding box.
- weights – sample weights per bounding box.
- objectness – objectness score per bounding box.
- masks – masks per bounding box.
- boundaries – boundaries per bounding box.
- keypoints – keypoints per bounding box.
- keypoint_heatmaps – keypoint heatmaps per bounding box.
- is_crowd – is_crowd annotation per bounding box.
- boundaries = 'boundaries'
- boxes = 'boxes'
- classes = 'classes'
- is_crowd = 'is_crowd'
- keypoint_heatmaps = 'keypoint_heatmaps'
- keypoints = 'keypoints'
- masks = 'masks'
- objectness = 'objectness'
- scores = 'scores'
- weights = 'weights'
class easy_vision.python.core.standard_fields.DetectionResultFields
Bases: object
Naming conventions for storing the output of the detector.
- source_id – source of the original image.
- key – unique key corresponding to image.
- detection_boxes – coordinates of the detection boxes in the image.
- detection_scores – detection scores for the detection boxes in the image.
- detection_classes – detection-level class labels.
- detection_masks – contains a segmentation mask for each detection box.
- detection_boundaries – contains an object boundary for each detection box.
- detection_keypoints – contains detection keypoints for each detection box.
- num_detections – number of detections in the batch.
- detection_boundaries = 'detection_boundaries'
- detection_boxes = 'detection_boxes'
- detection_classes = 'detection_classes'
- detection_classes_scores = 'detection_classes_scores'
- detection_keypoints = 'detection_keypoints'
- detection_masks = 'detection_masks'
- detection_scores = 'detection_scores'
- detection_texts = 'detection_texts'
- detection_texts_ids = 'detection_texts_ids'
- detection_texts_scores = 'detection_texts_scores'
- key = 'key'
- num_detections = 'num_detections'
- source_id = 'source_id'
class easy_vision.python.core.standard_fields.DistributeStrategyFields
Bases: object
Distribution strategy names.
- async_ps = 'async_ps'
- collective = 'collective'
- ess = 'ess'
- mirrored = 'mirrored'
- none = ''
- sync_ps = 'sync_ps'
- valid_set = set(['', 'async_ps', 'collective', 'ess', 'mirrored', 'sync_ps', 'whale'])
- whale = 'whale'
class easy_vision.python.core.standard_fields.EVGraphKeys
Bases: object
Collection keys for easy-vision related collections.
- export_config = 'EV_EXPORT_CONFIG'
class easy_vision.python.core.standard_fields.InputDataFields
Bases: object
Names for the input tensors.
Holds the standard data field names used to identify input tensors. These keys should be used by the decoder for the returned tensor_dict containing input tensors, and by the model to identify the tensors it needs.
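These classes are used as namespaces of string constants, for example to index a decoded tensor_dict; a brief sketch:

```python
from easy_vision.python.core import standard_fields

fields = standard_fields.InputDataFields
# tensor_dict stands in for the dictionary produced by a decoder; here it is
# a toy dict keyed by the standard field names.
tensor_dict = {fields.image: 'image_tensor',
               fields.groundtruth_boxes: 'boxes_tensor',
               fields.groundtruth_classes: 'classes_tensor'}
image = tensor_dict[fields.image]               # key is the string 'image'
boxes = tensor_dict[fields.groundtruth_boxes]   # 'groundtruth_boxes'
```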
- image – image.
- original_image – image in the original input size.
- key – unique key corresponding to image.
- source_id – source of the original image.
- filename – original filename of the dataset (without common path).
- groundtruth_image_classes – image-level class labels.
- groundtruth_boxes – coordinates of the ground truth boxes in the image.
- groundtruth_classes – box-level class labels.
- groundtruth_label_types – box-level label types (e.g. explicit negative).
- groundtruth_is_crowd – [DEPRECATED, use groundtruth_group_of instead] is the groundtruth a single object or a crowd.
- groundtruth_area – area of a groundtruth segment.
- groundtruth_difficult – is a difficult object.
- groundtruth_group_of – is a group_of objects, e.g. multiple objects of the same class forming a connected group, where instances are heavily occluding each other.
- proposal_boxes – coordinates of object proposal boxes.
- proposal_objectness – objectness score of each proposal.
- groundtruth_instance_masks – ground truth instance masks.
- groundtruth_instance_boundaries – ground truth instance boundaries.
- groundtruth_instance_classes – instance mask-level class labels.
- groundtruth_keypoints – ground truth keypoints.
- groundtruth_keypoint_visibilities – ground truth keypoint visibilities.
- groundtruth_label_scores – groundtruth label scores.
- groundtruth_weights – groundtruth weight factor for bounding boxes.
- num_groundtruth_boxes – number of groundtruth boxes.
- true_image_shapes – true shapes of images within the resized images, as resized images can be padded with zeros.
- char_dict = 'char_dict'
- dataset_name = 'dataset_name'
- filename = 'filename'
- groundtruth_area = 'groundtruth_area'
- groundtruth_boxes = 'groundtruth_boxes'
- groundtruth_boxes_absolute = 'groundtruth_boxes_absolute'
- groundtruth_classes = 'groundtruth_classes'
- groundtruth_difficult = 'groundtruth_difficult'
- groundtruth_group_of = 'groundtruth_group_of'
- groundtruth_image_classes = 'groundtruth_image_classes'
- groundtruth_image_classes_num = 'groundtruth_image_classes_num'
- groundtruth_instance_boundaries = 'groundtruth_instance_boundaries'
- groundtruth_instance_classes = 'groundtruth_instance_classes'
- groundtruth_instance_masks = 'groundtruth_instance_masks'
- groundtruth_is_crowd = 'groundtruth_is_crowd'
- groundtruth_keypoint_visibilities = 'groundtruth_keypoint_visibilities'
- groundtruth_keypoints = 'groundtruth_keypoints'
- groundtruth_keypoints_absolute = 'groundtruth_keypoints_absolute'
- groundtruth_label_scores = 'groundtruth_label_scores'
- groundtruth_label_types = 'groundtruth_label_types'
- groundtruth_text = 'groundtruth_text'
- groundtruth_text_direction = 'groundtruth_text_direction'
- groundtruth_text_ids = 'groundtruth_text_ids'
- groundtruth_text_keypoints = 'groundtruth_text_keypoints'
- groundtruth_text_length = 'groundtruth_text_length'
- groundtruth_texts = 'groundtruth_texts'
- groundtruth_texts_direction = 'groundtruth_texts_direction'
- groundtruth_texts_ids = 'groundtruth_texts_ids'
- groundtruth_texts_length = 'groundtruth_texts_length'
- groundtruth_weights = 'groundtruth_weights'
- height = 'height'
- image = 'image'
- key = 'key'
- label_map = 'label_map'
- mask = 'mask'
- num_groundtruth_boxes = 'num_groundtruth_boxes'
- optical_flow = 'optical_flow'
- original_image = 'original_image'
- original_image_shape = 'original_image_shape'
- original_instance_masks = 'original_instance_masks'
- proposal_boxes = 'proposal_boxes'
- proposal_objectness = 'proposal_objectness'
- source_id = 'source_id'
- true_image_shape = 'true_image_shape'
- width = 'width'
class easy_vision.python.core.standard_fields.Mode
Bases: object
Model mode: train, evaluate, predict.
- evaluate = 'evaluate'
- predict = 'predict'
- train = 'train'
class easy_vision.python.core.standard_fields.ModelFields
Bases: object
Model-related fields, used to index parameters in params in function model_fn (python/estimator/cv_estimator.py:19).
- additional_outputs: user-specified output names in addition to the outputs assigned in each model.
- export_outputs: whether to export outputs info in the estimator.
- additional_outputs = 'additional_outputs'
- export_outputs = 'export_outputs'
class easy_vision.python.core.standard_fields.TextRecognitionResultFields
Bases: object
- prediction_text_keypoints = 'prediction_text_keypoints'
- sequence_logits = 'sequence_logits'
- sequence_predict_ids = 'sequence_predict_ids'
- sequence_predict_text = 'sequence_predict_text'
- sequence_probability = 'sequence_probability'
class easy_vision.python.core.standard_fields.TfExampleFields
Bases: object
TF-example proto feature names for object detection.
Holds the standard feature names to load from an Example proto for object detection.
- image_encoded – JPEG encoded string.
- image_format – image format, e.g. "JPEG".
- filename – filename.
- channels – number of channels of image.
- colorspace – colorspace, e.g. "RGB".
- height – height of image in pixels, e.g. 462.
- width – width of image in pixels, e.g. 581.
- source_id – original source of the image.
- object_class_text – labels in text format, e.g. ["person", "cat"].
- object_class_label – labels in numbers, e.g. [16, 8].
- object_bbox_xmin – xmin coordinates of groundtruth box, e.g. 10, 30.
- object_bbox_xmax – xmax coordinates of groundtruth box, e.g. 50, 40.
- object_bbox_ymin – ymin coordinates of groundtruth box, e.g. 40, 50.
- object_bbox_ymax – ymax coordinates of groundtruth box, e.g. 80, 70.
- object_view – viewpoint of object, e.g. ["frontal", "left"].
- object_truncated – is object truncated, e.g. [true, false].
- object_occluded – is object occluded, e.g. [true, false].
- object_difficult – is object difficult, e.g. [true, false].
- object_group_of – is object a single object or a group of objects.
- object_depiction – is object a depiction.
- object_is_crowd – [DEPRECATED, use object_group_of instead] is the object a single object or a crowd.
- object_segment_area – the area of the segment.
- object_weight – a weight factor for the object's bounding box.
- instance_masks – instance segmentation masks.
- instance_boundaries – instance boundaries.
- instance_classes – classes for each instance segmentation mask.
- detection_class_label – class label in numbers.
- detection_bbox_ymin – ymin coordinates of a detection box.
- detection_bbox_xmin – xmin coordinates of a detection box.
- detection_bbox_ymax – ymax coordinates of a detection box.
- detection_bbox_xmax – xmax coordinates of a detection box.
- detection_score – detection score for the class label and box.
- channels = 'image/channels'
- colorspace = 'image/colorspace'
- detection_bbox_xmax = 'image/detection/bbox/xmax'
- detection_bbox_xmin = 'image/detection/bbox/xmin'
- detection_bbox_ymax = 'image/detection/bbox/ymax'
- detection_bbox_ymin = 'image/detection/bbox/ymin'
- detection_class_label = 'image/detection/label'
- detection_score = 'image/detection/score'
- filename = 'image/filename'
- height = 'image/height'
- image_encoded = 'image/encoded'
- image_format = 'image/format'
- instance_boundaries = 'image/boundaries/object'
- instance_classes = 'image/segmentation/object/class'
- instance_masks = 'image/segmentation/object'
- object_bbox_xmax = 'image/object/bbox/xmax'
- object_bbox_xmin = 'image/object/bbox/xmin'
- object_bbox_ymax = 'image/object/bbox/ymax'
- object_bbox_ymin = 'image/object/bbox/ymin'
- object_class_label = 'image/object/class/label'
- object_class_text = 'image/object/class/text'
- object_depiction = 'image/object/depiction'
- object_difficult = 'image/object/difficult'
- object_group_of = 'image/object/group_of'
- object_is_crowd = 'image/object/is_crowd'
- object_occluded = 'image/object/occluded'
- object_segment_area = 'image/object/segment/area'
- object_truncated = 'image/object/truncated'
- object_view = 'image/object/view'
- object_weight = 'image/object/weight'
- source_id = 'image/source_id'
- width = 'image/width'