easy_vision.python.input

easy_vision.python.input.action_detection_input

class easy_vision.python.input.action_detection_input.ActionDetectionInput(dataset_config)[source]

Bases: easy_vision.python.input.video_input.VideoInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
create_placeholder(batch_size)[source]
Create placeholders for export; subclasses that need more
placeholders should override this method.
Parameters:batch_size – number of samples to predict in one session run
Returns:a dict of name (string) to placeholders
declare_feature_label()[source]
decode_record(tf_example_string_tensor)[source]

Decode data into a tensor_dict.

Parameters:tf_example_string_tensor – serialized tf.Example string tensor to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict

easy_vision.python.input.classification_input

class easy_vision.python.input.classification_input.ClassificationInput(dataset_config)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
convert_to_tensor(tensor)[source]
classmethod create_class(name)
declare_feature_label()[source]
decode_record(record_string)[source]

Decode data into a tensor_dict.

Parameters:record_string – serialized record to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict

easy_vision.python.input.cv_input

class easy_vision.python.input.cv_input.CVInput(dataset_config)[source]

Bases: object

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
build_record_dataset(filename_dataset)[source]

Build one of several dataset kinds, e.g. tfrecord_dataset, textline_dataset or csv_dataset.

Parameters:filename_dataset – string tensor of filenames or a table name

classmethod create_class(name)
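Every input class above exposes a classmethod create_class(name), which suggests a name-to-class registry that resolves input subclasses by their registered name. A minimal sketch of that pattern follows; the registry class and its methods are illustrative stand-ins, not easy_vision's actual implementation:

```python
# Hypothetical sketch of a name -> class registry, as create_class(name)
# suggests. Names here are illustrative, not easy_vision internals.

class InputRegistry:
    _classes = {}

    @classmethod
    def register(cls, subclass):
        # Record the subclass under its own class name.
        cls._classes[subclass.__name__] = subclass
        return subclass

    @classmethod
    def create_class(cls, name):
        # Resolve a registered class by name.
        return cls._classes[name]

@InputRegistry.register
class CVInput:
    def __init__(self, dataset_config):
        self._config = dataset_config

@InputRegistry.register
class ClassificationInput(CVInput):
    pass

# Resolve a subclass by name, then instantiate it with a dataset config.
klass = InputRegistry.create_class('ClassificationInput')
inp = klass({'input_path': 'train.tfrecord'})
print(type(inp).__name__)  # ClassificationInput
```

This keeps configuration files decoupled from Python imports: a config names the input class as a string, and the registry turns that string back into a class.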
create_input(export_config=None)[source]

Get an input_fn for the estimator.

Returns:input_fn used to construct the estimator
Return type:input_fn
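The input_fn returned by create_input follows the usual Estimator contract: a zero-argument callable that yields (features, labels). A hedged pure-Python sketch of that contract, with hypothetical record and feature names (the real easy_vision input_fn builds a tf.data pipeline from the dataset config):

```python
# Sketch of the input_fn contract: create_input(...) returns a callable
# that an Estimator invokes to obtain (features, labels).
# Record layout and key names are illustrative assumptions.

def create_input(records):
    def input_fn():
        features = {'image': [r['image'] for r in records]}
        labels = {'label': [r['label'] for r in records]}
        return features, labels
    return input_fn

input_fn = create_input([{'image': [0.1, 0.2], 'label': 3}])
features, labels = input_fn()
print(labels['label'])  # [3]
```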
create_placeholder(batch_size)[source]
Create placeholders for export; subclasses that need more
placeholders should override this method.
Parameters:batch_size – number of samples to predict in one session run
Returns:a dict of name (string) to placeholders
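The create_placeholder contract is a dict from input name to a batch-shaped placeholder. A self-contained sketch using plain (kind, shape) tuples in place of TF placeholders, with illustrative names and shapes:

```python
# Sketch of create_placeholder(batch_size): return a dict mapping input
# names to batch-shaped placeholders. Plain tuples stand in for TF
# placeholders; the keys and shapes are assumptions for illustration.

def create_placeholder(batch_size):
    return {
        'image': ('placeholder', [batch_size, None, None, 3]),
        'image_shape': ('placeholder', [batch_size, 3]),
    }

ph = create_placeholder(8)
print(sorted(ph))  # ['image', 'image_shape']
```

Subclasses that feed extra inputs at serving time would override this and add entries to the returned dict.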
declare_feature_label()[source]
decode_record(record_string)[source]

Decode data into a tensor_dict.

Parameters:record_string – serialized record to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict
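The decode_record contract across all subclasses is one serialized record in, one flat dict holding both features and labels out. A self-contained sketch of that shape; JSON stands in for the tf.Example protos easy_vision actually decodes, and the key names are assumptions:

```python
import json

# Illustrative decode_record: parse one serialized record into a
# tensor_dict holding both features and labels. JSON stands in for
# tf.Example so the sketch stays dependency-free.

def decode_record(record_string):
    record = json.loads(record_string)
    return {
        'image': record['image'],  # feature
        'label': record['label'],  # label
    }

tensor_dict = decode_record('{"image": [1, 2, 3], "label": 7}')
print(sorted(tensor_dict))  # ['image', 'label']
```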
export_fn(export_config)[source]
shard(filenames)[source]
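shard(filenames) presumably splits the input files across workers in a distributed run. A minimal round-robin sketch; the explicit worker_index/num_workers arguments are assumptions (easy_vision would read them from the distributed run configuration):

```python
# Sketch of shard(filenames): each worker keeps every num_workers-th
# file, offset by its own index, so workers read disjoint subsets.
# worker_index and num_workers are hypothetical explicit arguments.

def shard(filenames, worker_index, num_workers):
    return [f for i, f in enumerate(filenames)
            if i % num_workers == worker_index]

files = ['part-0', 'part-1', 'part-2', 'part-3', 'part-4']
print(shard(files, worker_index=1, num_workers=2))  # ['part-1', 'part-3']
```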
easy_vision.python.input.cv_input.get_output_shapes(dataset)[source]
easy_vision.python.input.cv_input.get_output_types(dataset)[source]

easy_vision.python.input.cv_input_new

easy_vision.python.input.grid_mnist_input

class easy_vision.python.input.grid_mnist_input.GridMnistInput(dataset_config, cache_dir='./grid_mnist_cache', class_id_maps=None)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config, cache_dir='./grid_mnist_cache', class_id_maps=None)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
declare_feature_label()[source]
decode_record(record_string)[source]

Decode data into a tensor_dict.

Parameters:record_string – serialized record to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict
grid_mnist_generator(class_id_maps, is_training=True, class_ids_map=None)[source]
prepare_mnist_data()[source]

easy_vision.python.input.multilabel_classification_input

class easy_vision.python.input.multilabel_classification_input.MultiLabelClassificationInput(dataset_config)[source]

Bases: easy_vision.python.input.classification_input.ClassificationInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
declare_feature_label()[source]
decode_record(record_string)[source]

Decode data into a tensor_dict.

Parameters:record_string – serialized record to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict

easy_vision.python.input.segmentation_dataset

class easy_vision.python.input.segmentation_dataset.SegmentationInput(dataset_config)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
declare_feature_label()[source]
decode_record(record_string)[source]

Decode data into a tensor_dict.

Parameters:record_string – serialized record to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict

easy_vision.python.input.text_detection_input

class easy_vision.python.input.text_detection_input.TextDetectionInput(dataset_config)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
declare_feature_label()[source]
decode_record(tf_example_string_tensor)[source]

Decode data into a tensor_dict.

Parameters:tf_example_string_tensor – serialized tf.Example string tensor to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict

easy_vision.python.input.text_end2end_input

class easy_vision.python.input.text_end2end_input.TextEnd2EndInput(dataset_config)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
declare_feature_label()[source]
decode_record(tf_example_string_tensor)[source]

Decode data into a tensor_dict.

Parameters:tf_example_string_tensor – serialized tf.Example string tensor to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict

easy_vision.python.input.text_recognition_input

class easy_vision.python.input.text_recognition_input.TextRecognitionInput(dataset_config)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
declare_feature_label()[source]
decode_record(tf_example_string_tensor)[source]

Decode data into a tensor_dict.

Parameters:tf_example_string_tensor – serialized tf.Example string tensor to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict

easy_vision.python.input.text_rectification_input

class easy_vision.python.input.text_rectification_input.TextRectificationInput(dataset_config)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
declare_feature_label()[source]
decode_record(tf_example_string_tensor)[source]

Decode data into a tensor_dict.

Parameters:tf_example_string_tensor – serialized tf.Example string tensor to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict

easy_vision.python.input.video_classification_input

class easy_vision.python.input.video_classification_input.VideoClassificationInput(dataset_config)[source]

Bases: easy_vision.python.input.video_input.VideoInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
convert_to_tensor(tensor, default_value)[source]
classmethod create_class(name)
create_placeholder(batch_size, export_video_preprocess)[source]
Create placeholders for export; subclasses that need more
placeholders should override this method.
Parameters:
  • batch_size – number of samples to predict in one session run
  • export_video_preprocess – bool, whether to export the preprocessing graph
Returns:a dict of name (string) to placeholders

declare_feature_label()[source]
decode_record(tf_example_string_tensor)[source]

Decode data into a tensor_dict.

Parameters:tf_example_string_tensor – serialized tf.Example string tensor to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict
export_fn(export_config)[source]

easy_vision.python.input.video_input

class easy_vision.python.input.video_input.VideoInput(dataset_config)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
create_placeholder(batch_size)[source]
Create placeholders for export; subclasses that need more
placeholders should override this method.
Parameters:batch_size – number of samples to predict in one session run
Returns:a dict of name (string) to placeholders
export_fn(export_config)[source]

easy_vision.python.input.voc_input

class easy_vision.python.input.voc_input.VOCInput(dataset_config)[source]

Bases: easy_vision.python.input.cv_input.CVInput

__init__(dataset_config)[source]

Init input object

Parameters:dataset_config – easy-vision protobuf object, easy_vision.python.protos.dataset_pb2.DatasetConfig
classmethod create_class(name)
declare_feature_label()[source]
decode_record(record_string)[source]

Decode data into a tensor_dict.

Parameters:record_string – serialized record to be decoded
Returns:a dict of tensors containing both features and labels
Return type:tensor_dict