easy_vision.python.core.optical_flow.tvnet¶
easy_vision.python.core.optical_flow.tvnet.spatial_transformer¶
easy_vision.python.core.optical_flow.tvnet.spatial_transformer.batch_transformer(U, thetas, out_size, name='BatchSpatialTransformer')[source]¶
Batch Spatial Transformer Layer
Parameters:
- U (float) – tensor of inputs [num_batch, height, width, num_channels]
- thetas (float) – a set of transformations for each input [num_batch, num_transforms, 6]
- out_size (int) – the size of the output [out_height, out_width]
Returns: (float) Tensor of size [num_batch*num_transforms, out_height, out_width, num_channels]
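A minimal NumPy sketch of the shape bookkeeping implied by the signature above: each input is repeated once per transform, so the output batch dimension becomes num_batch*num_transforms. Variable names here are illustrative, not the library's internals.

```python
import numpy as np

# Hypothetical shape bookkeeping for batch_transformer: every one of
# the num_transforms per input gets its own copy of that input.
num_batch, height, width, num_channels = 2, 4, 4, 3
num_transforms = 5

U = np.random.rand(num_batch, height, width, num_channels)
thetas = np.random.rand(num_batch, num_transforms, 6)

# Repeat each input once per transform ...
U_rep = np.repeat(U, num_transforms, axis=0)
# ... and flatten the transforms to match the repeated batch.
thetas_flat = thetas.reshape(num_batch * num_transforms, 6)

print(U_rep.shape)        # (10, 4, 4, 3)
print(thetas_flat.shape)  # (10, 6)
```

Each (input, transform) pair can then be fed to the plain transformer, yielding the documented [num_batch*num_transforms, out_height, out_width, num_channels] output.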
easy_vision.python.core.optical_flow.tvnet.spatial_transformer.transformer(U, theta, out_size, name='SpatialTransformer', **kwargs)[source]¶
Spatial Transformer Layer
Implements a spatial transformer layer as described in [1]. Based on [2] and edited by David Dao for Tensorflow.
References
[1] Spatial Transformer Networks. Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu. Submitted on 5 Jun 2015.
[2] https://github.com/skaae/transformer_network/blob/master/transformerlayer.py
Notes
To initialize the network to the identity transform, initialize theta to:
identity = np.array([[1., 0., 0.],
                     [0., 1., 0.]])
identity = identity.flatten()
theta = tf.Variable(initial_value=identity)
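For intuition, here is what a spatial transformer computes for a single image: the output grid is pushed through the 2x3 affine theta in normalized [-1, 1] coordinates, and the input is bilinearly sampled at the resulting locations. The NumPy sketch below (function name and details are assumptions, not the library's TensorFlow implementation) follows the convention of [1].

```python
import numpy as np

def affine_grid_sample(U, theta, out_size):
    """Single-image sketch of a spatial transformer: affine warp of a
    normalized sampling grid followed by bilinear interpolation.
    Illustrative only; the library's version is batched TF code."""
    H, W = U.shape[:2]
    out_h, out_w = out_size
    theta = theta.reshape(2, 3)
    # Normalized output grid in [-1, 1], row-major.
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing='ij')
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(out_h * out_w)])
    # Source coordinates in the input, still normalized.
    src = theta @ grid                    # shape (2, out_h*out_w)
    x = (src[0] + 1) * (W - 1) / 2        # back to pixel coordinates
    y = (src[1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    # Bilinear blend of the four neighbouring pixels.
    out = (U[y0, x0].T * (1 - wx) * (1 - wy)
           + U[y0, x0 + 1].T * wx * (1 - wy)
           + U[y0 + 1, x0].T * (1 - wx) * wy
           + U[y0 + 1, x0 + 1].T * wx * wy).T
    return out.reshape(out_h, out_w, -1)
```

With the identity theta from the note above, the sampler reproduces the input image, which is exactly why that initialization is recommended.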
easy_vision.python.core.optical_flow.tvnet.tvnet¶
class easy_vision.python.core.optical_flow.tvnet.tvnet.TVNet[source]¶
Bases: object
GRAD_IS_ZERO = 1e-12¶
dual_tvl1_optic_flow(x1, x2, u1, u2, tau=0.25, lbda=0.15, theta=0.3, warps=5, max_iterations=5)[source]¶
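The tau/lbda/theta defaults follow the duality-based TV-L1 scheme of Zach, Pock and Bischof, in which the linearized data term is minimized pointwise by a closed-form thresholding step. Below is a hedged NumPy sketch of that step only; the names, array layout, and the surrounding warping and dual-variable updates are assumptions rather than the library's TF code.

```python
import numpy as np

def tvl1_threshold_step(u1, u2, rho_c, Ix, Iy, lbda, theta):
    """Pointwise closed-form minimization of the TV-L1 data term.

    rho_c is the constant part of the linearized brightness residual,
    (Ix, Iy) the warped image gradient, (u1, u2) the current flow.
    Sketch only, not the library's implementation."""
    rho = rho_c + Ix * u1 + Iy * u2          # linearized residual
    grad_sq = Ix ** 2 + Iy ** 2
    lt = lbda * theta * grad_sq              # threshold per pixel
    v1, v2 = u1.copy(), u2.copy()

    m = rho < -lt                            # step along +gradient
    v1[m] += lbda * theta * Ix[m]
    v2[m] += lbda * theta * Iy[m]
    m = rho > lt                             # step along -gradient
    v1[m] -= lbda * theta * Ix[m]
    v2[m] -= lbda * theta * Iy[m]
    # Interior case: cancel the residual exactly, guarding against a
    # vanishing gradient (compare the class constant GRAD_IS_ZERO).
    m = (np.abs(rho) <= lt) & (grad_sq > 1e-12)
    v1[m] -= rho[m] * Ix[m] / grad_sq[m]
    v2[m] -= rho[m] * Iy[m] / grad_sq[m]
    return v1, v2
```

Each of the warps outer iterations re-linearizes the residual around the current flow, and max_iterations controls the inner primal-dual loop.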
get_loss(x1, x2, tau=0.25, lbda=0.15, theta=0.3, warps=5, zfactor=0.5, max_scales=5, max_iterations=5)[source]¶
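get_loss estimates the flow coarse-to-fine: zfactor is the downsampling factor between pyramid levels and max_scales caps the number of levels. A small sketch of the pyramid sizes this implies (assumed behavior, illustrative only; the library may round or cap levels differently):

```python
# Hypothetical image pyramid for get_loss with the default
# zfactor=0.5 and max_scales=5 (illustrative, not library code).
height, width, zfactor, max_scales = 256, 320, 0.5, 5
sizes = [(int(round(height * zfactor ** s)),
          int(round(width * zfactor ** s)))
         for s in range(max_scales)]
print(sizes)
# finest to coarsest:
# [(256, 320), (128, 160), (64, 80), (32, 40), (16, 20)]
```

The flow is solved at the coarsest level first, then upscaled (and rescaled by 1/zfactor) to initialize each finer level.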