vtem.vtdm.decode_video
- vtem.vtdm.decode_video(spikefile, Dswfilename, dirichfilename, start_time, end_time, dt, Mx, My=None, Mt=None, domain=None, Wx=None, Wy=None, Wt=None, lamb=0.0, dtype=numpy.float64, rnn=False, alpha=50000, steps=4000, stitching=False, stitch_interval=None, spatial_stitching=False, spatial_interval=[96, 96], precompute=True, write_blocks=False, output_format=0, output='rec')
Reconstruct a video using VTDM with the Dirichlet kernel, assuming IAF (integrate-and-fire) neurons.
Parameters: spikefile : string
the file generated by VTEM containing spike info
Dswfilename : string
file generated by VTDM_prep
dirichfilename : string
file generated by VTDM_prep
start_time : float
the starting time of the segment to reconstruct
end_time : float
the ending time of the segment to reconstruct
dt : float
the interval between two consecutive frames in the output
Mx : integer
order of the Dirichlet space in the x variable; must be the same as used in VTDM_prep
My : integer, optional
order of the Dirichlet space in the y variable; must be the same as used in VTDM_prep. If not specified, My = Mx
Mt : integer, optional
order of the Dirichlet space in the t variable. If not specified, it will be inferred from spikefile
domain : list, optional
4-list or 4-tuple [xstart, xend, ystart, yend], the spatial domain to recover; must be the same as in VTDM_prep
Wx : float, optional
bandwidth in the x variable. If not specified, will use the info in spikefile
Wy : float, optional
bandwidth in the y variable. If not specified, will use the info in spikefile
Wt : float, optional
bandwidth in the t variable. If not specified, will use the info in spikefile
lamb : float, optional
smoothing coefficient lambda
dtype : optional
np.float128 or np.float64, data type of the output. If not specified, will be set to np.float64
stitching : bool, optional
True if the recovery algorithm should use stitching, False otherwise. If not specified, will be set to False. Stitching should ideally be used only for long videos.
stitch_interval : integer, optional
If stitching is set to True, stitch_interval sets the individual segment lengths. If not specified, it will default to an interval corresponding to 20 frames. If a large video is being decoded on a single GPU, this should be explicitly set to a lower value.
spatial_stitching : bool, optional
True if the recovery algorithm should use spatial stitching; should be set to True for high-resolution videos. Spatial stitching will be disabled if the domain of recovery is smaller than 96x96 pixels. If not specified, will be set to False.
spatial_interval : list, optional
If spatial_stitching is set to True, determines the domain for spatial stitching. If not specified, will default to [96, 96] pixels.
precompute : bool, optional
If set to True, inner products will be calculated and stored in an h5 file before the VTDM call. If not specified, will default to True
write_blocks : bool, optional
Only used if spatial_stitching is True. If set to True, the individual blocks will also be written to disk. If not specified, will be set to False
output_format : integer, optional
0 to write the recovered video to an avi file; 1 to write the recovered video to an h5 file; anything else to not write the recovered video to disk. If not specified, will be set to 0
output : string, optional
output filename in which the reconstructed video will be stored. If not specified, will be set to “rec”
Returns: The recovered video as a numpy array
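The length of the time axis of the returned array follows from start_time, end_time, and dt; a minimal sketch of the arithmetic (the exact rounding convention used by decode_video is an assumption, not taken from the library):

```python
# Frame count implied by the reconstruction window and frame interval dt.
# The rounding convention here is an assumption, not taken from vtdm itself.
start_time, end_time, dt = 0.0, 1.0, 0.01
num_frames = int(round((end_time - start_time) / dt))
print(num_frames)  # 100
```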
Notes
On a single device with 3 GB of memory, around 27000 spikes can be decoded. Spatial stitching and temporal stitching can be used to recover a video that produces more spikes than can be decoded on a single GPU. See the Example below.
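As a rough illustration of how temporal stitching partitions a long reconstruction window into GPU-sized segments (the function below is a hypothetical sketch, not the library's internal logic):

```python
# Hypothetical sketch of temporal stitching: split [start_time, end_time]
# into consecutive segments of length stitch_interval, each short enough
# to decode on a single GPU. The actual segmentation inside decode_video
# may differ.
def temporal_segments(start_time, end_time, stitch_interval):
    segments = []
    t = start_time
    while t < end_time:
        segments.append((t, min(t + stitch_interval, end_time)))
        t += stitch_interval
    return segments

# A 1 s window with 0.2 s segments yields five segments.
segments = temporal_segments(0.0, 1.0, 0.2)
print(len(segments))  # 5
```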
The coordinate system is given by the following:
row (width / X) major

    -----------------------------------------> width (X)
  | -------------------->
  | -------------------->
Y | -------------------->
  | -------------------->
  v
 height

Each row runs along the width (X) axis; rows are stacked along the height (Y) axis.
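The layout above can be checked with a small numpy snippet; the (row, column) indexing shown is the standard numpy convention matching the diagram:

```python
import numpy as np

# A single frame in this layout: axis 0 runs down the height (Y),
# axis 1 runs across the width (X), and rows are stored contiguously.
height, width = 4, 6
frame = np.arange(height * width).reshape(height, width)

print(frame.shape)  # (4, 6), i.e. (height, width)
print(frame[0, :])  # first row: values increase along X (the width)
print(frame[2, 3])  # element at Y=2, X=3 -> 15
```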
Examples
>>> import atexit
>>> import pycuda.driver as cuda
>>> import numpy as np
>>> from vtem import vtem, vtdm
>>> cuda.init()
>>> context1 = cuda.Device(0).make_context()
>>> atexit.register(cuda.Context.pop)
>>> vtem.VTEM_Gabor_IAF('fly_large.avi', 'spikes.h5', 2*np.pi*10, h5input=False)
>>> Mx = 20  # Dirichlet order in x; illustrative value
>>> vtdm.decode_video('spikes.h5', 'dsw.h5', 'dirich.h5', 0, 1, 0.01, Mx,
...                   rnn=True, alpha=5000, steps=4000, dtype=np.float32,
...                   stitching=True, stitch_interval=0.2,
...                   spatial_stitching=True, spatial_interval=[80, 80],
...                   output_format=0, write_blocks=True)