[
  {
    "path": ".gitignore",
    "content": "# Local config folders\nconfig/tt\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n/tmp\n*/latex\n*/html\n\n# C extensions\n*.so\ndata_generator/voxel_generation/build\n\n# Distribution / packaging\n.Python\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# dotenv\n.env\n\n# virtualenv\n.venv\nvenv/\nENV/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n\n# input data, saved log, checkpoints\ndata/\ninput/\nsaved/\ndatasets/\n\n# editor, os cache directory\n.vscode/\n.idea/\n__MACOSX/\n\n# outputs\n*.jpg\n*.jpeg\n*.h5\n*.swp\n\n# dirs\n/configs/r2\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2020 Timo Stoffregen\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# event_utils\nEvent based vision utility library. For additional detail, see the thesis document [Motion Estimation by Focus Optimisation: Optic Flow and Motion Segmentation with Event Cameras](https://timostoff.github.io/thesis). If you use this code in an academic context, please cite:\n```\n@PhDThesis{Stoffregen20Thesis,\n  author        = {Timo Stoffregen},\n  title         = {Motion Estimation by Focus Optimisation: Optic Flow and Motion Segmentation with Event Cameras},\n  school        = {Department of Electrical and Computer Systems Engineering, Monash University},\n  year          = 2020\n}\n```\n\nThis is an event based vision utility library with functionality for focus optimisation, deep learning, event-stream noise augmentation, data format conversion and efficient generation of various event representations (event images, voxel grids etc).\n\nThe library is implemented in Python. Nevertheless, the library is efficient and fast, since almost all of the hard work is done using vectorisation or numpy/pytorch functions. All functionality is implemented in numpy _and_ pytorch, so that on-GPU processing for hardware accelerated performance is very easy. \n\nThe library is divided into eight sub-libraries:\n```\n└── lib\n    ├── augmentation\n    ├── contrast_max\n    ├── data_formats\n    ├── data_loaders\n    ├── representations\n    ├── transforms\n    ├── util\n    └── visualization\n```\n\n## augmentation\nWhile the `data_loaders` learning library contains some code for tensor augmentation (such as adding Gaussian noise, rotations, flips, random crops etc), the augmentation library allows for these operations to occur on the raw events.\nThis functionality is contained within `event_augmentation.py`.\n### `event_augmentation.py`\nThe following augmentations are available:\n* `add_random_events`: Generates N new random events, drawn from a uniform distribution over the size of the spatiotemporal volume.\n* `remove_events`: Makes the event stream more sparse, by removing a random selection of N events from the original event stream.\n* `add_correlated_events`: Makes the event stream more dense by adding N new events around the existing events.\nEach original event is fitted with a Gaussian bubble with standard deviation `sigma_xy` in the `x,y` dimension and `sigma_t` in the `t` dimension.\nNew events are drawn from these distributions.\nNote that this also 'blurs' the event stream.\n* `flip_events_x`: Flip events over x axis.\n* `flip_events_y`: Flip events over y axis.\n* `crop_events`: Spatially crop events either randomly, to a desired amount and either from the origin or as a center crop.\n* `rotate_events`: Rotate events by angle `theta` around a center of rotation `a,b`.\nEvents can then optionally be cropped in the case that they overflow the sensor resolution.\nSome possible augmentations are shown below:\nSince the augmentations are implemented using vectorisation, the heavy lifting is done in optimised C/C++ backends and is thus very fast.\n\n![Augmentation examples](https://github.com/TimoStoff/event_utils/blob/master/.images/augmentation.png)\n\nSome examples of augmentations on the `slider_depth` sequence from the [event camera dataset](http://rpg.ifi.uzh.ch/davis_data.html) can be seen above (events in red and blue with the first events in black to show scene structure). (a) the original event stream, (b) doubling the events by adding random _correlated_ events, (c) doubling the events by adding fully random (normal distribution) events, (d) halving the events by removing random, (e) flipping the events horizontally, (f) rotating the events 45 degrees. Demo code to reproduce these plots can be found by executing the following (note that the events need to be in HDF5 format):\n```python lib/augmentation/event_augmentation.py /path/to/slider_depth.h5 --output_path /tmp```\n\n## contrast_max\nThe focus optimisation library contains code that allows the user to perform focus optimisation on events.\nThe important files of this library are:\n`events_cmax.py`\nThis file contains code to perform focus optimisation.\nThe most important functionality is provided by:\n* `grid_search_optimisation`: Performs the grid search optimisation from [SOFAS algorithm](https://arxiv.org/abs/1805.12326).\n* `optimize`: Performs gradient based focus optimisation on the input events, given an objective function and motion model.\n* `grid_cmax`: Given a set of events, splits the image plane into ROI of size `roi_size`.\n\tPerforms focus optimisation on each ROI separately.\n* `segmentation_mask_from_d_iwe`: Retrieve a segmentation mask for the events based on dIWE/dWarpParams.\n* `draw_objective_function`: Draw the objective function for a given set of events, motion model and objective function.\nProduces plots as in below image.\n* `main`: Demo showing various capabilities and code examples.\n\n![Focus Optimisation](https://github.com/TimoStoff/event_utils/blob/master/.images/cmax.png)\n\nExamples can be seen in the images above: each set of events is drawn with the variance objective function (w.r.t. optic flow motion model) underneath. This set of tools allows optimising the objective function to recover the motion parameters (images generated with the library). \n\n### `objectives.py`\nThis file implements various objective functions described in this thesis as well as some other commonly cited works.\nObjective functions inherit from the parent class `objective_function`.\nThe idea is to make it as easy as possible to add new, custom objective functions by providing a common API for the optimisation code.\nThis class has several members that require initialisation:\n* `name`: The name of the objective function (eg `variance`).\n* `use_polarity`: Whether to use the polarity of the events in generating IWEs.\n* `has_derivative`: Whether this objective has an analytical derivative w.r.t. warp parameters.\n* `default_blur`: What `sigma` should be default for blurring.\n* `adaptive_lifespan`: An innovative feature to deal with linearisation errors. \n\tMany implementations of contrast maximisation use assumptions of linear motion w.r.t. the chosen motion model. \n\tA given estimate of the motion parameters implies a lifespan of the events. \n\tIf `adaptive_lifespan`: is True, the number of events used during warping is cut to that lifespan for each optimisation step, computed using `pixel_crossings`.\n\teg If motion model is optic flow velocity and the estimate = 12 pixels/second and `pixel_crossings`=3, then the lifespan will be 3/12=0.25s.\n* `pixel_crossings`: Number of pixel crossings used to calculate lifespan.\n* `minimum_events`: The minimal number of events that the lifespan can cut to.\nThe required function that inheriting classes need to implement are:\n* `evaluate_function`: Evaluate the objective function for given parameters, events etc.\n* `evaluate_gradient`: Evaluate the objective function and the gradient of the objective function w.r.t. motion parameters for given parameters, events etc.\nThe objective functions implemented in this file are:\n* `variance_objective`: Variance objective (see [Accurate Angular Velocity Estimation with an Event Camera](https://www.zora.uzh.ch/id/eprint/138896/1/RAL16_Gallego.pdf)).\n* `rms_objective`: Root Mean Squared objective.\n* `sos_objective`: See [Event Cameras, Contrast Maximization and Reward Functions: An Analysis](https://openaccess.thecvf.com/content_CVPR_2019/html/Stoffregen_Event_Cameras_Contrast_Maximization_and_Reward_Functions_An_Analysis_CVPR_2019_paper.html)\n* `soe_objective`: See [Event Cameras, Contrast Maximization and Reward Functions: An Analysis](https://openaccess.thecvf.com/content_CVPR_2019/html/Stoffregen_Event_Cameras_Contrast_Maximization_and_Reward_Functions_An_Analysis_CVPR_2019_paper.html)\n* `moa_objective`: See [Event Cameras, Contrast Maximization and Reward Functions: An Analysis](https://openaccess.thecvf.com/content_CVPR_2019/html/Stoffregen_Event_Cameras_Contrast_Maximization_and_Reward_Functions_An_Analysis_CVPR_2019_paper.html)\n* `soa_objective`: See [Event Cameras, Contrast Maximization and Reward Functions: An Analysis](https://openaccess.thecvf.com/content_CVPR_2019/html/Stoffregen_Event_Cameras_Contrast_Maximization_and_Reward_Functions_An_Analysis_CVPR_2019_paper.html)\n* `sosa_objective`: See [Event Cameras, Contrast Maximization and Reward Functions: An Analysis](https://openaccess.thecvf.com/content_CVPR_2019/html/Stoffregen_Event_Cameras_Contrast_Maximization_and_Reward_Functions_An_Analysis_CVPR_2019_paper.html)\n* `zhu_timestamp_objective`: Objective function defined in [Unsupervised event-based learning of optical flow, depth, and egomotion](https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhu_Unsupervised_Event-Based_Learning_of_Optical_Flow_Depth_and_Egomotion_CVPR_2019_paper.pdf).\n* `r1_objective`: Combined objective function R1 [Event Cameras, Contrast Maximization and Reward Functions: An Analysis](https://openaccess.thecvf.com/content_CVPR_2019/html/Stoffregen_Event_Cameras_Contrast_Maximization_and_Reward_Functions_An_Analysis_CVPR_2019_paper.html)\n* `r2_objective`: Combined objective function R2 [Event Cameras, Contrast Maximization and Reward Functions: An Analysis](https://openaccess.thecvf.com/content_CVPR_2019/html/Stoffregen_Event_Cameras_Contrast_Maximization_and_Reward_Functions_An_Analysis_CVPR_2019_paper.html)\n\n### `warps.py`\nThis file implements warping functions described in this thesis as well as some other commonly cited works.\nObjective functions inherit from the parent class `warp_function`.\nThe idea is to make it as easy as possible to add new, custom warping functions by providing a common API for the optimisation code.\nInitialisation requires setting member variables:\n* `name`: Name of the warping function, eg `optic_flow`.\n* `dims`: DoF of the warping function.\nThe only function that needs to be implemented by inheriting classes is `warp`, which takes events, a reference time and motion parameters as input.\nThe function then returns a list of the warped event coordinates as well as the Jacobian of each event w.r.t. the motion parameters.\nWarp functions currently implemented are:\n* `linvel_warp`: 2-DoF optic flow warp.\n* `xyztheta_warp`: 4-DoF warping function from [Event-based moving object detection and tracking](https://arxiv.org/abs/1803.04523) (`x,y,z`) velocity and angular velocity `theta` around the origin).\n* `pure_rotation_warp`: 3-DoF pure rotation warp (`x,y,theta` where `x,y` are the center of rotation and `theta` is the angular velocity).\n\n## `data_formats`\nThe `data_formats` provides code for converting events in one file format to another.\nEven though many candidates have appeared over the years (rosbag, AEDAT, .txt, `hdf5`, pickle, cuneiform clay tablets, just to name a few), a universal storage option for event based data has not yet crystallised.\nSome of these data formats are particularly useful within particular operating systems or programming languages.\nFor example, rosbags are the natural choice for C++ programming with the `ros` environment.\nSince they also store data in an efficient binary format, they have become a very common storage option.\nHowever, they are notoriously slow and impractical to process in Python, which has become the de-facto deep-learning language and is commonly used in research due to the rapid development cycle.\nMore practical (and importantly, fast) options are the `hdf5` and numpy memmap formats.\n`hdf5` is a more compact and easily accessible format, since it allows for easy grouping and metadata allocation, however it's difficulty in setting up multi-threading access and subsequent buggy behaviour (even in read-only applications) means that memmap is more common for deep learning, where multi-threaded data-loaders can significantly speed up training.\n\n### `event_packagers.py`\nThe `data_formats` library provides a `packager` abstract base class, which defines what a `packager` needs to do.\n`packager`objects receive data (events, frames etc) and write them to the desired file format (eg `hdf5`).\nConverting file formats is now much easier, since input files now need only to be parsed and the data sent to the `packager`with the appropriate function calls.\nThe functions that need to implemented are:\n* `package_events` A function which given events, writes them to the file/buffer.\n* `package_image` A function which given images, writes them to the file/buffer.\n* `package_flow` A function which given optic flow frames, writes them to the file/buffer.\n* `add_metadata` Writes metadata to the file (number of events, number of negative/positive events, duration of sequence, start time, end time, number of images, number of optic flow frames).\n* `set_data_available` What data is available and needs to be written (ie events, frames, optic flow).\nA `packager` for `hdf5` and memmap is implemented.\n### `h5_to_memmap.py` and `rosbag_to_h5.py`\nThe library implements two converters, one for `hdf5` to memmap and one for rosbag to `hdf5`.\nThese can be easily called from the command line with various options that can be found in the documentation.\n### `add_hdf5_attribute.py`\n`add_hdf5_attribute.py` allows the user to add or modify attributes to existing `hdf5` files.\nAttributes are the manner in which metadata is saved in `hdf5` files.\n### `read_events.py`\n`read_events.py` contains functions for reading events from `hdf5` and memmap.\nThe functions are:\n* `read_memmap_events`.\n* `read_h5_events`.\n\n## `data_loader`\nThe deep learning code can be found in the `data_loaders`library.\nIt contains code for loading events and transforming them into voxel grids in an efficient manner as well as code for data augmentation.\nActual networks and cost functions described in this thesis are not implemented in the library but at the project page for that paper.\n\n`data_loaders` provides a highly versatile `pytorch` dataloader, which can be used across various storage formats for events (.txt, `hdf5`, memmap etc).\nAs a result it is very easy to implement new dataloader for a different storage format.\nThe output of the dataloader was originally to provide voxel grids of the events, but can be used just as well to output batched events, due to a custom `pytorch`collation function.\nAs a result, the dataloader is useful for any situation in which it is desirable to iterate over the events in a storage medium and is not only useful for deep learning.\nFor instance, if one wants to iterate over the events that lie between all the frames of a `davis` sequence, the following code is sufficient:\n```\ndloader = DynamicH5Dataset(path_to_events_file)\nfor item in dloader:\n\tprint(item[`events'].shape)\n```\n\n### `base_dataset.py`\nThis file defines the base dataset class (`BaseVoxelDataset`), which defines all batching, augmentation, collation and housekeeping code.\nInheriting classes (one per data format) need only to implement the abstract functions for providing events, frames and other data from storage.\nThese abstract functions are:\n* `get_frame(self, index)` Given an index `n`, return the `n`th frame.\n* `get_flow(self, index)` Given an index `n`, return the `n`th optic flow frame.\n* `get_events(self, idx0, idx1)` Given a start and end index `idx0` and `idx1`, return all events between those indices.\n* `load_data(self, data_path)` Function which is called once during initialisation, which creates handles to files and sets several class attributes (number of frames, events etc).\n* `find_ts_index(self, timestamp)` Given a timestamp, get the index of the nearest event.\n* `ts(self, index)` Given an event index, return the timestamp of that event.\nThe function `load_data`must set the following member variables:\n* `self.sensor_resolution` Event sensor resolution.\n* `self.has_flow` Whether or not the data has optic flow frames.\n* `self.t0` The start timestamp of the events.\n* `self.tk` The end timestamp of the events.\n* `self.num_events` The number of events in the dataset.\n* `self.frame_ts` The timestamps of the time-synchronised frames.\n* `self.num_frames` The number of frames in the dataset.\nThe constructor of the class takes following arguments:\n* `data_path` Path to the file containing the event/image data.\n* `transforms` Python dict containing the desired augmentations.\n* `sensor_resolution` The size of the image sensor.\n* `num_bins` The number of bins desired in the voxel grid.\n* `voxel_method` Which method should be used to form the voxels.\n* `max_length` If desired, the length of the dataset can be capped to `max_length` batches.\n* `combined_voxel_channels` If True, produces one voxel grid for all events, if False, produces separate voxel grids for positive and negative channels.\n* `return_events` If true, returns events in output dict.\n* `return_voxelgrid` If true, returns voxel grid in output dict.\n* `return_frame` If true, returns frames in output dict.\n* `return_prev_frame` If true, returns previous batch's frame to current frame in output dict.\n* `return_flow` If true, returns optic flow in output dict.\n* `return_prev_flow` If true, returns previous batch's optic flow to current optic flow in output dict.\n* `return_format` Which output format to use (options=`'numpy'` and `'torch'`).\nThe parameter `voxel_method` defines how the data is to be batched.\nFor instance, one might wish to have data returned in windows `t` seconds wide, or to always get all data between successive `aps` frames.\nThe method is given as a dict, as some methods have additional parametrisations.\nThe current options are:\n* `k_events` Data is returned every `k` events.\n\tThe dict is given in the format `method = {'method': 'k_events', 'k': value_for_k, 'sliding_window_w': value_for_sliding_window}`.\n\tThe parameter `sliding_window_w` defines by how many events each batch overlaps.\n* `t_seconds` Data is returned every `t` seconds.\n\tThe dict is given in the format `method = {'method': 't_seconds', 't': value_for_t, 'sliding_window_t': value_for_sliding_window}`.\n\tThe parameter `sliding_window_t` defines by how many seconds each batch overlaps.\n* `between_frames` All data between successive frames is returned.\n\tRequires time-synchronised frames to exist.\n\tThe dict is given in the format `method={'method':'between_frames'}`.\nGenerating the voxel grids can be done very efficiently and on the `gpu` (if the events have been loaded there) using the `pytorch` function `target.index_put_(index, value, accumulate=True)`.\nThis function puts values from `value` into `target` using the indices specified in `indices` using highly optimised C++ code in the background.\n`accumulate` specifies if values in `value` which get put in the same location on `target` should sum (accumulate) or overwrite one another.\nIn summary, `BaseVoxelDataset` allows for very fast, on-device data-loading and on-the-fly voxel grid generation.\n\n## `representations`\nThis library contains code for generating representations from the events in a highly efficient, `gpu` ready manner.\n![Representations](https://github.com/TimoStoff/event_utils/blob/master/.images/representations.png)\nVarious representations can be seen above with (a) the raw events, (b) the voxel grid, (c) the event image, (d) the timestamp image.\n### `voxel_grid.py`\nThis file contains several means for forming and viewing voxel grids from events.\nThere are two versions of each function, representing a pure `numpy` and a `pytorch` implementation.\nThe `pytorch` implementation is necessary for `gpu` processing, however it is not as commonly used as `numpy`, which is so frequently used as to barely be a dependency any more.\nFunctions for `pytorch` are:\n* `voxel_grids_fixed_n_torch` Given a set of `n` events, return a voxel grid with `B` bins and with a fixed number of events.\n* `voxel_grids_fixed_t_torch` Given a set of events and a duration `t`, return a voxel grid with `B` bins and with a fixed temporal width `t`.\n* `events_to_voxel_timesync_torch` Given a set of events and two times `t_0` and `t_1`, return a voxel grid with `B` bins from the events between `t_0` and `t_1`.\n* `events_to_voxel_torch` Given a set of events, return a voxel grid with `B` bins from those events.\n* `events_to_neg_pos_voxel_torch` Given a set of events, return a voxel grid with `B` bins from those events.\nPositive and negative events are formed into two separate voxel grids.\nFunctions for `numpy` are:\n* `events_to_voxel` Given a set of events, return a voxel grid with `B` bins from those events.\n* `events_to_neg_pos_voxel` Given a set of events, return a voxel grid with `B` bins from those events.\nPositive and negative events are formed into two separate voxel grids.\nAdditionally:\n* `get_voxel_grid_as_image`Returns a voxel grid as a series of images, one for each bin for display.\n* `plot_voxel_grid` Given a voxel grid, display it as an image.\nVoxel grids can be formed both using spatial and temporal interpolation between the bins.\n### `image.py`\n`image.py` contains code for forming images from events in an efficient manner.\nThe functions allow for forming images with both discrete and floating point events using bilinear interpolation.\nImages currently supported are event images and timestamp images using either `numpy` or `pytorch`.\nFunctions are:\n* `events_to_image` Form an image from events using `numpy`.\nAllows for bilinear interpolation while assigning events to pixels and padding of the image or clipping of events for events which fall outside of the range.\n* `events_to_image_torch` Form an image from events using `pytorch`.\nAllows for bilinear interpolation while assigning events to pixels and padding of the image or clipping of events for events which fall outside of the range.\n* `image_to_event_weights` Given an image and a set of event coordinates, get the pixel value of the image for each event using reverse bilinear interpolation.\n* `events_to_image_drv` Form an image from events and the derivative images from the event Jacobians (with options for padding the image or clipping out-of-range events).\nOf particular use for `cmax` where analytic gradients motion models are known.\n* `events_to_timestamp_image` Method to generate the average timestamp images from [Unsupervised event-based learning of optical flow, depth, and egomotion](https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhu_Unsupervised_Event-Based_Learning_of_Optical_Flow_Depth_and_Egomotion_CVPR_2019_paper.pdf) using `numpy`.\nReturns two images, one for negative and one for positive events.\n* `events_to_timestamp_image_torch` Method to generate the average timestamp images from [Unsupervised event-based learning of optical flow, depth, and egomotion](https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhu_Unsupervised_Event-Based_Learning_of_Optical_Flow_Depth_and_Egomotion_CVPR_2019_paper.pdf) using `pytorch`.\nReturns two images, one for negative and one for positive events.\n\n## `util`\nThis library contains some utility functions used in the rest of the library.\nFunctions include:\n* `infer_resolution` Given events, guess the resolution by looking at the max and min values.\n* `events_bounds_mask` Get a mask of the events that are within given bounds.\n* `clip_events_to_bounds` Clip events to the given bounds.\n* `cut_events_to_lifespan` Given motion model parameters, compute the speed and thus the lifespan, given a desired number of pixel crossings.\n* `get_events_from_mask` Given an image mask, return the indices of all events at each location in the mask.\n* `binary_search_h5_dset` Binary search for a timestamp in an `hdf5` event file, without loading the entire file into RAM.\n* `binary_search_torch_tensor` Binary search implemented for `pytorch` tensors (no native implementation exists).\n* `remove_hot_pixels` Given a set of events, removes the `hot' pixel events. Accumulates all of the events into an event image and removes the `num_hot` highest value pixels.\n* `optimal_crop_size` Find the optimal crop size for a given `max_size` and `subsample_factor`. The optimal crop size is the smallest integer which is greater or equal than `max_size`, while being divisible by 2^`max_subsample_factor`.\n* `plot_image_grid` Given a list of images, stitch them into a grid and display/save the grid.\n* `flow2bgr_np` Turn optic flow into an RGB image.\n\n## `visualisation`\nThe `visualization` library contains methods for generating figures and movies from events.\nThe majority of figures shown in the thesis were generated using this library.\nTwo rendering backends are available, the commonly used `matplotlib` plotting library and `mayavi`, which is a VTK based graphics library.\nThe API for both of these is essentially the same, the main difference being the dependency on `matplotlib` or `mayavi`.\n`matplotlib` is very easy to set up, but quite slow, `mayavi` is very fast but more difficult to set up and debug.\nI will describe the `matplotlib` version here, although all functionality exists in the `mayavi` version too (see the code documentation for details).\n### `draw_event_stream.py`\nThe core work is done in this file, which contains code for visualising events and voxel grids for examples).\nThe function for plotting events is `plot_events`.\n\\input{figures/appendix/visualisations/fig.tex}\nInput parameters for this function are:\n* `xs` x coords of events.\n* `ys` y coords of events.\n* `ts` t coords of events.\n* `ps` p coords of events.\n* `save_path` If set, will save the plot to here\n* `num_compress` Takes `num_compress` events from the beginning of the sequence and draws them in the plot at time `t=0` in black.\n\tThis aids visibility (see the augmentation examples).\n* `compress_front` If True, display the compressed events in black at the front of the spatiotemporal volume rather than the back\n* `num_show` Sets the number of events to plot.\n\tIf set to -1 will plot all of the events (can be potentially expensive).\n\tOtherwise, skips events in order to achieve the desired number of events\n* `event_size` Sets the size of the plotted events.\n* `elev` Sets the elevation of the plot.\n* `azim` Sets the azimuth of the plot.\n* `imgs` A list of images to draw into the spatiotemporal volume.\n* `img_ts` A list of the position on the temporal axis where each image from `imgs` is to be placed.\n* `show_events` If False, will not plot the events (only images).\n* `show_plot` If True, display the plot in a `matplotlib` window as well as saving to disk.\n* `crop` A crop, if desired, of the events and images to be plotted.\n* `marker` Which marker should be used to display the events (default is '.', which results in points, but circles 'o' or crosses 'x' are among many other possible options).\n* `stride` Determines the pixel stride of the image rendering (1=full resolution, but can be quite resource intensive).\n* `invert` Inverts the colour scheme for black backgrounds.\n* `img_size` The size of the sensor resolution. Inferred if empty.\n* `show_axes` If True, draw axes onto the plot.\nThe analogous function for plotting voxel grids is:\n* `xs` x coords of events.\n* `ys`y coords of events.\n* `ts` t coords of events.\n* `ps` p coords of events.\n* `bins` The number of bins to have in the voxel grid.\n* `frames` A list of images to draw into the plot with the voxel grid.\n* `frame_ts` A list of the position on the temporal axis where each image from `frames` is to be placed.\n* `sensor_size` Event sensor resolution.\n* `crop` A crop, if desired, of the events and images to be plotted.\n* `elev` Sets the elevation of the plot.\n* `azim` Sets the azimuth of the plot.\nTo plot successive frames in order to generate video, the function `plot_events_sliding` can be used.\nEssentially, this function renders a sliding window of the events, for either the event or voxel visualisation modes.\nSimilarly, `plot_between_frames` can be used to render all events between frames, with the option to skip every `n`th event.\nTo generate such plots from the command line, the library provides the scripts:\n* `visualize_events.py`\n* `visualize_voxel.py`\n* `visualize_flow.py`\nThese provide a range of documented commandline arguments with sensble defaults from which plots of the events, voxel grids and events with optic flow overlaid can be generated.\nFor example,\n```python visualize_events.py /path/to/slider_depth.h5```\nproduces plots of the `slider_depth` sequence.\nInvoking:\n```python visualize_voxel.py /path/to/slider_depth.h5```\nproduces voxels of the `slider_depth` sequence.\n\\input{figures/appendix/slider_vis/fig.tex}\n![Visualisation](https://github.com/TimoStoff/event_utils/blob/master/.images/visualisations.png)\nTypical visualisations are shown above: the `slider_depth` sequence is drawn as successive frames of events (top) and voxels (bottom).\n"
  },
  {
    "path": "__init__.py",
    "content": "# __init__.py\n"
  },
  {
    "path": "lib/augmentation/__init__.py",
    "content": ""
  },
  {
    "path": "lib/augmentation/event_augmentation.py",
    "content": "import numpy as np\nfrom lib.representations.voxel_grid import events_to_neg_pos_voxel\nfrom lib.data_formats.read_events import read_h5_event_components\nfrom lib.visualization.draw_event_stream import plot_events\nfrom lib.util.event_util import clip_events_to_bounds\nimport matplotlib.pyplot as plt\n\ndef sample(cdf, ts):\n    \"\"\"\n    Given a cumulative density function (CDF) and timestamps, draw\n    a random sample from the CDF then find the index of the corresponding\n    event. The idea is to allow fair sampling of an event streams timestamps\n    @param cdf The CDF as np array\n    @param ts The timestamps to sample from\n    @returns The index of the sampled event\n    \"\"\"\n    minval = cdf[0]\n    maxval = cdf[-1]\n    rnd = np.random.uniform(minval, maxval)\n    idx = np.searchsorted(ts, rnd)\n    return idx\n\ndef events_to_block(xs, ys, ts, ps):\n    \"\"\"\n    Given events as lists of components, return a 4xN numpy array of the events\n    where N is the number of events\n    @param xs x component of events\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @returns The block of events\n    \"\"\"\n    block_events = np.concatenate((\n        xs[:,np.newaxis],\n        ys[:,np.newaxis],\n        ts[:,np.newaxis],\n        ps[:,np.newaxis]), axis=1)\n    return block_events\n\ndef merge_events(event_sets):\n    \"\"\"\n    Merge multiple sets of events\n    @param event_sets A list of event streams, where each event strea consists\n        of four numpy arrays of xs, ys, ts and ps\n    @returns One merged set of events as tuple: xs, ys, ts, ps\n    \"\"\"\n    xs,ys,ts,ps = [],[],[],[]\n    for events in event_sets:\n        xs.append(events[0])\n        ys.append(events[1])\n        ts.append(events[2])\n        ps.append(events[3])\n    merged = events_to_block(\n        np.concatenate(xs),\n        np.concatenate(ys),\n        np.concatenate(ts),\n        np.concatenate(ps))\n    return merged\n\ndef add_random_events(xs, ys, ts, ps, to_add, sensor_resolution=None,\n        sort=True, return_merged=True):\n    \"\"\"\n    Add new, random events drawn from a uniform distribution.\n    Event coordinates are drawn from uniform dist over the sensor resolution and\n    duration of the events.\n    @param xs x component of events\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @param to_add How many events to add\n    @param sensor_resolution The resolution of the events. If left None, takes the range\n        of the spatial coordinates of the imput events\n    @param sort Sort the output events?\n    @param return_merged Whether to return the random events separately or merged into\n        the orginal input events\n    @returns The random events as tuple: xs, ys, ts, ps\n    \"\"\"\n    xs_new = np.random.randint(np.max(xs)+1, size=to_add)\n    ys_new = np.random.randint(np.max(ys)+1, size=to_add)\n    ts_new = np.random.uniform(np.min(ts), np.max(ts), size=to_add)\n    ps_new = (np.random.randint(2, size=to_add))*2-1\n    if return_merged:\n        new_events = merge_events([[xs_new, ys_new, ts_new, ps_new], [xs, ys, ts, ps]])\n        if sort:\n            new_events.view('i8,i8,i8,i8').sort(order=['f2'], axis=0)\n        return new_events[:,0], new_events[:,1], new_events[:,2], new_events[:,3],\n    elif sort:\n        new_events = events_to_block(xs_new, ys_new, ts_new, ps_new)\n        new_events.view('i8,i8,i8,i8').sort(order=['f2'], axis=0)\n        return new_events[:,0], new_events[:,1], new_events[:,2], new_events[:,3],\n    else:\n        return xs_new, ys_new, ts_new, ps_new\n\ndef remove_events(xs, ys, ts, ps, to_remove, add_noise=0):\n    \"\"\"\n    Remove events by random selection\n    @param xs x component of events\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @param to_remove How many events to remove\n    @param add_noise How many noise events to add (0 by default)\n    @returns Event stream with events removed as tuple: xs, ys, ts, ps\n    \"\"\"\n    if to_remove > len(xs):\n        return np.array([]), np.array([]), np.array([]), np.array([])\n    to_select = len(xs)-to_remove\n    idx = np.random.choice(np.arange(len(xs)), size=to_select, replace=False)\n    if add_noise <= 0:\n        idx.sort()\n        return xs[idx], ys[idx], ts[idx], ps[idx]\n    else:\n        nsx, nsy, nst, nsp = add_random_events(xs, ys, ts, ps, add_noise, sort=False, return_merged=False)\n        new_events = merge_events([[xs[idx], ys[idx], ts[idx], ps[idx]], [nsx, nsy, nst, nsp]])\n        new_events.view('i8,i8,i8,i8').sort(order=['f2'], axis=0)\n        return new_events[:,0], new_events[:,1], new_events[:,2], new_events[:,3],\n\ndef add_correlated_events(xs, ys, ts, ps, to_add, sort=True, return_merged=True, xy_std = 1.5, ts_std = 0.001, add_noise=0):\n    \"\"\"\n    Add events in the vicinity of existing events. Each original event has a Gaussian bubble\n    placed around it from which the new events are sampled.\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @param to_add How many events to add\n    @param sort Whether to sort the output events\n    @param return_merged Whether to return the random events separately or merged into\n        the orginal input events\n    @param xy_std Standard deviation of new xy coords\n    @param ts_std standard deviation of new timestamp\n    @param add_noise How many random noise events to add (default 0)\n    @returns Events augemented with correlated events in tuple: xs, ys, ts, ps\n    \"\"\"\n    iters = int(to_add/len(xs))+1\n    xs_new, ys_new, ts_new, ps_new = [], [], [], []\n    for i in range(iters):\n        xs_new.append(xs+np.random.normal(scale=xy_std, size=xs.shape).astype(int))\n        ys_new.append(ys+np.random.normal(scale=xy_std, size=ys.shape).astype(int))\n        ts_new.append(ts+np.random.normal(scale=ts_std, size=ts.shape))\n        ps_new.append(ps)\n    xs_new = np.concatenate(xs_new, axis=0)\n    ys_new = np.concatenate(ys_new, axis=0)\n    ts_new = np.concatenate(ts_new, axis=0)\n    ps_new = np.concatenate(ps_new, axis=0)\n    idx = np.random.choice(np.arange(len(xs_new)), size=to_add, replace=False)\n    xs_new = np.clip(xs_new[idx], 0, np.max(xs))\n    ys_new = np.clip(ys_new[idx], 0, np.max(ys))\n    ts_new = ts_new[idx]\n    ps_new = ps_new[idx]\n    nsx, nsy, nst, nsp = add_random_events(xs, ys, ts, ps, add_noise, sort=False, return_merged=False)\n    if return_merged:\n        new_events = merge_events([[xs_new, ys_new, ts_new, ps_new], [nsx, nsy, nst, nsp]])\n    else:\n        new_events = events_to_block(xs_new, ys_new, ts_new, ps_new)\n    if sort:\n        new_events.view('i8,i8,i8,i8').sort(order=['f2'], axis=0)\n    return new_events[:,0], new_events[:,1], new_events[:,2], new_events[:,3],\n\ndef flip_events_x(xs, ys, ts, ps, sensor_resolution=(180,240)):\n    \"\"\"\n    Flip events along x axis\n    @param xs x component of events\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @returns Flipped events\n    \"\"\"\n    xs = sensor_resolution[1]-xs\n    return xs, ys, ts, ps\n\ndef flip_events_y(xs, ys, ts, ps, sensor_resolution=(180,240)):\n    \"\"\"\n    Flip events along y axis\n    @param xs x component of events\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @returns Flipped events\n    \"\"\"\n    ys = sensor_resolution[0]-ys\n    return xs, ys, ts, ps\n\ndef crop_events(xs, ys, sensor_resolution, new_resolution):\n    \"\"\"\n    Crop events to new resolution\n    @param xs x component of events\n    @param ys y component of events\n    @param sensor_resolution Original resolution\n    @param new_resolution New desired resolution\n    @returns Events cropped to new resolution as tuple: xs, ys\n    \"\"\"\n    clip = clip_events_to_bounds(xs, ys, None, None, new_resolution)\n    return clip[0], clip[1]\n\ndef rotate_events(xs, ys, sensor_resolution=(180,240),\n        theta_radians=None, center_of_rotation=None, clip_to_range=False):\n    \"\"\"\n    Rotate events by a given angle around a given center of rotation.\n    Note that the output events are floating point and may no longer\n    be in the range of the image sensor. Thus, if 'standard' events are\n    required, conversion to int and clipping to range may be necessary.\n    @param xs x component of events\n    @param ys y component of events\n    @param sensor_resolution Size of event camera sensor\n    @param theta_radians Angle of rotation in radians. If left empty, choose random\n    @param center_of_rotation Center of the rotation. If left empty, choose random\n    @param clip_to_range If True, remove events that lie outside of image plane after rotation\n    @returns Rotated event coords and rotation parameters: xs, ys,\n        theta_radians, center_of_rotation\n    \"\"\"\n    theta_radians = np.random.uniform(0, 2*3.14159265359) if theta_radians is None else theta_radians\n    corx = int(np.random.uniform(0, sensor_resolution[1])+1)\n    cory = int(np.random.uniform(0, sensor_resolution[1])+1)\n    center_of_rotation = (corx, cory) if center_of_rotation is None else center_of_rotation\n\n    cxs = xs-center_of_rotation[0]\n    cys = ys-center_of_rotation[1]\n    new_xs = (cxs*np.cos(theta_radians)-cys*np.sin(theta_radians))+cxs\n    new_ys = (cxs*np.sin(theta_radians)+cys*np.cos(theta_radians))+cys\n    if clip_to_range:\n        clip = clip_events_to_bounds(new_xs, new_ys, None, None, sensor_resolution)\n        new_xs, new_ys = clip[0], clip[1]\n    return new_xs, new_ys, theta_radians, center_of_rotation\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Tool to add events to a set of events.\n    \"\"\"\n    import argparse\n    import os\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"Path to event file\")\n    parser.add_argument(\"--output_path\", default=\"/tmp/extracted_data\", help=\"Folder where to put augmented events\")\n    parser.add_argument(\"--to_add\", type=float, default=1.0, help=\"How many more events, as a proportion \\\n            (eg, 1.5 will result in 150% more events, 0.2 will result in 20% of the events).\")\n    args = parser.parse_args()\n    out_dir = args.output_path\n\n    xs, ys, ts, ps = read_h5_event_components(args.path)\n    ys = 180-ys\n    num = 50000\n    s = 0#10000\n    num_to_add = num*2\n    num_comp=5000\n\n    pth = os.path.join(out_dir, \"img0\")\n    plot_events(xs[s:s+num], ys[s:s+num], ts[s:s+num], ps[s:s+num], elev=30, num_compress=num_comp, num_show=-1, save_path=pth, show_axes=True, compress_front=True)\n\n    pth = os.path.join(out_dir, \"img1\")\n    nx, ny, nt, npo = add_correlated_events(xs[s:s+num], ys[s:s+num], ts[s:s+num], ps[s:s+num], num_to_add)\n    plot_events(nx, ny, nt, npo, elev=30, num_compress=num_comp, num_show=-1, save_path=pth, show_axes=True, compress_front=True)\n\n    pth = os.path.join(out_dir, \"img3\")\n    nx, ny, nt, npo = add_random_events(xs[s:s+num], ys[s:s+num], ts[s:s+num], ps[s:s+num], num_to_add, sensor_resolution=(180,240))\n    plot_events(nx, ny, nt, npo, elev=30, num_compress=num_comp, num_show=-1, save_path=pth, show_axes=True, compress_front=True)\n\n    pth = os.path.join(out_dir, \"img4\")\n    nx, ny, nt, npo = remove_events(xs[s:s+num], ys[s:s+num], ts[s:s+num], ps[s:s+num], num//2)\n    plot_events(nx, ny, nt, npo, elev=30, num_compress=num_comp, num_show=-1, save_path=pth, show_axes=True, compress_front=True)\n\n    pth = os.path.join(out_dir, \"img5\")\n    nx, ny, rot, cor = rotate_events(xs[s:s+num], ys[s:s+num], theta_radians=1.4, center_of_rotation=(90, 120), clip_to_range=True)\n    plot_events(nx, ny, ts, ps, elev=30, num_compress=num_comp, num_show=-1, save_path=pth, show_axes=True, compress_front=True)\n\n    pth = os.path.join(out_dir, \"img6\")\n    nx, ny, rot, cor = flip_events_x(xs[s:s+num], ys[s:s+num], ts[s:s+num], ps[s:s+num])\n    plot_events(nx, ny, ts, ps, elev=30, num_compress=num_comp, num_show=-1, save_path=pth, show_axes=True, compress_front=True)\n"
  },
  {
    "path": "lib/contrast_max/Doxyfile",
    "content": "# Doxyfile 1.8.11\n\n# This file describes the settings to be used by the documentation system\n# doxygen (www.doxygen.org) for a project.\n#\n# All text after a double hash (##) is considered a comment and is placed in\n# front of the TAG it is preceding.\n#\n# All text after a single hash (#) is considered a comment and will be ignored.\n# The format is:\n# TAG = value [value, ...]\n# For lists, items can also be appended using:\n# TAG += value [value, ...]\n# Values that contain spaces should be placed between quotes (\\\" \\\").\n\n#---------------------------------------------------------------------------\n# Project related configuration options\n#---------------------------------------------------------------------------\n\n# This tag specifies the encoding used for all characters in the config file\n# that follow. The default is UTF-8 which is also the encoding used for all text\n# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv\n# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv\n# for the list of possible encodings.\n# The default value is: UTF-8.\n\nDOXYFILE_ENCODING      = UTF-8\n\n# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by\n# double-quotes, unless you are using Doxywizard) that should identify the\n# project for which the documentation is generated. This name is used in the\n# title of most generated pages and in a few other places.\n# The default value is: My Project.\n\nPROJECT_NAME           = \"Contrast Maximisation Library\"\n\n# The PROJECT_NUMBER tag can be used to enter a project or revision number. This\n# could be handy for archiving the generated documentation or if some version\n# control system is used.\n\nPROJECT_NUMBER         =\n\n# Using the PROJECT_BRIEF tag one can provide an optional one line description\n# for a project that appears at the top of each page and should give viewer a\n# quick idea about the purpose of the project. Keep the description short.\n\nPROJECT_BRIEF          = \"Library for focus optimisation using events\"\n\n# With the PROJECT_LOGO tag one can specify a logo or an icon that is included\n# in the documentation. The maximum height of the logo should not exceed 55\n# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy\n# the logo to the output directory.\n\nPROJECT_LOGO           =\n\n# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path\n# into which the generated documentation will be written. If a relative path is\n# entered, it will be relative to the location where doxygen was started. If\n# left blank the current directory will be used.\n\nOUTPUT_DIRECTORY       =\n\n# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-\n# directories (in 2 levels) under the output directory of each output format and\n# will distribute the generated files over these directories. Enabling this\n# option can be useful when feeding doxygen a huge amount of source files, where\n# putting all generated files in the same directory would otherwise causes\n# performance problems for the file system.\n# The default value is: NO.\n\nCREATE_SUBDIRS         = NO\n\n# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII\n# characters to appear in the names of generated files. If set to NO, non-ASCII\n# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode\n# U+3044.\n# The default value is: NO.\n\nALLOW_UNICODE_NAMES    = NO\n\n# The OUTPUT_LANGUAGE tag is used to specify the language in which all\n# documentation generated by doxygen is written. Doxygen will use this\n# information to generate all constant output in the proper language.\n# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,\n# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),\n# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,\n# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),\n# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,\n# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,\n# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,\n# Ukrainian and Vietnamese.\n# The default value is: English.\n\nOUTPUT_LANGUAGE        = English\n\n# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member\n# descriptions after the members that are listed in the file and class\n# documentation (similar to Javadoc). Set to NO to disable this.\n# The default value is: YES.\n\nBRIEF_MEMBER_DESC      = YES\n\n# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief\n# description of a member or function before the detailed description\n#\n# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the\n# brief descriptions will be completely suppressed.\n# The default value is: YES.\n\nREPEAT_BRIEF           = YES\n\n# This tag implements a quasi-intelligent brief description abbreviator that is\n# used to form the text in various listings. Each string in this list, if found\n# as the leading text of the brief description, will be stripped from the text\n# and the result, after processing the whole list, is used as the annotated\n# text. Otherwise, the brief description is used as-is. If left blank, the\n# following values are used ($name is automatically replaced with the name of\n# the entity):The $name class, The $name widget, The $name file, is, provides,\n# specifies, contains, represents, a, an and the.\n\nABBREVIATE_BRIEF       =\n\n# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then\n# doxygen will generate a detailed section even if there is only a brief\n# description.\n# The default value is: NO.\n\nALWAYS_DETAILED_SEC    = NO\n\n# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all\n# inherited members of a class in the documentation of that class as if those\n# members were ordinary class members. Constructors, destructors and assignment\n# operators of the base classes will not be shown.\n# The default value is: NO.\n\nINLINE_INHERITED_MEMB  = NO\n\n# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path\n# before files name in the file list and in the header files. If set to NO the\n# shortest path that makes the file name unique will be used\n# The default value is: YES.\n\nFULL_PATH_NAMES        = YES\n\n# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.\n# Stripping is only done if one of the specified strings matches the left-hand\n# part of the path. The tag can be used to show relative paths in the file list.\n# If left blank the directory from which doxygen is run is used as the path to\n# strip.\n#\n# Note that you can specify absolute paths here, but also relative paths, which\n# will be relative from the directory where doxygen is started.\n# This tag requires that the tag FULL_PATH_NAMES is set to YES.\n\nSTRIP_FROM_PATH        =\n\n# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the\n# path mentioned in the documentation of a class, which tells the reader which\n# header file to include in order to use a class. If left blank only the name of\n# the header file containing the class definition is used. Otherwise one should\n# specify the list of include paths that are normally passed to the compiler\n# using the -I flag.\n\nSTRIP_FROM_INC_PATH    =\n\n# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but\n# less readable) file names. This can be useful is your file systems doesn't\n# support long names like on DOS, Mac, or CD-ROM.\n# The default value is: NO.\n\nSHORT_NAMES            = NO\n\n# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the\n# first line (until the first dot) of a Javadoc-style comment as the brief\n# description. If set to NO, the Javadoc-style will behave just like regular Qt-\n# style comments (thus requiring an explicit @brief command for a brief\n# description.)\n# The default value is: NO.\n\nJAVADOC_AUTOBRIEF      = NO\n\n# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first\n# line (until the first dot) of a Qt-style comment as the brief description. If\n# set to NO, the Qt-style will behave just like regular Qt-style comments (thus\n# requiring an explicit \\brief command for a brief description.)\n# The default value is: NO.\n\nQT_AUTOBRIEF           = NO\n\n# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a\n# multi-line C++ special comment block (i.e. a block of //! or /// comments) as\n# a brief description. This used to be the default behavior. The new default is\n# to treat a multi-line C++ comment block as a detailed description. Set this\n# tag to YES if you prefer the old behavior instead.\n#\n# Note that setting this tag to YES also means that rational rose comments are\n# not recognized any more.\n# The default value is: NO.\n\nMULTILINE_CPP_IS_BRIEF = NO\n\n# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the\n# documentation from any documented member that it re-implements.\n# The default value is: YES.\n\nINHERIT_DOCS           = YES\n\n# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new\n# page for each member. If set to NO, the documentation of a member will be part\n# of the file/class/namespace that contains it.\n# The default value is: NO.\n\nSEPARATE_MEMBER_PAGES  = NO\n\n# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen\n# uses this value to replace tabs by spaces in code fragments.\n# Minimum value: 1, maximum value: 16, default value: 4.\n\nTAB_SIZE               = 4\n\n# This tag can be used to specify a number of aliases that act as commands in\n# the documentation. An alias has the form:\n# name=value\n# For example adding\n# \"sideeffect=@par Side Effects:\\n\"\n# will allow you to put the command \\sideeffect (or @sideeffect) in the\n# documentation, which will result in a user-defined paragraph with heading\n# \"Side Effects:\". You can put \\n's in the value part of an alias to insert\n# newlines.\n\nALIASES                =\n\n# This tag can be used to specify a number of word-keyword mappings (TCL only).\n# A mapping has the form \"name=value\". For example adding \"class=itcl::class\"\n# will allow you to use the command class in the itcl::class meaning.\n\nTCL_SUBST              =\n\n# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources\n# only. Doxygen will then generate output that is more tailored for C. For\n# instance, some of the names that are used will be different. The list of all\n# members will be omitted, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_FOR_C  = NO\n\n# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or\n# Python sources only. Doxygen will then generate output that is more tailored\n# for that language. For instance, namespaces will be presented as packages,\n# qualified scopes will look different, etc.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_JAVA   = NO\n\n# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran\n# sources. Doxygen will then generate output that is tailored for Fortran.\n# The default value is: NO.\n\nOPTIMIZE_FOR_FORTRAN   = NO\n\n# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL\n# sources. Doxygen will then generate output that is tailored for VHDL.\n# The default value is: NO.\n\nOPTIMIZE_OUTPUT_VHDL   = NO\n\n# Doxygen selects the parser to use depending on the extension of the files it\n# parses. With this tag you can assign which parser to use for a given\n# extension. Doxygen has a built-in mapping, but you can override or extend it\n# using this tag. The format is ext=language, where ext is a file extension, and\n# language is one of the parsers supported by doxygen: IDL, Java, Javascript,\n# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:\n# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:\n# Fortran. In the later case the parser tries to guess whether the code is fixed\n# or free formatted code, this is the default for Fortran type files), VHDL. For\n# instance to make doxygen treat .inc files as Fortran files (default is PHP),\n# and .f files as C (default is Fortran), use: inc=Fortran f=C.\n#\n# Note: For files without extension you can use no_extension as a placeholder.\n#\n# Note that for custom extensions you also need to set FILE_PATTERNS otherwise\n# the files are not read by doxygen.\n\nEXTENSION_MAPPING      =\n\n# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments\n# according to the Markdown format, which allows for more readable\n# documentation. See http://daringfireball.net/projects/markdown/ for details.\n# The output of markdown processing is further processed by doxygen, so you can\n# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in\n# case of backward compatibilities issues.\n# The default value is: YES.\n\nMARKDOWN_SUPPORT       = YES\n\n# When enabled doxygen tries to link words that correspond to documented\n# classes, or namespaces to their corresponding documentation. Such a link can\n# be prevented in individual cases by putting a % sign in front of the word or\n# globally by setting AUTOLINK_SUPPORT to NO.\n# The default value is: YES.\n\nAUTOLINK_SUPPORT       = YES\n\n# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want\n# to include (a tag file for) the STL sources as input, then you should set this\n# tag to YES in order to let doxygen match functions declarations and\n# definitions whose arguments contain STL classes (e.g. func(std::string);\n# versus func(std::string) {}). This also make the inheritance and collaboration\n# diagrams that involve STL classes more complete and accurate.\n# The default value is: NO.\n\nBUILTIN_STL_SUPPORT    = NO\n\n# If you use Microsoft's C++/CLI language, you should set this option to YES to\n# enable parsing support.\n# The default value is: NO.\n\nCPP_CLI_SUPPORT        = NO\n\n# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:\n# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen\n# will parse them like normal C++ but will assume all classes use public instead\n# of private inheritance when no explicit protection keyword is present.\n# The default value is: NO.\n\nSIP_SUPPORT            = NO\n\n# For Microsoft's IDL there are propget and propput attributes to indicate\n# getter and setter methods for a property. Setting this option to YES will make\n# doxygen to replace the get and set methods by a property in the documentation.\n# This will only work if the methods are indeed getting or setting a simple\n# type. If this is not the case, or you want to show the methods anyway, you\n# should set this option to NO.\n# The default value is: YES.\n\nIDL_PROPERTY_SUPPORT   = YES\n\n# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC\n# tag is set to YES then doxygen will reuse the documentation of the first\n# member in the group (if any) for the other members of the group. By default\n# all members of a group must be documented explicitly.\n# The default value is: NO.\n\nDISTRIBUTE_GROUP_DOC   = NO\n\n# If one adds a struct or class to a group and this option is enabled, then also\n# any nested class or struct is added to the same group. By default this option\n# is disabled and one has to add nested compounds explicitly via \\ingroup.\n# The default value is: NO.\n\nGROUP_NESTED_COMPOUNDS = NO\n\n# Set the SUBGROUPING tag to YES to allow class member groups of the same type\n# (for instance a group of public functions) to be put as a subgroup of that\n# type (e.g. under the Public Functions section). Set it to NO to prevent\n# subgrouping. Alternatively, this can be done per class using the\n# \\nosubgrouping command.\n# The default value is: YES.\n\nSUBGROUPING            = YES\n\n# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions\n# are shown inside the group in which they are included (e.g. using \\ingroup)\n# instead of on a separate page (for HTML and Man pages) or section (for LaTeX\n# and RTF).\n#\n# Note that this feature does not work in combination with\n# SEPARATE_MEMBER_PAGES.\n# The default value is: NO.\n\nINLINE_GROUPED_CLASSES = NO\n\n# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions\n# with only public data fields or simple typedef fields will be shown inline in\n# the documentation of the scope in which they are defined (i.e. file,\n# namespace, or group documentation), provided this scope is documented. If set\n# to NO, structs, classes, and unions are shown on a separate page (for HTML and\n# Man pages) or section (for LaTeX and RTF).\n# The default value is: NO.\n\nINLINE_SIMPLE_STRUCTS  = NO\n\n# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or\n# enum is documented as struct, union, or enum with the name of the typedef. So\n# typedef struct TypeS {} TypeT, will appear in the documentation as a struct\n# with name TypeT. When disabled the typedef will appear as a member of a file,\n# namespace, or class. And the struct will be named TypeS. This can typically be\n# useful for C code in case the coding convention dictates that all compound\n# types are typedef'ed and only the typedef is referenced, never the tag name.\n# The default value is: NO.\n\nTYPEDEF_HIDES_STRUCT   = NO\n\n# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This\n# cache is used to resolve symbols given their name and scope. Since this can be\n# an expensive process and often the same symbol appears multiple times in the\n# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small\n# doxygen will become slower. If the cache is too large, memory is wasted. The\n# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range\n# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536\n# symbols. At the end of a run doxygen will report the cache usage and suggest\n# the optimal cache size from a speed point of view.\n# Minimum value: 0, maximum value: 9, default value: 0.\n\nLOOKUP_CACHE_SIZE      = 0\n\n#---------------------------------------------------------------------------\n# Build related configuration options\n#---------------------------------------------------------------------------\n\n# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in\n# documentation are documented, even if no documentation was available. Private\n# class members and static file members will be hidden unless the\n# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.\n# Note: This will also disable the warnings about undocumented members that are\n# normally produced when WARNINGS is set to YES.\n# The default value is: NO.\n\nEXTRACT_ALL            = NO\n\n# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will\n# be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PRIVATE        = NO\n\n# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal\n# scope will be included in the documentation.\n# The default value is: NO.\n\nEXTRACT_PACKAGE        = NO\n\n# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be\n# included in the documentation.\n# The default value is: NO.\n\nEXTRACT_STATIC         = NO\n\n# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined\n# locally in source files will be included in the documentation. If set to NO,\n# only classes defined in header files are included. Does not have any effect\n# for Java sources.\n# The default value is: YES.\n\nEXTRACT_LOCAL_CLASSES  = YES\n\n# This flag is only useful for Objective-C code. If set to YES, local methods,\n# which are defined in the implementation section but not in the interface are\n# included in the documentation. If set to NO, only methods in the interface are\n# included.\n# The default value is: NO.\n\nEXTRACT_LOCAL_METHODS  = NO\n\n# If this flag is set to YES, the members of anonymous namespaces will be\n# extracted and appear in the documentation as a namespace called\n# 'anonymous_namespace{file}', where file will be replaced with the base name of\n# the file that contains the anonymous namespace. By default anonymous namespace\n# are hidden.\n# The default value is: NO.\n\nEXTRACT_ANON_NSPACES   = NO\n\n# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all\n# undocumented members inside documented classes or files. If set to NO these\n# members will be included in the various overviews, but no documentation\n# section is generated. This option has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_MEMBERS     = NO\n\n# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all\n# undocumented classes that are normally visible in the class hierarchy. If set\n# to NO, these classes will be included in the various overviews. This option\n# has no effect if EXTRACT_ALL is enabled.\n# The default value is: NO.\n\nHIDE_UNDOC_CLASSES     = NO\n\n# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend\n# (class|struct|union) declarations. If set to NO, these declarations will be\n# included in the documentation.\n# The default value is: NO.\n\nHIDE_FRIEND_COMPOUNDS  = NO\n\n# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any\n# documentation blocks found inside the body of a function. If set to NO, these\n# blocks will be appended to the function's detailed documentation block.\n# The default value is: NO.\n\nHIDE_IN_BODY_DOCS      = NO\n\n# The INTERNAL_DOCS tag determines if documentation that is typed after a\n# \\internal command is included. If the tag is set to NO then the documentation\n# will be excluded. Set it to YES to include the internal documentation.\n# The default value is: NO.\n\nINTERNAL_DOCS          = NO\n\n# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file\n# names in lower-case letters. If set to YES, upper-case letters are also\n# allowed. This is useful if you have classes or files whose names only differ\n# in case and if your file system supports case sensitive file names. Windows\n# and Mac users are advised to set this option to NO.\n# The default value is: system dependent.\n\nCASE_SENSE_NAMES       = YES\n\n# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with\n# their full class and namespace scopes in the documentation. If set to YES, the\n# scope will be hidden.\n# The default value is: NO.\n\nHIDE_SCOPE_NAMES       = NO\n\n# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will\n# append additional text to a page's title, such as Class Reference. If set to\n# YES the compound reference will be hidden.\n# The default value is: NO.\n\nHIDE_COMPOUND_REFERENCE= NO\n\n# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of\n# the files that are included by a file in the documentation of that file.\n# The default value is: YES.\n\nSHOW_INCLUDE_FILES     = YES\n\n# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each\n# grouped member an include statement to the documentation, telling the reader\n# which file to include in order to use the member.\n# The default value is: NO.\n\nSHOW_GROUPED_MEMB_INC  = NO\n\n# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include\n# files with double quotes in the documentation rather than with sharp brackets.\n# The default value is: NO.\n\nFORCE_LOCAL_INCLUDES   = NO\n\n# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the\n# documentation for inline members.\n# The default value is: YES.\n\nINLINE_INFO            = YES\n\n# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the\n# (detailed) documentation of file and class members alphabetically by member\n# name. If set to NO, the members will appear in declaration order.\n# The default value is: YES.\n\nSORT_MEMBER_DOCS       = YES\n\n# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief\n# descriptions of file, namespace and class members alphabetically by member\n# name. If set to NO, the members will appear in declaration order. Note that\n# this will also influence the order of the classes in the class list.\n# The default value is: NO.\n\nSORT_BRIEF_DOCS        = NO\n\n# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the\n# (brief and detailed) documentation of class members so that constructors and\n# destructors are listed first. If set to NO the constructors will appear in the\n# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.\n# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief\n# member documentation.\n# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting\n# detailed member documentation.\n# The default value is: NO.\n\nSORT_MEMBERS_CTORS_1ST = NO\n\n# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy\n# of group names into alphabetical order. If set to NO the group names will\n# appear in their defined order.\n# The default value is: NO.\n\nSORT_GROUP_NAMES       = NO\n\n# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by\n# fully-qualified names, including namespaces. If set to NO, the class list will\n# be sorted only by class name, not including the namespace part.\n# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.\n# Note: This option applies only to the class list, not to the alphabetical\n# list.\n# The default value is: NO.\n\nSORT_BY_SCOPE_NAME     = NO\n\n# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper\n# type resolution of all parameters of a function it will reject a match between\n# the prototype and the implementation of a member function even if there is\n# only one candidate or it is obvious which candidate to choose by doing a\n# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still\n# accept a match between prototype and implementation in such cases.\n# The default value is: NO.\n\nSTRICT_PROTO_MATCHING  = NO\n\n# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo\n# list. This list is created by putting \\todo commands in the documentation.\n# The default value is: YES.\n\nGENERATE_TODOLIST      = YES\n\n# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test\n# list. This list is created by putting \\test commands in the documentation.\n# The default value is: YES.\n\nGENERATE_TESTLIST      = YES\n\n# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug\n# list. This list is created by putting \\bug commands in the documentation.\n# The default value is: YES.\n\nGENERATE_BUGLIST       = YES\n\n# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)\n# the deprecated list. This list is created by putting \\deprecated commands in\n# the documentation.\n# The default value is: YES.\n\nGENERATE_DEPRECATEDLIST= YES\n\n# The ENABLED_SECTIONS tag can be used to enable conditional documentation\n# sections, marked by \\if <section_label> ... \\endif and \\cond <section_label>\n# ... \\endcond blocks.\n\nENABLED_SECTIONS       =\n\n# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the\n# initial value of a variable or macro / define can have for it to appear in the\n# documentation. If the initializer consists of more lines than specified here\n# it will be hidden. Use a value of 0 to hide initializers completely. The\n# appearance of the value of individual variables and macros / defines can be\n# controlled using \\showinitializer or \\hideinitializer command in the\n# documentation regardless of this setting.\n# Minimum value: 0, maximum value: 10000, default value: 30.\n\nMAX_INITIALIZER_LINES  = 30\n\n# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at\n# the bottom of the documentation of classes and structs. If set to YES, the\n# list will mention the files that were used to generate the documentation.\n# The default value is: YES.\n\nSHOW_USED_FILES        = YES\n\n# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This\n# will remove the Files entry from the Quick Index and from the Folder Tree View\n# (if specified).\n# The default value is: YES.\n\nSHOW_FILES             = YES\n\n# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces\n# page. This will remove the Namespaces entry from the Quick Index and from the\n# Folder Tree View (if specified).\n# The default value is: YES.\n\nSHOW_NAMESPACES        = YES\n\n# The FILE_VERSION_FILTER tag can be used to specify a program or script that\n# doxygen should invoke to get the current version for each file (typically from\n# the version control system). Doxygen will invoke the program by executing (via\n# popen()) the command command input-file, where command is the value of the\n# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided\n# by doxygen. Whatever the program writes to standard output is used as the file\n# version. For an example see the documentation.\n\nFILE_VERSION_FILTER    =\n\n# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed\n# by doxygen. The layout file controls the global structure of the generated\n# output files in an output format independent way. To create the layout file\n# that represents doxygen's defaults, run doxygen with the -l option. You can\n# optionally specify a file name after the option, if omitted DoxygenLayout.xml\n# will be used as the name of the layout file.\n#\n# Note that if you run doxygen from a directory containing a file called\n# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE\n# tag is left empty.\n\nLAYOUT_FILE            =\n\n# The CITE_BIB_FILES tag can be used to specify one or more bib files containing\n# the reference definitions. This must be a list of .bib files. The .bib\n# extension is automatically appended if omitted. This requires the bibtex tool\n# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.\n# For LaTeX the style of the bibliography can be controlled using\n# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the\n# search path. See also \\cite for info how to create references.\n\nCITE_BIB_FILES         =\n\n#---------------------------------------------------------------------------\n# Configuration options related to warning and progress messages\n#---------------------------------------------------------------------------\n\n# The QUIET tag can be used to turn on/off the messages that are generated to\n# standard output by doxygen. If QUIET is set to YES this implies that the\n# messages are off.\n# The default value is: NO.\n\nQUIET                  = NO\n\n# The WARNINGS tag can be used to turn on/off the warning messages that are\n# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES\n# this implies that the warnings are on.\n#\n# Tip: Turn warnings on while writing the documentation.\n# The default value is: YES.\n\nWARNINGS               = YES\n\n# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate\n# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag\n# will automatically be disabled.\n# The default value is: YES.\n\nWARN_IF_UNDOCUMENTED   = YES\n\n# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for\n# potential errors in the documentation, such as not documenting some parameters\n# in a documented function, or documenting parameters that don't exist or using\n# markup commands wrongly.\n# The default value is: YES.\n\nWARN_IF_DOC_ERROR      = YES\n\n# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that\n# are documented, but have no documentation for their parameters or return\n# value. If set to NO, doxygen will only warn about wrong or incomplete\n# parameter documentation, but not about the absence of documentation.\n# The default value is: NO.\n\nWARN_NO_PARAMDOC       = NO\n\n# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when\n# a warning is encountered.\n# The default value is: NO.\n\nWARN_AS_ERROR          = NO\n\n# The WARN_FORMAT tag determines the format of the warning messages that doxygen\n# can produce. The string should contain the $file, $line, and $text tags, which\n# will be replaced by the file and line number from which the warning originated\n# and the warning text. Optionally the format may contain $version, which will\n# be replaced by the version of the file (if it could be obtained via\n# FILE_VERSION_FILTER)\n# The default value is: $file:$line: $text.\n\nWARN_FORMAT            = \"$file:$line: $text\"\n\n# The WARN_LOGFILE tag can be used to specify a file to which warning and error\n# messages should be written. If left blank the output is written to standard\n# error (stderr).\n\nWARN_LOGFILE           =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the input files\n#---------------------------------------------------------------------------\n\n# The INPUT tag is used to specify the files and/or directories that contain\n# documented source files. You may enter file names like myfile.cpp or\n# directories like /usr/src/myproject. Separate the files or directories with\n# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING\n# Note: If this tag is empty the current directory is searched.\n\nINPUT                  =\n\n# This tag can be used to specify the character encoding of the source files\n# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses\n# libiconv (or the iconv built into libc) for the transcoding. See the libiconv\n# documentation (see: http://www.gnu.org/software/libiconv) for the list of\n# possible encodings.\n# The default value is: UTF-8.\n\nINPUT_ENCODING         = UTF-8\n\n# If the value of the INPUT tag contains directories, you can use the\n# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and\n# *.h) to filter out the source-files in the directories.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# read by doxygen.\n#\n# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp,\n# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,\n# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,\n# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f, *.for, *.tcl,\n# *.vhd, *.vhdl, *.ucf, *.qsf, *.as and *.js.\n\nFILE_PATTERNS          =\n\n# The RECURSIVE tag can be used to specify whether or not subdirectories should\n# be searched for input files as well.\n# The default value is: NO.\n\nRECURSIVE              = NO\n\n# The EXCLUDE tag can be used to specify files and/or directories that should be\n# excluded from the INPUT source files. This way you can easily exclude a\n# subdirectory from a directory tree whose root is specified with the INPUT tag.\n#\n# Note that relative paths are relative to the directory from which doxygen is\n# run.\n\nEXCLUDE                =\n\n# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or\n# directories that are symbolic links (a Unix file system feature) are excluded\n# from the input.\n# The default value is: NO.\n\nEXCLUDE_SYMLINKS       = NO\n\n# If the value of the INPUT tag contains directories, you can use the\n# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude\n# certain files from those directories.\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories for example use the pattern */test/*\n\nEXCLUDE_PATTERNS       =\n\n# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names\n# (namespaces, classes, functions, etc.) that should be excluded from the\n# output. The symbol name can be a fully qualified name, a word, or if the\n# wildcard * is used, a substring. Examples: ANamespace, AClass,\n# AClass::ANamespace, ANamespace::*Test\n#\n# Note that the wildcards are matched against the file with absolute path, so to\n# exclude all test directories use the pattern */test/*\n\nEXCLUDE_SYMBOLS        =\n\n# The EXAMPLE_PATH tag can be used to specify one or more files or directories\n# that contain example code fragments that are included (see the \\include\n# command).\n\nEXAMPLE_PATH           =\n\n# If the value of the EXAMPLE_PATH tag contains directories, you can use the\n# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and\n# *.h) to filter out the source-files in the directories. If left blank all\n# files are included.\n\nEXAMPLE_PATTERNS       =\n\n# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be\n# searched for input files to be used with the \\include or \\dontinclude commands\n# irrespective of the value of the RECURSIVE tag.\n# The default value is: NO.\n\nEXAMPLE_RECURSIVE      = NO\n\n# The IMAGE_PATH tag can be used to specify one or more files or directories\n# that contain images that are to be included in the documentation (see the\n# \\image command).\n\nIMAGE_PATH             =\n\n# The INPUT_FILTER tag can be used to specify a program that doxygen should\n# invoke to filter for each input file. Doxygen will invoke the filter program\n# by executing (via popen()) the command:\n#\n# <filter> <input-file>\n#\n# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the\n# name of an input file. Doxygen will then use the output that the filter\n# program writes to standard output. If FILTER_PATTERNS is specified, this tag\n# will be ignored.\n#\n# Note that the filter must not add or remove lines; it is applied before the\n# code is scanned, but not when the output code is generated. If lines are added\n# or removed, the anchors will not be placed correctly.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# properly processed by doxygen.\n\nINPUT_FILTER           = /usr/bin/doxypy\n\n# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern\n# basis. Doxygen will compare the file name with each pattern and apply the\n# filter if there is a match. The filters are a list of the form: pattern=filter\n# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how\n# filters are used. If the FILTER_PATTERNS tag is empty or if none of the\n# patterns match the file name, INPUT_FILTER is applied.\n#\n# Note that for custom extensions or not directly supported extensions you also\n# need to set EXTENSION_MAPPING for the extension otherwise the files are not\n# properly processed by doxygen.\n\nFILTER_PATTERNS        =\n\n# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using\n# INPUT_FILTER) will also be used to filter the input files that are used for\n# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).\n# The default value is: NO.\n\nFILTER_SOURCE_FILES    = NO\n\n# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file\n# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and\n# it is also possible to disable source filtering for a specific pattern using\n# *.ext= (so without naming a filter).\n# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.\n\nFILTER_SOURCE_PATTERNS =\n\n# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that\n# is part of the input, its contents will be placed on the main page\n# (index.html). This can be useful if you have a project on for instance GitHub\n# and want to reuse the introduction page also for the doxygen output.\n\nUSE_MDFILE_AS_MAINPAGE =\n\n#---------------------------------------------------------------------------\n# Configuration options related to source browsing\n#---------------------------------------------------------------------------\n\n# If the SOURCE_BROWSER tag is set to YES then a list of source files will be\n# generated. Documented entities will be cross-referenced with these sources.\n#\n# Note: To get rid of all source code in the generated output, make sure that\n# also VERBATIM_HEADERS is set to NO.\n# The default value is: NO.\n\nSOURCE_BROWSER         = NO\n\n# Setting the INLINE_SOURCES tag to YES will include the body of functions,\n# classes and enums directly into the documentation.\n# The default value is: NO.\n\nINLINE_SOURCES         = NO\n\n# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any\n# special comment blocks from generated source code fragments. Normal C, C++ and\n# Fortran comments will always remain visible.\n# The default value is: YES.\n\nSTRIP_CODE_COMMENTS    = YES\n\n# If the REFERENCED_BY_RELATION tag is set to YES then for each documented\n# function all documented functions referencing it will be listed.\n# The default value is: NO.\n\nREFERENCED_BY_RELATION = NO\n\n# If the REFERENCES_RELATION tag is set to YES then for each documented function\n# all documented entities called/used by that function will be listed.\n# The default value is: NO.\n\nREFERENCES_RELATION    = NO\n\n# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set\n# to YES then the hyperlinks from functions in REFERENCES_RELATION and\n# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will\n# link to the documentation.\n# The default value is: YES.\n\nREFERENCES_LINK_SOURCE = YES\n\n# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the\n# source code will show a tooltip with additional information such as prototype,\n# brief description and links to the definition and documentation. Since this\n# will make the HTML file larger and loading of large files a bit slower, you\n# can opt to disable this feature.\n# The default value is: YES.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nSOURCE_TOOLTIPS        = YES\n\n# If the USE_HTAGS tag is set to YES then the references to source code will\n# point to the HTML generated by the htags(1) tool instead of doxygen built-in\n# source browser. The htags tool is part of GNU's global source tagging system\n# (see http://www.gnu.org/software/global/global.html). You will need version\n# 4.8.6 or higher.\n#\n# To use it do the following:\n# - Install the latest version of global\n# - Enable SOURCE_BROWSER and USE_HTAGS in the config file\n# - Make sure the INPUT points to the root of the source tree\n# - Run doxygen as normal\n#\n# Doxygen will invoke htags (and that will in turn invoke gtags), so these\n# tools must be available from the command line (i.e. in the search path).\n#\n# The result: instead of the source browser generated by doxygen, the links to\n# source code will now point to the output of htags.\n# The default value is: NO.\n# This tag requires that the tag SOURCE_BROWSER is set to YES.\n\nUSE_HTAGS              = NO\n\n# If the VERBATIM_HEADERS tag is set the YES then doxygen will generate a\n# verbatim copy of the header file for each class for which an include is\n# specified. Set to NO to disable this.\n# See also: Section \\class.\n# The default value is: YES.\n\nVERBATIM_HEADERS       = YES\n\n# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the\n# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the\n# cost of reduced performance. This can be particularly helpful with template\n# rich C++ code for which doxygen's built-in parser lacks the necessary type\n# information.\n# Note: The availability of this option depends on whether or not doxygen was\n# generated with the -Duse-libclang=ON option for CMake.\n# The default value is: NO.\n\nCLANG_ASSISTED_PARSING = NO\n\n# If clang assisted parsing is enabled you can provide the compiler with command\n# line options that you would normally use when invoking the compiler. Note that\n# the include paths will already be set by doxygen for the files and directories\n# specified with INPUT and INCLUDE_PATH.\n# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.\n\nCLANG_OPTIONS          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the alphabetical class index\n#---------------------------------------------------------------------------\n\n# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all\n# compounds will be generated. Enable this if the project contains a lot of\n# classes, structs, unions or interfaces.\n# The default value is: YES.\n\nALPHABETICAL_INDEX     = YES\n\n# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in\n# which the alphabetical index list will be split.\n# Minimum value: 1, maximum value: 20, default value: 5.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nCOLS_IN_ALPHA_INDEX    = 5\n\n# In case all classes in a project start with a common prefix, all classes will\n# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag\n# can be used to specify a prefix (or a list of prefixes) that should be ignored\n# while generating the index headers.\n# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.\n\nIGNORE_PREFIX          =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the HTML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output\n# The default value is: YES.\n\nGENERATE_HTML          = YES\n\n# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_OUTPUT            = html\n\n# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each\n# generated HTML page (for example: .htm, .php, .asp).\n# The default value is: .html.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FILE_EXTENSION    = .html\n\n# The HTML_HEADER tag can be used to specify a user-defined HTML header file for\n# each generated HTML page. If the tag is left blank doxygen will generate a\n# standard header.\n#\n# To get valid HTML the header file that includes any scripts and style sheets\n# that doxygen needs, which is dependent on the configuration options used (e.g.\n# the setting GENERATE_TREEVIEW). It is highly recommended to start with a\n# default header using\n# doxygen -w html new_header.html new_footer.html new_stylesheet.css\n# YourConfigFile\n# and then modify the file new_header.html. See also section \"Doxygen usage\"\n# for information on how to generate the default header that doxygen normally\n# uses.\n# Note: The header is subject to change so you typically have to regenerate the\n# default header when upgrading to a newer version of doxygen. For a description\n# of the possible markers and block names see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_HEADER            =\n\n# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each\n# generated HTML page. If the tag is left blank doxygen will generate a standard\n# footer. See HTML_HEADER for more information on how to generate a default\n# footer and what special commands can be used inside the footer. See also\n# section \"Doxygen usage\" for information on how to generate the default footer\n# that doxygen normally uses.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_FOOTER            =\n\n# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style\n# sheet that is used by each HTML page. It can be used to fine-tune the look of\n# the HTML output. If left blank doxygen will generate a default style sheet.\n# See also section \"Doxygen usage\" for information on how to generate the style\n# sheet that doxygen normally uses.\n# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as\n# it is more robust and this tag (HTML_STYLESHEET) will in the future become\n# obsolete.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_STYLESHEET        =\n\n# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined\n# cascading style sheets that are included after the standard style sheets\n# created by doxygen. Using this option one can overrule certain style aspects.\n# This is preferred over using HTML_STYLESHEET since it does not replace the\n# standard style sheet and is therefore more robust against future updates.\n# Doxygen will copy the style sheet files to the output directory.\n# Note: The order of the extra style sheet files is of importance (e.g. the last\n# style sheet in the list overrules the setting of the previous ones in the\n# list). For an example see the documentation.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_STYLESHEET  =\n\n# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the HTML output directory. Note\n# that these files will be copied to the base HTML output directory. Use the\n# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these\n# files. In the HTML_STYLESHEET file, use the file name only. Also note that the\n# files will be copied as-is; there are no commands or markers available.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_EXTRA_FILES       =\n\n# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen\n# will adjust the colors in the style sheet and background images according to\n# this color. Hue is specified as an angle on a colorwheel, see\n# http://en.wikipedia.org/wiki/Hue for more information. For instance the value\n# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300\n# purple, and 360 is red again.\n# Minimum value: 0, maximum value: 359, default value: 220.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_HUE    = 220\n\n# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors\n# in the HTML output. For a value of 0 the output will use grayscales only. A\n# value of 255 will produce the most vivid colors.\n# Minimum value: 0, maximum value: 255, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_SAT    = 100\n\n# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the\n# luminance component of the colors in the HTML output. Values below 100\n# gradually make the output lighter, whereas values above 100 make the output\n# darker. The value divided by 100 is the actual gamma applied, so 80 represents\n# a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not\n# change the gamma.\n# Minimum value: 40, maximum value: 240, default value: 80.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_COLORSTYLE_GAMMA  = 80\n\n# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML\n# page will contain the date and time when the page was generated. Setting this\n# to YES can help to show when doxygen was last run and thus if the\n# documentation is up to date.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_TIMESTAMP         = NO\n\n# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML\n# documentation will contain sections that can be hidden and shown after the\n# page has loaded.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_DYNAMIC_SECTIONS  = NO\n\n# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries\n# shown in the various tree structured indices initially; the user can expand\n# and collapse entries dynamically later on. Doxygen will expand the tree to\n# such a level that at most the specified number of entries are visible (unless\n# a fully collapsed tree already exceeds this amount). So setting the number of\n# entries 1 will produce a full collapsed tree by default. 0 is a special value\n# representing an infinite number of entries and will result in a full expanded\n# tree by default.\n# Minimum value: 0, maximum value: 9999, default value: 100.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nHTML_INDEX_NUM_ENTRIES = 100\n\n# If the GENERATE_DOCSET tag is set to YES, additional index files will be\n# generated that can be used as input for Apple's Xcode 3 integrated development\n# environment (see: http://developer.apple.com/tools/xcode/), introduced with\n# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a\n# Makefile in the HTML output directory. Running make will produce the docset in\n# that directory and running make install will install the docset in\n# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at\n# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html\n# for more information.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_DOCSET        = NO\n\n# This tag determines the name of the docset feed. A documentation feed provides\n# an umbrella under which multiple documentation sets from a single provider\n# (such as a company or product suite) can be grouped.\n# The default value is: Doxygen generated docs.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_FEEDNAME        = \"Doxygen generated docs\"\n\n# This tag specifies a string that should uniquely identify the documentation\n# set bundle. This should be a reverse domain-name style string, e.g.\n# com.mycompany.MyDocSet. Doxygen will append .docset to the name.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_BUNDLE_ID       = org.doxygen.Project\n\n# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify\n# the documentation publisher. This should be a reverse domain-name style\n# string, e.g. com.mycompany.MyDocSet.documentation.\n# The default value is: org.doxygen.Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_ID    = org.doxygen.Publisher\n\n# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.\n# The default value is: Publisher.\n# This tag requires that the tag GENERATE_DOCSET is set to YES.\n\nDOCSET_PUBLISHER_NAME  = Publisher\n\n# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three\n# additional HTML index files: index.hhp, index.hhc, and index.hhk. The\n# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop\n# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on\n# Windows.\n#\n# The HTML Help Workshop contains a compiler that can convert all HTML output\n# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML\n# files are now used as the Windows 98 help format, and will replace the old\n# Windows help format (.hlp) on all Windows platforms in the future. Compressed\n# HTML files also contain an index, a table of contents, and you can search for\n# words in the documentation. The HTML workshop also contains a viewer for\n# compressed HTML files.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_HTMLHELP      = NO\n\n# The CHM_FILE tag can be used to specify the file name of the resulting .chm\n# file. You can add a path in front of the file if the result should not be\n# written to the html output directory.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_FILE               =\n\n# The HHC_LOCATION tag can be used to specify the location (absolute path\n# including file name) of the HTML help compiler (hhc.exe). If non-empty,\n# doxygen will try to run the HTML help compiler on the generated index.hhp.\n# The file has to be specified with full path.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nHHC_LOCATION           =\n\n# The GENERATE_CHI flag controls if a separate .chi index file is generated\n# (YES) or that it should be included in the master .chm file (NO).\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nGENERATE_CHI           = NO\n\n# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)\n# and project file content.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nCHM_INDEX_ENCODING     =\n\n# The BINARY_TOC flag controls whether a binary table of contents is generated\n# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it\n# enables the Previous and Next buttons.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nBINARY_TOC             = NO\n\n# The TOC_EXPAND flag can be set to YES to add extra items for group members to\n# the table of contents of the HTML help documentation and to the tree view.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTMLHELP is set to YES.\n\nTOC_EXPAND             = NO\n\n# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and\n# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that\n# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help\n# (.qch) of the generated HTML documentation.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_QHP           = NO\n\n# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify\n# the file name of the resulting .qch file. The path specified is relative to\n# the HTML output folder.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQCH_FILE               =\n\n# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help\n# Project output. For more information please see Qt Help Project / Namespace\n# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_NAMESPACE          = org.doxygen.Project\n\n# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt\n# Help Project output. For more information please see Qt Help Project / Virtual\n# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-\n# folders).\n# The default value is: doc.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_VIRTUAL_FOLDER     = doc\n\n# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom\n# filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_NAME   =\n\n# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the\n# custom filter to add. For more information please see Qt Help Project / Custom\n# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-\n# filters).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_CUST_FILTER_ATTRS  =\n\n# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this\n# project's filter section matches. Qt Help Project / Filter Attributes (see:\n# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHP_SECT_FILTER_ATTRS  =\n\n# The QHG_LOCATION tag can be used to specify the location of Qt's\n# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the\n# generated .qhp file.\n# This tag requires that the tag GENERATE_QHP is set to YES.\n\nQHG_LOCATION           =\n\n# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be\n# generated, together with the HTML files, they form an Eclipse help plugin. To\n# install this plugin and make it available under the help contents menu in\n# Eclipse, the contents of the directory containing the HTML and XML files needs\n# to be copied into the plugins directory of eclipse. The name of the directory\n# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.\n# After copying Eclipse needs to be restarted before the help appears.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_ECLIPSEHELP   = NO\n\n# A unique identifier for the Eclipse help plugin. When installing the plugin\n# the directory name containing the HTML and XML files should also have this\n# name. Each documentation set should have its own identifier.\n# The default value is: org.doxygen.Project.\n# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.\n\nECLIPSE_DOC_ID         = org.doxygen.Project\n\n# If you want full control over the layout of the generated HTML pages it might\n# be necessary to disable the index and replace it with your own. The\n# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top\n# of each HTML page. A value of NO enables the index and the value YES disables\n# it. Since the tabs in the index contain the same information as the navigation\n# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nDISABLE_INDEX          = NO\n\n# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index\n# structure should be generated to display hierarchical information. If the tag\n# value is set to YES, a side panel will be generated containing a tree-like\n# index structure (just like the one that is generated for HTML Help). For this\n# to work a browser that supports JavaScript, DHTML, CSS and frames is required\n# (i.e. any modern browser). Windows users are probably better off using the\n# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can\n# further fine-tune the look of the index. As an example, the default style\n# sheet generated by doxygen has an example that shows how to put an image at\n# the root of the tree instead of the PROJECT_NAME. Since the tree basically has\n# the same information as the tab index, you could consider setting\n# DISABLE_INDEX to YES when enabling this option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nGENERATE_TREEVIEW      = NO\n\n# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that\n# doxygen will group on one line in the generated HTML documentation.\n#\n# Note that a value of 0 will completely suppress the enum values from appearing\n# in the overview section.\n# Minimum value: 0, maximum value: 20, default value: 4.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nENUM_VALUES_PER_LINE   = 4\n\n# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used\n# to set the initial width (in pixels) of the frame in which the tree is shown.\n# Minimum value: 0, maximum value: 1500, default value: 250.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nTREEVIEW_WIDTH         = 250\n\n# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to\n# external symbols imported via tag files in a separate window.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nEXT_LINKS_IN_WINDOW    = NO\n\n# Use this tag to change the font size of LaTeX formulas included as images in\n# the HTML documentation. When you change the font size after a successful\n# doxygen run you need to manually remove any form_*.png images from the HTML\n# output directory to force them to be regenerated.\n# Minimum value: 8, maximum value: 50, default value: 10.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_FONTSIZE       = 10\n\n# Use the FORMULA_TRANPARENT tag to determine whether or not the images\n# generated for formulas are transparent PNGs. Transparent PNGs are not\n# supported properly for IE 6.0, but are supported on all modern browsers.\n#\n# Note that when changing this option you need to delete any form_*.png files in\n# the HTML output directory before the changes have effect.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nFORMULA_TRANSPARENT    = YES\n\n# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see\n# http://www.mathjax.org) which uses client side Javascript for the rendering\n# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX\n# installed or if you want to formulas look prettier in the HTML output. When\n# enabled you may also need to install MathJax separately and configure the path\n# to it using the MATHJAX_RELPATH option.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nUSE_MATHJAX            = NO\n\n# When MathJax is enabled you can set the default output format to be used for\n# the MathJax output. See the MathJax site (see:\n# http://docs.mathjax.org/en/latest/output.html) for more details.\n# Possible values are: HTML-CSS (which is slower, but has the best\n# compatibility), NativeMML (i.e. MathML) and SVG.\n# The default value is: HTML-CSS.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_FORMAT         = HTML-CSS\n\n# When MathJax is enabled you need to specify the location relative to the HTML\n# output directory using the MATHJAX_RELPATH option. The destination directory\n# should contain the MathJax.js script. For instance, if the mathjax directory\n# is located at the same level as the HTML output directory, then\n# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax\n# Content Delivery Network so you can quickly see the result without installing\n# MathJax. However, it is strongly recommended to install a local copy of\n# MathJax from http://www.mathjax.org before deployment.\n# The default value is: http://cdn.mathjax.org/mathjax/latest.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest\n\n# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax\n# extension names that should be enabled during MathJax rendering. For example\n# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_EXTENSIONS     =\n\n# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces\n# of code that will be used on startup of the MathJax code. See the MathJax site\n# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an\n# example see the documentation.\n# This tag requires that the tag USE_MATHJAX is set to YES.\n\nMATHJAX_CODEFILE       =\n\n# When the SEARCHENGINE tag is enabled doxygen will generate a search box for\n# the HTML output. The underlying search engine uses javascript and DHTML and\n# should work on any modern browser. Note that when using HTML help\n# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)\n# there is already a search function so this one should typically be disabled.\n# For large projects the javascript based search engine can be slow, then\n# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to\n# search using the keyboard; to jump to the search box use <access key> + S\n# (what the <access key> is depends on the OS and browser, but it is typically\n# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down\n# key> to jump into the search results window, the results can be navigated\n# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel\n# the search. The filter options can be selected when the cursor is inside the\n# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>\n# to select a filter and <Enter> or <escape> to activate or cancel the filter\n# option.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_HTML is set to YES.\n\nSEARCHENGINE           = YES\n\n# When the SERVER_BASED_SEARCH tag is enabled the search engine will be\n# implemented using a web server instead of a web client using Javascript. There\n# are two flavors of web server based searching depending on the EXTERNAL_SEARCH\n# setting. When disabled, doxygen will generate a PHP script for searching and\n# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing\n# and searching needs to be provided by external tools. See the section\n# \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSERVER_BASED_SEARCH    = NO\n\n# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP\n# script for searching. Instead the search results are written to an XML file\n# which needs to be processed by an external indexer. Doxygen will invoke an\n# external search engine pointed to by the SEARCHENGINE_URL option to obtain the\n# search results.\n#\n# Doxygen ships with an example indexer (doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/).\n#\n# See the section \"External Indexing and Searching\" for details.\n# The default value is: NO.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH        = NO\n\n# The SEARCHENGINE_URL should point to a search engine hosted by a web server\n# which will return the search results when EXTERNAL_SEARCH is enabled.\n#\n# Doxygen ships with an example indexer (doxyindexer) and search engine\n# (doxysearch.cgi) which are based on the open source search engine library\n# Xapian (see: http://xapian.org/). See the section \"External Indexing and\n# Searching\" for details.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHENGINE_URL       =\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed\n# search data is written to a file for indexing by an external tool. With the\n# SEARCHDATA_FILE tag the name of this file can be specified.\n# The default file is: searchdata.xml.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nSEARCHDATA_FILE        = searchdata.xml\n\n# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the\n# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is\n# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple\n# projects and redirect the results back to the right project.\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTERNAL_SEARCH_ID     =\n\n# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen\n# projects other than the one defined by this configuration file, but that are\n# all added to the same external search index. Each project needs to have a\n# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of\n# to a relative location where the documentation can be found. The format is:\n# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...\n# This tag requires that the tag SEARCHENGINE is set to YES.\n\nEXTRA_SEARCH_MAPPINGS  =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the LaTeX output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.\n# The default value is: YES.\n\nGENERATE_LATEX         = YES\n\n# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_OUTPUT           = latex\n\n# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be\n# invoked.\n#\n# Note that when enabling USE_PDFLATEX this option is only used for generating\n# bitmaps for formulas in the HTML output, but not in the Makefile that is\n# written to the output directory.\n# The default file is: latex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_CMD_NAME         = latex\n\n# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate\n# index for LaTeX.\n# The default file is: makeindex.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nMAKEINDEX_CMD_NAME     = makeindex\n\n# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX\n# documents. This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nCOMPACT_LATEX          = NO\n\n# The PAPER_TYPE tag can be used to set the paper type that is used by the\n# printer.\n# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x\n# 14 inches) and executive (7.25 x 10.5 inches).\n# The default value is: a4.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPAPER_TYPE             = a4\n\n# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names\n# that should be included in the LaTeX output. The package can be specified just\n# by its name or with the correct syntax as to be used with the LaTeX\n# \\usepackage command. To get the times font for instance you can specify :\n# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}\n# To use the option intlimits with the amsmath package you can specify:\n# EXTRA_PACKAGES=[intlimits]{amsmath}\n# If left blank no extra packages will be included.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nEXTRA_PACKAGES         =\n\n# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the\n# generated LaTeX document. The header should contain everything until the first\n# chapter. If it is left blank doxygen will generate a standard header. See\n# section \"Doxygen usage\" for information on how to let doxygen write the\n# default header to a separate file.\n#\n# Note: Only use a user-defined header if you know what you are doing! The\n# following commands have a special meaning inside the header: $title,\n# $datetime, $date, $doxygenversion, $projectname, $projectnumber,\n# $projectbrief, $projectlogo. Doxygen will replace $title with the empty\n# string, for the replacement values of the other commands the user is referred\n# to HTML_HEADER.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HEADER           =\n\n# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the\n# generated LaTeX document. The footer should contain everything after the last\n# chapter. If it is left blank doxygen will generate a standard footer. See\n# LATEX_HEADER for more information on how to generate a default footer and what\n# special commands can be used inside the footer.\n#\n# Note: Only use a user-defined footer if you know what you are doing!\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_FOOTER           =\n\n# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined\n# LaTeX style sheets that are included after the standard style sheets created\n# by doxygen. Using this option one can overrule certain style aspects. Doxygen\n# will copy the style sheet files to the output directory.\n# Note: The order of the extra style sheet files is of importance (e.g. the last\n# style sheet in the list overrules the setting of the previous ones in the\n# list).\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_STYLESHEET =\n\n# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or\n# other source files which should be copied to the LATEX_OUTPUT output\n# directory. Note that the files will be copied as-is; there are no commands or\n# markers available.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_EXTRA_FILES      =\n\n# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is\n# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will\n# contain links (just like the HTML output) instead of page references. This\n# makes the output suitable for online browsing using a PDF viewer.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nPDF_HYPERLINKS         = YES\n\n# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate\n# the PDF file directly from the LaTeX files. Set this option to YES, to get a\n# higher quality PDF documentation.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nUSE_PDFLATEX           = YES\n\n# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode\n# command to the generated LaTeX files. This will instruct LaTeX to keep running\n# if errors occur, instead of asking the user for help. This option is also used\n# when generating formulas in HTML.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BATCHMODE        = NO\n\n# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the\n# index chapters (such as File Index, Compound Index, etc.) in the output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_HIDE_INDICES     = NO\n\n# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source\n# code with syntax highlighting in the LaTeX output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_SOURCE_CODE      = NO\n\n# The LATEX_BIB_STYLE tag can be used to specify the style to use for the\n# bibliography, e.g. plainnat, or ieeetr. See\n# http://en.wikipedia.org/wiki/BibTeX and \\cite for more info.\n# The default value is: plain.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_BIB_STYLE        = plain\n\n# If the LATEX_TIMESTAMP tag is set to YES then the footer of each generated\n# page will contain the date and time when the page was generated. Setting this\n# to NO can help when comparing the output of multiple runs.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_LATEX is set to YES.\n\nLATEX_TIMESTAMP        = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the RTF output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The\n# RTF output is optimized for Word 97 and may not look too pretty with other RTF\n# readers/editors.\n# The default value is: NO.\n\nGENERATE_RTF           = NO\n\n# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: rtf.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_OUTPUT             = rtf\n\n# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF\n# documents. This may be useful for small projects and may help to save some\n# trees in general.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nCOMPACT_RTF            = NO\n\n# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will\n# contain hyperlink fields. The RTF file will contain links (just like the HTML\n# output) instead of page references. This makes the output suitable for online\n# browsing using Word or some other Word compatible readers that support those\n# fields.\n#\n# Note: WordPad (write) and others do not support links.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_HYPERLINKS         = NO\n\n# Load stylesheet definitions from file. Syntax is similar to doxygen's config\n# file, i.e. a series of assignments. You only have to provide replacements,\n# missing definitions are set to their default value.\n#\n# See also section \"Doxygen usage\" for information on how to generate the\n# default style sheet that doxygen normally uses.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_STYLESHEET_FILE    =\n\n# Set optional variables used in the generation of an RTF document. Syntax is\n# similar to doxygen's config file. A template extensions file can be generated\n# using doxygen -e rtf extensionFile.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_EXTENSIONS_FILE    =\n\n# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code\n# with syntax highlighting in the RTF output.\n#\n# Note that which sources are shown also depends on other settings such as\n# SOURCE_BROWSER.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_RTF is set to YES.\n\nRTF_SOURCE_CODE        = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the man page output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for\n# classes and files.\n# The default value is: NO.\n\nGENERATE_MAN           = NO\n\n# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it. A directory man3 will be created inside the directory specified by\n# MAN_OUTPUT.\n# The default directory is: man.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_OUTPUT             = man\n\n# The MAN_EXTENSION tag determines the extension that is added to the generated\n# man pages. In case the manual section does not start with a number, the number\n# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is\n# optional.\n# The default value is: .3.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_EXTENSION          = .3\n\n# The MAN_SUBDIR tag determines the name of the directory created within\n# MAN_OUTPUT in which the man pages are placed. If defaults to man followed by\n# MAN_EXTENSION with the initial . removed.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_SUBDIR             =\n\n# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it\n# will generate one additional man file for each entity documented in the real\n# man page(s). These additional files only source the real man page, but without\n# them the man command would be unable to find the correct page.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_MAN is set to YES.\n\nMAN_LINKS              = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the XML output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that\n# captures the structure of the code including all documentation.\n# The default value is: NO.\n\nGENERATE_XML           = NO\n\n# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a\n# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of\n# it.\n# The default directory is: xml.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_OUTPUT             = xml\n\n# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program\n# listings (including syntax highlighting and cross-referencing information) to\n# the XML output. Note that enabling this will significantly increase the size\n# of the XML output.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_XML is set to YES.\n\nXML_PROGRAMLISTING     = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to the DOCBOOK output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files\n# that can be used to generate PDF.\n# The default value is: NO.\n\nGENERATE_DOCBOOK       = NO\n\n# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.\n# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in\n# front of it.\n# The default directory is: docbook.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_OUTPUT         = docbook\n\n# If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the\n# program listings (including syntax highlighting and cross-referencing\n# information) to the DOCBOOK output. Note that enabling this will significantly\n# increase the size of the DOCBOOK output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_DOCBOOK is set to YES.\n\nDOCBOOK_PROGRAMLISTING = NO\n\n#---------------------------------------------------------------------------\n# Configuration options for the AutoGen Definitions output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an\n# AutoGen Definitions (see http://autogen.sf.net) file that captures the\n# structure of the code including all documentation. Note that this feature is\n# still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_AUTOGEN_DEF   = NO\n\n#---------------------------------------------------------------------------\n# Configuration options related to the Perl module output\n#---------------------------------------------------------------------------\n\n# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module\n# file that captures the structure of the code including all documentation.\n#\n# Note that this feature is still experimental and incomplete at the moment.\n# The default value is: NO.\n\nGENERATE_PERLMOD       = NO\n\n# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary\n# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI\n# output from the Perl module output.\n# The default value is: NO.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_LATEX          = NO\n\n# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely\n# formatted so it can be parsed by a human reader. This is useful if you want to\n# understand what is going on. On the other hand, if this tag is set to NO, the\n# size of the Perl module output will be much smaller and Perl will parse it\n# just the same.\n# The default value is: YES.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_PRETTY         = YES\n\n# The names of the make variables in the generated doxyrules.make file are\n# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful\n# so different doxyrules.make files included by the same Makefile don't\n# overwrite each other's variables.\n# This tag requires that the tag GENERATE_PERLMOD is set to YES.\n\nPERLMOD_MAKEVAR_PREFIX =\n\n#---------------------------------------------------------------------------\n# Configuration options related to the preprocessor\n#---------------------------------------------------------------------------\n\n# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all\n# C-preprocessor directives found in the sources and include files.\n# The default value is: YES.\n\nENABLE_PREPROCESSING   = YES\n\n# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names\n# in the source code. If set to NO, only conditional compilation will be\n# performed. Macro expansion can be done in a controlled way by setting\n# EXPAND_ONLY_PREDEF to YES.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nMACRO_EXPANSION        = NO\n\n# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then\n# the macro expansion is limited to the macros specified with the PREDEFINED and\n# EXPAND_AS_DEFINED tags.\n# The default value is: NO.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_ONLY_PREDEF     = NO\n\n# If the SEARCH_INCLUDES tag is set to YES, the include files in the\n# INCLUDE_PATH will be searched if a #include is found.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSEARCH_INCLUDES        = YES\n\n# The INCLUDE_PATH tag can be used to specify one or more directories that\n# contain include files that are not input files but should be processed by the\n# preprocessor.\n# This tag requires that the tag SEARCH_INCLUDES is set to YES.\n\nINCLUDE_PATH           =\n\n# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard\n# patterns (like *.h and *.hpp) to filter out the header-files in the\n# directories. If left blank, the patterns specified with FILE_PATTERNS will be\n# used.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nINCLUDE_FILE_PATTERNS  =\n\n# The PREDEFINED tag can be used to specify one or more macro names that are\n# defined before the preprocessor is started (similar to the -D option of e.g.\n# gcc). The argument of the tag is a list of macros of the form: name or\n# name=definition (no spaces). If the definition and the \"=\" are omitted, \"=1\"\n# is assumed. To prevent a macro definition from being undefined via #undef or\n# recursively expanded use the := operator instead of the = operator.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nPREDEFINED             =\n\n# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this\n# tag can be used to specify a list of macro names that should be expanded. The\n# macro definition that is found in the sources will be used. Use the PREDEFINED\n# tag if you want to use a different macro definition that overrules the\n# definition found in the source code.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nEXPAND_AS_DEFINED      =\n\n# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will\n# remove all references to function-like macros that are alone on a line, have\n# an all uppercase name, and do not end with a semicolon. Such function macros\n# are typically used for boiler-plate code, and will confuse the parser if not\n# removed.\n# The default value is: YES.\n# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.\n\nSKIP_FUNCTION_MACROS   = YES\n\n#---------------------------------------------------------------------------\n# Configuration options related to external references\n#---------------------------------------------------------------------------\n\n# The TAGFILES tag can be used to specify one or more tag files. For each tag\n# file the location of the external documentation should be added. The format of\n# a tag file without this location is as follows:\n# TAGFILES = file1 file2 ...\n# Adding location for the tag files is done as follows:\n# TAGFILES = file1=loc1 \"file2 = loc2\" ...\n# where loc1 and loc2 can be relative or absolute paths or URLs. See the\n# section \"Linking to external documentation\" for more information about the use\n# of tag files.\n# Note: Each tag file must have a unique name (where the name does NOT include\n# the path). If a tag file is not located in the directory in which doxygen is\n# run, you must also specify the path to the tagfile here.\n\nTAGFILES               =\n\n# When a file name is specified after GENERATE_TAGFILE, doxygen will create a\n# tag file that is based on the input files it reads. See section \"Linking to\n# external documentation\" for more information about the usage of tag files.\n\nGENERATE_TAGFILE       =\n\n# If the ALLEXTERNALS tag is set to YES, all external class will be listed in\n# the class index. If set to NO, only the inherited external classes will be\n# listed.\n# The default value is: NO.\n\nALLEXTERNALS           = NO\n\n# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed\n# in the modules index. If set to NO, only the current project's groups will be\n# listed.\n# The default value is: YES.\n\nEXTERNAL_GROUPS        = YES\n\n# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in\n# the related pages index. If set to NO, only the current project's pages will\n# be listed.\n# The default value is: YES.\n\nEXTERNAL_PAGES         = YES\n\n# The PERL_PATH should be the absolute path and name of the perl script\n# interpreter (i.e. the result of 'which perl').\n# The default file (with absolute path) is: /usr/bin/perl.\n\nPERL_PATH              = /usr/bin/perl\n\n#---------------------------------------------------------------------------\n# Configuration options related to the dot tool\n#---------------------------------------------------------------------------\n\n# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram\n# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to\n# NO turns the diagrams off. Note that this option also works with HAVE_DOT\n# disabled, but it is recommended to install and use dot, since it yields more\n# powerful graphs.\n# The default value is: YES.\n\nCLASS_DIAGRAMS         = YES\n\n# You can define message sequence charts within doxygen comments using the \\msc\n# command. Doxygen will then run the mscgen tool (see:\n# http://www.mcternan.me.uk/mscgen/)) to produce the chart and insert it in the\n# documentation. The MSCGEN_PATH tag allows you to specify the directory where\n# the mscgen tool resides. If left empty the tool is assumed to be found in the\n# default search path.\n\nMSCGEN_PATH            =\n\n# You can include diagrams made with dia in doxygen documentation. Doxygen will\n# then run dia to produce the diagram and insert it in the documentation. The\n# DIA_PATH tag allows you to specify the directory where the dia binary resides.\n# If left empty dia is assumed to be found in the default search path.\n\nDIA_PATH               =\n\n# If set to YES the inheritance and collaboration graphs will hide inheritance\n# and usage relations if the target is undocumented or is not a class.\n# The default value is: YES.\n\nHIDE_UNDOC_RELATIONS   = YES\n\n# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is\n# available from the path. This tool is part of Graphviz (see:\n# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent\n# Bell Labs. The other options in this section have no effect if this option is\n# set to NO\n# The default value is: YES.\n\nHAVE_DOT               = YES\n\n# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed\n# to run in parallel. When set to 0 doxygen will base this on the number of\n# processors available in the system. You can set it explicitly to a value\n# larger than 0 to get control over the balance between CPU load and processing\n# speed.\n# Minimum value: 0, maximum value: 32, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_NUM_THREADS        = 0\n\n# When you want a differently looking font in the dot files that doxygen\n# generates you can specify the font name using DOT_FONTNAME. You need to make\n# sure dot is able to find the font, which can be done by putting it in a\n# standard location or by setting the DOTFONTPATH environment variable or by\n# setting DOT_FONTPATH to the directory containing the font.\n# The default value is: Helvetica.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTNAME           = Helvetica\n\n# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of\n# dot graphs.\n# Minimum value: 4, maximum value: 24, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTSIZE           = 10\n\n# By default doxygen will tell dot to use the default font as specified with\n# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set\n# the path where dot can find it using this tag.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_FONTPATH           =\n\n# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for\n# each documented class showing the direct and indirect inheritance relations.\n# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCLASS_GRAPH            = YES\n\n# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a\n# graph for each documented class showing the direct and indirect implementation\n# dependencies (inheritance, containment, and class references variables) of the\n# class with other documented classes.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCOLLABORATION_GRAPH    = YES\n\n# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for\n# groups, showing the direct groups dependencies.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGROUP_GRAPHS           = YES\n\n# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and\n# collaboration diagrams in a style similar to the OMG's Unified Modeling\n# Language.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LOOK               = NO\n\n# If the UML_LOOK tag is enabled, the fields and methods are shown inside the\n# class node. If there are many fields or methods and many nodes the graph may\n# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the\n# number of items for each type to make the size more manageable. Set this to 0\n# for no limit. Note that the threshold may be exceeded by 50% before the limit\n# is enforced. So when you set the threshold to 10, up to 15 fields may appear,\n# but if the number exceeds 15, the total amount of fields shown is limited to\n# 10.\n# Minimum value: 0, maximum value: 100, default value: 10.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nUML_LIMIT_NUM_FIELDS   = 10\n\n# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and\n# collaboration graphs will show the relations between templates and their\n# instances.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nTEMPLATE_RELATIONS     = NO\n\n# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to\n# YES then doxygen will generate a graph for each documented file showing the\n# direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDE_GRAPH          = YES\n\n# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are\n# set to YES then doxygen will generate a graph for each documented file showing\n# the direct and indirect include dependencies of the file with other documented\n# files.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINCLUDED_BY_GRAPH      = YES\n\n# If the CALL_GRAPH tag is set to YES then doxygen will generate a call\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable call graphs for selected\n# functions only using the \\callgraph command. Disabling a call graph can be\n# accomplished by means of the command \\hidecallgraph.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALL_GRAPH             = NO\n\n# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller\n# dependency graph for every global function or class method.\n#\n# Note that enabling this option will significantly increase the time of a run.\n# So in most cases it will be better to enable caller graphs for selected\n# functions only using the \\callergraph command. Disabling a caller graph can be\n# accomplished by means of the command \\hidecallergraph.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nCALLER_GRAPH           = NO\n\n# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will graphical\n# hierarchy of all classes instead of a textual one.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGRAPHICAL_HIERARCHY    = YES\n\n# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the\n# dependencies a directory has on other directories in a graphical way. The\n# dependency relations are determined by the #include relations between the\n# files in the directories.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDIRECTORY_GRAPH        = YES\n\n# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images\n# generated by dot. For an explanation of the image formats see the section\n# output formats in the documentation of the dot tool (Graphviz (see:\n# http://www.graphviz.org/)).\n# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order\n# to make the SVG files visible in IE 9+ (other browsers do not have this\n# requirement).\n# Possible values are: png, png:cairo, png:cairo:cairo, png:cairo:gd, png:gd,\n# png:gd:gd, jpg, jpg:cairo, jpg:cairo:gd, jpg:gd, jpg:gd:gd, gif, gif:cairo,\n# gif:cairo:gd, gif:gd, gif:gd:gd, svg, png:gd, png:gd:gd, png:cairo,\n# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and\n# png:gdiplus:gdiplus.\n# The default value is: png.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_IMAGE_FORMAT       = png\n\n# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to\n# enable generation of interactive SVG images that allow zooming and panning.\n#\n# Note that this requires a modern browser other than Internet Explorer. Tested\n# and working are Firefox, Chrome, Safari, and Opera.\n# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make\n# the SVG files visible. Older versions of IE do not have SVG support.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nINTERACTIVE_SVG        = NO\n\n# The DOT_PATH tag can be used to specify the path where the dot tool can be\n# found. If left blank, it is assumed the dot tool can be found in the path.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_PATH               =\n\n# The DOTFILE_DIRS tag can be used to specify one or more directories that\n# contain dot files that are included in the documentation (see the \\dotfile\n# command).\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOTFILE_DIRS           =\n\n# The MSCFILE_DIRS tag can be used to specify one or more directories that\n# contain msc files that are included in the documentation (see the \\mscfile\n# command).\n\nMSCFILE_DIRS           =\n\n# The DIAFILE_DIRS tag can be used to specify one or more directories that\n# contain dia files that are included in the documentation (see the \\diafile\n# command).\n\nDIAFILE_DIRS           =\n\n# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the\n# path where java can find the plantuml.jar file. If left blank, it is assumed\n# PlantUML is not used or called during a preprocessing step. Doxygen will\n# generate a warning when it encounters a \\startuml command in this case and\n# will not generate output for the diagram.\n\nPLANTUML_JAR_PATH      =\n\n# When using plantuml, the specified paths are searched for files specified by\n# the !include statement in a plantuml block.\n\nPLANTUML_INCLUDE_PATH  =\n\n# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes\n# that will be shown in the graph. If the number of nodes in a graph becomes\n# larger than this value, doxygen will truncate the graph, which is visualized\n# by representing a node as a red box. Note that doxygen if the number of direct\n# children of the root node in a graph is already larger than\n# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that\n# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.\n# Minimum value: 0, maximum value: 10000, default value: 50.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_GRAPH_MAX_NODES    = 50\n\n# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs\n# generated by dot. A depth value of 3 means that only nodes reachable from the\n# root by following a path via at most 3 edges will be shown. Nodes that lay\n# further from the root node will be omitted. Note that setting this option to 1\n# or 2 may greatly reduce the computation time needed for large code bases. Also\n# note that the size of a graph can be further restricted by\n# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.\n# Minimum value: 0, maximum value: 1000, default value: 0.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nMAX_DOT_GRAPH_DEPTH    = 0\n\n# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent\n# background. This is disabled by default, because dot on Windows does not seem\n# to support this out of the box.\n#\n# Warning: Depending on the platform used, enabling this option may lead to\n# badly anti-aliased labels on the edges of a graph (i.e. they become hard to\n# read).\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_TRANSPARENT        = NO\n\n# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output\n# files in one run (i.e. multiple -o and -T options on the command line). This\n# makes dot run faster, but since only newer versions of dot (>1.8.10) support\n# this, this feature is disabled by default.\n# The default value is: NO.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_MULTI_TARGETS      = NO\n\n# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page\n# explaining the meaning of the various boxes and arrows in the dot generated\n# graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nGENERATE_LEGEND        = YES\n\n# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot\n# files that are used to generate the various graphs.\n# The default value is: YES.\n# This tag requires that the tag HAVE_DOT is set to YES.\n\nDOT_CLEANUP            = YES\n"
  },
  {
    "path": "lib/contrast_max/__init__.py",
    "content": "# __init__.py\nfrom .events_cmax import *\nfrom .warps import *\nfrom .objectives import *\n"
  },
  {
    "path": "lib/contrast_max/events_cmax.py",
    "content": "import time\nimport numpy as np\nimport scipy\nimport scipy.optimize as opt\nfrom scipy.ndimage.filters import gaussian_filter\nimport torch\nimport copy\nfrom ..util.event_util import infer_resolution, get_events_from_mask\nfrom ..util.util import plot_image, save_image, plot_image_grid\nfrom ..visualization.draw_event_stream import plot_events\nfrom .objectives import *\nfrom .warps import *\n\ndef get_hsv_shifted():\n    \"\"\"\n    Get the colormap used in Mitrokhin etal, Event-based Moving Object Detection and Tracking\n    \"\"\"\n    from matplotlib import cm\n    from matplotlib.colors import LinearSegmentedColormap\n\n    hsv = cm.get_cmap('hsv')\n    hsv_shifted = []\n    for i in np.arange(0, 0.6666, 0.01):\n        hsv_shifted.append(hsv(np.fmod(i+0.6666, 1.0)))\n    hsv_shifted = LinearSegmentedColormap.from_list('hsv_shifted', hsv_shifted, N=100)\n    return hsv_shifted\n\ndef grid_cmax(xs, ys, ts, ps, roi_size=(20,20), step=None, warp=linvel_warp(),\n        obj=variance_objective(adaptive_lifespan=True, minimum_events=105),\n        min_events=10):\n    \"\"\"\n    Break sensor into a grid and perform contrast maximisation on each sector of grid\n    separately. Main input parameters are the events and the size of each window of the\n    grid (roi_size)\n    @param xs x components of events as list\n    @param ys y components of events as list\n    @param ts t components of events as list\n    @param ps p components of events as list\n    @param roi_size The size of the grid regions of interest (rois)\n    @param step The sliding window step size (same as roi_size if left empty)\n    @param warp The warp function to be used\n    @param The objective fuction to be used\n    @param The min number of events in a ROI to be considered valid\n    @returns List of optimal parameters, optimal function evaluations and rois\n    \"\"\"\n    step = roi_size if step is None else step\n    resolution = infer_resolution(xs, ys)\n    warpfunc = linvel_warp()\n\n    results_params = []\n    results_rois = []\n    results_f_evals = []\n    for xc in range(0, resolution[1], step[1]):\n        x_roi_idc = np.argwhere((xs>=xc) & (xs<xc+step[1]))[:, 0]\n        y_subset = ys[x_roi_idc]\n        for yc in range(0, resolution[0], step[0]):\n            y_roi_idc = np.argwhere((y_subset>=yc) & (y_subset<yc+step[0]))[:, 0]\n\n            roi_xs = xs[x_roi_idc][y_roi_idc]\n            roi_ys = ys[x_roi_idc][y_roi_idc]\n            roi_ts = ts[x_roi_idc][y_roi_idc]\n            roi_ps = ps[x_roi_idc][y_roi_idc]\n\n            if len(roi_xs) > min_events:\n                obj = variance_objective(adaptive_lifespan=True, minimum_events=105)\n                params = optimize_contrast(roi_xs, roi_ys, roi_ts, roi_ps, warp, obj, numeric_grads=False, blur_sigma=2.0, img_size=resolution, grid_search_init=True)\n                params = optimize_contrast(roi_xs, roi_ys, roi_ts, roi_ps, warp, obj, numeric_grads=False, blur_sigma=1.0, img_size=resolution, x0=params)\n                iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warp, resolution,\n                       use_polarity=True, compute_gradient=False, return_events=False)\n                f_eval = obj.evaluate_function(iwe=iwe)\n\n                results_params.append(params)\n                results_rois.append([yc, xc, step[0], step[1]])\n                results_f_evals.append(obj.evaluate_function(iwe=iwe))\n\n    return results_params, results_rois, results_f_evals\n\ndef segmentation_mask_from_d_iwe(d_iwe, th=None):\n    \"\"\"\n    Generate a segmentation mask from the derivative of the IWE wrt motion params\n    @param d_iwe First derivative of IWE wrt motion parameters\n    @param th Value threshold for segmentation mask, auto generated if left blank\n    @returns Segmentation mask\n    \"\"\"\n    th1 = np.percentile(np.abs(d_iwe), 90)\n    validx = d_iwe[0].flatten()[np.argwhere(np.abs(d_iwe[0].flatten()) > th1).squeeze()]\n    validy = d_iwe[1].flatten()[np.argwhere(np.abs(d_iwe[1].flatten()) > th1).squeeze()]\n    x_c = np.percentile(validx, 95)\n    y_c = np.percentile(validy, 95)\n\n    thx = x_c if th is None else th\n    thy = y_c if th is None else th\n\n    imgxp = np.where(d_iwe[0] > thx, 1, 0)\n    imgyp = np.where(d_iwe[1] > thy, 1, 0)\n    imgxn = np.where(d_iwe[0] < -thx, 1, 0)\n    imgyn = np.where(d_iwe[1] < -thy, 1, 0)\n    imgx = imgxp + imgxn\n    imgy = imgyp + imgyn\n    img = np.clip(np.add(imgx, imgy), 0, 1)\n    return img\n\ndef draw_objective_function(xs, ys, ts, ps, objective=variance_objective(minimum_events=1),\n        warpfunc=linvel_warp(), x_range=(-200, 200), y_range=(-200, 200),\n        gt=(0,0), show_gt=True, resolution=20, img_size=(180, 240), show_axes=True, norm_min=None, norm_max=None,\n        show=True):\n    \"\"\"\n    Draw the objective function given by sampling over a range. Depending on the value of resolution, this\n    can involve many samples and take some time.\n    @param xs x components of events as np array\n    @param ys y components of events as np array\n    @param ts t components of events as np array\n    @param ps p components of events as np array\n    @param objective (object) The objective function\n    @param warpfunc (object) The warp function\n    @param x_range, y_range (tuple) the range over which to plot the parameters\n    @param gt (tuple) The ground truth\n    @param show_gt (bool) Whether to draw the ground truth in\n    @param resolution (float) The resolution of the sampling\n    @param img_size (tuple) The image sensor size\n    \"\"\"\n    width = x_range[1]-x_range[0]\n    height = y_range[1]-y_range[0]\n    print(\"Drawing objective function. Taking {} samples\".format((width*height)/resolution))\n    imshape = (int(height/resolution+0.5), int(width/resolution+0.5))\n    img = np.zeros(imshape)\n    for x in range(img.shape[1]):\n       for y in range(img.shape[0]):\n           params = np.array([x*resolution+x_range[0], y*resolution+y_range[0]])\n           img[y,x] = -objective.evaluate_function(params, xs, ys, ts, ps, warpfunc, img_size, blur_sigma=0)\n    norm_min = np.min(img) if norm_min is None else norm_min\n    norm_max = np.max(img) if norm_max is None else norm_max\n    img = (img-norm_min)/((norm_max-norm_min)+1e-6)\n    #img = cv.normalize(img, None, 0, 1.0, cv.NORM_MINMAX)\n    plt.imshow(img, interpolation='bilinear', cmap='viridis')\n    if not show_axes:\n        plt.xticks([])\n        plt.yticks([])\n    else:\n        xt = plt.xticks()[0][1:-1]\n        xticklabs = np.linspace(x_range[0], x_range[1], len(xt))\n        xticklabs = [\"{}\".format(int(x)) for x in xticklabs]\n\n        yt = plt.yticks()[0][1:-1]\n        yticklabs = np.linspace(y_range[0], y_range[1], len(yt))\n        yticklabs = [\"{}\".format(int(y)) for y in yticklabs]\n\n        plt.xticks(ticks=xt, labels=xticklabs)\n        plt.yticks(ticks=yt, labels=yticklabs)\n\n        plt.xlabel(\"$v_x$\")\n        plt.ylabel(\"$v_y$\")\n\n    if show_gt:\n        xloc = ((gt[0]-x_range[0])/(width))*imshape[1]\n        yloc = ((gt[1]-y_range[0])/(height))*imshape[0]\n        plt.axhline(y=yloc, color='r', linestyle='--')\n        plt.axvline(x=xloc, color='r', linestyle='--')\n    if show:\n        plt.show()\n\ndef find_new_range(search_axes, param):\n    \"\"\"\n    During grid search, we need to find a new search range once we have located\n    an optimal parameter. This function gives us a new search range for a given axis\n    of the search space, given a parameter value, such that all the unsearched domain around\n    that parameter is encompassed.\n    @param search_axes The previous set of samples along one axis of the search space\n    @param The current motion parameter\n    @returns The new parameter search range\n    \"\"\"\n    magnitude = np.abs(param)\n    nearest_idx = np.searchsorted(search_axes, param)\n    if nearest_idx >= len(search_axes)-1:\n        d1 = np.abs(search_axes[-1]-search_axes[-2])\n        d2 = d1\n    elif nearest_idx == 0:\n        d1 = np.abs(search_axes[0]-search_axes[-1])\n        d2 = np.abs(search_axes[0]-search_axes[1])\n    else:\n        d1 = np.abs(search_axes[nearest_idx]-search_axes[nearest_idx-1])\n        d2 = np.abs(search_axes[nearest_idx]-search_axes[nearest_idx+1])\n    param_range = [param-d1, param+d2]\n    return param_range\n\ndef grid_search_optimisation(xs, ys, ts, ps, warp_function, objective_function, img_size, param_ranges=None,\n        log_scale=True, num_samples_per_param=5, depth=0, th0=1, max_iters=20):\n    \"\"\"\n    Recursive grid-search optimization as per SOFAS. For each axis of the parameter space, samples that\n    space evenly. Having found the best point in the space, resamples the region surrounding that point,\n    expanding the range if necessary. Continues to do this until convergence (search space is smaller than\n    th0) or until iterations exceed max_iters. Can select to logarithmically sample the search space (ie\n    samples are taken more densely near the origin).\n\n    @param xs x components of events as np array\n    @param ys y components of events as np array\n    @param ts t components of events as np array\n    @param ps p components of events as np array\n    @param warp_function The warp function to use\n    @param objective_function The objective function to use\n    @param img_size The size of the event camera sensor\n    @param param_ranges A list of lists, where each list contains the search range for\n       the given warp function parameter. If None, the default is to search from -100 to 100 for\n       each parameter.\n    @param log_scale If true, the sample points are drawn from a log scale. This means that\n       the parameter space is searched more frequently near the origin and less frequently at\n       the fringes.\n    @param num_samples_per_param How many samples to take per parameter. The number of evaluations\n       this method needs to perform is equal to num_samples_per_param^warp_function.dims. Thus,\n       for high dimensional warp functions, it is advised to keep this value low. Must be greater\n       than 5 and odd.\n    @param depth Keeps track of the recursion depth\n    @param th0 When the subgrid search radius is smaller than th0, convergence is reached.\n    @param max_iters Maximum number of iterations\n    @returns The optimal parameter\n    \"\"\"\n    assert num_samples_per_param%2==1 and num_samples_per_param>=5\n\n    optimal = grid_search_initial(xs, ys, ts, ps, warp_function, copy.deepcopy(objective_function),\n            img_size, param_ranges=param_ranges, log_scale=log_scale,\n            num_samples_per_param=num_samples_per_param)\n\n    params = optimal[\"min_params\"]\n    new_param_ranges = []\n    max_range = 0\n    # Iterate over each search axis and each element of the \n    # optimal parameter to find new search range\n    for sa, param in zip(optimal[\"search_axes\"], params):\n        new_range = find_new_range(sa, param)\n        new_param_ranges.append(new_range)\n        max_range = np.abs(new_range[1]-new_range[0]) if np.abs(new_range[1]-new_range[0]) > max_range else max_range\n    if max_range >= th0 and depth < max_iters:\n        return recursive_search(xs,ys,ts,ps,warp_function,objective_function,img_size,\n                param_ranges=new_param_ranges, log_scale=log_scale,\n                num_samples_per_param=num_samples_per_param, depth=depth+1)\n    else:\n        return optimal\n\n\n\ndef grid_search_initial(xs, ys, ts, ps, warp_function, objective_function, img_size, param_ranges=None,\n        log_scale=True, num_samples_per_param=5):\n    \"\"\"\n    Recursive grid-search optimization as per SOFAS. Given a set of ranges for each parametrisation axis,\n    searches that range at evenly sampled points. Can also use a logarithmically sampled space (samples are\n    denser near the origin) if desired.\n\n    @param xs x components of events as np array\n    @param ys y components of events as np array\n    @param ts t components of events as np array\n    @param ps p components of events as np array\n    @param warp_function The warp function to use\n    @param objective_function The objective function to use\n    @param img_size The size of the event camera sensor\n    @param param_ranges A list of lists, where each list contains the search range for\n       the given warp function parameter. If None, the default is to search from -100 to 100 for\n       each parameter.\n    @param log_scale If true, the sample points are drawn from a log scale. This means that\n       the parameter space is searched more frequently near the origin and less frequently at\n       the fringes.\n    @param num_samples_per_param How many samples to take per parameter. The number of evaluations\n       this method needs to perform is equal to num_samples_per_param^warp_function.dims. Thus,\n       for high dimensional warp functions, it is advised to keep this value low. Must be greater\n       than 5 and odd.\n    @returns optimal is a dict with keys 'params' (the list of sampling coordinates used),\n        'eval' (the evaluation at each sample coordinate), 'search_axes' (the sample coordinates on each parameter axis),\n        'min_params' (the best parameter, minimsing the optimisation problem) and 'min_func_eval' (the function value at\n        the best parameter).\n    \"\"\"\n    assert num_samples_per_param%2 == 1\n\n    if log_scale:\n        #Function is sampled from 10^x from 0 to 2\n        scale = np.logspace(0, 2.0, int(num_samples_per_param/2.0)+1)[1:]\n        scale /= scale[-1]\n    else:\n        scale = np.linspace(0, 1.0, int(num_samples_per_param/2.0)+1)[1:]\n\n    # If the parameter ranges are empty, intialise them\n    if param_ranges is None:\n        param_ranges = []\n        for i in range(warp_function.dims):\n            param_ranges.append([-150, 150])\n\n    axes = []\n    for param_range in param_ranges:\n        rng = param_range[1]-param_range[0]\n        mid = param_range[0] + rng/2.0\n        rescale_pos = np.array(mid+scale*(rng/2.0))\n        rescale_neg = np.array(mid-scale*(rng/2.0))[::-1]\n        rescale = np.concatenate((rescale_neg, np.array([mid]), rescale_pos))\n        axes.append(rescale)\n    grids = np.meshgrid(*axes)\n    coords = np.vstack(map(np.ravel, grids))\n\n    output = {\"params\":[], \"eval\": [], \"search_axes\": axes}\n    best_eval = 0\n    best_params = None\n\n    for params in zip(*coords):\n        f_eval = objective_function.evaluate_function(params=params, xs=xs, ys=ys, ts=ts, ps=ps,\n                warpfunc=warp_function, img_size=img_size, blur_sigma=1.0)\n        output[\"params\"].append(params)\n        output[\"eval\"].append(f_eval)\n        if f_eval < best_eval:\n            best_eval = f_eval\n            best_params = params\n\n    output[\"min_params\"] = best_params\n    output[\"min_func_eval\"] = best_eval\n    return output\n\ndef optimize_contrast(xs, ys, ts, ps, warp_function, objective, optimizer=opt.fmin_bfgs, x0=None,\n        numeric_grads=False, blur_sigma=None, img_size=(180, 240), grid_search_init=False, minimum_events=200):\n    \"\"\"\n    Optimize contrast for a set of events using gradient based optimiser\n    @param xs x components of events as np array\n    @param ys y components of events as np array\n    @param ts t components of events as np array\n    @param ps p components of events as np array\n    @param warp_function (function) The function with which to warp the events\n    @param objective (objective class object) The objective to optimize\n    @param optimizer (function) The optimizer to use\n    @param x0 (np array) The initial guess for optimization\n    @param numeric_grads (bool) If true, use numeric derivatives, otherwise use analytic drivatives if available.\n        Numeric grads tend to be more stable as they are a little less prone to noise and don't require as much\n        tuning on the blurring parameter. However, they do make optimization slower.\n    @param img_size (tuple) The size of the event camera sensor\n    @param blur_sigma (float) Size of the blurring kernel. Blurring the images of warped events can\n        have a large impact on the convergence of the optimization.\n    @returns The max arguments for the warp parameters wrt the objective\n    \"\"\"\n    if grid_search_init and x0 is None:\n        init_obj = copy.deepcopy(objective)\n        init_obj.adaptive_lifespan = False\n        minv = recursive_search(xs, ys, ts, ps, warp_function, init_obj, img_size, log_scale=False)\n        x0 = minv[\"min_params\"]\n    elif x0 is None:\n        x0 = np.array([0,0])\n    objective.iter_update(x0)\n    args = (xs, ys, ts, ps, warp_function, img_size, blur_sigma)\n    if numeric_grads:\n        argmax = optimizer(objective.evaluate_function, x0, args=args, epsilon=1, disp=False, callback=objective.iter_update)\n    else:\n        argmax = optimizer(objective.evaluate_function, x0, fprime=objective.evaluate_gradient, args=args, disp=True, callback=objective.iter_update)\n    return argmax\n\ndef optimize(xs, ys, ts, ps, warp, obj, numeric_grads=True, img_size=(180, 240)):\n    \"\"\"\n    Optimize contrast for a set of events using gradient based optimiser.\n    Uses optimize_contrast() for the optimiziation, but allows\n    blurring schedules for successive optimization iterations.\n    Parameters:\n    @param xs x components of events as np array\n    @param ys y components of events as np array\n    @param ts t components of events as np array\n    @param ps p components of events as np array\n    @params warp (function) The function with which to warp the events\n    @params obj (objective class object) The objective to optimize\n    @params numeric_grads (bool) If true, use numeric derivatives, otherwise use analytic drivatives if available.\n        Numeric grads tend to be more stable as they are a little less prone to noise and don't require as much\n        tuning on the blurring parameter. However, they do make optimization slower.\n    @params img_size (tuple) The size of the event camera sensor\n    @returns The max arguments for the warp parameters wrt the objective\n    \"\"\"\n    numeric_grads = numeric_grads if obj.has_derivative else True\n    argmax_an = optimize_contrast(xs, ys, ts, ps, warp, obj, numeric_grads=numeric_grads, blur_sigma=1.0, img_size=img_size)\n    return argmax_an\n\ndef optimize_r2(xs, ys, ts, ps, warp, obj, numeric_grads=True, img_size=(180, 240)):\n    \"\"\"\n    Optimize contrast for a set of events, finishing with SoE loss.\n    @param xs x components of events as np array\n    @param ys y components of events as np array\n    @param ts t components of events as np array\n    @param ps p components of events as np array\n    @param warp (function) The function with which to warp the events\n    @param obj (objective class object) The objective to optimize\n    @param numeric_grads (bool) If true, use numeric derivatives, otherwise use analytic drivatives if available.\n        Numeric grads tend to be more stable as they are a little less prone to noise and don't require as much\n        tuning on the blurring parameter. However, they do make optimization slower.\n    @param img_size (tuple) The size of the event camera sensor\n    @returns The max arguments for the warp parameters wrt the objective\n    \"\"\"\n    soe_obj = soe_objective()\n    numeric_grads = numeric_grads if obj.has_derivative else True\n    argmax_an = optimize_contrast(xs, ys, ts, ps, warp, obj, numeric_grads=numeric_grads, blur_sigma=None)\n    argmax_an = optimize_contrast(xs, ys, ts, ps, warp, soe_obj, x0=argmax_an, numeric_grads=numeric_grads, blur_sigma=1.0)\n    return argmax_an\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Quick demo of various objectives.\n    Args:\n        path Path to h5 file with event data\n        gt Ground truth optic flow for event slice\n        img_size The size of the event camera sensor\n    \"\"\"\n    import argparse\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"h5 events path\")\n    parser.add_argument(\"--gt\", nargs='+', type=float, default=(0,0))\n    parser.add_argument(\"--img_size\", nargs='+', type=float, default=(180,240))\n    args = parser.parse_args()\n\n    xs, ys, ts, ps = read_h5_event_components(args.path)\n    ts = ts-ts[0]\n    gt_params = tuple(args.gt)\n    img_size=tuple(args.img_size)\n\n    start_idx = 20000\n    end_idx=start_idx+15000\n    blur = None\n\n    draw_objective_function(xs[start_idx:end_idx], ys[start_idx:end_idx], ts[start_idx:end_idx], ps[start_idx:end_idx], variance_objective(), linvel_warp())\n\n    objectives = [r1_objective(), zhu_timestamp_objective(), variance_objective(), sos_objective(), soe_objective(), moa_objective(),\n            isoa_objective(), sosa_objective(), rms_objective()]\n    warp = linvel_warp()\n    for obj in objectives:\n        argmax = optimize(xs[start_idx:end_idx], ys[start_idx:end_idx], ts[start_idx:end_idx], ps[start_idx:end_idx], warp, obj, numeric_grads=True)\n        loss = obj.evaluate_function(argmax, xs[start_idx:end_idx], ys[start_idx:end_idx], ts[start_idx:end_idx],\n                ps[start_idx:end_idx], warp, img_size=img_size)\n        gtloss = obj.evaluate_function(gt_params, xs[start_idx:end_idx], ys[start_idx:end_idx],\n                ts[start_idx:end_idx], ps[start_idx:end_idx], warp, img_size=img_size)\n        print(\"{}:({})={}, gt={}\".format(obj.name, argmax, loss, gtloss))\n        if obj.has_derivative:\n            argmax = optimize(xs[start_idx:end_idx], ys[start_idx:end_idx], ts[start_idx:end_idx],\n                    ps[start_idx:end_idx], warp, obj, numeric_grads=False)\n            loss_an = obj.evaluate_function(argmax, xs[start_idx:end_idx], ys[start_idx:end_idx],\n                    ts[start_idx:end_idx], ps[start_idx:end_idx], warp, img_size=img_size)\n            print(\"   analytical:{}={}\".format(argmax, loss_an))\n"
  },
  {
    "path": "lib/contrast_max/objectives.py",
    "content": "import numpy as np\nimport torch\nfrom ..util.event_util import *\nfrom ..representations.image import *\nfrom scipy.ndimage.filters import gaussian_filter\nfrom abc import ABC, abstractmethod\nfrom ..util.util import plot_image\nimport cv2 as cv\n\nclass objective_function(ABC):\n\"\"\"\nParent class for all objective functions for contrast maximisation\n\"\"\"\n    def __init__(self, name=\"template\", use_polarity=True,\n            has_derivative=True, default_blur=1.0, adaptive_lifespan=False,\n            pixel_crossings=5, minimum_events=10000):\n        \"\"\"\n        Constructor, sets member variables.\n        @param name Sets the name of the objective function (eg: 'variance')\n        @param use_polarity If true, use the polarity of the events in generating IWEs \n        @param has_derivative If true, this function has a defined analytical derivative.\n            Else, will use numerically estimated derivatives.\n        @param default_blur Sets the default standard deviation for the Gaussian blurring kernel\n        @param adaptive_lifespan Many implementations of contrast maximisation use assumptions of\n            linear motion wrt the chosen motion model. A given estimate of the motion parameters\n            implies a lifespan of the events. If 'adaptive_lifespan' is True, the number of events\n            used during warping is cut to that lifespan for each optimisation step, computed using\n            'pixel_crossings'. EG If motion model is optic flow velocity and the\n            estimate = 12 pixels/second and 'pixel_crossings'=3, then the lifespan will\n            be 3/12=0.25 seconds.\n        @param pixel_crossings Number of pixel crossings used to calculate 'adaptive_lifespan'\n        @param minimum_events The minimal number of events that 'adaptive_lifespan' will cut to\n        \"\"\"\n        self.name = name\n        self.use_polarity = use_polarity\n        self.has_derivative = has_derivative\n        self.default_blur = default_blur\n        self.adaptive_lifespan = adaptive_lifespan\n        self.pixel_crossings = pixel_crossings\n        self.minimum_events = minimum_events\n\n        self.recompute_lifespan = True\n        self.lifespan = 0.5\n        self.s_idx = 0\n        self.num_events = None\n        super().__init__()\n\n    @abstractmethod\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Define the warp function. The function can either receive events and motion\n        paramters as input, to compute the IWE and evaluate the objective function,\n        or receive a precomputed IWE. An example is given in comments.\n        @param params The motion parameters to evaluate at\n        @param xs x components of events as list\n        @param ys y components of events as list\n        @param ts t components of events as list\n        @param ps p components of events as list\n        @param warpfunc The desired warping function\n        @param img_size The size of the image sensor/resolution\n        @param blur_sigma The desired amount of blurring to apply to IWE\n        @param show_img Debugging tool, if true, show the IWE in a matplotlib window\n        @param iwe Precomputed IWE to evalute the objective function for\n        @returns Evaluation of objective function at parameters 'params'\n        \"\"\"\n        #if iwe is None:\n        #    iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size,\n        #            use_polarity=self.use_polarity, compute_gradient=False)\n        #blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        #if blur_sigma > 0:\n        #    iwe = gaussian_filter(iwe, blur_sigma)\n        #loss = compute_loss_here...\n        #return loss\n        pass\n\n    @abstractmethod\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        Define the gradient of the warp function, if available (else numeric gradient\n        will be computed). The function can either receive events and motion\n        paramters as input, to compute the IWE and dIWE/dParams and evaluate the objective function,\n        or receive a precomputed IWE and dIWE/dParams. An example is given in comments.\n        @param params The motion parameters to evaluate at\n        @param xs x components of events as list\n        @param ys y components of events as list\n        @param ts t components of events as list\n        @param ps p components of events as list\n        @param warpfunc The desired warping function\n        @param img_size The size of the image sensor/resolution\n        @param blur_sigma The desired amount of blurring to apply to IWE\n        @param show_img Debugging tool, if true, show the IWE in a matplotlib window\n        @param iwe Precomputed IWE to evalute the objective function for\n        @param iwe Precomputed gradient of IWE wrt motion params to evalute the gradient\n            of the objective function\n        @returns Gradient of objective function wrt motion parameters at 'params'\n        \"\"\"\n        #if iwe is None or d_iwe is None:\n        #    iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size,\n        #            use_polarity=self.use_polarity, compute_gradient=True)\n        #blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        #if blur_sigma > 0:\n        #    d_iwe = gaussian_filter(d_iwe, blur_sigma)\n\n        #gradient = []\n        #for grad_dim in range(d_iwe.shape[0]):\n        #    gradient.append(compute_gradient_here...)\n        #grad = np.array(gradient)\n        #return grad\n        pass\n\n    def iter_update(self, params, pixel_crossings=None):\n        \"\"\"\n        Housekeeping function that runs as a callback at each optimisation step\n        if 'adaptive_lifespan' is set True\n        @param The current motion parameters\n        @param The number of pixel crossings to compute the new lifespan\n        \"\"\"\n        pixel_crossings = self.pixel_crossings if pixel_crossings is None else pixel_crossings\n        magnitude = np.linalg.norm(params)\n        if magnitude == 0:\n            dt = 5\n        else:\n            dt = pixel_crossings/magnitude\n        self.lifespan = dt\n        self.recompute_lifespan = True\n\n    def update_lifespan(self, ts):\n        \"\"\"\n        Set the new lifespan and thus the new set of events to be used in optimisation\n        @param ts The timestamps of the events currently used\n        \"\"\"\n        print(\"update lifespan\")\n        if self.adaptive_lifespan:\n            self.s_idx = np.searchsorted(ts, ts[-1]-self.lifespan)\n            self.s_idx = len(ts)-self.minimum_events if len(ts)-self.s_idx < self.minimum_events else self.s_idx\n            print(\"New num events = {}/{}\".format(len(ts)-self.s_idx, len(ts)))\n        if self.num_events is None:\n            self.num_events = len(ts)-self.s_idx\n\n\ndef cut_events_to_lifespan(xs, ys, ts, ps, params, pixel_crossings, minimum_events=10000):\n    \"\"\"\n    Given events, cut the events down to the lifespan defined by the motion parameters\n    and desired pixel crossings\n    @param xs x components of events as list\n    @param ys y components of events as list\n    @param ts t components of events as list\n    @param ps p components of events as list\n    @param params The motion parameters to evaluate at\n    @param pixel_crossings Number of pixel crossings used to calculate new lifespan\n    @param minimum_events The minimal number of events that the output set of\n        events will contain\n    @returns The set of events cut to the new lifespan*desired pixel crossings\n    \"\"\"\n    magnitude = np.linalg.norm(params)\n    dt = pixel_crossings/magnitude\n    s_idx = np.searchsorted(ts, ts[-1]-dt)\n    num_events = len(xs)-s_idx\n    s_idx = len(xs)-minimum_events if num_events < minimum_events else s_idx\n    print(\"Magnitude: {:.2f} pix/s. dt({:.2f} pix)={}. New range is {}:{}={} events\".format(magnitude, pixel_crossings, dt, s_idx, len(xs), len(xs)-s_idx))\n    return xs[s_idx:-1], ys[s_idx:-1], ts[s_idx:-1], ps[s_idx:-1]\n\ndef get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, compute_gradient=False,\n        use_polarity=True, return_events=False, return_per_event_contrast=False):\n    \"\"\"\n    Given a set of parameters, events and warp function, get the warped image and derivative image\n    if required.\n    @param params The motion parameters to evaluate at\n    @param xs x components of events as list\n    @param ys y components of events as list\n    @param ts t components of events as list\n    @param ps p components of events as list\n    @param warpfunc The desired warping function\n    @param img_size The size of the image sensor/resolution\n    @param compute_gradient If True, compute and return the gradient of the IWE wrt motion params\n    @param use_polarity If True, use the polarity of the events in IWE formation\n    @param return_events If True, return the warped events as well\n    @param return_per_event_contrast If True, return the contrast in the IWE at\n        each warped event's location\n    @returns IWE, dIWE/dParams, warped events, local contrast of each event in IWE\n    \"\"\"\n    if not use_polarity:\n        ps = np.abs(ps)\n    xs, ys, jx, jy = warpfunc.warp(xs, ys, ts, ps, ts[-1], params, compute_grad=compute_gradient)\n    mask = events_bounds_mask(xs, ys, 0, img_size[1], 0, img_size[0])\n    xs, ys, ts, ps = xs*mask, ys*mask, ts*mask, ps*mask\n    if compute_gradient:\n        jx, jy = jx*mask, jy*mask\n    iwe, iwe_drv = events_to_image_drv(xs, ys, ps, jx, jy,\n            interpolation='bilinear', compute_gradient=compute_gradient)\n    returnval = [iwe, iwe_drv]\n    if return_events:\n        returnval.append((xs, ys))\n    if return_per_event_contrast:\n        weights = image_to_event_weights(xs, ys, iwe)\n        returnval.append(weights)\n    return tuple(returnval)\n\n\nclass variance_objective(objective_function):\n    \"\"\"\n    Variance objective from 'Gallego, Accurate Angular Velocity Estimation with an Event Camera, RAL'17'\n    \"\"\"\n    def __init__(self, adaptive_lifespan=False, minimum_events=10000):\n        super().__init__(name=\"variance\", use_polarity=True, has_derivative=True,\n                default_blur=1.0, adaptive_lifespan=adaptive_lifespan, pixel_crossings=5,\n                minimum_events=minimum_events)\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by var(g(x)) where g(x) is IWE\n        \"\"\"\n        if iwe is None:\n            if self.adaptive_lifespan:\n                #print(\"{}/{}\".format(self.s_idx, len(ts)))\n                #ps = ps/len(ps)*100000\n                if self.recompute_lifespan:\n                    print(\"Updating lifespan\")\n                    self.update_lifespan(ts)\n                    self.recompute_lifespan = False\n                xs, ys, ts, ps = xs[self.s_idx:-1], ys[self.s_idx:-1], ts[self.s_idx:-1], ps[self.s_idx:-1]\n                ps = ps*100\n\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps,\n                    warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=False)\n            #print(\"iwe={}\".format(np.sum(iwe)))\n\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n        loss = np.var(iwe-np.mean(iwe))\n        #print(loss)\n        return -loss\n\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        Gradient given by 2*(g(x)-mu(g(x))*(g'(x)-mu(g'(x))) where g(x) is the IWE\n        \"\"\"\n        if iwe is None or d_iwe is None:\n            if self.adaptive_lifespan:\n                if self.recompute_lifespan:\n                    self.update_lifespan(ts)\n                    self.recompute_lifespan = False\n                xs, ys, ts, ps = xs[self.s_idx:-1], ys[self.s_idx:-1], ts[self.s_idx:-1], ps[self.s_idx:-1]\n                ps = ps*100\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=True)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            d_iwe = gaussian_filter(d_iwe, blur_sigma)\n\n        gradient = []\n        zero_mean = 2.0*(iwe-np.mean(iwe))\n        img_component = 2.0*(iwe-np.mean(iwe))\n        for grad_dim in range(d_iwe.shape[0]):\n            mean_jac = d_iwe[grad_dim]-np.mean(d_iwe[grad_dim])\n            #gradient.append(np.mean(zero_mean*(d_iwe[grad_dim]-np.mean(d_iwe[grad_dim]))))\n            #gradient.append((np.mean(zero_mean*(d_iwe[grad_dim]-np.mean(d_iwe[grad_dim])))))\n            gradient.append(np.mean(img_component*d_iwe[grad_dim]))\n        grad = np.array(gradient)\n        return -grad\n\nclass rms_objective(objective_function):\n    \"\"\"\n    Root mean squared objective\n    \"\"\"\n    def __init__(self):\n        self.use_polarity = True\n        self.name = \"rms\"\n        self.has_derivative = True\n        self.default_blur=1.0\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by l2(g(x))^2 where g(x) is IWE\n        \"\"\"\n        if iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=False)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n        norm = np.linalg.norm(iwe, 2)\n        num_pix = iwe.shape[0]*iwe.shape[1]\n        loss = (norm*norm)/num_pix\n        return -loss\n\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        Gradient given by 2*(mu(g(x)*g'(x))) where g(x) is IWE\n        \"\"\"\n        if iwe is None or d_iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=True)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            d_iwe = gaussian_filter(d_iwe, blur_sigma)\n\n        gradient = []\n        for grad_dim in range(d_iwe.shape[0]):\n            gradient.append(2.0*np.mean(iwe*d_iwe[grad_dim]))\n        grad = np.array(gradient)\n        return -grad\n\nclass sos_objective(objective_function):\n    \"\"\"\n    Sum of squares objective (Stoffregen et al, Event Cameras, Contrast\n    Maximization and Reward Functions: an Analysis, CVPR19)\n    \"\"\"\n\n    def __init__(self, adaptive_lifespan=False, minimum_events=10000):\n        self.use_polarity = True\n        self.name = \"sos\"\n        self.has_derivative = True\n        self.default_blur=1.0\n        self.adaptive_lifespan = adaptive_lifespan\n        self.pixel_crossings = 5\n        self.minimum_events = minimum_events\n        self.current_num_events = minimum_events\n        self.div = 1\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by g(x)^2 where g(x) is IWE\n        \"\"\"\n        if iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=False)\n            iwe /= self.div\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n        sos = np.mean(iwe*iwe)\n        return -sos\n\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        Gradient given by 2*g(x)*g'(x) where g(x) is IWE\n        \"\"\"\n        if iwe is None or d_iwe is None:\n            _, self.start = find_lifespan(ts, params, self.pixel_crossings)\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=True)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            d_iwe = gaussian_filter(d_iwe, blur_sigma)\n\n        gradient = []\n        img_component = (iwe*2.0)/(self.div*self.div)\n        for grad_dim in range(d_iwe.shape[0]):\n            gradient.append(np.mean(d_iwe[grad_dim]*img_component))\n        grad = np.array(gradient)\n        return -grad\n\nclass soe_objective(objective_function):\n    \"\"\"\n    Sum of exponentials objective (Stoffregen et al, Event Cameras, Contrast\n    Maximization and Reward Functions: an Analysis, CVPR19)\n    \"\"\"\n    def __init__(self):\n        self.use_polarity = False\n        self.name = \"soe\"\n        self.has_derivative = True\n        self.default_blur=2.5\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by e^g(x) where g(x) is IWE\n        \"\"\"\n        if iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=False)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n        exp = np.exp(iwe.astype(np.double))\n        soe = np.mean(exp)\n        return -soe\n\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        Gradient given by e^g(x)*g'(x) where g(x) is IWE\n        \"\"\"\n        if iwe is None or d_iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=True)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            d_iwe = gaussian_filter(d_iwe, blur_sigma)\n            iwe = gaussian_filter(iwe, blur_sigma)\n        gradient = []\n        soe_deriv = np.exp(iwe.astype(np.double))#/num_pix\n        for grad_dim in range(d_iwe.shape[0]):\n            gradient.append(np.mean(soe_deriv*d_iwe[grad_dim]))\n        grad = np.array(gradient)\n        return -grad\n\nclass moa_objective(objective_function):\n    \"\"\"\n    Max of accumulations objective (Stoffregen et al, Event Cameras, Contrast\n    Maximization and Reward Functions: an Analysis, CVPR19)\n    \"\"\"\n    def __init__(self):\n        self.use_polarity = False\n        self.name = \"moa\"\n        self.has_derivative = False\n        self.default_blur=3.0\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by max(g(x)) where g(x) is IWE\n        \"\"\"\n        if iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=False)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n        moa = np.max(iwe)\n        return -moa\n\n    def evaluate_gradient(self, iwe=None, d_iwe=None, blur_sigma=None, showimg=False):\n        \"\"\"\n        No analytic derivative known\n        \"\"\"\n        return None\n\nclass isoa_objective(objective_function):\n    \"\"\"\n    Inverse sum of accumulations objective (Stoffregen et al, Event Cameras, Contrast\n    Maximization and Reward Functions: an Analysis, CVPR19)\n    \"\"\"\n    def __init__(self, thresh=0.5):\n        self.use_polarity = False\n        self.thresh = thresh\n        self.name = \"isoa\"\n        self.has_derivative = True\n        self.default_blur=1.0\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by sum(1 where g(x)>1 else 0) where g(x) is IWE.\n        This formulation has similar properties to original ISoA, but negation makes derivative\n        more stable than inversion.\n        \"\"\"\n        if iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=False)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n        isoa = np.sum(np.where(iwe>self.thresh, 1, 0))\n        return isoa\n\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        Gradient = g'(x) where thresh<g(x), otherwise 0\n        \"\"\"\n        if iwe is None or d_iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=True)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n            d_iwe = gaussian_filter(d_iwe, blur_sigma)\n        gradient = []\n        mask = np.ma.masked_greater(iwe, self.thresh)\n        iwe[iwe > self.thresh] = 1.0\n        iwe[iwe <= self.thresh] = 0.0\n        for grad_dim in range(d_iwe.shape[0]):\n            gradient.append(np.sum(d_iwe[grad_dim]*iwe))\n        grad = np.array(gradient)\n        return -grad\n\nclass sosa_objective(objective_function):\n    \"\"\"\n    Sum of Supressed Accumulations objective (Stoffregen et al, Event Cameras, Contrast\n    Maximization and Reward Functions: an Analysis, CVPR19)\n    \"\"\"\n    def __init__(self, p=3):\n        self.p = p\n        self.use_polarity = False\n        self.name = \"sosa\"\n        self.has_derivative = True\n        self.default_blur=2.0\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by e^(-p*g(x)) where g(x) is IWE. p is arbitrary shifting factor,\n        higher values give better noise performance but lower accuracy.\n        \"\"\"\n        if iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=False)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n        exp = np.exp(-self.p*iwe.astype(np.double))\n        sosa = np.sum(exp)\n        return -sosa\n\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        Gradient = p*-e^(-p*g(x))*g'(x) where g(x) is iwe\n        \"\"\"\n        if iwe is None or d_iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=True)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n            d_iwe = gaussian_filter(d_iwe, blur_sigma)\n        gradient = []\n        exp = np.exp((-self.p*iwe).astype(np.double))\n        fx = -self.p*exp\n        for grad_dim in range(d_iwe.shape[0]):\n            gradient.append(np.sum(d_iwe[grad_dim]*fx))\n        grad = np.array(gradient)\n        return -grad\n\nclass zhu_timestamp_objective(objective_function):\n    \"\"\"\n    Squared timestamp images objective (Zhu et al, Unsupervised Event-based\n    Learning of Optical Flow, Depth, and Egomotion, CVPR19)\n    \"\"\"\n    def __init__(self):\n        self.use_polarity = True\n        self.name = \"zhu\"\n        self.has_derivative = False\n        self.default_blur=2.0\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by g(x)^2*h(x)^2 where g(x) is image of average timestamps of positive events\n        and h(x) is image of average timestamps of negative events.\n        \"\"\"\n        if iwe is None:\n            xs, ys, jx, jy = warpfunc.warp(xs, ys, ts, ps, ts[-1], params, compute_grad=False)\n            mask = events_bounds_mask(xs, ys, 0, img_size[1], 0, img_size[0])\n            xs, ys, ts, ps = xs*mask, ys*mask, ts*mask, ps*mask\n            posimg, negimg = events_to_zhu_timestamp_image(xs, ys, ts, ps, compute_gradient=False, showimg=showimg)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            posimg = gaussian_filter(posimg, blur_sigma)\n            negimg = gaussian_filter(negimg, blur_sigma)\n        loss = -(np.sum(posimg*posimg)+np.sum(negimg*negimg))\n        return loss\n\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        No derivative known\n        \"\"\"\n        return None\n\nclass r1_objective(objective_function):\n    \"\"\"\n    R1 objective (Stoffregen et al, Event Cameras, Contrast\n    Maximization and Reward Functions: an Analysis, CVPR19)\n    \"\"\"\n    def __init__(self, p=3):\n        self.name = \"r1\"\n        self.use_polarity = False\n        self.has_derivative = False\n        self.p = p\n        self.default_blur = 1.0\n        self.last_sosa = 0\n\n    def evaluate_function(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None):\n        \"\"\"\n        Loss given by SOS and SOSA combined\n        \"\"\"\n        if iwe is None:\n            iwe, d_iwe = get_iwe(params, xs, ys, ts, ps, warpfunc, img_size, use_polarity=self.use_polarity, compute_gradient=False)\n        blur_sigma=self.default_blur if blur_sigma is None else blur_sigma\n        if blur_sigma > 0:\n            iwe = gaussian_filter(iwe, blur_sigma)\n        sos = np.mean(iwe*iwe)\n        exp = np.exp(-self.p*iwe.astype(np.double))\n        sosa = np.sum(exp)\n        if sosa > self.last_sosa:\n            return -sos\n        self.last_sosa = sosa\n        return -sos*sosa\n\n    def evaluate_gradient(self, params=None, xs=None, ys=None, ts=None, ps=None,\n            warpfunc=None, img_size=None, blur_sigma=None, showimg=False, iwe=None, d_iwe=None):\n        \"\"\"\n        No derivative known\n        \"\"\"\n        return None\n"
  },
  {
    "path": "lib/contrast_max/warps.py",
    "content": "import numpy as np\nimport torch\nfrom event_utils import *\nfrom abc import ABC, abstractmethod\n\nclass warp_function(ABC):\n\"\"\"\nBase class for objects that can warp events to a reference time\nvia a parametrizeable, differentiable motion model\n\"\"\"\n    def __init__(self, name, dims):\n        \"\"\"\n        Constructor.\n        @param name The name of the warp function (eg 'optic flow')\n        @param dims The number of degrees of freedom of the motion model\n        \"\"\"\n        self.name = name\n        self.dims = dims\n        super().__init__()\n\n    @abstractmethod\n    def warp(self, xs, ys, ts, ps, t0, params, compute_grad=False):\n        \"\"\"\n        Warp function which given a set of events and a reference time,\n        moves the events to that reference time via a motion model\n        @param xs x components of events as list\n        @param ys y components of events as list\n        @param ts t components of events as list\n        @param ps p components of events as list\n        @param t0 The reference time to which to warp the events to\n        @param params The parameters of the motion model for\n            which to warp the events\n        @param compute_grad If True, compute the gradient of the warp with \n            respect to the motion parameters for each event (the Jacobian)\n        @returns xs_warped, ys_warped, xs_jacobian, ys_jacobian: The warped\n            event locations and the gradients for each event as a tuple of four\n            numpy arrays\n        \"\"\"\n        #Warp the events...\n        #if compute_grad:\n        #   compute the jacobian of the warp function\n        pass\n\nclass linvel_warp(warp_function):\n    \"\"\"\n    This class implements linear velocity warping (global optic flow)\n    \"\"\"\n    def __init__(self):\n        warp_function.__init__(self, 'linvel_warp', 2)\n\n    def warp(self, xs, ys, ts, ps, t0, params, compute_grad=False):\n        dt = ts-t0\n        x_prime = xs-dt*params[0]\n        y_prime = ys-dt*params[1]\n        jacobian_x, jacobian_y = None, None\n        if compute_grad:\n            jacobian_x = np.zeros((2, len(x_prime)))\n            jacobian_y = np.zeros((2, len(y_prime)))\n            jacobian_x[0, :] = -dt\n            jacobian_y[1, :] = -dt\n        return x_prime, y_prime, jacobian_x, jacobian_y\n\nclass xyztheta_warp(warp_function):\n    \"\"\"\n    This class implements 4-DoF x,y,z,rotation warps from Mitrokhin etal, \n    \"Event-based moving object detection and tracking\"\n    \"\"\"\n    def __init__(self):\n        warp_function.__init__(self, 'xyztheta_warp', 4)\n\n    def warp(self, xs, ys, ts, ps, t0, params, compute_grad=False):\n        pass\n\nclass pure_rotation_warp(warp_function):\n    \"\"\"\n    This class implements pure rotation warps, with params\n    x,y,theta (x,y is center of rotation, theta is angular velocity\n    \"\"\"\n    def __init__(self):\n        warp_function.__init__(self, 'pure_rotation_warp', 4)\n{not:timeslice}\n    def warp(self, xs, ys, ts, ps, t0, params, compute_grad=False):\n        pass\n"
  },
  {
    "path": "lib/data_formats/__init__.py",
    "content": "# __init__.py\nfrom .data_utils import *\nfrom .read_events import *\n"
  },
  {
    "path": "lib/data_formats/add_hdf5_attribute.py",
    "content": "import argparse\nimport numpy as np\nimport h5py\nimport os\nimport glob\n\ndef endswith(path, extensions):\n    for ext in extensions:\n        if path.endswith(ext):\n            return True\n    return False\n\ndef get_filepaths_from_path_or_file(path, extensions=[], datafile_extensions=[\".txt\", \".csv\"]):\n    files = []\n    path = path.rstrip(\"/\")\n    if os.path.isdir(path):\n        for ext in extensions:\n            files += sorted(glob.glob(\"{}/*{}\".format(path, ext)))\n    else:\n        if endswith(path, extensions):\n            files.append(path)\n        elif endswith(path, datafile_extensions):\n            with open(path, 'r') as f:\n                #files.append(line) for line in f.readlines\n                files = [line.strip() for line in f.readlines()]\n    return files\n\ndef add_attribute(h5_filepaths, group, attribute_name, attribute_value, dry_run=False):\n    for h5_filepath in h5_filepaths:\n        print(\"adding {}/{}[{}]={}\".format(h5_filepath, group, attribute_name, attribute_value))\n        if dry_run:\n            continue\n        h5_file = h5py.File(h5_filepath, 'a')\n        dset = h5_file[\"{}/\".format(group)]\n        dset.attrs[attribute_name] = attribute_value\n        h5_file.close()\n\nif __name__ == \"__main__\":\n    # arguments\n    parser = argparse.ArgumentParser()\n    parser._action_groups.pop()\n    required = parser.add_argument_group('required arguments')\n    optional = parser.add_argument_group('optional arguments')\n\n    required.add_argument(\"--path\", help=\"Can be either 1: path to individual hdf file, \" +\n        \"2: txt file with list of hdf files, or \" +\n        \"3: directory (all hdf files in directory will be processed).\", required=True)\n    required.add_argument(\"--attr_name\", help=\"Name of new attribute\", required=True)\n    required.add_argument(\"--attr_val\", help=\"Value of new attribute\", required=True)\n    optional.add_argument(\"--group\", help=\"Group to add attribute to. Subgroups \" +\n            \"are represented like paths, eg: /group1/subgroup2...\", default=\"\")\n    optional.add_argument(\"--dry_run\", default=0, type=int,\n            help=\"If set to 1, will print changes without performing them\")\n\n    args = parser.parse_args()\n    path = args.path\n    extensions = [\".hdf\", \".h5\"]\n    files = get_filepaths_from_path_or_file(path, extensions=extensions)\n    print(files)\n    dry_run = False if args.dry_run <= 0 else True\n    add_attribute(files, args.group, args.attr_name, args.attr_val, dry_run=dry_run)\n"
  },
  {
    "path": "lib/data_formats/data_providers.py",
    "content": "import numpy as np\nimport h5py\n\n\nclass BaseDataLoader():\n    def __init__(self, data_root, iter_method='between_frames'):\n        pass\n\n    def __getitem__():\n        pass\n\n    def __len__(self):\n        return self.length\n"
  },
  {
    "path": "lib/data_formats/data_utils.py",
    "content": "import h5py\nimport numpy as np\n\ndef binary_search_h5_dset(dset, x, l=None, r=None, side='left'):\n    l = 0 if l is None else l\n    r = len(dset)-1 if r is None else r\n    while l <= r:\n        mid = l + (r - l)//2;\n        midval = dset[mid]\n        if midval == x:\n            return mid\n        elif midval < x:\n            l = mid + 1\n        else:\n            r = mid - 1\n    if side == 'left':\n        return l\n    return r\n\ndef binary_search_h5_timestamp(hdf_path, l, r, x, side='left'):\n    f = h5py.File(hdf_path, 'r')\n    return binary_search_h5_dset(f['events/ts'], x, l=l, r=r, side=side)\n"
  },
  {
    "path": "lib/data_formats/event_packagers.py",
    "content": "from abc import ABCMeta, abstractmethod\nimport h5py\nimport cv2 as cv\nimport numpy as np\n\nclass packager():\n    \"\"\"\n    Abstract base class for classes that package event-based data to\n    some storage format\n    \"\"\"\n    __metaclass__ = ABCMeta\n\n    def __init__(self, name, output_path, max_buffer_size=1000000):\n        \"\"\"\n        Set class attributes\n        @param name The name of the packager (eg: txt_packager)\n        @param output_path Where to dump event data\n        @param max_buffer_size For packagers that buffer data prior to\n            writing, how large this buffer may maximally be\n        \"\"\"\n        self.name = name\n        self.output_path = output_path\n        self.max_buffer_size = max_buffer_size\n\n    @abstractmethod\n    def package_events(self, xs, ys, ts, ps):\n        \"\"\"\n        Given events, write them to the file/store them into the buffer\n        @param xs x component of events\n        @param ys y component of events\n        @param ts t component of events\n        @param ps p component of events\n        @returns None\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def package_image(self, frame, timestamp):\n        \"\"\"\n        Given an image, write it to the file/buffer\n        @param frame The image frame to write to the file/buffer\n        @param timestamp The timestamp of the frame\n        @returns None\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def package_flow(self, flow, timestamp):\n        \"\"\"\n        Given an optic flow image, write it to the file/buffer\n        @param frame The optic flow image frame to write to the file/buffer\n        @param timestamp The timestamp of the optic flow frame\n        @returns None\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def add_metadata(self, num_events, num_pos, num_neg,\n            duration, t0, tk, num_imgs, num_flow):\n        \"\"\"\n        Add metadata to the file\n        @param num_events The number of events in the sequence\n        @param num_pos The numer of positive events in the sequence\n        @param num_neg The numer of negative events in the sequence\n        @param duration The length of the sequence in seconds\n        @param t0 The start time of the sequence\n        @param tk The end time of the sequence\n        @param num_imgs The number of images in the sequence\n        @param num_flow The number of optic flow frames in the sequence\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def set_data_available(self, num_images, num_flow):\n        \"\"\"\n        Configure the file/buffers depending on which data needs to be written\n        @param num_images How many images in the dataset\n        @param num_flow How many optic flow frames in the dataset\n        \"\"\"\n        pass\n\nclass hdf5_packager(packager):\n    \"\"\"\n    This class packages data to hdf5 files\n    \"\"\"\n    def __init__(self, output_path, max_buffer_size=1000000):\n        packager.__init__(self, 'hdf5', output_path, max_buffer_size)\n        print(\"CREATING FILE IN {}\".format(output_path))\n        self.events_file = h5py.File(output_path, 'w')\n        self.event_xs = self.events_file.create_dataset(\"events/xs\", (0, ), dtype=np.dtype(np.int16), maxshape=(None, ), chunks=True)\n        self.event_ys = self.events_file.create_dataset(\"events/ys\", (0, ), dtype=np.dtype(np.int16), maxshape=(None, ), chunks=True)\n        self.event_ts = self.events_file.create_dataset(\"events/ts\", (0, ), dtype=np.dtype(np.float64), maxshape=(None, ), chunks=True)\n        self.event_ps = self.events_file.create_dataset(\"events/ps\", (0, ), dtype=np.dtype(np.bool_), maxshape=(None, ), chunks=True)\n\n    def append_to_dataset(self, dataset, data):\n        dataset.resize(dataset.shape[0] + len(data), axis=0)\n        if len(data) == 0:\n            return\n        dataset[-len(data):] = data[:]\n\n    def package_events(self, xs, ys, ts, ps):\n        self.append_to_dataset(self.event_xs, xs)\n        self.append_to_dataset(self.event_ys, ys)\n        self.append_to_dataset(self.event_ts, ts)\n        self.append_to_dataset(self.event_ps, ps)\n\n    def package_image(self, image, timestamp, img_idx):\n        image_dset = self.events_file.create_dataset(\"images/image{:09d}\".format(img_idx),\n                data=image, dtype=np.dtype(np.uint8))\n        image_dset.attrs['size'] = image.shape\n        image_dset.attrs['timestamp'] = timestamp\n        image_dset.attrs['type'] = \"greyscale\" if image.shape[-1] == 1 or len(image.shape) == 2 else \"color_bgr\" \n\n    def package_flow(self, flow_image, timestamp, flow_idx):\n        flow_dset = self.events_file.create_dataset(\"flow/flow{:09d}\".format(flow_idx),\n                data=flow_image, dtype=np.dtype(np.float32))\n        flow_dset.attrs['size'] = flow_image.shape\n        flow_dset.attrs['timestamp'] = timestamp\n\n    def add_event_indices(self):\n        datatypes = ['images', 'flow']\n        for datatype in datatypes:\n            if datatype in self.events_file.keys():\n                s = 0\n                added = 0\n                ts = self.events_file[\"events/ts\"][s:s+self.max_buffer_size]\n                for image in self.events_file[datatype]:\n                    img_ts = self.events_file[datatype][image].attrs['timestamp']\n                    event_idx = np.searchsorted(ts, img_ts)\n                    if event_idx == len(ts):\n                        added += len(ts)\n                        s += self.max_buffer_size\n                        ts = self.events_file[\"events/ts\"][s:s+self.max_buffer_size]\n                        event_idx = np.searchsorted(ts, img_ts)\n                    event_idx = max(0, event_idx-1)\n                    self.events_file[datatype][image].attrs['event_idx'] = event_idx + added\n\n    def add_metadata(self, num_pos, num_neg,\n            duration, t0, tk, num_imgs, num_flow, sensor_size):\n        self.events_file.attrs['num_events'] = num_pos+num_neg\n        self.events_file.attrs['num_pos'] = num_pos\n        self.events_file.attrs['num_neg'] = num_neg\n        self.events_file.attrs['duration'] = tk-t0\n        self.events_file.attrs['t0'] = t0\n        self.events_file.attrs['tk'] = tk\n        self.events_file.attrs['num_imgs'] = num_imgs\n        self.events_file.attrs['num_flow'] = num_flow\n        self.events_file.attrs['sensor_resolution'] = sensor_size\n        self.add_event_indices()\n\n    def set_data_available(self, num_images, num_flow):\n        if num_images > 0:\n            self.image_dset = self.events_file.create_group(\"images\")\n            self.image_dset.attrs['num_images'] = num_images\n        if num_flow > 0:\n            self.flow_dset = self.events_file.create_group(\"flow\")\n            self.flow_dset.attrs['num_images'] = num_flow\n\n"
  },
  {
    "path": "lib/data_formats/h5_to_memmap.py",
    "content": "import argparse\nimport h5py\nimport numpy as np\nimport os, shutil\nimport json\n\nclass NpEncoder(json.JSONEncoder):\n    def default(self, obj):\n        if isinstance(obj, np.integer):\n            return int(obj)\n        elif isinstance(obj, np.floating):\n            return float(obj)\n        elif isinstance(obj, np.ndarray):\n            return obj.tolist()\n        else:\n            return super(NpEncoder, self).default(obj)\n\ndef find_safe_alternative(output_base_path):\n    i = 0\n    alternative_path = \"{}_{:09d}\".format(output_base_path, i)\n    while(os.path.exists(alternative_path)):\n        i += 1\n        alternative_path = \"{}_{:09d}\".format(output_base_path, i)\n        assert(i < 999999999)\n    return alternative_path\n\ndef save_additional_data_as_mmap(f, mmap_pth, data):\n    data_path = os.path.join(mmap_pth, data['mmap_filename'])\n    data_ts_path = os.path.join(mmap_pth, data['mmap_ts_filename'])\n    data_event_idx_path = os.path.join(mmap_pth, data['mmap_event_idx_filename'])\n    data_key = data['h5_key']\n    print('Writing {} to mmap {}, timestamps to {}'.format(data_key, data_path, data_ts_path))\n    h, w, c = 1, 1, 1\n    if data_key in f.keys():\n        num_data = len(f[data_key].keys())\n        if num_data > 0:\n            data_keys = list(f[data_key].keys())\n            data_size = f[data_key][data_keys[0]].attrs['size']\n            h, w = data_size[0], data_size[1]\n            c = 1 if len(data_size) <= 2 else data_size[2]\n    else:\n        num_data = 1\n    mmp_imgs = np.memmap(data_path, dtype='uint8', mode='w+', shape=(num_data, h, w, c))\n    mmp_img_ts = np.memmap(data_ts_path, dtype='float64', mode='w+', shape=(num_data, 1))\n    mmp_event_indices = np.memmap(data_event_idx_path, dtype='uint16', mode='w+', shape=(num_data, 1))\n\n    if data_key in f.keys():\n        data = []\n        data_timestamps = []\n        data_event_index = []\n        for img_key in f[data_key].keys():\n            data.append(f[data_key][img_key][:])\n            data_timestamps.append(f[data_key][img_key].attrs['timestamp'])\n            data_event_index.append(f[data_key][img_key].attrs['event_idx'])\n\n        data_stack = np.expand_dims(np.stack(data), axis=3)\n        data_ts_stack = np.expand_dims(np.stack(data_timestamps), axis=1)\n        data_event_indices_stack = np.expand_dims(np.stack(data_event_index), axis=1)\n        mmp_imgs[...] = data_stack\n        mmp_img_ts[...] = data_ts_stack\n        mmp_event_indices[...] = data_event_indices_stack\n\ndef write_metadata(f, metadata_path):\n    metadata = {}\n    for attr in f.attrs:\n        val = f.attrs[attr]\n        if isinstance(val, np.ndarray):\n            val = val.tolist()\n        metadata[attr] = val\n    with open(metadata_path, 'w') as js:\n        json.dump(metadata, js, cls=NpEncoder)\n\ndef h5_to_memmap(h5_file_path, output_base_path, overwrite=True):\n    output_pth = output_base_path\n    if os.path.exists(output_pth):\n        if overwrite:\n            print(\"Overwriting {}\".format(output_pth))\n            shutil.rmtree(output_pth)\n        else:\n            output_pth = find_safe_alternative(output_base_path)\n            print('Data will be extracted to: {}'.format(output_pth))\n    os.makedirs(output_pth)\n    mmap_pth = os.path.join(output_pth, \"memmap\")\n    os.makedirs(mmap_pth)\n\n    ts_path = os.path.join(mmap_pth, 't.npy')\n    xy_path = os.path.join(mmap_pth, 'xy.npy')\n    ps_path = os.path.join(mmap_pth, 'p.npy')\n    metadata_path = os.path.join(mmap_pth, 'metadata.json')\n\n    additional_data = {\n            \"images\":\n                {\n                    'h5_key' : 'images',\n                    'mmap_filename' : 'images.npy',\n                    'mmap_ts_filename' : 'timestamps.npy',\n                    'mmap_event_idx_filename' : 'image_event_indices.npy',\n                    'dims' : 3\n                },\n            \"flow\":\n                {\n                    'h5_key' : 'flow',\n                    'mmap_filename' : 'flow.npy',\n                    'mmap_ts_filename' : 'flow_timestamps.npy',\n                    'mmap_event_idx_filename' : 'flow_event_indices.npy',\n                    'dims' : 3\n                }\n    }\n\n    with h5py.File(h5_file_path, 'r') as f:\n        num_events = f.attrs['num_events']\n        num_images = f.attrs['num_imgs']\n        num_flow = f.attrs['num_flow']\n\n        mmp_ts = np.memmap(ts_path, dtype='float64', mode='w+', shape=(num_events, 1))\n        mmp_xy = np.memmap(xy_path, dtype='int16', mode='w+', shape=(num_events, 2))\n        mmp_ps = np.memmap(ps_path, dtype='uint8', mode='w+', shape=(num_events, 1))\n\n        mmp_ts[:, 0] = f['events/ts'][:]\n        mmp_xy[:, :] = np.stack((f['events/xs'][:], f['events/ys'][:])).transpose()\n        mmp_ps[:, 0] = f['events/ps'][:]\n\n        for data in additional_data:\n            save_additional_data_as_mmap(f, mmap_pth, additional_data[data])\n        write_metadata(f, metadata_path)\n\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Tool to convert this projects style hdf5 files to the memmap format used in some RPG projects\n    \"\"\"\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"HDF5 file to convert\")\n    parser.add_argument(\"--output_dir\", default=None, help=\"Path to extract (same as bag if left empty)\")\n    parser.add_argument('--not_overwrite', action='store_false', help='If set, will not overwrite\\\n            existing memmap, but will place safe alternative')\n\n    args = parser.parse_args()\n\n    bagname = os.path.splitext(os.path.basename(args.path))[0]\n    if args.output_dir is None:\n        output_path = os.path.join(os.path.dirname(os.path.abspath(args.path)), bagname)\n    else:\n        output_path = os.path.join(args.output_dir, bagname)\n    h5_to_memmap(args.path, output_path, overwrite=args.not_overwrite)\n"
  },
  {
    "path": "lib/data_formats/read_events.py",
    "content": "import h5py\nimport numpy as np\nimport os\n\ndef compute_indices(event_stamps, frame_stamps):\n    \"\"\"\n    Given event timestamps and frame timestamps as arrays,\n    find the event indices that correspond to the beginning and\n    end period of each frames\n    @param event_stamps The event timestamps\n    @param frame_stamps The frame timestamps\n    @returns The indices as a 2xN numpy array (N=number of frames)\n    \"\"\"\n    indices_first = np.searchsorted(event_stamps[:,0], frame_stamps[1:])\n    indices_last = np.searchsorted(event_stamps[:,0], frame_stamps[:-1])\n    index = np.stack([indices_first, indices_last], -1)\n    return index\n\ndef read_memmap_events(memmap_path, skip_frames=1, return_events=False, images_file = 'images.npy',\n        images_ts_file = 'timestamps.npy', optic_flow_file = 'optic_flow.npy',\n        optic_flow_ts_file = 'optic_flow_timestamps.npy', events_xy_file = 'xy.npy',\n        events_p_file = 'p.npy', events_t_file = 't.npy'):\n    \"\"\"\n    Given a path to an RPG-style memmap, read the events it contains.\n    These memmaps break images, timestamps, optic flow, xy, p and t\n    components of events into separate files.\n    @param memmap_path Path to the root directory of the memmap\n    @param skip_frames Skip reading every 'skip_frames'th frame, default=1\n    @param return_events If True, return the events as numpy arrays, else return\n        a handle to the event data files (which can be indexed, but does not load\n        events into RAM)\n    @param images_file The file containing images\n    @param images_ts_file The file containing image timestamps\n    @param optic_flow_file The file containing optic flow frames\n    @param optic_flow_ts_file The file containing optic flow frame timestamps\n    @param events_xy_file The file containing event coordinate data\n    @param events_p_file The file containing the event polarities\n    @param events_ts_file The file containing the event timestamps\n    @return dict with event data:\n        data = {\n            \"index\": index mapping image index to event idx\n            \"frame_stamps\": frame timestamps\n            \"images\": images\n            \"optic_flow\": optic flow\n            \"optic_flow_stamps\": of timestamps\n            \"t\": event timestamps\n            \"xy\": event coords\n            \"p\": event polarities\n            \"t0\": t0\n    \"\"\"\n    assert os.path.isdir(memmap_path), '%s is not a valid memmap_pathectory' % memmap_path\n\n    data = {}\n    has_flow = False\n    for subroot, _, fnames in sorted(os.walk(memmap_path)):\n        for fname in sorted(fnames):\n            path = os.path.join(subroot, fname)\n            if fname.endswith(\".npy\"):\n                if fname==\"index.npy\":  # index mapping image index to event idx\n                    indices = np.load(path)  # N x 2\n                    assert len(indices.shape) == 2 and indices.shape[1] == 2\n                    indices = indices.astype(\"int64\")  # ignore event indices which are 0 (before first image)\n                    data[\"index\"] = indices.T\n                elif fname==images_ts_file:\n                    data[\"frame_stamps\"] = np.load(path)[::skip_frames,...]\n                elif fname==images_file:\n                    data[\"images\"] = np.load(path, mmap_mode=\"r\")[::skip_frames,...]\n                elif fname==optic_flow_file:\n                    data[\"optic_flow\"] = np.load(path, mmap_mode=\"r\")[::skip_frames,...]\n                    has_flow = True\n                elif fname==optic_flow_ts_file:\n                    data[\"optic_flow_stamps\"] = np.load(path)[::skip_frames,...]\n\n                handle = np.load(path, mmap_mode=\"r\")\n                if fname==events_t_file:  # timestamps\n                    data[\"t\"] = handle[:].squeeze() if return_events else handle\n                    data[\"t0\"] = handle[0]\n                elif fname==events_xy_file: # coordinates\n                    data[\"xy\"] = handle[:].squeeze() if return_events else handle\n                elif fname==events_p_file: # polarity\n                    data[\"p\"] = handle[:].squeeze() if return_events else handle\n\n        if len(data) > 0:\n            data['path'] = subroot\n            if \"t\" not in data:\n                raise Exception(f\"Ignoring memmap_pathectory {subroot} since no events\")\n            if not (len(data['p']) == len(data['xy']) and len(data['p']) == len(data['t'])):\n                raise Exception(f\"Events from {subroot} invalid\")\n            data[\"num_events\"] = len(data['p'])\n\n            if \"index\" not in data and \"frame_stamps\" in data:\n                data[\"index\"] = compute_indices(data[\"t\"], data['frame_stamps'])\n    return data\n\ndef read_memmap_events_dict(memmap_path, skip_frames=1, return_events=False, images_file = 'images.npy',\n        images_ts_file = 'timestamps.npy', optic_flow_file = 'optic_flow.npy',\n        optic_flow_ts_file = 'optic_flow_timestamps.npy', events_xy_file = 'xy.npy',\n        events_p_file = 'p.npy', events_t_file = 't.npy'):\n    \"\"\"\n    Read memmap file events and return them in a dict\n    \"\"\"\n    data = read_memmap_events(memmap_path, skip_frames, return_events, images_file, images_ts_file,\n            optic_flow_file, optic_flow_ts_file, events_xy_file, events_p_file, events_t_file)\n    events = {\n            'xs':data['xy'][:,0].squeeze(),\n            'ys':data['xy'][:,1].squeeze(),\n            'ts':events['t'][:].squeeze(),\n            'ps':events['p'][:].squeeze()}\n    return events\n\ndef read_h5_events(hdf_path):\n    \"\"\"\n    Read events from HDF5 file (Monash style).\n    @param hdf_path Path to HDF5 file\n    @returns Events as 4xN numpy array (N=num events)\n    \"\"\"\n    f = h5py.File(hdf_path, 'r')\n    if 'events/x' in f:\n        #legacy\n        events = np.stack((f['events/x'][:], f['events/y'][:], f['events/ts'][:], np.where(f['events/p'][:], 1, -1)), axis=1)\n    else:\n        events = np.stack((f['events/xs'][:], f['events/ys'][:], f['events/ts'][:], np.where(f['events/ps'][:], 1, -1)), axis=1)\n    return events\n\ndef read_h5_event_components(hdf_path):\n    \"\"\"\n    Read events from HDF5 file (Monash style).\n    @param hdf_path Path to HDF5 file\n    @returns Events as four np arrays with the event components\n    \"\"\"\n    f = h5py.File(hdf_path, 'r')\n    if 'events/x' in f:\n        #legacy\n        return (f['events/x'][:], f['events/y'][:], f['events/ts'][:], np.where(f['events/p'][:], 1, -1))\n    else:\n        return (f['events/xs'][:], f['events/ys'][:], f['events/ts'][:], np.where(f['events/ps'][:], 1, -1))\n\ndef read_h5_events_dict(hdf_path, read_frames=True):\n    \"\"\"\n    Read events from HDF5 file (Monash style).\n    @param hdf_path Path to HDF5 file\n    @returns Events as a dict with entries 'xs', 'ys', 'ts', 'ps' containing the event components,\n        'frames' containing the frames, 'frame_timestamps' containing frame timestamps and\n        'frame_event_indices' containing the indices of the corresponding event for each frame\n    \"\"\"\n    f = h5py.File(hdf_path, 'r')\n    if 'events/x' in f:\n        #legacy\n        events = {\n                'xs':f['events/x'][:],\n                'ys':f['events/y'][:],\n                'ts':f['events/ts'][:],\n                'ps':np.where(f['events/p'][:], 1, -1)\n        }\n        return events\n    else:\n        events = {\n                'xs':f['events/xs'][:],\n                'ys':f['events/ys'][:],\n                'ts':f['events/ts'][:],\n                'ps':np.where(f['events/ps'][:], 1, -1)\n                }\n        if read_frames:\n            images = []\n            image_stamps = []\n            image_event_indices = []\n            for key in f['images']:\n                frame = f['images/{}'.format(key)][:]\n                images.append(frame)\n                image_stamps.append(f['images/{}'.format(key)].attrs['timestamp'])\n                image_event_indices.append(f['images/{}'.format(key)].attrs['event_idx'])\n            events['frames'] = images\n            #np.concatenate(images, axis=2).swapaxes(0,2) if len(frame.shape)==3 else np.stack(images, axis=0)\n            events['frame_timestamps'] = np.array(image_stamps)\n            events['frame_event_indices'] = np.array(image_event_indices)\n        return events\n"
  },
  {
    "path": "lib/data_formats/rosbag_to_h5.py",
    "content": "import glob\nimport argparse\nimport rosbag\nimport rospy\nfrom cv_bridge import CvBridge, CvBridgeError\nimport os\nimport h5py\nimport numpy as np\nfrom event_packagers import *\nfrom tqdm import tqdm\n\n\ndef append_to_dataset(dataset, data):\n    dataset.resize(dataset.shape[0] + len(data), axis=0)\n    if len(data) == 0:\n        return\n    dataset[-len(data):] = data[:]\n\n\ndef timestamp_float(ts):\n    return ts.secs + ts.nsecs / float(1e9)\n\n\ndef get_rosbag_stats(bag, event_topic, image_topic=None, flow_topic=None):\n    num_event_msgs = 0\n    num_img_msgs = 0\n    num_flow_msgs = 0\n    topics = bag.get_type_and_topic_info().topics\n    for topic_name, topic_info in topics.iteritems():\n        if topic_name == event_topic:\n            num_event_msgs = topic_info.message_count\n            print('Found events topic: {} with {} messages'.format(topic_name, topic_info.message_count))\n        if topic_name == image_topic:\n            num_img_msgs = topic_info.message_count\n            print('Found image topic: {} with {} messages'.format(topic_name, num_img_msgs))\n        if topic_name == flow_topic:\n            num_flow_msgs = topic_info.message_count\n            print('Found flow topic: {} with {} messages'.format(topic_name, num_flow_msgs))\n    return num_event_msgs, num_img_msgs, num_flow_msgs\n\n\n# Inspired by https://github.com/uzh-rpg/rpg_e2vid\ndef extract_rosbag(rosbag_path, output_path, event_topic, image_topic=None,\n                   flow_topic=None, start_time=None, end_time=None, zero_timestamps=False,\n                   packager=hdf5_packager, is_color=False):\n    ep = packager(output_path)\n    topics = (event_topic, image_topic, flow_topic)\n    event_msg_sum = 0\n    num_msgs_between_logs = 25\n    first_ts = -1\n    t0 = -1\n    sensor_size = None\n    if not os.path.exists(rosbag_path):\n        print(\"{} does not exist!\".format(rosbag_path))\n        return\n    with rosbag.Bag(rosbag_path, 'r') as bag:\n        # Look for the topics that are available and save the total number of messages for each topic (useful for the progress bar)\n        num_event_msgs, num_img_msgs, num_flow_msgs = get_rosbag_stats(bag, event_topic, image_topic, flow_topic)\n        # Extract events to h5\n        xs, ys, ts, ps = [], [], [], []\n        max_buffer_size = 1e20\n        ep.set_data_available(num_img_msgs, num_flow_msgs)\n        num_pos, num_neg, last_ts, img_cnt, flow_cnt = 0, 0, 0, 0, 0\n\n        for topic, msg, t in tqdm(bag.read_messages()):\n            if first_ts == -1 and topic in topics:\n                timestamp = timestamp_float(msg.header.stamp)\n                first_ts = timestamp\n                if zero_timestamps:\n                    timestamp = timestamp-first_ts\n                if start_time is None:\n                    start_time = first_ts\n                start_time = start_time + first_ts\n                if end_time is not None:\n                    end_time = end_time+start_time\n                t0 = timestamp\n\n            if topic == image_topic:\n                timestamp = timestamp_float(msg.header.stamp)-(first_ts if zero_timestamps else 0)\n                if is_color:\n                    image = CvBridge().imgmsg_to_cv2(msg, \"bgr8\")\n                else:\n                    image = CvBridge().imgmsg_to_cv2(msg, \"mono8\")\n\n                ep.package_image(image, timestamp, img_cnt)\n                sensor_size = image.shape\n                img_cnt += 1\n\n            elif topic == flow_topic:\n                timestamp = timestamp_float(msg.header.stamp)-(first_ts if zero_timestamps else 0)\n\n                flow_x = np.array(msg.flow_x)\n                flow_y = np.array(msg.flow_y)\n                flow_x.shape = (msg.height, msg.width)\n                flow_y.shape = (msg.height, msg.width)\n                flow_image = np.stack((flow_x, flow_y), axis=0)\n\n                ep.package_flow(flow_image, timestamp, flow_cnt)\n                flow_cnt += 1\n\n            elif topic == event_topic:\n                event_msg_sum += 1\n                #if event_msg_sum % num_msgs_between_logs == 0 or event_msg_sum >= num_event_msgs - 1:\n                #    print('Event messages: {} / {}'.format(event_msg_sum + 1, num_event_msgs))\n                for e in msg.events:\n                    timestamp = timestamp_float(e.ts)-(first_ts if zero_timestamps else 0)\n                    xs.append(e.x)\n                    ys.append(e.y)\n                    ts.append(timestamp)\n                    ps.append(1 if e.polarity else 0)\n                    if e.polarity:\n                        num_pos += 1\n                    else:\n                        num_neg += 1\n                    last_ts = timestamp\n                if (len(xs) > max_buffer_size and timestamp >= start_time) or (end_time is not None and timestamp >= start_time):\n                    print(\"Writing events\")\n                    if sensor_size is None or sensor_size[0] < max(ys) or sensor_size[1] < max(xs):\n                        sensor_size = [max(ys), max(xs)]\n                        print(\"Sensor size inferred from events as {}\".format(sensor_size))\n                    ep.package_events(xs, ys, ts, ps)\n                    del xs[:]\n                    del ys[:]\n                    del ts[:]\n                    del ps[:]\n                if end_time is not None and timestamp >= start_time:\n                    return\n                if sensor_size is None or sensor_size[0] < max(ys) or sensor_size[1] < max(xs):\n                    sensor_size = [max(ys), max(xs)]\n                    print(\"Sensor size inferred from events as {}\".format(sensor_size))\n                ep.package_events(xs, ys, ts, ps)\n                del xs[:]\n                del ys[:]\n                del ts[:]\n                del ps[:]\n        if sensor_size is None:\n            raise Exception(\"ERROR: No sensor size detected, implies no events/images in bag topics?\")\n        print(\"Detected sensor size {}\".format(sensor_size))\n        ep.add_metadata(num_pos, num_neg, last_ts-t0, t0, last_ts, img_cnt, flow_cnt, sensor_size)\n\n\ndef extract_rosbags(rosbag_paths, output_dir, event_topic, image_topic, flow_topic,\n        zero_timestamps=False, is_color=False):\n    for path in rosbag_paths:\n        bagname = os.path.splitext(os.path.basename(path))[0]\n        out_path = os.path.join(output_dir, \"{}.h5\".format(bagname))\n        print(\"Extracting {} to {}\".format(path, out_path))\n        extract_rosbag(path, out_path, event_topic, image_topic=image_topic,\n                       flow_topic=flow_topic, zero_timestamps=zero_timestamps, is_color=is_color)\n\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Tool for converting rosbag events to an efficient HDF5 format that can be speedily\n    accessed by python code.\n    \"\"\"\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"ROS bag file to extract or directory containing bags\")\n    parser.add_argument(\"--output_dir\", default=\"/tmp/extracted_data\", help=\"Folder where to extract the data\")\n    parser.add_argument(\"--event_topic\", default=\"/dvs/events\", help=\"Event topic\")\n    parser.add_argument(\"--image_topic\", default=None, help=\"Image topic (if left empty, no images will be collected)\")\n    parser.add_argument(\"--flow_topic\", default=None, help=\"Flow topic (if left empty, no flow will be collected)\")\n    parser.add_argument('--zero_timestamps', action='store_true', help='If true, timestamps will be offset to start at 0')\n    parser.add_argument('--is_color', action='store_true', help='Set flag to save frames from image_topic as 3-channel, bgr color images')\n    args = parser.parse_args()\n\n    print('Data will be extracted in folder: {}'.format(args.output_dir))\n    if not os.path.exists(args.output_dir):\n        os.makedirs(args.output_dir)\n    if os.path.isdir(args.path):\n        rosbag_paths = sorted(glob.glob(os.path.join(args.path, \"*.bag\")))\n    else:\n        rosbag_paths = [args.path]\n    extract_rosbags(rosbag_paths, args.output_dir, args.event_topic, args.image_topic,\n            args.flow_topic, zero_timestamps=args.zero_timestamps, is_color=args.is_color)\n"
  },
  {
    "path": "lib/data_loaders/__init__.py",
    "content": "# __init__.py\nfrom .base_dataset import *\nfrom .memmap_dataset import *\nfrom .hdf5_dataset import *\nfrom .npy_dataset import *\n"
  },
  {
    "path": "lib/data_loaders/base_dataset.py",
    "content": "from torch.utils.data import Dataset\nfrom torch.utils.data.dataloader import default_collate\nimport numpy as np\nimport torch\nimport random\nimport os\n\n# local modules\nfrom .data_augmentation import Compose, RobustNorm, CenterCrop\nfrom .data_util import data_sources\nfrom ..representations.voxel_grid import events_to_voxel_torch, events_to_neg_pos_voxel_torch\nfrom ..util.util import read_json, write_json\n\nclass BaseVoxelDataset(Dataset):\n    \"\"\"\n    Dataloader for voxel grids given file containing events.\n    Also loads time-synchronized frames and optic flow if available.\n    Voxel grids are formed on-the-fly.\n    For each index, returns a dict containing:\n        * frame is a H x W tensor containing the first frame whose\n          timestamp >= event tensor\n        * events is a C x H x W tensor containing the voxel grid\n        * flow is a 2 x H x W tensor containing the flow (displacement) from\n          the current frame to the last frame\n        * dt is the time spanned by 'events'\n        * data_source_idx is the index of the data source (simulated, IJRR, MVSEC etc)\n    Subclasses must implement:\n        - get_frame(index) method which retrieves the frame at index i\n        - get_flow(index) method which retrieves the optic flow at index i\n        - get_events(idx0, idx1) method which gets the events between idx0 and idx1\n            (in format xs, ys, ts, ps, where each is a np array\n            of x, y positions, timestamps and polarities respectively)\n        - load_data() initialize the data loading method and ensure the following\n            members are filled:\n            sensor_resolution - the sensor resolution\n            has_flow - if this dataset has optic flow\n            t0 - timestamp of first event\n            tk - timestamp of last event\n            num_events - the total number of events\n            frame_ts - list of the timestamps of the frames\n            num_frames - the number of frames\n        - find_ts_index(timestamp) given a timestamp, find the index of\n            the corresponding event\n\n    Parameters:\n        data_path Path to the file containing the event/image data\n        transforms Dict containing the desired augmentations\n        sensor_resolution The size of the image sensor from which the events originate\n        num_bins The number of bins desired in the voxel grid\n        voxel_method Which method should be used to form the voxels.\n            Currently supports:\n            * \"k_events\" (new voxels are formed every k events)\n            * \"t_seconds\" (new voxels are formed every t seconds)\n            * \"between_frames\" (all events between frames are taken, requires frames to exist)\n            * \"fixed_frames\" ('num_frames' voxels formed at even intervals)\n            A sliding window width must be given for k_events and t_seconds,\n            which determines overlap (no overlap if set to 0). Eg:\n            method={'method':'k_events', 'k':10000, 'sliding_window_w':100}\n            method={'method':'t_seconds', 't':0.5, 'sliding_window_t':0.1}\n            method={'method':'between_frames'}\n            method={'method':'fixed_frames', 'num_frames':100}\n            Default is 'between_frames'.\n    \"\"\"\n\n    def get_frame(self, index):\n        \"\"\"\n        Get frame at index\n        @param index The index of the frame to get\n        \"\"\"\n        raise NotImplementedError\n\n    def get_flow(self, index):\n        \"\"\"\n        Get optic flow at index\n        @param index The index of the optic flow to get\n        \"\"\"\n        raise NotImplementedError\n\n    def get_events(self, idx0, idx1):\n        \"\"\"\n        Get events between idx0, idx1\n        @param idx0 Start index to get events from\n        @param idx1 End index to get events from\n        \"\"\"\n        raise NotImplementedError\n\n    def load_data(self, data_path):\n        \"\"\"\n        Perform initialization tasks and ensure essential members are populated.\n        Required members are:\n            members are filled:\n            self.sensor_resolution - the sensor resolution\n            self.has_flow - if this dataset has optic flow\n            self.t0 - timestamp of first event\n            self.tk - timestamp of last event\n            self.num_events - the total number of events\n            self.frame_ts - list of the timestamps of the frames\n            self.num_frames - the number of frames\n        @param data_path The path to the data file/s containing events etc\n        \"\"\"\n        raise NotImplementedError\n\n    def find_ts_index(self, timestamp):\n        \"\"\"\n        Given a timestamp, find the event index\n        @param timestamp The timestamp at which to find the corresponding event index\n        \"\"\"\n        raise NotImplementedError\n\n    def ts(self, index):\n        \"\"\"\n        Get timestamp at index\n        @param Index of event whose timestamp to return\n        \"\"\"\n        raise NotImplementedError\n\n    def __init__(self, data_path, transforms={}, sensor_resolution=None, num_bins=5,\n                 voxel_method={'method': 'between_frames'}, max_length=None, combined_voxel_channels=False,\n                 return_events=False, return_voxelgrid=True, return_frame=True, return_prev_frame=False,\n                 return_flow=True, return_prev_flow=False, return_format='torch'):\n        \"\"\"\n        @param data_path Path to the file containing the event/image data\n        @param transforms Dict containing the desired augmentations\n        @param sensor_resolution The size of the image sensor from which the events originate\n        @param num_bins The number of bins desired in the voxel grid\n        @param voxel_method Which method should be used to form the voxels.\n            Currently supports:\n            * \"k_events\" (new voxels are formed every k events, with each batch\n                overlapping by 'sliding_window_w' events)\n            * \"t_seconds\" (new voxels are formed every t seconds, with each batch\n                overlapping by 'sliding_window_t' seconds)\n            * \"between_frames\" (all events between frames are taken, requires frames to exist)\n            * \"fixed_frames\" ('num_frames' voxels formed at even intervals)\n            A sliding window width must be given for k_events and t_seconds,\n            which determines overlap (no overlap if set to 0). Eg:\n            method={'method':'k_events', 'k':10000, 'sliding_window_w':100}\n            method={'method':'t_seconds', 't':0.5, 'sliding_window_t':0.1}\n            method={'method':'between_frames'}\n            method={'method':'fixed_frames', 'num_frames':100}\n            Default is 'between_frames'.\n        @param max_length Maximum capped length of dataset (no cap if left empty)\n        @param combined_voxel_channels If True, produces one voxel grid for all events, if False,\n            produces separate voxel grids for positive and negative channels\n        @param return_events If true, returns events in output dict\n        @param return_voxelgrid If true, returns voxelgrid in output dict\n        @param return_frame If true, returns frames in output dict\n        @param return_prev_frame If true, returns previous frame to current frame\n            in output dict\n        @param return_flow If true, returns optic flow in output dict\n        @param return_prev_flow If true, returns previous optic flow to current\n            optic flow in output dict\n        @param return_format The desired output format (options = 'numpy' and 'torch')\n        \"\"\"\n\n        self.num_bins = num_bins\n        self.data_path = data_path\n        self.combined_voxel_channels = combined_voxel_channels\n        self.sensor_resolution = sensor_resolution\n        self.data_source_idx = -1\n        self.has_flow = False\n        self.has_frames = True\n        self.return_format = return_format\n        self.counter = 0\n\n        self.return_events = return_events\n        self.return_voxelgrid = return_voxelgrid\n        self.return_frame = return_frame\n        self.return_prev_frame = return_prev_frame\n        self.return_flow = return_flow\n        self.return_prev_flow = return_prev_flow\n\n        self.sensor_resolution, self.t0, self.tk, self.num_events, self.frame_ts, self.num_frames = \\\n            None, None, None, None, None, None\n\n        self.load_data(data_path)\n\n        if self.sensor_resolution is None or self.has_flow is None or self.t0 is None \\\n                or self.tk is None or self.num_events is None or self.frame_ts is None \\\n                or self.num_frames is None:\n            print(\"s_r: {}, h_f={}, t0={}, tk={}, n_e={}, nf={}, s_f={}\".format(self.sensor_resolution is None, self.has_flow is None, self.t0 is None, self.tk is None, self.num_events is None, self.frame_ts is None, self.num_frames))\n            raise Exception(\"Dataloader failed to intialize all required members\")\n\n        self.num_pixels = self.sensor_resolution[0] * self.sensor_resolution[1]\n        self.duration = self.tk - self.t0\n\n        self.set_voxel_method(voxel_method)\n\n        self.normalize_voxels = False\n        if 'RobustNorm' in transforms.keys():\n            vox_transforms_list = [eval(t)(**kwargs) for t, kwargs in transforms.items()]\n            del (transforms['RobustNorm'])\n            self.normalize_voxels = True\n            self.vox_transform = Compose(vox_transforms_list)\n\n        transforms_list = [eval(t)(**kwargs) for t, kwargs in transforms.items()]\n\n        if len(transforms_list) == 0:\n            self.transform = None\n        elif len(transforms_list) == 1:\n            self.transform = transforms_list[0]\n        else:\n            self.transform = Compose(transforms_list)\n        if not self.normalize_voxels:\n            self.vox_transform = self.transform\n\n        if max_length is not None:\n            self.length = min(self.length, max_length + 1)\n\n    @staticmethod\n    def preprocess_events(xs, ys, ts, ps):\n        \"\"\"\n        Given empty events, return single zero event\n        @param xs x compnent of events\n        @param ys y compnent of events\n        @param ts t compnent of events\n        @param ps p compnent of events\n        \"\"\"\n        if len(xs) == 0:\n            txs = np.zeros((1))\n            tys = np.zeros((1))\n            tts = np.zeros((1))\n            tps = np.zeros((1))\n            return txs, tys, tts, tps\n        return xs, ys, ts, ps\n\n    def __getitem__(self, index, seed=None):\n        \"\"\"\n        Get data at index.\n        @param index Index of data\n        @param seed Random seed for data augmentation\n        @returns Dict with desired outputs (voxel grid, events, frames etc)\n            as set in constructor\n        \"\"\"\n        if index < 0 or index >= self.__len__():\n            raise IndexError\n        seed = random.randint(0, 2 ** 32) if seed is None else seed\n\n        idx0, idx1 = self.get_event_indices(index)\n        xs, ys, ts, ps = self.get_events(idx0, idx1)\n        xs, ys, ts, ps = self.preprocess_events(xs, ys, ts, ps)\n        ts_0, ts_k  = ts[0], ts[-1]\n        dt = ts_k-ts_0\n\n        item = {'data_source_idx': self.data_source_idx, 'data_path': self.data_path,\n                'timestamp': ts_k, 'dt_between_frames': dt, 'ts_idx0': ts_0, 'ts_idx1': ts_k,\n                'idx0': idx0, 'idx1': idx1}\n        if self.return_voxelgrid:\n            voxel = self.get_voxel_grid(xs, ys, ts, ps, combined_voxel_channels=self.combined_voxel_channels)\n            voxel = self.transform_voxel(voxel, seed)\n            item['voxel'] = voxel\n\n        if self.voxel_method['method'] == 'between_frames':\n            frame = self.get_frame(index)\n            frame = self.transform_frame(frame, seed)\n\n            if self.has_flow:\n                flow = self.get_flow(index)\n                # convert to displacement (pix)\n                flow = flow * dt\n                flow = self.transform_flow(flow, seed)\n            else:\n                if self.return_format == 'torch':\n                    flow = torch.zeros((2, frame.shape[-2], frame.shape[-1]), dtype=frame.dtype, device=frame.device)\n                else:\n                    flow = np.zeros((2, frame.shape[-2], frame.shape[-1]))\n\n            if self.return_flow:\n                item['flow'] = flow\n                item['flow_ts'] = self.frame_ts[index]\n            if self.return_prev_flow:\n                prev_flow = flow if not self.has_flow else self.get_flow(index)\n                item['prev_flow'] = self.transform_flow(prev_flow, seed)\n            if self.return_frame:\n                item['frame'] = frame\n                item['frame_ts'] = self.frame_ts[index]\n            if self.return_prev_frame:\n                item['prev_frame'] = self.transform_frame(self.get_frame(index), seed)\n        else:\n            frames = []\n            frame_ts = []\n            if self.has_frames and self.return_frame:\n                fi = self.frame_indices[index]\n                if fi[0] != -1:\n                    frames = [self.transform_frame(self.get_frame(fidx), seed) for fidx in range(fi[1]-fi[0])]\n                    frame_ts = self.frame_ts[fi[0]:fi[1]]\n            item['frame'] = frames\n            item['frame_ts'] = frame_ts\n\n            flows = []\n            flow_ts = []\n            if self.has_flow and self.return_flow:\n                fi = self.frame_indices[index]\n                if fi[0] != -1 and self.has_flow:\n                    flows = [self.transform_flow(self.get_flow(fidx), seed) for fidx in range(fi[0], fi[1], 1)]\n                    flow_ts = self.frame_ts[fi[0]:fi[1]]\n            item['flow'] = flows\n            item['flow_ts'] = flow_ts\n\n        if self.return_events:\n            if self.return_format == 'torch':\n                if idx0-idx1 == 0:\n                    item['events'] = torch.zeros((1, 4), dtype=torch.float32)\n                    item['events_batch_indices'] = torch.ones((1))\n                    item['ts_idx0'] = torch.zeros((1), dtype=torch.float64)\n                else:\n                    item['events'] = torch.from_numpy(np.stack((xs, ys, ts-ts_0, ps), axis=1)).float()\n                    item['events_batch_indices'] = idx1-idx0\n                    item['ts_idx0'] = torch.tensor(ts_0)\n            elif self.return_format == 'numpy':\n                if idx0-idx1 == 0:\n                    item['events'] = np.zeros((1, 4))\n                    item['events_batch_indices'] = np.ones((1))\n                    item['ts_idx0'] = np.zeros((1))\n                else:\n                    item['events'] = np.stack((xs, ys, ts, ps), axis=1)\n                    item['events_batch_indices'] = idx1-idx0\n                    item['ts_idx0'] = np.array(ts_0)\n            else:\n                raise Exception(\"Invalid event format '{}' used\".format(self.return_format))\n        return item\n\n    def compute_between_frame_indices(self):\n        \"\"\"\n        For each frame, find the start and end indices of the\n        time synchronized events\n        @returns List of indices of events at each frame timestamp\n        \"\"\"\n        frame_indices = []\n        start_idx = 0\n        for ts in self.frame_ts:\n            end_index = self.find_ts_index(ts)\n            if end_index >= self.num_events:\n                end_index = self.num_events-1\n            frame_indices.append([start_idx, end_index])\n            start_idx = end_index\n        return frame_indices\n\n    def compute_timeblock_indices(self):\n        \"\"\"\n        For each block of time (using t_seconds), find the start and\n        end indices of the corresponding events\n        @returns List of indices of events at beginning and end of each block of time\n        \"\"\"\n        timeblock_indices = []\n        start_idx = 0\n        for i in range(self.__len__()):\n            start_time = ((self.voxel_method['t'] - self.voxel_method['sliding_window_t']) * i) + self.t0\n            end_time = start_time + self.voxel_method['t']\n            end_idx = self.find_ts_index(end_time)\n            timeblock_indices.append([start_idx, end_idx])\n            start_idx = end_idx\n        return timeblock_indices\n\n    def compute_k_indices(self):\n        \"\"\"\n        For each block of k events, find the start and\n        end indices of the corresponding events\n        @returns List of indices of events at beginning and end of each block of\n            k events (with sliding window)\n        \"\"\"\n        k_indices = []\n        start_idx = 0\n        for i in range(self.__len__()):\n            idx0 = (self.voxel_method['k'] - self.voxel_method['sliding_window_w']) * i\n            idx1 = idx0 + self.voxel_method['k']\n            k_indices.append([idx0, idx1])\n        return k_indices\n\n    def compute_per_frame_indices(self):\n        \"\"\"\n        For each set of event_indices, find the enclosed frame indices\n        @returns List of frame indices at each event index\n        \"\"\"\n        frame_indices = []\n        for indices in self.event_indices:\n            s_t, e_t = self.ts(int(indices[0])), self.ts(int(indices[1]))\n            idx0 = min(np.searchsorted(self.frame_ts, s_t), len(self.frame_ts)-1)\n            idx1 = min(np.searchsorted(self.frame_ts, e_t), len(self.frame_ts)-1)\n            if idx0 == idx1:\n                frame_indices.append([-1, -1])\n            else:\n                frame_indices.append([idx0, idx1])\n        return frame_indices\n\n    def set_voxel_method(self, voxel_method):\n        \"\"\"\n        Given the desired method of computing voxels,\n        compute the event_indices lookup table and dataset length\n        @param voxel_method The method of voxel formation as set in constructor.\n            Options = {'k_events', 't_seconds, 'fixed_frames', 'between_frames'}\n        \"\"\"\n        self.voxel_method = voxel_method\n        if self.voxel_method['method'] == 'k_events':\n            self.length = max(int(self.num_events / (voxel_method['k'] - voxel_method['sliding_window_w'])), 0)\n            if self.length == 0:\n                print(\"num_events={}, t={}, window={}\".format(self.num_events, voxel_method['k'], voxel_method['sliding_window_w']))\n            self.event_indices = self.compute_k_indices()\n        elif self.voxel_method['method'] == 't_seconds':\n            self.length = max(int(self.duration / (voxel_method['t'] - voxel_method['sliding_window_t'])), 0)\n            if self.length == 0:\n                print(\"duration={}, t={}, window={}\".format(self.duration, voxel_method['t'], voxel_method['sliding_window_t']))\n            self.event_indices = self.compute_timeblock_indices()\n        elif self.voxel_method['method'] == 'fixed_frames':\n            self.length = self.voxel_method['num_frames']\n            self.voxel_method['t'] = (self.tk-self.t0)/self.length\n            voxel_method['sliding_window_t'] = 0\n            self.event_indices = self.compute_timeblock_indices()\n        elif self.voxel_method['method'] == 'between_frames':\n            self.length = self.num_frames - 1\n            self.event_indices = self.compute_between_frame_indices()\n        else:\n            raise Exception(\"Invalid voxel forming method chosen ({})\".format(self.voxel_method))\n        print(\"Dataset contains {} items\".format(self.length))\n        if self.has_frames:\n            self.frame_indices = self.compute_per_frame_indices()\n        if self.length == 0:\n            raise Exception(\"Current voxel generation parameters lead to sequence length of zero\")\n\n    def __len__(self):\n        return self.length\n\n    def get_event_indices(self, index):\n        \"\"\"\n        Get start and end indices of events at index\n        @param Desired data index\n        @returns Start and end indices of events at index\n        \"\"\"\n        idx0, idx1 = self.event_indices[index]\n        if not (idx0 >= 0 and idx1 <= self.num_events):\n            raise Exception(\"WARNING: Event indices {},{} out of bounds 0,{}\".format(idx0, idx1, self.num_events))\n        return int(idx0), int(idx1)\n\n    def get_voxel_grid(self, xs, ys, ts, ps, combined_voxel_channels=True):\n        \"\"\"\n        Given events, return voxel grid\n        @param xs tensor containg x coords of events\n        @param ys tensor containg y coords of events\n        @param ts tensor containg t coords of events\n        @param ps tensor containg p coords of events\n        @param combined_voxel_channels: if True, create voxel grid merging positive and\n            negative events (resulting in NUM_BINS x H x W tensor). Otherwise, create\n            voxel grid for positive and negative events separately\n            (resulting in 2*NUM_BINS x H x W tensor)\n        @returns Voxel grid of input events\n        \"\"\"\n        if combined_voxel_channels:\n            # generate voxel grid which has size self.num_bins x H x W\n            voxel_grid = events_to_voxel_torch(xs, ys, ts, ps, self.num_bins, sensor_size=self.sensor_resolution)\n        else:\n            # generate voxel grid which has size 2*self.num_bins x H x W\n            voxel_grid = events_to_neg_pos_voxel_torch(xs, ys, ts, ps, self.num_bins,\n                                                       sensor_size=self.sensor_resolution)\n            voxel_grid = torch.cat([voxel_grid[0], voxel_grid[1]], 0)\n\n        return voxel_grid\n\n    def transform_frame(self, frame, seed):\n        \"\"\"\n        Augment frame and turn into tensor\n        @param frame Input frame\n        @param seed  Seed for random number generation\n        @returns Augmented frame\n        \"\"\"\n        if self.return_format == \"torch\":\n            frame = torch.from_numpy(frame).float().unsqueeze(0) / 255\n            if self.transform:\n                random.seed(seed)\n                frame = self.transform(frame)\n        return frame\n\n    def transform_voxel(self, voxel, seed):\n        \"\"\"\n        Augment voxel and turn into tensor\n        @param voxel Input voxel\n        @param seed  Seed for random number generation\n        @returns Augmented voxel\n        \"\"\"\n        if self.vox_transform:\n            random.seed(seed)\n            voxel = self.vox_transform(voxel)\n        return voxel\n\n    def transform_flow(self, flow, seed):\n        \"\"\"\n        Augment flow and turn into tensor\n        @param flow Input flow\n        @param seed  Seed for random number generation\n        @returns Augmented flow\n        \"\"\"\n        if self.return_format == \"torch\":\n            flow = torch.from_numpy(flow)  # should end up [2 x H x W]\n            if self.transform:\n                random.seed(seed)\n                flow = self.transform(flow, is_flow=True)\n        return flow\n\n    def size(self):\n        \"\"\"\n        Get the size of the event camera sensor/resolution\n        @returns Sensor resolution\n        \"\"\"\n        return self.sensor_resolution\n\n    @staticmethod\n    def unpackage_events(events):\n        \"\"\"\n        Given events as 2D array, break it up into xs,ys,ts,ps components\n        @returns xs, ys, ts, ps component of events\n        \"\"\"\n        return events[:,0], events[:,1], events[:,2], events[:,3]\n\n    @staticmethod\n    def collate_fn(data, event_keys=['events'], idx_keys=['events_batch_indices']):\n        \"\"\"\n        Custom collate function for pyTorch batching to allow batching events\n        \"\"\"\n        collated_events = {}\n        events_arr = []\n        end_idx = 0\n        batch_end_indices = []\n        for idx, item in enumerate(data):\n            for k, v in item.items():\n                if not k in collated_events.keys():\n                    collated_events[k] = []\n                if k in event_keys:\n                    end_idx += v.shape[0]\n                    events_arr.append(v)\n                    batch_end_indices.append(end_idx)\n                else:\n                    collated_events[k].append(v)\n        for k in collated_events.keys():\n            try:\n                i = event_keys.index(k)\n                events = torch.cat(events_arr, dim=0)\n                collated_events[event_keys[i]] = events\n                collated_events[idx_keys[i]] = batch_end_indices\n            except:\n                collated_events[k] = default_collate(collated_events[k])\n        return collated_events\n"
  },
  {
    "path": "lib/data_loaders/data_augmentation.py",
    "content": "import torch\nimport numbers\nimport torchvision.transforms\n\n\nclass Compose(object):\n    \"\"\"\n    Composes several transforms together.\n    Example:\n        >>> torchvision.transforms.Compose([\n        >>>     torchvision.transforms.CenterCrop(10),\n        >>>     torchvision.transforms.ToTensor(),\n        >>> ])\n    \"\"\"\n\n    def __init__(self, transforms):\n        \"\"\"\n        @param transforms (list of ``Transform`` objects): list of transforms to compose.\n        \"\"\"\n        self.transforms = transforms\n\n    def __call__(self, x, is_flow=False):\n        \"\"\"\n        Call the transform.\n        @param x The tensor to transform\n        @param is_flow Set true if tensor represents optic flow\n        @returns Transformed tensor\n        \"\"\"\n        for t in self.transforms:\n            x = t(x, is_flow)\n        return x\n\n    def __repr__(self):\n        format_string = self.__class__.__name__ + '('\n        for t in self.transforms:\n            format_string += '\\n'\n            format_string += '    {0}'.format(t)\n        format_string += '\\n)'\n        return format_string\n\n\nclass CenterCrop(object):\n    \"\"\"\n    Center crop the tensor to a certain size.\n    \"\"\"\n\n    def __init__(self, size, preserve_mosaicing_pattern=False):\n        if isinstance(size, numbers.Number):\n            self.size = (int(size), int(size))\n        else:\n            self.size = size\n\n        self.preserve_mosaicing_pattern = preserve_mosaicing_pattern\n\n    def __call__(self, x, is_flow=False):\n        \"\"\"\n            @param x [C x H x W] Tensor to be rotated.\n            @param is_flow this parameter does not have any effect\n            @returns Cropped tensor.\n        \"\"\"\n        w, h = x.shape[2], x.shape[1]\n        th, tw = self.size\n        assert(th <= h)\n        assert(tw <= w)\n        i = int(round((h - th) / 2.))\n        j = int(round((w - tw) / 2.))\n\n        if self.preserve_mosaicing_pattern:\n            # make sure that i and j are even, to preserve\n            # the mosaicing pattern\n            if i % 2 == 1:\n                i = i + 1\n            if j % 2 == 1:\n                j = j + 1\n\n        return x[:, i:i + th, j:j + tw]\n\n    def __repr__(self):\n        return self.__class__.__name__ + '(size={0})'.format(self.size)\n\n\nclass RobustNorm(object):\n\n    \"\"\"\n    Robustly normalize tensor (ie normalise it between top and \n    bottom centiles of tensor value range)\n    \"\"\"\n\n    def __init__(self, low_perc=0, top_perc=95):\n        self.top_perc = top_perc\n        self.low_perc = low_perc\n\n    @staticmethod\n    def percentile(t, q):\n        \"\"\"\n        Return the ``q``-th percentile of the flattened input tensor's data.\n        CAUTION:\n         * Needs PyTorch >= 1.1.0, as ``torch.kthvalue()`` is used.\n         * Values are not interpolated, which corresponds to\n           ``numpy.percentile(..., interpolation=\"nearest\")``.\n        @param t Input tensor.\n        @param q Percentile to compute, which must be between 0 and 100 inclusive.\n        @returns Resulting value (scalar).\n        \"\"\"\n        # Note that ``kthvalue()`` works one-based, i.e. the first sorted value\n        # indeed corresponds to k=1, not k=0! Use float(q) instead of q directly,\n        # so that ``round()`` returns an integer, even if q is a np.float32.\n        k = 1 + round(.01 * float(q) * (t.numel() - 1))\n        try:\n            result = t.view(-1).kthvalue(k).values.item()\n        except RuntimeError:\n            result = t.reshape(-1).kthvalue(k).values.item()\n        return result\n\n    def __call__(self, x, is_flow=False):\n        \"\"\"\n        Call the transform.\n        @param x The tensor to normalise\n        @param is_flow Set true if the tensor represents optic flow\n        @returns Normalised tensor\n        \"\"\"\n        t_max = self.percentile(x, self.top_perc)\n        t_min = self.percentile(x, self.low_perc)\n        # print(\"t_max={}, t_min={}\".format(t_max, t_min))\n        if t_max == 0 and t_min == 0:\n            return x\n        eps = 1e-6\n        normed = torch.clamp(x, min=t_min, max=t_max)\n        normed = (normed-torch.min(normed))/(torch.max(normed)+eps)\n        return normed\n\n    def __repr__(self):\n        format_string = self.__class__.__name__\n        format_string += '(top_perc={:.2f}'.format(self.top_perc)\n        format_string += ', low_perc={:.2f})'.format(self.low_perc)\n        return format_string\n"
  },
  {
    "path": "lib/data_loaders/data_util.py",
    "content": "import os\nimport pandas as pd\nfrom tqdm import tqdm\nfrom torch.utils.data import ConcatDataset\n\n\ndata_sources = ('esim', 'ijrr', 'mvsec', 'eccd', 'hqfd', 'unknown')\n# Usage: name = data_sources[1], idx = data_sources.index('ijrr')\n\n\ndef concatenate_subfolders(data_file, dataset, dataset_kwargs):\n    \"\"\"\n    Create an instance of ConcatDataset by aggregating all the datasets in a given folder\n    \"\"\"\n    if os.path.isdir(data_file):\n        subfolders = [os.path.join(data_file, s) for s in os.listdir(data_file)]\n    elif os.path.isfile(data_file):\n        subfolders = pd.read_csv(data_file, header=None).values.flatten().tolist()\n    else:\n        raise Exception('{} must be data_file.txt or base/folder'.format(data_file))\n    print('Found {} samples in {}'.format(len(subfolders), data_file))\n    datasets = []\n    for subfolder in subfolders:\n        dataset_kwargs['item_kwargs'].update({'base_folder': subfolder})\n        datasets.append(dataset(**dataset_kwargs))\n    return ConcatDataset(datasets)\n\n\ndef concatenate_datasets(data_file, dataset_type, dataset_kwargs=None):\n    \"\"\"\n    Generates a dataset for each cti_path specified in data_file and concatenates the datasets.\n    :param data_file: A file containing a list of paths to CTI h5 files.\n                      Each file is expected to have a sequence of frame_{:09d}\n    :param dataset_type: Pointer to dataset class\n    :param dataset_kwargs: Dataset keyword arguments\n    :return ConcatDataset: concatenated dataset of all cti_paths in data_file\n    \"\"\"\n    if dataset_kwargs is None:\n        dataset_kwargs = {}\n\n    cti_paths = pd.read_csv(data_file, header=None).values.flatten().tolist()\n    dataset_list = []\n    print('Concatenating {} datasets'.format(dataset_type))\n    for cti_path in tqdm(cti_paths):\n        dataset_kwargs['dataset_kwargs'].update({'h5_path': cti_path})\n        dataset_list.append(dataset_type(**dataset_kwargs))\n    return ConcatDataset(dataset_list)\n\n\ndef concatenate_memmap_datasets(data_file, dataset_type, dataset_kwargs):\n    \"\"\"\n    Generates a dataset for each memmap_path specified in data_file and concatenates the datasets.\n    :param data_file: A file containing a list of paths to memmap root dirs.\n    :param dataset_type: Pointer to dataset class\n    :param dataset_kwargs: Dataset keyword arguments\n    :return ConcatDataset: concatenated dataset of all memmap_paths in data_file\n    \"\"\"\n    if dataset_kwargs is None:\n        dataset_kwargs = {}\n\n    memmap_paths = pd.read_csv(data_file, header=None).values.flatten().tolist()\n    dataset_list = []\n    print('Concatenating {} datasets'.format(dataset_type))\n    for memmap_path in tqdm(memmap_paths):\n        dataset_kwargs['dataset_kwargs'].update({'root': memmap_path})\n        dataset_list.append(dataset_type(**dataset_kwargs))\n    return ConcatDataset(dataset_list)\n"
  },
  {
    "path": "lib/data_loaders/dataloader_util.py",
    "content": "import torch\n\ndef unpack_batched_events(events, batch_indices):\n    \"\"\"\n    When returning events from a pytorch dataloader, it is often convenient when\n    batching, to place them into a contiguous 1x1xNx4 array, where N=length of all\n    B event arrays in the batch. This function unpacks the events into a Bx1xMx4 array,\n    where B is the batch size, M is the length of the *longest* event array in the\n    batch. The shorter event arrays are then padded with zeros.\n    Parameters\n    ----------\n    events : 1x1xNx4 array of the events\n    batch_indices : A list of the end indices of events, where one event array ends and\n    the next begins. For example, if you batched two event arrays A and B of length\n    200 and 700 respectively, batch_indices=[200, 900]\n    Returns\n    -------\n    unpacked_events: Bx1xMx4 array of unpacked events\n    \"\"\"\n    maxlen = 0\n    start_idx = 0\n    for b_idx in range(len(batch_indices)):\n        end_idx = event_batch_indices[b_idx]\n        maxlen = end_idx-start_idx if end_idx-start_dx > maxlen else maxlen\n\n    unpacked_events = torch.zeros((len(batch_indices), 1, maxlen, 4))\n    start_idx = 0\n    for b_idx in range(len(batch_indices)):\n        num_events = end_idx-start_idx\n        unpacked_events[b_idx, 0, 0:num_events, :] = events[start_idx:end_idx, :]\n        start_idx = end_idx\n    return unpacked_events\n"
  },
  {
    "path": "lib/data_loaders/hdf5_dataset.py",
    "content": "import h5py\nfrom ..util.event_util import binary_search_h5_dset\nfrom .base_dataset import BaseVoxelDataset\nimport matplotlib.pyplot as plt\n\nclass DynamicH5Dataset(BaseVoxelDataset):\n    \"\"\"\n    Dataloader for events saved in the Monash University HDF5 events format\n    (see https://github.com/TimoStoff/event_utils for code to convert datasets)\n    \"\"\"\n\n    def get_frame(self, index):\n        return self.h5_file['images']['image{:09d}'.format(index)][:]\n\n    def get_flow(self, index):\n        return self.h5_file['flow']['flow{:09d}'.format(index)][:]\n\n    def get_events(self, idx0, idx1):\n        xs = self.h5_file['events/xs'][idx0:idx1]\n        ys = self.h5_file['events/ys'][idx0:idx1]\n        ts = self.h5_file['events/ts'][idx0:idx1]\n        ps = self.h5_file['events/ps'][idx0:idx1] * 2.0 - 1.0\n        return xs, ys, ts, ps\n\n    def load_data(self, data_path):\n        self.data_sources = ('esim', 'ijrr', 'mvsec', 'eccd', 'hqfd', 'unknown')\n        try:\n            self.h5_file = h5py.File(data_path, 'r')\n        except OSError as err:\n            print(\"Couldn't open {}: {}\".format(data_path, err))\n\n        if self.sensor_resolution is None:\n            self.sensor_resolution = self.h5_file.attrs['sensor_resolution'][0:2]\n        else:\n            self.sensor_resolution = self.sensor_resolution[0:2]\n        print(\"sensor resolution = {}\".format(self.sensor_resolution))\n        self.has_flow = 'flow' in self.h5_file.keys() and len(self.h5_file['flow']) > 0\n        self.t0 = self.h5_file['events/ts'][0]\n        self.tk = self.h5_file['events/ts'][-1]\n        self.num_events = self.h5_file.attrs[\"num_events\"]\n        self.num_frames = self.h5_file.attrs[\"num_imgs\"]\n\n        self.frame_ts = []\n        for img_name in self.h5_file['images']:\n            self.frame_ts.append(self.h5_file['images/{}'.format(img_name)].attrs['timestamp'])\n\n        data_source = self.h5_file.attrs.get('source', 'unknown')\n        try:\n            self.data_source_idx = self.data_sources.index(data_source)\n        except ValueError:\n            self.data_source_idx = -1\n\n    def find_ts_index(self, timestamp):\n        idx = binary_search_h5_dset(self.h5_file['events/ts'], timestamp)\n        return idx\n\n    def ts(self, index):\n        return self.h5_file['events/ts'][index]\n\n    def compute_frame_indices(self):\n        frame_indices = []\n        start_idx = 0\n        for img_name in self.h5_file['images']:\n            end_idx = self.h5_file['images/{}'.format(img_name)].attrs['event_idx']\n            frame_indices.append([start_idx, end_idx])\n            start_idx = end_idx\n        return frame_indices\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Tool to add events to a set of events.\n    \"\"\"\n    import argparse\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"Path to event file\")\n    args = parser.parse_args()\n\n    dloader = DynamicH5Dataset(args.path)\n    for item in dloader:\n        print(item['events'].shape)\n"
  },
  {
    "path": "lib/data_loaders/memmap_dataset.py",
    "content": "import numpy as np\nimport os\nfrom .base_dataset import BaseVoxelDataset\n\nclass MemMapDataset(BaseVoxelDataset):\n    \"\"\"\n    Dataloader for events saved in the MemMap events format used at RPG.\n    (see https://github.com/TimoStoff/event_utils for code to convert datasets)\n    \"\"\"\n\n    def get_frame(self, index):\n        frame = self.filehandle['images'][index][:, :, 0]\n        return frame\n\n    def get_flow(self, index):\n        flow = self.filehandle['optic_flow'][index]\n        return flow\n\n    def get_events(self, idx0, idx1):\n        xy = self.filehandle[\"xy\"][idx0:idx1]\n        xs = xy[:, 0].astype(np.float32)\n        ys = xy[:, 1].astype(np.float32)\n        ts = self.filehandle[\"t\"][idx0:idx1]\n        ps = self.filehandle[\"p\"][idx0:idx1] * 2.0 - 1.0\n        return xs, ys, ts, ps\n\n    def load_data(self, data_path, timestamp_fname=\"timestamps.npy\", image_fname=\"images.npy\",\n                  optic_flow_fname=\"optic_flow.npy\", optic_flow_stamps_fname=\"optic_flow_stamps.npy\",\n                  t_fname=\"t.npy\", xy_fname=\"xy.npy\", p_fname=\"p.npy\"):\n\n        assert os.path.isdir(data_path), '%s is not a valid data_path' % data_path\n\n        data = {}\n        self.has_flow = False\n        for subroot, _, fnames in sorted(os.walk(data_path)):\n            for fname in sorted(fnames):\n                path = os.path.join(subroot, fname)\n                if fname.endswith(\".npy\"):\n                    if fname.endswith(timestamp_fname):\n                        frame_stamps = np.load(path)\n                        data[\"frame_stamps\"] = frame_stamps\n                    elif fname.endswith(image_fname):\n                        data[\"images\"] = np.load(path, mmap_mode=\"r\")\n                    elif fname.endswith(optic_flow_fname):\n                        data[\"optic_flow\"] = np.load(path, mmap_mode=\"r\")\n                        self.has_flow = True\n                    elif fname.endswith(optic_flow_stamps_fname):\n                        optic_flow_stamps = np.load(path)\n                        data[\"optic_flow_stamps\"] = optic_flow_stamps\n\n                    try:\n                        handle = np.load(path, mmap_mode=\"r\")\n                    except Exception as err:\n                        print(\"Couldn't load {}:\".format(path))\n                        raise err\n                    if fname.endswith(t_fname):  # timestamps\n                        data[\"t\"] = handle.squeeze()\n                    elif fname.endswith(xy_fname):  # coordinates\n                        data[\"xy\"] = handle.squeeze()\n                    elif fname.endswith(p_fname):  # polarity\n                        data[\"p\"] = handle.squeeze()\n            if len(data) > 0:\n                data['path'] = subroot\n                if \"t\" not in data:\n                    print(\"Ignoring root {} since no events\".format(subroot))\n                    continue\n                assert (len(data['p']) == len(data['xy']) and len(data['p']) == len(data['t']))\n\n                self.t0, self.tk = data['t'][0], data['t'][-1]\n                self.num_events = len(data['p'])\n                self.num_frames = len(data['images'])\n\n                self.frame_ts = []\n                for ts in data[\"frame_stamps\"]:\n                    self.frame_ts.append(ts)\n                data[\"index\"] = self.frame_ts\n\n        self.filehandle = data\n        self.find_config(data_path)\n\n    def find_ts_index(self, timestamp):\n        index = np.searchsorted(self.filehandle[\"t\"], timestamp)\n        return index\n\n    def ts(self, index):\n        return self.filehandle[\"t\"][index]\n\n    def infer_resolution(self):\n        if len(self.filehandle[\"images\"]) > 0:\n            sr = self.filehandle[\"images\"][0].shape[0:2]\n        else:\n            sr = [np.max(self.filehandle[\"xy\"][:, 1]) + 1, np.max(self.filehandle[\"xy\"][:, 0]) + 1]\n            print(\"Inferred sensor resolution: {}\".format(self.sensor_resolution))\n        return sr\n\n    def find_config(self, data_path):\n        if self.sensor_resolution is None:\n            config = os.path.join(data_path, \"dataset_config.json\")\n            if os.path.exists(config):\n                self.config = read_json(config)\n                self.data_source = self.config['data_source']\n                self.sensor_resolution = self.config[\"sensor_resolution\"]\n            else:\n                data_source = 'unknown'\n                self.sensor_resolution = self.infer_resolution()\n"
  },
  {
    "path": "lib/data_loaders/npy_dataset.py",
    "content": "from .base_dataset import BaseVoxelDataset\nimport numpy as np\n\nclass NpyDataset(BaseVoxelDataset):\n    \"\"\"\n    Dataloader for events saved in the Monash University HDF5 events format\n    (see https://github.com/TimoStoff/event_utils for code to convert datasets)\n    \"\"\"\n\n    def get_frame(self, index):\n        return None\n\n    def get_flow(self, index):\n        return None\n\n    def get_events(self, idx0, idx1):\n        xs = self.xs[idx0:idx1]\n        ys = self.ys[idx0:idx1]\n        ts = self.ts[idx0:idx1]\n        ps = self.ps[idx0:idx1]\n        return xs, ys, ts, ps\n\n    def load_data(self, data_path):\n        try:\n            self.data = np.load(data_path)\n            self.xs, self.ys, self.ps, self.ts = self.data[:, 0], self.data[:, 1], self.data[:, 2]*2-1, self.data[:, 3]*1e-6\n        except OSError as err:\n            print(\"Couldn't open {}: {}\".format(data_path, err))\n        print(self.ps)\n\n        if self.sensor_resolution is None:\n            self.sensor_resolution = [np.max(self.xs), np.max(self.ys)]\n            print(\"Inferred resolution as {}\".format(self.sensor_resolution))\n        else:\n            self.sensor_resolution = self.sensor_resolution[0:2]\n        print(\"sensor resolution = {}\".format(self.sensor_resolution))\n        self.has_flow = False\n        self.has_frames = False\n        self.t0 = self.ts[0]\n        self.tk = self.ts[-1]\n        self.num_events = len(self.xs)\n        self.num_frames = 0\n        self.frame_ts = []\n\n    def find_ts_index(self, timestamp):\n        idx = np.searchsorted(self.ts, timestamp)\n        return idx\n\n    def ts(self, index):\n        return ts[index]\n\n    def compute_frame_indices(self):\n        return None\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Tool to add events to a set of events.\n    \"\"\"\n    import argparse\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"Path to event file\")\n    args = parser.parse_args()\n\n    dloader = NpyDataset(args.path)\n    for item in dloader:\n        print(item['events'].shape)\n"
  },
  {
    "path": "lib/representations/image.py",
    "content": "import numpy as np\nfrom scipy.stats import rankdata\nimport torch\n\ndef events_to_image(xs, ys, ps, sensor_size=(180, 240), interpolation=None, padding=False, meanval=False, default=0):\n    \"\"\"\n    Place events into an image using numpy\n    @param xs x coords of events\n    @param ys y coords of events\n    @param ps Event polarities/weights\n    @param sensor_size The size of the event camera sensor\n    @param interpolation Whether to add the events to the pixels by interpolation (values: None, 'bilinear')\n    @param padding If true, pad the output image to include events otherwise warped off sensor\n    @param meanval If true, divide the sum of the values by the number of events at that location\n    @returns Event image from the input events\n    \"\"\"\n    img_size = (sensor_size[0]+1, sensor_size[1]+1)\n    if interpolation == 'bilinear' and xs.dtype is not torch.long and xs.dtype is not torch.long:\n        xt, yt, pt = torch.from_numpy(xs), torch.from_numpy(ys), torch.from_numpy(ps)\n        xt, yt, pt = xt.float(), yt.float(), pt.float()\n        img = events_to_image_torch(xt, yt, pt, clip_out_of_range=True, interpolation='bilinear', padding=padding)\n        img[img==0] = default\n        img = img.numpy()\n        if meanval:\n            event_count_image = events_to_image_torch(xt, yt, torch.ones_like(xt),\n                    clip_out_of_range=True, padding=padding)\n            event_count_image = event_count_image.numpy()\n    else:\n        coords = np.stack((ys, xs))\n        try:\n            abs_coords = np.ravel_multi_index(coords, img_size)\n        except ValueError:\n            print(\"Issue with input arrays! minx={}, maxx={}, miny={}, maxy={}, coords.shape={}, \\\n                    sum(coords)={}, sensor_size={}\".format(np.min(xs), np.max(xs), np.min(ys), np.max(ys),\n                        coords.shape, np.sum(coords), img_size))\n            raise ValueError\n        img = np.bincount(abs_coords, weights=ps, minlength=img_size[0]*img_size[1])\n        img = img.reshape(img_size)\n        if meanval:\n            event_count_image = np.bincount(abs_coords, weights=np.ones_like(xs), minlength=img_size[0]*img_size[1])\n            event_count_image = event_count_image.reshape(img_size)\n    if meanval:\n        img = np.divide(img, event_count_image, out=np.ones_like(img)*default, where=event_count_image!=0)\n    return img[0:sensor_size[0], 0:sensor_size[1]]\n\ndef events_to_image_torch(xs, ys, ps,\n        device=None, sensor_size=(180, 240), clip_out_of_range=True,\n        interpolation=None, padding=True, default=0):\n    \"\"\"\n    Method to turn event tensor to image. Allows for bilinear interpolation.\n    @param xs Tensor of x coords of events\n    @param ys Tensor of y coords of events\n    @param ps Tensor of event polarities/weights\n    @param device The device on which the image is. If none, set to events device\n    @param sensor_size The size of the image sensor/output image\n    @param clip_out_of_range If the events go beyond the desired image size,\n       clip the events to fit into the image\n    @param interpolation Which interpolation to use. Options=None,'bilinear'\n    @param padding If bilinear interpolation, allow padding the image by 1 to allow events to fit:\n    @returns Event image from the events\n    \"\"\"\n    if device is None:\n        device = xs.device\n    if interpolation == 'bilinear' and padding:\n        img_size = (sensor_size[0]+1, sensor_size[1]+1)\n    else:\n        img_size = list(sensor_size)\n\n    mask = torch.ones(xs.size(), device=device)\n    if clip_out_of_range:\n        zero_v = torch.tensor([0.], device=device)\n        ones_v = torch.tensor([1.], device=device)\n        clipx = img_size[1] if interpolation is None and padding==False else img_size[1]-1\n        clipy = img_size[0] if interpolation is None and padding==False else img_size[0]-1\n        mask = torch.where(xs>=clipx, zero_v, ones_v)*torch.where(ys>=clipy, zero_v, ones_v)\n\n    img = (torch.ones(img_size)*default).to(device)\n    if interpolation == 'bilinear' and xs.dtype is not torch.long and xs.dtype is not torch.long:\n        pxs = (xs.floor()).float()\n        pys = (ys.floor()).float()\n        dxs = (xs-pxs).float()\n        dys = (ys-pys).float()\n        pxs = (pxs*mask).long()\n        pys = (pys*mask).long()\n        masked_ps = ps.squeeze()*mask\n        interpolate_to_image(pxs, pys, dxs, dys, masked_ps, img)\n    else:\n        if xs.dtype is not torch.long:\n            xs = xs.long().to(device)\n        if ys.dtype is not torch.long:\n            ys = ys.long().to(device)\n        try:\n            mask = mask.long().to(device)\n            xs, ys = xs*mask, ys*mask\n            img.index_put_((ys, xs), ps, accumulate=True)\n        except Exception as e:\n            print(\"Unable to put tensor {} positions ({}, {}) into {}. Range = {},{}\".format(\n                ps.shape, ys.shape, xs.shape, img.shape,  torch.max(ys), torch.max(xs)))\n            raise e\n    return img\n\ndef interpolate_to_image(pxs, pys, dxs, dys, weights, img):\n    \"\"\"\n    Accumulate x and y coords to an image using bilinear interpolation\n    @param pxs Numpy array of integer typecast x coords of events\n    @param pys Numpy array of integer typecast y coords of events\n    @param dxs Numpy array of residual difference between x coord and int(x coord)\n    @param dys Numpy array of residual difference between y coord and int(y coord)\n    @returns Image\n    \"\"\"\n    img.index_put_((pys,   pxs  ), weights*(1.0-dxs)*(1.0-dys), accumulate=True)\n    img.index_put_((pys,   pxs+1), weights*dxs*(1.0-dys), accumulate=True)\n    img.index_put_((pys+1, pxs  ), weights*(1.0-dxs)*dys, accumulate=True)\n    img.index_put_((pys+1, pxs+1), weights*dxs*dys, accumulate=True)\n    return img\n\ndef interpolate_to_derivative_img(pxs, pys, dxs, dys, d_img, w1, w2):\n    \"\"\"\n    Accumulate x and y coords to an image using double weighted bilinear interpolation.\n    This allows for computing gradient images, since in the gradient image the interpolation\n    is weighted by the values of the Jacobian.\n    @param pxs Numpy array of integer typecast x coords of events\n    @param pys Numpy array of integer typecast y coords of events\n    @param dxs Numpy array of residual difference between x coord and int(x coord)\n    @param dys Numpy array of residual difference between y coord and int(y coord)\n    @param dimg Derivative image (needs to be of appropriate dimensions)\n    @param w1 Weight for x component of bilinear interpolation\n    @param w2 Weight for y component of bilinear interpolation\n    @returns Image\n    \"\"\"\n    for i in range(d_img.shape[0]):\n        d_img[i].index_put_((pys,   pxs  ), w1[i] * (-(1.0-dys)) + w2[i] * (-(1.0-dxs)), accumulate=True)\n        d_img[i].index_put_((pys,   pxs+1), w1[i] * (1.0-dys)    + w2[i] * (-dxs), accumulate=True)\n        d_img[i].index_put_((pys+1, pxs  ), w1[i] * (-dys)       + w2[i] * (1.0-dxs), accumulate=True)\n        d_img[i].index_put_((pys+1, pxs+1), w1[i] * dys          + w2[i] *  dxs, accumulate=True)\n    return d_img\n\ndef image_to_event_weights(xs, ys, img):\n    \"\"\"\n    Given an image and a set of event coordinates, get the pixel value\n    of the image for each event using reverse bilinear interpolation\n    @param xs x coords of events\n    @param ys y coords of events\n    @param img The image from which to draw the weights\n    @return List containing the value in the image for each event\n    \"\"\"\n    clipx, clipy  = img.shape[1]-1, img.shape[0]-1\n    mask = np.where(xs>=clipx, 0, 1)*np.where(ys>=clipy, 0, 1)\n\n    pxs = np.floor(xs*mask).astype(int)\n    pys = np.floor(ys*mask).astype(int)\n    dxs = xs-pxs\n    dys = ys-pys\n    wxs, wys = 1.0-dxs, 1.0-dys\n\n    weights =  img[pys, pxs]      *wxs*wys\n    weights += img[pys, pxs+1]    *dxs*wys\n    weights += img[pys+1, pxs]    *wxs*dys\n    weights += img[pys+1, pxs+1]  *dxs*dys\n    return weights*mask\n\ndef events_to_image_drv(xn, yn, pn, jacobian_xn, jacobian_yn,\n        device=None, sensor_size=(180, 240), clip_out_of_range=True,\n        interpolation='bilinear', padding=True, compute_gradient=False):\n    \"\"\"\n    Method to turn event tensor to image and derivative image (given event Jacobians).\n    Allows for bilinear interpolation.\n    @param xs Tensor of x coords of events\n    @param ys Tensor of y coords of events\n    @param ps Tensor of event polarities/weights\n    @param device The device on which the image is. If none, set to events device\n    @param sensor_size The size of the image sensor/output image\n    @param clip_out_of_range If the events go beyond the desired image size,\n       clip the events to fit into the image\n    @param interpolation Which interpolation to use. Options=None,'bilinear'\n    @param padding If bilinear interpolation, allow padding the image by 1 to allow events to fit:\n    @param compute_gradient If True, compute the image gradient\n    \"\"\"\n    xt, yt, pt = torch.from_numpy(xn), torch.from_numpy(yn), torch.from_numpy(pn)\n    xs, ys, ps, = xt.float(), yt.float(), pt.float()\n    if compute_gradient:\n        jacobian_x, jacobian_y = torch.from_numpy(jacobian_xn), torch.from_numpy(jacobian_yn)\n        jacobian_x, jacobian_y = jacobian_x.float(), jacobian_y.float()\n    if device is None:\n        device = xs.device\n    if padding:\n        img_size = (sensor_size[0]+1, sensor_size[1]+1)\n    else:\n        img_size = sensor_size\n\n    mask = torch.ones(xs.size())\n    if clip_out_of_range:\n        zero_v = torch.tensor([0.])\n        ones_v = torch.tensor([1.])\n        clipx = img_size[1] if interpolation is None and padding==False else img_size[1]-1\n        clipy = img_size[0] if interpolation is None and padding==False else img_size[0]-1\n        mask = torch.where(xs>=clipx, zero_v, ones_v)*torch.where(ys>=clipy, zero_v, ones_v)\n\n    pxs = xs.floor()\n    pys = ys.floor()\n    dxs = xs-pxs\n    dys = ys-pys\n    pxs = (pxs*mask).long()\n    pys = (pys*mask).long()\n    masked_ps = ps*mask\n    img = torch.zeros(img_size).to(device)\n    interpolate_to_image(pxs, pys, dxs, dys, masked_ps, img)\n\n    if compute_gradient:\n        d_img = torch.zeros((2, *img_size)).to(device)\n        w1 = jacobian_x*masked_ps\n        w2 = jacobian_y*masked_ps\n        interpolate_to_derivative_img(pxs, pys, dxs, dys, d_img, w1, w2)\n        d_img = d_img.numpy()\n    else:\n        d_img = None\n    return img.numpy(), d_img\n\ndef events_to_timestamp_image(xn, yn, ts, pn,\n        device=None, sensor_size=(180, 240), clip_out_of_range=True,\n        interpolation='bilinear', padding=True, normalize_timestamps=True):\n    \"\"\"\n    Method to generate the average timestamp images from 'Zhu19, Unsupervised Event-based Learning\n    of Optical Flow, Depth, and Egomotion'. This method does not have known derivative.\n    @param xs List of event x coordinates\n    @param ys List of event y coordinates\n    @param ts List of event timestamps\n    @param ps List of event polarities\n    @param device The device that the events are on\n    @param sensor_size The size of the event sensor/output voxels\n    @param clip_out_of_range If the events go beyond the desired image size,\n        clip the events to fit into the image\n    @param interpolation Which interpolation to use. Options=None,'bilinear'\n    @param padding If bilinear interpolation, allow padding the image by 1 to allow events to fit\n    @returns Timestamp images of the positive and negative events: ti_pos, ti_neg\n    \"\"\"\n\n    t0 = ts[0]\n    xt, yt, ts, pt = torch.from_numpy(xn), torch.from_numpy(yn), torch.from_numpy(ts-t0), torch.from_numpy(pn)\n    xs, ys, ts, ps = xt.float(), yt.float(), ts.float(), pt.float()\n    zero_v = torch.tensor([0.])\n    ones_v = torch.tensor([1.])\n    if device is None:\n        device = xs.device\n    if padding:\n        img_size = (sensor_size[0]+1, sensor_size[1]+1)\n    else:\n        img_size = sensor_size\n\n    mask = torch.ones(xs.size())\n    if clip_out_of_range:\n        clipx = img_size[1] if interpolation is None and padding==False else img_size[1]-1\n        clipy = img_size[0] if interpolation is None and padding==False else img_size[0]-1\n        mask = torch.where(xs>=clipx, zero_v, ones_v)*torch.where(ys>=clipy, zero_v, ones_v)\n\n    pos_events_mask = torch.where(ps>0, ones_v, zero_v)\n    neg_events_mask = torch.where(ps<=0, ones_v, zero_v)\n    normalized_ts = (ts-ts[0])/(ts[-1]+1e-6) if normalize_timestamps else ts\n    pxs = xs.floor()\n    pys = ys.floor()\n    dxs = xs-pxs\n    dys = ys-pys\n    pxs = (pxs*mask).long()\n    pys = (pys*mask).long()\n    masked_ps = ps*mask\n\n    pos_weights = normalized_ts*pos_events_mask\n    neg_weights = normalized_ts*neg_events_mask\n    img_pos = torch.zeros(img_size).to(device)\n    img_pos_cnt = torch.ones(img_size).to(device)\n    img_neg = torch.zeros(img_size).to(device)\n    img_neg_cnt = torch.ones(img_size).to(device)\n\n    interpolate_to_image(pxs, pys, dxs, dys, pos_weights, img_pos)\n    interpolate_to_image(pxs, pys, dxs, dys, pos_events_mask, img_pos_cnt)\n    interpolate_to_image(pxs, pys, dxs, dys, neg_weights, img_neg)\n    interpolate_to_image(pxs, pys, dxs, dys, neg_events_mask, img_neg_cnt)\n\n    img_pos, img_pos_cnt = img_pos.numpy(), img_pos_cnt.numpy()\n    img_pos_cnt[img_pos_cnt==0] = 1\n    img_neg, img_neg_cnt = img_neg.numpy(), img_neg_cnt.numpy()\n    img_neg_cnt[img_neg_cnt==0] = 1\n    img_pos, img_neg = img_pos/img_pos_cnt, img_neg/img_neg_cnt\n    return img_pos, img_neg\n\ndef events_to_timestamp_image_torch(xs, ys, ts, ps,\n        device=None, sensor_size=(180, 240), clip_out_of_range=True,\n        interpolation='bilinear', padding=True, timestamp_reverse=False):\n    \"\"\"\n    Method to generate the average timestamp images from 'Zhu19, Unsupervised Event-based Learning\n    of Optical Flow, Depth, and Egomotion'. This method does not have known derivative.\n    @param xs List of event x coordinates\n    @param ys List of event y coordinates\n    @param ts List of event timestamps\n    @param ps List of event polarities\n    @param device The device that the events are on\n    @param sensor_size The size of the event sensor/output voxels\n    @param clip_out_of_range If the events go beyond the desired image size,\n        clip the events to fit into the image\n    @param interpolation Which interpolation to use. Options=None,'bilinear'\n    @param padding If bilinear interpolation, allow padding the image by 1 to allow events to fit\n    @param timestamp_reverse Reverse the timestamps of the events, for backward warping\n    @returns Timestamp images of the positive and negative events: ti_pos, ti_neg\n    \"\"\"\n    if device is None:\n        device = xs.device\n    xs, ys, ps, ts = xs.squeeze(), ys.squeeze(), ps.squeeze(), ts.squeeze()\n    if padding:\n        img_size = (sensor_size[0]+1, sensor_size[1]+1)\n    else:\n        img_size = sensor_size\n    zero_v = torch.tensor([0.], device=device)\n    ones_v = torch.tensor([1.], device=device)\n\n    mask = torch.ones(xs.size(), device=device)\n    if clip_out_of_range:\n        clipx = img_size[1] if interpolation is None and padding==False else img_size[1]-1\n        clipy = img_size[0] if interpolation is None and padding==False else img_size[0]-1\n        mask = torch.where(xs>=clipx, zero_v, ones_v)*torch.where(ys>=clipy, zero_v, ones_v)\n\n    pos_events_mask = torch.where(ps>0, ones_v, zero_v)\n    neg_events_mask = torch.where(ps<=0, ones_v, zero_v)\n    epsilon = 1e-6\n    if timestamp_reverse:\n        normalized_ts = ((-ts+ts[-1])/(ts[-1]-ts[0]+epsilon)).squeeze()\n    else:\n        normalized_ts = ((ts-ts[0])/(ts[-1]-ts[0]+epsilon)).squeeze()\n    pxs = xs.floor().float()\n    pys = ys.floor().float()\n    dxs = (xs-pxs).float()\n    dys = (ys-pys).float()\n    pxs = (pxs*mask).long()\n    pys = (pys*mask).long()\n    masked_ps = ps*mask\n\n    pos_weights = (normalized_ts*pos_events_mask).float()\n    neg_weights = (normalized_ts*neg_events_mask).float()\n    img_pos = torch.zeros(img_size).to(device)\n    img_pos_cnt = torch.ones(img_size).to(device)\n    img_neg = torch.zeros(img_size).to(device)\n    img_neg_cnt = torch.ones(img_size).to(device)\n\n    interpolate_to_image(pxs, pys, dxs, dys, pos_weights, img_pos)\n    interpolate_to_image(pxs, pys, dxs, dys, pos_events_mask, img_pos_cnt)\n    interpolate_to_image(pxs, pys, dxs, dys, neg_weights, img_neg)\n    interpolate_to_image(pxs, pys, dxs, dys, neg_events_mask, img_neg_cnt)\n\n    # Avoid division by 0\n    img_pos_cnt[img_pos_cnt==0] = 1\n    img_neg_cnt[img_neg_cnt==0] = 1\n    img_pos = img_pos.div(img_pos_cnt)\n    img_neg = img_neg.div(img_neg_cnt)\n    return img_pos, img_neg #/img_pos_cnt, img_neg/img_neg_cnt\n\nclass TimestampImage:\n\n    def __init__(self, sensor_size):\n        self.sensor_size = sensor_size\n        self.num_pixels = sensor_size[0]*sensor_size[1]\n        self.image = np.ones(sensor_size)\n\n    def set_init(self, value):\n        self.image = np.ones_like(self.image)*value\n\n    def add_event(self, x, y, t, p):\n        self.image[int(y), int(x)] = t\n\n    def add_events(self, xs, ys, ts, ps):\n        for x, y, t in zip(xs, ys, ts):\n            self.add_event(x, y, t, 0)\n\n    def get_image(self):\n        sort_args = rankdata(self.image, method='dense')\n        sort_args = sort_args-1\n        sort_args = sort_args.reshape(self.sensor_size)\n        sort_args = sort_args/np.max(sort_args)\n        return sort_args\n\nclass EventImage:\n\n    def __init__(self, sensor_size):\n        self.sensor_size = sensor_size\n        self.num_pixels = sensor_size[0]*sensor_size[1]\n        self.image = np.ones(sensor_size)\n\n    def add_event(self, x, y, t, p):\n        self.image[int(y), int(x)] += p\n\n    def add_events(self, xs, ys, ts, ps):\n        for x, y, t in zip(xs, ys, ts):\n            self.add_event(x, y, t, 0)\n\n    def get_image(self):\n        mn, mx = np.min(self.image), np.max(self.image)\n        norm_img = (self.image-mn)/(mx-mn)\n        return norm_img\n"
  },
  {
    "path": "lib/representations/voxel_grid.py",
    "content": "import argparse\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport cv2 as cv\nimport torch\nfrom ..util.event_util import events_bounds_mask\nfrom .image import events_to_image, events_to_image_torch\n\ndef get_voxel_grid_as_image(voxelgrid):\n    \"\"\"\n    Debug function. Returns a voxelgrid as a series of images,\n    one for each bin for display.\n    @param voxelgrid Input voxel grid\n    @returns Image of N bins placed side by side\n    \"\"\"\n    images = []\n    splitter = np.ones((voxelgrid.shape[1], 2))*np.max(voxelgrid)\n    for image in voxelgrid:\n        images.append(image)\n        images.append(splitter)\n    images.pop()\n    sidebyside = np.hstack(images)\n    sidebyside = cv.normalize(sidebyside, None, 0, 255, cv.NORM_MINMAX)\n    return sidebyside\n\ndef plot_voxel_grid(voxelgrid, cmap='gray'):\n    \"\"\"\n    Debug function. Given a voxel grid, display it as an image.\n    @param voxelgrid The input voxel grid\n    @param cmap The color map to use\n    @returns None\n    \"\"\"\n    sidebyside = get_voxel_grid_as_image(voxelgrid)\n    plt.imshow(sidebyside, cmap=cmap)\n    plt.show()\n\ndef voxel_grids_fixed_n_torch(xs, ys, ts, ps, B, n, sensor_size=(180, 240), temporal_bilinear=True):\n    \"\"\"\n    Given a set of events, return the voxel grid formed with a fixed number of events.\n    @param xs List of event x coordinates (torch tensor)\n    @param ys List of event y coordinates (torch tensor)\n    @param ts List of event timestamps (torch tensor)\n    @param ps List of event polarities (torch tensor)\n    @param B Number of bins in output voxel grids (int)\n    @param n The number of events per voxel\n    @param sensor_size The size of the event sensor/output voxels\n    @param temporal_bilinear Whether the events should be naively\n        accumulated to the voxels (faster), or properly\n        temporally distributed\n    @returns List of output voxel grids\n    \"\"\"\n    voxels = []\n    for idx in range(0, len(xs)-n, n):\n        voxels.append(events_to_voxel_torch(xs[idx:idx+n], ys[idx:idx+n],\n            ts[idx:idx+n], ps[idx:idx+n], B, sensor_size=sensor_size,\n            temporal_bilinear=temporal_bilinear))\n    return voxels\n\ndef voxel_grids_fixed_t_torch(xs, ys, ts, ps, B, t, sensor_size=(180, 240), temporal_bilinear=True):\n    \"\"\"\n    Given a set of events, return a voxel grid with a fixed temporal width.\n    @param xs List of event x coordinates (torch tensor)\n    @param ys List of event y coordinates (torch tensor)\n    @param ts List of event timestamps (torch tensor)\n    @param ps List of event polarities (torch tensor)\n    @param B Number of bins in output voxel grids (int)\n    @param t The time width of the voxel grids\n    @param sensor_size The size of the event sensor/output voxels\n    @param temporal_bilinear Whether the events should be naively\n        accumulated to the voxels (faster), or properly\n        temporally distributed\n    @returns List of output voxel grids\n    \"\"\"\n    device = xs.device\n    voxels = []\n    np_ts = ts.cpu().numpy()\n    for t_start in np.arange(ts[0].item(), ts[-1].item()-t, t):\n        voxels.append(events_to_voxel_timesync_torch(xs, ys, ts, ps, B, t_start, t_start+t, np_ts=np_ts,\n            sensor_size=sensor_size, temporal_bilinear=temporal_bilinear))\n    return voxels\n\ndef events_to_voxel_timesync_torch(xs, ys, ts, ps, B, t0, t1, device=None, np_ts=None,\n        sensor_size=(180, 240), temporal_bilinear=True):\n    \"\"\"\n    Given a set of events, return a voxel grid of the events between t0 and t1\n    @param xs List of event x coordinates (torch tensor)\n    @param ys List of event y coordinates (torch tensor)\n    @param ts List of event timestamps (torch tensor)\n    @param ps List of event polarities (torch tensor)\n    @param B Number of bins in output voxel grids (int)\n    @param t0 The start time of the voxel grid\n    @param t1 The end time of the voxel grid\n    @param device Device to put voxel grid. If left empty, same device as events\n    @param np_ts A numpy copy of ts (optional). If not given, will be created in situ\n    @param sensor_size The size of the event sensor/output voxels\n    @param temporal_bilinear Whether the events should be naively\n        accumulated to the voxels (faster), or properly\n        temporally distributed\n    @returns Voxel of the events between t0 and t1\n    \"\"\"\n    assert(t1>t0)\n    if np_ts is None:\n        np_ts = ts.cpu().numpy()\n    if device is None:\n        device = xs.device\n    start_idx = np.searchsorted(np_ts, t0)\n    end_idx = np.searchsorted(np_ts, t1)\n    assert(start_idx < end_idx)\n    voxel = events_to_voxel_torch(xs[start_idx:end_idx], ys[start_idx:end_idx],\n        ts[start_idx:end_idx], ps[start_idx:end_idx], B, device, sensor_size=sensor_size,\n        temporal_bilinear=temporal_bilinear)\n    return voxel\n\ndef events_to_voxel_torch(xs, ys, ts, ps, B, device=None, sensor_size=(180, 240), temporal_bilinear=True):\n    \"\"\"\n    Turn set of events to a voxel grid tensor, using temporal bilinear interpolation\n    @param xs List of event x coordinates (torch tensor)\n    @param ys List of event y coordinates (torch tensor)\n    @param ts List of event timestamps (torch tensor)\n    @param ps List of event polarities (torch tensor)\n    @param B Number of bins in output voxel grids (int)\n    @param device Device to put voxel grid. If left empty, same device as events\n    @param sensor_size The size of the event sensor/output voxels\n    @param temporal_bilinear Whether the events should be naively\n        accumulated to the voxels (faster), or properly\n        temporally distributed\n    @returns Voxel of the events between t0 and t1\n    \"\"\"\n    if device is None:\n        device = xs.device\n    assert(len(xs)==len(ys) and len(ys)==len(ts) and len(ts)==len(ps))\n    bins = []\n    dt = ts[-1]-ts[0]\n    t_norm = (ts-ts[0])/dt*(B-1)\n    zeros = torch.zeros(t_norm.size())\n    for bi in range(B):\n        if temporal_bilinear:\n            bilinear_weights = torch.max(zeros, 1.0-torch.abs(t_norm-bi))\n            weights = ps*bilinear_weights\n            vb = events_to_image_torch(xs, ys,\n                    weights, device, sensor_size=sensor_size,\n                    clip_out_of_range=False)\n        else:\n            tstart = t[0] + dt*bi\n            tend = tstart + dt\n            beg = binary_search_torch_tensor(t, 0, len(ts)-1, tstart)\n            end = binary_search_torch_tensor(t, 0, len(ts)-1, tend)\n            vb = events_to_image_torch(xs[beg:end], ys[beg:end],\n                    ps[beg:end], device, sensor_size=sensor_size,\n                    clip_out_of_range=False)\n        bins.append(vb)\n    bins = torch.stack(bins)\n    return bins\n\ndef events_to_neg_pos_voxel_torch(xs, ys, ts, ps, B, device=None,\n        sensor_size=(180, 240), temporal_bilinear=True):\n    \"\"\"\n    Turn set of events to a voxel grid tensor, using temporal bilinear interpolation.\n    Positive and negative events are put into separate voxel grids\n    @param xs List of event x coordinates (torch tensor)\n    @param ys List of event y coordinates (torch tensor)\n    @param ts List of event timestamps (torch tensor)\n    @param ps List of event polarities (torch tensor)\n    @param B Number of bins in output voxel grids (int)\n    @param device Device to put voxel grid. If left empty, same device as events\n    @param sensor_size The size of the event sensor/output voxels\n    @param temporal_bilinear Whether the events should be naively\n        accumulated to the voxels (faster), or properly\n        temporally distributed\n    @returns Two voxel grids, one for positive one for negative events\n    \"\"\"\n    zero_v = torch.tensor([0.])\n    ones_v = torch.tensor([1.])\n    pos_weights = torch.where(ps>0, ones_v, zero_v)\n    neg_weights = torch.where(ps<=0, ones_v, zero_v)\n\n    voxel_pos = events_to_voxel_torch(xs, ys, ts, pos_weights, B, device=device,\n            sensor_size=sensor_size, temporal_bilinear=temporal_bilinear)\n    voxel_neg = events_to_voxel_torch(xs, ys, ts, neg_weights, B, device=device,\n            sensor_size=sensor_size, temporal_bilinear=temporal_bilinear)\n\n    return voxel_pos, voxel_neg\n\ndef events_to_voxel(xs, ys, ts, ps, B, sensor_size=(180, 240), temporal_bilinear=True):\n    \"\"\"\n    Turn set of events to a voxel grid tensor, using temporal bilinear interpolation\n    @param xs List of event x coordinates (torch tensor)\n    @param ys List of event y coordinates (torch tensor)\n    @param ts List of event timestamps (torch tensor)\n    @param ps List of event polarities (torch tensor)\n    @param B Number of bins in output voxel grids (int)\n    @param sensor_size The size of the event sensor/output voxels\n    @param temporal_bilinear Whether the events should be naively\n        accumulated to the voxels (faster), or properly\n        temporally distributed\n    @returns Voxel of the events between t0 and t1\n    \"\"\"\n    assert(len(xs)==len(ys) and len(ys)==len(ts) and len(ts)==len(ps))\n    num_events_per_bin = len(xs)//B\n    bins = []\n    dt = ts[-1]-ts[0]\n    t_norm = (ts-ts[0])/dt*(B-1)\n    zeros = (np.expand_dims(np.zeros(t_norm.shape[0]), axis=0).transpose()).squeeze()\n    for bi in range(B):\n        if temporal_bilinear:\n            bilinear_weights = np.maximum(zeros, 1.0-np.abs(t_norm-bi))\n            weights = ps*bilinear_weights\n            vb = events_to_image(xs.squeeze(), ys.squeeze(), weights.squeeze(),\n                    sensor_size=sensor_size, interpolation=None)\n        else:\n            beg = bi*num_events_per_bin\n            end = beg + num_events_per_bin\n            vb = events_to_image(xs[beg:end], ys[beg:end],\n                    weights[beg:end], sensor_size=sensor_size)\n        bins.append(vb)\n    bins = np.stack(bins)\n    return bins\n\ndef events_to_neg_pos_voxel(xs, ys, ts, ps, B,\n        sensor_size=(180, 240), temporal_bilinear=True):\n    \"\"\"\n    Turn set of events to a voxel grid tensor, using temporal bilinear interpolation.\n    Positive and negative events are put into separate voxel grids\n    @param xs List of event x coordinates (torch tensor)\n    @param ys List of event y coordinates (torch tensor)\n    @param ts List of event timestamps (torch tensor)\n    @param ps List of event polarities (torch tensor)\n    @param B Number of bins in output voxel grids (int)\n    @param sensor_size The size of the event sensor/output voxels\n    @param temporal_bilinear Whether the events should be naively\n        accumulated to the voxels (faster), or properly\n        temporally distributed\n    @returns Two voxel grids, one for positive one for negative events\n    \"\"\"\n    pos_weights = np.where(ps, 1, 0)\n    neg_weights = np.where(ps, 0, 1)\n\n    voxel_pos = events_to_voxel(xs, ys, ts, pos_weights, B,\n            sensor_size=sensor_size, temporal_bilinear=temporal_bilinear)\n    voxel_neg = events_to_voxel(xs, ys, ts, neg_weights, B,\n            sensor_size=sensor_size, temporal_bilinear=temporal_bilinear)\n\n    return voxel_pos, voxel_neg\n"
  },
  {
    "path": "lib/transforms/optic_flow.py",
    "content": "import numpy as np\nimport torch\nimport torch.nn.functional as F\n\ndef warp_events_flow_torch(xt, yt, tt, pt, flow_field, t0=None,\n        batched=False, batch_indices=None):\n    \"\"\"\n    Given events and a flow field, warp the events by the flow\n    Parameters\n    ----------\n    xs : list of event x coordinates \n    ys : list of event y coordinates \n    ts : list of event timestamps \n    ps : list of event polarities \n    flow_field : 2D tensor containing the flow at each x,y position\n    t0 : the reference time to warp events to. If empty, will use the\n        timestamp of the last event\n    Returns\n    -------\n    warped_xt: x coords of warped events\n    warped_yt: y coords of warped events\n    \"\"\"\n    if len(xt.shape) > 1:\n        xt, yt, tt, pt = xt.squeeze(), yt.squeeze(), tt.squeeze(), pt.squeeze()\n    if t0 is None:\n        t0 = tt[-1]\n    while len(flow_field.size()) < 4:\n        flow_field = flow_field.unsqueeze(0)\n    if len(xt.size()) == 1:\n        event_indices = torch.transpose(torch.stack((xt, yt), dim=0), 0, 1)\n    else:\n        event_indices = torch.transpose(torch.cat((xt, yt), dim=1), 0, 1)\n    #event_indices.requires_grad_ = False\n    event_indices = torch.reshape(event_indices, [1, 1, len(xt), 2])\n\n    # Event indices need to be between -1 and 1 for F.gridsample\n    event_indices[:,:,:,0] = event_indices[:,:,:,0]/(flow_field.shape[-1]-1)*2.0-1.0\n    event_indices[:,:,:,1] = event_indices[:,:,:,1]/(flow_field.shape[-2]-1)*2.0-1.0\n\n    flow_at_event = F.grid_sample(flow_field, event_indices, align_corners=True)\n    dt = (tt-t0).squeeze()\n\n    warped_xt = xt+flow_at_event[:,0,:,:].squeeze()*dt\n    warped_yt = yt+flow_at_event[:,1,:,:].squeeze()*dt\n\n    return warped_xt, warped_yt\n\n"
  },
  {
    "path": "lib/util/__init__.py",
    "content": "# __init__.py\nfrom .event_util import *\nfrom .util import *\n"
  },
  {
    "path": "lib/util/event_util.py",
    "content": "import numpy as np\nimport h5py\nfrom ..representations.image import events_to_image\n\ndef infer_resolution(xs, ys):\n    \"\"\"\n    Given events, guess the resolution by looking at the max and min values\n    @param xs Event x coords\n    @param ys Event y coords\n    @returns Inferred resolution\n    \"\"\"\n    sr = [np.max(ys) + 1, np.max(xs) + 1]\n    return sr\n\ndef events_bounds_mask(xs, ys, x_min, x_max, y_min, y_max):\n    \"\"\"\n    Get a mask of the events that are within the given bounds\n    @param xs Event x coords\n    @param ys Event y coords\n    @param x_min Lower bound of x axis\n    @param x_max Upper bound of x axis\n    @param y_min Lower bound of y axis\n    @param y_max Upper bound of y axis\n    @returns mask\n    \"\"\"\n    mask = np.where(np.logical_or(xs<=x_min, xs>x_max), 0.0, 1.0)\n    mask *= np.where(np.logical_or(ys<=y_min, ys>y_max), 0.0, 1.0)\n    return mask\n\ndef cut_events_to_lifespan(xs, ys, ts, ps, params,\n        pixel_crossings, minimum_events=100, side='back'):\n    \"\"\"\n    Given motion model parameters, compute the speed and thus\n    the lifespan, given a desired number of pixel crossings\n    @param xs Event x coords\n    @param ys Event y coords\n    @param ts Event timestamps\n    @param ps Event polarities\n    @param params Motion model parameters\n    @param pixel_crossings Number of pixel crossings\n    @param minimum_events The minimum number of events to cut down to\n    @param side Cut events from 'back' or 'front'\n    @returns Cut events\n    \"\"\"\n    magnitude = np.linalg.norm(params)\n    dt = pixel_crossings/magnitude\n    if side == 'back':\n        s_idx = np.searchsorted(ts, ts[-1]-dt)\n        num_events = len(xs)-s_idx\n        s_idx = len(xs)-minimum_events if num_events < minimum_events else s_idx\n        return xs[s_idx:-1], ys[s_idx:-1], ts[s_idx:-1], ps[s_idx:-1]\n    elif side == 'front':\n        s_idx = np.searchsorted(ts, dt+ts[0])\n        num_events = s_idx\n        s_idx = minimum_events if num_events < minimum_events else s_idx\n        return xs[0:s_idx], ys[0:s_idx], ts[0:s_idx], ps[0:s_idx]\n    else:\n        raise Exception(\"Invalid side given: {}. To cut events, must provide an \\\n                appropriate side to cut from, either 'front' or 'back'\".format(side))\n\ndef clip_events_to_bounds(xs, ys, ts, ps, bounds, set_zero=False):\n    \"\"\"\n    Clip events to the given bounds.\n    @param xs x coords of events\n    @param ys y coords of events\n    @param ts Timestamps of events (may be None)\n    @param ps Polarities of events (may be None)\n    @param bounds the bounds of the events. Must be list of\n       length 2 (in which case the lower bound is assumed to be 0,0)\n       or length 4, in format [min_y, max_y, min_x, max_x]\n    @param: set_zero if True, simply multiplies the out of bounds events with 0 mask.\n        Otherwise, removes the events.\n    @returns Clipped events\n    \"\"\"\n    if len(bounds) == 2:\n        bounds = [0, bounds[0], 0, bounds[1]]\n    elif len(bounds) != 4:\n        raise Exception(\"Bounds must be of length 2 or 4 (not {})\".format(len(bounds)))\n    miny, maxy, minx, maxx = bounds\n    if set_zero:\n        mask = events_bounds_mask(xs, ys, minx, maxx, miny, maxy)\n        ts_mask = None if ts is None else ts*mask\n        ps_mask = None if ps is None else ps*mask\n        return xs*mask, ys*mask, ts_mask, ps_mask\n    else:\n        x_clip_idc = np.argwhere((xs >= minx) & (xs < maxx))[:, 0]\n        y_subset = ys[x_clip_idc]\n        y_clip_idc = np.argwhere((y_subset >= miny) & (y_subset < maxy))[:, 0]\n\n        xs_clip = xs[x_clip_idc][y_clip_idc]\n        ys_clip = ys[x_clip_idc][y_clip_idc]\n        ts_clip = None if ts is None else ts[x_clip_idc][y_clip_idc]\n        ps_clip = None if ps is None else ps[x_clip_idc][y_clip_idc]\n        return xs_clip, ys_clip, ts_clip, ps_clip\n\ndef get_events_from_mask(mask, xs, ys):\n    \"\"\"\n    Given an image mask, return the indices of all events at each location in the mask\n    @params mask The image mask\n    @param xs x components of events as list\n    @param ys y components of events as list\n    @returns Indices of events that lie on the mask\n    \"\"\"\n    xs = xs.astype(int)\n    ys = ys.astype(int)\n    idx = np.stack((ys, xs))\n    event_vals = mask[tuple(idx)]\n    event_indices = np.argwhere(event_vals >= 0.01).squeeze()\n    return event_indices\n\ndef binary_search_h5_dset(dset, x, l=None, r=None, side='left'):\n    \"\"\"\n    Binary search for a timestamp in an HDF5 event file, without\n    loading the entire file into RAM\n    @param dset The HDF5 dataset\n    @param x The timestamp being searched for\n    @param l Starting guess for the left side (0 if None is chosen)\n    @param r Starting guess for the right side (-1 if None is chosen)\n    @param side Which side to take final result for if exact match is not found\n    @returns Index of nearest event to 'x'\n    \"\"\"\n    l = 0 if l is None else l\n    r = len(dset)-1 if r is None else r\n    while l <= r:\n        mid = l + (r - l)//2;\n        midval = dset[mid]\n        if midval == x:\n            return mid\n        elif midval < x:\n            l = mid + 1\n        else:\n            r = mid - 1\n    if side == 'left':\n        return l\n    return r\n\ndef binary_search_h5_timestamp(hdf_path, l, r, x, side='left'):\n    f = h5py.File(hdf_path, 'r')\n    return binary_search_h5_dset(f['events/ts'], x, l=l, r=r, side=side)\n\ndef binary_search_torch_tensor(t, l, r, x, side='left'):\n    \"\"\"\n    Binary search implemented for pytorch tensors (no native implementation exists)\n    @param t The tensor\n    @param x The value being searched for\n    @param l Starting lower bound (0 if None is chosen)\n    @param r Starting upper bound (-1 if None is chosen)\n    @param side Which side to take final result for if exact match is not found\n    @returns Index of nearest event to 'x'\n    \"\"\"\n    if r is None:\n        r = len(t)-1\n    while l <= r:\n        mid = l + (r - l)//2;\n        midval = t[mid]\n        if midval == x:\n            return mid\n        elif midval < x:\n            l = mid + 1\n        else:\n            r = mid - 1\n    if side == 'left':\n        return l\n    return r\n\ndef remove_hot_pixels(xs, ys, ts, ps, sensor_size=(180, 240), num_hot=50):\n    \"\"\"\n    Given a set of events, removes the 'hot' pixel events.\n    Accumulates all of the events into an event image and removes\n    the 'num_hot' highest value pixels.\n    @param xs Event x coords\n    @param ys Event y coords\n    @param ts Event timestamps\n    @param ps Event polarities\n    @param sensor_size The size of the event camera sensor\n    @param num_hot The number of hot pixels to remove\n    \"\"\"\n    img = events_to_image(xs, ys, ps, sensor_size=sensor_size)\n    hot = np.array([])\n    for i in range(num_hot):\n        maxc = np.unravel_index(np.argmax(img), sensor_size)\n        #print(\"{} = {}\".format(maxc, img[maxc]))\n        img[maxc] = 0\n        h = np.where((xs == maxc[1]) & (ys == maxc[0]))\n        hot = np.concatenate((hot, h[0]))\n    xs, ys, ts, ps = np.delete(xs, hot), np.delete(ys, hot), np.delete(ts, hot), np.delete(ps, hot)\n    return xs, ys, ts, ps\n"
  },
  {
    "path": "lib/util/util.py",
    "content": "import json\nimport numpy as np\nimport cv2 as cv\nimport pandas as pd\nfrom pathlib import Path\nfrom itertools import repeat\nfrom collections import OrderedDict\nfrom math import fabs, ceil, floor\nfrom torch.nn import ZeroPad2d\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\nimport cv2 as cv\n\n\ndef ensure_dir(dirname):\n    \"\"\"\n    Ensure a directory exists, if not create it\n    @param dirname Directory name\n    @returns None\n    \"\"\"\n    dirname = Path(dirname)\n    if not dirname.is_dir():\n        dirname.mkdir(parents=True, exist_ok=False)\n\n\ndef read_json(fname):\n    fname = Path(fname)\n    with fname.open('rt') as handle:\n        return json.load(handle, object_hook=OrderedDict)\n\n\ndef write_json(content, fname):\n    fname = Path(fname)\n    with fname.open('wt') as handle:\n        json.dump(content, handle, indent=4, sort_keys=False)\n\n\ndef inf_loop(data_loader):\n    ''' wrapper function for endless data loader. '''\n    for loader in repeat(data_loader):\n        yield from loader\n\n\ndef optimal_crop_size(max_size, max_subsample_factor, safety_margin=0):\n    \"\"\" Find the optimal crop size for a given max_size and subsample_factor.\n        The optimal crop size is the smallest integer which is greater or equal than max_size,\n        while being divisible by 2^max_subsample_factor.\n    \"\"\"\n    crop_size = int(pow(2, max_subsample_factor) * ceil(max_size / pow(2, max_subsample_factor)))\n    crop_size += safety_margin * pow(2, max_subsample_factor)\n    return crop_size\n\n\nclass CropParameters:\n    \"\"\" Helper class to compute and store useful parameters for pre-processing and post-processing\n        of images in and out of E2VID.\n        Pre-processing: finding the best image size for the network, and padding the input image with zeros\n        Post-processing: Crop the output image back to the original image size\n    \"\"\"\n\n    def __init__(self, width, height, num_encoders, safety_margin=0):\n\n        self.height = height\n        self.width = width\n        self.num_encoders = num_encoders\n        self.width_crop_size = optimal_crop_size(self.width, num_encoders, safety_margin)\n        self.height_crop_size = optimal_crop_size(self.height, num_encoders, safety_margin)\n\n        self.padding_top = ceil(0.5 * (self.height_crop_size - self.height))\n        self.padding_bottom = floor(0.5 * (self.height_crop_size - self.height))\n        self.padding_left = ceil(0.5 * (self.width_crop_size - self.width))\n        self.padding_right = floor(0.5 * (self.width_crop_size - self.width))\n        self.pad = ZeroPad2d((self.padding_left, self.padding_right, self.padding_top, self.padding_bottom))\n\n        self.cx = floor(self.width_crop_size / 2)\n        self.cy = floor(self.height_crop_size / 2)\n\n        self.ix0 = self.cx - floor(self.width / 2)\n        self.ix1 = self.cx + ceil(self.width / 2)\n        self.iy0 = self.cy - floor(self.height / 2)\n        self.iy1 = self.cy + ceil(self.height / 2)\n\n    def crop(self, img):\n        return img[..., self.iy0:self.iy1, self.ix0:self.ix1]\n\n\ndef format_power(size):\n    power = 1e3\n    n = 0\n    power_labels = {0: '', 1: 'K', 2: 'M', 3: 'G', 4: 'T'}\n    while size > power:\n        size /= power\n        n += 1\n    return size, power_labels[n]\n\ndef plot_image(image, lognorm=False, cmap='gray', bbox=None, ticks=False, norm=True, savename=None, colorbar=False):\n    \"\"\"\n    Plot an image\n    :param image: The image to plot, as np array\n    :param lognorm: If true, apply log transform the normalize image\n    :param cmap: Colormap (defaul gray)\n    :param bbox: Optional bounding box to draw on image, as array with [[top corner x,y,w,h]]\n    :param ticks: Whether or not to draw axis ticks\n    :param norm: Normalize image?\n    :param savename: Optional save path\n    :param colorbar: Display color bar if true\n    \"\"\"\n    fig, ax = plt.subplots(1)\n    if lognorm:\n        image = np.log10(image)\n        cmap='viridis'\n    if norm:\n        image = cv.normalize(image, None, 0, 1.0, cv.NORM_MINMAX)\n    ims = ax.imshow(image, cmap=cmap)\n    if bbox is not None:\n        w,h = bbox[2], bbox[3]\n        rect = patches.Rectangle((bbox[0:2]), w, h, linewidth=1, edgecolor='r', facecolor='none')\n        ax.add_patch(rect)\n    if colorbar:\n        fig.colorbar(ims)\n    if not ticks:\n        plt.axis('off')\n    if savename is not None:\n        plt.savefig(savename)\n    plt.show()\n\ndef plot_image_grid(images, grid_shape=None, lognorm=False,\n        cmap='gray', bbox=None, norm=True, savename=None,\n        colorbar=False):\n    \"\"\"\n    Given a list of images, stitches them into a grid and displays/saves the grid\n    @param images List of images\n    @param grid_shape Shape of the grid\n    @param lognorm Logarithmic normalise the image\n    @param cmap Color map to use\n    @param bbox Draw a bounding box on the image\n    @param norm If True, normalise the image\n    @param savename If set, save the image to that path\n    @param colorbar If true, plot the colorbar\n    \"\"\"\n    if grid_shape is None:\n        grid_shape = [1, len(images)]\n\n    col = []\n    img_idx = 0\n    for xc in range(grid_shape[0]):\n        row = []\n        for yc in range(grid_shape[1]):\n            image = images[img_idx]\n            if lognorm:\n                image = np.log10(image)\n                cmap='viridis'\n            if norm:\n                image = cv.normalize(image, None, 0, 1.0, cv.NORM_MINMAX)\n            row.append(image)\n            img_idx += 1\n        col.append(np.concatenate(row, axis=1))\n    comp_img = np.concatenate(col, axis=0)\n    if savename is None:\n        plot_image(comp_img, norm=False, colorbar=colorbar, cmap=cmap)\n    else:\n        save_image(comp_img, fname=savename, colorbar=colorbar, cmap=cmap)\n\ndef save_image(image, fname=None, lognorm=False, cmap='gray', bbox=None, colorbar=False):\n    fname = \"/tmp/img.png\" if fname is None else fname\n    fig, ax = plt.subplots(1)\n    if lognorm:\n        image = np.log10(image)\n        cmap='viridis'\n    image = cv.normalize(image, None, 0, 1.0, cv.NORM_MINMAX)\n    ims = ax.imshow(image, cmap=cmap)\n    if bbox is not None:\n        w = bbox[1][0]-bbox[0][0]\n        h = bbox[1][1]-bbox[0][1]\n        rect = patches.Rectangle((bbox[0]), w, h, linewidth=1, edgecolor='r', facecolor='none')\n        ax.add_patch(rect)\n    if colorbar:\n        fig.colorbar(ims)\n    plt.savefig(fname, dpi=150)\n    plt.close()\n\ndef flow2bgr_np(disp_x, disp_y, max_magnitude=None):\n    \"\"\"\n    Convert an optic flow tensor to an RGB color map for visualization\n    Code adapted from: https://github.com/ClementPinard/FlowNetPytorch/blob/master/main.py#L339\n    @param disp_x A [H x W] NumPy array containing the X displacement\n    @param disp_y A [H x W] NumPy array containing the Y displacement\n    @returns A [H x W x 3] NumPy array containing a color-coded representation of the flow [0, 255]\n    \"\"\"\n    assert(disp_x.shape == disp_y.shape)\n    H, W = disp_x.shape\n\n    # X, Y = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W))\n\n    # flow_x = (X - disp_x) * float(W) / 2\n    # flow_y = (Y - disp_y) * float(H) / 2\n    # magnitude, angle = cv.cartToPolar(flow_x, flow_y)\n    # magnitude, angle = cv.cartToPolar(disp_x, disp_y)\n\n    # follow alex zhu color convention https://github.com/daniilidis-group/EV-FlowNet\n\n    flows = np.stack((disp_x, disp_y), axis=2)\n    magnitude = np.linalg.norm(flows, axis=2)\n\n    angle = np.arctan2(disp_y, disp_x)\n    angle += np.pi\n    angle *= 180. / np.pi / 2.\n    angle = angle.astype(np.uint8)\n\n    if max_magnitude is None:\n        v = np.zeros(magnitude.shape, dtype=np.uint8)\n        cv.normalize(src=magnitude, dst=v, alpha=0, beta=255, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)\n    else:\n        v = np.clip(255.0 * magnitude / max_magnitude, 0, 255)\n        v = v.astype(np.uint8)\n\n    hsv = np.zeros((H, W, 3), dtype=np.uint8)\n    hsv[..., 1] = 255\n    hsv[..., 0] = angle\n    hsv[..., 2] = v\n    bgr = cv.cvtColor(hsv, cv.COLOR_HSV2BGR)\n\n    return bgr\n"
  },
  {
    "path": "lib/visualization/__init__.py",
    "content": "# __init__.py\nfrom . import draw_event_stream\n"
  },
  {
    "path": "lib/visualization/draw_event_stream.py",
    "content": "import numpy as np\nimport numpy.lib.recfunctions as nlr\nimport cv2 as cv\nfrom skimage.measure import block_reduce\nimport os\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom ..representations.image import events_to_image\nfrom ..representations.voxel_grid import events_to_voxel\nfrom ..util.event_util import clip_events_to_bounds\nfrom .visualization_utils import *\nfrom tqdm import tqdm\n\ndef plot_events_sliding(xs, ys, ts, ps, args, frames=[], frame_ts=[]):\n    \"\"\"\n    Plot the given events in a sliding window fashion to generate a video\n    @param xs x component of events\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @param args Arguments for the rendering (see args list\n        for 'plot_events' function)\n    @param frames List of image frames\n    @param frame_ts List of the image timestamps\n    @returns None\n    \"\"\"\n    dt, sdt = args.w_width, args.sw_width\n    if dt is None:\n        dt = (ts[-1]-ts[0])/10\n        sdt = dt/10\n        print(\"Using dt={}, sdt={}\".format(dt, sdt))\n\n    if len(frames) > 0:\n        has_frames = True\n        sensor_size = frames[0].shape\n        frame_ts = frame_ts[:,1] if len(frame_ts.shape) == 2 else frame_ts\n    else:\n        has_frames = False\n        sensor_size = [max(ys), max(xs)]\n\n    n_frames = len(np.arange(ts[0], ts[-1]-dt, sdt))\n    for i, t0 in enumerate(tqdm(np.arange(ts[0], ts[-1]-dt, sdt))):\n        te = t0+dt\n        eidx0 = np.searchsorted(ts, t0)\n        eidx1 = np.searchsorted(ts, te)\n        wxs, wys, wts, wps = xs[eidx0:eidx1], ys[eidx0:eidx1], ts[eidx0:eidx1], ps[eidx0:eidx1],\n\n        wframes, wframe_ts = [], []\n        if has_frames:\n            fidx0 = np.searchsorted(frame_ts, t0)\n            fidx1 = np.searchsorted(frame_ts, te)\n            wframes = [frames[fidx0]]\n            wframe_ts = [wts[0]]\n\n        save_path = os.path.join(args.output_path, \"frame_{:010d}.jpg\".format(i))\n\n        perc = i/n_frames\n        min_p, max_p = 0.2, 0.7\n        elev, azim = args.elev, args.azim\n        max_elev, max_azim = 10, 45\n        if perc > min_p and perc < max_p:\n            p_way = (perc-min_p)/(max_p-min_p)\n            elev = elev + (max_elev*p_way)\n            azim = azim - (max_azim*p_way)\n        elif perc >= max_p:\n            elev, azim = max_elev, max_azim\n\n        plot_events(wxs, wys, wts, wps, save_path=save_path, num_show=args.num_show, event_size=args.event_size,\n                imgs=wframes, img_ts=wframe_ts, show_events=not args.hide_events, azim=azim,\n                elev=elev, show_frames=not args.hide_frames, crop=args.crop, compress_front=args.compress_front,\n                invert=args.invert, num_compress=args.num_compress, show_plot=args.show_plot, img_size=sensor_size,\n                show_axes=args.show_axes, stride=args.stride)\n\ndef plot_voxel_grid(xs, ys, ts, ps, bins=5, frames=[], frame_ts=[],\n        sensor_size=None, crop=None, elev=0, azim=45, show_axes=False):\n    \"\"\"\n    @param xs x component of events\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @param bins The number of bins to have in the voxel grid\n    @param frames The list of image frames\n    @param frame_ts The list of image timestamps\n    @param sensor_size The size of the event sensor resolution\n    @param crop Cropping parameters for the voxel grid (no crop if None)\n    @param elev The elevation of the plot\n    @param azim The azimuth of the plot\n    @param show_axes Show the axes of the plot\n    @returns None\n    \"\"\"\n    if sensor_size is None:\n        sensor_size = [np.max(ys)+1, np.max(xs)+1] if len(frames)==0 else frames[0].shape\n    if crop is not None:\n        xs, ys, ts, ps = clip_events_to_bounds(xs, ys, ts, ps, crop)\n        sensor_size = crop_to_size(crop)\n        xs, ys = xs-crop[2], ys-crop[0]\n    num = 10000\n    xs, ys, ts, ps = xs[0:num], ys[0:num], ts[0:num], ps[0:num]\n    if len(xs) == 0:\n        return\n    voxels = events_to_voxel(xs, ys, ts, ps, bins, sensor_size=sensor_size)\n    voxels = block_reduce(voxels, block_size=(1,10,10), func=np.mean, cval=0)\n    dimdiff = voxels.shape[1]-voxels.shape[0]\n    filler = np.zeros((dimdiff, *voxels.shape[1:]))\n    voxels = np.concatenate((filler, voxels), axis=0)\n    voxels = voxels.transpose(0,2,1)\n\n    pltvoxels = voxels != 0\n    pvp, nvp = voxels > 0, voxels < 0\n    pvox, nvox = voxels*np.where(voxels > 0, 1, 0), voxels*np.where(voxels < 0, 1, 0)\n    pvox, nvox = (pvox/np.max(pvox))*0.5+0.5, (np.abs(nvox)/np.max(np.abs(nvox)))*0.5+0.5\n    zeros = np.zeros_like(voxels)\n\n    colors = np.empty(voxels.shape, dtype=object)\n\n    redvals = np.stack((pvox, zeros, pvox-0.5), axis=3)\n    redvals = nlr.unstructured_to_structured(redvals).astype('O')\n\n    bluvals = np.stack((nvox-0.5, zeros, nvox), axis=3)\n    bluvals = nlr.unstructured_to_structured(bluvals).astype('O')\n\n    colors[pvp] = redvals[pvp]\n    colors[nvp] = bluvals[nvp]\n\n    fig = plt.figure()\n    ax = fig.gca(projection='3d')\n    ax.voxels(pltvoxels, facecolors=colors, edgecolor='k')\n    ax.view_init(elev=elev, azim=azim)\n\n    ax.grid(False)\n    # Hide panes\n    ax.xaxis.pane.fill = False\n    ax.yaxis.pane.fill = False\n    ax.zaxis.pane.fill = False\n    if not show_axes:\n        # Hide spines\n        ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.set_frame_on(False)\n    # Hide xy axes\n    ax.set_xticks([])\n    ax.set_yticks([])\n    ax.set_zticks([])\n\n    ax.xaxis.set_visible(False)\n    ax.axes.get_yaxis().set_visible(False)\n\n    plt.show()\n\ndef plot_events(xs, ys, ts, ps, save_path=None, num_compress='auto', num_show=1000,\n        event_size=2, elev=0, azim=45, imgs=[], img_ts=[], show_events=True,\n        show_frames=True, show_plot=False, crop=None, compress_front=False,\n        marker='.', stride = 1, invert=False, img_size=None, show_axes=False):\n    \"\"\"\n    Given events, plot these in a spatiotemporal volume.\n    @param xs x coords of events\n    @param ys y coords of events\n    @param ts t coords of events\n    @param ps p coords of events\n    @param save_path If set, will save plot to here\n    @param num_compress Takes num_compress events from the beginning of the\n        sequence and draws them in the plot at time $t=0$ in black\n    @param compress_front If True, display the compressed events in black at the\n        front of the spatiotemporal volume rather than the back\n    @param num_show Sets the number of events to plot. If set to -1\n        will plot all of the events (can be potentially expensive)\n    @param event_size Sets the size of the plotted events\n    @param elev Sets the elevation of the plot\n    @param azim Sets the azimuth of the plot\n    @param imgs A list of images to draw into the spatiotemporal volume\n    @param img_ts A list of the position on the temporal axis where each\n        image from 'imgs' is to be placed (the timestamp of the images, usually)\n    @param show_events If False, will not plot the events (only images)\n    @param show_plot If True, display the plot in a matplotlib window as\n        well as saving to disk\n    @param crop A list of length 4 that sets the crop of the plot (must\n        be in the format [top_left_y, top_left_x, height, width]\n    @param marker Which marker should be used to display the events (default\n        is '.', which results in points, but circles 'o' or crosses 'x' are\n        among many other possible options)\n    @param stride Determines the pixel stride of the image rendering\n        (1=full resolution, but can be quite resource intensive)\n    @param invert Inverts the color scheme for black backgrounds\n    @param img_size The size of the sensor resolution. Inferred if empty.\n    @param show_axes If True, draw axes onto the plot.\n    @returns None\n    \"\"\"\n    #Crop events\n    if img_size is None:\n        img_size = [max(ys), max(xs)] if len(imgs)==0 else imgs[0].shape[0:2]\n        print(\"Inferred image size = {}\".format(img_size))\n    crop = [0, img_size[0], 0, img_size[1]] if crop is None else crop\n    xs, ys, ts, ps = clip_events_to_bounds(xs, ys, ts, ps, crop, set_zero=False)\n    xs, ys = xs-crop[2], ys-crop[0]\n\n    #Defaults and range checks\n    num_show = len(xs) if num_show == -1 else num_show\n    skip = max(len(xs)//num_show, 1)\n    num_compress = len(xs) if num_compress == -1 else num_compress\n    num_compress = min(img_size[0]*img_size[1]*0.5, len(xs)) if num_compress=='auto' else num_compress\n    xs, ys, ts, ps = xs[::skip], ys[::skip], ts[::skip], ps[::skip]\n\n    #Prepare the plot, set colors\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d', proj_type = 'ortho')\n    colors = ['r' if p>0 else ('#00DAFF' if invert else 'b') for p in ps]\n\n    #Plot images\n    if len(imgs)>0 and show_frames:\n        for imgidx, (img, img_ts) in enumerate(zip(imgs, img_ts)):\n            img = img[crop[0]:crop[1], crop[2]:crop[3]]\n            if len(img.shape)==2:\n                img = np.stack((img, img, img), axis=2)\n            if num_compress > 0:\n                events_img = events_to_image(xs[0:num_compress], ys[0:num_compress],\n                        np.ones(num_compress), sensor_size=img.shape[0:2])\n                events_img[events_img>0] = 1\n                img[:,:,1]+=events_img[:,:]\n                img = np.clip(img, 0, 1)\n            x, y = np.ogrid[0:img.shape[0], 0:img.shape[1]]\n            event_idx = np.searchsorted(ts, img_ts)\n\n            ax.scatter(xs[0:event_idx], ts[0:event_idx], ys[0:event_idx], zdir='z',\n                    c=colors[0:event_idx], facecolors=colors[0:event_idx],\n                    s=np.ones(xs.shape)*event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n\n            ax.plot_surface(y, img_ts, x, rstride=stride, cstride=stride, facecolors=img, alpha=1)\n\n            ax.scatter(xs[event_idx:-1], ts[event_idx:-1], ys[event_idx:-1], zdir='z',\n                    c=colors[event_idx:-1], facecolors=colors[event_idx:-1],\n                    s=np.ones(xs.shape)*event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n\n    elif num_compress > 0:\n        # Plot events\n        ax.scatter(xs[::skip], ts[::skip], ys[::skip], zdir='z', c=colors[::skip], facecolors=colors[::skip],\n                s=np.ones(xs.shape)*event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n        num_compress = min(num_compress, len(xs))\n        if not compress_front:\n            ax.scatter(xs[0:num_compress], np.ones(num_compress)*ts[0], ys[0:num_compress],\n                    marker=marker, zdir='z', c='w' if invert else 'k', s=np.ones(num_compress)*event_size)\n        else:\n            ax.scatter(xs[-num_compress-1:-1], np.ones(num_compress)*ts[-1], ys[-num_compress-1:-1],\n                    marker=marker, zdir='z', c='w' if invert else 'k', s=np.ones(num_compress)*event_size)\n    else:\n        # Plot events\n        ax.scatter(xs, ts, ys,zdir='z', c=colors, facecolors=colors, s=np.ones(xs.shape)*event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n\n    ax.view_init(elev=elev, azim=azim)\n    ax.grid(False)\n    # Hide panes\n    ax.xaxis.pane.fill = False\n    ax.yaxis.pane.fill = False\n    ax.zaxis.pane.fill = False\n    if not show_axes:\n        # Hide spines\n        ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.set_frame_on(False)\n    # Hide xy axes\n    ax.set_xticks([])\n    ax.set_yticks([])\n    ax.set_zticks([])\n    # Flush axes\n    ax.set_xlim3d(0, img_size[1])\n    ax.set_ylim3d(ts[0], ts[-1])\n    ax.set_zlim3d(0,img_size[0])\n\n    if show_plot:\n        plt.show()\n    if save_path is not None:\n        ensure_dir(save_path)\n        plt.savefig(save_path, transparent=True, dpi=600, bbox_inches = 'tight')\n    plt.close()\n\ndef plot_between_frames(xs, ys, ts, ps, frames, frame_event_idx, args, plttype='voxel'):\n    \"\"\"\n    Plot events between frames for an entire sequence to form a video\n    @param xs x component of events\n    @param ys y component of events\n    @param ts t component of events\n    @param ps p component of events\n    @param frames List of the frames\n    @param frame_event_idx The event index for each frame\n    @param args Arguments for the rendering function 'plot_events'\n    @param plttype Whether to plot 'voxel' or 'events'\n    @return None\n    \"\"\"\n    args.crop = None if args.crop is None else parse_crop(args.crop)\n    prev_idx = 0\n    for i in range(0, len(frames), args.skip_frames):\n        if args.hide_skipped:\n            frame = [frames[i]]\n            frame_indices = frame_event_idx[i][np.newaxis, ...]\n        else:\n            frame = frames[i:i+args.skip_frames]\n            frame_indices = frame_event_idx[i:i+args.skip_frames]\n        print(\"Processing frame {}\".format(i))\n        s, e = frame_indices[0,1], frame_indices[-1,0]\n        img_ts = []\n        for f_idx in frame_indices:\n            img_ts.append(ts[f_idx[1]])\n        fname = os.path.join(args.output_path, \"events_{:09d}.png\".format(i))\n        if plttype == 'voxel':\n            plot_voxel_grid(xs[s:e], ys[s:e], ts[s:e], ps[s:e], bins=args.num_bins, crop=args.crop,\n                    frames=frame, frame_ts=img_ts, elev=args.elev, azim=args.azim)\n        elif plttype == 'events':\n            plot_events(xs[s:e], ys[s:e], ts[s:e], ps[s:e], save_path=fname,\n                    num_show=args.num_show, event_size=args.event_size, imgs=frame,\n                    img_ts=img_ts, show_events=not args.hide_events, azim=args.azim,\n                    elev=args.elev, show_frames=not args.hide_frames, crop=args.crop,\n                    compress_front=args.compress_front, invert=args.invert,\n                    num_compress=args.num_compress, show_plot=args.show_plot, stride=args.stride)\n\n"
  },
  {
    "path": "lib/visualization/draw_event_stream_mayavi.py",
    "content": "from mayavi import mlab\nfrom mayavi.api import Engine\nimport numpy as np\nimport numpy.lib.recfunctions as nlr\nimport cv2 as cv\nfrom skimage.measure import block_reduce\nimport os\n#import matplotlib.pyplot as plt\n#from mpl_toolkits.mplot3d import Axes3D\n\nfrom ..representations.image import events_to_image\nfrom ..representations.voxel_grid import events_to_voxel\nfrom ..util.event_util import clip_events_to_bounds\nfrom ..visualization.visualization_utils import *\nfrom tqdm import tqdm\n\ndef plot_events_sliding(xs, ys, ts, ps, args, dt=None, sdt=None, frames=None, frame_ts=None, padding=True):\n\n    skip = max(len(xs)//args.num_show, 1)\n    xs, ys, ts, ps = xs[::skip], ys[::skip], ts[::skip], ps[::skip]\n    t0 = ts[0]\n    sx,sy, st, sp = [], [], [], []\n    if padding:\n        for i in np.arange(ts[0]-dt, ts[0], sdt):\n            sx.append(0)\n            sy.append(0)\n            st.append(i)\n            sp.append(0)\n        print(len(sx))\n        print(st)\n        print(ts)\n        xs = np.concatenate((np.array(sx), xs))\n        ys = np.concatenate((np.array(sy), ys))\n        ts = np.concatenate((np.array(st), ts))\n        ps = np.concatenate((np.array(sp), ps))\n        print(ts)\n\n        ts += -st[0]\n        frame_ts += -st[0]\n        t0 += -st[0]\n        print(ts)\n\n    f = mlab.figure(bgcolor=(1,1,1), size=(1080, 720))\n    engine = mlab.get_engine()\n    scene = engine.scenes[0]\n    scene.scene.camera.position = [373.1207907160101, 5353.96218497846, 7350.065665045519]\n    scene.scene.camera.focal_point = [228.0033999234376, 37.75424682790012, 3421.439332472788]\n    scene.scene.camera.view_angle = 30.0\n    scene.scene.camera.view_up = [0.9997493712140433, -0.02027499237784438, -0.009493125997461629]\n    scene.scene.camera.clipping_range = [2400.251302762254, 11907.415293888362]\n    scene.scene.camera.compute_view_plane_normal()\n\n    print(\"ts from {} to {}, imgs from {} to {}\".format(ts[0], ts[-1], frame_ts[0], frame_ts[-1]))\n    frame_ts = np.array([t0]+list(frame_ts[0:-1]))\n    if dt is None:\n        dt = (ts[-1]-ts[0])/10\n        sdt = dt/10\n        print(\"Using dt={}, sdt={}\".format(dt, sdt))\n    if frames is not None:\n        sensor_size = frames[0].shape\n    else:\n        sensor_size = [max(ys), max(xs)]\n\n    if len(frame_ts.shape) == 2:\n        frame_ts = frame_ts[:,1]\n    for i, t0 in enumerate(tqdm(np.arange(ts[0], ts[-1]-dt, sdt))):\n        te = t0+dt\n        eidx0 = np.searchsorted(ts, t0)\n        eidx1 = np.searchsorted(ts, te)\n        fidx0 = np.searchsorted(frame_ts, t0)\n        fidx1 = np.searchsorted(frame_ts, te)\n        #print(\"{}:{} = {}\".format(frame_ts[fidx0], ts[eidx0], fidx0))\n\n        wxs, wys, wts, wps = xs[eidx0:eidx1], ys[eidx0:eidx1], ts[eidx0:eidx1], ps[eidx0:eidx1],\n        if fidx0 == fidx1:\n            wframes=[]\n            wframe_ts=[]\n        else:\n            wframes = frames[fidx0:fidx1]\n            wframe_ts = frame_ts[fidx0:fidx1]\n\n        save_path = os.path.join(args.output_path, \"frame_{:010d}.jpg\".format(i))\n        plot_events(wxs, wys, wts, wps, save_path=save_path, num_show=-1, event_size=args.event_size,\n                imgs=wframes, img_ts=wframe_ts, show_events=not args.hide_events, azim=args.azim,\n                elev=args.elev, show_frames=not args.hide_frames, crop=args.crop, compress_front=args.compress_front,\n                invert=args.invert, num_compress=args.num_compress, show_plot=args.show_plot, img_size=sensor_size,\n                show_axes=args.show_axes, ts_scale=args.ts_scale)\n\n        if save_path is not None:\n            ensure_dir(save_path)\n            #mlab.savefig(save_path, figure=f, magnification=10)\n            #GUI().process_events()\n            #img = mlab.screenshot(figure=f, mode='rgba', antialiased=True)\n            #print(img.shape)\n            mlab.savefig(save_path, figure=f, magnification=8)\n\n        mlab.clf()\n\ndef plot_voxel_grid(xs, ys, ts, ps, bins=5, frames=[], frame_ts=[],\n        sensor_size=None, crop=None, elev=0, azim=45, show_axes=False):\n    if sensor_size is None:\n        sensor_size = [np.max(ys)+1, np.max(xs)+1] if len(frames)==0 else frames[0].shape\n    if crop is not None:\n        xs, ys, ts, ps = clip_events_to_bounds(xs, ys, ts, ps, crop)\n        sensor_size = crop_to_size(crop)\n        xs, ys = xs-crop[2], ys-crop[0]\n    num = 10000\n    xs, ys, ts, ps = xs[0:num], ys[0:num], ts[0:num], ps[0:num]\n    if len(xs) == 0:\n        return\n    voxels = events_to_voxel(xs, ys, ts, ps, bins, sensor_size=sensor_size)\n    voxels = block_reduce(voxels, block_size=(1,10,10), func=np.mean, cval=0)\n    dimdiff = voxels.shape[1]-voxels.shape[0]\n    filler = np.zeros((dimdiff, *voxels.shape[1:]))\n    voxels = np.concatenate((filler, voxels), axis=0)\n    voxels = voxels.transpose(0,2,1)\n\n    pltvoxels = voxels != 0\n    pvp, nvp = voxels > 0, voxels < 0\n    pvox, nvox = voxels*np.where(voxels > 0, 1, 0), voxels*np.where(voxels < 0, 1, 0)\n    pvox, nvox = (pvox/np.max(pvox))*0.5+0.5, (np.abs(nvox)/np.max(np.abs(nvox)))*0.5+0.5\n    zeros = np.zeros_like(voxels)\n\n    colors = np.empty(voxels.shape, dtype=object)\n\n    redvals = np.stack((pvox, zeros, pvox-0.5), axis=3)\n    redvals = nlr.unstructured_to_structured(redvals).astype('O')\n\n    bluvals = np.stack((nvox-0.5, zeros, nvox), axis=3)\n    bluvals = nlr.unstructured_to_structured(bluvals).astype('O')\n\n    colors[pvp] = redvals[pvp]\n    colors[nvp] = bluvals[nvp]\n\n    fig = plt.figure()\n    ax = fig.gca(projection='3d')\n    ax.voxels(pltvoxels, facecolors=colors, edgecolor='k')\n    ax.view_init(elev=elev, azim=azim)\n\n    ax.grid(False)\n    # Hide panes\n    ax.xaxis.pane.fill = False\n    ax.yaxis.pane.fill = False\n    ax.zaxis.pane.fill = False\n    if not show_axes:\n        # Hide spines\n        ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.set_frame_on(False)\n    # Hide xy axes\n    ax.set_xticks([])\n    ax.set_yticks([])\n    ax.set_zticks([])\n\n    ax.xaxis.set_visible(False)\n    ax.axes.get_yaxis().set_visible(False)\n\n    plt.show()\n\ndef plot_events(xs, ys, ts, ps, save_path=None, num_compress='auto', num_show=1000,\n        event_size=2, elev=0, azim=45, imgs=[], img_ts=[], show_events=True,\n        show_frames=True, show_plot=False, crop=None, compress_front=False,\n        marker='.', stride = 1, invert=False, img_size=None, show_axes=False,\n        ts_scale = 100000):\n    \"\"\"\n    Given events, plot these in a spatiotemporal volume.\n    :param: xs x coords of events\n    :param: ys y coords of events\n    :param: ts t coords of events\n    :param: ps p coords of events\n    :param: save_path if set, will save plot to here\n    :param: num_compress will take this number of events from the end\n        and create an event image from these. This event image will\n        be displayed at the end of the spatiotemporal volume\n    :param: num_show sets the number of events to plot. If set to -1\n        will plot all of the events (can be potentially expensive)\n    :param: event_size sets the size of the plotted events\n    :param: elev sets the elevation of the plot\n    :param: azim sets the azimuth of the plot\n    :param: imgs a list of images to draw into the spatiotemporal volume\n    :param: img_ts a list of the position on the temporal axis where each\n        image from 'imgs' is to be placed (the timestamp of the images, usually)\n    :param: show_events if False, will not plot the events (only images)\n    :param: crop a list of length 4 that sets the crop of the plot (must\n        be in the format [top_left_y, top_left_x, height, width]\n    \"\"\"\n    print(\"plot all\")\n    #Crop events\n    if img_size is None:\n        img_size = [max(ys), max(ps)] if len(imgs)==0 else imgs[0].shape[0:2]\n    crop = [0, img_size[0], 0, img_size[1]] if crop is None else crop\n    xs, ys, ts, ps = clip_events_to_bounds(xs, ys, ts, ps, crop, set_zero=False)\n    xs, ys = xs-crop[2], ys-crop[0]\n\n    #Defaults and range checks\n    num_show = len(xs) if num_show == -1 else num_show\n    skip = max(len(xs)//num_show, 1)\n    print(\"Has {} events, show only {}, skip = {}\".format(len(xs), num_show, skip))\n    num_compress = len(xs) if num_compress == -1 else num_compress\n    num_compress = min(img_size[0]*img_size[1]*0.5, len(xs)) if num_compress=='auto' else num_compress\n    xs, ys, ts, ps = xs[::skip], ys[::skip], ts[::skip], ps[::skip]\n\n    t0 = ts[0]\n    ts = ts-t0\n\n    #mlab.options.offscreen = True\n\n    #Plot images\n    if len(imgs)>0 and show_frames:\n        for imgidx, (img, img_t) in enumerate(zip(imgs, img_ts)):\n            img = img[crop[0]:crop[1], crop[2]:crop[3]]\n\n            mlab.imshow(img, colormap='gray', extent=[0, img.shape[0], 0, img.shape[1], (img_t-t0)*ts_scale, (img_t-t0)*ts_scale+0.01], opacity=1.0, transparent=False)\n\n    colors = [0 if p>0 else 240 for p in ps]\n    ones = np.array([0 if p==0 else 1 for p in ps])\n    p3d = mlab.quiver3d(ys, xs, ts*ts_scale, ones, ones, ones, scalars=colors, mode='sphere', scale_factor=event_size)\n    p3d.glyph.color_mode = 'color_by_scalar'\n    p3d.module_manager.scalar_lut_manager.lut.table = colors\n    #mlab.draw()\n\n    #mlab.view(84.5, 54, 5400, np.array([ 187,  175, 2276]), roll=95)\n\n    if show_plot:\n        mlab.show()\n    #if save_path is not None:\n    #    ensure_dir(save_path)\n    #    print(\"Saving to {}\".format(save_path))\n    #    imgmap = mlab.screenshot(mode='rgba', antialiased=True)\n    #    print(imgmap.shape)\n    #    cv.imwrite(save_path, imgmap)\n\ndef plot_between_frames(xs, ys, ts, ps, frames, frame_event_idx, args, plttype='voxel'):\n    args.crop = None if args.crop is None else parse_crop(args.crop)\n    prev_idx = 0\n    for i in range(0, len(frames), args.skip_frames):\n        if i != 3:\n            continue\n        if args.hide_skipped:\n            frame = [frames[i]]\n            frame_indices = frame_event_idx[i][np.newaxis, ...]\n        else:\n            frame = frames[i:i+args.skip_frames]\n            frame_indices = frame_event_idx[i:i+args.skip_frames]\n        print(\"Processing frame {}\".format(i))\n        s, e = frame_indices[0,1], frame_indices[-1,0]\n        img_ts = []\n        for f_idx in frame_indices:\n            img_ts.append(ts[f_idx[1]])\n        fname = os.path.join(args.output_path, \"events_{:09d}.png\".format(i))\n        if plttype == 'voxel':\n            plot_voxel_grid(xs[s:e], ys[s:e], ts[s:e], ps[s:e], bins=args.num_bins, crop=args.crop,\n                    frames=frame, frame_ts=img_ts, elev=args.elev, azim=args.azim)\n        elif plttype == 'events':\n            print(\"plot events\")\n            plot_events(xs[s:e], ys[s:e], ts[s:e], ps[s:e], save_path=fname,\n                    num_show=args.num_show, event_size=args.event_size, imgs=frame,\n                    img_ts=img_ts, show_events=not args.hide_events, azim=args.azim,\n                    elev=args.elev, show_frames=not args.hide_frames, crop=args.crop,\n                    compress_front=args.compress_front, invert=args.invert,\n                    num_compress=args.num_compress, show_plot=args.show_plot, stride=args.stride)\n"
  },
  {
    "path": "lib/visualization/draw_flow.py",
    "content": "import numpy as np\nimport torch\nimport cv2 as cv\nimport os\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom ..util.event_util import clip_events_to_bounds\nfrom ..util.util import flow2bgr_np\nfrom ..transforms.optic_flow import warp_events_flow_torch\nfrom ..representations.image import events_to_image_torch\nfrom .visualization_utils import *\n\ndef motion_compensate(xs, ys, ts, ps, flow, fname=\"/tmp/img.png\", crop=None):\n    xs, ys, ts, ps, flow = torch.from_numpy(xs).type(torch.float32), torch.from_numpy(ys).type(torch.float32),\\\n        torch.from_numpy(ts).type(torch.float32), torch.from_numpy(ps).type(torch.float32), torch.from_numpy(flow).type(torch.float32)\n    xw, yw = warp_events_flow_torch(xs, ys, ts, ps, flow)\n    img_size = list(flow.shape)\n    img_size.remove(2)\n    img = events_to_image_torch(xw, yw, ps, sensor_size=img_size, interpolation='bilinear')\n    img = np.flip(np.flip(img.numpy(), axis=0), axis=1)\n    img = cv.normalize(img, None, 0, 255, cv.NORM_MINMAX)\n    if crop is not None:\n        img = img[crop[0]:crop[1], crop[2]: crop[3]]\n    cv.imwrite(fname, img)\n\ndef plot_flow_and_events(xs, ys, ts, ps, flow, save_path=None,\n        num_show=1000, event_size=2, elev=0, azim=45, show_events=True,\n        show_frames=True, show_plot=False, crop=None,\n        marker='.', stride = 20, img_size=None, show_axes=False,\n        invert=False):\n\n    print(event_size)\n    #Crop events\n    if img_size is None:\n        img_size = [max(ys), max(xs)] if len(flow)==0 else flow[0].shape[1:3]\n    crop = [0, img_size[0], 0, img_size[1]] if crop is None else crop\n    xs, ys = img_size[1]-xs, img_size[0]-ys\n    xs, ys, ts, ps = clip_events_to_bounds(xs, ys, ts, ps, crop, set_zero=False)\n    xs -= crop[2]\n    ys -= crop[0]\n    img_size = [crop[1]-crop[0], crop[3]-crop[2]]\n    xs, ys = img_size[1]-xs, img_size[0]-ys\n    #flow[0] = flow[0][:, crop[0]:crop[1], crop[2]:crop[3]]\n    flow = flow[0][:, crop[0]:crop[1], crop[2]:crop[3]]\n    flow = np.flip(np.flip(flow, axis=1), axis=2)\n\n    #Defaults and range checks\n    num_show = len(xs) if num_show == -1 else num_show\n    skip = max(len(xs)//num_show, 1)\n    xs, ys, ts, ps = xs[::skip], ys[::skip], ts[::skip], ps[::skip]\n\n    #Prepare the plot, set colors\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d', proj_type = 'ortho')\n    colors = ['r' if p>0 else ('#00DAFF' if invert else 'b') for p in ps]\n\n    # Plot quivers\n    f_reshape = flow.transpose(1,2,0)\n    print(f_reshape.shape)\n    t_w = ts[-1]-ts[0]\n    coords, flow_vals, magnitudes = [], [], []\n    s = 20\n    offset = 0\n    thresh = 0\n    print(img_size)\n    for x in np.linspace(offset, img_size[1]-1-offset, s):\n        for y in np.linspace(offset, img_size[0]-1-offset, s):\n            ix, iy = int(x), int(y)\n            flow_v = np.array([f_reshape[iy,ix,0]*t_w, f_reshape[iy,ix,1]*t_w, t_w])\n            mag = np.linalg.norm(flow_v)\n            if mag >= thresh:\n                flow_vals.append(flow_v)\n                magnitudes.append(mag)\n                coords.append([x,y])\n    magnitudes = np.array(magnitudes)\n    max_flow = np.percentile(magnitudes, 99)\n\n    x,y,z,u,v,w = [],[],[],[],[],[]\n    idx = 0\n    for coord, flow_vec, mag in zip(coords, flow_vals, magnitudes):\n        #q_start = [coord[0], ts[0], coord[1]]\n        rel_len = mag/max_flow\n        flow_vec = flow_vec*rel_len\n        x.append(coord[0])\n        y.append(0.065)\n        z.append(coord[1])\n        u.append(max(1, flow_vec[0]))\n        v.append(flow_vec[2])\n        w.append(max(1, flow_vec[1]))\n    ax.quiver(x,y,z,u,v,w,color='c', arrow_length_ratio=0, alpha=0.8)\n\n    img = flow2bgr_np(flow[0, :], flow[1, :])\n    img = img/255\n\n    x, y = np.ogrid[0:img.shape[0], 0:img.shape[1]]\n    ax.plot_surface(y, ts[0], x, rstride=stride, cstride=stride, facecolors=img, alpha=1)\n\n    ax.scatter(xs, ts, ys, zdir='z', c=colors, facecolors=colors,\n            s=np.ones(xs.shape)*event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n\n    ax.view_init(elev=elev, azim=azim)\n\n    ax.grid(False)\n    # Hide panes\n    ax.xaxis.pane.fill = False\n    ax.yaxis.pane.fill = False\n    ax.zaxis.pane.fill = False\n    if not show_axes:\n        # Hide spines\n        ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n        ax.set_frame_on(False)\n    # Hide xy axes\n    ax.set_xticks([])\n    ax.set_yticks([])\n    ax.set_zticks([])\n\n    ax.xaxis.set_visible(False)\n    ax.axes.get_yaxis().set_visible(False)\n\n    plt.show()\n\n\n\ndef plot_between_frames(xs, ys, ts, ps, flows, flow_imgs, flow_ts, args, plttype='voxel'):\n    args.crop = None if args.crop is None else parse_crop(args.crop)\n\n    flow_event_idx = get_frame_indices(ts, flow_ts)\n    if len(flow_ts.shape) == 1:\n        flow_ts = frame_stamps_to_start_end(flow_ts)\n        flow_event_idx = frame_stamps_to_start_end(flow_event_idx)\n    prev_idx = 0\n    for i in range(0, len(flows), args.skip_frames):\n        if i != 12:\n            continue\n        flow = flows[i:i+args.skip_frames]\n        flow_indices = flow_event_idx[i:i+args.skip_frames]\n        s, e = flow_indices[-1,0], flow_indices[0,1]\n\n        motion_compensate(xs[s:e], ys[s:e], ts[s:e], ps[s:e], -np.flip(np.flip(flow[0], axis=1), axis=2).copy(), fname=\"/tmp/comp.png\", crop=args.crop)\n        motion_compensate(xs[s:e], ys[s:e], ts[s:e], ps[s:e], np.zeros_like(flow[0]), fname=\"/tmp/zero.png\", crop=args.crop)\n        e = np.searchsorted(ts, ts[s]+0.02)\n        flow_ts = []\n        for f_idx in flow_indices:\n            flow_ts.append(ts[f_idx[1]])\n        fname = os.path.join(args.output_path, \"events_{:09d}.png\".format(i))\n\n        print(\"se: {}, {}\".format(s, e))\n        plot_flow_and_events(xs[s:e], ys[s:e], ts[s:e], ps[s:e], flow,\n        num_show=args.num_show, event_size=args.event_size, elev=args.elev,\n        azim=args.azim, show_events=not args.hide_events,\n        show_frames=not args.hide_frames, show_plot=args.show_plot, crop=args.crop,\n        stride=args.stride, show_axes=args.show_axes, invert=args.invert)\n"
  },
  {
    "path": "lib/visualization/utils/draw_plane.py",
    "content": "import numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import axes3d, Axes3D #<-- Note the capitalization!\n\n# z = ax + by + d\n\nx_min = 0\nx_max = 100\ny_min = 0\ny_max = 100\n\na = 0\nb = 10\nd = 10\n\nnum_points = 5000\npoint_size = 10\n\npoints = np.random.rand(num_points, 3)\npoints[:, 0] = points[:, 0]*(x_max-x_min) + x_min\npoints[:, 1] = points[:, 1]*(y_max-y_min) + y_min\npoints[:, 2] = points[:, 0]*a + points[:, 1]*b + d\n\nmean = 0\nstdev = 10\nnoise = np.random.normal(mean, stdev, num_points)\npoints[:, 2] = points[:, 2] + noise\n\nprint(points)\nnew_points = points[np.where(points[:, 1] < 50)]\nprint(new_points)\n\nfor x in range(y_min, y_max, 1):\n\n    fig = plt.figure()\n    ax = Axes3D(fig)\n    ax.set_xlabel('x')\n    ax.set_ylabel('y')\n    ax.set_zlabel('time')\n    ax.set_ylim([0, 100])\n\n    new_points = points[np.where(points[:, 1] < x)]\n    ax.scatter(new_points[:, 0], new_points[:, 1], new_points[:, 2], s=point_size, c=(new_points[:, 2]),\n               edgecolors='none', cmap='plasma')\n    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=0, c=(points[:, 2]),\n               edgecolors='none', cmap='plasma')\n\n    point = np.array([0, 1, 0])\n    normal = np.array([0, 0, 1])\n\n    # a plane is a*x+b*y+c*z+d=0\n    # [a,b,c] is the normal. Thus, we have to calculate\n    # d and we're set\n    d = -point.dot(normal)\n\n    # create x,y\n    xx, yy = np.meshgrid(range(100), range(10))\n    yy = yy + x - 10\n\n    # calculate corresponding z\n    z = (-normal[0] * xx - normal[1] * yy - d) * 1. / normal[2]\n    # plot the surface\n    # plt3d = plt.figure().gca(projection='3d')\n    ax.plot_surface(xx, yy, z, alpha=1)\n\n    save_name = (\"frame_\" + str(x) + \".png\")\n    fig.tight_layout()\n    fig.savefig(save_name, dpi=300, transparent=True)\n\n    # plt.show()\n    plt.close()"
  },
  {
    "path": "lib/visualization/utils/draw_plane_simple.py",
    "content": "import numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import axes3d, Axes3D #<-- Note the capitalization!\n\nfig = plt.figure()\nax = Axes3D(fig)\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('time')\n\n# z = ax + by + d\nx_min = 0\nx_max = 10\ny_min = 0\ny_max = 10\n\na = 0\nb = 10\nd = 10\n\nnum_points = 50\npoint_size = 20\n\npoints = np.random.rand(num_points, 3)\npoints[:, 0] = points[:, 0]*(x_max-x_min) + x_min\npoints[:, 1] = points[:, 1]*(y_max-y_min) + y_min\npoints[:, 2] = points[:, 0]*a + points[:, 1]*b + d\n\nmean = 0\nstdev = 10\nnoise = np.random.normal(mean, stdev, num_points)\npoints[:, 2] = points[:, 2] + noise\n\nax.scatter(points[:, 0], points[:, 1], points[:, 2], s=point_size, c=(points[:, 2]),\n               edgecolors='none', cmap='plasma')\n\n\n# create x,y\nxx, yy = np.meshgrid(range(10), range(10))\nyy = yy\n\n# calculate corresponding z\n# z = (-normal[0] * xx - normal[1] * yy - d) * 1. / normal[2]\nz = xx*a + yy*b + d\n# plot the surface\n# plt3d = plt.figure().gca(projection='3d')\nax.plot_surface(xx, yy, z, alpha=0.2)\n\nsave_name = (\"plane.png\")\nfig.tight_layout()\nfig.savefig(save_name, dpi=600, transparent=True)\nplt.close()"
  },
  {
    "path": "lib/visualization/visualization_utils.py",
    "content": "import numpy as np\nimport os\n\ndef frame_stamps_to_start_end(stamps):\n    ends = list(stamps[1:])\n    ends.append(ends[-1])\n    se_stamps = np.stack((stamps, np.array(ends)), axis=1)\n    return se_stamps\n\ndef get_frame_indices(ts, frame_ts):\n    indices = [np.searchsorted(ts, fts) for fts in frame_ts]\n    return np.array(indices)\n\ndef crop_to_size(crop):\n    return [crop[0]-crop[1], crop[2]-crop[3]]\n\ndef parse_crop(cropstr):\n    \"\"\"\n    Crop is provided as string, same as imagemagick:\n        size_x, size_y, offset_x, offset_y, eg 10x10+30+30 would cut a 10x10 square at 30,30\n    Output is the indices as would be used in a numpy array. In the example,\n    [30,40,30,40] (ie [miny, maxy, minx, maxx])\n\n    \"\"\"\n    split = cropstr.split(\"x\")\n    xsize = int(split[0])\n    split = split[1].split(\"+\")\n    ysize = int(split[0])\n    xoff = int(split[1])\n    yoff = int(split[2])\n    crop = [yoff, yoff+ysize, xoff, xoff+xsize]\n    return crop\n\ndef ensure_dir(file_path):\n    directory = os.path.dirname(file_path)\n    if not os.path.exists(directory):\n        print(f\"Creating {directory}\")\n        os.makedirs(directory)\n\n"
  },
  {
    "path": "lib/visualization/visualizers.py",
    "content": "import numpy as np\nimport numpy.lib.recfunctions as nlr\nimport cv2 as cv\nimport colorsys\nfrom skimage.measure import block_reduce\nimport os\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom ..representations.image import events_to_image, TimestampImage\nfrom ..representations.voxel_grid import events_to_voxel\nfrom ..util.event_util import clip_events_to_bounds\nfrom .visualization_utils import *\nfrom tqdm import tqdm\n\nclass Visualizer():\n\n    def __init__(self):\n        raise NotImplementedError\n\n    def plot_events(self, data, save_path, **kwargs):\n        raise NotImplementedError\n\n    @staticmethod\n    def unpackage_events(events):\n        return events[:,0].astype(int), events[:,1].astype(int), events[:,2], events[:,3]\n\nclass TimeStampImageVisualizer(Visualizer):\n\n    def __init__(self, sensor_size):\n        self.ts_img = TimestampImage(sensor_size)\n        self.sensor_size = sensor_size\n\n    def plot_events(self, data, save_path, **kwargs):\n        xs, ys, ts, ps = self.unpackage_events(data['events'])\n        self.ts_img.set_init(ts[0])\n        self.ts_img.add_events(xs, ys, ts, ps)\n        timestamp_image = self.ts_img.get_image()\n        fig = plt.figure()\n        plt.imshow(timestamp_image, cmap='viridis')\n        ensure_dir(save_path)\n        plt.savefig(save_path, transparent=True, dpi=600, bbox_inches = 'tight')\n        #plt.show()\n\nclass EventImageVisualizer(Visualizer):\n\n    def __init__(self, sensor_size):\n        self.sensor_size = sensor_size\n\n    def plot_events(self, data, save_path, **kwargs):\n        xs, ys, ts, ps = self.unpackage_events(data['events'])\n        img = events_to_image(xs.astype(int), ys.astype(int), ps, self.sensor_size, interpolation=None, padding=False)\n        mn, mx = np.min(img), np.max(img)\n        img = (img-mn)/(mx-mn)\n\n        fig = plt.figure()\n        plt.imshow(img, cmap='gray')\n        ensure_dir(save_path)\n        plt.savefig(save_path, transparent=True, dpi=600, bbox_inches = 'tight')\n        #plt.show()\n\n\nclass EventsVisualizer(Visualizer):\n\n    def __init__(self, sensor_size):\n        self.sensor_size = sensor_size\n\n    def plot_events(self, data, save_path,\n            num_compress='auto', num_show=1000,\n            event_size=2, elev=0, azim=45, show_events=True,\n            show_frames=True, show_plot=False, crop=None, compress_front=False,\n            marker='.', stride = 1, invert=False, show_axes=False, flip_x=False):\n        \"\"\"\n        Given events, plot these in a spatiotemporal volume.\n        :param: xs x coords of events\n        :param: ys y coords of events\n        :param: ts t coords of events\n        :param: ps p coords of events\n        :param: save_path if set, will save plot to here\n        :param: num_compress will take this number of events from the end\n            and create an event image from these. This event image will\n            be displayed at the end of the spatiotemporal volume\n        :param: num_show sets the number of events to plot. If set to -1\n            will plot all of the events (can be potentially expensive)\n        :param: event_size sets the size of the plotted events\n        :param: elev sets the elevation of the plot\n        :param: azim sets the azimuth of the plot\n        :param: imgs a list of images to draw into the spatiotemporal volume\n        :param: img_ts a list of the position on the temporal axis where each\n            image from 'imgs' is to be placed (the timestamp of the images, usually)\n        :param: show_events if False, will not plot the events (only images)\n        :param: crop a list of length 4 that sets the crop of the plot (must\n            be in the format [top_left_y, top_left_x, height, width]\n        \"\"\"\n        xs, ys, ts, ps = self.unpackage_events(data['events'])\n        imgs, img_ts = data['frame'], data['frame_ts']\n        if not (isinstance(imgs, list) or isinstance(imgs, tuple)):\n            imgs, img_ts = [imgs], [img_ts]\n\n        ys = self.sensor_size[0]-ys\n        xs = self.sensor_size[1]-xs if flip_x else xs\n        #Crop events\n        img_size = self.sensor_size\n        if img_size is None:\n            img_size = [max(ys), max(ps)] if len(imgs)==0 else imgs[0].shape[0:2]\n        crop = [0, img_size[0], 0, img_size[1]] if crop is None else crop\n        xs, ys, ts, ps = clip_events_to_bounds(xs, ys, ts, ps, crop, set_zero=False)\n        xs, ys = xs-crop[2], ys-crop[0]\n\n        if len(xs) < 2:\n            xs = np.array([0,0])\n            ys = np.array([0,0])\n            if img_ts is None:\n                ts = np.array([0,0])\n            else:\n                ts = np.array([img_ts[0], img_ts[0]+0.000001])\n            ps = np.array([0.,0.])\n\n        #Defaults and range checks\n        num_show = len(xs) if num_show == -1 else num_show\n        skip = max(len(xs)//num_show, 1)\n        num_compress = len(xs) if num_compress == 'all' else num_compress\n        num_compress = min(int(img_size[0]*img_size[1]*0.5), len(xs)) if num_compress=='auto' else 0\n        xs, ys, ts, ps = xs[::skip], ys[::skip], ts[::skip], ps[::skip]\n\n        #Prepare the plot, set colors\n        fig = plt.figure()\n        ax = fig.add_subplot(111, projection='3d', proj_type = 'ortho')\n        colors = ['r' if p>0 else ('#00DAFF' if invert else 'b') for p in ps]\n\n        #Plot images\n        if len(imgs)>0 and show_frames:\n            for imgidx, (img, img_ts) in enumerate(zip(imgs, img_ts)):\n                img = img[crop[0]:crop[1], crop[2]:crop[3]].astype(float)\n                img = np.flip(img, axis=0)\n                img = np.flip(img, axis=1) if flip_x else img\n                if len(img.shape)==2:\n                    img = np.stack((img, img, img), axis=2)\n                if num_compress > 0:\n                    events_img = events_to_image(xs[0:num_compress], ys[0:num_compress],\n                            np.ones(min(num_compress, len(xs))), sensor_size=img.shape[0:2])\n                    events_img[events_img>0] = 1\n                    img[:,:,1] += events_img[:,:]\n                    img = np.clip(img, 0, 1)\n                x, y = np.ogrid[0:img.shape[0], 0:img.shape[1]]\n                event_idx = np.searchsorted(ts, img_ts)\n\n                ax.scatter(xs[0:event_idx], ts[0:event_idx], ys[0:event_idx], zdir='z',\n                        c=colors[0:event_idx], facecolors=colors[0:event_idx],\n                        s=event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n\n                img /= 255.0\n                #img = cv.normalize(img, None, 0, 1, cv.NORM_MINMAX)\n                ax.plot_surface(y, img_ts, x, rstride=stride, cstride=stride, facecolors=img, alpha=1)\n\n                ax.scatter(xs[event_idx:-1], ts[event_idx:-1], ys[event_idx:-1], zdir='z',\n                        c=colors[event_idx:-1], facecolors=colors[event_idx:-1],\n                        s=event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n    \n        elif num_compress > 0:\n            # Plot events\n            ax.scatter(xs[::skip], ts[::skip], ys[::skip], zdir='z', c=colors[::skip], facecolors=colors[::skip],\n                    s=np.ones(xs.shape)*event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n            num_compress = min(num_compress, len(xs))\n            if not compress_front:\n                ax.scatter(xs[0:num_compress], np.ones(num_compress)*ts[0], ys[0:num_compress],\n                        marker=marker, zdir='z', c='w' if invert else 'k', s=np.ones(num_compress)*event_size)\n            else:\n                ax.scatter(xs[-num_compress-1:-1], np.ones(num_compress)*ts[-1], ys[-num_compress-1:-1],\n                        marker=marker, zdir='z', c='w' if invert else 'k', s=np.ones(num_compress)*event_size)\n        else:\n            # Plot events\n            ax.scatter(xs, ts, ys,zdir='z', c=colors, facecolors=colors, s=np.ones(xs.shape)*event_size, marker=marker, linewidths=0, alpha=1.0 if show_events else 0)\n    \n        ax.view_init(elev=elev, azim=azim)\n        ax.grid(False)\n        # Hide panes\n        ax.xaxis.pane.fill = False\n        ax.yaxis.pane.fill = False\n        ax.zaxis.pane.fill = False\n        if not show_axes:\n            # Hide spines\n            ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n            ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n            ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n            ax.set_frame_on(False)\n        # Hide xy axes\n        ax.set_xticks([])\n        ax.set_yticks([])\n        ax.set_zticks([])\n        # Flush axes\n        ax.set_xlim3d(0, img_size[1])\n        ax.set_ylim3d(ts[0], ts[-1])\n        ax.set_zlim3d(0,img_size[0])\n        #ax.xaxis.set_visible(False)\n        #ax.axes.get_yaxis().set_visible(False)\n\n        if show_plot:\n            plt.show()\n        if save_path is not None:\n            ensure_dir(save_path)\n            print(\"Saving to {}\".format(save_path))\n            plt.savefig(save_path, transparent=True, dpi=600, bbox_inches = 'tight')\n        plt.close()\n\nclass VoxelVisualizer(Visualizer):\n\n    def __init__(self, sensor_size):\n        self.sensor_size = sensor_size\n\n    @staticmethod\n    def increase_brightness(rgb, increase=0.5):\n        rgb = (rgb*255).astype('uint8')\n        channels = rgb.shape[1]\n        hsv = (np.stack([cv.cvtColor(rgb[:,x,:,:], cv.COLOR_RGB2HSV) for x in range(channels)])).astype(float)\n        hsv[:,:,:,2] = np.clip(hsv[:,:,:,2] + increase*255, 0, 255)\n        hsv = hsv.astype('uint8')\n        rgb_new = np.stack([cv.cvtColor(hsv[x,:,:,:], cv.COLOR_HSV2RGB) for x in range(channels)])\n        rgb_new = (rgb_new.transpose(1,0,2,3)).astype(float)\n        return rgb_new/255.0\n\n    def plot_events(self, data, save_path, bins=5, crop=None, elev=0, azim=45, show_axes=False,\n            show_plot=False, flip_x=False, size_reduction=10):\n\n        xs, ys, ts, ps = self.unpackage_events(data['events'])\n        if len(xs) < 2:\n            return\n        ys = self.sensor_size[0]-ys\n        xs = self.sensor_size[1]-xs if flip_x else xs\n\n        frames, frame_ts = data['frame'], data['frame_ts']\n        if not isinstance(frames, list):\n            frames, frame_ts = [frames], [frame_ts]\n\n        if self.sensor_size is None:\n            self.sensor_size = [np.max(ys)+1, np.max(xs)+1] if len(frames)==0 else frames[0].shape\n        if crop is not None:\n            xs, ys, ts, ps = clip_events_to_bounds(xs, ys, ts, ps, crop)\n            self.sensor_size = crop_to_size(crop)\n            xs, ys = xs-crop[2], ys-crop[0]\n        num = 10000\n        xs, ys, ts, ps = xs[0:num], ys[0:num], ts[0:num], ps[0:num]\n        if len(xs) == 0:\n            return\n        voxels = events_to_voxel(xs, ys, ts, ps, bins, sensor_size=self.sensor_size)\n        voxels = block_reduce(voxels, block_size=(1,size_reduction,size_reduction), func=np.mean, cval=0)\n        dimdiff = voxels.shape[1]-voxels.shape[0]\n        filler = np.zeros((dimdiff, *voxels.shape[1:]))\n        voxels = np.concatenate((filler, voxels), axis=0)\n        voxels = voxels.transpose(0,2,1)\n\n        pltvoxels = voxels != 0\n        pvp, nvp = voxels > 0, voxels < 0\n        rng = 0.2\n        min_r, min_b, max_g = 80/255.0, 80/255.0, 0/255.0\n\n        vox_cols = voxels/(max(np.abs(np.max(voxels)), np.abs(np.min(voxels))))\n        pvox, nvox = vox_cols*np.where(vox_cols > 0, 1, 0), np.abs(vox_cols)*np.where(vox_cols < 0, 1, 0)\n        pvox, nvox = pvox*(1-min_r)+min_r, nvox*(1-min_b)+min_b\n        zeros = np.zeros_like(voxels)\n\n        colors = np.empty(voxels.shape, dtype=object)\n\n        increase = 0.5\n        redvals = np.stack((pvox, (1.0-pvox)*max_g, pvox-min_r), axis=3)\n        redvals = self.increase_brightness(redvals, increase=increase)\n        redvals = nlr.unstructured_to_structured(redvals).astype('O')\n\n        bluvals = np.stack((nvox-min_b, (1.0-nvox)*max_g, nvox), axis=3)\n        bluvals = self.increase_brightness(bluvals, increase=increase)\n        bluvals = nlr.unstructured_to_structured(bluvals).astype('O')\n\n        colors[pvp] = redvals[pvp]\n        colors[nvp] = bluvals[nvp]\n\n        fig = plt.figure()\n        ax = fig.gca(projection='3d')\n        ax.voxels(pltvoxels, facecolors=colors)\n        ax.view_init(elev=elev, azim=azim)\n\n        ax.grid(False)\n        # Hide panes\n        ax.xaxis.pane.fill = False\n        ax.yaxis.pane.fill = False\n        ax.zaxis.pane.fill = False\n        if not show_axes:\n            # Hide spines\n            ax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n            ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n            ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))\n            ax.set_frame_on(False)\n        # Hide xy axes\n        ax.set_xticks([])\n        ax.set_yticks([])\n        ax.set_zticks([])\n\n        ax.xaxis.set_visible(False)\n        ax.axes.get_yaxis().set_visible(False)\n\n        if show_plot:\n            plt.show()\n        if save_path is not None:\n            ensure_dir(save_path)\n            print(\"Saving to {}\".format(save_path))\n            plt.savefig(save_path, transparent=True, dpi=600, bbox_inches = 'tight')\n        plt.close()\n"
  },
  {
    "path": "lib/visualization/visualizers_mayavi.py",
    "content": ""
  },
  {
    "path": "visualize.py",
    "content": "import argparse\nimport os\nfrom tqdm import tqdm\nimport numpy as np\nfrom lib.data_formats.read_events import read_memmap_events, read_h5_events_dict\nfrom lib.data_loaders import MemMapDataset, DynamicH5Dataset, NpyDataset\nfrom lib.visualization.visualizers import TimeStampImageVisualizer, EventImageVisualizer, \\\n        EventsVisualizer, VoxelVisualizer\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Quick demo\n    \"\"\"\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"memmap events path\")\n    parser.add_argument(\"--output_path\", type=str, default=\"/tmp/visualization\", help=\"Where to save image outputs\")\n    parser.add_argument(\"--filetype\", type=str, default=\"png\", help=\"Which filetype to save as\", choices=[\"png\", \"jpg\", \"pdf\"])\n\n    parser.add_argument('--plot_method', default='between_frames', type=str,\n                        help='which method should be used to visualize',\n                        choices=['between_frames', 'k_events', 't_seconds', 'fixed_frames'])\n    parser.add_argument('--w_width', type=float, default=0.01,\n                        help='new plot is formed every t seconds/k events (required if voxel_method is t_seconds)')\n    parser.add_argument('--sw_width', type=float,\n                        help='sliding_window size in seconds/events (required if voxel_method is t_seconds)')\n    parser.add_argument('--num_frames', type=int, default=100, help='if fixed_frames chosen as voxel method, sets the number of frames')\n\n    parser.add_argument('--visualization', type=str, default='events', choices=['events', 'voxels', 'event_image', 'ts_image'])\n\n    parser.add_argument(\"--num_bins\", type=int, default=6, help=\"How many bins voxels should have.\")\n\n    parser.add_argument('--show_plot', action='store_true', help='If true, will also display the plot in an interactive window.\\\n            Useful for selecting the desired orientation.')\n\n    parser.add_argument(\"--num_show\", type=int, default=-1, help=\"How many events to show per plot. If -1, show all events.\")\n    parser.add_argument(\"--event_size\", type=float, default=2, help=\"Marker size of the plotted events\")\n    parser.add_argument(\"--ts_scale\", type=int, default=10000, help=\"Scales the time axis. Only applicable for mayavi rendering.\")\n    parser.add_argument(\"--elev\", type=float, default=0, help=\"Elevation of plot\")\n    parser.add_argument(\"--azim\", type=float, default=45, help=\"Azimuth of plot\")\n    parser.add_argument(\"--stride\", type=int, default=1, help=\"Downsample stride for plotted images.\")\n    parser.add_argument(\"--skip_frames\", type=int, default=1, help=\"Amount of frames to place per plot.\")\n    parser.add_argument(\"--start_frame\", type=int, default=0, help=\"On which frame to start.\")\n    parser.add_argument('--hide_skipped', action='store_true', help='Do not draw skipped frames into plot.')\n    parser.add_argument('--hide_events', action='store_true', help='Do not draw events')\n    parser.add_argument('--hide_frames', action='store_true', help='Do not draw frames')\n    parser.add_argument('--show_axes', action='store_true', help='Draw axes')\n    parser.add_argument('--flip_x', action='store_true', help='Flip in the x axis')\n    parser.add_argument(\"--num_compress\", type=str, default='auto', help=\"How many events to draw compressed. If 'auto'\\\n            will automatically determine.\", choices=['auto', 'none', 'all'])\n    parser.add_argument('--compress_front', action='store_true', help='If set, will put the compressed events at the _start_\\\n            of the event volume, rather than the back.')\n    parser.add_argument('--invert', action='store_true', help='If the figure is for a black background, you can invert the \\\n            colors for better visibility.')\n    parser.add_argument(\"--crop\", type=str, default=None, help=\"Set a crop of both images and events. Uses 'imagemagick' \\\n            syntax, eg for a crop of 10x20 starting from point 30,40 use: 10x20+30+40.\")\n    parser.add_argument(\"--renderer\", type=str, default=\"matplotlib\", help=\"Which renderer to use (mayavi is faster)\", choices=[\"matplotlib\", \"mayavi\"])\n    args = parser.parse_args()\n    if not os.path.exists(args.output_path):\n        os.makedirs(args.output_path)\n\n    if os.path.isdir(args.path):\n        loader_type = MemMapDataset\n    elif os.path.splitext(args.path)[1] == \".npy\":\n        loader_type = NpyDataset\n    else:\n        loader_type = DynamicH5Dataset\n    dataloader = loader_type(args.path, voxel_method={'method':args.plot_method, 't':args.w_width,\n        'k':args.w_width, 'sliding_window_t':args.sw_width, 'sliding_window_w':args.sw_width, 'num_frames':args.num_frames},\n            return_events=True, return_voxelgrid=False, return_frame=True, return_flow=True, return_format='numpy')\n    sensor_size = dataloader.size()\n\n    if args.visualization == 'events':\n        kwargs = {'num_compress':args.num_compress, 'num_show':args.num_show, 'event_size':args.event_size,\n                'elev':args.elev, 'azim':args.azim, 'show_events':not args.hide_events,\n                'show_frames':not args.hide_frames, 'show_plot':args.show_plot, 'crop':args.crop,\n                'compress_front':args.compress_front, 'marker':'.', 'stride':args.stride,\n                'invert':args.invert, 'show_axes':args.show_axes, 'flip_x':args.flip_x}\n        visualizer = EventsVisualizer(sensor_size)\n    elif args.visualization == 'voxels':\n        kwargs = {'bins':args.num_bins, 'crop':args.crop, 'elev':args.elev, 'azim':args.azim,\n                'show_axes':args.show_axes, 'show_plot':args.show_plot, 'flip_x':args.flip_x}\n        visualizer = VoxelVisualizer(sensor_size)\n    elif args.visualization == 'event_image':\n        kwargs = {}\n        visualizer = EventImageVisualizer(sensor_size)\n    elif args.visualization == 'ts_image':\n        kwargs = {}\n        visualizer = TimeStampImageVisualizer(sensor_size)\n    else:\n        raise Exception(\"Unknown visualization chosen: {}\".format(args.visualization))\n\n    plot_data = {'events':np.ones((0, 4)), 'frame':[], 'frame_ts':[]}\n    print(\"{} frames in sequence\".format(len(dataloader)))\n    for i, data in enumerate(tqdm(dataloader)):\n        plot_data['events'] = np.concatenate((plot_data['events'], data['events']))\n        if args.plot_method == 'between_frames':\n            plot_data['frame'].append(data['frame'])\n            plot_data['frame_ts'].append(data['frame_ts'])\n        else:\n            plot_data['frame'] = data['frame']\n            plot_data['frame_ts'] = data['frame_ts']\n\n        output_path = os.path.join(args.output_path, \"frame_{:010d}.{}\".format(i, args.filetype))\n        if i%args.skip_frames == 0:\n            visualizer.plot_events(plot_data, output_path, **kwargs)\n            plot_data = {'events':np.ones((0, 4)), 'frame':[], 'frame_ts':[]}\n\n    #if args.plot_method == 'between_frames':\n    #    if args.renderer == \"mayavi\":\n    #        from lib.visualization.draw_event_stream_mayavi import plot_between_frames\n    #        plot_between_frames(xs, ys, ts, ps, frames, frame_idx, args, plttype='events')\n    #    elif args.renderer == \"matplotlib\":\n    #        from lib.visualization.draw_event_stream import plot_between_frames\n    #        plot_between_frames(xs, ys, ts, ps, frames, frame_idx, args, plttype='events')\n    #elif args.plot_method == 'k_events':\n    #    print(args.renderer)\n    #    pass\n    #elif args.plot_method == 't_seconds':\n    #    if args.renderer == \"mayavi\":\n    #        from lib.visualization.draw_event_stream_mayavi import plot_events_sliding\n    #        plot_events_sliding(xs, ys, ts, ps, args, dt=args.w_width, sdt=args.sw_width, frames=frames, frame_ts=frame_ts)\n    #    elif args.renderer == \"matplotlib\":\n    #        from lib.visualization.draw_event_stream import plot_events_sliding\n    #        plot_events_sliding(xs, ys, ts, ps, args, frames=frames, frame_ts=frame_ts)\n"
  },
  {
    "path": "visualize_events.py",
    "content": "import argparse\nimport os\nimport numpy as np\nfrom lib.data_formats.read_events import read_memmap_events, read_h5_events_dict\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Quick demo\n    \"\"\"\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"memmap events path\")\n    parser.add_argument(\"--output_path\", type=str, default=\"/tmp/visualization\", help=\"Where to save image outputs\")\n\n    parser.add_argument('--plot_method', default='between_frames', type=str,\n                        help='which method should be used to visualize',\n                        choices=['between_frames', 'k_events', 't_seconds'])\n    parser.add_argument('--w_width', type=float, default=0.01,\n                        help='new plot is formed every t seconds (required if voxel_method is t_seconds)')\n    parser.add_argument('--sw_width', type=float,\n                        help='sliding_window size in seconds (required if voxel_method is t_seconds)')\n\n    parser.add_argument(\"--num_bins\", type=int, default=6, help=\"How many bins voxels should have.\")\n\n    parser.add_argument('--show_plot', action='store_true', help='If true, will also display the plot in an interactive window.\\\n            Useful for selecting the desired orientation.')\n\n    parser.add_argument(\"--num_show\", type=int, default=-1, help=\"How many events to show per plot. If -1, show all events.\")\n    parser.add_argument(\"--event_size\", type=float, default=2, help=\"Marker size of the plotted events\")\n    parser.add_argument(\"--ts_scale\", type=int, default=10000, help=\"Scales the time axis. Only applicable for mayavi rendering.\")\n    parser.add_argument(\"--elev\", type=float, default=20, help=\"Elevation of plot\")\n    parser.add_argument(\"--azim\", type=float, default=45, help=\"Azimuth of plot\")\n    parser.add_argument(\"--stride\", type=int, default=1, help=\"Downsample stride for plotted images.\")\n    parser.add_argument(\"--skip_frames\", type=int, default=1, help=\"Amount of frames to place per plot.\")\n    parser.add_argument(\"--start_frame\", type=int, default=0, help=\"On which frame to start.\")\n    parser.add_argument('--hide_skipped', action='store_true', help='Do not draw skipped frames into plot.')\n    parser.add_argument('--hide_events', action='store_true', help='Do not draw events')\n    parser.add_argument('--hide_frames', action='store_true', help='Do not draw frames')\n    parser.add_argument('--show_axes', action='store_true', help='Draw axes')\n    parser.add_argument(\"--num_compress\", type=int, default=0, help=\"How many events to draw compressed. If 'auto'\\\n            will automatically determine.\", choices=['value', 'auto'])\n    parser.add_argument('--compress_front', action='store_true', help='If set, will put the compressed events at the _start_\\\n            of the event volume, rather than the back.')\n    parser.add_argument('--invert', action='store_true', help='If the figure is for a black background, you can invert the \\\n            colors for better visibility.')\n    parser.add_argument(\"--crop\", type=str, default=None, help=\"Set a crop of both images and events. Uses 'imagemagick' \\\n            syntax, eg for a crop of 10x20 starting from point 30,40 use: 10x20+30+40.\")\n    parser.add_argument(\"--renderer\", type=str, default=\"matplotlib\", help=\"Which renderer to use (mayavi is faster)\", choices=[\"matplotlib\", \"mayavi\"])\n    args = parser.parse_args()\n\n    if os.path.isdir(args.path):\n        events = read_memmap_events(args.path)\n\n        ts = events['t'][:].squeeze()\n        t0 = ts[0]\n        ts = ts-t0\n        frames = (events['images'][args.start_frame+1::])/255\n        frame_idx = events['index'][args.start_frame::]\n        frame_ts = events['frame_stamps'][args.start_frame+1::]-t0\n\n        start_idx = np.searchsorted(ts, frame_ts[0])\n        print(\"Starting from frame {}, event {}\".format(args.start_frame, start_idx))\n\n        xs = events['xy'][:,0]\n        ys = events['xy'][:,1]\n        ts = ts[:]\n        ps = events['p'][:]\n\n        print(\"Have {} frames\".format(frames.shape))\n    else:\n        events = read_h5_events_dict(args.path)\n        xs = events['xs']\n        ys = events['ys']\n        ts = events['ts']\n        ps = events['ps']\n        t0 = ts[0]\n        ts = ts-t0\n        frames = [np.flip(np.flip(x/255., axis=0), axis=1) for x in events['frames']]\n        frame_ts = events['frame_timestamps'][1:]-t0\n        frame_end = events['frame_event_indices'][1:]\n        frame_start = np.concatenate((np.array([0]), frame_end))\n        frame_idx = np.stack((frame_end, frame_start[0:-1]), axis=1)\n        ys = frames[0].shape[0]-ys\n        xs = frames[0].shape[1]-xs\n\n    if args.plot_method == 'between_frames':\n        if args.renderer == \"mayavi\":\n            from lib.visualization.draw_event_stream_mayavi import plot_between_frames\n            plot_between_frames(xs, ys, ts, ps, frames, frame_idx, args, plttype='events')\n        elif args.renderer == \"matplotlib\":\n            from lib.visualization.draw_event_stream import plot_between_frames\n            plot_between_frames(xs, ys, ts, ps, frames, frame_idx, args, plttype='events')\n    elif args.plot_method == 'k_events':\n        print(args.renderer)\n        pass\n    elif args.plot_method == 't_seconds':\n        if args.renderer == \"mayavi\":\n            from lib.visualization.draw_event_stream_mayavi import plot_events_sliding\n            plot_events_sliding(xs, ys, ts, ps, args, dt=args.w_width, sdt=args.sw_width, frames=frames, frame_ts=frame_ts)\n        elif args.renderer == \"matplotlib\":\n            from lib.visualization.draw_event_stream import plot_events_sliding\n            plot_events_sliding(xs, ys, ts, ps, args, frames=frames, frame_ts=frame_ts)\n"
  },
  {
    "path": "visualize_flow.py",
    "content": "import argparse\nimport os\nimport pandas as pd\nimport glob\nimport numpy as np\nimport cv2 as cv\nfrom lib.data_formats.read_events import read_memmap_events, read_h5_events_dict\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Quick demo\n    \"\"\"\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"events path\")\n    parser.add_argument(\"flow_path\", help=\"flow path\")\n    parser.add_argument(\"--output_path\", type=str, default=\"/tmp/visualization\", help=\"Where to save image outputs\")\n\n    parser.add_argument('--plot_method', default='between_frames', type=str,\n                        help='which method should be used to visualize',\n                        choices=['between_frames', 'k_events', 't_seconds'])\n    parser.add_argument('--w_width', type=float, default=0.01,\n                        help='new plot is formed every t seconds (required if voxel_method is t_seconds)')\n    parser.add_argument('--sw_width', type=float,\n                        help='sliding_window size in seconds (required if voxel_method is t_seconds)')\n\n    parser.add_argument(\"--num_bins\", type=int, default=6, help=\"How many bins voxels should have.\")\n\n    parser.add_argument('--show_plot', action='store_true', help='If true, will also display the plot in an interactive window.\\\n            Useful for selecting the desired orientation.')\n\n    parser.add_argument(\"--num_show\", type=int, default=-1, help=\"How many events to show per plot. If -1, show all events.\")\n    parser.add_argument(\"--event_size\", type=float, default=2, help=\"Marker size of the plotted events\")\n    parser.add_argument(\"--ts_scale\", type=int, default=10000, help=\"Scales the time axis. Only applicable for mayavi rendering.\")\n    parser.add_argument(\"--elev\", type=float, default=0, help=\"Elevation of plot\")\n    parser.add_argument(\"--azim\", type=float, default=45, help=\"Azimuth of plot\")\n    parser.add_argument(\"--stride\", type=int, default=1, help=\"Downsample stride for plotted images.\")\n    parser.add_argument(\"--skip_frames\", type=int, default=1, help=\"Amount of frames to place per plot.\")\n    parser.add_argument(\"--start_frame\", type=int, default=0, help=\"On which frame to start.\")\n    parser.add_argument('--hide_skipped', action='store_true', help='Do not draw skipped frames into plot.')\n    parser.add_argument('--hide_events', action='store_true', help='Do not draw events')\n    parser.add_argument('--hide_frames', action='store_true', help='Do not draw frames')\n    parser.add_argument('--show_axes', action='store_true', help='Draw axes')\n    parser.add_argument(\"--num_compress\", type=int, default=0, help=\"How many events to draw compressed. If 'auto'\\\n            will automatically determine.\", choices=['value', 'auto'])\n    parser.add_argument('--compress_front', action='store_true', help='If set, will put the compressed events at the _start_\\\n            of the event volume, rather than the back.')\n    parser.add_argument('--invert', action='store_true', help='If the figure is for a black background, you can invert the \\\n            colors for better visibility.')\n    parser.add_argument(\"--crop\", type=str, default=None, help=\"Set a crop of both images and events. Uses 'imagemagick' \\\n            syntax, eg for a crop of 10x20 starting from point 30,40 use: 10x20+30+40.\")\n    parser.add_argument(\"--renderer\", type=str, default=\"matplotlib\", help=\"Which renderer to use (mayavi is faster)\", choices=[\"matplotlib\", \"mayavi\"])\n    args = parser.parse_args()\n\n    events = read_h5_events_dict(args.path)\n    xs = events['xs']\n    ys = events['ys']\n    ts = events['ts']\n    ps = events['ps']\n    t0 = ts[0]\n    ts = ts-t0\n    frames = [np.flip(np.flip(x/255., axis=0), axis=1) for x in events['frames']]\n    frame_ts = events['frame_timestamps'][1:]-t0\n    frame_end = events['frame_event_indices'][1:]\n    frame_start = np.concatenate((np.array([0]), frame_end))\n    frame_idx = np.stack((frame_end, frame_start[0:-1]), axis=1)\n    ys = frames[0].shape[0]-ys\n    xs = frames[0].shape[1]-xs\n\n    flow_paths = sorted(glob.glob(os.path.join(args.flow_path, \"*.npy\")))\n    flow_img_paths = sorted(glob.glob(os.path.join(args.flow_path, \"*.png\")))\n    flow_ts = pd.read_csv(os.path.join(args.flow_path, \"timestamps.txt\"), delimiter=\" \", names=[\"fname\", \"timestamp\"])\n    flow_ts = np.array(flow_ts[\"timestamp\"])\n\n    #flows = [-np.flip(np.flip(np.load(fp), axis=1), axis=2) for fp in flow_paths]\n    flows = [-np.load(fp) for fp in flow_paths]\n    flow_imgs = [cv.imread(fi) for fi in flow_img_paths]\n    print(\"Loaded {} flow, {} img, {} ts\".format(len(flows), len(flow_imgs), len(flow_ts)))\n\n    if args.plot_method == 'between_frames':\n        if args.renderer == \"mayavi\":\n            print(args.renderer)\n            pass\n        elif args.renderer == \"matplotlib\":\n            from lib.visualization.draw_flow import plot_between_frames\n            plot_between_frames(xs, ys, ts, ps, flows, flow_imgs, flow_ts, args)\n            print(args.renderer)\n            pass\n    elif args.plot_method == 'k_events':\n        print(args.renderer)\n        pass\n    elif args.plot_method == 't_seconds':\n        if args.renderer == \"mayavi\":\n            print(args.renderer)\n            pass\n        elif args.renderer == \"matplotlib\":\n            print(args.renderer)\n            pass\n"
  },
  {
    "path": "visualize_voxel.py",
    "content": "import argparse\nimport os\nimport numpy as np\nfrom lib.visualization.draw_event_stream import plot_between_frames\nfrom lib.data_formats.read_events import read_memmap_events, read_h5_events_dict\n\ndef plot_events_sliding(xs, ys, ts, ps, args, frames=None, frame_ts=None):\n    if dt is None:\n        dt = (ts[-1]-ts[0])/10\n        sdt = dt/10\n        print(\"Using dt={}, sdt={}\".format(dt, sdt))\n    if frames is not None:\n        sensor_size = frames[0].shape\n    else:\n        sensor_size = [max(ys), max(xs)]\n\n    if len(frame_ts.shape) == 2:\n        frame_ts = frame_ts[:,1]\n    for i, t0 in enumerate(tqdm(np.arange(ts[0], ts[-1]-dt, sdt))):\n        te = t0+dt\n        eidx0 = np.searchsorted(ts, t0)\n        eidx1 = np.searchsorted(ts, te)\n        fidx0 = np.searchsorted(frame_ts, t0)\n        fidx1 = np.searchsorted(frame_ts, te)\n        #print(\"{}:{} = {}\".format(frame_ts[fidx0], ts[eidx0], fidx0))\n\n        wxs, wys, wts, wps = xs[eidx0:eidx1], ys[eidx0:eidx1], ts[eidx0:eidx1], ps[eidx0:eidx1],\n        if fidx0 == fidx1:\n            wframes=[]\n            wframe_ts=[]\n        else:\n            wframes = frames[fidx0:fidx1]\n            wframe_ts = frame_ts[fidx0:fidx1]\n\n        save_path = os.path.join(args.output_path, \"frame_{:010d}.png\".format(i))\n        plot_events(wxs, wys, wts, wps, save_path=save_path, num_show=args.num_show, event_size=args.event_size,\n                imgs=args.wframes, img_ts=args.wframe_ts, show_events=args.show_events, azim=args.azim,\n                elev=args.elev, show_frames=args.show_frames, crop=args.crop, compress_front=args.compress_front,\n                invert=args.invert, num_compress=args.num_compress, show_plot=args.show_plot, img_size=args.sensor_size,\n                show_axes=args.show_axes)\n\nif __name__ == \"__main__\":\n    \"\"\"\n    Quick demo\n    \"\"\"\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", help=\"memmap events path\")\n    parser.add_argument(\"--output_path\", type=str, default=\"/tmp/visualization\", help=\"Where to save image outputs\")\n\n    parser.add_argument('--plot_method', default='between_frames', type=str,\n                        help='which method should be used to visualize',\n                        choices=['between_frames', 'k_events', 't_seconds'])\n    parser.add_argument('--k', type=int,\n                        help='new plot is formed every k events (required if voxel_method is k_events)')\n    parser.add_argument('--sliding_window_w', type=int,\n                        help='sliding_window size (required if voxel_method is k_events)')\n    parser.add_argument('--t', type=float,\n                        help='new plot is formed every t seconds (required if voxel_method is t_seconds)')\n    parser.add_argument('--sliding_window_t', type=float,\n                        help='sliding_window size in seconds (required if voxel_method is t_seconds)')\n    parser.add_argument(\"--num_bins\", type=int, default=6, help=\"How many bins voxels should have.\")\n\n    parser.add_argument('--show_plot', action='store_true', help='If true, will also display the plot in an interactive window.\\\n            Useful for selecting the desired orientation.')\n\n    parser.add_argument(\"--num_show\", type=int, default=-1, help=\"How many events to show per plot. If -1, show all events.\")\n    parser.add_argument(\"--event_size\", type=float, default=2, help=\"Marker size of the plotted events\")\n    parser.add_argument(\"--elev\", type=float, default=20, help=\"Elevation of plot\")\n    parser.add_argument(\"--azim\", type=float, default=-25, help=\"Azimuth of plot\")\n    parser.add_argument(\"--stride\", type=int, default=1, help=\"Downsample stride for plotted images.\")\n    parser.add_argument(\"--skip_frames\", type=int, default=1, help=\"Amount of frames to place per plot.\")\n    parser.add_argument(\"--start_frame\", type=int, default=0, help=\"On which frame to start.\")\n    parser.add_argument('--hide_skipped', action='store_true', help='Do not draw skipped frames into plot.')\n    parser.add_argument('--hide_events', action='store_true', help='Do not draw events')\n    parser.add_argument('--hide_frames', action='store_true', help='Do not draw frames')\n    parser.add_argument('--show_axes', action='store_true', help='Draw axes')\n    parser.add_argument(\"--num_compress\", type=int, default=0, help=\"How many events to draw compressed. If 'auto'\\\n            will automatically determine.\", choices=['value', 'auto'])\n    parser.add_argument('--compress_front', action='store_true', help='If set, will put the compressed events at the _start_\\\n            of the event volume, rather than the back.')\n    parser.add_argument('--invert', action='store_true', help='If the figure is for a black background, you can invert the \\\n            colors for better visibility.')\n    parser.add_argument(\"--crop\", type=str, default=None, help=\"Set a crop of both images and events. Uses 'imagemagick' \\\n            syntax, eg for a crop of 10x20 starting from point 30,40 use: 10x20+30+40.\")\n    args = parser.parse_args()\n\n    if os.path.isdir(args.path):\n        events = read_memmap_events(args.path)\n\n        ts = events['t'][:].squeeze()\n        t0 = ts[0]\n        ts = ts-t0\n        frames = (events['images'][args.start_frame+1::])/255\n        frame_idx = events['index'][args.start_frame::]\n        frame_ts = events['frame_stamps'][args.start_frame+1::]-t0\n\n        start_idx = np.searchsorted(ts, frame_ts[0])\n        print(\"Starting from frame {}, event {}\".format(args.start_frame, start_idx))\n\n        xs = events['xy'][:,0]\n        ys = events['xy'][:,1]\n        ts = ts[:]\n        ps = events['p'][:]\n\n        print(\"Have {} frames\".format(frames.shape))\n    else:\n        events = read_h5_events_dict(args.path)\n        xs = events['xs']\n        ys = events['ys']\n        ts = events['ts']\n        ps = events['ps']\n        t0 = ts[0]\n        ts = ts-t0\n        frames = [np.flip(x/255., axis=0) for x in events['frames']]\n        frame_ts = events['frame_timestamps'][1:]-t0\n        frame_end = events['frame_event_indices'][1:]\n        frame_start = np.concatenate((np.array([0]), frame_end))\n        frame_idx = np.stack((frame_end, frame_start[0:-1]), axis=1)\n        ys = frames[0].shape[0]-ys\n\n    plot_between_frames(xs, ys, ts, ps, frames, frame_idx, args, plttype='voxel')\n"
  }
]