GAN for non-rigid object tracking

Object identification and tracking remains a challenging task in computer vision, despite advances in hardware and algorithms. Difficulties arise in part from the non-rigid nature of object motion, in which the shape of an object morphs continuously as it moves. This variability in shape makes it impossible to characterize the boundaries of the object with a single mask. Tracking is further complicated by apparent changes in an object's shape (even for rigid objects and rigid motion) caused by camera position, field of view, and self-motion. Features extracted from each mask must therefore be correlated and compared frame by frame to obtain reliable tracking.

Generative Adversarial Networks at work

One line of solutions for non-rigid object tracking uses per-frame search algorithms that locate the target object with learned object features and connect the detections temporally across frames. However, solutions of this kind tend to be computationally prohibitive in real-time applications and are prone to failure when multiple objects of the same class appear in the image. Furthermore, using features to identify non-rigid objects demands a spatio-temporal relationship between the features found in each frame. This top-down approach is frequently used for human body tracking, where a deformable skeleton template with predefined mechanical constraints is fitted to the features found per frame in a video sequence.
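To make the temporal-connection step concrete, here is a minimal sketch in Python, assuming a hypothetical per-frame detector detect(frame) that returns candidate boxes (x1, y1, x2, y2). Detections are linked greedily by intersection-over-union, which is precisely the step that becomes ambiguous when several similar objects appear:

def iou(a, b):
    # Intersection-over-union of two boxes (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_detections(frames, detect, iou_threshold=0.3):
    # Greedily connect per-frame detections into temporal tracks.
    tracks = []  # each track is a list of (frame_index, box)
    for t, frame in enumerate(frames):
        for box in detect(frame):
            best = max(
                (tr for tr in tracks if tr[-1][0] == t - 1),
                key=lambda tr: iou(tr[-1][1], box),
                default=None,
            )
            if best is not None and iou(best[-1][1], box) >= iou_threshold:
                best.append((t, box))      # extend an existing track
            else:
                tracks.append([(t, box)])  # start a new track
    return tracks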

Thus, in all natural video sequences we expect some degree of object deformation, non-rigid motion, and occlusion. Objects in each frame can be identified by a bounding box, which risks capturing features belonging to the background when the object does not fill most of the box, or by localizing a mask around the object in each frame. The latter approach is usually more computationally and algorithmically complex than the former, but it offers a higher true-positive rate.
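The trade-off can be illustrated with a toy feature extractor; this is a minimal sketch in which mean-color pooling stands in for a real descriptor, and frame is assumed to be an H x W x C NumPy array:

import numpy as np

def bbox_features(frame, box):
    # Bounding-box approach: pool every pixel inside the box,
    # background included (cheap, but contaminates the descriptor).
    x1, y1, x2, y2 = box
    crop = frame[y1:y2, x1:x2]
    return crop.reshape(-1, frame.shape[-1]).mean(axis=0)

def masked_features(frame, mask):
    # Mask approach: pool only pixels flagged as object
    # (costlier to obtain the mask, but a cleaner descriptor).
    return frame[mask.astype(bool)].mean(axis=0)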

With the rapid advances in machine learning techniques over the past decade, it is now possible to reduce the computational and algorithmic cost of object tracking. Bounding-box proposal algorithms for detection (such as YOLO, R-CNN, and SSD) have demonstrated significant improvements in running time. Recent developments in Generative Adversarial Networks (GANs) now offer a further promising approach: providing the most adequate mask (in feature space) for an object during tracking, and the one that persists the longest (temporally); that is, the mask requiring the fewest adjustments across frames and carrying the most persistent features.

The idea behind Generative Adversarial Networks

GANs are constructed from two deep convolutional neural networks, one generative and the other discriminative. The goal of the generative network is to learn the distribution of the input data and generate synthetic samples that are as close as possible to samples drawn from that distribution. These synthetic images are passed to the discriminative network, which tries to determine whether they come from the true data set or are synthetic. In other words, the goal of the generative network is to fool the discriminative network, until it can no longer distinguish between true and synthetic data.
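The adversarial training loop can be written in a few lines. The following is a minimal sketch, assuming PyTorch; fully connected layers stand in for the convolutional networks described above, and all sizes are illustrative rather than taken from the article:

import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # illustrative sizes

G = nn.Sequential(              # generator: latent noise -> synthetic sample
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
D = nn.Sequential(              # discriminator: sample -> P(sample is real)
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: separate real samples from generated ones.
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    loss_g = bce(D(G(torch.randn(n, latent_dim))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()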

How is this idea harnessed to improve object tracking? In tracking, points sampled around the target object contain a dense set of overlapping points and features, which makes it hard to learn the true variability of the object and hence reduces tracking quality. The training data therefore needs to be amplified to account for as many object shapes as possible. Training the tracker on tight object masks produced by a generative network provides this amplification. The generated mask is one that retains the most predictive features while discarding the rest, and one that persists the longest without adjustment. Fed with such masks, the tracker encounters fewer false positives during classification, which in turn reduces the burden of deciding whether a point (feature) in subsequent frames belongs to the target object.
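As a minimal sketch of this amplification step: mask_generator below is a hypothetical conditional generator that, given a latent code and an object crop, returns a soft mask in [0, 1]; none of these names or shapes come from the article. Each annotated crop is expanded into several background-suppressed variants for the tracker to train on:

import torch

def amplify_training_set(frames, boxes, mask_generator, n_variants=8):
    # For each annotated frame, synthesize several plausible tight masks
    # and emit background-suppressed crops as extra positive samples.
    samples = []
    for frame, (x1, y1, x2, y2) in zip(frames, boxes):
        crop = frame[:, y1:y2, x1:x2]                 # C x H x W object crop
        for _ in range(n_variants):
            z = torch.randn(1, 64)                    # latent code -> shape variant
            mask = mask_generator(z, crop.unsqueeze(0))  # 1 x 1 x H x W, in [0, 1]
            keep = (mask.squeeze(0) > 0.5).float()    # binarize the soft mask
            samples.append((crop * keep, 1))          # positive, background zeroed
    return samples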

Augmentation of the training data set is an important stage in machine-learning-based tasks, especially when annotated data are sparse. It is therefore important to put emphasis on training when constructing computer vision and machine learning solutions. At RSIP Vision we employ cutting-edge methodologies to bring our clients high-quality, tailor-made solutions to their challenges. To learn more about RSIP Vision's activities, please visit our project page. To consult our specialists about your computer vision project, please fill in this simple contact form.
