The Workshop on Estimation, Tracking and Fusion: A Tribute to Yaakov Bar-Shalom

 

P. K. Willett

 

 

A Two-Part Celebration

 

Do you know Yaakov Bar-Shalom? I do: he’s a nice man and a good friend, and works just two doors down the hall from me.  Perhaps many of you do, too, from our IEEE Transactions on Aerospace and Electronic Systems. It turns out, from a scan of the 50 Year Cumulative Index (a CD distributed with the July 2000 issue of AES) that amongst the authors of papers published between 1972 and 1999, he is the most prolific. I count Bar-Shalom with 42 articles[1]; his “closest competitors” are Karl Gerlach & Sahjendra Singh (tied at 26), Fred Lee (25), Asim Sen (23) and Peter Maybeck and Ramon Nitzberg (tied at 22). And you may have caught him floating in (on?) the Dead Sea on the cover of AES Magazine in October of 1999 (by the way, a visit to http://seal-www.gtri.gatech.edu/onr_workshop/deadsea.html is quite worthwhile, and thanks to Bill Ballard of GTRI for that), or enjoying a relaxing ski on the September 2001 cover, in both cases surrounded by his favorite reading material, the Systems Magazine.

 

Quite a few other people know Yaakov well, and that’s what I’ll write about now. On May 15-16 of 2001 the 4th ONR/GTRI Workshop on Target Tracking and Sensor Fusion was held at the Naval Postgraduate School in Monterey, CA. This is a group of people who tend to meet at this workshop, or at Oliver Drummond’s or Ivan Kadar’s SPIE conferences. They share a common interest in and around the tracking area, and perforce all owe much to Yaakov.

 

They respect and like him, too. So, on the day following the workshop, most of the participants moved a couple of buildings across the NPS campus for The Workshop on Estimation, Tracking and Fusion: A Tribute to Yaakov Bar-Shalom, a remarkable two-part event in honor of Yaakov’s LX birthday.

 

 

The Presentations

 

Just a “party” would not have been appropriate to celebrate a man for whom research is so important, and whose research (7 books, 19 book chapters, 106 journal papers, and 184 conference papers) has been so abundant. In fact, thirty-five of his colleagues saw fit to write full – and original – papers for this workshop, and these are available both in a bound Proceedings and as a CD-ROM. (If you would like either, please contact Thiagalingam Kirubarajan at kiruba@mail.ece.mcmaster.ca.) All of these people, and many more, were present in the room, and a lucky 20 of them were able to present their work as an honor to Yaakov.

 

These were really good presentations, and many of them can be seen after minor navigation through http://seal-www.gtri.gatech.edu. Let me discuss some of them. First, though, for those readers whose research area does not overlap with Yaakov’s, let me explain that target tracking is, well, the technology of tracking (i.e. estimating the trajectory, and perhaps other characteristics, of) targets (could be a ballistic missile, an airplane, or even a fish). The basic tool is the Kalman filter, which is very nice from the perspective of numerical load, and is optimal (in most senses) under its assumptions of a linear system driven by Gaussian “process” noise to be tracked, and additive Gaussian measurement noise.
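The predict/update recursion that underlies all of this fits in a few lines. Below is a deliberately minimal scalar sketch (a random-walk target observed in additive Gaussian noise); the noise values and measurements are made up for illustration, not taken from any fielded system.

```python
# Minimal scalar Kalman filter: a random-walk target position observed
# in additive Gaussian noise. All numbers are illustrative.

def kalman_step(x, P, z, q=1.0, r=4.0):
    """One predict/update cycle.
    x, P : prior state estimate and its variance
    z    : new measurement
    q, r : process- and measurement-noise variances (assumed known)
    """
    # Predict: a random-walk model leaves the estimate unchanged but
    # inflates its uncertainty by the process noise.
    x_pred, P_pred = x, P + q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 100.0                  # vague prior
for z in [1.2, 0.9, 1.4, 1.1]:     # noisy looks at a target near 1.0
    x, P = kalman_step(x, P, z)
# x is now near the measurements, and P has shrunk from 100 to below 2.
```

Everything beyond this point in the tracking literature can be read as coping with the ways real problems violate this tidy recursion.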

 

Fortunately, for the sake of continued enjoyable employment of many people, those assumptions are often simplistic. First, one has to deal with missed detections and false alarms – the overall “observation” is no longer Gaussian – and the tracker is presented with a group of measurements with no idea which, if any, is relevant. Second, the target model is often nonlinear, possibly in its motion (as missile drag) or in observations (as with angle-only measurements). Third, targets often “maneuver”; that is, they switch their models (e.g. from a straight line to a constant-speed turn) and don’t tell us when. Fourth, there are often many targets, meaning that not only is it uncertain whether a measurement is false or true, but also in the latter case from which target it arose. And fifth, there are now often multiple sensors “helping” with their own misalignment, out-of-sequence measurements, and unlabeled data. One probably could continue; but at this point it is probably best to note that once one has in hand one’s statistical and modeling assumptions, one can (usually) write down a likelihood function for the state of the target(s) given the measurements received to date. Problem solved? No way. The real problem is numerical: how can we approximate away most of the complexity?

 

So let’s begin. A good place to start is with Fred Daum (Principal Scientist at Raytheon) and his “Industrial Strength Nonlinear Filters”. Fred’s talks are always clear and witty; corporate briefings at Raytheon must be fun. Anyway, when one has written down the tracking likelihood function propagation as mentioned above (probably as a differential equation in many dimensions, with an equally-promising integral to follow), one can then explain that the solution amounts to a nonlinear filter. If anyone remains in the room, one may be called on to solve it; and this is hard. “The challenge, therefore, is to develop a nonlinear filtering theory that normal engineers can actually understand and use” instead of the extended Kalman filter (EKF) approximation. Daum has developed a nice and quite wide class of models for which good solutions can be programmed with complexity similar to the EKF, and which, instead of a heart attack, cause merely an accelerated pulse.

 

Subhash Challa from the University of Melbourne in Australia gave a very nice elucidation of “Target Tracking Using Particle Filters”, co-authored by Neil Gordon (from DERA, now Qinetiq, in Malvern, U.K., and, as you can see, people came from all over for this affair). A particle filter is also a general solution to the nonlinear filtering problem: the key here is to use Monte Carlo integration instead of numerical quadrature. This may sound unpromising, but there have been some recent breakthroughs. One such is due to Gordon, Salmond & Smith, who found a way to “re-sample” at each new scan: the notion now is of a series of “particles” that evolve to emulate a trajectory, with those that the new scan’s data makes appear unlikely dropped, and those that the data confirms more inclined to spawn new particles.
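The propagate/weight/re-sample cycle just described can be sketched in a few lines. The toy scalar version below is only a flavor of the bootstrap idea, with made-up noise values and a trivial random-walk motion model; a real tracker would substitute the actual motion and measurement models.

```python
import math
import random

# Toy scalar sketch of one bootstrap-filter cycle (propagate, weight,
# re-sample). The random-walk model and noise values are illustrative.

def bootstrap_step(particles, z, q=1.0, r=1.0):
    # 1. Propagate each particle through the motion model, adding
    #    process noise so the cloud explores plausible trajectories.
    particles = [x + random.gauss(0.0, q) for x in particles]
    # 2. Weight each particle by the measurement likelihood: particles
    #    the new scan makes look unlikely get small weights.
    weights = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Re-sample: likely particles spawn copies, unlikely ones die out.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
cloud = [random.uniform(-10.0, 10.0) for _ in range(500)]  # diffuse prior
for z in [2.1, 1.8, 2.3]:                                  # looks near 2
    cloud = bootstrap_step(cloud, z)
estimate = sum(cloud) / len(cloud)   # cloud mean settles near the target
```

The appeal is that nothing above assumed linearity or Gaussianity; the price, of course, is the particle count.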

 

Measurement-origin uncertainty makes usual target tracking problems “nonlinear”, but with a special mixed integer/continuous structure. The “Theory of Multiple Target Tracking with a Review of Thirty Years of Multiple Target Tracking” (by Shozo Mori from Information Extraction and Transport, and Chee-Yee Chong from Booz-Allen & Hamilton) gives a historical perspective and timeline of the area, with an emphasis on tracking philosophies rather than algorithmic details. They see this as beginning in the mid-1970’s with Yaakov Bar-Shalom’s Probabilistic Data Association (PDA) idea and variants (based on re-Gaussianizing after each scan of data); through multi-hypothesis tracking (MHT) with significant contributions from Reid, Mori, and Blackman (here, at least notionally, all measurement-to-track associations are examined and perhaps 100 of the best measurement “explanations” kept and propagated); and continuing to the “assignment” algorithms of Deb, Pattipati & Bar-Shalom (SDA) and Poore & Rijavec (MFA) that examine these associations from an integer-programming perspective. Chong and Mori see the culmination of these ideas in the general tracking and data-fusion meta-theories of Mahler (random sets), Stone (UDF: unified data fusion) and Kastella (JMP: joint multi-target probabilities).
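To give a flavor of the PDA idea that begins this timeline, here is a toy scalar sketch of its weighting step: each validated measurement gets a probability of being target-originated, plus a “none of them” probability, and the update uses the probability-weighted innovation. The detection probability, clutter term, and all numbers here are illustrative simplifications of the full PDAF.

```python
import math

# Toy scalar sketch of the PDA association-probability computation.
# pd is the detection probability; the clutter term is a crude stand-in
# for the full PDAF's clutter-density model.

def pda_weights(z_pred, S, measurements, pd=0.9, clutter=0.1):
    # Gaussian likelihood of each validated measurement, given the
    # predicted measurement z_pred and innovation variance S.
    like = [pd * math.exp(-0.5 * (z - z_pred) ** 2 / S)
            / math.sqrt(2.0 * math.pi * S) for z in measurements]
    none = (1.0 - pd) * clutter      # weight that ALL measurements are false
    total = none + sum(like)
    beta0 = none / total             # probability no measurement is the target's
    betas = [l / total for l in like]
    return beta0, betas

beta0, betas = pda_weights(z_pred=0.0, S=1.0, measurements=[0.2, 3.0])
# The measurement near the prediction dominates; the distant one is
# mostly attributed to clutter. The state update then uses the
# probability-weighted ("combined") innovation:
combined_innovation = sum(b * (z - 0.0) for b, z in zip(betas, [0.2, 3.0]))
```

Re-Gaussianizing with this single combined innovation after each scan is what keeps PDA's complexity near that of a plain Kalman filter.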

 

The name Sam Blackman (from Hughes/Raytheon) is one of those most closely associated with MHT solutions to target tracking problems; and it is probably fair to say that MHT is the most widely practically-applied among the target-tracking philosophies. His “Use of Tracking Methods for Enumerating Migrating Salmon” (with R. Dempster, T. Mulligan and P. Withler) describes an offbeat use of target tracking, perhaps a little unexpected, but certainly a nice multi-scale application. Basically, the Canadian government wants to count fish; and automation is a good idea since there can be more than 3,000 fish in an hour. “A major problem with the measurement process is the occlusion of targets (fish) at high density and, similarly, the presence of unresolved targets.” You have to like this paper.

 

Krishna Pattipati (from UConn) presented a “Survey of Assignment Techniques for Multitarget Tracking”, co-authored by Thiagalingam Kirubarajan & Robert Popp. This compendium is nice, since the ideas are growing in importance, but are not very intuitive. The idea is that given two “lists”, of a single scan of measurements and of target-tracks, the best “assignment” is one that minimizes a (negative log-likelihood) cost. This is a polynomially-complex problem well studied in the optimization literature, for which the JVC and auction algorithms seem to be the friskiest competitors. For tracking we often need multi-scan or multi-sensor assignments; this is harder and, unfortunately, exponentially-complex. But there are techniques for it (SDA and MFA), and also for m-best assignment in which several of the most-promising trajectories are kept; take a look at this paper and its remarkable 113 references.

 

I have gone into some detail on the above 5 papers; but there were 18 more excellent presentations, and another 12 that were published but not presented. Among the former:

  • “An Overview of Multiple-Model Estimation Techniques”, presented by X. Rong Li from the University of New Orleans. The multiple-model philosophy for tracking target maneuvers is given, with particular attention to the evolution from fused-output, interacting, and varying-structure approaches. (Rong also has three papers in the proceedings, but presented only this one, which did not appear in print.)
  • “Recent Advances and Future Challenges in Target Tracking”, presented by W. Dale Blair from GTRI. Although Dale is also a significant research presence in the area, he attempts to represent the “consumer”. In this paper he describes the earlier “benchmark” series in which algorithms could compete on neutral terms, and often judged by means (such as radar energy or re-visit time) that tracking designers pay too little attention to. Here one gets some idea about what is really needed, and it isn’t necessarily a lower RMSE.
  • “Iterative Identification in Closed-Loop and Controller Redesign: A Suboptimal Dual Control Strategy?”, presented by Ioan Landau from the Laboratoire d'Automatique de Grenoble in France. In an earlier life Yaakov worked in control, and made some significant contributions to the “dual” idea, in which actuation not only controls but also probes the system to discover its structure. Professor Landau is a leading expert in the area: he puts Yaakov’s earlier work in perspective and offers some new results.
  • “Predicting a Hybrid System”, by John Boyd (Cubic Systems), David Sworder (UCSD) and Gary Hutchins (NPS). The genesis of the new Gaussian Wavelet Estimator is traced, and applied to an example ship missile defense problem.
  • “Sensor Scheduling in Kalman Filters: Varying Navaid Fixes for Trading-Off Submarine NAV Accuracy vs. ASW Exposure”, presented by Tom Kerr (TeK Associates). A submarine needs to check its INS self-location every so often, but doing so usually means revealing itself for a time; how can the effect of this best be mitigated?
  • “Multi-Target Moments and Their Application to Multi-Target Tracking”, by Ron Mahler from Lockheed. Mahler’s finite set statistics (FISST) appear to be a natural way to describe probabilities when the number of targets is unknown. Here he describes first- and second-order implementations.
  • “Track and Tracklet Fusion Filtering Using Data from Distributed Sensors”, presented by Oliver Drummond. Track fusion is complicated by the correlation caused by common process noise, and even further by target maneuver. A tracklet is a decorrelated track; this paper is a succinct history of the track/tracklet fusion idea, with emphasis on algorithms and comparison to measurement fusion.
  • “The Identification Kalman Filter: Towards A Unified Theory of Sensor Fusion”, by Robert Lobbia and Arijit Mahalanabis from Boeing. This is an interesting approach to unify ID and state estimation fusion ideas in a practical architecture.
  • “Group Tracking Utilizing Track and Identification Fusion”, presented by Eric Blasch from AFRL and co-authored by Tom Connare from the University of Dayton. When targets move as a formation, the fact that there is a common bulk motion on which smaller individual movements are superimposed is an opportunity. The IMM-JBPDAF is developed, with theory and real-data examples.
  • “Sensor Management for Tracking Interacting Targets”, presented by Lucy Pao from the University of Colorado at Boulder, and co-authored by Michael Kalandros from APL. Sensors give trackers their data; but trackers can also tell sensors where (and how) to point. This is a particularly clever and practical scheme based on the notion that targets’ covariance ellipsoids (i.e. uncertainties) ought to intersect as little as possible.
  • “Multi-Station Data Fusion for CDMA Wireless Networks”, presented by Zhi Tian from Michigan Technological University, and co-authored by K.C. Chang from George Mason University. A clever application of data fusion to communications: if wireless cell hand-offs are co-operative, then power can be saved by fused decisions.
  • “On Design and Performance of Metafusers”, by Nagi Rao from Oak Ridge National Laboratory. A metafuser is a classification approach in which decisions from classifiers are themselves fused, and Dr. Rao adapts Vapnik’s theory (VC dimension) to express the limitations of this architecture in terms of the original classifiers’ performances.
  • “Reasoning Frameworks for Fusion of Imaging and Non-imaging Sensor Information”, by Pierre Valin from Lockheed-Martin Canada. Valin provides a list of requirements for platforms to participate in target ID within a taxonomy tree. The emphasis is on high-level fusion using fuzzy logic, neural networks and Dempster-Shafer beliefs.
  • “Multi-target Tracking Approaches for Statistical Recognition of Partially Occluded Objects”, presented by David Castanon and co-authored by Zhengrong Ying, both from Boston University. The problem here is to start with a template for an object in an image (e.g. a hammer on a toolbench) and to find it, even though it might be rotated and behind something else. By posing it as a quadratic integer-programming problem the authors develop a nice solution with a target-tracking flavor.
  • “Effect of Model Uncertainty on Target Discrimination”, presented by Larry Stone, and co-authored by Thy Tran, from Metron. The authors discuss the use of target features (such as RCS) for target ID, and evaluate the effect of probability-model perturbations.
  • “Quickest Detection of Targets In Multiple-Resolution-Element Systems: Sequential Detection Versus Non-sequential Detection”, by Alexander Tartakovsky from USC. Simply constructing, much less evaluating the performance of, a sequential test for multiple hypotheses is hard. This definitive paper does both, and even treats the case that hypotheses have unknown parameters.
  • “Estimation of Pulse Radar Parameters”, presented by Claude Jauffret, and co-authored by Christophe de Luigi, both from the University of Toulon in France. The paper gives a thorough treatment of estimation of frequency and chirp parameters of an intercepted radar signal.
  • “Using Target Orientation to Enhance Air-to-Air Missile Target Tracking”, presented by Yaakov Oshman, and co-authored by David Arad, both from the Technion in Israel. Most air-to-air missiles use proportional navigation based on the line-of-sight rate. This remarkably clear paper shows that by augmenting this with an estimate of target orientation from an on-board imager, and feeding it to a game-theoretic course planner and a suitable tracker, hit-to-kill performance can be achieved in practice.
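Several of the papers above lean on the multiple-model idea for maneuvering targets, so a tiny sketch of its probability bookkeeping may help: a Markov chain over motion models, with each model's probability re-weighted by how well it explains the new scan. The full IMM also mixes the per-model state estimates, which is omitted here, and the transition matrix and likelihood values below are purely illustrative.

```python
# Toy sketch of multiple-model probability updating. Each motion model
# (e.g. "straight line" vs. "turn") carries a probability; a Markov
# transition matrix models switching, and measurement likelihoods
# re-weight the models each scan. Numbers are illustrative.

def update_model_probs(mu, trans, likelihoods):
    n = len(mu)
    # Predicted model probabilities under the Markov switching model.
    pred = [sum(trans[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # Re-weight by each model's measurement likelihood, then normalize.
    post = [p * l for p, l in zip(pred, likelihoods)]
    total = sum(post)
    return [p / total for p in post]

mu = [0.9, 0.1]                        # mostly "straight line" to start
trans = [[0.95, 0.05], [0.05, 0.95]]   # models switch rarely
# A maneuver begins: the turn model explains the data far better, and
# within a few scans its probability dominates.
for _ in range(3):
    mu = update_model_probs(mu, trans, likelihoods=[0.05, 0.6])
```

The point of the "don't tell us when" remark earlier is visible here: nothing announces the maneuver; the likelihoods simply vote the turn model into dominance.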

Trying to encapsulate all these great talks and papers in one or two sentences is hard work, and I hope that I have been fair; I recommend an email to Dr. Kirubarajan for a copy of the Proceedings, since I am fairly sure that it will end up being quoted considerably. You can judge my efforts.

 

It is a great credit to Yaakov that so many important people think so much of him, and feel that they owe him so much. The tribute is even greater from those whose idea the workshop was, and who did the heavy lifting to put it on. In particular, let’s note Thia Kirubarajan (a UConn alumnus, one of Yaakov’s 17 PhD students, and now with McMaster University in Canada) for the Proceedings and publication; Gary Hutchins from NPS who did all the “local” arrangements; Rabinder Madan from ONR for financial support; and both Dale Blair from GTRI and Jean Dezert from ONERA in France for their many extra contributions. The tribute is perhaps greatest from X. Rong Li from the University of New Orleans (and another of Yaakov’s PhD graduates from UConn): his was the idea, and his was the force that carried it through. (One more acknowledgment, to Phil West of GTRI, for the photographs accompanying this piece.)

   

The Banquet

 

 

Most of those at the workshop came to dinner, a nice and well-lubricated affair at the Naval Postgraduate School, and many thanks again to Gary Hutchins for this. After dinner a number of people wanted to share their reminiscences of Yaakov, and these included K.C. Chang, Rabinder Madan, Ioan Landau and Dale Blair. Fred Daum, in particular, showed us several practical radar installations, built by Raytheon, that actually use Yaakov’s algorithms, and an example is shown in Figure 5. According to Fred “Raytheon built 3 of these for the US Army, and we have a new contract to build 4 more. They use JPDA, which gives excellent performance. The US Army is extremely proud of the technical performance of this radar in the real world!”

 

 

 

Many of us are curious about Yaakov. So, at Jean Dezert’s suggestion, the after-dinner “roast” was scripted into the form of an interview. This interview was a lot of fun, and I recommend a look at http://www.inforfusion.org/Int-Ybs.htm, where it is reproduced in its entirety. Here are a few excerpts:

 

 

We learned that your first name has something to do with tracking. What is that exactly? Do you think that has anything to do with the fact that you are a pioneer and an unquestionable world leader in the tracking area?

Yaakov in modern Hebrew means “he shall track”. The original meaning comes from Jacob (the 3rd patriarch, son of Isaac) who was born “holding the heel” of his brother Esau. The etymological explanation is “following on the heels of…”, which became tracking. I also happen to believe in the causality between the given name of a person and this person’s profession.

 

Do you consider yourself as an ex-prodigy, as did Norbert Wiener?

I am a slow study -- by now I am probably at the level to be considered a child prodigy.

 

Before joining the University of Connecticut in 1976, you worked at Systems Control (SCI). Among the technical projects that you worked on there, are there any that you’d like to share, or that you are particularly proud of?

My best work in control was the “Dual Effect, Certainty Equivalence and Separation” paper, which drew a distinction between Certainty Equivalence and Separation in stochastic control and showed that, for a class of problems, Certainty Equivalence holds iff the control has no dual effect. Other than that, the PDAF (Probabilistic Data Association Filter): in addition to several fielded radar-tracking systems, it has found applications in image tracking as well as wireless communication.

 

How did you get into target tracking?

A colleague was trying to de-bias an EKF for reentry vehicle tracking and I noticed that the true initial range was 100 kft, the initial estimate was 80 kft and the initial variance given to the filter was 10^6 (that is, 20σ!). Changing the 10^6 to 10^8 immediately eliminated the bias.

 

When exactly did you decide to switch to academia, and why?

When my newly arrived boss asked me in 1975 to solve a problem I had already solved years before, unbeknownst to him, I just gave him the report I had written on it in 1971 and took it as a sign that it was time to leave for new pastures. In 1974 Dave Sworder told me that in 2 years I would be in academia – he had a perfect prediction algorithm.

 

You made some outstanding contributions in the stochastic control area, particularly on the dual effect and dual control. In fact, you were a leading expert in that area in the 1970s. What was the driving force behind your shift of research focus from that area to tracking?

Murray Wonham from the University of Toronto wrote a paper stating (approximately): “Stochastic control can only change the system performance from very bad to bad”. First I insisted on proving him wrong, but eventually I succumbed to the obvious. My work in control did not have even 1% of the usefulness of the work done later in tracking (to a large extent thanks to the numerous colleagues with whom I have worked and keep working).

 

You have made so many great contributions, which one do you think had the greatest impact? Which one are you most proud of?

The IMM. Which is really not mine – it was invented by Henk Blom.

 

If you could keep only one paper, the one you consider your major contribution, which paper would it be?

The Maximum Likelihood PDA and CRLB-in-clutter papers, because they are exact.

 

Have you instilled upon your students any bad habits?

1.      To have high standards in reviewing papers (which, as journal editors, they have applied to me…).

2.      To charge properly when they consult (some companies think this is a bad habit).

 

How many more years do you plan to teach?

Until I get tired or run out of good students, whichever comes first. I am not yet ready for maturity leave. I have not yet started to play golf.

 

We’re glad to hear that.

 

 

 

 

Figure 1: Sam Blackman, Oliver Drummond, Yaakov Bar-Shalom and Rabinder Madan.

 

Figure 2: A group photograph during a break from the presentations.

 

 

 

Figure 3: Fred Daum, X. Rong Li, Tom Kerr and Sanjeev Arulambalam.

 

 

Figure 4: A relaxed Yaakov, after dinner. The T-shirt he is wearing is a present from Jean Dezert, a former French post-doc who spent a productive year at UConn. It reads: “I am 18 years old, with 42 years of experience”.

 

 

 

Figure 5: A Raytheon THAAD radar, which uses Yaakov's JPDAF algorithm.

 

 


[1] In the past two years Yaakov has added another 12 articles to this.