Spatial Reasoning: From Sketch-to-Text Towards Text-to-Sketch

  

Dr. James M. Keller

 

Bio:
James M. Keller received his Ph.D. in Mathematics in 1978. He holds the University of Missouri Curators’ Professorship in the Electrical and Computer Engineering and Computer Science Departments on the Columbia campus, and he is also the R. L. Tatum Professor in the College of Engineering. His research interests center on computational intelligence: fuzzy set theory and fuzzy logic, neural networks, and evolutionary computation, with a focus on problems in computer vision, pattern recognition, and information fusion, including bioinformatics, spatial reasoning in robotics, geospatial intelligence, sensor and information analysis in technology for eldercare, and landmine detection. His industrial and government funding sources include the Electronics and Space Corporation, Union Electric, Geo-Centers, the National Science Foundation, the Administration on Aging, the National Institutes of Health, NASA/JSC, the Air Force Office of Scientific Research, the Army Research Office, the Office of Naval Research, the National Geospatial-Intelligence Agency, and the Army Night Vision and Electronic Sensors Directorate. Professor Keller has coauthored over 300 technical publications.
Jim is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), for which he has presented live and video tutorials on fuzzy logic in computer vision. He is a Fellow of the International Fuzzy Systems Association (IFSA), a national lecturer for the Association for Computing Machinery (ACM), an IEEE Computational Intelligence Society Distinguished Lecturer, and a past President of the North American Fuzzy Information Processing Society (NAFIPS). He received the 2007 Fuzzy Systems Pioneer Award from the IEEE Computational Intelligence Society. He completed a full six-year term as Editor-in-Chief of the IEEE Transactions on Fuzzy Systems, is an Associate Editor of the International Journal of Approximate Reasoning, and serves on the editorial boards of Pattern Analysis and Applications, Fuzzy Sets and Systems, the International Journal of Fuzzy Systems, and the Journal of Intelligent and Fuzzy Systems. He was Vice President for Publications of the IEEE Computational Intelligence Society from 2005 to 2008 and is currently an elected AdCom member. He was conference chair of the 1991 NAFIPS Workshop, program co-chair of the 1996 NAFIPS meeting, program co-chair of the 1997 IEEE International Conference on Neural Networks, and program chair of the 1998 IEEE International Conference on Fuzzy Systems. He was also general chair of the 2003 IEEE International Conference on Fuzzy Systems.

Abstract:
With the collaboration of several faculty colleagues and many students, I have been studying the creation and use of spatial relations in various sensor-related domains for many years. Scene description, the linguistic expression of spatial relationships between image objects, is a major goal of high-level computer vision. In a series of papers, we introduced histograms of forces as evidence for describing the relative position of objects in a digital image. These histograms form a parameterized family; examples include the histogram of constant forces (much like the earlier histogram of angles) and the histogram of gravitational forces, which emphasizes regions where the two objects are close to each other. Using the fuzzy directional membership information extracted from these histograms within fuzzy logic rule-based systems, we have produced high-level linguistic descriptions of natural scenes as viewed by an external observer. We have also exploited the theoretical properties of the histograms to match images that may show the same scene under different pose conditions; in fact, we can even recover estimates of the pose parameters. These linguistic descriptions have then been brought into an ego-centered viewpoint for application to robotics: producing linguistic scene descriptions from a mobile robot's standpoint, spatial language for human/robot communication and navigation, and understanding a sketched route map for communicating navigation routes to robots. This last activity is sketch-to-text. Under a newly awarded grant from the National Geospatial-Intelligence Agency, we are starting to tackle the inverse problem: given one or more text descriptions of a temporal and spatial event, construct a sketch of the event for subsequent reasoning. The sketch must be grounded in reality by matching it to a satellite image or a geospatial database.
This talk will survey the early applications and end with a demo highlighting our approach to the new problem.
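To give a rough feel for the force-histogram idea described in the abstract, the sketch below brute-forces an angular histogram over all pairs of pixels drawn from two objects. This is an illustrative approximation, not the authors' actual algorithm (which aggregates forces efficiently along parallel longitudinal sections of the objects); the function name `force_histogram`, the pixel-set representation, and the exponent parameter `r` are assumptions made here for illustration. Setting `r = 0` weights every pixel pair equally, corresponding to constant forces, while `r = 2` weights pairs by inverse squared distance, corresponding to gravitational forces that emphasize nearby regions.

```python
import math

def force_histogram(obj_a, obj_b, num_bins=36, r=0):
    """Brute-force angular histogram between two pixel sets (illustrative only).

    obj_a, obj_b: iterables of (x, y) coordinates for the two objects.
    r = 0 -> constant forces (every pixel pair counts equally),
    r = 2 -> gravitational forces (close pixel pairs dominate).
    Returns a list of num_bins weights over directions in [0, 2*pi).
    """
    hist = [0.0] * num_bins
    two_pi = 2.0 * math.pi
    for ax, ay in obj_a:
        for bx, by in obj_b:
            dx, dy = bx - ax, by - ay
            d = math.hypot(dx, dy)
            if d == 0.0:
                continue  # coincident pixels carry no direction
            theta = math.atan2(dy, dx) % two_pi  # direction from A-pixel to B-pixel
            bin_idx = int(theta / two_pi * num_bins) % num_bins
            hist[bin_idx] += 1.0 / (d ** r)  # r controls the force model
    return hist
```

A peak near direction 0 then supports a fuzzy membership for "B is to the right of A" (with the usual caveat that image coordinates flip the vertical axis); in the work surveyed here, such directional memberships feed the fuzzy rule-based systems that generate the linguistic descriptions.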