IWISC 2017

UT-DIISC 2nd International Workshop on Interactive and Spatial Computing

 

April 13 – 14, 2017

The University of Texas at Dallas

Richardson, Texas

 

The University of Texas at Dallas Institute for Interactive and Spatial Computing (UT-DIISC) is pleased to announce the 2nd International Workshop on Interactive and Spatial Computing (IWISC). IWISC 2017 will feature invited and research presentations in all areas related to interactive computing and spatial computing, including, but not limited to, the following topics:

  • Computational geometry
  • Computer graphics
  • High-dimensional data analysis
  • Human-computer interaction
  • Information visualization
  • Multimedia
  • Virtual reality

Submission Details

Each presentation should be classifiable as mainly covering one of the following three areas: research, applications, or systems.

1. Research presentations should describe results that contribute to advances in state-of-the-art software, hardware, algorithms, interaction, or human factors.

2. Application presentations should explain how the authors built upon existing ideas and applied them to solve an interesting problem in a novel way. Each paper should include a discussion of using interactive or spatial computing in the given application domain.

3. System presentations should indicate how the implementers integrated known techniques and technologies to produce an effective system, along with any lessons learned in the process.

 

Abstracts for presentations should be submitted in electronic form through the submission website: EasyChair

 


April 13th Thursday – DoubleTree by Hilton, Richardson @ US 75 and Campbell – Bluebonnet Room


Approximate Time    Event
5:00 pm             Student setup for posters and demos
6:00 pm             Posters and Demonstrations
7:00 pm             Dinner
7:30 pm             Invited Talk: "Identify Movement and Interaction Patterns from Large GPS Data of Individual Entities", by Prof. May Yuan, GIS/EPPS, UT Dallas

 

April 14th Friday – ECSS 2.415, UT Dallas

 

(Breakfast will be served)


Approximate Time       Event
8:30 am – 10:30 am     Invited Presentations I
10:30 am – 11:00 am    Break
11:00 am – 12:30 pm    Invited Presentations II
12:30 pm – 2:00 pm     Lunch, with presentations on research in the UTD CS Department
2:00 pm – 3:30 pm      Undergraduate research presentations
3:30 pm                Coffee break
4:00 pm                Presentation of prizes and closing remarks

List of Presenters

  1. David Liu, Samsung Research, Dallas
  2. Raul Enrique Sánchez-Yáñez, University of Guanajuato, Salamanca
  3. Prasad Calyam, University of Missouri-Columbia
  4. Eric Ragan, TAMU
  5. Uichung (Francis) Cho, Mountain View College, DCCCD
  6. Gaurav Pradhan, Mayo Clinic
  7. Andrea Fumagalli, EE UT Dallas

Talk Details

1. May Yuan, GIS/EPPS, UT Dallas

Identify Movement and Interaction Patterns from Large GPS Data of Individual Entities

May Yuan, GIS/EPPS at UT Dallas

Atsushi Nara, Geography at San Diego State University

The proliferation of location-aware devices enables the production of massive GPS trajectories for individuals. Many methods have been developed to analyze large amounts of trajectory data readily available from volunteered GPS data [1], cell phone data [2], Twitter data [3], public transit data [4], and taxi data [5]. Popular approaches mine these GPS trajectory data to uncover aggregate properties, such as activity zones, locations of visiting hot spots, trajectory clusters, and transportation choices. Few studies focus on the longitudinal movement patterns of an individual and how those movement patterns may suggest patterns of life and social interactions. An obvious reason for the lack of research on the subject, and rightfully so, is the concern for privacy. Anonymizing the data cannot fully protect privacy because one's identity can easily be revealed through GPS locations over time.

Through a research project funded by the National Institute of Justice and in cooperation with the Oklahoma Department of Corrections, we developed trajectory analysis methods to identify movement patterns of offenders enrolled in Oklahoma's location-based offender monitoring program. As an alternative to imprisonment, offenders wear GPS devices that record their locations every hour when at rest, every minute when in motion, or every 15 seconds when the device detects violation behaviors. Effective analytics methods to decipher movement patterns and identify movements that are out of the ordinary are critical to properly monitoring GPS offenders and assuring public safety.

Our data consist of 2,871 GPS offenders over 23 months (2/23/2009 to 1/12/2011). We developed methods to identify incongruent movements of individuals, spatially implied social interactions, and geo-social contextualization of their movements. We summarized one's relative locations across all trajectories over time to identify the phases of an individual's re-entry to society. We aggregated one's trajectories into a grey-scale image in which each pixel corresponds to the relative location on a daily trajectory. A brighter pixel indicates a greater departure from the location at that time on the previous day, and the image reveals the regularity and irregularity of movements for an offender throughout the monitoring period. Crime-scene correlation identifies offenders who were in the space-time proximity of reported crime incidents. Our method detects co-existence in a space-time cube (e.g., 20 m × 30 m × one hour), and we applied the method to crime-scene correlation. Furthermore, we combined co-existence and social graphs to project the potential social networks among offenders.
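As a rough illustration of two of these ideas, the sketch below (a minimal Python sketch under assumed data layouts, not the project's actual implementation) builds the grey-scale regularity image from resampled daily tracks and tests co-existence of two entities within a space-time cube:

```python
# A minimal sketch (not the authors' implementation) of (1) aggregating one
# offender's daily trajectories into a grey-scale image whose pixel brightness
# encodes displacement from the same time of day on the previous day, and
# (2) detecting co-existence of two entities inside a space-time cube.
# Array shapes and the cube size are illustrative assumptions.
import numpy as np

def regularity_image(daily_tracks):
    """daily_tracks: array of shape (n_days, n_samples, 2) holding (x, y)
    positions resampled to a fixed number of samples per day. Returns an
    (n_days - 1, n_samples) image; brighter pixels mark larger departures
    from the location at the same time on the previous day."""
    diffs = np.linalg.norm(daily_tracks[1:] - daily_tracks[:-1], axis=-1)
    return diffs / (diffs.max() or 1.0)  # normalize brightness to [0, 1]

def coexist(track_a, track_b, dx=20.0, dy=30.0, dt=3600.0):
    """Each track is an array of (t, x, y) rows. Two entities co-exist when
    some pair of fixes falls in one space-time cube, e.g. 20 m x 30 m x 1 h."""
    for ta, xa, ya in track_a:
        close = (np.abs(track_b[:, 0] - ta) <= dt) & \
                (np.abs(track_b[:, 1] - xa) <= dx) & \
                (np.abs(track_b[:, 2] - ya) <= dy)
        if close.any():
            return True
    return False
```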

At the workshop, we will present the methods with examples of findings from the GPS offender trajectory analysis project. Most GPS trajectory studies examine movement patterns at a 2D city scale. We are exploring ways to understand movements in 3D local spaces, such as a mall, neighborhood, or campus. We have been building a 3D GIS database of the UT Dallas campus, including buildings with detailed floor plans. We would like to understand how physical structures (indoor and outdoor) may influence human movements and sense of space. Such an understanding is useful for improving campus mobility and engagement. Collecting GPS trajectory data for this purpose will be challenging; we plan to seek volunteers and crowdsource trajectory data through a mobile app. Our presentation will end with examples of the 3D GIS campus and an invitation to collaborate on building a smart and connected campus.

Bio: May Yuan received all her degrees in Geography: a B.S. in 1987 from National Taiwan University, and an M.S. in 1992 and a Ph.D. in 1994 from the State University of New York at Buffalo. She is Ashbel Smith Professor of Geospatial Information Sciences in the School of Economic, Political, and Policy Sciences at the University of Texas at Dallas. Before she joined UT Dallas in August 2014, she was Brandt Professor and Edith Kinney Gaylord Presidential Professor and Director of the Center for Spatial Analysis at the University of Oklahoma (1994-2014). Her research interests center on space-time representation and analytics for understanding geographic dynamics. Over the years, she has been working to develop new approaches to representing geographic processes and events in GIS databases to support space-time query, analytics, and knowledge discovery. Her research has been supported by NSF, NASA, the Department of Defense, the Department of Homeland Security, the Department of Justice, the Department of Energy, the Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the United States Geological Survey, and Oklahoma state government agencies.

2. Prasad Calyam, University of Missouri-Columbia

Visual Cloud Computing and Networking: Foundations and Application Case Studies

In the event of natural or man-made disaster incidents, providing rapid situational awareness through video/image data collected at salient incident scenes is often critical to first responders. Scalable processing of media-rich visual data and the subsequent visualization with high user Quality of Experience (QoE) demand new cloud computing and thin-client desktop delivery approaches. In this talk, we describe the challenges of incident-supporting visual cloud computing and a solution approach for a regional-scale application involving tracking objects in aerial full-motion video and large-scale wide-area motion imagery. Our solution approach features algorithms for intelligent fog computing at the network edge coupled with offloading to a public cloud, utilizing software-defined networking (SDN). We will conclude with a discussion of our experimental results, collected from the GENI cloud testbed, that demonstrate how SDN for on-demand compute offload with congestion-avoiding traffic steering can enhance remote user QoE while reducing latency and congestion and increasing throughput in visual analytics.
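To make the edge-versus-cloud tradeoff concrete, here is a hypothetical sketch of the kind of offload decision such a system must make; the Job fields, capacities, and thresholds are invented for illustration and are not the speaker's actual policy:

```python
# A hypothetical offload-decision sketch: keep light visual-analytics jobs at
# the network edge (fog) and steer heavy ones to the public cloud when moving
# the data is cheaper than waiting for a loaded edge node. All numbers below
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    gflops: float    # estimated compute demand of the job
    frame_mb: float  # data that must be transferred if offloaded

def place(job, edge_util, link_mbps, edge_capacity_gflops=50.0):
    # Estimated completion time on the (possibly loaded) edge node.
    edge_time = job.gflops / (edge_capacity_gflops * max(1e-6, 1 - edge_util))
    # Estimated transfer time plus compute time on a much larger cloud pool.
    cloud_time = job.frame_mb * 8 / link_mbps + job.gflops / 500.0
    return "edge" if edge_time <= cloud_time else "cloud"

# A heavy job on a busy edge with a fast link gets steered to the cloud.
print(place(Job(gflops=5, frame_mb=40), edge_util=0.8, link_mbps=100))
```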

Bio: Prasad Calyam is an Assistant Professor in the Department of Computer Science at the University of Missouri-Columbia, and a Core Faculty member in the University of Missouri Informatics Institute (MUII). Before coming to the university in 2013, he was a Research Director at the Ohio Supercomputer Center/OARnet, The Ohio State University. He received his MS and PhD degrees from The Ohio State University in 2002 and 2007, respectively. His research and development interests include distributed and cloud computing, computer networking, networked multimedia applications, and cyber security. He has published over 70 papers in various conference and journal venues. His research sponsors include NSF, DOE, VMware, Cisco, Raytheon-BBN, Dell, Verizon, IBM, Huawei, the Coulter Foundation, Internet2, and others. His basic research and software on multi-domain network measurement and monitoring have been commercialized as 'Narada Metrics'. He is a Senior Member of IEEE.

3. Gaurav Pradhan, Mayo Clinic

Computing in Medicine: Applications for tracking and enhancing human performance

The development and application of computing tools for high-dimensional data analysis, information visualization, and pattern mining have remarkable significance in the field of medicine. In the Aerospace Medicine and Vestibular Research Laboratory (AMVRL) at Mayo Clinic Arizona, we are innovating in health care data mining and developing analytical models that address medical challenges. These models have facilitated the discovery of surrogate measurements of cognitive performance/impairment using noncontact eye-tracking, treatment/mitigation for simulator, motion, or "virtual reality" sickness, efficient diagnostic visualization tools, and effective neuro-cognitive tests. This computing-oriented health informatics approach has so far steered multiple applications in domains including aerospace medicine and defense, neuro-vestibular function, neurology, cardiopulmonary physiology, gastroenterology, pediatrics, and biomedical informatics.

Bio: Dr. Gaurav N. Pradhan is an Assistant Professor of Biomedical Informatics at the Mayo College of Medicine and a Senior Research Scientist at the Aerospace Medicine and Vestibular Research Laboratory (AMVRL) at Mayo Clinic in Arizona. He earned his PhD in Computer Science at the University of Texas at Dallas in 2008. His research interests focus on data mining, pattern recognition, knowledge discovery, machine learning, cluster analysis, and content-based similarity retrieval in the context of high-dimensional, multi-sensor, and multi-stream databases. He is specifically interested in developing real-time computational/mathematical models and simulations in medicine, conducting health care data analytics and mining, and creating data visualizations in biomedical informatics.

4. Francis U. Cho, Mountain View College, DCCCD

Application of Empirical Similarity Method for 3D Technology

The last decade has seen significant advances in 3D hardware, such as 3D printers, 3D display devices, and 3D vision cameras. Thanks to the dramatically reduced cost and improved performance of this hardware, diverse groups of people are experiencing and experimenting with it in many application areas, leading to unique commercial products and services. In this presentation, a novel empirical similarity method (ESM) with the potential to enhance and integrate 3D technologies is introduced. The ESM was first developed for functional testing with 3D-printed parts. In the ESM, geometric and non-geometric features are well cross-correlated, which was very limited with traditional correlation methods. Here, it is proposed to adopt the ESM for 3D vision applications, such as estimating 3D models from a set of 2D photos and 3D motion from video recordings. This research eventually aims to develop a deep-learning algorithm for effective 3D vision, which can become a solid foundation for novel 3D vision services and products.

Bio: Dr. Uichung "Francis" Cho is an engineering professor and division chair at Mountain View College. He has more than 20 years of research and field experience in robotics, design methodology, 3D printing technology, and virtual reality. He developed robotic automation systems for Samsung Electronics and Hyundai Motors at KIST (Korea Institute of Science and Technology).

5. Raul E. Sanchez-Yanez, University of Guanajuato

A framework for developing computer vision applications using rough-set-based rules

Alberto Lopez-Alanis, Juan L. Alonso Cuevas, Eduardo Perez-Careta and Raul E. Sanchez-Yanez

A general framework for the development of computer vision applications is introduced. At the core of the framework is a set of rules that model the complex relations between input data and output categories. Three stages are clearly identified in systems developed using the proposed scheme: (i) preprocessing of the image, to condition the input signal for better subsequent analysis; (ii) feature extraction, where the description of the image content takes place; and (iii) the construction of the model, using a rough-set-based methodology. It is important to highlight that models are obtained directly from numerical evidence; hence, a supervised learning-from-samples approach is followed. The way applications are developed is illustrated using two use cases: the first, a typical image categorization problem; the second, the automatic detection of visual saliency in images. Once the set of rules has been obtained, the performance of our models is evaluated both qualitatively and quantitatively.
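As a loose illustration of the three-stage structure (not the authors' rough-set machinery), the Python sketch below wires together preprocessing, feature extraction, and a rule table learned from labeled samples; genuine rough-set rule induction (reducts, lower/upper approximations) is replaced here, as a simplifying assumption, by a lookup from discretized feature vectors to majority labels:

```python
# A minimal sketch of the pipeline: (i) preprocessing, (ii) feature
# extraction, (iii) a rule-based model learned from labeled samples.
# The "rules" are a discretized-feature lookup table, which conveys only
# the flavor of a learned rule set, not the authors' rough-set method.
from collections import Counter, defaultdict
import numpy as np

def preprocess(img):
    return (img - img.mean()) / (img.std() + 1e-9)    # (i) condition signal

def features(img, bins=4):
    h, _ = np.histogram(img, bins=bins)                # (ii) crude descriptor
    return tuple(np.digitize(h / h.sum(), [0.25, 0.5, 0.75]))

def learn_rules(samples, labels):
    table = defaultdict(Counter)                       # (iii) rules from data
    for img, y in zip(samples, labels):
        table[features(preprocess(img))][y] += 1
    return {f: c.most_common(1)[0][0] for f, c in table.items()}

def classify(rules, img, default="unknown"):
    return rules.get(features(preprocess(img)), default)
```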

6. Eric Ragan, Texas A&M University

Automating the Capture and Visualization of Analytic Provenance

Visual analytics systems help analysts perform complex analysis tasks by mitigating fundamental limitations in human cognition. However, even with these tools, it can be difficult for analysts to remember and communicate how they arrived at a particular conclusion or what specific steps were taken during analysis. Visual analytics can also address this need by capturing and presenting the history and rationale followed during data analysis. This talk discusses recent research on techniques for summarizing themes of analysis processes based on the history of interactions with visual analysis software. I will discuss examples using intelligence analysis scenarios with textual data and cyber security analysis scenarios with network data. We are currently investigating the use of topic modeling to infer the history of data exploration from interaction logs collected during exploratory text-document analysis. The research demonstrates that interaction data can be used as the basis for automatic generation of provenance visualizations without requiring additional user annotation.
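The core idea can be sketched in a few lines: treat each analyst session's interaction log as a document and fit a topic model over the collection to recover analysis themes. The toy log strings and two-topic setup below are invented for illustration and are not the actual system's features:

```python
# A hedged sketch: topic modeling over interaction logs to summarize analysis
# themes. Each log string stands in for one session's sequence of actions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

session_logs = [  # invented example logs, one "document" per session
    "open doc_12 search wire_transfer highlight bank note suspicious_transfer",
    "search wire_transfer open doc_19 highlight account note money_flow",
    "filter port_22 select host_7 zoom timeline note brute_force",
    "select host_7 filter port_22 inspect flows note ssh_scanning",
]

vec = CountVectorizer()
X = vec.fit_transform(session_logs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each row of lda.components_ ranks log terms for one inferred analysis theme.
for k, topic in enumerate(lda.components_):
    top = [vec.get_feature_names_out()[i] for i in topic.argsort()[-4:]]
    print(f"theme {k}: {top}")
```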

Bio: Dr. Eric Ragan is an Assistant Professor in the Department of Visualization at Texas A&M University. His research interests include human-computer interaction, visual analytics, virtual reality, evaluation methodology, and training systems. He previously worked as a research scientist at Oak Ridge National Laboratory, where his research focused on the design and evaluation of visualizations for the analysis of streaming data. Dr. Ragan received his Ph.D. in computer science from Virginia Tech. Contact him at eragan@tamu.edu.

7. David Liu, Samsung Research

A Learning Based Deformable Template Matching Method for Automatic Rib Centerline Extraction and Labeling in CT Images

The automatic extraction and labeling of rib centerlines is a useful yet challenging task in many clinical applications. We demonstrate an approach integrating rib seed point detection and template matching to detect and identify each rib in chest CT scans. The bottom-up, learning-based detection exploits local image cues, while the top-down deformable template matching imposes global shape constraints. To adapt to the shape deformation of different rib cages while maintaining high computational efficiency, we employ a Markov Random Field (MRF) based articulated rigid transformation method followed by Active Contour Model (ACM) deformation. Compared with traditional methods in which each rib is individually detected, traced, and labeled, this approach is not only much more robust, thanks to prior shape constraints on the whole rib cage, but also removes tedious post-processing such as rib pairing and ordering, because each rib is automatically labeled during template matching.
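A toy 2D analogue of the two ingredients named in the abstract, bottom-up seed detection plus top-down rigid template fitting, might look as follows; this is an assumption-laden sketch, not the authors' MRF/ACM pipeline:

```python
# Toy 2D analogue: detect candidate seed points from local image cues, then
# fit a rigid template to the seeds so global shape constrains the noisy
# detections. The threshold and template are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def detect_seeds(img, thresh=0.8):
    ys, xs = np.where(img > thresh)              # bottom-up: bright pixels
    return np.stack([xs, ys], axis=1).astype(float)

def fit_template(template, seeds):
    """Find the rotation + translation of the template point set minimizing
    the summed distance from each template point to its nearest seed."""
    def cost(p):
        th, tx, ty = p
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        moved = template @ R.T + np.array([tx, ty])
        d = np.linalg.norm(moved[:, None, :] - seeds[None, :, :], axis=-1)
        return d.min(axis=1).sum()               # top-down shape constraint
    return minimize(cost, x0=[0.0, 0.0, 0.0], method="Nelder-Mead").x
```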

Bio: David Liu received his PhD from Carnegie Mellon University in 2008. Previously he worked at Siemens as Director of Engineering for Medical Imaging Technologies. He is currently with Samsung Research America, as Director of Mobile Computer Vision and Deep Learning.

 

For further information on UT-DIISC workshops and events, please register and subscribe to our mailing list below.

Subscribe

 

Questions or comments? Please contact us at diisc@lists.utdallas.edu.