Jeongkyu Lee
  

MIG@UB: Multimedia Information Group

People

  • Professor: Dr. Jeongkyu Lee (jelee AT bridgeport.edu)

  • Saroj Poudyal (spoudyal AT bridgeport.edu)

  • Munther Abualkibash (mabualki AT bridgeport.edu)

  • Padmini Ramalingam (pkuppusa AT bridgeport.edu)

Lab Location

  • Dana Hall Room# 234 at University of Bridgeport

News

  • Dr. Jeongkyu Lee presented his research work in the Colloquium series at the University of Bridgeport, September 20, 2007.

  • Invited Talk by Michael M. Begg at LaSalle Solutions on June 12, 2007 3:30 PM (Tech 116)

  • Pragya Rajauria, an MS student in MIG, won the best poster award (1st prize) at DB/IR Day Spring 2007 for presenting her research work, "Multimedia Ontology Creation for Wireless Capsule Endoscopy Videos."

  • Dr. Jeongkyu Lee has been invited to give a talk at Sogang University, Seoul, Korea. He will present his research on multimedia and databases on March 14, 2007.

  • Dr. Jeongkyu Lee and his colleague, Dr. JungHwan Oh of the University of North Texas, will organize the 1st International Workshop on Multimedia Data Mining and Management (MDMM’07) in conjunction with DEXA 2007. The workshop will be held in Regensburg, Germany, September 3-7, 2007.

  • Dr. Jeongkyu Lee presented his research work at the 8th IEEE International Symposium on Multimedia (December 11-13, 2006, San Diego). His research paper, “A Graph-based Approach for Modeling and Indexing Video Data,” was recently accepted by ISM 2006.

  • Dr. Jeongkyu Lee and his students, Subodh Shah and Pragya Rajauria, had a paper accepted by SPIE Electronic Imaging (Jan. 28 - Feb. 1, 2007, San Jose). Mr. Subodh Shah will present the paper, “A Model-based Conceptual Clustering of Moving Objects in Video.”

  • Dr. Jeongkyu Lee and his colleagues at the University of North Texas and the University of Texas Southwestern Medical Center recently received an acceptance notice from the ACM Symposium on Applied Computing 2007 for their paper on Wireless Capsule Endoscopy videos. He will present the work at the conference (March 11-15, 2007, Seoul, Korea).

Group Seminar at MIG lab

  • Apr. 2007: "A Model-based Conceptual Clustering of Moving Objects in Video Surveillance" presented by Subodh Shah.

  • Oct. 30, 2006: "Information Extraction from Multimedia Content using Ontology" presented by Pragya Rajauria

  • Oct. 16, 2006: "Wireless Capsule Endoscopy" presented by Subodh Shah

 

Research Interests

Multimedia Segmentation and Mining using Low-level Features. 

The first step of image and video processing is to segment the data into basic units, i.e., blocks or regions of an image, and shots or scenes of a video.  Features are then computed from each unit for further processing, such as data analysis and data mining.  Therefore, the focus of my early research was on the segmentation and analysis of the data.  As shown in Figure 1, this early work was based on low-level image features.  An inter-frame difference using background tracking is computed to handle various camera motions and to detect gradual changes.  Then, a new framework was proposed to segment key objects, instead of key frames, from the detected shots using color quantization and background adjustment [1,2,3].
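The shot-segmentation step can be illustrated with a minimal sketch. The actual method computes an inter-frame difference with background tracking to cope with camera motion and gradual transitions; the toy detector below (function names and threshold are hypothetical, not from [1,2,3]) uses only a plain pixel difference, which is enough to catch a hard cut:

```python
import numpy as np

def frame_difference(prev, curr):
    """Mean absolute pixel difference between two grayscale frames."""
    return float(np.mean(np.abs(curr.astype(int) - prev.astype(int))))

def detect_shots(frames, threshold=30.0):
    """Return the frame indices where a shot boundary (hard cut) occurs."""
    return [i for i in range(1, len(frames))
            if frame_difference(frames[i - 1], frames[i]) > threshold]

# Synthetic clip: five dark frames followed by five bright frames -> one cut.
dark = [np.full((48, 64), 20, dtype=np.uint8)] * 5
bright = [np.full((48, 64), 200, dtype=np.uint8)] * 5
print(detect_shots(dark + bright))  # [5]
```

A real implementation must also handle fades and dissolves, which is exactly what the background-tracking difference above is designed for.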

Another direction of my early research was video data mining.  Data mining is a powerful technique for discovering previously unknown correlations and patterns in large video databases [4].  The outputs of video data mining are patterns of moving objects and detected events.  In order to detect video events, we proposed a video data mining framework [5].  In the framework, the accumulated motions, i.e., the number of changed pixels, are represented as a two-dimensional matrix.  Using this motion matrix, two motion features are extracted from each video segment: the amount and the location of the motion.  Video mining is then performed by multi-level hierarchical clustering, based on k-means, of the two motion features.  A degree of abnormality (Y), based on cluster closeness, was proposed to decide whether a segment contains a normal or an abnormal event [6].
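As a rough sketch of that pipeline (variable names and thresholds are hypothetical, not taken from [5,6]): accumulate changed pixels into a motion matrix, extract the amount and a location proxy per segment, and cluster the segments with k-means:

```python
import numpy as np

def motion_features(frames, thresh=25):
    """Accumulate changed pixels into a 2-D motion matrix, then extract two
    features for the segment: total amount of motion and its centroid row."""
    motion = np.zeros(frames[0].shape, dtype=float)
    for prev, curr in zip(frames, frames[1:]):
        motion += np.abs(curr.astype(int) - prev.astype(int)) > thresh
    ys, _ = np.nonzero(motion)
    return float(motion.sum()), float(ys.mean()) if len(ys) else 0.0

def kmeans(points, k=2, iters=20):
    """Tiny k-means with deterministic first-k initialization."""
    centers = points[:k].copy()
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers, axis=2),
                           axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# One synthetic segment: a 4x4 block changes between two 32x32 frames.
a = np.zeros((32, 32), dtype=np.uint8)
b = a.copy()
b[4:8, 4:8] = 255
print(motion_features([a, b]))  # (16.0, 5.5)

# Cluster four segments' (amount, location) features into two groups.
feats = np.array([[16.0, 5.5], [300.0, 20.0], [15.0, 5.0], [310.0, 19.0]])
labels, centers = kmeans(feats)
print(labels.tolist())  # [0, 1, 0, 1]
```

A degree of abnormality along the lines of Y could then be scored as a segment's distance to its nearest cluster center: segments far from every cluster are candidates for abnormal events.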

 

[1]   JungHwan Oh, JeongKyu Lee and Sae Hwang. An Efficient Method for Detection of Key Objects in Video shots with Camera Motions. Colombian Journal of Computation. Vol.4, No.1. pp.35-56. Sep. 2003.

[2]   JungHwan Oh, JeongKyu Lee and Eswar Vemuri. Efficient Technique for Segmentation of Key Object(s) from Video Shots. In Proc. of ITCC 2003. pp. 384-388, April 28-30, 2003, Las Vegas, NV.

[3]   JungHwan Oh and JeongKyu Lee. Accelerating Shot Boundary Detection with Non-Sequential Video Parsing. To appear in Machine Vision and Applications Journal (accepted).

[4] JungHwan Oh, JeongKyu Lee and Sae Hwang. Video Data Mining: Current Status and Challenges. Encyclopedia of Data Warehousing and Mining. Idea Group Inc. and IRM Press. 2005.

[5]   JungHwan Oh, JeongKyu Lee, Sanjaykumar Kote and Babitha Bandi. Multimedia Data Mining Framework for Raw Video Sequences. Mining Multimedia and Complex Data, Lecture Notes in Artificial Intelligence, Volume 2797 published by Springer Verlag, pp.18-35, 2003.

[6]   JungHwan Oh, JeongKyu Lee and Sanjaykumar Kote. Real Time Video Data Mining for Surveillance Video Streams. In Proc. of PAKDD-03. pp. 222-233. April 30 - May 2, 2003. Seoul, Korea.

Graph-based Approach of Modeling and Indexing Video

Early video database systems segment video into shots and extract key frames from each shot to represent it.  Such systems have been criticized for conveying little semantics and for ignoring the temporal characteristics of video.  Current approaches employ only low-level image features to model and index video data, which may place semantically unrelated data close together merely because they are similar in terms of their low-level features.  Furthermore, such systems cannot be interpreted in terms of high-level human perception.  In order to address these issues, I propose a novel graph-based data structure, called Spatio-Temporal Region Graph (STRG), which represents the spatio-temporal features of, and relationships among, the objects extracted from video sequences [7,8]. A Region Adjacency Graph (RAG) is generated from each frame, and an STRG is constructed from the RAGs. The STRG is decomposed into its subgraphs, called Object Graphs (OGs) and Background Graphs (BGs), in which redundant BGs are eliminated to reduce index size and search time. Then, the OGs are clustered using the Expectation Maximization (EM) algorithm for more accurate indexing. To cluster OGs, I propose the Extended Graph Edit Distance (EGED) to measure the distance between two OGs. The EGED is defined first in a non-metric space for the clustering of OGs, and is then extended to a metric space to compute the key values for indexing. Based on the clusters of OGs and the EGED, I propose a new indexing method, STRG-Index, which provides faster and more accurate indexing since it uses a tree structure and data clustering.
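A minimal illustration of the RAG representation and an edit-distance-style comparison between two frames' graphs follows. This is a crude stand-in, not the actual EGED of [7,8], whose cost model and matching are far more elaborate; all region names and costs below are hypothetical:

```python
def rag(nodes, edges):
    """A frame's Region Adjacency Graph: nodes map region -> attribute
    (e.g., dominant color); edges connect spatially adjacent regions."""
    return {"nodes": dict(nodes), "edges": {frozenset(e) for e in edges}}

def edit_distance(g1, g2, cost_sub=0.5):
    """Count node and edge insertions/deletions plus attribute
    substitutions -- a simplified graph edit distance."""
    n1, n2 = set(g1["nodes"]), set(g2["nodes"])
    dist = len(n1 ^ n2)                         # node insert/delete
    dist += len(g1["edges"] ^ g2["edges"])      # edge insert/delete
    dist += cost_sub * sum(1 for v in n1 & n2   # attribute substitution
                           if g1["nodes"][v] != g2["nodes"][v])
    return dist

g1 = rag({"sky": "blue", "car": "red"}, [("sky", "car")])
g2 = rag({"sky": "blue", "car": "red", "tree": "green"},
         [("sky", "car"), ("sky", "tree")])
print(edit_distance(g1, g2))  # 2.0 (one node and one edge inserted)
```

Chaining such per-frame graphs over time, with temporal edges linking the same region across frames, gives the flavor of the STRG construction.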

The proposed STRG model has also been applied to other video processing areas, i.e., video segmentation and summarization [9,10].  Video segmentation using graph matching outperforms existing techniques because the STRG considers not only the low-level features of the data, but also the spatial and temporal relationships among the data [9].  For video summarization, a Graph Similarity Measure (GSM) is proposed to compute correlations among the segmented shots; a video can then be summarized at various lengths and levels using the GSM and generated scenarios [9].

[7]   JeongKyu Lee, JungHwan Oh, and Sae Hwang. STRG-Index: Spatio-Temporal Region Graph Indexing for Large Video Databases. In Proc. of 2005 ACM SIGMOD Intl. Conf. on Management of Data. pp. 718 - 729. June 14 - 16, 2005. Baltimore Maryland.

[8]   JeongKyu Lee and JungHwan Oh, A Graph-based Approach for Modeling and Indexing Videos, Submitted to ACM Transactions on Multimedia Computing, Communications, and Applications.

[9]   JeongKyu Lee, JungHwan Oh, and Sae Hwang. Scenario based Dynamic Video Abstractions using Graph Matching. In Proc. of the 13th ACM Multimedia’05. pp. 810 - 819. Singapore. Nov. 6-12, 2005.

[10] JungHwan Oh, Quan Wen, JeongKyu Lee and Sae Hwang. Video Abstraction. Video Data Management and Information Retrieval. (A book edited by Sagarmay Deb). Idea Group Inc. and IRM Press. 2004. pp. 321-346.

Automatic Ontology Generation using Conceptual Clustering

What existing multimedia databases miss is the concept of the data, which can bridge the gap between low-level features and high-level human understanding.  For example, the color red is represented by the RGB values (255, 0, 0); however, someone without prior knowledge of the RGB color domain cannot recognize these values as red. Current approaches use manually annotated text to describe the low-level features, but such manual annotation is highly subjective and time-consuming.  In this research, I employ an ontology to manage the concepts of multimedia data and to support high-level user requests, such as concept queries.  In order to generate the ontology automatically, I propose a model-based conceptual clustering (MCC) based on formal concept analysis [11,13].  The proposed MCC consists of three steps: model formation, model-based concept analysis, and concept graph generation.  We then construct an ontology for the multimedia data from the concept graph by mapping its nodes and edges into concepts and relations, respectively. In addition, the generated ontology is used for concept queries that answer high-level user requests [12]. The model-based conceptual clustering and automatic ontology generation techniques can be easily applied to other spatial and temporal data, e.g., moving objects in video [11], hurricane track data [12,13], and medical video [14,15].
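Formal concept analysis, the foundation of the proposed MCC, can be sketched on a toy binary context (the object and attribute names below are hypothetical): each formal concept is a pair of an object set (extent) and the maximal attribute set they all share (intent), and the concepts form the lattice from which the concept graph is built:

```python
from itertools import combinations

def formal_concepts(context):
    """Enumerate the formal concepts (extent, intent) of a binary
    object-attribute context by closing every object subset."""
    objects = sorted(context)
    all_attrs = set().union(*context.values())
    found = set()
    for r in range(len(objects) + 1):
        for subset in combinations(objects, r):
            intent = set(all_attrs)
            for o in subset:                  # attributes common to the subset
                intent &= context[o]
            extent = frozenset(o for o in objects
                               if intent <= context[o])
            found.add((extent, frozenset(intent)))
    return found

# Toy context: video objects described by detected attributes.
context = {
    "obj1": {"moving", "red"},
    "obj2": {"moving", "blue"},
    "obj3": {"static", "red"},
}
print(len(formal_concepts(context)))  # 7 concepts, incl. top and bottom
```

Mapping the nodes of the resulting concept lattice to ontology concepts and its order relation to ontology relations corresponds to the concept-graph-generation step described above. (Practical FCA tools use the faster next-closure algorithm rather than this exhaustive enumeration.)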

[11] JeongKyu Lee, JungHwan Oh, and Sae Hwang. Clustering of Video Objects by Graph Matching. In Proc. of IEEE Int’l Conference on Multimedia and Expo (ICME05). pp. 394-397. July, 2005. Amsterdam, Netherlands.

[12] JeongKyu Lee, and JungHwan Oh, A Model-based Conceptual Clustering of Spatio-Temporal Data, Submitted to SIGKDD 2006.

[13] JeongKyu Lee, JungHwan Oh, and Sae Hwang, A Model-based Conceptual Clustering of Spatio-Temporal Data using Formal Concept Analysis, Submitted to the Journal of Intelligent Information Systems.

[14] Sae Hwang, JungHwan Oh, JeongKyu Lee, Wallapak Tavanapong, Johnny Wong, Yu Cao, Danyu Liu, and Piet C. de Groen. Automatic Measurement of Quality Metrics for Colonoscopy Videos. In Proc. of the 13th ACM Multimedia’05. pp. 912 - 921. Singapore. Nov. 6-12, 2005.

[15] Y.-H. An, S. Hwang, J. Oh, W. Tavanapong, P. C. de Groen, J. Wong, and JeongKyu Lee. Informative Frame Filtering in Endoscopy Videos. In Proc. of SPIE Medical Imaging. pp. 291-302. San Diego, CA, USA, 2005.


Last updated: 9/24/2007