Takayoshi SHOUDAI Tetsuhiro MIYAHARA Tomoyuki UCHIDA Satoshi MATSUMOTO Yusuke SUZUKI
A term is a connected acyclic graph (unrooted unordered tree) pattern with structured variables, each of which is an ordered list of one or more distinct vertices. A variable of a term has a variable label and can be replaced with an arbitrary tree by hyperedge replacement according to the variable label. The dimension of a term is the maximum number of vertices in its variables. A term is said to be linear if each variable label in it occurs exactly once. Let T be a tree and t a linear term. In this paper, we study the graph pattern matching problem (GPMP) for T and t, which decides whether or not T is obtained from t by replacing the variables in t with some trees. First, we show that GPMP for T and t is NP-complete if the dimension of t is greater than or equal to 4. Next, we give a polynomial time algorithm for solving GPMP for a tree of bounded degree and a linear term of bounded dimension. Finally, we show that GPMP for a tree of arbitrary degree and a linear term of dimension 2 is solvable in polynomial time.
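As a concrete illustration of the replacement operation described above, the following minimal sketch (Python; function and variable names are hypothetical, and edge and variable labels are omitted) shows how a dimension-2 variable attached to two ports of a term can be replaced by a tree whose designated ports are identified with those vertices.

```python
def replace_variable(term_edges, var_ports, repl_edges, repl_ports):
    """Hyperedge replacement, simplified: the variable's ports (vertices of the
    term) are identified with the replacement tree's ports; every other vertex
    of the replacement tree is kept disjoint from the term."""
    port_map = dict(zip(repl_ports, var_ports))

    def rename(v):
        # Ports map onto the variable's vertices; the rest get fresh names.
        return port_map.get(v, ("new", v))

    return set(term_edges) | {(rename(u), rename(v)) for u, v in repl_edges}


# Term with vertices a, b, c, one edge {a, b}, and a dimension-2 variable on (b, c).
term_edges = {("a", "b")}
var_ports = ("b", "c")
# Replacement tree p - r - q, whose ports (p, q) are identified with (b, c).
repl_edges = {("p", "r"), ("r", "q")}
repl_ports = ("p", "q")
print(replace_variable(term_edges, var_ports, repl_edges, repl_ports))
# {('a', 'b'), ('b', ('new', 'r')), (('new', 'r'), 'c')}
```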
ThienLuan HO Seung-Rohk OH HyunJin KIM
A parallel Aho-Corasick (AC) approach, named PAC-k, is proposed for string matching in deep packet inspection (DPI). The proposed approach adopts graphics processing units (GPUs) to perform the string matching in parallel for high throughput. In parallel string matching, the boundary detection problem arises when a pattern is matched across chunks. The PAC-k approach solves the boundary detection problem because the number of characters scanned by a thread can reach the longest pattern length. An input string is divided into multiple sub-chunks of k characters. By adopting a new starting position in each sub-chunk for the failure transition, the required number of threads is reduced by a factor of k, and the overhead of terminating and reassigning threads is decreased accordingly. To avoid unnecessary overlapped scanning by multiple threads, a checking procedure is proposed that decides whether a new starting position lies in the sub-chunk. In experiments with target patterns from Snort and realistic input strings from DEFCON, throughput is greatly improved compared with that of previous AC-based string matching approaches.
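A minimal sequential sketch of the sub-chunk idea follows (Python; function names, the pattern set, and the value of k are illustrative, and no GPU code is shown): each simulated "thread" starts from the automaton root at the beginning of its k-character sub-chunk, scans at most k plus the longest pattern length minus one characters, and reports only matches that begin inside its own sub-chunk, which handles the boundary detection problem without duplicate reports.

```python
from collections import deque


def build_ac(patterns):
    """Build a standard Aho-Corasick automaton: goto table, failure links, outputs."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for c in p:
            if c not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][c] = len(goto) - 1
            s = goto[s][c]
        out[s].add(p)
    queue = deque(goto[0].values())          # depth-1 states fail to the root
    while queue:
        r = queue.popleft()
        for c, s in goto[r].items():
            queue.append(s)
            f = fail[r]
            while f and c not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(c, 0)
            out[s] |= out[fail[s]]
    return goto, fail, out


def scan_subchunk(text, start, k, maxlen, goto, fail, out):
    """Emulate one PAC-k-style thread: scan from the start of its sub-chunk,
    run at most k + maxlen - 1 characters, and keep only matches that begin
    inside [start, start + k)."""
    s, found = 0, []
    for i in range(start, min(len(text), start + k + maxlen - 1)):
        c = text[i]
        while s and c not in goto[s]:
            s = fail[s]
        s = goto[s].get(c, 0)
        for p in out[s]:
            begin = i - len(p) + 1
            if start <= begin < start + k:
                found.append((begin, p))
    return found


patterns = ["he", "she", "his", "hers"]
text = "ahishershe"
goto, fail, out = build_ac(patterns)
maxlen = max(len(p) for p in patterns)
k = 4
matches = [m for start in range(0, len(text), k)
           for m in scan_subchunk(text, start, k, maxlen, goto, fail, out)]
print(sorted(matches))   # [(1, 'his'), (3, 'she'), (4, 'he'), (4, 'hers'), (7, 'she'), (8, 'he')]
```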
Yusuke SUZUKI Takayoshi SHOUDAI Tomoyuki UCHIDA Tetsuhiro MIYAHARA
A term tree pattern is a rooted ordered tree pattern which consists of ordered tree structures with edge labels and structured variables with labels. All variables with the same label in a term tree pattern can be simultaneously replaced with ordered trees isomorphic to the same rooted ordered tree. Thus, a term tree pattern is suitable for representing structural features common to tree structured data such as XML documents on the web, the secondary structures of RNA in biology, and parse trees describing grammatical structures of natural languages. Let $ott$ be the set of all term tree patterns, in which two or more variables may have the same label. Let $lott$ be the set of all term tree patterns t such that all variables in t have distinct labels. We remark that $lott \subsetneq ott$ holds. In this paper, we consider the problem, called Matching problem for term tree patterns, of deciding whether or not a given rooted ordered tree T is obtained from a given term tree pattern t by replacing the variables in t with rooted ordered trees. We show that Matching problem for term tree patterns in $ott$ is NP-complete, by giving a reduction from the string pattern matching problem, which is NP-complete. Next, by giving operations on intervals, where an interval is the set of all integers between two given integers representing vertex identifiers, we propose an efficient algorithm for solving Matching problem for term tree patterns in $lott$. We then show that, given an ordered tree having N vertices and a term tree pattern $t \in lott$ having n vertices, the proposed matching algorithm correctly solves this problem in O(nN) time.
Manabu INUMA Akira OTSUKA Hideki IMAI
The security of biometric authentication systems against impersonation attacks is usually evaluated by the false accept rate, FAR. The false accept rate FAR is a metric for the zero-effort impersonation attack, in which the attacker attempts to impersonate a user by presenting his own biometric sample to the system. However, when the attacker has some information about the algorithms in the biometric authentication system, he might be able to find a “strange” sample (called a wolf) which shows high similarity to many templates, and attempt to impersonate a user by presenting the wolf. Une, Otsuka, and Imai [22],[23] formulated such a stronger impersonation attack (called the wolf attack), defined a new security metric (called the wolf attack probability, WAP), and showed that WAP is much higher than FAR in a fingerprint-minutiae matching algorithm proposed by Ratha et al. [19] and in a finger-vein-pattern matching algorithm proposed by Miura et al. [15]. Previously, we constructed secure matching algorithms based on a feature-dependent threshold approach [8] and showed that if the score distribution is perfectly estimated for each input feature, the proposed algorithms can lower WAP to a value almost as small as FAR. In this paper, in addition to reintroducing the results of our previous work [8], we show that the proposed matching algorithm can keep the false reject rate (FRR) low enough without degrading security, provided that the score distribution is normal for each feature.
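The feature-dependent threshold idea can be sketched as follows (Python; the score function, background sample set, and target rate are hypothetical placeholders, and this is not the algorithm of [8]): the acceptance threshold is chosen per input from that input's estimated impostor score distribution, so an input that scores high against everything receives a correspondingly high threshold.

```python
import numpy as np


def feature_dependent_threshold(score, x, background, target_far=1e-3):
    """Pick a threshold for this particular input x so that its empirical
    false-accept rate against a background (non-mate) sample set is at most
    target_far.  Wolf-like inputs that score high against everything are
    thereby forced over a higher bar."""
    impostor_scores = np.array([score(x, b) for b in background])
    return np.quantile(impostor_scores, 1.0 - target_far)


def verify(score, x, template, background, target_far=1e-3):
    thr = feature_dependent_threshold(score, x, background, target_far)
    return score(x, template) >= thr


# Toy usage: features are vectors, similarity is negative Euclidean distance.
rng = np.random.default_rng(0)
background = rng.normal(size=(1000, 8))
score = lambda a, b: -np.linalg.norm(a - b)
template = rng.normal(size=8)
probe = template + 0.1 * rng.normal(size=8)
print(verify(score, probe, template, background))   # True for a genuine probe
```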
Lei WANG Jun WANG Satoshi GOTO Takeshi IKENAGA
With the ubiquitous use of the Internet and wireless networks, H.264 video communication has become more and more common. However, owing to its highly efficient predictive coding and variable length entropy coding, H.264 is more sensitive to transmission errors. Current error concealment (EC) schemes, which utilize spatial and temporal correlations to conceal the corrupted region, produce unsatisfactory boundary artifacts. In this paper, we first propose a variable block size error concealment (VBSEC) scheme inspired by variable block size motion estimation (VBSME) in H.264. This scheme provides four EC modes and four sub-block partitions. The whole corrupted macroblock (MB) is divided adaptively into sub-blocks of variable size according to the actual motion, and a more precise motion vector (MV) is predicted for each sub-block. An MV refinement (MVR) scheme is then proposed to refine the MV of each heterogeneous sub-block by adaptively applying the three-step search (TSS) algorithm. Both VBSEC and MVR are based on our directional spatio-temporal boundary matching algorithm (DSTBMA). By utilizing these schemes, we can reconstruct the corrupted MB in the inter frame more accurately. The experimental results show that our proposed scheme obtains better objective and subjective EC quality than the boundary matching algorithm (BMA) adopted in the JM11.0 reference software, the spatio-temporal boundary matching algorithm (STBMA), and other comparable EC methods.
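For reference, the classical boundary matching idea that such schemes build on can be sketched as below (Python/NumPy; the frame layout, block size, interior-block assumption, and candidate MV set are illustrative, not the authors' DSTBMA): a candidate motion vector is scored by how well the outer rows and columns of the candidate block agree with the correctly received pixels surrounding the lost macroblock, and the best-scoring candidate is used for concealment.

```python
import numpy as np


def boundary_match_cost(cur, ref, x, y, mv, bs=16):
    """Sum of absolute differences between the one-pixel boundary around the
    lost block in the current frame and the edge rows/columns of the candidate
    block fetched from the reference frame at mv = (dx, dy).  Assumes the lost
    block is not on the frame border."""
    dx, dy = mv
    cand = ref[y + dy:y + dy + bs, x + dx:x + dx + bs].astype(int)
    cost = 0
    cost += np.abs(cur[y - 1, x:x + bs].astype(int) - cand[0, :]).sum()    # top
    cost += np.abs(cur[y + bs, x:x + bs].astype(int) - cand[-1, :]).sum()  # bottom
    cost += np.abs(cur[y:y + bs, x - 1].astype(int) - cand[:, 0]).sum()    # left
    cost += np.abs(cur[y:y + bs, x + bs].astype(int) - cand[:, -1]).sum()  # right
    return cost


def conceal(cur, ref, x, y, candidate_mvs, bs=16):
    """Replace the lost block with the reference block whose boundary fits best."""
    dx, dy = min(candidate_mvs,
                 key=lambda mv: boundary_match_cost(cur, ref, x, y, mv, bs))
    cur[y:y + bs, x:x + bs] = ref[y + dy:y + dy + bs, x + dx:x + dx + bs]
```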
Masashi UNE Akira OTSUKA Hideki IMAI
This paper proposes the wolf attack probability (WAP) as a new measure for evaluating the security of biometric authentication systems. The wolf attack is an attempt to impersonate a victim by feeding "wolves" into the system to be attacked, where a "wolf" is an input value which can be falsely accepted as a match with multiple templates. WAP is defined as the maximum success probability of the wolf attack with one wolf sample. In this paper, we give a rigorous definition of this new security measure, which estimates the strength of an individual biometric authentication system against impersonation attacks. We show that, when reevaluated with WAP, a typical fingerprint algorithm turns out to be much weaker than theoretically estimated by Ratha et al. Moreover, we apply the wolf attack to a finger-vein-pattern based algorithm. Surprisingly, we show that there exists an extremely strong wolf which falsely matches all templates for any threshold value.
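Operationally, the definition can be read as in the following sketch (Python; an empirical estimate over a finite candidate set and a hypothetical match_score function, not the formal probability defined in the paper): for each candidate input, count the fraction of enrolled templates it is falsely accepted against, and take the maximum over candidates. The contrast with FAR is that FAR averages over attacker samples, whereas WAP keeps only the worst case, which is why it can be dramatically larger.

```python
def estimate_wap(match_score, candidates, templates, threshold):
    """Empirical wolf-attack-probability estimate: the acceptance rate of the
    single best candidate ("wolf") over all enrolled templates."""
    return max(
        sum(match_score(w, t) >= threshold for t in templates) / len(templates)
        for w in candidates
    )
```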
Donghyung KIM Jongho KIM Jechang JEONG
The H.264 standard allows each macroblock to have up to sixteen motion vectors, four reference frames, and a macroblock mode. Exploiting this feature, we present an efficient temporal error concealment algorithm for H.264-coded video. The proposed method shows good performance compared with conventional approaches.
Golam SORWAR Manzur MURSHED Laurence DOOLEY
Though block-based motion estimation techniques are primarily designed for video coding applications, they are increasingly being used in other video analysis applications due to their simplicity and ease of implementation. The major drawback of these techniques is that noise, in the form of false motion vectors, cannot be avoided when capturing block motion vectors. Similar noise may also be introduced when global motion compensation is applied to obtain true object motion from video sequences in which both camera and object motion are present. This paper presents a new technique for capturing a large number of true object motion vectors by eliminating false motion vectors, for use in object-motion-based video indexing and retrieval applications. Experimental results show that the proposed technique significantly increases the percentage of retained true object motion vectors while eliminating all false motion vectors for a variety of standard and non-standard video sequences.
Dong-Noh KIM Ki-Hong KIM Tae-Yeon JUNG Duk-Gyoo KIM
Recent sight systems require high-performance stabilization for longer observation ranges and higher kill probability. To this end, it is necessary to compensate for rotational disturbances that are not stabilized by the conventional two-axis stabilization system. This paper proposes a simple rotational motion estimation method for the stabilization of a sight system.
In this paper, the false-peaks problem of conventional correlation-based video tracking is investigated using a simple mathematical analysis. To reduce the false detection problem, a selective-attention correlation measure is proposed. The problem with conventional correlation measures is that all pixels in the reference block image are treated equally in the computation of the correlation measure, regardless of whether they belong to the target or the background. Therefore, the more background pixels the reference block image includes, the higher the probability of false peaks, owing to correlation between the background pixels of the reference block and those of the input search image. The proposed selective-attention correlation measure treats target and background pixels differently in the matching process, which conforms to the selective-attention property of the human visual system. Various computer simulations validate these analyses and confirm that the proposed selective-attention measure considerably reduces the probability of false peaks.
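One way to realize the selective-attention idea (Python/NumPy; a weighted normalized correlation with a hypothetical binary target mask, not necessarily the exact measure of the paper): pixels labeled as background in the reference block are given zero weight, so they cannot create spurious correlation peaks against the background of the search image.

```python
import numpy as np


def selective_ncc(ref, patch, target_mask):
    """Normalized cross-correlation restricted to target pixels.
    target_mask: 1 for target pixels of the reference block, 0 for background."""
    w = target_mask.astype(float)
    n = w.sum()
    mr = (w * ref).sum() / n                    # weighted means over target pixels
    mp = (w * patch).sum() / n
    num = (w * (ref - mr) * (patch - mp)).sum()
    den = np.sqrt((w * (ref - mr) ** 2).sum() * (w * (patch - mp) ** 2).sum())
    return num / den if den > 0 else 0.0
```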
Many algorithms have been introduced to obtain gigabit routing performance by reducing search time. However, since most of them do not seriously consider update time and memory requirements, they do not work well in real networks. We propose a flexible and fast IP lookup algorithm, named FFILA, that takes these factors into account, and compare the performance of our scheme with that of the conventional Patricia trie scheme.
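For context, the baseline against which the comparison is made can be sketched as a plain binary-trie longest-prefix lookup (Python; a Patricia trie additionally compresses one-child chains, and FFILA itself is not reproduced here since the abstract does not describe its internals):

```python
class Node:
    __slots__ = ("children", "next_hop")

    def __init__(self):
        self.children = [None, None]   # 0-branch and 1-branch
        self.next_hop = None


def insert(root, prefix_bits, next_hop):
    node = root
    for b in prefix_bits:              # e.g. [1, 0, 1] for the prefix 101/3
        if node.children[b] is None:
            node.children[b] = Node()
        node = node.children[b]
    node.next_hop = next_hop


def lookup(root, addr_bits):
    """Longest-prefix match: remember the last next hop seen on the path."""
    node, best = root, None
    for b in addr_bits:
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children[b]
        if node is None:
            return best
    return node.next_hop if node.next_hop is not None else best
```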
Won Bae PARK Nae Joung KWAK Young Jun SONG Jae Hyeong AHN
In this paper, we propose a fast full-search block matching algorithm for motion estimation based on binary edge information. The binary edge information allows a faster search by reducing the computational complexity. It also reduces the error generated by blocks located on the boundaries of moving objects. After we transform the input image into an edge image using Sobel masks, we convert the result into a binary edge image using median-cut quantization. We then perform block matching on the binary edge image. If there exist blocks for which the binary block matching error exceeds a threshold, we perform edge-intensity-based block matching only within those blocks. We further improve computational efficiency by eliminating unnecessary search in no-motion regions. Simulation results show that the proposed method reduces the computational complexity and provides PSNR performance similar to that of the full search block matching algorithm (FS-BMA).
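The core matching step can be sketched as follows (Python/NumPy; np.gradient stands in for the Sobel operator and the block size is illustrative): the frames are reduced to binary edge maps and a candidate block is scored by counting mismatching edge pixels, which replaces the usual sum of absolute differences with a much cheaper comparison.

```python
import numpy as np


def to_binary_edge(img):
    """Edge map binarised at its median magnitude (a simple stand-in for the
    Sobel + median-cut quantization step described above)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > np.median(mag)


def binary_block_cost(bin_cur, bin_ref, x, y, mv, bs=16):
    """Number of mismatching edge pixels between the current block and the
    reference block displaced by mv = (dx, dy)."""
    dx, dy = mv
    cur = bin_cur[y:y + bs, x:x + bs]
    cand = bin_ref[y + dy:y + dy + bs, x + dx:x + dx + bs]
    return np.count_nonzero(cur != cand)
```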
Jong-Nam KIM SeongChul BYUN ByungHa AHN
In this letter, we propose a new fast matching algorithm that reduces the amount of computation of the conventional full search (FS) algorithm for motion estimation in real-time video coding applications without any degradation of the predicted images. That is, the proposed algorithm removes only unnecessary computations in the motion estimation process, without decreasing the prediction quality obtained by the conventional FS algorithm. The computational reduction comes from the rapid elimination of impossible motion vectors. Compared with the FS algorithm, we obtain faster elimination of inappropriate candidate motion vectors by using efficient matching units based on image complexity. Experimentally, we demonstrate that unnecessary computations are reduced by about 30% compared with other fast FS algorithms.
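The kind of unnecessary computation being removed can be seen in this sketch of row-wise partial-distortion elimination (Python/NumPy; a generic version that leaves out the paper's complexity-based choice of matching units): the SAD of a candidate is accumulated in parts and abandoned as soon as it can no longer beat the best candidate found so far, so the final motion vector is identical to that of the full search.

```python
import numpy as np


def sad_early_exit(cur, ref, x, y, mv, best_sad, bs=16):
    """Row-by-row SAD with early termination: returns the exact SAD if the
    candidate beats best_sad, otherwise None (rejected early, no quality loss)."""
    dx, dy = mv
    total = 0
    for r in range(bs):
        total += int(np.abs(cur[y + r, x:x + bs].astype(int) -
                            ref[y + dy + r, x + dx:x + dx + bs].astype(int)).sum())
        if total >= best_sad:
            return None   # cannot beat the current best candidate
    return total
```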
We present pipelined simple matching, called PSM, for an input buffered switch, which relaxes the scheduling timing constraint by modifying pipelined maximal-sized matching (PMM). Like PMM, to produce a matching result in every time slot, PSM employs multiple subschedulers, each of which may take more than one time slot to complete a matching. Using only head-of-line information of the input buffers, PSM successively sends each request to all subschedulers to provide a better matching opportunity. To obtain better performance, PSM uses distinct starting points for the scheduling pointers, such that the difference between the starting points is equal for any two adjacent subschedulers of the same output. Using computer simulations under uniform traffic, we show that PSM is more suitable than PMM for pipelined scheduling of an input buffered switch.
An advanced center-biased search algorithm for block motion estimation is proposed in this letter. It adopts an innovative center-biased search strategy to obtain the correct motion vector. The computational complexity is reduced by strict application of the unimodal error surface assumption and a half-stop technique. Experimental results show that the proposed algorithm outperforms conventional block matching algorithms.
We propose a new fast full search (FS) motion estimation algorithm for video coding. The computational reduction comes from the sequential rejection of impossible candidates using a derived formula and subblock norms. Our algorithm reduces the computation more than recent fast FS motion estimation algorithms.
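The rejection test behind this family of algorithms follows from the triangle inequality: for a current block R and a candidate block C, $|\sum R - \sum C| \leq \mathrm{SAD}(R, C)$, so any candidate whose block-sum difference already reaches the current best SAD can be skipped without computing its SAD. A minimal sketch (Python/NumPy; the subblock refinement is indicated in generic form, not the paper's exact derived formula):

```python
import numpy as np


def sea_lower_bound(cur_block, cand_block, split=1):
    """Lower bound on SAD from (sub)block sums: split = 1 gives the classic
    |sum(R) - sum(C)| bound; larger splits partition the block and tighten it."""
    bound = 0
    h = cur_block.shape[0] // split
    w = cur_block.shape[1] // split
    for i in range(split):
        for j in range(split):
            r = cur_block[i * h:(i + 1) * h, j * w:(j + 1) * w].astype(int).sum()
            c = cand_block[i * h:(i + 1) * h, j * w:(j + 1) * w].astype(int).sum()
            bound += abs(r - c)
    return bound


def worth_checking(cur_block, cand_block, best_sad, split=2):
    """Skip the full SAD whenever the lower bound already rules the candidate out."""
    return sea_lower_bound(cur_block, cand_block, split) < best_sad
```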
Man-Soo HAN Woo-Seob LEE Kwon-Cheol PARK
We present a simple cell scheduling algorithm for an input buffered switch. The suggested algorithm is based on iSLIP and consists of request, grant, and accept steps. The pointer update scheme of iSLIP is simplified in the suggested algorithm. By virtue of the new update scheme, the performance of the suggested algorithm is better than that of iSLIP with one iteration. Using computer simulations under uniform traffic, we show that the suggested algorithm is more suitable than iSLIP for scheduling an input buffered switch with multiple service classes.
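For context, one request/grant/accept round in the style of iSLIP looks as follows (Python; this is the textbook iSLIP round with its usual pointer update, not the simplified update scheme proposed here):

```python
def islip_round(requests, grant_ptr, accept_ptr):
    """One request/grant/accept round (textbook iSLIP).
    requests[i][j] is True if input i has a head-of-line cell for output j;
    grant_ptr[j] and accept_ptr[i] are round-robin pointers."""
    n = len(requests)
    # Grant: each output picks, round-robin from its pointer, one requesting input.
    grants = {}
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants[j] = i
                break
    # Accept: each input picks, round-robin from its pointer, one granting output.
    offers = {}
    for j, i in grants.items():
        offers.setdefault(i, set()).add(j)
    accepts = {}
    for i, outs in offers.items():
        for k in range(n):
            j = (accept_ptr[i] + k) % n
            if j in outs:
                accepts[i] = j
                break
    # Pointer update: only for matched pairs, to one position past the match.
    for i, j in accepts.items():
        grant_ptr[j] = (i + 1) % n
        accept_ptr[i] = (j + 1) % n
    return accepts
```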
Dong Shik SHIN Nae Joung KWAK Heak Bong KWON Jae hyeong AHN
In this paper, we propose a multi-level block matching algorithm that uses the motion information in blocks. In the proposed algorithm, the block level is decided from the degree of motion in the block before the motion search, and an appropriate motion search is then performed according to the block level. This improves computational efficiency by eliminating unnecessary search in no-motion or low-motion regions, and yields more accurate estimates by deepening the search in high-motion regions. Simulation results show that, compared with the full search block matching algorithm with a constant block size, the proposed algorithm achieves lower estimation error (about 20% MSE reduction) with fewer blocks per frame and lower computational load (about 98% reduction in the number of operations).
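The level decision can be sketched as follows (Python/NumPy; the motion measure, thresholds, and search ranges are illustrative assumptions, not the paper's exact criterion): a block's level is chosen from a cheap motion measure such as the mean absolute frame difference, and the search effort is scaled with that level.

```python
import numpy as np


def block_level(cur, prev, x, y, bs=16, low=2.0, high=8.0):
    """0 = static (skip the search), 1 = low motion (shallow search),
    2 = high motion (deeper search).  Thresholds are hypothetical."""
    mad = np.abs(cur[y:y + bs, x:x + bs].astype(int) -
                 prev[y:y + bs, x:x + bs].astype(int)).mean()
    return 0 if mad < low else (1 if mad < high else 2)


# Illustrative search-window half-sizes per level.
SEARCH_RANGE = {0: 0, 1: 4, 2: 16}
```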
To reduce the amount of computation of the full search algorithm for fast motion estimation, we propose a new fast matching algorithm without any degradation of the predicted images. The computational reduction without any degradation comes from an adaptive matching scan algorithm based on the image complexity of the reference block in the current frame. Experimentally, we significantly reduce the computational load compared with the conventional full search algorithm.
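The adaptive scan idea can be sketched as follows (Python/NumPy; a simple row-complexity measure is assumed, not the paper's exact ordering): rows of the reference block with larger gradient energy are visited first while the partial distortion is accumulated, so an early-exit test tends to fire sooner while the result stays identical to full search.

```python
import numpy as np


def complexity_scan_order(ref_block):
    """Visit rows in decreasing order of horizontal gradient energy: complex
    rows accumulate distortion fastest and allow the earliest rejection."""
    energy = np.abs(np.diff(ref_block.astype(int), axis=1)).sum(axis=1)
    return np.argsort(-energy)


def adaptive_sad_early_exit(ref_block, cand_block, best_sad):
    total = 0
    for r in complexity_scan_order(ref_block):
        total += int(np.abs(ref_block[r].astype(int) -
                            cand_block[r].astype(int)).sum())
        if total >= best_sad:
            return None   # rejected without scanning the remaining rows
    return total
```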
Hitoshi SATOH Yuji UKAI Noboru NIKI Kenji EGUCHI Kiyoshi MORI Hironobu OHMATSU Ryutarou KAKINUMA Masahiro KANEKO Noriyuki MORIYAMA
In this paper, we present a computer-aided diagnosis (CAD) system that automatically detects lung cancer candidates at an early stage using present and past helical CT screenings. We have developed a slice matching algorithm that can automatically match the slice images of a past CT scan to those of a present CT scan in order to detect changes in the lung fields over time. The slice matching algorithm consists of two main processes: extraction of the lungs, heart, and descending aorta, and matching of the slices of the present and past CT images using information about the lungs, heart, and descending aorta. To evaluate the performance of this algorithm, we applied it to 50 subjects (a total of 150 scans) screened between 1993 and 1998. From these scans, we selected 100 pairs for evaluation (each pair consisted of scans of the same subject). The algorithm correctly matched 88 of the 100 pairs. The slice images of the present and past CT scans are displayed side by side on the CRT monitor, and feature measurements of the suspicious regions are shown on the relevant images to facilitate identification of changes in size, shape, and intensity. The experimental results indicate that the CAD system can be effectively used in clinical practice to increase the speed and accuracy of routine diagnosis.
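Not the authors' algorithm, but as a rough illustration of how per-slice anatomical measurements can drive slice pairing, the following dynamic-programming alignment (Python/NumPy; the per-slice feature, e.g. segmented lung area, is a placeholder) pairs present and past slices by minimizing the accumulated difference of that feature along a monotone path.

```python
import numpy as np


def align_slices(feat_now, feat_past):
    """Pair slices of the present and past scans with a monotone alignment that
    minimizes the accumulated absolute difference of a per-slice feature.
    Purely illustrative, in the spirit of dynamic time warping."""
    n, m = len(feat_now), len(feat_past)
    cost = np.abs(np.subtract.outer(np.asarray(feat_now, float),
                                    np.asarray(feat_past, float)))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack to recover the slice pairing.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```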