
Keyword Search Result

[Keyword] SI (16,314 hits)

Results 6901-6920 of 16,314

  • Reducing Payload Inspection Cost Using Rule Classification for Fast Attack Signature Matching

    Sunghyun KIM  Heejo LEE

    PAPER-DRM and Security
    Vol: E92-D No:10  Page(s): 1971-1978

    Network intrusion detection systems rely on a signature-based detection engine. When under attack or during heavy traffic, the detection engine needs to decide quickly whether a packet or a sequence of packets is normal or malicious. However, if packets carry heavy payloads or the system holds a great many attack patterns, the high cost of payload inspection severely diminishes detection performance. It is therefore better to avoid unnecessary payload scans by checking the protocol fields in the packet header before executing the heavy operation of payload inspection and, when payload inspection is necessary, to compare a minimum number of attack patterns. In this paper, we propose new methods that classify attack signatures and build pre-computed multi-pattern groups. Based on IDS rule analysis, we group the signatures of attack rules by a multi-dimensional classification method adapted to a simplified address flow. The proposed methods reduce unnecessary payload scans and yield lightweight pattern groups to be checked. While the performance improvement depends on the networking environment, experimental results with the DARPA data set and university traffic show that the proposed methods outperform the most recent Snort by up to 33%.
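    The grouping idea can be pictured with a minimal sketch (the rule fields, patterns, and packet data below are hypothetical and do not reflect the authors' implementation or Snort's rule format): signatures are bucketed in advance by header fields such as protocol and destination port, so a packet triggers payload matching only against its own bucket.

```python
# Minimal sketch: pre-classify attack signatures by header fields so that a
# packet is matched only against the patterns of its own group.
# Rule fields and sample data are hypothetical.
from collections import defaultdict

RULES = [
    {"proto": "tcp", "dst_port": 80, "pattern": b"/etc/passwd"},
    {"proto": "tcp", "dst_port": 80, "pattern": b"<script>"},
    {"proto": "tcp", "dst_port": 21, "pattern": b"site exec"},
    {"proto": "udp", "dst_port": 53, "pattern": b"\x00\x00\xfc"},
]

# Pre-computed multi-pattern groups keyed by (protocol, destination port).
GROUPS = defaultdict(list)
for rule in RULES:
    GROUPS[(rule["proto"], rule["dst_port"])].append(rule["pattern"])

def inspect(packet):
    """Scan the payload only against the group selected by the header."""
    patterns = GROUPS.get((packet["proto"], packet["dst_port"]), [])
    if not patterns:                      # header check avoids a payload scan
        return []
    return [p for p in patterns if p in packet["payload"]]

print(inspect({"proto": "tcp", "dst_port": 80,
               "payload": b"GET /etc/passwd HTTP/1.1"}))
```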

  • Adaptive Decoding Algorithms for Low-Density Parity-Check Codes over the Binary Erasure Channel

    Gou HOSOYA  Hideki YAGI  Manabu KOBAYASHI  Shigeichi HIRASAWA

    PAPER-Coding Theory
    Vol: E92-A No:10  Page(s): 2418-2430

    Two decoding procedures combined with a belief-propagation (BP) decoding algorithm for low-density parity-check codes over the binary erasure channel are presented. These algorithms continue the decoding procedure after the BP decoding algorithm terminates. We derive a condition under which our decoding algorithms can correct an erased bit that is uncorrectable by the BP decoding algorithm. Simulation results show that the performance of our decoding algorithms is enhanced compared with that of the BP decoding algorithm, with little increase in decoding complexity.
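    As a hedged illustration of the baseline the paper starts from (not the authors' post-processing), the sketch below runs the standard peeling form of BP decoding over the BEC on a toy parity-check matrix: any check equation with exactly one erased bit determines that bit, and the process repeats until no such check remains. The matrix and received word are invented.

```python
# Peeling (BP) decoder for the binary erasure channel on a toy LDPC-style code.
# H and the received word are illustrative only.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]], dtype=int)

def peel(received):
    """received: list with bits 0/1 and None for erasures."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [i for i in np.flatnonzero(row) if word[i] is None]
            if len(erased) == 1:                  # check with a single erasure
                parity = sum(word[i] for i in np.flatnonzero(row)
                             if word[i] is not None) % 2
                word[erased[0]] = parity          # parity forces the erased bit
                progress = True
    return word   # remaining None entries are erasures BP cannot resolve

print(peel([1, None, 0, None, 1, None]))
```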

  • An Integrated Platform for Digital Consumer Electronics Open Access

    Junji MICHIYAMA

    INVITED PAPER
    Vol: E92-C No:10  Page(s): 1240-1248

    This paper describes the architecture of an integrated platform developed to improve the development efficiency of system LSIs built into digital consumer electronics equipment such as flat-panel TVs and optical disc recorders. These system LSIs serve the principal functions of such equipment, which is why the integrated platform was developed. The key is to build a common interface between each software layer, with the system LSI located at the lowest layer. To make this possible, the hardware architecture of the system LSI is divided into five blocks according to its main functionality. In addition, a middleware layer is placed over the operating system to ease porting old applications and developing new applications in the higher layers. Based on this platform, a system LSI called UniPhier™ has been developed and used in 156 product families of digital consumer electronics equipment (as of December 2008).

  • FreeNA: A Multi-Platform Framework for Inserting Upper-Layer Network Services

    Ryota KAWASHIMA  Yusheng JI  Katsumi MARUYAMA

    PAPER-QoS and Quality Management
    Vol: E92-D No:10  Page(s): 1923-1933

    Networking technologies have been evolving rapidly, and network applications are now expected to support flexible composition of upper-layer network services such as security, QoS, or personal firewalls. We propose a multi-platform framework called FreeNA that extends existing applications by incorporating such services based on user definitions. This extension does not require users to modify their systems at all, so FreeNA is valuable for experimental system usage. We implemented FreeNA on both the Linux and Microsoft Windows operating systems and evaluated its functionality and performance. In this paper, we describe the design and implementation of FreeNA, including how network services are inserted into existing applications and how services are created in a multi-platform environment. We also give an example implementation of a service with SSL, a functionality comparison with related systems, and our performance evaluation results. The results show that FreeNA offers finer configurability, composability, and usability than other similar systems. We also show that the throughput degradation caused by transparent service insertion is at most 2% compared with directly inserting such services into applications.
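    As a loose analogy for the SSL service example only (this is not FreeNA's mechanism), the sketch below shows how an upper-layer TLS service can be layered onto an existing TCP socket without changing the application's request logic, using Python's standard ssl module. The target host is just the reserved example.org domain, and the snippet needs network access to run.

```python
# Minimal sketch: "inserting" a TLS layer around an existing TCP socket without
# changing the application's request/response logic. Host is illustrative.
import socket
import ssl

def fetch(host, use_tls):
    sock = socket.create_connection((host, 443 if use_tls else 80))
    if use_tls:
        # The inserted service: wrap the same socket object in TLS.
        sock = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    try:
        # Application logic is identical in both cases.
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        return sock.recv(200)
    finally:
        sock.close()

print(fetch("example.org", use_tls=True)[:60])
```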

  • Security Vulnerability of ID-Based Key Sharing Schemes

    JungYeon HWANG  Taek-Young YOUN  Willy SUSILO

    LETTER-Cryptography and Information Security
    Vol: E92-A No:10  Page(s): 2641-2643

    Recently, several ID-based key sharing schemes have been proposed in which an initiation phase generates users' secret keys, associated with their identities, under the hardness of integer factorization. In this letter, we show that, unfortunately, any key sharing scheme with this initiation phase is intrinsically insecure in the sense that the collusion of some users enables them to derive the master private keys and hence generate any user's secret key.

  • Dependency Parsing with Lattice Structures for Resource-Poor Languages

    Sutee SUDPRASERT  Asanee KAWTRAKUL  Christian BOITET  Vincent BERMENT

    PAPER-Natural Language Processing
    Vol: E92-D No:10  Page(s): 2122-2136

    In this paper, we present a new dependency parsing method for languages that have only a very small annotated corpus and for which segmentation and morphological analysis methods producing a unique (automatically disambiguated) result are very unreliable. Our method works on a morphosyntactic lattice factorizing all possible segmentation and part-of-speech tagging results. The quality of the input to syntactic analysis is hence much better than that of an unreliable unique sequence of lemmatized and tagged words. We propose an adaptation of Eisner's algorithm for finding the k-best dependency trees in a morphosyntactic lattice structure encoding multiple results of morphosyntactic analysis. Moreover, we present how to use a Dependency Insertion Grammar to adjust the scores and filter out invalid trees, how to use a language model to rescore the parse trees, and the k-best extension of our parsing model. The highest parsing accuracy reported in this paper is 74.32%, a 6.31% improvement over the model taking its input from the unreliable morphosyntactic analysis tools.
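    To make the lattice input concrete, the sketch below enumerates and scores all segmentation/tagging paths through a tiny hand-made word lattice (a DAG over character positions). The words, tags, and scores are invented for illustration; this is not Eisner's algorithm or the authors' parser.

```python
# Toy morphosyntactic lattice: edges span character positions and carry a
# candidate (word, POS, log-score). All data here is invented.

# (start, end, word, pos, log_score) over the 4-character string "ABCD"
EDGES = [
    (0, 2, "AB", "NOUN", -0.2), (0, 1, "A", "DET", -0.9),
    (1, 3, "BC", "VERB", -0.7), (2, 4, "CD", "VERB", -0.3),
    (1, 2, "B", "NOUN", -1.1), (3, 4, "D", "NOUN", -0.8),
]

def paths(node, goal=4):
    """Enumerate all analyses (edge sequences) from `node` to `goal`."""
    if node == goal:
        yield []
        return
    for e in (e for e in EDGES if e[0] == node):
        for rest in paths(e[1], goal):
            yield [e] + rest

best = max(paths(0), key=lambda p: sum(e[4] for e in p))
print([(e[2], e[3]) for e in best])   # best-scoring segmentation + tags
```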

  • Static Dependency Pair Method Based on Strong Computability for Higher-Order Rewrite Systems

    Keiichirou KUSAKARI  Yasuo ISOGAI  Masahiko SAKAI  Frederic BLANQUI

    PAPER-Computation and Computational Models
    Vol: E92-D No:10  Page(s): 2007-2015

    Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.

  • Sample-Adaptive Product Quantizers with Affine Index Assignments for Noisy Channels

    Dong Sik KIM  Youngcheol PARK

    PAPER-Fundamental Theories for Communications
    Vol: E92-B No:10  Page(s): 3084-3093

    When we design a robust vector quantizer (VQ) for noisy channels, an appropriate index assignment function should be contrived to minimize the effect of channel errors. For relatively high rates, the complexity of finding an optimal index assignment function is too high to implement. To overcome this problem, we use a structurally constrained VQ, the sample-adaptive product quantizer (SAPQ) [12], to keep the complexities of quantization and index assignment low. The product quantizer (PQ) and its variant SAPQ [13], which are based on the scalar quantizer (SQ) and thus belong to the class of binary lattice VQs [16], have inherent error resilience even when conventional affine index assignment functions, such as the natural binary code, are employed. The error resilience of SAPQ is observed in a weak sense through worst-case bounds. Using SAPQ for noisy channels is especially useful at high rates, e.g., above 1 bit/sample, and it is numerically shown that the channel-limit performance of SAPQ is comparable to that of the best codebook permutation found by the binary switching algorithm (BSA) [23]. Further, when the PQ or SAPQ codebook with an affine index assignment function is used as the initial guess of the conventional clustering algorithm, the performance of the best BSA can be easily achieved.
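    To illustrate the setting only (not SAPQ itself; the rate, source, and flip probability below are invented), the sketch simulates the SQ building block of a product quantizer with the natural binary code as index assignment, transmitted over a binary symmetric channel, and measures the end-to-end distortion.

```python
# Sketch: uniform 3-bit scalar quantizer (the component of a product quantizer)
# with the natural binary code index assignment over a binary symmetric channel.
# Parameters are illustrative; this is not the SAPQ of the paper.
import numpy as np

rng = np.random.default_rng(0)
BITS, LEVELS, P_FLIP = 3, 8, 0.01
codebook = (np.arange(LEVELS) + 0.5) / LEVELS        # reproduction points in [0,1)

def transmit(samples):
    idx = np.clip((samples * LEVELS).astype(int), 0, LEVELS - 1)  # SQ encoding
    bits = (idx[:, None] >> np.arange(BITS)) & 1                  # natural binary code
    noisy = bits ^ (rng.random(bits.shape) < P_FLIP)              # BSC bit flips
    rx_idx = (noisy << np.arange(BITS)).sum(axis=1)
    return codebook[rx_idx]

x = rng.random(100_000)                      # i.i.d. source in [0,1)
mse = np.mean((x - transmit(x)) ** 2)
print(f"end-to-end MSE with flip probability {P_FLIP}: {mse:.5f}")
```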

  • Direct Importance Estimation with Gaussian Mixture Models

    Makoto YAMADA  Masashi SUGIYAMA

    LETTER-Pattern Recognition
    Vol: E92-D No:10  Page(s): 2159-2162

    The ratio of two probability densities is called the importance, and its estimation has recently gathered a great deal of attention since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we demonstrate the validity of the proposed approach.
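    For context, here is a rough sketch of the kernel-model KLIEP that GM-KLIEP extends: the importance w(x) is modeled as a non-negative combination of Gaussian kernels and fitted by maximizing the test-sample log-likelihood of w under a normalization constraint over the training sample. The crude projected-gradient optimizer, data, and settings below are simplifications for illustration, not the authors' procedure.

```python
# Crude KLIEP-style density-ratio sketch: w(x) = sum_l a_l * K(x, c_l), fitted by
# projected gradient ascent. A simplification for illustration, not GM-KLIEP.
import numpy as np

rng = np.random.default_rng(1)
x_tr = rng.normal(0.0, 1.0, 500)          # training (denominator) samples
x_te = rng.normal(0.5, 0.8, 500)          # test (numerator) samples
centers = x_te[:50]                       # kernel centers taken from test samples
sigma = 0.5

def K(x):                                 # Gaussian kernel design matrix
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

A_te, A_tr = K(x_te), K(x_tr)
alpha = np.ones(len(centers))
for _ in range(2000):
    w_te = A_te @ alpha
    grad = A_te.T @ (1.0 / w_te) / len(x_te)      # d/d alpha of mean log w(x_te)
    alpha = np.maximum(alpha + 1e-3 * grad, 0.0)  # gradient step + non-negativity
    alpha /= np.mean(A_tr @ alpha)                # enforce mean_train w(x) = 1

w_tr = A_tr @ alpha                               # estimated importance weights
print(f"mean/max importance on training data: {w_tr.mean():.2f} / {w_tr.max():.2f}")
```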

  • Expediting Experiments across Testbeds with AnyBed: A Testbed-Independent Topology Configuration System and Its Tool Set

    Mio SUZUKI  Hiroaki HAZEYAMA  Daisuke MIYAMOTO  Shinsuke MIWA  Youki KADOBAYASHI

    PAPER-Network Architecture and Testbed
    Vol: E92-D No:10  Page(s): 1877-1887

    Building an experimental network within a testbed has been a tiresome process for experimenters, owing to the complexity of physical resource assignment and the configuration overhead. Moreover, the process could not be expedited across testbeds, because the syntax of a configuration file varies with the specific hardware and software. Re-configuring an experimental topology for each testbed wastes time; at worst, an experimenter cannot complete his/her experiments within the limited lease time of a testbed. In this paper, we propose AnyBed, an experimental network-building system. The conceptual idea of AnyBed is that if experimental network topologies are portable across any kind of testbed, building an experimental network on a testbed can be expedited while the experiments themselves are still manipulated with each testbed's support tools. To achieve this concept, AnyBed divides an experimental network configuration into logical and physical network topologies. By mapping these two topologies, AnyBed can build the intended logical network topology on any PC cluster. We have evaluated the AnyBed implementation using two distinct clusters. The evaluation results show that a BGP topology with 150 nodes can be constructed on a large-scale testbed in less than 113 seconds.
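    A toy sketch of the "logical topology mapped onto physical resources" idea follows. The node names, host lists, round-robin placement, and emitted "config" lines are all invented and are not AnyBed's formats; the point is only that one testbed-independent logical description can be realized on different clusters.

```python
# Toy version of separating a logical topology from physical resources.
# Node names, hosts, and the emitted "config" format are invented.
LOGICAL_NODES = ["r1", "r2", "r3", "r4"]
LOGICAL_LINKS = [("r1", "r2"), ("r2", "r3"), ("r3", "r4"), ("r4", "r1")]

def map_topology(physical_hosts):
    """Round-robin the logical nodes onto whatever hosts a testbed provides."""
    placement = {n: physical_hosts[i % len(physical_hosts)]
                 for i, n in enumerate(LOGICAL_NODES)}
    config = [f"link {a}@{placement[a]} <-> {b}@{placement[b]}"
              for a, b in LOGICAL_LINKS]
    return placement, config

# The same logical topology realized on two different clusters.
for cluster in (["hostA", "hostB"], ["pc01", "pc02", "pc03", "pc04"]):
    placement, config = map_topology(cluster)
    print(placement)
    print("\n".join(config))
```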

  • ISI-Free Power Roll-Off Pulse

    Masayuki MOHRI  Masanori HAMAMURA

    LETTER-Communication Theory
    Vol: E92-A No:10  Page(s): 2495-2497

    An ISI-free power roll-off pulse whose roll-off characteristic is tunable with a single power parameter is proposed. It is shown that, among currently known good pulses, including the raised-cosine pulse, the "better than" raised-cosine pulse, and the polynomial pulse, the proposed pulse is advantageous in terms of the probability of error for pulse detection in the presence of a timing error.
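    The exact form of the proposed pulse is given in the letter itself; as background only, the sketch below evaluates the classical raised-cosine comparison pulse and numerically confirms the ISI-free (Nyquist) property p(kT) = 0 for nonzero integer k. The symbol period and roll-off factor are arbitrary choices.

```python
# Background sketch: the raised-cosine comparison pulse and its ISI-free
# (Nyquist) zero crossings at nonzero integer multiples of the symbol period T.
import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    t = np.asarray(t, dtype=float)
    num = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    den = 1.0 - (2.0 * beta * t / T) ** 2
    # At t = +/- T/(2*beta) the formula is 0/0; substitute the known limit.
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    safe_den = np.where(np.isclose(den, 0.0), 1.0, den)
    return np.where(np.isclose(den, 0.0), limit, num / safe_den)

k = np.arange(-4, 5)                     # sampling instants t = kT
print(np.round(raised_cosine(k), 12))    # 1 at k = 0, 0 elsewhere (ISI-free)
```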

  • Estimating Node Characteristics from Topological Structure of Social Networks

    Kouhei SUGIYAMA  Hiroyuki OHSAKI  Makoto IMASE

    PAPER-Fundamental Theories for Communications
    Vol: E92-B No:10  Page(s): 3094-3101

    In this paper, for systematically evaluating estimation methods of node characteristics, we first propose a social network generation model called LRE (Linkage with Relative Evaluation). LRE aims to reproduce the characteristics of a social network by exploiting the fact that people generally build relationships with others based on relative evaluation rather than absolute evaluation. We then extensively evaluate the accuracy of the estimation method called SSI (Structural Superiority Index). We reveal that SSI is effective for finding good nodes (e.g., the top 10% of nodes) but cannot be used to find excellent nodes (e.g., the top 1% of nodes). To alleviate this problem of SSI, we propose RENC (Recursive Estimation of Node Characteristic), a novel scheme for enhancing existing estimation methods. RENC reduces the effect of noise by recursively estimating node characteristics. By investigating the estimation accuracy with RENC, we show that RENC is quite effective for improving the estimation accuracy in practical situations.
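    The "good at the top 10%, poor at the top 1%" observation can be quantified with a simple hit-rate-at-top-fraction metric. The sketch below defines such a metric and applies it to made-up true and estimated node scores; the data and any resemblance to SSI or RENC behavior are invented.

```python
# Hit rate at the top fraction: how many of the truly top-f nodes an estimator
# also ranks in its own top f. Scores below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(2)

def hit_rate(true_score, est_score, fraction):
    k = max(1, int(len(true_score) * fraction))
    top_true = set(np.argsort(true_score)[-k:])
    top_est = set(np.argsort(est_score)[-k:])
    return len(top_true & top_est) / k

true = rng.pareto(2.0, 5000)                          # heavy-tailed "node characteristics"
noisy_est = true + rng.normal(0, true.std(), 5000)    # a noisy estimator
for f in (0.10, 0.01):
    print(f"top {f:.0%} hit rate: {hit_rate(true, noisy_est, f):.2f}")
```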

  • On the Security of a Conditional Proxy Re-Encryption

    Xi ZHANG  Min-Rong CHEN

    LETTER-Cryptography and Information Security
    Vol: E92-A No:10  Page(s): 2644-2647

    To enable fine-grained delegation in proxy re-encryption systems, Weng et al. introduced in AsiaCCS'09 the concept of conditional proxy re-encryption (C-PRE), in which the proxy can convert a ciphertext only if a specified condition is satisfied. Weng et al. also proposed a C-PRE scheme and claimed that it is secure against chosen-ciphertext attacks (CCA). In this paper, we show that their scheme is not CCA-secure under their own security model.

  • Optimizing Region of Support for Boundary-Based Corner Detection: A Statistic Approach

    Wen-Bing HORNG  Chun-Wen CHEN

    PAPER-Pattern Recognition
    Vol: E92-D No:10  Page(s): 2103-2111

    Boundary-based corner detection has been widely applied in spline curve fitting, automated optical inspection, image segmentation, object recognition, etc. To obtain good results, users usually need to adjust the length of the region of support to resist the zigzags caused by quantization and random noise on digital boundaries. To determine the length of the region of support for corner detection automatically, Teh-Chin and Guru-Dinesh presented adaptive approaches based on some local properties of boundary points. However, these local-property based approaches are sensitive to noise. In this paper, we propose a new approach that finds the optimum length of the region of support for corner detection based on a statistical discriminant criterion. Since our approach is based on a global view of all boundary points, rather than the local properties of a few points, the experiments show that the determined length of the region of support increases as the noise intensity strengthens. In addition, the corners detected with the optimum length of the region of support are consistent with human experts' judgment, even for noisy boundaries.
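    A minimal boundary-based corner detector parameterized by the length k of the region of support is sketched below. It uses a generic textbook angle criterion, not the authors' statistical criterion, and the angle threshold and test shape are arbitrary: each boundary point is scored by the angle between the vectors to its k-th predecessor and k-th successor, and sharp local minima are reported.

```python
# Generic boundary-based corner detection with a region of support of length k.
# Textbook angle criterion only, not the statistical criterion of the paper.
import numpy as np

def detect_corners(boundary, k=3, max_angle_deg=120.0):
    """boundary: (N, 2) array of closed-boundary points in traversal order."""
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    angles = np.empty(n)
    for i in range(n):
        v1 = pts[(i - k) % n] - pts[i]          # vector to k-th predecessor
        v2 = pts[(i + k) % n] - pts[i]          # vector to k-th successor
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles[i] = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    # A corner: angle sharper than the threshold and a local minimum of the angle.
    return [i for i in range(n)
            if angles[i] < max_angle_deg
            and angles[i] <= angles[(i - 1) % n]
            and angles[i] <= angles[(i + 1) % n]]

# A digital square: its four 90-degree corners should be reported.
square = [(x, 0) for x in range(10)] + [(9, y) for y in range(1, 10)] + \
         [(x, 9) for x in range(8, -1, -1)] + [(0, y) for y in range(8, 0, -1)]
print(detect_corners(square, k=3))
```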

  • Comments on an ID-Based Authenticated Group Key Agreement Protocol with Withstanding Insider Attacks

    Tsu-Yang WU  Yuh-Min TSENG

    LETTER-Cryptography and Information Security
    Vol: E92-A No:10  Page(s): 2638-2640

    In PKC 2004, Choi et al. proposed an ID-based authenticated group key agreement (AGKA) protocol using bilinear pairings. Unfortunately, their protocol suffered from an impersonation attack and an insider colluding attack. In 2008, Choi et al. presented an improvement to resist insider attacks. In their modified protocol, they applied an ID-based signature scheme to transcripts in order to bind them to a session and prevent replay of transcripts. In particular, they smartly used the batch verification technique to reduce the computational cost. In this paper, we first show that Choi et al.'s modified AGKA protocol still suffers from an insider colluding attack. Then, we prove that the batch verification of the ID-based signature scheme adopted in their modified protocol is vulnerable to a forgery attack.

  • A Multi-Sensing-Range Method for Efficient Position Estimation by Passive RFID Technology

    Toshihiro HORI  Tomotaka WADA  Norie UCHITOMI  Kouichi MUTSUURA  Hiromi OKADA

    PAPER-Mobile Information Network and Personal Communications
    Vol: E92-A No:10  Page(s): 2609-2617

    The RFID tag system has received attention as an identification source. Each RFID tag is attached to some object; with the unique ID of the tag, a user identifies the object and derives appropriate information about it. One important application of RFID technology is the position estimation of RFID tags: acquiring the location information of tags can be very useful, for example in navigation systems and position detection systems for robots. In this paper, we propose a new position estimation method for RFID tags based on a probabilistic approach. In this method, mobile objects (e.g., a person or a robot) equipped with RFID readers estimate the positions of RFID tags using multiple communication ranges. We show the effectiveness of the proposed method by computer simulations.
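    A hedged sketch of the general idea follows (a generic grid-based estimate, not the authors' algorithm; the reader positions, sensing ranges, and observations are invented): each positive read taken with sensing range r constrains the tag to a disc of radius r around the reader, and intersecting discs obtained with multiple sensing ranges narrows the position estimate.

```python
# Grid-based sketch of estimating a tag position from positive reads taken at
# several reader positions with several sensing ranges. All data is invented.
import numpy as np

# (reader_x, reader_y, sensing_range): each entry means "the tag was detected
# when the reader at (x, y) used this sensing range".
READS = [(2.0, 2.0, 3.0), (5.0, 2.0, 2.0), (4.0, 5.0, 2.5)]

xs, ys = np.meshgrid(np.linspace(0, 8, 161), np.linspace(0, 8, 161))
belief = np.ones_like(xs)
for rx, ry, r in READS:
    inside = (xs - rx) ** 2 + (ys - ry) ** 2 <= r ** 2
    belief *= inside            # a positive read puts the tag inside the disc

belief /= belief.sum()          # normalize over the remaining feasible cells
est_x = (belief * xs).sum()
est_y = (belief * ys).sum()
print(f"estimated tag position: ({est_x:.2f}, {est_y:.2f})")
```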

  • Image Restoration Using a Universal GMM Learning and Adaptive Wiener Filter

    Nobumoto YAMANE  Motohiro TABUCHI  Yoshitaka MORIKAWA

    PAPER-Digital Signal Processing
    Vol: E92-A No:10  Page(s): 2560-2571

    In this paper, an image restoration method using the Wiener filter is proposed. To make the Wiener filter theory consistent with images, whose statistics vary spatially, the proposed method adopts the locally adaptive Wiener filter (AWF) based on the universal Gaussian mixture distribution model (UNI-GMM) previously proposed for denoising. To apply the UNI-GMM-AWF to the deconvolution problem, the proposed method employs the stationary Wiener filter (SWF) as a pre-filter. The SWF, applied in the discrete cosine transform domain, shrinks the blur point spread function and facilitates the modeling and filtering in the subsequent AWF. The SWF and UNI-GMM are learned using a generic training image set, and the proposed method is thus tuned toward that image set. Simulation results are presented to demonstrate the effectiveness of the proposed method.
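    As background for the pre-filter stage only, a minimal frequency-domain stationary Wiener deconvolution on a 1-D signal is sketched below. The paper's method works on images in the DCT domain with a learned GMM; here the signal, blur, and noise level are synthetic and the signal power spectrum is assumed known.

```python
# Minimal stationary Wiener deconvolution sketch on a 1-D signal.
# Synthetic data; the paper's method is image-domain, DCT-based and GMM-driven.
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.cumsum(rng.normal(0, 1, n))            # smooth-ish synthetic signal
h = np.zeros(n)
h[:9] = 1.0 / 9.0                             # length-9 moving-average blur
noise_var = 0.05

y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))    # circular blur
y += rng.normal(0, np.sqrt(noise_var), n)                   # additive noise

H = np.fft.fft(h)
S = np.abs(np.fft.fft(x)) ** 2 / n            # signal power spectrum (assumed known)
N = noise_var                                 # white-noise power spectral density
W = np.conj(H) * S / (np.abs(H) ** 2 * S + N) # stationary Wiener filter
x_hat = np.real(np.fft.ifft(W * np.fft.fft(y)))

print(f"MSE blurred: {np.mean((y - x) ** 2):.3f}, "
      f"MSE restored: {np.mean((x_hat - x) ** 2):.3f}")
```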

  • Strong Anonymous Signature

    Rui ZHANG  Hideki IMAI

    LETTER-Cryptography and Information Security
    Vol: E92-A No:10  Page(s): 2487-2491

    The notion of anonymous signatures, recently formalized in [18], captures an interesting property: a digital signature can sometimes hide the identity of the signer if the message is hidden from the verifier. However, in many practical applications, e.g., the anonymous paper review system mentioned in [18], the message used for anonymous authentication is actually known to the verifier. This implies that the effectiveness of previous anonymous signatures may be unjustified in these applications. In this paper, we extend the previous models and develop a related primitive called strong anonymous signatures. For strong anonymous signatures, the identity of the signer remains secret even if the challenge message is chosen by an adversary. We then demonstrate some efficient constructions and prove their security in our model.

  • CMOS Circuit Simulation Using Latency Insertion Method

    Tadatoshi SEKINE  Hideki ASAI

    PAPER-Nonlinear Problems
    Vol: E92-A No:10  Page(s): 2546-2553

    This paper describes techniques for applying the latency insertion method (LIM) to CMOS circuit simulation. Although the existing LIM algorithm for CMOS circuits performs fast transient analysis, it does not model CMOS circuits accurately and therefore does not provide accurate simulations. We propose a more accurate LIM scheme for the CMOS inverter circuit by adopting a more accurate model of the CMOS inverter characteristics. Moreover, we present a way to extend the LIM algorithm to general CMOS circuit simulation. In order to apply LIM to general CMOS circuits consisting of CMOS NAND and NOR gates, we derive the updating formulas of the explicit form of the LIM algorithm. With this explicit form, it becomes easy to incorporate the characteristics of CMOS NAND and NOR gates into the LIM simulations. The results confirm that our techniques are useful and efficient for the simulation of CMOS circuits.
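    To illustrate the leapfrog character of LIM that the paper builds on, the sketch below alternates the basic branch-current and node-voltage updates on a linear RLC ladder only; the element values, source, and network are invented and no CMOS device model is included.

```python
# Basic latency insertion method (LIM) leapfrog updates on a linear RLC ladder.
# Element values and the source are illustrative; no CMOS model is included.
import numpy as np

N = 20                      # number of nodes in the ladder
L, R = 1e-9, 2.0            # branch inductance (H) and resistance (ohm)
C, G = 1e-12, 1e-4          # node capacitance (F) and conductance (S)
dt = 0.2 * np.sqrt(L * C)   # time step well below the ~sqrt(LC) stability limit

v = np.zeros(N)             # node voltages   V^n
i = np.zeros(N - 1)         # branch currents I^(n+1/2); branch k joins k -> k+1
for step in range(4000):
    # Node update: capacitor voltages advance using the latest branch currents.
    inflow = np.concatenate(([0.0], i)) - np.concatenate((i, [0.0]))
    v += (dt / C) * (inflow - G * v)
    v[0] = 1.0                                   # ideal 1 V step source at node 0
    # Branch update: inductor currents advance using the new node voltages.
    i += (dt / L) * (v[:-1] - v[1:] - R * i)

print(f"far-end voltage after {4000 * dt * 1e9:.2f} ns: {v[-1]:.3f} V")
```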

  • Proactive AP Selection Method Considering the Radio Interference Environment

    Yuzo TAENAKA  Shigeru KASHIHARA  Kazuya TSUKAMOTO  Suguru YAMAGUCHI  Yuji OIE

    PAPER-Wireless Network
    Vol: E92-D No:10  Page(s): 1867-1876

    In the near future, wireless local area networks (WLANs) will overlap to provide continuous coverage over a wide area. In such ubiquitous WLANs, a mobile node (MN) moving freely between multiple access points (APs) requires not only permanent access to the Internet but also continuous communication quality during handover. To satisfy these requirements, an MN needs to (1) select an AP with better performance and (2) execute a handover seamlessly. To satisfy requirement (2), we proposed a seamless handover method in a previous study. To achieve (1), the Received Signal Strength Indicator (RSSI) is usually employed to measure wireless link quality in a WLAN system. However, in a real environment, especially if APs are densely deployed, it is difficult to always select an AP with better performance based on the RSSI alone, because the RSSI cannot detect the degradation of communication quality caused by radio interference. Moreover, AP selection must be completed on the MN alone, because in ubiquitous WLANs the APs will be managed by various organizations or operators and thus cannot be modified for AP selection. To overcome these difficulties, in this paper we propose and implement a proactive AP selection method that assesses the wireless link condition from the number of frame retransmissions in addition to the RSSI. The evaluation shows that the proposed AP selection method can appropriately select an AP with good wireless link quality, i.e., high RSSI and low radio interference.
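    A small sketch of this kind of selection rule follows; the scoring weight, thresholds, and sample measurements are invented and are not the paper's parameters. Each candidate AP is scored from its RSSI and its recently observed frame retransmission rate, and the MN proactively associates with the best-scoring AP, so a strong but interference-degraded AP can lose to a weaker but cleaner one.

```python
# Sketch: proactive AP selection from RSSI plus frame retransmission rate.
# The scoring weight and the sample measurements are invented.
from dataclasses import dataclass

@dataclass
class APStats:
    name: str
    rssi_dbm: float          # received signal strength
    retries: int             # retransmitted frames in the last window
    frames: int              # transmitted frames in the last window

def score(ap, retry_weight=60.0):
    retry_rate = ap.retries / max(ap.frames, 1)
    # Higher RSSI is better; a high retransmission rate (radio interference,
    # hidden terminals, congestion) lowers the score even if RSSI is high.
    return ap.rssi_dbm - retry_weight * retry_rate

candidates = [
    APStats("AP-1", rssi_dbm=-48, retries=35, frames=100),   # strong but noisy
    APStats("AP-2", rssi_dbm=-60, retries=3,  frames=100),   # weaker but clean
    APStats("AP-3", rssi_dbm=-75, retries=1,  frames=100),
]
best = max(candidates, key=score)
print(f"selected {best.name} (score {score(best):.1f})")
```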
