Hiroaki AKUTSU Ko ARAI
Lanxi LIU Pengpeng YANG Suwen DU Sani M. ABDULLAHI
Xiaoguang TU Zhi HE Gui FU Jianhua LIU Mian ZHONG Chao ZHOU Xia LEI Juhang YIN Yi HUANG Yu WANG
Yingying LU Cheng LU Yuan ZONG Feng ZHOU Chuangao TANG
Jialong LI Takuto YAMAUCHI Takanori HIRANO Jinyu CAI Kenji TEI
Wei LEI Yue ZHANG Hanfeng XIE Zebin CHEN Zengping CHEN Weixing LI
David CLARINO Naoya ASADA Atsushi MATSUO Shigeru YAMASHITA
Takashi YOKOTA Kanemitsu OOTSU
Xiaokang Jin Benben Huang Hao Sheng Yao Wu
Tomoki MIYAMOTO
Ken WATANABE Katsuhide FUJITA
Masashi UNOKI Kai LI Anuwat CHAIWONGYEN Quoc-Huy NGUYEN Khalid ZAMAN
Takaharu TSUBOYAMA Ryota TAKAHASHI Motoi IWATA Koichi KISE
Chi ZHANG Li TAO Toshihiko YAMASAKI
Ann Jelyn TIEMPO Yong-Jin JEONG
Haruhisa KATO Yoshitaka KIDANI Kei KAWAMURA
Jiakun LI Jiajian LI Yanjun SHI Hui LIAN Haifan WU
Gyuyeong KIM
Hyun KWON Jun LEE
Fan LI Enze YANG Chao LI Shuoyan LIU Haodong WANG
Guangjin Ouyang Yong Guo Yu Lu Fang He
Yuyao LIU Qingyong LI Shi BAO Wen WANG
Cong PANG Ye NI Jia Ming CHENG Lin ZHOU Li ZHAO
Nikolay FEDOROV Yuta YAMASAKI Masateru TSUNODA Akito MONDEN Amjed TAHIR Kwabena Ebo BENNIN Koji TODA Keitaro NAKASAI
Yukasa MURAKAMI Yuta YAMASAKI Masateru TSUNODA Akito MONDEN Amjed TAHIR Kwabena Ebo BENNIN Koji TODA Keitaro NAKASAI
Kazuya KAKIZAKI Kazuto FUKUCHI Jun SAKUMA
Yitong WANG Htoo Htoo Sandi KYAW Kunihiro FUJIYOSHI Keiichi KANEKO
Waqas NAWAZ Muhammad UZAIR Kifayat ULLAH KHAN Iram FATIMA
Haeyoung Lee
Ji XI Pengxu JIANG Yue XIE Wei JIANG Hao DING
Weiwei JING Zhonghua LI
Sena LEE Chaeyoung KIM Hoorin PARK
Akira ITO Yoshiaki TAKAHASHI
Rindo NAKANISHI Yoshiaki TAKATA Hiroyuki SEKI
Chuzo IWAMOTO Ryo TAKAISHI
Chih-Ping Wang Duen-Ren Liu
Yuya TAKADA Rikuto MOCHIDA Miya NAKAJIMA Syun-suke KADOYA Daisuke SANO Tsuyoshi KATO
Yi Huo Yun Ge
Rikuto MOCHIDA Miya NAKAJIMA Haruki ONO Takahiro ANDO Tsuyoshi KATO
Koichi FUJII Tomomi MATSUI
Yaotong SONG Zhipeng LIU Zhiming ZHANG Jun TANG Zhenyu LEI Shangce GAO
Souhei TAKAGI Takuya KOJIMA Hideharu AMANO Morihiro KUGA Masahiro IIDA
Jun ZHOU Masaaki KONDO
Tetsuya MANABE Wataru UNUMA
Kazuyuki AMANO
Takumi SHIOTA Tonan KAMATA Ryuhei UEHARA
Hitoshi MURAKAMI Yutaro YAMAGUCHI
Jingjing Liu Chuanyang Liu Yiquan Wu Zuo Sun
Zhenglong YANG Weihao DENG Guozhong WANG Tao FAN Yixi LUO
Yoshiaki TAKATA Akira ONISHI Ryoma SENDA Hiroyuki SEKI
Dinesh DAULTANI Masayuki TANAKA Masatoshi OKUTOMI Kazuki ENDO
Kento KIMURA Tomohiro HARAMIISHI Kazuyuki AMANO Shin-ichi NAKANO
Ryotaro MITSUBOSHI Kohei HATANO Eiji TAKIMOTO
Genta INOUE Daiki OKONOGI Satoru JIMBO Thiem Van CHU Masato MOTOMURA Kazushi KAWAMURA
Hikaru USAMI Yusuke KAMEDA
Yinan YANG
Takumi INABA Takatsugu ONO Koji INOUE Satoshi KAWAKAMI
Fengshan ZHAO Qin LIU Takeshi IKENAGA
Naohito MATSUMOTO Kazuhiro KURITA Masashi KIYOMI
Tomohiro KOBAYASHI Tomomi MATSUI
Shin-ichi NAKANO
Ming PAN
Tomoo FUKAZAWA Takemi NISASE Masahisa KAWASHIMA Takeo HARIU Yoshihito OSHIMA
The Next Generation Network (NGN), which has been undergoing standardization as it has developed, is expected to create new services that converge the fixed and mobile networks. This paper introduces the basic security requirements for NGN and explains the standardization activities, in particular the requirements for the security function described in Y.2701 discussed in ITU-T SG-13. In addition to the basic NGN security function, requirements for NGN authentication are also described from three aspects: security, deployability, and service. As examples of authentication implementation, three profiles, namely fixed, nomadic, and mobile, are defined in this paper. The "fixed profile" is typically for fixed-line subscribers, the "nomadic profile" basically utilizes WiFi access points, and the "mobile profile" provides ideal NGN mobility for mobile subscribers. All three profiles satisfy the security requirements. The three profiles are then compared from the viewpoint of the requirements for deployability and service. After showing that none of the three profiles can fulfill all of the requirements, we propose that multiple profiles should be used by NGN providers. As service and application examples, two promising NGN applications are proposed. The first is a strong authentication mechanism that makes Web applications safer and more secure even against password theft; it is based on the NGN ID federation function. The second provides an easy peer-to-peer broadband virtual private network service aimed at safe and secure communication for personal/SOHO (small office, home office) users, based on NGN SIP (session initiation protocol) session control.
Sehun KIM Seong-jun SHIN Hyunwoo KIM Ki Hoon KWON Younggoo HAN
Recently, cyber attacks have become a serious hindrance to the stability of the Internet. These attacks exploit the interconnectivity of networks, propagate in an instant, and have become more sophisticated and evolutionary. Traditional Internet security systems such as firewalls, IDS, and IPS are limited in their ability to detect recent cyber attacks in advance because they respond to Internet attacks only after the attacks inflict serious damage. In this paper, we propose a hybrid intrusion forecasting system framework for an early warning system. The proposed system utilizes three types of forecasting methods: time-series analysis, probabilistic modeling, and data mining. By combining these methods, it is possible to take advantage of the forecasting strengths of each while overcoming their individual drawbacks. Experimental results show that the hybrid intrusion forecasting method outperforms each of the three individual forecasting methods.
Seiji MUNETOH Megumi NAKAMURA Sachiko YOSHIHAMA Michiharu KUDO
Computer security concerns have been rapidly increasing because of repeated security breaches and leakages of sensitive personal information. Such security breaches are mainly caused by inappropriate management of PCs, so maintaining the integrity of the platform configuration is essential, and verifying the integrity of the computer platform and software becomes more significant. To address these problems, the Trusted Computing Group (TCG) has developed various specifications that are used to measure the integrity of the platform based on hardware trust. In trusted computing technology, the integrity data of each component running on the platform are recorded in the security chip and securely checked by remote attestation. The infrastructure working group in the TCG is defining an Integrity Management Infrastructure in which the Platform Trust Services (PTS) is a new key component that deals with an Integrity Report. On the target platform, the PTS is a service component that collects and measures the runtime integrity of the platform in a secure way. The PTS can also be used to validate Integrity Reports. We introduce the notion of the Platform Validation Authority, a trusted third party, which verifies the composition of the integrity measurements of the target platform in the Integrity Reports. The Platform Validation Authority complements the role of the current Certificate Authority in the Public Key Infrastructure, which attests to the integrity of the user identity as well as to related artifacts such as digital signatures. In this paper, we cover the research topics in this new area, the relevant technologies and open issues of trusted computing, and the details of our PTS implementation.
Hong-Wei SUN Kwok-Yan LAM Dieter GOLLMANN Siu-Leung CHUNG Jian-Bin LI Jia-Guang SUN
In this paper, we present an efficient fingerprint classification algorithm, which is an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates various fast search algorithms in vector quantization (VQ) and their potential application to fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
Kenji RIKITAKE Koji NAKAO Shinji SHIMOJO Hiroki NOGAWA
The Domain Name System (DNS) is a major target for network security attacks because of its weak authentication. The DNSSEC security extension has been proposed to introduce public-key authentication, but it is still in the deployment phase. DNSSEC assumes that IP fragmentation is allowed so that its messages can be exchanged in large UDP payloads. IP fragments are often blocked by network packet filters for administrative reasons, and the blockage may prevent fast exchange of DNSSEC messages. In this paper, we propose a scheme to detect the UDP large-payload transfer capability between two DNSSEC hosts. The proposed detection scheme requires no new DNS or DNSSEC protocol elements, so it can be applied solely by modifying the application software and configuration. The scheme probes the end-to-end communication capability between two DNS hosts by transferring a large UDP DNS message, allowing faster capability detection. The DNS software can then choose the maximum transmission unit (MTU) at the application level using the probed results. Implementation test results show that the proposed scheme shortens the detection and transition time on fragment-blocked transports.
Denial of service (DoS) attacks have become one of the most serious threats to the Internet. Detecting attacks in network traffic is an important and challenging task. However, most existing volume-based schemes cannot detect short-term attacks that have only a minor effect on traffic volume. On the other hand, feature-based schemes are not suitable for real-time detection because of their complicated calculations. In this paper, we develop an IP packet size entropy (IPSE)-based DoS/DDoS detection scheme in which the entropy changes markedly when traffic is affected by an attack. Through our analysis, we find that the IPSE-based scheme is capable of detecting not only long-term attacks but also short-term attacks that are beyond the ability of volume-based schemes to detect. Moreover, we test our proposal using two typical Internet traffic data sets, from DARPA and SINET, and the test results show that the IPSE-based detection scheme can detect DoS/DDoS attacks not only in a local area network (DARPA) but also in an academic backbone network (SINET).
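As context for the entropy-based idea summarized above, the following is a minimal sketch, not the paper's implementation: it computes the Shannon entropy of the packet-size distribution per traffic window and flags windows that deviate from a baseline. The window length, baseline, and threshold below are illustrative assumptions.

```python
import math
from collections import Counter

def packet_size_entropy(sizes):
    """Shannon entropy (bits) of the packet-size distribution in one window."""
    counts = Counter(sizes)
    total = len(sizes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_windows(packet_sizes, window=1000, baseline=5.0, threshold=1.0):
    """Flag windows whose entropy deviates from `baseline` by more than `threshold`.
    All three parameters are illustrative, not values from the paper."""
    alerts = []
    for start in range(0, len(packet_sizes) - window + 1, window):
        h = packet_size_entropy(packet_sizes[start:start + window])
        if abs(h - baseline) > threshold:
            alerts.append((start, h))
    return alerts
```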
Jungsuk SONG Kenji OHIRA Hiroki TAKAKURA Yasuo OKABE Yongjin KWON
Intrusion detection systems (IDSs) have played a central role as appliances for effectively defending crucial computer systems and networks against attackers on the Internet. The most widely deployed and commercially available methods for intrusion detection employ signature-based detection. However, such methods intrinsically cannot detect unknown intrusions that do not match the signatures, and acquiring the signatures consumes a huge amount of cost and time. In order to cope with these problems, many researchers have proposed various methods based on unsupervised learning techniques. Although such methods make it possible to construct intrusion detection models with low cost and effort and can detect unforeseen attacks, they still suffer from two main problems in intrusion detection: a low detection rate and a high false positive rate. In this paper, we present a new clustering method that improves the detection rate while maintaining a low false positive rate. We evaluated our method using the KDD Cup 1999 data set. Evaluation results show the superiority of our approach over other existing algorithms reported in the literature.
Yoshihide MATSUMOTO Hiroaki HAZEYAMA Youki KADOBAYASHI
The Bloom Filter (BF), a space- and time-efficient hash-coding method, is used as one of the fundamental modules in several network processing algorithms and applications such as route lookup, cache hits, packet classification, per-flow state management, and network monitoring. The BF is a simple, space-efficient, randomized data structure used to represent a data set in order to support membership queries. However, the BF generates false positives and cannot count the number of distinct elements. A counting Bloom Filter (CBF) can count the number of distinct elements, but the CBF needs more space than the BF. We propose an alternative data structure to the CBF, which we call the Adaptive Bloom Filter (ABF). Although the ABF uses the same-sized bit vector as the BF, the number of hash functions employed by the ABF is dynamically changed to record the number of appearances of each key element. Taking hash collisions into account, the multiplicity of each key element in the ABF can be estimated from the number of hash functions used to decode the membership of that key element. Although the ABF can realize the same functionality as the CBF, the ABF requires only the same memory size as the BF. We describe the construction of the ABF and the IABF (Improved ABF), and provide a mathematical analysis and simulation using Zipf's distribution. Finally, we show that the ABF can be used for an unpredictable data set such as real network traffic.
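The following is a rough illustrative sketch of the adaptive idea described in the abstract: a plain Bloom filter bit vector in which each repeated insertion of a key sets one additional hashed position, so that multiplicity can be read back by counting consecutive set positions. The actual ABF/IABF constructions in the paper may differ in detail; the sizes and hash choice below are assumptions.

```python
import hashlib

class AdaptiveBloomFilterSketch:
    """Illustrative sketch only; not the paper's exact ABF/IABF construction."""

    def __init__(self, m_bits=1 << 16, base_hashes=4):
        self.m = m_bits
        self.k = base_hashes              # hash functions used as in a plain BF
        self.bits = bytearray(self.m // 8)

    def _index(self, key, i):
        h = hashlib.sha256(f"{i}:{key}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.m

    def _get(self, pos):
        return (self.bits[pos // 8] >> (pos % 8)) & 1

    def _set(self, pos):
        self.bits[pos // 8] |= 1 << (pos % 8)

    def insert(self, key):
        for i in range(self.k):           # plain-BF part: set the k base positions
            self._set(self._index(key, i))
        c = self.estimate(key)            # adaptive part: one extra position per insertion
        self._set(self._index(key, self.k + c))

    def estimate(self, key):
        """Estimated multiplicity = number of consecutive extra positions already set
        (hash collisions can inflate this estimate, as noted in the abstract)."""
        c = 0
        while self._get(self._index(key, self.k + c)):
            c += 1
        return c
```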
Erwan LE MALECOT Masayoshi KOHARA Yoshiaki HORI Kouichi SAKURAI
With the multiplication of attacks against computer networks, system administrators are required to carefully monitor the traffic exchanged by the networks they manage. However, that monitoring task is increasingly laborious because of the growing amount of data to analyze, and this trend will intensify with the explosion of the number of devices connected to computer networks along with the global rise of available network bandwidth. System administrators therefore heavily rely on automated tools to assist them and simplify the analysis of the data. Yet these tools provide limited support and, most of the time, require highly skilled operators. Recently, some research teams have started to study the application of visualization techniques to the analysis of network traffic data. We believe that this original approach can also allow system administrators to deal with the large amount of data they have to process. In this paper, we introduce a tool for network traffic monitoring using visualization techniques that we developed in order to assist the system administrators of our corporate network. We explain how we designed the tool and some of the choices we made regarding the visualization techniques to use. The resulting tool proposes two linked representations of the network traffic and activity, one in 2D and the other in 3D. As 2D and 3D visualization techniques have different assets, we combined them in our tool to take advantage of their complementarity. We finally tested our tool in order to evaluate the accuracy of our approach.
Tadashi KAJI Takahiro FUJISHIRO Satoru TEZUKA
Today, TLS is widely used for achieving secure communication, and TLS uses PKI for server authentication and/or client authentication. However, its PKI environment, called the "multiple trust anchors environment," causes the problem that the verifier has to maintain a huge number of CA certificates in the ubiquitous network, because the increase in terminals connected to the network brings an increase in CAs. Most terminals in the ubiquitous network, however, will not have enough memory to hold such a huge number of CA certificates. Therefore, another PKI environment, the "cross-certification environment," is useful for the ubiquitous network. But because current TLS is designed for the multiple trust anchors model, TLS cannot work efficiently in the cross-certification model. This paper proposes a TLS implementation method that supports the cross-certification model efficiently. Our proposal reduces the size of the messages exchanged between the TLS client and the TLS server during the handshake process. Therefore, our proposal is suitable for implementing TLS in terminals that do not have enough computing power and memory in the ubiquitous network.
Alejandro Perez MENDEZ Pedro J. Fernandez RUIZ Rafael Marin LOPEZ Gregorio Martinez PEREZ Antonio F. Gomez SKARMETA Kenichi TANIUCHI
This paper describes the IKEv2 protocol and presents how an open-source IKEv2 implementation, in particular OpenIKEv2, has been designed and implemented. All the issues found during this process, and how they were solved, are also described. Finally, a comparison between existing open-source implementations is presented.
Motoi YOSHITOMI Tsuyoshi TAKAGI Shinsaku KIYOMOTO Toshiaki TANAKA
Pairing-based cryptosystems can accomplish novel security applications, such as ID-based cryptosystems, that could not be constructed efficiently without the pairing. The processing speed of pairing-based cryptosystems is relatively slow compared with other conventional public-key cryptosystems. However, several efficient algorithms for computing the pairing have been proposed, namely the Duursma-Lee algorithm and its variant, the ηT pairing. In this paper, we present an efficient implementation of the pairing on several mobile phones. Moreover, we compare the processing speed of the pairing with that of other standard public-key cryptosystems, i.e., the RSA cryptosystem and elliptic curve cryptosystems. Indeed, our implementation on ARM9 processors on BREW achieves a processing speed of under 100 milliseconds using the supersingular curve over F_{3^97}. In addition, the pairing is more efficient than the other public-key cryptosystems, and the pairing can be computed efficiently enough on BREW mobile phones. It has become efficient enough to implement security applications, such as short signatures, ID-based cryptosystems, or broadcast encryption, using the pairing on BREW mobile phones.
Seog Chung SEO Dong-Guk HAN Hyung Chan KIM Seokhie HONG
In this paper, we revisit a generally accepted opinion: implementing an Elliptic Curve Cryptosystem (ECC) over GF(2^m) on sensor motes with a small word size is not appropriate because XOR multiplication over GF(2^m) is not efficiently supported by current low-powered microprocessors. Although there are some implementations over GF(2^m) on sensor motes, their performance is not satisfactory enough to be used for wireless sensor networks (WSNs). We have found that field multiplication over GF(2^m) involves a number of redundant memory accesses and that its inefficiency originates from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose some techniques for reducing unnecessary memory accesses. With the proposed strategies, the running time of field multiplication and reduction over GF(2^163) can be decreased by 21.1% and 24.7%, respectively. These savings noticeably decrease the execution times of Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around 15-19%. We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve, a TinyOS package supporting elliptic curve operations), which is, as far as we know, the first implementation of a Koblitz curve on sensor motes. Through comparisons with existing software implementations of ECC built in C or a hybrid of C and inline assembly on sensor motes, we show that TinyECCK outperforms them in terms of running time, code size, and supported services. Furthermore, we show that field multiplication over GF(2^m) can be faster than that over GF(p) on an 8-bit ATmega128 processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over GF(p). TinyECCK with sect163k1 can generate a signature and verify it in 1.37 and 2.32 s, respectively, on a MICAz mote with 13,748 bytes of ROM and 1,004 bytes of RAM.
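For context, the sketch below shows generic field multiplication over GF(2^m), i.e., carry-less (XOR) multiplication followed by reduction, using the sect163k1 pentanomial x^163 + x^7 + x^6 + x^3 + 1. It illustrates only the arithmetic; the paper's actual contribution, reorganizing word-level memory accesses on an 8-bit microcontroller, is not modeled here.

```python
# Minimal big-integer sketch of GF(2^163) multiplication; bits of an int encode
# polynomial coefficients over GF(2).
M = 163
R = (1 << 7) | (1 << 6) | (1 << 3) | 1   # low part of f(x) = x^163 + x^7 + x^6 + x^3 + 1

def gf2m_mul(a, b):
    """Carry-less (XOR) schoolbook multiplication followed by reduction mod f(x)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # Reduce the product (degree up to 2M-2) back below degree M:
    # x^bit == x^(bit-M) * (x^7 + x^6 + x^3 + 1) mod f(x).
    for bit in range(2 * M - 2, M - 1, -1):
        if (p >> bit) & 1:
            p ^= (1 << bit) ^ (R << (bit - M))
    return p
```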
Mitsuo OKADA Hiroaki KIKUCHI Yasuo OKABE
A new method of multi-bit embedding based on a protocol for secure asymmetric digital watermark detection is proposed. Secure watermark detection has been achieved by allowing a watermark verifier to detect a message without any secret information being exposed in the extraction process. Our methodology is based on an asymmetric property of the watermark algorithm, which hybridizes a statistical watermark algorithm and a public-key algorithm. In 2004, Furukawa proposed a secure watermark detection scheme using patchwork watermarking and Paillier encryption, but its feasibility had not been tested in his work. We have examined it and have shown that it has a drawback of heavy processing-time overhead. We overcome this issue by replacing the cryptosystem with a modified El Gamal encryption and improve the processing-time performance. We have developed software implementations of both methods and have measured their effective performance. The obtained results show that the performance of our method is better than that of Furukawa's method under most practical conditions. In our method, multiple bits can be embedded by assigning a distinct generator to each bit, while the embedding algorithm of Furukawa's method assumes a single-bit message. This strongly enhances the capability of multi-bit information embedding and also improves the communication and computation costs.
Kyung-Su KIM Hae-Yeoun LEE Dong-Hyuck IM Heung-Kyu LEE
Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia content. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD-quality videos are prepared for display on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
This paper proposes a spectroscopic method and system for preventing spoofing of biometric authentication. One of its focuses is to enhance biometric authentication with a spectroscopic method in a multi-factor manner such that a person's unique 'spectral signatures' or 'spectral factors' are recorded and compared in addition to a non-spectroscopic biometric signature, to reduce the likelihood of an imposter being authenticated. Using the 'spectral factors' extracted from reflectance spectra of real fingers and employing cluster analysis, it shows how an authentic fingerprint image presented by a real finger can be distinguished from an authentic fingerprint image embossed on an artificial finger, or molded on a fingertip cover worn by an imposter. This paper also shows how to augment two widely used biometric systems (fingerprint and iris recognition devices) with spectral biometric capabilities in a practical manner, without creating much overhead or inconveniencing their users.
Masashi UNE Akira OTSUKA Hideki IMAI
This paper proposes the wolf attack probability (WAP) as a new measure for evaluating the security of biometric authentication systems. The wolf attack is an attempt to impersonate a victim by feeding "wolves" into the system to be attacked. A "wolf" is an input value that can be falsely accepted as a match with multiple templates. WAP is defined as the maximum success probability of the wolf attack with one wolf sample. In this paper, we give a rigorous definition of the new security measure, which estimates the strength of an individual biometric authentication system against impersonation attacks. We show that, when re-estimated using our WAP measure, a typical fingerprint algorithm turns out to be much weaker than theoretically estimated by Ratha et al. Moreover, we apply the wolf attack to a finger-vein-pattern-based algorithm. Surprisingly, we show that there exists an extremely strong wolf that falsely matches all templates for any threshold value.
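One way to write the definition implied by the abstract (the notation here is assumed, not taken from the paper): for a matching algorithm Match, a distribution of enrolled templates T, and a candidate wolf input w,

\mathrm{WAP} \;=\; \max_{w \in \mathcal{W}} \; \Pr_{t \sim \mathcal{T}}\bigl[\, \mathsf{Match}(w, t) = \mathsf{accept} \,\bigr],

i.e., the largest probability, over randomly enrolled templates, that a single fixed input is falsely accepted.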
Today, users themselves are becoming creators of content. The popularity of blogs, wikis, and UCC shows that users want to participate in creating and modifying digital content. Users who participate in composing content also want to hold copyrights on the parts they modify. Thus, a copyright protection system for content that can be modified by multiple users is required. However, conventional DRM (Digital Rights Management) systems such as OMA DRM are not suitable for modifiable content because they do not support content created and modified by different users. Therefore, in this paper, we propose a new copyright protection system that allows each modifier of content created and modified by multiple users to hold his or her own copyright. We propose data formats and protocols, and analyze the proposed system in terms of correctness and security. Performance evaluation in terms of response time shows that the proposed system is 2 to 18 times faster than other comparable schemes.
Yohei HORI Hiroyuki YOKOYAMA Hirofumi SAKANE Kenji TODA
We developed a content delivery system using a partially reconfigurable FPGA to securely distribute digital content on the Internet. With the partial reconfigurability of a Xilinx Virtex-II Pro FPGA, the system provides an innovative single-chip solution for protecting digital content. In the system, a partial circuit must be downloaded from a server to the client terminal to play content. Content is played only when the downloaded circuit is correctly combined (i.e., interlocked) with the circuit built into the terminal. Since each circuit has a unique I/O configuration, the downloaded circuit interlocks with the corresponding built-in circuit designed for a particular terminal. Thus, the interface of the circuit itself provides a novel authentication mechanism. This paper describes the detailed architecture of the system and clarifies its feasibility and effectiveness. In addition, we discuss a fail-safe mechanism and the future work necessary for the practical application of the system.
Albert JENG Li-Chung CHANG Sheng-Hui CHEN
Many protocols have been proposed for protecting the privacy and security of Radio Frequency Identification (RFID) systems. A number of these protocols are designed to protect the long-term security of RFID systems using symmetric-key or public-key cryptosystems, while others are designed to protect user anonymity and privacy. In practice, the use of RFID technology often has a short lifespan, such as in commodity checkout, supply chain management, and so on. Furthermore, designing a long-term security architecture to protect the security and privacy of RFID tag information requires thorough consideration from many different aspects. However, any security enhancement of RFID technology will increase its cost, which may be detrimental to its widespread deployment. Because of the severe constraints on RFID tag resources (e.g., power source, computing power, communication bandwidth) and the open-air nature of RFID communication, securing a typical RFID system is a great challenge. For example, computationally heavy public-key and symmetric-key cryptographic algorithms (e.g., RSA and AES) may be unsuitable or overkill for protecting RFID security or privacy. These factors motivate us to seek an efficient and cost-effective solution for RFID security and privacy protection. In this paper, we propose a new, effective, generic binary-tree-based key agreement protocol (called BKAP) and its variations, and show how it can be applied to secure low-cost and resource-constrained RFID systems. BKAP is not a general-purpose key agreement protocol; rather, it is a special-purpose protocol to protect privacy, untraceability, and anonymity in a single closed RFID system domain.
Hisashi MOHRI Ritsuko MATSUMOTO Yuichi KAJI
This study investigates new schemes for distributing cryptographic keys in sensor networks. Sharing a key is the very first step in realizing secure communication over an untrusted network infrastructure, but commonly used cryptographic techniques cannot be employed in sensor networks because of the restricted computational resources of sensor nodes. A practical solution to this issue is to predistribute cryptographic keys in sensor nodes before they are deployed. A focal point in this solution is the choice of keys assigned to a sensor node. Eschenauer et al. considered choosing keys randomly, and Chan et al. also followed the random-choice approach. In this paper, we consider a new approach in which keys are assigned according to basic algebraic geometry. The performance of the proposed scheme is investigated analytically.
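As background on the random-choice baseline cited above, in which each node independently draws a ring of k keys from a pool of P keys, two neighboring nodes can establish a secure link only if their rings intersect, which happens with probability

p \;=\; 1 - \frac{\binom{P-k}{k}}{\binom{P}{k}}.

The proposed algebraic-geometry assignment replaces this random choice with a structured one; the formula is included here only as context for the comparison.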
SeongHan SHIN Kazukuni KOBARA Hideki IMAI
RSA-based Password-Authenticated Key Exchange (PAKE) protocols have been proposed to realize both mutual authentication and generation of secure session keys in a setting where a client shares his/her password only with a server and the latter must generate its RSA public/private key pair (e,n), (d,n) every time due to the lack of a PKI (Public-Key Infrastructure). One way to avoid a special kind of off-line attack (the so-called e-residue attack) in RSA-based PAKE protocols is to deploy a challenge/response method by which a client verifies the relative primality of e and φ(n) interactively with a server. However, previous RSA-based PAKE protocols of this kind did not give any proof of the underlying challenge/response method and therefore could not specify the exact complexity of their protocols, since there exists another security parameter needed in the challenge/response method. In this paper, we first present an RSA-based PAKE (RSA-PAKE) protocol that can deploy two different challenge/response methods (denoted by Challenge/Response Method1 and Challenge/Response Method2). The main contributions of this work are as follows: (1) based on number theory, we prove that Challenge/Response Method1 and Challenge/Response Method2 are secure against e-residue attacks for any odd prime e; (2) with the security parameter for on-line attacks, we show that the RSA-PAKE protocol is provably secure in the random oracle model, where none of the off-line attacks is more efficient than on-line dictionary attacks; and (3) by considering the Hamming weight of e and its complexity in the RSA-PAKE protocol, we search for primes to be recommended for practical use. We also compare the RSA-PAKE protocol with previous ones, mainly in terms of computation and communication complexities.
It is well known that there is a relationship between electromagnetic emanation and the information being processed in IT devices such as personal computers and smart cards. By analyzing such electromagnetic emanation, an eavesdropper can obtain some of that information, so it becomes a real threat to information security. In this paper, we show how to estimate the amount of information that is leaked as electromagnetic emanation. We regard the space between the IT device and the receiver as a communication channel, and we define the amount of information leaked via electromagnetic emanations by its channel capacity. Using experimental Tempest results, we show example estimations of the amount of information leakage. Using the value of the channel capacity, we can calculate the amount of information per pixel in the reconstructed image. We also evaluate the effectiveness of Tempest fonts generated by the Gaussian method and their security threshold.
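As background for the channel-capacity viewpoint described above, modeling the emanation path as a band-limited Gaussian channel gives the standard Shannon capacity

C \;=\; B \log_{2}\!\left(1 + \frac{S}{N}\right) \quad \text{[bits/s]},

where B is the receiver bandwidth and S/N the signal-to-noise ratio of the captured emanation; the paper's concrete estimation procedure from Tempest measurements is not reproduced here.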
Xinyu WANG Jianling SUN Xiaohu YANG Chao HUANG Di WU
This paper proposes a security violation detection method for RBAC-based interoperation to meet the requirements of secure interoperation among distributed systems. We use role mappings between RBAC systems to implement trans-system access control, analyze the security violations of interoperation with role mappings, and formalize the definitions of secure interoperation. A minimum detection method tailored to the features of RBAC systems in distributed environments is introduced in detail. This method reduces complexity by decreasing the number of roles involved in detection. Finally, we further analyze security violations based on the minimum detection method to help administrators eliminate them.
Yang CUI Kazukuni KOBARA Kanta MATSUURA Hideki IMAI
As pervasive computing technologies develop rapidly, privacy protection becomes a crucial issue and needs to be handled very carefully. Typically, it is difficult to efficiently identify and manage a large number of low-cost pervasive devices such as Radio Frequency Identification (RFID) devices without leaking any private information. In particular, an attacker may not only eavesdrop on the communication passively, but also mount an active attack that asks queries adaptively, which is obviously more dangerous. To address this problem, in this paper we propose two lightweight authentication protocols that are privacy-preserving against active attacks, in an asymmetric way. This asymmetric style with privacy-oriented simplification succeeds in reducing the load on low-cost devices and drastically decreasing the computation cost of server-side management. This is because, unlike the usual management of identities, our approach requires neither synchronization nor exhaustive search in the database, which is very convenient for a large-scale system. The protocols are based on a fast asymmetric encryption with specialized simplification and only one cryptographic hash function, which consequently assigns only light work to the pervasive devices. Moreover, our results do not require the strong random oracle assumption.
Takahiro MATSUDA Nuttapong ATTRAPADUNG Goichiro HANAOKA Kanta MATSUURA Hideki IMAI
The unforgeability of digital signatures is closely related to the security of hash functions, since hashing messages, as in the hash-and-sign paradigm, is necessary in order to sign (arbitrarily) long messages. Recent successful collision-finding attacks against practical hash functions indicate that constructing practical collision resistant hash functions is difficult to achieve. Thus, it is worth considering relaxing the requirement of collision resistance for the hash functions used to hash messages in signature schemes. Currently, the most efficient strongly unforgeable signature scheme in the standard model based on the CDH assumption (in bilinear groups) is the Boneh-Shen-Waters (BSW) signature proposed in 2006. In their scheme, however, a collision resistant hash function is necessary to prove its security. In this paper, we construct a signature scheme that has the same properties as the BSW scheme but does not rely on collision resistant hash functions. Instead, we use a target collision resistant hash function, which is a strictly weaker primitive than a collision resistant hash function. Our scheme is, in terms of the signature size and the computational cost, as efficient as the BSW scheme.
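For reference, target collision resistance (TCR, also known as universal one-wayness) is the standard notion in which the adversary must commit to the target message before the hash key is chosen: for a keyed family {H_k}, the adversary's advantage is

\mathrm{Adv}^{\mathrm{tcr}}(\mathcal{A}) \;=\; \Pr\bigl[(x, s) \leftarrow \mathcal{A}_1;\; k \leftarrow \mathcal{K};\; x' \leftarrow \mathcal{A}_2(k, s) \;:\; x' \neq x \,\wedge\, H_k(x') = H_k(x)\bigr],

whereas collision resistance lets the adversary choose both x and x' after seeing k, which makes TCR the strictly weaker requirement mentioned in the abstract.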
Dae Hyun YUM Jong Hoon SHIN Pil Joong LEE
Yeh and Tsai recently proposed an enhanced mobile commerce security mechanism. They modified the lightweight security mechanism of Lam, Chung, Gu, and Sun to relieve the burden on mobile clients. However, this article shows that a malicious WAP gateway can successfully obtain the mobile client's PIN by sending a fake public key of a mobile commerce server and exploiting the information leakage caused by the addition operation. We also present a countermeasure against the proposed attack.
Taek-Young YOUN Young-Ho PARK Taekyoung KWON Soonhak KWON Jongin LIM
Previously proposed batch signature schemes do not allow a signer to generate a signature immediately for sequentially arriving signing queries. In this letter, we propose flexible batch signatures that need no waiting period and have very light computational overhead. Therefore, our schemes are well suited for low-power devices.
Ryuzou NISHI Yoshiaki HORI Kouichi SAKURAI
We address a reliable key distribution scheme for lossy channels such as wireless or power-line channels. In key distribution over these lossy channels, if key information is lost, there is a critical issue that the subsequent communication is disabled. In this paper, we show that our proposal is more reliable than related schemes and has reliability equivalent to that of dedicated communication channels such as Ethernet.
Dae Hyun YUM Jae Woo SEO Pil Joong LEE
The accumulator was introduced as a decentralized alternative to digital signatures. While most accumulators are based on number-theoretic assumptions and require time-consuming modular exponentiations, Nyberg's combinatoric accumulator does not depend on any computational assumption and requires only bit operations and hash function evaluations. In this article, we present a generalization of Nyberg's combinatoric accumulator which allows a lower false positive rate with the same output length. Our generalization also shows that the Bloom filter can be used as a cryptographic accumulator and, moreover, outperforms Nyberg's accumulator.
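For background on the Bloom-filter connection mentioned above, the standard approximation for the false positive rate of a Bloom filter with m bits, k hash functions, and n inserted elements is

p \;\approx\; \left(1 - e^{-kn/m}\right)^{k},

which is minimized at k = (m/n)\ln 2. The generalization in the paper trades these parameters to lower the false positive rate at a fixed output length; the formula is included only as context.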
Masaki NAKAMURA Weiqiang KONG Kazuhiro OGATA Kokichi FUTATSUGI
There are two ways to describe a state machine as an algebraic specification: a behavioral specification and a rewrite specification. In this study, we propose a translation system from behavioral specifications to rewrite specifications in order to obtain a verification system that has the strong points of the verification techniques for both kinds of specification. Since our translation system is complete with respect to invariant properties, it helps us obtain a counterexample for an invariant property through automatic exhaustive search on the rewrite specification.
The management of subscriptions and events is an important task in content-based publish/subscribe systems. A good management mechanism not only produces lower matching costs, speeding up the delivery of matched events to interested subscribers, but also induces good load balancing for subscription storage. In this paper, we consider the construction of this kind of system over a peer-to-peer overlay network and propose two message-to-node mapping schemes for system management. We both analyze and simulate the performance of the proposed schemes. The simulation results show the superiority of our schemes over existing ones.
This paper presents an algorithm for the robust watermarking of 3D polygonal mesh models. The proposed algorithm embeds the watermark into a 2D image extracted from the 3D model rather than directly into the 3D geometry. The proposed embedding domain, i.e., the 2D image, is devised to be robust against attacks such as mesh simplification, which severely modifies the vertices and connectivity while preserving the appearance of the model. The watermark-embedded model is obtained by using a simple vertex perturbation algorithm without iterative optimization. Two exemplary watermark applications using the proposed methods are also presented: one embeds several bits into 3D models and the other detects only the existence of a watermark. The experimental results show that the proposed algorithm is robust against similarity transforms, mesh simplification, additive Gaussian noise, quantization of vertex coordinates, and mesh smoothing, and that its computational complexity is lower than that of conventional methods.
Naoya MOCHIKI Tetsuji OGAWA Tetsunori KOBAYASHI
We propose a new type of direction-of-arrival estimation method for robot audition that is free from strict head-related transfer function estimation. The proposed method is based on statistical pattern recognition that employs the ratio of power spectrum amplitudes for a microphone pair as a feature vector. It does not explicitly require any phase information, which is frequently used in conventional techniques, because phase information is unreliable when strong reflections and diffractions occur around the microphones. The feature vectors we adopted can treat these influences naturally. The effectiveness of the proposed method was demonstrated in direction-of-arrival estimation tests for 19 directions: errors were reduced by 92.4% compared with the conventional phase-based method.
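A minimal sketch of the kind of feature the abstract describes, assuming a single FFT frame per microphone and a simple log ratio per frequency bin; the frame length, windowing, band selection, and downstream classifier are assumptions not taken from the paper.

```python
import numpy as np

def spectrum_ratio_feature(x_left, x_right, n_fft=512, eps=1e-12):
    """Log ratio of power-spectrum amplitudes of a microphone pair for one frame."""
    win = np.hanning(n_fft)
    p_left = np.abs(np.fft.rfft(x_left[:n_fft] * win)) ** 2
    p_right = np.abs(np.fft.rfft(x_right[:n_fft] * win)) ** 2
    # One feature value per frequency bin; no phase information is used.
    return np.log((p_left + eps) / (p_right + eps))
```

Such vectors can then be fed to any statistical classifier trained per candidate direction, which is the pattern-recognition step the abstract refers to.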
Manoj PERERA Takaaki SHIRATORI Shunsuke KUDOH Atsushi NAKAZAWA Katsushi IKEUCHI
In this paper, we present a novel approach to recognize motion styles and identify people using the Multi Factor Tensor (MFT) model. We apply a musical information analysis method in segmenting the motion sequence relevant to the keyposes and the musical rhythm. We define a task model by considering the repeated motion segments, where the motion is decomposed into a person-invariant factor task and a person-dependent factor style. Given the motion data set, we formulate the MFT model, factorize it efficiently in different modes, and use it in recognizing the tasks and the identities of the persons performing the tasks. We capture the motion data of different people for a few cycles, segment it using the musical analysis approach, normalize the segments using a vectorization method, and realize our MFT model. In our experiments, Japanese traditional dance sequences performed by several people are used. Provided with an unknown motion segment which is to be probed and which was performed at a different time in the time space, we first normalize the motion segment and flatten our MFT model appropriately, then recognize the task and the identity of the person. We follow two approaches in conducting our experiments. In one experiment, we recognize the tasks and the styles by maximizing a function in the tensor subdomain, and in the next experiment, we use a function value in the tensorial subdomain with a threshold for recognition. Interestingly, unlike the first experiment, we are capable of recognizing tasks and human identities that were not known beforehand. We conducted various experiments to evaluate the potential of the recognition ability of our proposed approaches, and the results demonstrate the high accuracy of our model.
Mousa SHAMSI Reza Aghaiezadeh ZOROOFI Caro LUCAS Mohammad Sadeghi HASANABADI Mohammad Reza ALSHARIF
Facial skin detection is an important step in facial surgical planning like as many other applications. There are many problems in facial skin detection. One of them is that the image features can be severely corrupted due to illumination, noise, and occlusion, where, shadows can cause numerous strong edges. Hence, in this paper, we present an automatic Expectation-Maximization (EM) algorithm for facial skin color segmentation that uses knowledge of chromatic space and varying illumination conditions to correct and segment frontal and lateral facial color images, simultaneously. The proposed EM algorithm leads to a method that allows for more robust and accurate segmentation of facial images. The initialization of the model parameters is very important in convergence of algorithm. For this purpose, we use a method for robust parameter estimation of Gaussian mixture components. Also, we use an additional class, which includes all pixels not modeled explicitly by Gaussian with small variance, by a uniform probability density, and amending the EM algorithm appropriately, in order to obtain significantly better results. Experimental results on facial color images show that accurate estimates of the Gaussian mixture parameters are computed. Also, other results on images presenting a wide range of variations in lighting conditions, demonstrate the efficiency of the proposed color skin segmentation algorithm compared to conventional EM algorithm.
Spoken language understanding (SLU) aims to map a user's speech into a semantic frame. Since most previous works use semantic structures for SLU, we verify that such structure is useful even for noisy input. We apply a structured prediction method to the SLU problem and compare it to an unstructured one. In addition, we present a combined method to embed long-distance dependencies between entities in a cascaded manner. On air travel data, we show that our approach improves performance over baseline models.
Saowaluk C. WATANAPA Bundit THIPAKORN Nipon CHAROENKITKARN
Effective classification and analysis of semantic contents are very important for the content-based indexing and retrieval of video database. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded on the artistic film theories. A unique sieving-structured neural network is proposed to be the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of 97.8% correct classification rate, measured against the collected human-judges, indicates the great potential of using abstract-level semantic features as an engineered tool for the application of video-content retrieval/indexing.
GaoYun AN JiYing WU QiuQi RUAN
In order to overcome the drawback of TVQI and to utilize the property of dimensionality increasing techniques, a novel model for Kernel TV-based Quotient Image employing Gabor analysis is proposed and applied to face recognition with only one sample per subject. To deal with illumination outliers, an enhanced TV-based quotient image (ETVQI) model is first adopted. Then for preprocessed images by ETVQI, a bank of Gabor filters is built to extract features at specified scales and orientations. Lastly, KPCA is introduced to extract final high-order and nonlinear features of extracted Gabor features. According to experiments on the CAS-PEAL face database, our model could outperform Gabor-based KPCA, TVQI and Gabor-based TVQI when they face most outliers (illumination, expression, masking etc.).
Masashi SUGIYAMA Motoaki KAWANABE Gilles BLANCHARD Klaus-Robert MULLER
Obtaining the best linear unbiased estimator (BLUE) of noisy signals is a traditional but powerful approach to noise reduction. Explicitly computing the BLUE usually requires prior knowledge of the noise covariance matrix and of the subspace to which the true signal belongs. However, such prior knowledge is often unavailable in reality, which prevents us from applying the BLUE to real-world problems. To cope with this problem, we give a practical procedure for approximating the BLUE without such prior knowledge. Our additional assumption is that the true signal follows a non-Gaussian distribution while the noise is Gaussian.
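For reference, the explicit form that requires the prior knowledge discussed above is the generalized least-squares expression: if the observation is y = x + n, the true signal x lies in the column space of a known matrix A, and the noise covariance is \Sigma, then

\hat{\mathbf{x}}_{\mathrm{BLUE}} \;=\; A\,(A^{\top}\Sigma^{-1}A)^{-1}A^{\top}\Sigma^{-1}\,\mathbf{y}.

The paper's procedure approximates this estimator without knowing A or \Sigma, under the stated assumption of a non-Gaussian signal and Gaussian noise.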
A motion refinement algorithm is proposed to enhance the efficiency of motion-compensated noise reduction (MCNR). Instead of the vector with minimum distortion, the vector with the minimum distance from the motion vectors of neighboring blocks is selected as the best motion vector among the vectors whose distortion values lie within the range set by the noise level. This motion refinement finds more accurate motion vectors in noisy sequences. MCNR with the proposed algorithm maintains the details of an image sequence very well without blurring or joggling, and it achieves a 10% bit-usage reduction or a 0.5 dB objective quality enhancement in subsequent video coding.
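A minimal sketch of the selection rule described above, under assumptions not stated in the abstract: candidates are (motion vector, distortion) pairs from a block search, the admissible set is everything within a noise-dependent margin of the best distortion, and the component-wise median of the neighboring blocks' vectors is used as the reference.

```python
import numpy as np

def refine_motion_vector(candidates, neighbor_mvs, noise_margin):
    """Pick, among candidates whose distortion is within `noise_margin` of the
    minimum, the vector closest to the neighboring blocks' motion.

    candidates   : list of ((dx, dy), distortion) pairs from the block search
    neighbor_mvs : list of (dx, dy) vectors of already-processed neighbor blocks
    noise_margin : distortion tolerance derived from the noise level (assumed given)
    """
    best_cost = min(cost for _, cost in candidates)
    admissible = [mv for mv, cost in candidates if cost <= best_cost + noise_margin]
    # Component-wise median of the neighbors serves as the reference motion.
    ref = np.median(np.asarray(neighbor_mvs, dtype=float), axis=0)
    return min(admissible,
               key=lambda mv: float(np.hypot(mv[0] - ref[0], mv[1] - ref[1])))
```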
Yeo-Chan YOON So-Young PARK Young-In SONG Hae-Chang RIM Dae-Woong RHEE
In this paper, we propose a new model for automatically constructing an acronym dictionary. The proposed model generates possible acronym candidates from a definition and then verifies each acronym-definition pair with a Naive Bayes classifier based on web documents. In order to achieve high dictionary quality, the proposed model utilizes the characteristics of acronym generation types: a syllable-based generation type, a word-based generation type, and a mixed generation type. Compared with a previous model that recognizes an acronym-definition pair in a document, the proposed model, which verifies a pair in web documents, improves recall by approximately 50% when obtaining acronym-definition pairs from 314 Korean definitions. Also, the proposed model improves the F-measure by 7.25% when verifying acronym-definition candidate pairs by utilizing specialized classifiers based on the characteristics of the acronym generation types.
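As a small illustration of the candidate-generation step (word-based type only; the syllable-based and mixed types for Korean are not modeled), a sketch under assumed span-length parameters:

```python
import itertools

def word_based_candidates(definition, min_words=2, max_words=8):
    """Generate acronym candidates by taking the initial letter of each word
    in every contiguous span of the definition (word-based generation type)."""
    words = definition.split()
    candidates = set()
    for i, j in itertools.combinations(range(len(words) + 1), 2):
        span = words[i:j]
        if min_words <= len(span) <= max_words:
            candidates.add("".join(w[0].upper() for w in span))
    return candidates

# Example: word_based_candidates("hyper text transfer protocol") contains "HTTP".
```

Each generated candidate would then be paired with its definition and passed to the web-document Naive Bayes verifier described in the abstract.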