
Keyword Search Result

[Keyword] CTI (8214 hits)

Results 2061-2080 of 8214

  • Iterative Detection and Decoding of MIMO Signals Using Low-Complexity Soft-In/Soft-Out Detector

    Seokhyun YOON

    PAPER-Wireless Communication Technologies
    Vol: E98-B No:5  Page(s): 890-896

    In this paper, we investigate iterative detection and decoding, a.k.a. turbo detection, for multiple-input multiple-output (MIMO) transmission. Specifically, we consider a low-complexity soft-in/soft-out MIMO detector based on belief propagation over a pair-wise graph that accepts a priori information fed back from a channel decoder. Simulation results confirm that considerable performance improvement can be obtained with only a few detection-and-decoding iterations when convolutional channel coding is used. We also briefly estimate the overall complexity of turbo detectors to support the key argument that the performance of a maximum a posteriori (MAP) detector (without turbo iteration) can be achieved at a much lower computational cost by using the low-complexity soft-in/soft-out MIMO detector under consideration.
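
    As a rough, hedged illustration of the turbo-detection loop described above (not the paper's pair-wise-graph detector or its convolutional code), the Python fragment below shows how extrinsic log-likelihood ratios (LLRs) are exchanged between a soft-in/soft-out detector and a decoder; the exhaustive BPSK MAP detector and the toy repetition "code" are illustrative stand-ins only.

    ```python
    # Structural sketch of iterative (turbo) detection and decoding.
    # The detector and decoder below are trivial stand-ins, not the
    # pair-wise-graph detector or convolutional decoder from the paper.
    import numpy as np

    def detector_extrinsic(y, H, la, noise_var):
        """Hypothetical soft-in/soft-out MIMO detector: exact MAP over
        BPSK hypotheses for a small antenna count, returning extrinsic
        LLRs (posterior LLR minus a priori LLR)."""
        n_tx = H.shape[1]
        llr_post = np.zeros(n_tx)
        for k in range(n_tx):
            num = den = 0.0
            for bits in range(1 << n_tx):
                x = np.array([1.0 if (bits >> i) & 1 else -1.0 for i in range(n_tx)])
                # likelihood times a priori probability of the hypothesis
                metric = np.exp(-np.sum((y - H @ x) ** 2) / (2 * noise_var)
                                + 0.5 * np.sum(x * la))
                if x[k] > 0:
                    num += metric
                else:
                    den += metric
            llr_post[k] = np.log(num / den)
        return llr_post - la            # extrinsic information to the decoder

    def decoder_extrinsic(llr_in):
        """Hypothetical decoder stand-in: a repetition code across the
        transmit antennas, returning extrinsic LLRs."""
        return np.sum(llr_in) - llr_in  # exclude each bit's own contribution

    rng = np.random.default_rng(0)
    n_tx, noise_var = 2, 0.5
    H = rng.normal(size=(n_tx, n_tx))
    x = np.array([1.0, 1.0])                       # repetition-coded bits
    y = H @ x + rng.normal(scale=np.sqrt(noise_var), size=n_tx)

    la = np.zeros(n_tx)                            # a priori LLRs from the decoder
    for it in range(3):                            # a few turbo iterations
        le_det = detector_extrinsic(y, H, la, noise_var)
        la = decoder_extrinsic(le_det)             # fed back to the detector
    print("decisions:", np.sign(le_det + la))
    ```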

  • Two-Switch Voltage Equalizer Using a Series-Resonant Voltage Multiplier Operating in Frequency-Multiplied Discontinuous Conduction Mode for Series-Connected Supercapacitors

    Masatoshi UNO  Akio KUKITA

    PAPER-Energy in Electronics Communications
    Vol: E98-B No:5  Page(s): 842-853

    Cell voltage equalizers are necessary to ensure years of operation and maximize the chargeable/dischargeable energy of series-connected supercapacitors (SCs). A two-switch voltage equalizer using a series-resonant voltage multiplier operating in frequency-multiplied discontinuous conduction mode (DCM) is proposed for series-connected SCs in this paper. The frequency-multiplied mode virtually increases the operation frequency and hence mitigates the negative impact of capacitor impedance mismatch on equalization performance, allowing multi-layer ceramic capacitors (MLCCs) to be used instead of the bulky and costly tantalum capacitors conventionally used when voltage multipliers are employed in equalizers. Furthermore, the DCM operation inherently provides a constant current characteristic, offering protection against excessive currents; this is desirable for SCs, which can be discharged down to 0 V and then behave as a short-circuit load. Experimental equalization tests were performed for eight SCs connected in series under two frequency conditions to verify the improved equalization performance at increased virtual operation frequencies. The standard deviation of cell voltages under the higher-frequency condition was lower than that under the lower-frequency condition, demonstrating superior equalization performance at higher frequencies.

  • Direct Density Ratio Estimation with Convolutional Neural Networks with Application in Outlier Detection

    Hyunha NAM  Masashi SUGIYAMA

    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2015/01/28  Vol: E98-D No:5  Page(s): 1073-1079

    Recently, the ratio of probability density functions was demonstrated to be useful in solving various machine learning tasks such as outlier detection, non-stationarity adaptation, feature selection, and clustering. The key idea of this density ratio approach is that the ratio is estimated directly, so that difficult density estimation is avoided. So far, parametric and non-parametric direct density ratio estimators with various loss functions have been developed, and the kernel least-squares method was demonstrated to be highly useful in terms of both accuracy and computational efficiency. On the other hand, recent studies in pattern recognition have shown that deep architectures such as convolutional neural networks can significantly outperform kernel methods. In this paper, we propose to use a convolutional neural network for density ratio estimation, and experimentally show that the proposed method tends to outperform the kernel-based method in outlier detection on image data.
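
    The abstract does not give the network architecture or training details; as a hedged sketch of the general idea only, the PyTorch fragment below fits a small CNN r(x) ≈ p(x)/q(x) with a least-squares density-ratio objective (as in uLSIF). The layer sizes, data, and hyperparameters are illustrative assumptions, not the paper's setup.

    ```python
    # Minimal sketch of direct density-ratio estimation with a CNN,
    # using a least-squares objective: J = 1/2 E_q[r(x)^2] - E_p[r(x)],
    # whose minimizer is r(x) = p(x)/q(x).
    import torch
    import torch.nn as nn

    class RatioCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, 1), nn.Softplus(),   # keep r(x) >= 0
            )

        def forward(self, x):
            return self.net(x).squeeze(1)

    model = RatioCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Toy stand-ins: x_p ~ numerator density p (e.g. inlier training data),
    # x_q ~ denominator density q (e.g. test data possibly containing outliers).
    x_p = torch.randn(64, 1, 28, 28)
    x_q = torch.randn(64, 1, 28, 28) + 0.5

    for step in range(100):
        r_p = model(x_p)                  # r(x) on numerator samples
        r_q = model(x_q)                  # r(x) on denominator samples
        loss = 0.5 * (r_q ** 2).mean() - r_p.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Small ratio values on the q-samples flag potential outliers.
    print(model(x_q).detach()[:5])
    ```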

  • Semi-Distributed Resource Allocation for Dense Small Cell Networks

    Hong LIU  Yang YANG  Xiumei YANG  Zhengmin ZHANG

    LETTER-Mobile Information Network and Personal Communications
    Vol: E98-A No:5  Page(s): 1140-1143

    Small cell networks have been promoted as an enabling solution to enhance indoor coverage and improve spectral efficiency. Users usually deploy small cells on demand in residential areas or offices, paying no attention to the global network profile. The reduction of cell radius leads to dense deployments, which make resource allocation computationally intractable. In this paper, we develop a semi-distributed resource allocation algorithm by dividing small cell networks into clusters with limited inter-cluster interference and selecting a reference cluster for interference estimation to reduce the degree of coordination. Numerical results show that the proposed algorithm maintains similar system performance while having low complexity and reduced information exchange overheads.
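
    The clustering and reference-cluster rules are not specified in this abstract; purely as an illustrative assumption, the sketch below groups cells greedily so that pairs whose mutual interference exceeds a hypothetical threshold end up in the same cluster, which is one simple way to obtain clusters with limited inter-cluster interference.

    ```python
    # Illustrative sketch only: group small cells into clusters so that
    # strong mutual interference stays inside a cluster.  The threshold
    # and the greedy rule are assumptions, not the paper's algorithm.
    import numpy as np

    def cluster_cells(interference, threshold):
        """interference[i, j]: interference power from cell j received at cell i."""
        n = interference.shape[0]
        cluster_of = [-1] * n
        clusters = []
        for i in range(n):
            if cluster_of[i] != -1:
                continue
            members = [i]
            cluster_of[i] = len(clusters)
            for j in range(i + 1, n):
                strong = max(interference[i, j], interference[j, i]) > threshold
                if cluster_of[j] == -1 and strong:
                    members.append(j)
                    cluster_of[j] = len(clusters)
            clusters.append(members)
        return clusters

    rng = np.random.default_rng(1)
    # Hypothetical interference map for 8 densely deployed cells.
    I = rng.random((8, 8))
    np.fill_diagonal(I, 0.0)
    print(cluster_cells(I, threshold=0.8))
    ```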

  • Noise Tolerant Heart Rate Extraction Algorithm Using Short-Term Autocorrelation for Wearable Healthcare Systems

    Shintaro IZUMI  Masanao NAKANO  Ken YAMASHITA  Yozaburo NAKAI  Hiroshi KAWAGUCHI  Masahiko YOSHIMOTO

    PAPER-Biological Engineering
    Publicized: 2015/01/26  Vol: E98-D No:5  Page(s): 1095-1103

    This report describes a robust method of instantaneous heart rate (IHR) extraction from noisy electrocardiogram (ECG) signals. Generally, R-waves are extracted from the ECG using a threshold, and the IHR is calculated from the interval between R-waves. However, in wearable healthcare systems, noise increases the incidence of missed and false detections because the power consumption and electrode distance are limited to reduce size and weight. To prevent incorrect detection, we propose a short-time autocorrelation (STAC) technique. The proposed method extracts the IHR by determining the search-window shift that maximizes the correlation coefficient between the template window and the search window. Because it exploits the beat-by-beat similarity of the QRS complex waveform, it requires no threshold calculation and is robust in noisy environments. The proposed method was evaluated using the MIT-BIH arrhythmia and noise stress test databases. Simulation results show that the proposed method achieves a state-of-the-art success rate of IHR extraction in a noise stress test using a muscle artifact and a motion artifact.
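
    As a minimal sketch of the STAC idea described above (window lengths and the search range are illustrative guesses, not the paper's settings), the following numpy fragment slides a search window over the ECG and picks the shift that maximizes the correlation coefficient with a template window around the current beat; that shift is the RR interval from which the IHR follows.

    ```python
    # Minimal numpy sketch of short-time autocorrelation (STAC):
    # find the search-window shift that best correlates with a template
    # window around the current beat.  Parameters are illustrative.
    import numpy as np

    def stac_rr_interval(ecg, beat_start, win_len, min_shift, max_shift):
        """Return the shift (in samples) that best aligns the next beat."""
        template = ecg[beat_start:beat_start + win_len]
        best_shift, best_corr = min_shift, -1.0
        for shift in range(min_shift, max_shift + 1):
            search = ecg[beat_start + shift:beat_start + shift + win_len]
            if len(search) < win_len:
                break
            corr = np.corrcoef(template, search)[0, 1]
            if corr > best_corr:
                best_corr, best_shift = corr, shift
        return best_shift

    # Synthetic "ECG": one spiky beat every 0.8 s plus noise, 200 Hz sampling.
    fs = 200
    t = np.arange(0, 10, 1 / fs)
    ecg = np.exp(-((t % 0.8) / 0.02) ** 2) + 0.1 * np.random.randn(len(t))

    rr = stac_rr_interval(ecg, beat_start=0, win_len=int(0.2 * fs),
                          min_shift=int(0.4 * fs), max_shift=int(1.5 * fs))
    print("instantaneous heart rate [bpm]:", 60.0 * fs / rr)
    ```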

  • A Linguistics-Driven Approach to Statistical Parsing for Low-Resourced Languages

    Prachya BOONKWAN  Thepchai SUPNITHI

    PAPER
    Publicized: 2015/01/21  Vol: E98-D No:5  Page(s): 1045-1052

    Developing a practical and accurate statistical parser for low-resourced languages is a hard problem, because it requires large-scale treebanks, which are expensive and labor-intensive to build from scratch. Unsupervised grammar induction theoretically offers a way to overcome this hurdle by learning hidden syntactic structures from raw text automatically. However, the accuracy of grammar induction is still impractically low because frequent collocations of units that are not linguistically associable are commonly found, resulting in dependency attachment errors. We introduce a novel approach to building a statistical parser for low-resourced languages by using language parameters as a guide for grammar induction. The intuition of this paper is that most dependency attachment errors stem from frequently used word orders, which can be captured by a small prescribed set of linguistic constraints, while the rest of the language can be learned statistically by grammar induction. We then show that covering the most frequent grammar rules via our language parameters has a strong impact on parsing accuracy in 12 languages.

  • Evaluating Cooperative ARQ Protocols from the Perspective of Physical Layer Security

    Lei WANG  Xinrong GUAN  Yueming CAI  Weiwei YANG  Wendong YANG

    PAPER-Wireless Communication Technologies
    Vol: E98-B No:5  Page(s): 927-939

    This work investigates the physical layer security of three cooperative automatic-repeat-request (CARQ) protocols: decode-and-forward (DF) CARQ, opportunistic DF (ODF) CARQ, and distributed space-time code (DSTC) CARQ. Assuming that no instantaneous channel state information (CSI) of the legitimate users' channel or the eavesdropper's channel is available at the transmitter, the connection outage performance and secrecy outage performance are derived to evaluate the reliability and security of each CARQ protocol. Then, we redefine the concept of secrecy throughput to evaluate the overall efficiency of the system in terms of maintaining both reliable and secure transmission. Furthermore, through an asymptotic analysis in the high signal-to-noise ratio (SNR) regime, the direct relationship between reliability and security is established via the reliability-security tradeoff (RST). Numerical results verify the analysis and show the efficiency of the CARQ protocols in terms of the improvement in secrecy throughput. More interestingly, increasing the transmit SNR and the maximum number of transmissions of the ARQ protocols may not yield a security performance gain. In addition, the RST results underline the importance of balancing reliability against security, and show the superiority of ODF CARQ in terms of RST.

  • A Study of Effective Replica Reconstruction Schemes for the Hadoop Distributed File System

    Asami HIGAI  Atsuko TAKEFUSA  Hidemoto NAKADA  Masato OGUCHI

    PAPER-Data Engineering, Web Information Systems
    Publicized: 2015/01/13  Vol: E98-D No:4  Page(s): 872-882

    Distributed file systems, which manage large amounts of data over multiple commercially available machines, have attracted attention as management and processing systems for Big Data applications. A distributed file system consists of multiple data nodes and provides reliability and availability by holding multiple replicas of data. Due to system failure or maintenance, a data node may be removed from the system, and the data blocks held by the removed data node are lost. If data blocks are missing, the access load of the other data nodes that hold the lost data blocks increases, and as a result, the performance of data processing over the distributed file system decreases. Therefore, replica reconstruction, which reallocates the missing data blocks, is important for preventing such performance degradation. The Hadoop Distributed File System (HDFS) is a widely used distributed file system. In the HDFS replica reconstruction process, source and destination data nodes for replication are selected randomly. We find that this replica reconstruction scheme is inefficient because data transfer is biased. Therefore, we propose two more effective replica reconstruction schemes that aim to balance the workloads of replication processes. Our proposed replication scheduling strategy assumes that nodes are arranged in a ring, and data blocks are transferred along this one-directional ring structure to minimize the difference in the amount of data transferred by each node. Based on this strategy, we propose two replica reconstruction schemes: an optimization scheme and a heuristic scheme. We have implemented the proposed schemes in HDFS and evaluated them on an actual HDFS cluster. We also conduct experiments on a large-scale environment by simulation. From the experiments in the actual environment, we confirm that the replica reconstruction throughputs of the proposed schemes show a 45% improvement compared to the HDFS default scheme. We also verify that the heuristic scheme is effective because it shows performance comparable to the optimization scheme. Furthermore, the experimental results on the large-scale simulation environment show that while the optimization scheme is unrealistic because a long time is required to find the optimal solution, the heuristic scheme is very efficient because it scales well, improving replica reconstruction throughput by up to 25% compared to the default scheme.
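
    The abstract does not spell out the optimization or heuristic schemes; the sketch below only illustrates the one-directional ring idea with an assumed greedy rule: each lost block is sent from its least-loaded surviving holder to the next node clockwise on the ring that does not already hold it, keeping per-node transfer counts roughly balanced.

    ```python
    # Illustrative sketch of the one-directional ring idea for replica
    # reconstruction.  The greedy source/destination rule is an assumption
    # for illustration, not the paper's exact scheme.
    from collections import defaultdict

    def plan_transfers(nodes, lost_blocks, holders):
        """nodes: ring order; lost_blocks: blocks needing one more replica;
        holders[b]: surviving nodes that still hold block b."""
        pos = {n: i for i, n in enumerate(nodes)}
        sent = defaultdict(int)
        plan = []
        for b in lost_blocks:
            # Pick the least-loaded surviving holder as the source.
            src = min(holders[b], key=lambda n: sent[n])
            # Destination: next node clockwise that does not hold the block.
            i = pos[src]
            while True:
                i = (i + 1) % len(nodes)
                dst = nodes[i]
                if dst not in holders[b]:
                    break
            plan.append((b, src, dst))
            sent[src] += 1
        return plan

    nodes = ["n0", "n1", "n2", "n3"]           # ring order
    holders = {"b1": {"n0"}, "b2": {"n1"}, "b3": {"n0", "n2"}}
    print(plan_transfers(nodes, ["b1", "b2", "b3"], holders))
    ```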

  • Transponder Array System with Universal On-Sheet Reference Scheme for Wireless Mobile Sensor Networks without Battery or Oscillator

    Takahide TERADA  Haruki FUKUDA  Tadahiro KURODA

    PAPER-Analog Signal Processing
    Vol: E98-A No:4  Page(s): 932-941

    A rotating shaft with attached sensors is wrapped in a two-dimensional waveguide sheet through which data and power are transmitted wirelessly. A retrodirective transponder array affixed to the sheet beamforms power to the moving sensor to eliminate the need for a battery. A universal on-sheet reference scheme is proposed for calibrating the transponder circuit delay variation and eliminating the crystal oscillator from the sensor. A base signal transmitted from the on-sheet reference device is used to generate the pilot signal transmitted from the sensor and the power signal transmitted from the transponder. A 0.18-µm CMOS transponder chip and the sheet with couplers were fabricated. The coupler has three resonant frequencies used by the proposed system. The measured propagation gain of the electric field varies by less than ±1.5 dB when the distance between the coupler and the sheet is within 2.0 mm. The measured power transmission efficiency with beamforming is 23 times higher than that without it. Each transponder outputs 1 W or less to provide 3 mW to the sensor.

  • RFID Authentication with Un-Traceability and Forward Secrecy in the Partial-Distributed-Server Model Open Access

    Hung-Yu CHIEN  Tzong-Chen WU  Chien-Lung HSU

    INVITED PAPER
    Publicized: 2014/12/04  Vol: E98-D No:4  Page(s): 750-759

    Secure authentication of low-cost Radio Frequency Identification (RFID) tags with limited resources is a big challenge, especially when anonymity, un-traceability, and forward secrecy are considered simultaneously. The popularity of the Internet of Things (IoT) further amplifies this challenge, as these mobile tags must be authenticated in partial-distributed-server environments. In this paper, we propose an RFID authentication scheme for partial-distributed-server environments. The proposed scheme achieves excellent performance in terms of computational complexity and scalability as well as its security properties.

  • Low Overhead Query Method for the Interface between Geo-Location Database and Secondary User

    Ha-Nguyen TRAN  Hiroshi HARADA

    PAPER-Wireless Communication Technologies
    Vol: E98-B No:4  Page(s): 714-722

    Accessing a geo-location database is one of the approaches for a secondary user (SU) to obtain the list of channels available for its operation. Channel availability is calculated based on information stored in the geo-location database and information submitted by the SU so that primary users (PUs) are protected from harmful interference. The available-channel checking process is modeled as a number of intersection tests between the protected contours of PUs and the operation area of the SU over all potential channels. Existing studies indicated that these intersection tests consume time and introduce overhead to the database, especially when the contours or the operation areas are represented by n-polygons and the number of vertices n is large. This paper presents a novel method of determining available channels that reduces the number of intersection tests. By submitting the SU's preferred channels or the number of channels to be checked to the database, the calculation time and the database load are reduced significantly. This paper also presents analysis and simulation results of the database workload and the average number of channels obtained per query for different query methods. A suitable query method can be selected based on the number of similar channels in neighboring areas and the maximum number of intersection tests.
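
    As a hedged illustration of the proposed low-overhead query (rectangular contours, the data layout, and the early-stop rule are simplifying assumptions), the sketch below tests PU contours only for the SU's preferred channels and stops once the requested number of available channels has been found, which is what cuts the number of intersection tests.

    ```python
    # Sketch of the low-overhead query: test PU protected contours only
    # for the channels the SU asks about, and stop early once enough
    # available channels are found.  Rectangles stand in for n-polygons.

    def rects_intersect(a, b):
        """Axis-aligned rectangles given as (xmin, ymin, xmax, ymax)."""
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def query_available(protected, su_area, preferred, max_needed):
        """protected[ch]: list of PU contours for channel ch."""
        available = []
        for ch in preferred:                       # only SU-preferred channels
            if all(not rects_intersect(su_area, c) for c in protected.get(ch, [])):
                available.append(ch)
                if len(available) >= max_needed:   # stop early: fewer tests
                    break
        return available

    protected = {
        36: [(0, 0, 10, 10)],
        40: [(50, 50, 60, 60)],
        44: [(5, 5, 20, 20)],
    }
    su_area = (12, 12, 15, 15)
    print(query_available(protected, su_area, preferred=[36, 40, 44], max_needed=2))
    ```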

  • Predictive Control for Performance Improvement of a Feedback Control System Using Cyclostationary Channels

    Cesar CARRIZO  Kentaro KOBAYASHI  Hiraku OKADA  Masaaki KATAYAMA

    PAPER-Communication Theory and Signals
    Vol: E98-A No:4  Page(s): 1000-1005

    This manuscript presents a simple scheme to improve the performance of a feedback control system that uses power line channels for its feedback loop. The noise and attenuation of power lines, and thus the signal-to-noise ratio, are known to be cyclostationary. Such cyclic features of the channel allow us to predict virtually error-free transmission instants as well as instants with a high probability of errors. This paper introduces and evaluates the effectiveness of a packet transmission scheduling scheme that collaborates with a predictive control scheme adapted to this cyclostationary environment. In other words, we explore cooperation between the physical and application layers of the system in order to achieve an overall optimization. To assess the control quality of the system, we evaluate its stability as well as its ability to follow control commands accurately. We compare a scheme of increased packet rate against our proposed scheme, which emulates a high packet rate with the use of predictive control. Through this comparison, we verify the effectiveness of the proposed scheme in improving the control quality of the system, even under low signal-to-noise ratio conditions in the cyclostationary channel.

  • Contextual Max Pooling for Human Action Recognition

    Zhong ZHANG  Shuang LIU  Xing MEI

    LETTER-Image Recognition, Computer Vision
    Publicized: 2015/01/19  Vol: E98-D No:4  Page(s): 989-993

    The bag-of-words (BOW) model has been extensively adopted by recent human action recognition methods. The pooling operation, which aggregates local descriptor encodings into a single representation, is a key determiner of the performance of BOW-based methods. However, the spatio-temporal relationship among interest points has rarely been considered in the pooling step, which results in an imprecise representation of human actions. In this paper, we propose a novel pooling strategy named contextual max pooling (CMP) to overcome this limitation. We add a constraint term to the objective function under the framework of max pooling, which forces the weights of interest points to be consistent with their probabilities. In this way, CMP explicitly considers the spatio-temporal contextual relationships among interest points and inherits the positive properties of max pooling. Our method is verified on three challenging datasets (KTH, UCF Sports, and UCF Films), and the results demonstrate that it outperforms state-of-the-art methods in human action recognition.
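
    The exact CMP objective is not given in this abstract; the fragment below only shows the plain pooling step over local descriptor encodings that CMP builds on, with a crude probability-weighted variant included purely as an illustration of the idea, not as the paper's formulation.

    ```python
    # Baseline pooling step that contextual max pooling (CMP) extends:
    # local descriptor encodings are aggregated into one clip-level vector.
    # The "contextual" weighting below is a rough illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    encodings = rng.random((500, 1024))      # 500 interest points, 1024-word codebook
    probabilities = rng.random(500)          # hypothetical spatio-temporal context scores
    probabilities /= probabilities.sum()

    sum_pooled = encodings.sum(axis=0)                               # sum pooling
    max_pooled = encodings.max(axis=0)                               # standard max pooling
    weighted   = (probabilities[:, None] * encodings).max(axis=0)    # crude weighted variant

    print(max_pooled.shape, weighted.shape)
    ```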

  • Fuzzy-Based Adaptive Countering Method against False Data Injection Attacks in Wireless Sensor Networks

    Hae Young LEE

    LETTER-Information Network
    Vol: E98-D No:4  Page(s): 964-967

    This letter presents a method to adaptively counter false data injection attacks (FDIAs) in wireless sensor networks, in which a fuzzy rule-based system detects FDIAs and chooses the most appropriate countermeasures. The method does not require en-route verification processes and manual parameter settings. The effectiveness of the method is shown with simulation results.

  • ROI-Based Reversible Data Hiding Scheme for Medical Images with Tamper Detection

    Yuling LIU  Xinxin QU  Guojiang XIN  Peng LIU

    PAPER-Data Hiding
    Publicized: 2014/12/04  Vol: E98-D No:4  Page(s): 769-774

    A novel ROI-based reversible data hiding scheme is proposed for medical images, which is able to hide the electronic patient record (EPR) and protect the region of interest (ROI) with tamper localization and recovery. The proposed scheme combines prediction-error expansion with a sorting technique for embedding the EPR into the ROI, and the recovery information is embedded into the region of non-interest (RONI) using the histogram shifting (HS) method, which rarely causes overflow and underflow problems. The experimental results show that the proposed scheme not only can embed a large amount of information with low distortion, but also can localize and recover the tampered area inside the ROI.
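
    As background only (this is textbook histogram shifting, not the paper's full ROI/RONI scheme with prediction-error expansion and tamper localization), the sketch below embeds a few payload bits into a toy grayscale block by shifting the histogram bins between the peak bin and an assumed empty bin and then re-using the freed bin next to the peak.

    ```python
    # Generic histogram-shifting (HS) embedding on a grayscale block:
    # pixels strictly between the peak bin and the zero bin are shifted by
    # one, and payload bits are embedded at pixels equal to the peak value.
    import numpy as np

    def hs_embed(block, bits):
        hist = np.bincount(block.ravel(), minlength=256)
        peak = int(hist.argmax())
        zero = int(hist[peak + 1:].argmin()) + peak + 1    # assume an empty bin above the peak
        out = block.copy().astype(np.int32)
        out[(out > peak) & (out < zero)] += 1              # free the bin next to the peak
        it = iter(bits)
        flat = out.ravel()
        for i, v in enumerate(flat):
            if v == peak:
                try:
                    flat[i] = peak + next(it)              # bit 0 keeps peak, bit 1 moves it
                except StopIteration:
                    break
        return flat.reshape(block.shape).astype(np.uint8), peak, zero

    rng = np.random.default_rng(0)
    block = rng.integers(90, 110, size=(16, 16), dtype=np.uint8)   # toy RONI block
    marked, peak, zero = hs_embed(block, bits=[1, 0, 1, 1])
    print("peak/zero bins:", peak, zero)
    ```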

  • Multistage Function Speculation Adders

    Yinan SUN  Yongpan LIU  Zhibo WANG  Huazhong YANG

    PAPER-VLSI Design Technology and CAD
    Vol: E98-A No:4  Page(s): 954-965

    Function speculation design with error recovery mechanisms is quite promising due to its high performance and low area overhead. Previous work has focused on two-stage function speculation and thus lacks a systematic way to address the challenges of the multistage function speculation approach. This paper proposes a multistage function speculation scheme with adaptive predictors and applies it to a novel adder. We deduced the analytical performance and area models for the design and validated them in our experiments. Based on those models, a general methodology is presented to guide design optimization. Both analytical proofs and experimental results on the fabricated chips show that the proposed adder's delay and area grow logarithmically and linearly with its bit width, respectively. Compared with the DesignWare IP, the proposed adder provides the same performance with a 6-17% area reduction under different bit lengths.
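
    As a hedged behavioral sketch of carry speculation in a block-partitioned adder (block and look-back window sizes are illustrative, and this is not the paper's multistage design or its adaptive predictors), the fragment below guesses each block's carry-in from only the few bits just below it and flags mispredictions that would trigger the error recovery mechanism.

    ```python
    # Behavioral sketch of carry speculation in a block-partitioned adder:
    # each block's carry-in is guessed from only the k bits just below it,
    # and a misprediction flag marks cases needing (slower) exact recovery.
    def speculative_add(a, b, width=32, block=8, k=4):
        mask_k = (1 << k) - 1
        result, mispredicted = 0, False
        for lo in range(0, width, block):
            if lo == 0:
                cin_spec = 0
            else:
                # Speculated carry-in: propagate through only the k previous bits.
                pa, pb = (a >> (lo - k)) & mask_k, (b >> (lo - k)) & mask_k
                cin_spec = 1 if pa + pb >= (1 << k) else 0
            # True carry-in (what a full recovery pass would compute).
            cin_true = 1 if (a & ((1 << lo) - 1)) + (b & ((1 << lo) - 1)) >= (1 << lo) else 0
            mispredicted |= (cin_spec != cin_true)
            blk = ((a >> lo) + (b >> lo) + cin_spec) & ((1 << block) - 1)
            result |= blk << lo
        return result & ((1 << width) - 1), mispredicted

    s, err = speculative_add(0x0000FFFF, 0x00000001)
    print(hex(s), "misprediction:", err)   # long carry chain defeats the guess
    ```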

  • Field-Emission Characteristics of a Focused-Ion-Beam-Sharpened P-Type Silicon Single Emitter

    Tomomi YOSHIMOTO  Tatsuo IWATA

    PAPER-Electron Tubes, Vacuum and Beam Technology
    Vol: E98-C No:4  Page(s): 371-376

    The field electron emission characteristics of a p-type Si emitter sharpened by a spirally scanned Ga focused-ion-beam milling process were investigated. Saturated Fowler-Nordheim (F-N) plots, which are a phenomenon unique to p-type semiconductor emitters, were observed. The slight increase of the emission current in the saturated region of the F-N plots was discussed in terms of the width of the depletion layer in which electron generation occurs. The temperature dependence of the field electron emission current was also discussed. The activation energy of carrier generation was determined to be 0.26 eV, ascribable to the surface states that accompany the defects introduced by the Ga ion beam. When the emitter was irradiated by a 650-nm-wavelength laser, an increase in the emission current, i.e., a photoexcited emission current, was observed in the saturated region of the F-N plots. The photoexcited emission current was proportional to the laser intensity.
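
    For standard background (general field-emission theory, not a result of this paper), a Fowler-Nordheim plot graphs ln(J/F²) against 1/F, which is linear for metal emitters; saturation of the plot indicates that emission is limited by the carrier supply in the p-type semiconductor rather than by tunneling:

    ```latex
    % Simplified Fowler-Nordheim relation (standard background):
    %   J : emission current density,  F : local field,  \phi : work function
    J = \frac{A F^{2}}{\phi}\,\exp\!\left(-\frac{B\,\phi^{3/2}}{F}\right)
    \quad\Longrightarrow\quad
    \ln\frac{J}{F^{2}} = \ln\frac{A}{\phi} - B\,\phi^{3/2}\cdot\frac{1}{F}
    ```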

  • Techniques for Measuring Business Process Based on Business Values

    Jihyun LEE  Sungwon KANG

    PAPER-Office Information Systems, e-Business Modeling
    Publicized: 2014/12/26  Vol: E98-D No:4  Page(s): 911-921

    The ultimate purpose of a business process is to promote business values. Thus, any process that fails to enhance or promote business values should be improved or adjusted so that those values can be achieved. Therefore, an organization should have the capability to confirm whether a business value is achieved; furthermore, in order to cope with changes in the business environment, it should be able to define the necessary measures on the basis of business values. This paper proposes techniques for measuring a business process based on business values, which can be used to monitor and control business activities focusing on the attainment of business values. To show the feasibility of the techniques, we compare their monitoring and controlling capabilities with those of the current fulfillment process of a company. The results show that the proposed techniques are effective in linking business values to the relevant processes and integrating each measurement result in accordance with the management level.

  • A New Approach to Identify User Authentication Methods toward SSH Dictionary Attack Detection

    Akihiro SATOH  Yutaka NAKAMURA  Takeshi IKENAGA

    PAPER-Authentication
    Publicized: 2014/12/04  Vol: E98-D No:4  Page(s): 760-768

    A dictionary attack against SSH is a common security threat. Many methods rely on network traffic to detect SSH dictionary attacks because the connections of remote login, file transfer, and TCP/IP forwarding are visibly distinct from those of attacks. However, these methods incorrectly judge the connections of automated operation tasks as attacks due to their mutual similarities. In this paper, we propose a new approach to identify user authentication methods on SSH connections and to remove connections that employ non-keystroke-based authentication. This approach is based on two observations: (1) an SSH dictionary attack targets a host that provides keystroke-based authentication; and (2) automated tasks through SSH need to be supported by non-keystroke-based authentication. Keystroke-based authentication relies on a character string that is input by a human; in contrast, non-keystroke-based authentication relies on information other than a character string. We evaluated the effectiveness of our approach through experiments on real network traffic at the edges of four campus networks, and the experimental results showed that our approach provides high identification accuracy with only a few errors.

  • A New Generic Construction of Proxy Signatures under Enhanced Security Models

    Kee Sung KIM  Ik Rae JEONG

    PAPER-Cryptography and Information Security
    Vol: E98-A No:4  Page(s): 975-981

    A proxy signature scheme allows an entity to delegate its signing capabilities to another. Many schemes have been proposed for use in numerous applications such as distributed computing, grid computing, and mobile communications. In 2003, Boldyreva et al. introduced the first formal security model for proxy signatures and also proposed a generic construction secure in their model. However, an adversary can arbitrarily alter the warrants of the proxy signatures because the warrants are not explicitly considered in their model. To solve this problem, Huang et al. provided an enhanced security model for proxy signatures in 2005. Some proxy signature schemes secure in this model have been proposed, but no generic construction has been given yet. In this paper, we redefine and improve Huang et al.'s security model in a multi-user setting and then provide a new generic construction of proxy signatures, based on ID-based signatures, that is secure in our enhanced security model. Moreover, our result yields a lattice-based proxy signature scheme in the standard model.
