
Keyword Search Result

[Keyword] Al (20498 hits)

5221-5240 hits (20498 hits)

  • Optical absorption characteristics and polarization dependence of single-layer graphene on silicon waveguide Open Access

    Kaori WARABI  Rai KOU  Shinichi TANABE  Tai TSUCHIZAWA  Satoru SUZUKI  Hiroki HIBINO  Hirochika NAKAJIMA  Koji YAMADA  

     
    INVITED PAPER

      Vol:
    E97-C No:7
      Page(s):
    736-743

    Graphene has recently attracted attention in both electrical and optical research fields. We measured the optical absorption characteristics and polarization dependence of single-layer graphene (SLG) on a sub-micrometer Si waveguide. The results for graphene lengths ranging from 2.5 to 200 $\mu$m reveal that the optical absorption by graphene is 0.09 dB/$\mu$m in the TE mode and 0.05 dB/$\mu$m in the TM mode; absorption in the TE mode is thus 1.8 times higher than in the TM mode. The optical spectrum, theoretical analysis, and Raman spectrum indicate that surface-plasmon polaritons in graphene support TM-mode light propagation.
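
    For a sense of scale, the per-length figures above translate directly into total device loss. The short Python sketch below (an illustrative calculation only; the interaction lengths are made-up examples, not additional data from the paper) multiplies the reported absorption coefficients by a chosen graphene length:

        # Total graphene-induced absorption from the reported per-length loss.
        ALPHA_TE = 0.09  # dB/um, TE mode (from the abstract)
        ALPHA_TM = 0.05  # dB/um, TM mode (from the abstract)

        def total_loss_db(length_um, alpha_db_per_um):
            """Absorption scales linearly with the graphene-covered length."""
            return alpha_db_per_um * length_um

        for length in (2.5, 50.0, 200.0):
            print(f"L = {length:6.1f} um: TE {total_loss_db(length, ALPHA_TE):5.2f} dB, "
                  f"TM {total_loss_db(length, ALPHA_TM):5.2f} dB")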

  • A Parallel Maximal Matching Algorithm for Large Graphs Using Pregel

    Byungnam LIM  Yon Dohn CHUNG  

     
    LETTER-Data Engineering, Web Information Systems

      Vol:
    E97-D No:7
      Page(s):
    1910-1913

    Graph matching finds an independent edge set in a graph. It can be used for various purposes such as finding a cover in a graph, chemical structural computations, and multi-level graph partitioning. When a graph is too large to be handled by a single machine, multiple machines must be used. In this paper, we use Pregel, a cloud graph-processing architecture that processes massive-scale graph data in a scalable and fault-tolerant way. We propose a parallel maximal matching algorithm described in Pregel's vertex-centric BSP model. We test our algorithm on an 8-node cluster, and the results show that it achieves high-quality matchings for large graphs in a short time. Moreover, our algorithm scales linearly with the number of machines.
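
    For readers unfamiliar with the underlying notion, the sketch below is a plain sequential greedy routine that produces a maximal matching on a single machine (an illustration of the problem only; it does not reproduce the paper's vertex-centric Pregel/BSP algorithm):

        def greedy_maximal_matching(edges):
            """Return a maximal matching: no two chosen edges share a vertex,
            and no remaining edge can be added without breaking that property."""
            matched = set()    # vertices already covered by the matching
            matching = []
            for u, v in edges:
                if u not in matched and v not in matched:
                    matching.append((u, v))
                    matched.update((u, v))
            return matching

        # Example usage on a small graph given as an edge list.
        print(greedy_maximal_matching([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]))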

  • Novel 16-QAM Golay Complementary Sequences

    Fanxin ZENG  Xiaoping ZENG  Zhenyu ZHANG  Guixin XUAN  

     
    LETTER-Information Theory

      Vol:
    E97-A No:7
      Page(s):
    1631-1634

    Based on non-standard generalized Boolean functions (GBFs) over Z4, we propose a new method to convert those functions into 16-QAM Golay complementary sequences (CSs). The resultant 16-QAM Golay CSs have a peak-to-mean envelope power ratio (PMEPR) upper bound as low as 2. In addition, we obtain multiple 16-QAM Golay CSs for a given quadrature phase shift keying (QPSK) Golay CS.
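
    The PMEPR bound quoted here can be checked numerically for any candidate sequence. The sketch below (a generic check under the usual OFDM envelope definition, not the paper's GBF-based construction) estimates PMEPR from an oversampled DFT:

        import numpy as np

        def pmepr(seq, oversample=16):
            """Peak-to-mean envelope power ratio of a complex-valued sequence,
            estimated by sampling its OFDM-style envelope on a dense time grid."""
            seq = np.asarray(seq, dtype=complex)
            n = len(seq)
            envelope = np.fft.ifft(seq, n * oversample) * n * oversample
            return (np.abs(envelope) ** 2).max() / np.sum(np.abs(seq) ** 2)

        # Example: a length-8 binary Golay sequence; its PMEPR is bounded by 2.
        print(round(pmepr([1, 1, 1, -1, 1, 1, -1, 1]), 3))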

  • A Variable Step-Size Feedback Cancellation Algorithm Based on GSAP for Digital Hearing Aids

    Hongsub AN  Hyeonmin SHIM  Jangwoo KWON  Sangmin LEE  

     
    LETTER-Digital Signal Processing

      Vol:
    E97-A No:7
      Page(s):
    1615-1618

    Acoustic feedback is a major complaint of hearing aid users, and adaptive filters are a common method for suppressing acoustic feedback in digital hearing aids. In this letter, we propose a new variable step-size algorithm for normalized least-mean-square (NLMS) and affine projection adaptive filtering, which combines a variable step-size affine projection algorithm with the global speech absence probability (GSAP). In computer simulations, the proposed algorithm yields a lower misalignment error than the comparison algorithm at a similar convergence rate, and therefore offers an effective solution for the feedback suppression system of digital hearing aids.
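
    For context, the adaptation step that such variants build on is the NLMS update. The sketch below is a generic NLMS feedback-path estimator with a tunable step size (an illustrative baseline under assumed signal names, not the GSAP-controlled algorithm proposed in the letter):

        import numpy as np

        def nlms(x, d, taps=32, mu=0.5, eps=1e-8):
            """Generic NLMS adaptive filter estimating the path from x to d.
            Returns the error (cancelled) signal and the final filter weights."""
            w = np.zeros(taps)
            e = np.zeros(len(x))
            for n in range(taps, len(x)):
                u = x[n - taps:n][::-1]                 # most recent input samples
                e[n] = d[n] - w @ u                     # cancellation error
                w += (mu / (eps + u @ u)) * e[n] * u    # normalized step-size update
            return e, w

    A variable step-size scheme would replace the fixed mu with a value recomputed every iteration, for example reduced when speech is judged likely to be present.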

  • Verification of Moore's Law Using Actual Semiconductor Production Data

    Junichi HIRASE  

     
    PAPER-Semiconductor Materials and Devices

      Vol:
    E97-C No:6
      Page(s):
    599-608

    One of the technological innovations that has enabled the VLSI semiconductor industry to reduce transistor size, increase the number of transistors per die, and follow Moore's law year after year is that an equivalent yield and equivalent testing quality have been ensured for the same die size. This has contributed to reducing the economically optimum production cost (production cost per component) advocated by Moore. In this paper, we verify Moore's law using actual values from VLSI manufacturing sites while introducing some of the technical progress that occurred from 1970 to 2010.
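
    As a reminder of the relationship being verified, Moore's observation is an exponential growth law. The sketch below evaluates the usual fixed-doubling-period form (an illustrative projection only; the paper's verification relies on actual production data, not on this formula):

        def moore_projection(n0, year0, year, doubling_years=2.0):
            """Project transistor count per die assuming a fixed doubling period."""
            return n0 * 2 ** ((year - year0) / doubling_years)

        # Hypothetical example: starting from about 2,300 transistors in 1971,
        # a two-year doubling period projects the count for the year 2000.
        print(f"{moore_projection(2300, 1971, 2000):.3e}")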

  • Utilizing Global Syntactic Tree Features for Phrase Reordering

    Yeon-Soo LEE  Hyoung-Gyu LEE  Hae-Chang RIM  Young-Sook HWANG  

     
    LETTER-Natural Language Processing

      Vol:
    E97-D No:6
      Page(s):
    1694-1698

    In phrase-based statistical machine translation, the long-distance reordering problem is one of the most challenging issues when translating syntactically distant language pairs. In this paper, we propose a novel reordering model to solve this problem. In our model, reordering is affected by the overall structure of a sentence, such as listings, reduplications, and modifications, as well as by the relationships of adjacent phrases. To this end, we reflect global syntactic contexts, including the parts that have not yet been translated, during the decoding process.

  • Illumination Normalization-Based Face Detection under Varying Illumination

    Min YAO  Hiroshi NAGAHASHI  Kota AOKI  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E97-D No:6
      Page(s):
    1590-1598

    A number of well-known learning-based face detectors achieve extraordinary performance in controlled environments, but face detection under varying illumination remains challenging. Possible solutions to this illumination problem are to create illumination-invariant features or to utilize skin color information; however, such features and skin colors are not sufficiently reliable under difficult lighting conditions. Another possible solution is to apply illumination normalization (e.g., Histogram Equalization (HE)) before executing the face detector, but applications of normalization to face detection have not been widely studied in the literature. This paper applies and evaluates various existing normalization methods under a framework that combines illumination normalization with two learning-based face detectors (a Haar-like face detector and an LBP face detector). These methods were initially proposed for different purposes (face recognition or image quality enhancement), but some of them significantly improve the original face detectors and lead to better performance than HE according to comparative experiments on two databases. We also propose a new normalization method called segmentation-based half histogram stretching and truncation (SH) for face detection under varying illumination. It first employs Otsu's method to segment the histogram (intensities) of the input image into several spans and then redistributes the segmented spans. In this way, non-uniform illumination can be efficiently compensated and local facial structures can be appropriately enhanced. Our method obtains good performance in the experiments.
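
    The overall pipeline (normalize illumination first, then run an off-the-shelf learned detector) can be sketched with standard OpenCV calls. The snippet below uses plain histogram equalization and a Haar cascade as stand-ins; the file name is hypothetical and the paper's SH normalization is not reproduced here:

        import cv2

        # Hypothetical input image; any grayscale test photo will do.
        img = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)
        normalized = cv2.equalizeHist(img)   # illumination normalization (HE baseline)

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(normalized, scaleFactor=1.1, minNeighbors=5)
        print(f"{len(faces)} face(s) detected")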

  • Implicit Generation of Pattern-Avoiding Permutations by Using Permutation Decision Diagrams

    Yuma INOUE  Takahisa TODA  Shin-ichi MINATO  

     
    PAPER

      Vol:
    E97-A No:6
      Page(s):
    1171-1179

    Pattern-avoiding permutations are permutations in which no subsequence matches the relative order of a given pattern. They are related to practical and abstract mathematical problems and can provide simple representations for such problems. For example, some floorplans, which are used for optimizing very-large-scale integration (VLSI) circuit design, can be encoded as pattern-avoiding permutations. The generation of pattern-avoiding permutations is therefore an important topic both in efficient VLSI design and in the mathematical analysis of pattern-avoiding permutations. In this paper, we present an algorithm for generating pattern-avoiding permutations, and extend this algorithm beyond classical patterns to generalized patterns with more restrictions. Our approach is based on πDDs, a data structure that can represent a set of permutations compactly and supports useful set operations. We demonstrate the efficiency of our algorithm through computational experiments.
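
    For reference, classical pattern avoidance is easy to state in code. The naive check below tests whether any subsequence of a permutation matches the relative order of a pattern (a brute-force illustration only, not the πDD-based generation algorithm of the paper):

        from itertools import combinations

        def avoids(perm, pattern):
            """True if no subsequence of perm has the same relative order as pattern."""
            k = len(pattern)
            order = sorted(range(k), key=lambda i: pattern[i])
            for idx in combinations(range(len(perm)), k):
                sub = [perm[i] for i in idx]
                if sorted(range(k), key=lambda i: sub[i]) == order:
                    return False   # found an occurrence of the pattern
            return True

        # (1,3,2,4) contains the pattern 132, while (1,2,3,4) avoids it.
        print(avoids((1, 3, 2, 4), (1, 3, 2)), avoids((1, 2, 3, 4), (1, 3, 2)))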

  • #P-hardness of Computing High Order Derivative and Its Logarithm

    Ei ANDO  

     
    LETTER

      Vol:
    E97-A No:6
      Page(s):
    1382-1384

    In this paper, we show a connection between #P and computing the (real) value of a high-order derivative at the origin. Consider, as a problem instance, an integer b and a sufficiently often differentiable function F(x) given as a string. We then consider computing the value F^(b)(0) of the b-th derivative of F(x) at the origin. Using a polynomial as an example, we show that FP = #P if we can compute log_2 F^(b)(0) up to a certain precision. This statement holds even if F(x) is restricted to functions that are analytic at every x ∈ R. It implies the hardness of computing the b-th term of a number sequence from the closed form of its generating function.
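
    The connection rests on the standard fact that for a power series F(x) = Σ a_k x^k, the b-th derivative at the origin recovers a coefficient: F^(b)(0) = b!·a_b. The sketch below checks this on a small polynomial with SymPy (a toy illustration, not the hardness construction of the paper):

        import sympy as sp
        from math import factorial

        x = sp.symbols('x')
        F = (1 + x) ** 5    # toy generating function; the coefficient of x^3 is C(5,3) = 10
        b = 3

        deriv_at_0 = sp.diff(F, x, b).subs(x, 0)      # F^(b)(0) = 60
        print(deriv_at_0, deriv_at_0 / factorial(b))  # 60 and the recovered coefficient 10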

  • Expressing Algorithms as Concise as Possible via Computability Logic

    Keehang KWON  

     
    LETTER

      Vol:
    E97-A No:6
      Page(s):
    1385-1387

    This paper proposes a new approach to defining and expressing algorithms: the notion of task logical algorithms. This notion allows the user to define an algorithm for a task T as a set of agents that can collectively perform T. It considerably simplifies the algorithm development process and can be seen as an integration of sequential pseudocode and logical algorithms. This observation requires some changes to the algorithm development process. We propose a two-step approach: the first step is to define an algorithm for a task T via a set of agents that can collectively perform T; the second step is to translate these agents into (higher-order) computability logic.

  • Fingerprint Verification and Identification Based on Local Geometric Invariants Constructed from Minutiae Points and Augmented with Global Directional Filterbank Features

    Chuchart PINTAVIROOJ  Fernand S. COHEN  Woranut IAMPA  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E97-D No:6
      Page(s):
    1599-1613

    This paper addresses the problems of fingerprint identification and verification when a query fingerprint is taken under conditions that differ from those under which the fingerprint of the same person stored in a database was acquired. This occurs when a different fingerprint scanner is used or a different pressure is applied, resulting in a fingerprint impression that is smeared and distorted in accordance with a geometric transformation (e.g., affine or even non-linear). Minutiae points on a query fingerprint are matched and aligned to those on one of the fingerprints in the database using a set of absolute invariants constructed from the shape and/or size of minutiae triangles, depending on the assumed map. Once the best candidate match is declared and the corresponding minutiae points are flagged, the query fingerprint image is warped against the candidate fingerprint image in accordance with the estimated warping map. An identification/verification cost function using a combination of distance-map and global directional filterbank (DFB) features is then utilized to verify and identify the query fingerprint against the candidate fingerprint(s). The algorithm yields an area under the receiver operating characteristic (ROC) curve of 0.99967 (perfect classification corresponds to 1) on a database consisting of 1680 fingerprint images captured from 240 fingers, and the average probability of error is 0.713%. Our algorithm also yields the smallest false non-match rate (FNMR) for a comparable false match rate (FMR) when compared to the well-known technique of DFB features and triangulation-based matching integrated with non-linear deformation modeling. This work represents an advance in resolving the fingerprint identification problem beyond state-of-the-art approaches in both performance and robustness.
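
    The core idea of matching with geometric invariants can be illustrated on a very small scale: under a similarity transform (rotation, translation, uniform scaling), the ratios of a minutiae triangle's side lengths are unchanged. The sketch below computes such a ratio-based signature (a simplified illustration under the similarity assumption only, not the affine or non-linear invariants used in the paper):

        import math

        def triangle_signature(p1, p2, p3):
            """Sorted side lengths divided by the perimeter: invariant to rotation,
            translation, and uniform scaling of the three minutiae points."""
            pts = (p1, p2, p3)
            sides = sorted(math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3))
            perimeter = sum(sides)
            return tuple(s / perimeter for s in sides)

        # The signature is unchanged when the triangle is translated and scaled.
        print(triangle_signature((0, 0), (4, 0), (0, 3)))
        print(triangle_signature((10, 10), (18, 10), (10, 16)))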

  • Creating Stories from Socially Curated Microblog Messages

    Akisato KIMURA  Kevin DUH  Tsutomu HIRAO  Katsuhiko ISHIGURO  Tomoharu IWATA  Albert AU YEUNG  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E97-D No:6
      Page(s):
    1557-1566

    Social media such as microblogs have become so pervasive that it is now possible to use them as sensors for real-world events and memes. While much recent research has focused on developing automatic methods for filtering and summarizing these data streams, we explore a different trend called social curation. In contrast to automatic methods, social curation is characterized as a human-in-the-loop and sometimes crowd-sourced mechanism for exploiting social media as sensors. Although social curation web services like Togetter, Naver Matome and Storify are gaining popularity, little academic research has studied the phenomenon. In this paper, our goal is to investigate the phenomenon and potential of this new field of social curation. First, we perform an in-depth analysis of a large corpus of curated microblog data, seeking to understand why and how people participate in this laborious curation process. We then explore new ways in which information retrieval and machine learning technologies can be used to assist curators. In particular, we propose a novel method based on a learning-to-rank framework that increases the curator's productivity and breadth of perspective by suggesting which novel microblogs should be added to the curated content.
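
    As background for the suggestion task, a pairwise learning-to-rank model can be built from an ordinary binary classifier applied to feature differences. The sketch below trains such a ranker on synthetic features (a generic illustration only; the feature set, data, and model of the paper are not reproduced):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic feature vectors for microblogs a curator did / did not add.
        pos = rng.normal(1.0, 1.0, size=(200, 5))
        neg = rng.normal(0.0, 1.0, size=(200, 5))

        # Pairwise transform: classify the sign of (added - not added) differences.
        diffs = np.vstack([pos - neg, neg - pos])
        labels = np.array([1] * len(pos) + [0] * len(neg))
        ranker = LogisticRegression().fit(diffs, labels)

        # Score unseen microblogs; higher-scoring ones would be suggested first.
        candidates = rng.normal(0.5, 1.0, size=(5, 5))
        print(candidates @ ranker.coef_.ravel())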

  • Motion Pattern Study and Analysis from Video Monitoring Trajectory

    Kai KANG  Weibin LIU  Weiwei XING  

     
    PAPER-Pattern Recognition

      Vol:
    E97-D No:6
      Page(s):
    1574-1582

    This paper introduces an unsupervised method for motion pattern learning and abnormality detection from video surveillance. In the preprocessing steps, trajectories are segmented based on their locations, and the sub-trajectories are represented as codebooks. Under our framework, Hidden Markov Models (HMMs) are used to characterize the motion pattern features of the trajectory groups. The state of a trajectory is represented by an HMM and has a probability distribution over the possible output sub-trajectories. The Bayesian Information Criterion (BIC) is introduced to measure the similarity between groups. Based on the pairwise similarity scores, an affinity matrix is constructed that indicates the distance between different trajectory groups. An Adaptable Dynamic Hierarchical Clustering (ADHC) tree is proposed to gradually merge the most similar groups and form the trajectory motion patterns; it implements a simpler and more tractable dynamic clustering procedure for updating the clustering results with lower time complexity and avoids the traditional overfitting problem. Using the HMMs generated for the obtained trajectory motion patterns, we recognize motion patterns and detect anomalies by computing the likelihood of a given trajectory: the HMM with the maximum likelihood indicates the pattern, and a likelihood below a threshold suggests an anomaly. Experiments are performed on EIFPD trajectory datasets from a structureless scene, where pedestrians choose their walking paths randomly. The experimental results show that our method can accurately learn motion patterns and detect anomalies with better performance.
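
    For concreteness, fitting an HMM to a trajectory and scoring it with BIC can be done with the hmmlearn package. The short sketch below shows the log-likelihood and BIC computation on a synthetic 2-D trajectory (an illustrative assumption; the paper's codebook representation and ADHC procedure are not reproduced):

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(1)
        X = np.cumsum(rng.normal(size=(300, 2)), axis=0)   # synthetic 2-D trajectory

        model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        model.fit(X)

        log_l = model.score(X)                                     # log-likelihood
        n_params = (model.n_components * (model.n_components - 1)  # transition probs
                    + model.n_components * 2 * X.shape[1]          # means + diag covars
                    + model.n_components - 1)                      # initial distribution
        bic = -2 * log_l + n_params * np.log(len(X))
        print(log_l, bic)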

  • A Novel Adaptive Unambiguous Acquisition Scheme for CBOC Signal Based on Galileo

    Ce LIANG  Xiyan SUN  Yuanfa JI  Qinghua LIU  Guisheng LIAO  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E97-B No:6
      Page(s):
    1157-1165

    The composite binary offset carrier (CBOC) modulated signal contains multiple peaks in its auto-correlation function, which introduces ambiguity into the signal acquisition process of a GNSS receiver. Currently, most traditional ambiguity-removing schemes for CBOC signal acquisition approximate the CBOC signal as a BOC signal, which may incur performance degradation. Based on the Galileo E1 CBOC signal, this paper proposes a novel adaptive ambiguity-removing acquisition scheme that does not adopt the approximation used in traditional schemes. According to the energy ratio of each sub-code of the CBOC signal, the proposed scheme can self-adjust its local reference code to achieve unambiguous and precise signal synchronization. Monte Carlo simulations are conducted to analyze the performance of the proposed scheme and three traditional schemes. Simulation results show that the proposed scheme has a higher detection probability and a shorter mean acquisition time than the other three schemes, verifying its superiority.
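
    The acquisition ambiguity stems from the side peaks that BOC-family subcarriers introduce into the code auto-correlation. The short sketch below reproduces that multi-peak shape for a plain BOC(1,1) signal (a simplified stand-in for illustration; it is neither the CBOC waveform nor the proposed scheme):

        import numpy as np

        rng = np.random.default_rng(2)
        spc = 8                                    # samples per chip
        code = rng.choice([-1.0, 1.0], size=1023)  # toy PRN code, one value per chip
        chips = np.repeat(code, spc)

        # BOC(1,1): one square-wave subcarrier period per chip flips the second half-chip.
        subcarrier = np.tile(np.r_[np.ones(spc // 2), -np.ones(spc // 2)], len(code))
        signal = chips * subcarrier

        # The normalized auto-correlation shows a main peak at zero lag flanked by
        # negative side peaks near +/- 0.5 chip, which cause the acquisition ambiguity.
        for k in range(-spc, spc + 1):
            r = np.dot(signal, np.roll(signal, k)) / signal.size
            print(f"lag {k / spc:+.3f} chip: {r:+.2f}")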

  • Effects of Voluntary Movements on Audio-Tactile Temporal Order Judgment

    Atsuhiro NISHI  Masanori YOKOYAMA  Ken-ichiro OGAWA  Taiki OGATA  Takayuki NOZAWA  Yoshihiro MIYAKE  

     
    PAPER-Office Information Systems, e-Business Modeling

      Vol:
    E97-D No:6
      Page(s):
    1567-1573

    The present study investigates the effect of voluntary movements on human temporal perception in multisensory integration. We performed temporal order judgment (TOJ) tasks in audio-tactile integration under three conditions: no movement, involuntary movement, and voluntary movement. It is known that the point of subjective simultaneity (PSS) under the no-movement condition, that is, in normal TOJ tasks, occurs when a tactile stimulus is presented before an auditory stimulus. Our experiment showed that involuntary and voluntary movements shift the PSS to a value that reduces the interval between the presentations of the auditory and tactile stimuli, and that the shift under the voluntary movement condition is greater than that under the involuntary movement condition. Remarkably, the PSS under the voluntary movement condition occurs when an auditory stimulus slightly precedes a tactile stimulus. In addition, the just noticeable difference (JND) under the voluntary movement condition was smaller than those under the other two conditions. These results reveal that voluntary movements alter the temporal integration of audio-tactile stimuli. In particular, they suggest that voluntary movements reverse the perceived temporal order of auditory and tactile stimuli and improve the resolution of temporal perception. We discuss the functional mechanism by which voluntary movements shift the PSS observed under the no-movement condition in audio-tactile integration.
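
    PSS and JND are typically read off a psychometric function fitted to the proportion of "audio first" responses. The sketch below fits a cumulative Gaussian with SciPy and derives both quantities from the fit (illustrative synthetic data, not the study's measurements):

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Stimulus onset asynchrony (ms, audio minus tactile) and synthetic response rates.
        soa = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)
        p_audio_first = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.98])

        def psychometric(x, pss, sigma):
            return norm.cdf(x, loc=pss, scale=sigma)

        (pss, sigma), _ = curve_fit(psychometric, soa, p_audio_first, p0=(0.0, 50.0))

        # JND taken here as half the 25%-75% interval of the fitted curve.
        jnd = 0.6745 * sigma
        print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")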

  • Recovering RSA Secret Keys from Noisy Key Bits with Erasures and Errors

    Noboru KUNIHIRO  Naoyuki SHINOHARA  Tetsuya IZU  

     
    PAPER

      Vol:
    E97-A No:6
      Page(s):
    1273-1284

    We discuss how to recover RSA secret keys from noisy key bits with erasures and errors. Two algorithms are known for recovering the original secret keys from noisy keys. At Crypto 2009, Heninger and Shacham proposed a method for the case where an erroneous version of the secret key contains only erasures; at Crypto 2010, Henecka et al. proposed a method for an erroneous version containing only errors. For physical attacks such as side-channel and cold boot attacks, we need to study key recovery from a noisy secret key containing both erasures and errors. In this paper, we propose a method to recover the secret key from such an erroneous version and analyze the conditions on the error and erasure rates under which our algorithm finds the correct secret key in polynomial time. We also evaluate a theoretical bound for recovering the secret key and discuss to what extent our algorithm achieves this bound.
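
    The flavor of such recovery algorithms can be conveyed by a small branch-and-prune example: candidate factors are grown bit by bit under the constraint p*q ≡ N (mod 2^i) and pruned against the non-erased noisy bits (a toy, erasure-only, Heninger-Shacham-style sketch using only p and q; it is not the combined erasure-and-error algorithm of the paper):

        import random

        def recover_factors(N, noisy_p, noisy_q, nbits):
            """noisy_p/noisy_q: bits of p and q (0, 1, or None if erased), LSB first."""
            cands = [(1, 1)]                       # p and q are odd, so bit 0 is 1
            for i in range(1, nbits):
                mod = 1 << (i + 1)
                nxt = []
                for p, q in cands:
                    for bp in (0, 1):
                        if noisy_p[i] is not None and bp != noisy_p[i]:
                            continue
                        for bq in (0, 1):
                            if noisy_q[i] is not None and bq != noisy_q[i]:
                                continue
                            pp, qq = p | (bp << i), q | (bq << i)
                            if (pp * qq) % mod == N % mod:   # keep consistent partial keys
                                nxt.append((pp, qq))
                cands = nxt
            return [(p, q) for p, q in cands if p * q == N]

        # Toy example: 16-bit primes with 40% of the key bits erased.
        random.seed(0)
        p, q = 40009, 40111
        N, nbits = p * q, 16
        erase = lambda v: [None if random.random() < 0.4 else (v >> i) & 1 for i in range(nbits)]
        print(recover_factors(N, erase(p), erase(q), nbits))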

  • Dynamic Check Message Majority-Logic Decoding Algorithm for Non-binary LDPC Codes

    Yichao LU  Xiao PENG  Guifen TIAN  Satoshi GOTO  

     
    PAPER

      Vol:
    E97-A No:6
      Page(s):
    1356-1364

    Majority-logic algorithms are devised for decoding non-binary LDPC codes in order to reduce computational complexity. However, compared with conventional belief-propagation algorithms, majority-logic algorithms suffer from severe bit error performance degradation. This paper presents a low-complexity reliability-based algorithm aimed at improving the error-correcting ability of majority-logic algorithms. Reliability measures for check nodes are newly introduced to realize mutual updates between variable messages and check messages, and hence more efficient reliability propagation can be achieved, similar to the belief-propagation algorithm. Simulation results on NB-LDPC codes with different characteristics demonstrate that, compared with both the ISRB-MLGD and IISRB-MLGD algorithms, our algorithm can reduce the bit error ratio by more than one order of magnitude, and the coding gain over ISRB-MLGD can reach 0.2-2.0 dB. Moreover, simulations on typical LDPC codes show that the computational complexity of the proposed algorithm is close to that of the ISRB-MLGD algorithm and less than 10% of that of the Min-max algorithm. As a result, the proposed algorithm achieves a more efficient trade-off between decoding computational complexity and error performance.

  • Practical and Exposure-Resilient Hierarchical ID-Based Authenticated Key Exchange without Random Oracles

    Kazuki YONEYAMA  

     
    PAPER

      Vol:
    E97-A No:6
      Page(s):
    1335-1344

    ID-based authenticated key exchange (ID-AKE) is a cryptographic tool for establishing a common session key between parties with authentication based on their IDs. If IDs contain a hierarchical structure, such as an e-mail address, hierarchical ID-AKE (HID-AKE) is especially suitable because of its scalability. However, most existing HID-AKE schemes do not satisfy advanced security properties such as forward secrecy, and the only known strongly secure HID-AKE scheme is inefficient. In this paper, we propose a new HID-AKE scheme that achieves both strong security and efficiency. We prove that our scheme is eCK-secure (which ensures maximal exposure resilience, including forward secrecy) without random oracles, whereas existing schemes are proven secure only in the random oracle model. Moreover, the numbers of messages and pairing operations are independent of the hierarchy depth; that is, the scheme is truly scalable and practical for large systems.

  • A Low-Cost Stimulus Design for Linearity Test in SAR ADCs

    An-Sheng CHAO  Cheng-Wu LIN  Hsin-Wen TING  Soon-Jyh CHANG  

     
    PAPER

      Vol:
    E97-C No:6
      Page(s):
    538-545

    The proposed stimulus design for linearity testing is embedded in a differential successive approximation register analog-to-digital converter (SAR ADC), i.e., a design-for-testability (DFT) approach. The proposed DFT is compatible with the pattern generator (PG) and output response analyzer (ORA) at the cost of 12.4% of the SAR ADC area. The 10-bit SAR ADC prototype is verified in a 0.18-µm CMOS technology, and the measured differential nonlinearity (DNL) error is between -0.386 and 0.281 LSB at 1 MS/s.
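
    For reference, DNL figures like these are commonly extracted with a histogram (code-density) test. The sketch below computes per-code DNL from output-code counts under a uniform ramp input (a generic post-processing illustration, not the on-chip PG/ORA of the paper):

        import numpy as np

        def dnl_from_histogram(codes, n_bits=10):
            """Differential nonlinearity (in LSB) from a code-density test with a
            uniformly distributed (ramp) input covering the full-scale range."""
            hist = np.bincount(codes, minlength=2 ** n_bits).astype(float)
            inner = hist[1:-1]        # end codes are excluded (clipped at the rails)
            return inner / inner.mean() - 1.0

        # Synthetic example: an ideal 10-bit ADC digitizing a dense ramp.
        ramp = np.linspace(0, 1, 200_000)
        codes = np.clip((ramp * 1024).astype(int), 0, 1023)
        dnl = dnl_from_histogram(codes)
        print(f"DNL range: {dnl.min():+.3f} to {dnl.max():+.3f} LSB")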

  • Comparison of Calculation Techniques for Q-Factor Determination of Resonant Structures Based on Influence of VNA Measurement Uncertainty

    Yuto KATO  Masahiro HORIBE  

     
    PAPER-Microwaves, Millimeter-Waves

      Vol:
    E97-C No:6
      Page(s):
    575-582

    Four calculation techniques for determining the Q-factor of resonant structures are compared on the basis of the influence of VNA measurement uncertainty, which is evaluated using Monte Carlo calculations. On the basis of the deviation, the dispersion, and the effect of nearby resonances, the circle fitting method is the most appropriate technique. Although the 3 dB method is the most popular technique, the Q-factors calculated by this method exhibit deviations, and the sign and magnitude of the deviation depend on the measurement setup. Comparisons using measurement data demonstrate that the uncertainty of the dielectric loss tangent calculated by the circle fitting method is less than a third of that calculated by the other three techniques.
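
    As a point of reference for the comparison, the 3 dB method reduces to Q = f0 / Δf, where Δf is the half-power bandwidth of the resonance. The sketch below applies it to a synthetic Lorentzian |S21| trace (an illustration of the baseline method only, not the circle fitting procedure favored by the paper):

        import numpy as np

        # Synthetic Lorentzian transmission magnitude around a 10 GHz resonance, Q = 5000.
        f0, q_true = 10e9, 5000.0
        freq = np.linspace(f0 - 20e6, f0 + 20e6, 20001)
        s21 = 1.0 / np.sqrt(1.0 + (2.0 * q_true * (freq - f0) / f0) ** 2)

        # 3 dB method: bandwidth where the transmitted power drops to half the peak.
        peak = s21.max()
        above = freq[s21 >= peak / np.sqrt(2.0)]
        bandwidth = above[-1] - above[0]
        print(f"Q (3 dB method) = {freq[s21.argmax()] / bandwidth:.0f}")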
