
Keyword Search Results

[Keyword] SI (16,314 hits)

Results 2461-2480 of 16,314

  • Non-Blind Deconvolution of Point Cloud Attributes in Graph Spectral Domain

    Kaoru YAMAMOTO  Masaki ONUKI  Yuichi TANAKA  

    PAPER
    Vol: E100-A No:9  Page(s): 1751-1759

    We propose a non-blind deconvolution algorithm for point cloud attributes inspired by multi-Wiener SURE-LET deconvolution for images. The image reconstructed by the SURE-LET approach is expressed as a linear combination of multiple filtered images, where the filters are defined in the frequency domain. The coefficients of the linear combination are calculated so that the estimate of the mean squared error between the original and restored images is minimized. Although the approach is very effective, it is only applicable to images. Increasingly, however, we have to handle signals on irregular grids, e.g., texture data on 3D models, which are often blurred due to diffusion or object motion. Image processing-based approaches cannot be used directly, since such high-dimensional signals have no conventional frequency domain. To overcome this problem, we use graph signal processing (GSP) to deblur the complex-structured data. That is, the SURE-LET approach is redefined on GSP: Wiener-like filtering is followed by subband decomposition with an analysis graph filter bank, and thresholding is then performed in each subband. In the experiments, the proposed method is applied to blurred textures on 3D models and to synthetic sparse data. The results show clearly deblurred signals with SNR improvements.
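
    As an illustration of the image-domain idea the abstract builds on, the following minimal NumPy sketch forms a bank of Wiener-like filters and fits their linear-combination weights. It is not the paper's graph-domain method: the regularization values are arbitrary, and the weights are fit against an oracle image in place of the SURE estimate of the MSE.

    import numpy as np

    def multi_wiener_let(blurred, kernel_fft, noise_var, lambdas, oracle):
        """Toy multi-Wiener linear-combination deconvolution (image domain).

        The SURE-LET method picks the combination weights by minimizing a
        SURE estimate of the MSE; this sketch cheats and fits them against
        an oracle image, purely to show the structure of the estimator.
        """
        Y = np.fft.fft2(blurred)
        H = kernel_fft
        # Bank of Wiener-like filters with different regularization weights.
        basis = []
        for lam in lambdas:
            W = np.conj(H) / (np.abs(H) ** 2 + lam * noise_var)
            basis.append(np.real(np.fft.ifft2(W * Y)))
        X = np.stack([b.ravel() for b in basis], axis=1)  # filtered images
        coeffs, *_ = np.linalg.lstsq(X, oracle.ravel(), rcond=None)
        return (X @ coeffs).reshape(blurred.shape), coeffs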

  • Management of Information, Communications, and Networking: from the Past to the Future Open Access

    Shingo ATA  Toshio TONOUCHI  

    INVITED PAPER-Network Management/Operation
    Publicized: 2017/03/22  Vol: E100-B No:9  Page(s): 1614-1622

    As ICT (Information and Communication Technology) has grown larger in scale and more complicated, the technologies for operating and managing ICT infrastructure and systems have been changing to accommodate the variation and diversity of usage and communication characteristics. In retrospect, operations and management technologies have ranged widely, from support for telecommunications operations and remote monitoring for maintaining network equipment to integrated network management frameworks for configuration, monitoring, testing, and control. Recently, the spread of network virtualization technologies has brought agility, integrity, and resilience to ICT services. Operations and management technologies will become even more important in the future, supporting the integrated management of ICT infrastructure, including computing resources, and the automation of service operations. In this paper, we review the research activities of the Technical Committee on Information and Communications Management (ICM) with a discussion of each research category. We then describe state-of-the-art topics and future directions in the area of ICM.

  • Synthesis and Enumeration of Generalized Shift Registers for Strongly Secure SR-Equivalents

    Hideo FUJIWARA  Katsuya FUJIWARA  

    LETTER-Dependable Computing
    Publicized: 2017/05/26  Vol: E100-D No:9  Page(s): 2232-2236

    In our previous work, we introduced new concepts for secure scan design: shift-register-equivalent circuits (SR-equivalents, for short) and strongly secure circuits, and we also introduced generalized shift registers (GSRs, for short) to apply them to secure scan design. In this paper, we combine the concepts of SR-equivalents and strongly secure circuits, apply them to GSRs, and consider the synthesis problem of strongly secure SR-equivalents using GSRs. We also consider the enumeration problem for GSRs that are strongly secure and SR-equivalent, i.e., the cardinality of the class of strongly secure SR-equivalent GSRs, to clarify the security level of the secure scan architecture.

  • Quantification of Human Stress Using Commercially Available Single Channel EEG Headset

    Sanay MUHAMMAD UMAR SAEED  Syed MUHAMMAD ANWAR  Muhammad MAJID  

    LETTER-Human-computer Interaction
    Publicized: 2017/06/02  Vol: E100-D No:9  Page(s): 2241-2244

    A study on the quantification of human stress using the low beta waves of electroencephalography (EEG) is presented. For the very first time, the importance of low beta waves as a feature for quantifying human stress is highlighted. In this study, twenty-eight participants completed the Perceived Stress Scale (PSS) questionnaire and had their EEG recorded in the closed-eye condition using a commercially available single-channel EEG headset placed at the frontal site. Regression analysis of the beta waves extracted from the recorded EEG showed that low beta waves can predict PSS scores with a confidence level of 94%. Consequently, when the low beta wave is used as a feature with the Naive Bayes algorithm for classifying stress levels, it not only reduces the computational cost seven-fold but also improves the accuracy to 71.4%.
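
    The pipeline the abstract describes can be sketched as follows. The 12-16 Hz band edges, the 128 Hz sampling rate, and all names are assumptions for illustration; the paper does not specify them here.

    import numpy as np
    from scipy.signal import butter, filtfilt, welch

    FS = 128  # assumed headset sampling rate (Hz)

    def low_beta_power(eeg, fs=FS, band=(12.0, 16.0)):
        """Average power of the low-beta band of one recording.
        The band edges are an assumed definition of 'low beta'."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        f, pxx = welch(filtfilt(b, a, eeg), fs=fs)
        mask = (f >= band[0]) & (f <= band[1])
        return pxx[mask].mean()

    def fit_pss_regression(recordings, pss_scores):
        """Linear regression of PSS scores on low-beta power across subjects."""
        powers = np.array([low_beta_power(r) for r in recordings])
        slope, intercept = np.polyfit(powers, np.asarray(pss_scores), deg=1)
        return slope, intercept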

  • A Finite Automaton-Based String Matching Engine on Graphic Processing Unit

    JinMyung YOON  Kang-Il CHOI  HyunJin KIM  

    LETTER-VLSI Design Technology and CAD
    Vol: E100-A No:9  Page(s): 2031-2033

    A non-deterministic finite automaton (NFA)-based parallel string matching scheme is proposed. To parallelize the operations of the NFAs, a graphics processing unit (GPU) is adopted. Considering the resource occupancy of threads and the size of the shared memory, optimized resource allocation is performed in the proposed string matching scheme. As a result, performance is enhanced significantly in all evaluations.
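
    The abstract does not give the NFA encoding, but a standard choice for engines of this kind is the bit-parallel Shift-And simulation, in which one machine word holds all active NFA states; on a GPU, each thread would run this update over its own chunk of the input or its own pattern. A minimal single-thread sketch:

    def shift_and_search(text: bytes, pattern: bytes):
        """Bit-parallel NFA simulation (Shift-And): bit i of `state` means
        the NFA state after matching pattern[:i+1] is active. One word
        update per input byte is the per-thread work on a GPU."""
        m = len(pattern)
        masks = [0] * 256
        for i, c in enumerate(pattern):
            masks[c] |= 1 << i
        accept, state, hits = 1 << (m - 1), 0, []
        for pos, c in enumerate(text):
            state = ((state << 1) | 1) & masks[c]
            if state & accept:
                hits.append(pos - m + 1)
        return hits

    # shift_and_search(b"abracadabra", b"abra") -> [0, 7]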

  • NerveNet Architecture and Its Pilot Test in Shirahama for Resilient Social Infrastructure Open Access

    Masugi INOUE  Yasunori OWADA  

    INVITED PAPER-Network
    Publicized: 2017/03/22  Vol: E100-B No:9  Page(s): 1526-1537

    Based on past experience of the large-scale cutoff of existing networks caused by the Great East Japan Earthquake and tsunamis, and on previous research on stabilizing ad hoc networks that lack control mechanisms, we have strengthened the resilience of NerveNet. NerveNet was originally designed and developed as an access network for providing context-aware services with the use of sensors and actuators. Thus, at present, it has the capability to enable resilient information sharing and communications in a region even if access to the Internet is impossible in emergency situations. NerveNet is composed of one or more base stations interconnected by a variety of Ethernet-based wired or wireless transmission systems. A network is formed using a line, star, tree, or mesh topology. Network and data management functions run in each base station in a distributed manner, which makes the system resilient. In collaboration with the town of Shirahama in Wakayama prefecture in Japan, we have been conducting a pilot test with the NerveNet testbed. The test includes nine base stations interconnected by 5.6-GHz Wi-Fi and Fixed Wireless Access (FWA), providing tourists and residents with Internet access. In the future, we expect that not only NerveNet but also other novel technologies will contribute to solving social problems and enriching people's lives.

  • A Low Capture Power Test Generation Method Based on Capture Safe Test Vector Manipulation

    Toshinori HOSOKAWA  Atsushi HIRAI  Yukari YAMAUCHI  Masayuki ARAI  

    PAPER-Dependable Computing
    Publicized: 2017/06/06  Vol: E100-D No:9  Page(s): 2118-2125

    In at-speed scan testing, capture power is a serious problem because the high power dissipation that can occur when the response to a test vector is captured by flip-flops results in excessive voltage drops, known as IR-drops, which may cause significant capture-induced yield loss. In low capture power test generation, the test vectors that violate capture power constraints in an initial test set are defined as capture-unsafe test vectors, while faults that are detected solely by capture-unsafe test vectors are defined as unsafe faults. The test vectors used to detect unsafe faults must be regenerated to prevent unnecessary yield losses. In this paper, we propose a new low capture power test generation method based on fault simulation that uses the capture-safe test vectors in an initial test set. Experimental results show that this method reduces the number of unsafe faults by 94% while requiring just 18% additional test vectors on average, and with less test generation time than the conventional low capture power test generation method.
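
    Schematically, the classification the abstract defines looks as follows; the capture-power metric (here a given weighted-switching-activity score), the limit, and the detection data are stand-ins, since the real values come from power analysis and fault simulation.

    def split_capture_safe(wsa, limit):
        """Partition test-vector indices by a capture-power metric."""
        safe = {i for i, w in enumerate(wsa) if w <= limit}
        return safe, set(range(len(wsa))) - safe

    def unsafe_faults(detects, safe):
        """Faults detected solely by capture-unsafe vectors (the abstract's
        definition). `detects` maps vector index -> set of detected faults."""
        all_faults = set().union(*detects.values())
        by_safe = set().union(*(detects[i] for i in safe)) if safe else set()
        return all_faults - by_safe

    # Toy run: vector 2 exceeds the power limit, so fault "f3" is unsafe
    # and a test for it must be regenerated under the power constraint.
    safe, _ = split_capture_safe([0.30, 0.42, 0.91], limit=0.5)
    print(unsafe_faults({0: {"f1"}, 1: {"f1", "f2"}, 2: {"f3"}}, safe))  # {'f3'}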

  • Signatures from Trapdoor Commitments with Strong Openings

    Goichiro HANAOKA  Jacob C. N. SCHULDT  

    PAPER
    Vol: E100-A No:9  Page(s): 1924-1931

    In this paper, we propose a new generic construction of signatures from trapdoor commitments with strong openings in the random oracle model. Our construction is very efficient in the sense that signatures consist of just a single decommitment of the underlying commitment scheme, and verification corresponds to verifying this decommitment against a commitment derived via a hash function. Furthermore, assuming the commitment scheme provides sufficiently strong statistical hiding and trapdoor opening properties, the reduction of the security of the signature scheme to the binding property of the commitment scheme is tight. To instantiate our construction, we propose two new commitment schemes with strong openings. Both of these are statistically hiding, and have binding properties based on a Diffie-Hellman inversion problem and factoring, respectively. The signature schemes obtained from these are very efficient; the first matches the performance of BLS signatures, which currently provides the shortest signatures, and the second provides signatures of similar length to the shortest version of Rabin-Williams signatures while still being tightly related to factoring.

  • Sufficient and Necessary Conditions of Distributed Compressed Sensing with Prior Information

    Wenbo XU  Yupeng CUI  Yun TIAN  Siye WANG  Jiaru LIN  

    PAPER-General Fundamentals and Boundaries
    Vol: E100-A No:9  Page(s): 2013-2020

    This paper considers the recovery problem of distributed compressed sensing (DCS), where J (J≥2) signals share a sparse common component and each has its own sparse innovation component. The decoder attempts to jointly recover each component from $M_j$ random noisy measurements (j=1,…,J), using prior information on the support probabilities, i.e., the probabilities that the entries in each component are nonzero. We give both sufficient and necessary conditions on the total number of measurements $\sum_{j=1}^{J} M_j$ needed to recover the support set of each component perfectly. The results show that as the number of signals J increases, the required average number of measurements $\sum_{j=1}^{J} M_j / J$ decreases. Furthermore, we propose an extension of an existing DCS algorithm that exploits the prior information, and simulations verify its improved performance.
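
    The paper's algorithm is not reproduced here, but the general idea of exploiting support probabilities can be illustrated with a prior-weighted variant of orthogonal matching pursuit on a single signal; the multiplicative weighting rule below is an assumption for exposition, not the paper's extension.

    import numpy as np

    def weighted_omp(A, y, prior, k):
        """OMP whose greedy selection is biased by prior support
        probabilities: entries believed likely to be nonzero are
        preferred when correlations are comparable."""
        m, n = A.shape
        residual, support = y.copy(), []
        for _ in range(k):
            scores = np.abs(A.T @ residual) * prior  # prior-weighted match
            scores[support] = -np.inf                # do not reselect
            support.append(int(np.argmax(scores)))
            As = A[:, support]
            x_s, *_ = np.linalg.lstsq(As, y, rcond=None)
            residual = y - As @ x_s
        x = np.zeros(n)
        x[support] = x_s
        return x, sorted(support)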

  • Visualizing Web Images Using Fisher Discriminant Locality Preserving Canonical Correlation Analysis

    Kohei TATENO  Takahiro OGAWA  Miki HASEYAMA  

    PAPER
    Publicized: 2017/06/14  Vol: E100-D No:9  Page(s): 2005-2016

    A novel dimensionality reduction method, Fisher Discriminant Locality Preserving Canonical Correlation Analysis (FDLP-CCA), for visualizing Web images is presented in this paper. FDLP-CCA can integrate two modalities and discriminate target items in terms of their semantics by considering the unique characteristics of the two modalities. In this paper, we focus on Web images with text uploaded to Social Networking Services as these two modalities. Specifically, text features have high discriminative power in terms of semantics, while the visual features of images capture their perceptual relationships. In order to exploit both of these characteristics, FDLP-CCA estimates the correlation between the text and visual features while taking into account the cluster structure based on the text features and the local structures based on the visual features. Thus, FDLP-CCA can integrate the different modalities and provide well-separated manifolds with enhanced compactness within each natural cluster.
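
    FDLP-CCA's Fisher-discriminant and locality-preserving terms are beyond a short sketch, but the CCA core it extends can be written compactly with the standard whitening-plus-SVD formulation; the matrix layout (rows are samples) is assumed.

    import numpy as np

    def cca(X, Y, dim, eps=1e-6):
        """Plain CCA: whiten each view, then take the top singular
        directions of the cross-covariance of the whitened views.
        X: n x p text features, Y: n x q visual features."""
        Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
        Cxx = Xc.T @ Xc / len(X) + eps * np.eye(X.shape[1])
        Cyy = Yc.T @ Yc / len(Y) + eps * np.eye(Y.shape[1])
        Cxy = Xc.T @ Yc / len(X)
        Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T  # whitening maps
        Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
        U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
        return Wx @ U[:, :dim], Wy @ Vt[:dim].T, s[:dim]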

  • Flexible and Fast Similarity Search for Enriched Trajectories

    Hideaki OHASHI  Toshiyuki SHIMIZU  Masatoshi YOSHIKAWA  

    PAPER-Data Engineering, Web Information Systems
    Publicized: 2017/05/30  Vol: E100-D No:9  Page(s): 2081-2091

    In this study, we focus on a method to search for similar trajectories. In the majority of previous works on searching for similar trajectories, only raw trajectory data were used. However, to obtain deeper insights, additional time-dependent trajectory features should be utilized depending on the search intent. For instance, to identify similar combination plays in soccer games, such additional features include the movements of the team players. In this paper, we develop a framework to flexibly search for similar trajectories associated with time-dependent features, which we call enriched trajectories. In this framework, weights, which represent the relative importance of each feature, can be flexibly given by users. Moreover, to facilitate fast searching, we first propose a lower bounding measure of the DTW distance between enriched trajectories, and then we propose algorithms based on this lower bounding measure. We evaluate the effectiveness of the lower bounding measure and compare the performances of the algorithms under various conditions using soccer data and synthetic data. Our experimental results suggest that the proposed lower bounding measure is superior to the existing measure, and one of the proposed algorithms, which is based on the threshold algorithm, is suitable for practical use.
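
    The paper's lower bounding measure is specific to enriched trajectories and is not reproduced here; the sketch below shows the generic pattern such a measure accelerates: a user-weighted DTW over per-point feature vectors, pruned with the classic LB_Keogh envelope bound before the exact distance is computed. Equal-length trajectories are assumed for the bound.

    import numpy as np

    def weighted_dtw(P, Q, w):
        """DTW between trajectories P (n x d) and Q (m x d) with a
        user-weighted squared distance over the d enriched features."""
        n, m = len(P), len(Q)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.sum(w * (P[i - 1] - Q[j - 1]) ** 2)
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def lb_keogh(P, Q, w, r=5):
        """Classic LB_Keogh bound (not the paper's measure) for DTW
        restricted to a Sakoe-Chiba band of radius r: sum the weighted
        distance from P to the envelope of Q wherever P leaves it."""
        lb = 0.0
        for i, p in enumerate(P):
            seg = Q[max(0, i - r): i + r + 1]
            hi, lo = seg.max(axis=0), seg.min(axis=0)
            lb += np.sum(w * np.where(p > hi, (p - hi) ** 2,
                         np.where(p < lo, (lo - p) ** 2, 0.0)))
        return lb  # prune a candidate when lb exceeds the best DTW so far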

  • Design of Two Channel Biorthogonal Graph Wavelet Filter Banks with Half-Band Kernels

    Xi ZHANG  

    PAPER
    Vol: E100-A No:9  Page(s): 1743-1750

    In this paper, we propose a novel design method for two-channel critically sampled compactly supported biorthogonal graph wavelet filter banks with half-band kernels. First, we use polynomial half-band kernels to construct a class of biorthogonal graph wavelet filter banks that exactly satisfy the PR (perfect reconstruction) condition. We then present a design method for polynomial half-band kernels with a specified degree of flatness. The proposed method utilizes the PBP (Parametric Bernstein Polynomial), which ensures that the half-band kernels have the specified zeros at λ=2. The flatness constraints are therefore satisfied at both λ=0 and λ=2, and the resulting graph wavelet filters have flat spectral responses in the passband and stopband. Furthermore, we apply the Remez exchange algorithm to minimize the spectral error of the lowpass (highpass) filter in the band of interest by using the remaining degrees of freedom. Finally, several examples are designed to demonstrate the effectiveness of the proposed design method.
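
    The half-band condition itself is easy to reproduce: in the Bernstein basis on [0,2], any coefficient sequence with c_k + c_(K-k) = 1 yields h(λ) + h(2-λ) = 1, and trailing zero coefficients place the required zeros at λ=2. The sketch below builds such a maximally flat kernel; the paper's Remez exchange refinement is omitted.

    import numpy as np
    from math import comb

    def halfband_kernel(lams, K=8):
        """Maximally flat polynomial half-band kernel on [0, 2] in the
        Bernstein basis. c_k = 1 (k < K/2), 1/2 (k = K/2), 0 (k > K/2)
        gives h(lam) + h(2 - lam) = 1 with a high-order zero at lam = 2."""
        t = np.asarray(lams) / 2.0  # map graph frequencies [0, 2] -> [0, 1]
        c = np.array([1.0 if k < K / 2 else 0.5 if k == K / 2 else 0.0
                      for k in range(K + 1)])
        B = np.stack([comb(K, k) * t**k * (1 - t)**(K - k)
                      for k in range(K + 1)])
        return c @ B

    lam = np.linspace(0, 2, 5)
    h = halfband_kernel(lam)
    print(np.allclose(h + h[::-1], 1.0))  # half-band condition holds: True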

  • Color Transfer by Region Exploration and Navigation

    Somchai PHATTHANACHUANCHOM  Rawesak TANAWONGSUWAN  

    PAPER
    Publicized: 2017/06/14  Vol: E100-D No:9  Page(s): 1962-1970

    Color transfer is a simple process for changing the color tone of one image (the source) to look like that of another image (the target). In transferring colors between images, several issues need to be considered, including partial color transfer, trial-and-error, and multiple-target color transfer. Our approach enables users to transfer colors partially and locally by letting them select regions of interest from an image segmentation. Since there are many ways to transfer colors from a set of target regions to a set of source regions, we introduce a region exploration and navigation approach in which users choose their preferred color tones, transfer them one region at a time, and gradually work towards their desired results. The preferred color tones can sometimes come from more than one image, so our method is extended to allow users to select color tones from multiple images. Our experimental results show that our approach generates reasonable segmented regions of interest and enables users to explore the possible results more conveniently.
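
    The per-region mapping underneath such a tool can be as simple as the classic Reinhard mean/standard-deviation matching in Lab space applied to masked pixels. The sketch below shows only that mapping core, with skimage assumed for the color conversion; the paper's contribution is the exploration and navigation built on top of a transfer like this.

    import numpy as np
    from skimage.color import rgb2lab, lab2rgb

    def transfer_region(src_rgb, src_mask, tgt_rgb, tgt_mask):
        """Reinhard-style statistics transfer for one region pair: shift
        the source region's Lab statistics onto the target region's."""
        src_lab, tgt_lab = rgb2lab(src_rgb), rgb2lab(tgt_rgb)
        s = src_lab[src_mask]  # region pixels, shape (k, 3)
        t = tgt_lab[tgt_mask]
        mapped = (s - s.mean(0)) / (s.std(0) + 1e-8) * t.std(0) + t.mean(0)
        out = src_lab.copy()
        out[src_mask] = mapped
        return np.clip(lab2rgb(out), 0.0, 1.0)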

  • Efficient Fault-Aware Routing for Wireless Sensor Networks

    Jaekeun YUN  Daehee KIM  Sunshin AN  

    PAPER-Mobile Information Network and Personal Communications
    Vol: E100-A No:9  Page(s): 1985-1992

    Since sensor nodes are subject to faults due to highly constrained resources and hostile deployment environments, fault management in wireless sensor networks (WSNs) is essential to guarantee the proper operation of the network, especially routing. In contrast to existing fault management methods, which mainly aim to tolerate faults without considering the fault type, we propose a novel, efficient fault-aware routing method in which faults are classified and dealt with accordingly. More specifically, we first identify each fault and then try to set up a new routing path according to the fault type. Our proposed method can easily be integrated with any existing routing method. We show that it outperforms AODV, REAR, and GPSR, representative single-path, multipath, and location-based routing methods respectively, in terms of energy efficiency and data delivery ratio.

  • Parameterized L1-Minimization Algorithm for Off-the-Grid Spectral Compressive Sensing

    Wei ZHANG  Feng YU  

    LETTER-Digital Signal Processing
    Vol: E100-A No:9  Page(s): 2026-2030

    Spectral compressive sensing is a novel approach that enables the extraction of spectral information from a spectrally sparse signal, exclusively from its compressed measurements. The approach has therefore received considerable attention from various fields. However, standard compressive sensing algorithms require the sparse signal to lie on a grid whose spacing is the standard resolution limit, so they degrade severely in spectral compressive sensing owing to the off-the-grid issue. Some off-the-grid algorithms have recently been proposed to solve this problem, but they are either inaccurate or computationally expensive. In this paper, we propose a novel algorithm named parameterized ℓ1-minimization (PL1), which efficiently solves the off-the-grid spectral estimation problem with relatively low computational complexity.

  • Development and Future of Optical Fiber Related Technologies Open Access

    Shigeru TOMITA  

    INVITED PAPER-Optical Fiber for Communications
    Publicized: 2017/03/22  Vol: E100-B No:9  Page(s): 1688-1695

    The history of optical fiber and optical transmission technologies has been described in many publications. However, the history of other technologies designed to support the physical layer of optical transmission has not been described in much detail. I would like to highlight those technologies in addition to optical fibers. Therefore, this paper describes the history of the development of optical fiber related technologies such as fusion splicers, optical fiber connectors, ribbon fiber, and passive components based on changes in optical fibers and optical fiber cables. Moreover, I describe technologies designed to support multi-core fibers such as fan-in/fan-out devices.

  • Frontier-Based Search for Enumerating All Constrained Subgraphs with Compressed Representation

    Jun KAWAHARA  Takeru INOUE  Hiroaki IWASHITA  Shin-ichi MINATO  

    PAPER
    Vol: E100-A No:9  Page(s): 1773-1784

    For subgraph enumeration problems, very efficient algorithms have been proposed whose time complexities are far smaller than the number of subgraphs. Although the number of subgraphs can exponentially increase with the input graph size, these algorithms exploit compressed representations to output and maintain enumerated subgraphs compactly so as to reduce the time and space complexities. However, they are designed for enumerating only some specific types of subgraphs, e.g., paths or trees. In this paper, we propose an algorithm framework, called the frontier-based search, which generalizes these specific algorithms without losing their efficiency. Our frontier-based search will be used to resolve various practical problems that include constrained subgraph enumeration.

  • Bit-Quad-Based Euler Number Computing

    Bin YAO  Lifeng HE  Shiying KANG  Xiao ZHAO  Yuyan CHAO  

    PAPER-Image Recognition, Computer Vision
    Publicized: 2017/06/20  Vol: E100-D No:9  Page(s): 2197-2204

    The Euler number of a binary image is an important topological property for pattern recognition, image analysis, and computer vision. A famous method for computing the Euler number of a binary image counts certain patterns of bit-quads in the image; it has been improved by scanning three rows at a time to process two bit-quads simultaneously. This paper studies the bit-quad-based Euler number computing problem. We show that, as the number of bit-quads processed simultaneously increases, the average number of pixels that must be checked per bit-quad decreases in theory, but the length of the code implementing the algorithm increases, which makes the algorithm less efficient in practice. Experimental results on various types of images demonstrate that scanning five rows at a time and processing four bit-quads simultaneously is the optimal tradeoff, and that the optimal bit-quad-based algorithm is more efficient than other Euler number computing algorithms.
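
    For reference, the bit-quad formula that such algorithms accelerate (Gray's method, 4-connectivity) is E = (Q1 - Q3 + 2*Qd)/4, where Q1, Q3, and Qd count the 2x2 windows with exactly one, exactly three, and two diagonal foreground pixels. A vectorized one-quad-at-a-time sketch; the paper's contribution is processing several quads per scan step:

    import numpy as np

    def euler_number_bitquads(img):
        """Euler number of a binary image by bit-quad counting
        (Gray's method, 4-connectivity)."""
        a = np.pad(np.asarray(img, dtype=np.uint8), 1)  # zero border
        # Foreground count in every 2x2 window.
        quad = a[:-1, :-1] + a[:-1, 1:] + a[1:, :-1] + a[1:, 1:]
        q1 = np.count_nonzero(quad == 1)
        q3 = np.count_nonzero(quad == 3)
        # Two foreground pixels arranged on a diagonal of the window.
        qd = np.count_nonzero((quad == 2) & (a[:-1, :-1] == a[1:, 1:]))
        return (q1 - q3 + 2 * qd) // 4

    # One solid square: one component, no holes -> Euler number 1.
    print(euler_number_bitquads(np.ones((3, 3), dtype=int)))  # 1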

  • Contact Resistance Property of Gold Plated Contact Covered with Contact Lubricant Under High Temperature

    Terutaka TAMAI  Masahiro YAMAKAWA  

    PAPER
    Vol: E100-C No:9  Page(s): 702-708

    At present, the downsizing of connectors leads to thin gold-plated layers and low contact loads, which induces the serious problem of degraded contact resistance. For such contacts, corrosion of the contact surface can occur both in the service environment and at the high temperatures of soldering and reflow processes: base-metal atoms that diffuse from the underlayer, together with additives, oxidize at the surface. Contact resistance then increases because of both the surface contamination and the low contact load. Contact lubricants are useful and effective for resolving these problems and for reducing surface wear. However, the lubricants themselves may degrade at high temperatures such as those of the reflow process. In this study, therefore, changes in lubricant quality, namely viscosity, weight loss, polymerization, oxidation, and molecular orientation, were clarified. The orientation of the lubricant molecules was found to be an important factor in the increase in contact resistance, whereas the other lubricant factors had little effect on contact resistance.

  • Incorporating Security Constraints into Mixed-Criticality Real-Time Scheduling

    Hyeongboo BAEK  Jinkyu LEE  

    PAPER-Software System
    Publicized: 2017/05/31  Vol: E100-D No:9  Page(s): 2068-2080

    While conventional studies on real-time systems have mostly considered only the real-time constraint, recent research initiatives are trying to incorporate a security constraint into real-time scheduling, recognizing that the violation of either of the two constraints can cause catastrophic losses for humans, the system, and even the environment. The focus of most studies, however, is single-criticality systems, and the security of mixed-criticality systems has received scant attention, even though security is also a critical issue in the design of mixed-criticality systems. In this paper, we address the problem of information leakage that arises from the resources shared by tasks with different security levels in mixed-criticality systems. We define a new concept of security constraint that employs a pre-flushing mechanism to cleanse the state of shared resources whenever there is a possibility of information leakage. We then propose a new non-preemptive real-time scheduling algorithm and a schedulability analysis that incorporate the security constraint for mixed-criticality systems. Our evaluation demonstrates that a large number of real-time tasks can be scheduled without significant performance loss under the new security constraint.
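
    One plausible reading of the pre-flushing mechanism can be simulated in a few lines: before a job of lower security level runs after a higher-level one, a fixed flush cost is charged to cleanse shared state. The flush policy, the FIFO dispatching, and the job format below are illustrative assumptions, not the paper's algorithm or its schedulability analysis.

    import heapq

    def fifo_with_preflush(jobs, flush_cost):
        """Non-preemptive FIFO simulation of a pre-flushing policy.
        jobs: (release, wcet, level) tuples; higher level = more secret.
        Shared state is flushed before a lower-level job follows a
        higher-level one, so no residue can leak downward."""
        pending = sorted(jobs)  # by release time
        ready, schedule = [], []
        t, prev_level, i = 0, None, 0
        while i < len(pending) or ready:
            while i < len(pending) and pending[i][0] <= t:
                heapq.heappush(ready, pending[i])
                i += 1
            if not ready:           # idle until the next release
                t = pending[i][0]
                continue
            release, wcet, level = heapq.heappop(ready)
            if prev_level is not None and level < prev_level:
                t += flush_cost     # cleanse caches/buffers first
            schedule.append((t, level))
            t += wcet
            prev_level = level
        return schedule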
