
Keyword Search Result

[Keyword] SI (16314 hits)

6801-6820 of 16314 hits

  • A Sidelobe Suppression Technique by Regenerating Null Signals in OFDM-Based Cognitive Radios

    Tomoya TANDAI  Takahiro KOBAYASHI  

     
    PAPER-Spectrum Sensing

      Vol:
    E92-B No:12
      Page(s):
    3653-3664

    In this paper, a sidelobe suppression technique for orthogonal frequency division multiplexing (OFDM)-based cognitive radios (CR) is proposed. In OFDM-based CR systems, after the CR terminal executes spectrum sensing, it transmits a CR packet by activating the subcarriers in the frequency bands where no signals are detected (hereinafter called "active subcarriers") and by disabling (nulling) the subcarriers in the frequency bands where signals are detected. A problem arises in that the signals leaking from the active subcarriers into the null subcarriers may interfere with the primary systems, so this signal leakage has to be minimized. In many OFDM-based wireless communication systems, one packet or frame consists of multiple OFDM symbols, and the discontinuity between consecutive OFDM symbols causes signal leakage into the null subcarriers. In the proposed method, leakage into the null subcarriers is suppressed by regenerating the null subcarriers in the frequency-domain signal of the whole packet as follows. One CR packet, consisting of multiple OFDM symbols with null subcarriers and a guard interval (GI), is buffered and oversampled; the oversampled signal is then Fourier transformed at once to obtain the frequency-domain signal of the whole packet. The null subcarriers in this frequency-domain signal are zeroed again, and the signal is inverse Fourier transformed and transmitted. The proposed method significantly suppresses the signal leakage. The power spectral density, peak-to-average power ratio (PAPR) and packet error rate (PER) performances of the proposed method are evaluated by computer simulations, and its effectiveness is shown.
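
    For illustration, below is a minimal numpy sketch of the packet-level null-regeneration step described above (buffer the whole packet, take one large Fourier transform, zero the bins in the sensed primary band, and transform back). The function and parameter names are hypothetical and the oversampling is assumed to have been done beforehand, so this is a sketch of the idea rather than the authors' implementation.

        import numpy as np

        def regenerate_nulls(packet_td, primary_band_bins):
            """packet_td: one buffered, oversampled CR packet (all OFDM symbols incl. GI).
            primary_band_bins: FFT-bin indices of the whole-packet spectrum falling in the
            band where primary signals were detected (illustrative parameter)."""
            X = np.fft.fft(packet_td)       # frequency-domain signal of the entire packet
            X[primary_band_bins] = 0.0      # re-null the bins overlapping the primary band
            return np.fft.ifft(X)           # leakage-suppressed waveform to be transmitted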

  • Constrained Stimulus Generation with Self-Adjusting Using Tabu Search with Memory

    Yanni ZHAO  Jinian BIAN  Shujun DENG  Zhiqiu KONG  Kang ZHAO  

     
    PAPER-Logic Synthesis, Test and Verification

      Vol:
    E92-A No:12
      Page(s):
    3086-3093

    Despite the growing research effort in formal verification, industrial verification still often relies on the constrained random simulation methodology, which is supported by constraint solvers acting as stimulus generators integrated within the simulator, especially for today's large designs with complex constraints. These stimulus generators need to be fast and to produce well-distributed stimuli in order to maintain simulation performance. In this paper, we propose a dynamic method to guide stimulus generation by SAT solvers. An adjusting strategy named Tabu Search with Memory (TSwM) is integrated into the stimulus generator for the search and prune processes along with the constraint solver. Experimental results show that the proposed method generates well-distributed stimuli with good performance.
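
    As a rough illustration of how a short-term memory can steer constrained stimulus generation, here is a generic sketch under assumed interfaces; it is not the TSwM algorithm itself, and solve_with_blocking stands in for a SAT/constraint-solver call.

        from collections import deque

        def generate_stimuli(solve_with_blocking, n_stimuli, tabu_len=64):
            """solve_with_blocking(blocked): placeholder for a constraint-solver call that
            returns one assignment satisfying the constraints while avoiding 'blocked', or None."""
            tabu = deque(maxlen=tabu_len)          # short-term memory of recent assignments
            stimuli = []
            while len(stimuli) < n_stimuli:
                s = solve_with_blocking(frozenset(tabu))
                if s is None:                      # memory over-constrains the solver: relax it
                    if tabu:
                        tabu.popleft()
                        continue
                    break
                stimuli.append(s)
                tabu.append(s)                     # steers later solves toward unexplored assignments
            return stimuli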

  • Spectrum Sensing Architecture and Use Case Study: Distributed Sensing over Rayleigh Fading Channels

    Chen SUN  Yohannes D. ALEMSEGED  Ha Nguyen TRAN  Hiroshi HARADA  

     
    PAPER-Spectrum Sensing

      Vol:
    E92-B No:12
      Page(s):
    3606-3615

    To realize dynamic spectrum access (DSA), spectrum sensing is performed to detect the presence or absence of primary users (PUs). This paper proposes a sensing architecture that enables use cases such as DSA with PU detection by a single spectrum sensor and DSA with distributed sensing, such as cooperative sensing, collaborative sensing, and selective sensing. In this paper we focus on distributed sensing. These sensing schemes employ distributed spectrum sensors (DSSs), where each sensor uses energy detection (ED) in a Rayleigh fading environment. To theoretically analyze the performance of the three sensing schemes, a closed-form expression for the probability of detection by ED with selective combining (SC) in a Rayleigh fading environment is derived. Applying this expression to the PU detection problem, we obtain analytical models of the three sensing schemes. Analysis shows that at a 5-dB signal-to-noise ratio (SNR) and a false alarm rate of 0.004, the probability of detection is increased from 0.02 to 0.3 and 0.4, respectively, by the cooperative and collaborative sensing schemes using three DSSs. Results also show that the selective sensing scheme matches the performance of the collaborative sensing scheme while providing a lower false alarm rate.
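
    For intuition, the sketch below runs a Monte-Carlo estimate of single-sensor energy detection under Rayleigh fading and then applies OR-rule fusion across three sensors, which is one common cooperative-sensing model. The sample count, threshold setting and signal model are illustrative assumptions, not the closed-form analysis derived in the paper.

        import numpy as np

        def energy_detection_pd(snr_db, pfa, n_samples=10, n_trials=200000, seed=0):
            """Monte-Carlo probability of detection of an energy detector over Rayleigh fading."""
            rng = np.random.default_rng(seed)
            snr = 10.0 ** (snr_db / 10.0)
            # Threshold from the noise-only statistic so the single-sensor false-alarm rate is pfa.
            noise_only = np.sum(rng.normal(size=(n_trials, n_samples)) ** 2, axis=1)
            thr = np.quantile(noise_only, 1.0 - pfa)
            h = np.abs(rng.normal(size=n_trials) + 1j * rng.normal(size=n_trials)) / np.sqrt(2)
            sig = np.sqrt(snr) * h[:, None] * rng.normal(size=(n_trials, n_samples))
            energy = np.sum((sig + rng.normal(size=(n_trials, n_samples))) ** 2, axis=1)
            return float(np.mean(energy > thr))

        pd_single = energy_detection_pd(snr_db=5.0, pfa=0.004)
        pd_or_rule = 1.0 - (1.0 - pd_single) ** 3        # OR-rule fusion of three sensors
        pfa_or_rule = 1.0 - (1.0 - 0.004) ** 3           # fusion also raises the false-alarm rate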

  • Worst-Case Flit and Packet Delay Bounds in Wormhole Networks on Chip

    Yue QIAN  Zhonghai LU  Wenhua DOU  

     
    PAPER-Embedded, Real-Time and Reconfigurable Systems

      Vol:
    E92-A No:12
      Page(s):
    3211-3220

    We investigate per-flow flit and packet worst-case delay bounds in on-chip wormhole networks. Such investigation is essential in order to provide guarantees under worst-case conditions in cost-constrained systems, as required by many hard real-time embedded applications. We first propose analysis models for flow control, link and buffer sharing. Based on these analysis models, we obtain an open-ended service analysis model capturing the combined effect of flow control, link and buffer sharing. With the service analysis model, we compute equivalent service curves for individual flows, and then derive their flit and packet delay bounds. Our experimental results verify that our analytical bounds are correct and tight.
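
    For background on how such bounds are usually expressed, the standard single-node network-calculus result is recalled below; this is textbook material, not the paper's specific flit-level model. A flow constrained by a token-bucket arrival curve and served by a latency-rate service curve has a bounded worst-case delay:

        \alpha(t) = \sigma + \rho\,t, \qquad
        \beta(t)  = R\,(t - T)^{+}, \qquad
        D_{\max} \;\le\; T + \frac{\sigma}{R} \quad (\rho \le R)

    Here sigma and rho are the burst and rate parameters of the flow, and T and R are the latency and rate of the equivalent service curve derived for that flow.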

  • Efficient Frequency Sharing of Baseband and Subcarrier Coding UHF RFID Systems

    Jin MITSUGI  Yuusuke KAWAKITA  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E92-B No:12
      Page(s):
    3794-3802

    UHF band passive RFID systems are being steadily adopted by industry because of their capability of long-range automatic identification with passive tags. For applications that demand a large number of readers in a limited geographical area, referred to as dense reader mode, interference rejection among readers is important. The coding method used in the tag-to-reader communication link, baseband or subcarrier coding, has a significant influence on the interference rejection performance. This paper examines the frequency sharing of baseband and subcarrier coding UHF RFID systems from the perspective of their transmission delay using a media access control (MAC) simulator. The validity of the numerical simulation was verified by an experiment. It is revealed that, in a mixed operation of baseband and subcarrier systems, the general principle for efficient frequency sharing is to assign as many channels as possible to the baseband system, as long as they do not exploit the subcarrier channels. This frequency sharing principle is effective for both baseband and subcarrier coding systems; otherwise, mixed operation fundamentally increases the transmission delay in subcarrier coding systems.

  • Estimation of Bridge Height over Water from Polarimetric SAR Image Data Using Mapping and Projection Algorithm and De-Orientation Theory

    Haipeng WANG  Feng XU  Ya-Qiu JIN  Kazuo OUCHI  

     
    PAPER-Sensing

      Vol:
    E92-B No:12
      Page(s):
    3875-3882

    An inversion method for bridge height over water by polarimetric synthetic aperture radar (SAR) is developed. A geometric ray description illustrating the scattering mechanism of a bridge over a water surface is identified by polarimetric image analysis. Using the mapping and projection algorithm, a polarimetric SAR image of a bridge model is first simulated and shows that scattering from a bridge over water can be identified as three strip lines corresponding to single-, double-, and triple-order scattering, respectively. A set of polarimetric parameters based on the de-orientation theory is applied to the analysis of the three types of scattering, and the thinning-clustering algorithm and Hough transform are then employed to locate the image positions of these strip lines. These lines are used to invert the bridge height. Fully polarimetric image data of the airborne Pi-SAR at X-band are applied to inversion of the height and width of the Naruto Bridge in Japan. Based on the same principle, this approach is also applicable to spaceborne ALOS-PALSAR single-polarization data of the Eastern Ocean Bridge in China. The results show the good feasibility of bridge height inversion.

  • QSLS: Efficient Quorum Based Sink Location Service for Geographic Routing in Irregular Wireless Sensor Networks

    Fucai YU  Soochang PARK  Euisin LEE  Younghwan CHOI  Sang-Ha KIM  

     
    LETTER-Network

      Vol:
    E92-B No:12
      Page(s):
    3935-3938

    Geographic routing for wireless sensor networks requires a source to encapsulate the location of the sink in each data packet. How a source can obtain the location of a sink with low overhead is a difficult issue. This letter proposes a Quorum Based Sink Location Service (QSLS), which can be exploited by most geographic routing protocols in arbitrary irregular wireless sensor networks.

  • Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network

    Hyeong-Min NAM  Chun-Su PARK  Seung-Won JUNG  Sung-Jea KO  

     
    PAPER-Multimedia Systems for Communications

      Vol:
    E92-B No:12
      Page(s):
    3893-3902

    Currently deployed mobile networks, including High Speed Downlink Packet Access (HSDPA), offer only best-effort Quality of Service (QoS). In wireless best-effort networks, bandwidth variation is a critical problem, especially for mobile devices with small buffers, because it leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of available bandwidth (AB) estimation for the HSDPA network and transmission rate control to prevent buffer overflows and underflows. In the proposed method, the client estimates the AB, and the estimate is fed back to the server through real-time transport control protocol (RTCP) packets. The server then adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network, providing higher video quality and lower transmission delay.
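
    A minimal sketch of the kind of feedback-driven rate control described above is given below; the control law and parameter names are assumptions for illustration, not the paper's exact algorithm.

        def next_send_rate(ab_estimate, buf_level, buf_target, rtcp_interval=1.0):
            """Illustrative server-side rate control.
            ab_estimate: client-reported available bandwidth [bit/s] from RTCP feedback.
            buf_level, buf_target: client buffer occupancy and its desired level [bit]."""
            # Steer the client buffer toward its target over one feedback interval:
            # slow down when the buffer risks overflowing, and never exceed the
            # estimated available bandwidth, to avoid congestion-induced loss.
            correction = (buf_target - buf_level) / rtcp_interval
            return max(0.0, min(ab_estimate, ab_estimate + correction))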

  • Application of Fuzzy Logic to Cognitive Radio Systems Open Access

    Marja MATINMIKKO  Tapio RAUMA  Miia MUSTONEN  Ilkka HARJULA  Heli SARVANKO  Aarne MAMMELA  

     
    INVITED PAPER

      Vol:
    E92-B No:12
      Page(s):
    3572-3580

    This paper reviews applications of fuzzy logic to telecommunications and proposes a novel fuzzy combining scheme for cooperative spectrum sensing in cognitive radio systems. A summary of previous applications of fuzzy logic to telecommunications is given, also outlining potential applications of fuzzy logic in future cognitive radio systems. In complex and dynamic operational environments, future cognitive radio systems will need sophisticated decision-making and environment-awareness techniques that are capable of handling multidimensional, conflicting and usually unpredictable decision-making problems for which optimal solutions cannot necessarily be found. The results indicate that fuzzy logic can be used in cooperative spectrum sensing to provide additional flexibility to existing combining methods.
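
    One generic way a fuzzy combining rule for cooperative sensing can look is sketched below; the membership function, weights and decision threshold are illustrative assumptions rather than the scheme proposed in the paper.

        import numpy as np

        def membership(e, lo, hi):
            """Piecewise-linear degree of 'primary user present' for a normalized energy value."""
            return float(np.clip((e - lo) / (hi - lo), 0.0, 1.0))

        def fuzzy_combine(energies, reliabilities, lo=0.8, hi=1.6, decide_at=0.5):
            """Weighted average of per-sensor membership degrees (one generic aggregation)."""
            mu = np.array([membership(e, lo, hi) for e in energies])
            w = np.array(reliabilities, dtype=float)
            aggregate = float(np.dot(w, mu) / w.sum())
            return aggregate >= decide_at, aggregate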

  • Optimization of Polarimetric Contrast Enhancement Based on Fisher Criterion

    Qiming DENG  Jiong CHEN  Jian YANG  

     
    LETTER-Sensing

      Vol:
    E92-B No:12
      Page(s):
    3968-3971

    The optimization of polarimetric contrast enhancement (OPCE) is a widely used method for maximizing the ratio of received power from a desired target versus an undesired target (clutter). In this letter, a new model of OPCE is proposed based on the Fisher criterion. By introducing the well-known two-class problem of linear discriminant analysis (LDA), the proposed model enlarges the normalized distance between the mean values of the target and the clutter. In addition, a cross-iterative numerical method is proposed for solving the optimization with a quadratic constraint. Experimental results with polarimetric SAR (POLSAR) data demonstrate the effectiveness of the proposed method.
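
    For reference, the standard two-class Fisher criterion from LDA that the proposed OPCE model builds on is recalled below; how the means and scatters are formed from the polarimetric received powers of target and clutter is defined in the letter.

        J(\mathbf{w}) \;=\; \frac{\bigl(\mathbf{w}^{\mathsf T}\boldsymbol{\mu}_1 - \mathbf{w}^{\mathsf T}\boldsymbol{\mu}_2\bigr)^{2}}{\mathbf{w}^{\mathsf T}\bigl(\boldsymbol{\Sigma}_1 + \boldsymbol{\Sigma}_2\bigr)\,\mathbf{w}}

    Maximizing J pushes the class means (target versus clutter) apart relative to their pooled scatter, which is the normalized distance of mean values that the new OPCE model enlarges.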

  • Voltage and Level-Shifter Assignment Driven Floorplanning

    Bei YU  Sheqin DONG  Song CHEN  Satoshi GOTO  

     
    PAPER-Physical Level Design

      Vol:
    E92-A No:12
      Page(s):
    2990-2997

    Low power design has become a significant requirement as CMOS technology enters the nanometer era. Multiple-Supply Voltage (MSV) is a popular and effective method for both dynamic and static power reduction while maintaining performance. Level shifters may cause area and interconnect length overhead (ILO), and should be considered at both the floorplanning and post-floorplanning stages. In this paper, we propose a two-phase algorithm framework, called VLSAF, to solve the voltage and level-shifter assignment problem. In the floorplanning phase, we use a convex-cost network flow algorithm to assign voltages and a minimum-cost flow algorithm to handle level-shifter assignment. In the post-floorplanning phase, a heuristic method is adopted to redistribute white space and calculate the positions and shapes of level shifters. The experimental results show that VLSAF is effective.

  • Low Cost Design of an Advanced Encryption Standard (AES) Processor Using a New Common-Subexpression-Elimination Algorithm

    Ming-Chih CHEN  Shen-Fu HSIAO  

     
    PAPER-Embedded, Real-Time and Reconfigurable Systems

      Vol:
    E92-A No:12
      Page(s):
    3221-3228

    In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with the proposed CSE method achieves a significant area improvement over previous designs.
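
    A toy example, not taken from the paper, of what common-subexpression elimination buys in bit-level XOR logic: two output bits that share the term x1 ^ x2 need three two-input XOR gates instead of four once the shared factor is extracted.

        # Before CSE: 4 two-input XOR operations
        #   y0 = x0 ^ x1 ^ x2
        #   y1 = x1 ^ x2 ^ x3
        def outputs_before(x0, x1, x2, x3):
            return x0 ^ x1 ^ x2, x1 ^ x2 ^ x3

        # After CSE: the shared factor t = x1 ^ x2 is computed once -> 3 two-input XOR operations
        def outputs_after(x0, x1, x2, x3):
            t = x1 ^ x2
            return x0 ^ t, t ^ x3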

  • Addressing Defect Coverage through Generating Test Vectors for Transistor Defects

    Yoshinobu HIGAMI  Kewal K. SALUJA  Hiroshi TAKAHASHI  Shin-ya KOBAYASHI  Yuzo TAKAMATSU  

     
    PAPER-Logic Synthesis, Test and Verification

      Vol:
    E92-A No:12
      Page(s):
    3128-3135

    Shorts and opens are the two major kinds of defects that are most likely to occur in very large scale integrated circuits. In modern integrated circuit devices these defects must be considered not only at the gate level but also at the transistor level. In this paper, we propose a method for generating test vectors that targets both transistor shorts (tr-shorts) and transistor opens (tr-opens). Since two consecutive test vectors need to be applied in order to detect tr-opens, we assume the launch-on-capture (LOC) test application mechanism, which also makes it possible to detect delay-type defects. Further, the proposed method employs existing stuck-at test generation tools, thus requiring no change to the design and development flow and no new tools. Experimental results for benchmark circuits demonstrate the effectiveness of the proposed method, providing 100% fault efficiency while the test set size remains moderate.

  • Extended Relief-F Algorithm for Nominal Attribute Estimation in Small-Document Classification

    Heum PARK  Hyuk-Chul KWON  

     
    PAPER-Document Analysis

      Vol:
    E92-D No:12
      Page(s):
    2360-2368

    This paper presents an extended Relief-F algorithm for nominal attribute estimation, for application to small-document classification. Relief algorithms are general and successful instance-based feature-filtering algorithms for data classification and regression. Many improved Relief algorithms have been introduced as solutions to the problems of redundant and irrelevant noisy features and to the limitations of the algorithms for multiclass datasets. However, these algorithms have only rarely been applied to text classification, because the numerous features in multiclass datasets lead to great time complexity. Therefore, considering their application to text feature filtering and classification, we presented an extended Relief-F algorithm for numerical attribute estimation (E-Relief-F) in 2007; however, we found limitations and some problems with it. In this paper, we therefore identify additional problems with Relief algorithms for text feature filtering, including the negative influence on the computation of similarities and weights caused by the small number of features in an instance, the absence of nearest hits and misses for some instances, and great time complexity. We then propose a new extended Relief-F algorithm for nominal attribute estimation (E-Relief-Fd) to solve these problems, and we apply it to small text-document classification. We used the algorithm in experiments to estimate feature quality for various datasets, to evaluate its application to classification, and to compare its performance with existing Relief algorithms. The experimental results show that the new E-Relief-Fd algorithm offers better performance than previous Relief algorithms, including E-Relief-F.
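
    For context, below is a compact sketch of the textbook Relief-F weight update for numerical attributes (Kononenko's k-nearest-hit/miss formulation), shown only as the baseline that the extended E-Relief-F/E-Relief-Fd variants modify; it is not the algorithm proposed in the paper.

        import numpy as np

        def relief_f(X, y, n_iter=100, k=5, seed=0):
            """X: (n, d) numerical feature matrix; y: (n,) class labels. Returns attribute weights."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            span = X.max(0) - X.min(0) + 1e-12            # normalizes per-attribute differences
            classes, counts = np.unique(y, return_counts=True)
            prior = dict(zip(classes, counts / n))
            w = np.zeros(d)
            for _ in range(n_iter):
                i = rng.integers(n)
                dist = np.abs(X - X[i]).sum(1)
                dist[i] = np.inf                          # exclude the instance itself
                for c in classes:
                    idx = np.where(y == c)[0]
                    idx = idx[np.argsort(dist[idx])][:k]  # k nearest hits or misses in class c
                    diff = np.abs(X[idx] - X[i]) / span
                    if c == y[i]:
                        w -= diff.mean(0) / n_iter        # nearest hits pull the weight down
                    else:
                        w += (prior[c] / (1 - prior[y[i]])) * diff.mean(0) / n_iter
            return w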

  • Burst Error Recovery Method for LZSS Coding

    Masato KITAKAMI  Teruki KAWASAKI  

     
    PAPER-Dependable Computing

      Vol:
    E92-D No:12
      Page(s):
    2439-2444

    Since compressed data, which are frequently used in computer systems and communication systems, are very sensitive to errors, several error recovery methods for data compression have been proposed. An error recovery method for LZ77 coding, one of the most popular universal data compression methods, has been proposed, but it cannot be applied to LZSS coding, a variation of LZ77 coding, because LZSS compressed data consist of variable-length codewords. This paper proposes a burst error recovery method for LZSS coding. The error-sensitive part of the compressed data is encoded by unary coding and moved to the beginning of the compressed data. After these data, a synchronization sequence is inserted. By searching for the synchronization sequence, errors in the error-sensitive part are detected, and they are recovered by using a copy of that part. Computer simulation shows that the compression ratio of the proposed method is almost equal to that of LZ77 coding and that the method has a very high error recovery capability.

  • Robust Toponym Resolution Based on Surface Statistics

    Tomohisa SANO  Shiho Hoshi NOBESAWA  Hiroyuki OKAMOTO  Hiroya SUSUKI  Masaki MATSUBARA  Hiroaki SAITO  

     
    PAPER-Unknown Word Processing

      Vol:
    E92-D No:12
      Page(s):
    2313-2320

    Toponyms and other named entities are major issues in the unknown word processing problem. Our purpose is to salvage unknown toponyms, not only to avoid noise but also to provide information on the area candidates to which they may belong. Most previous toponym resolution methods targeted disambiguation among area candidates, which arises when a toponym refers to multiple areas, and these approaches were mostly based on gazetteers and contexts. For documents that may contain toponyms from all over the world, such as newspaper articles, toponym resolution is not just ambiguity resolution but the selection of area candidates from all the areas on Earth. We therefore propose an automatic toponym resolution method that identifies area candidates based only on surface statistics, in place of dictionary-lookup approaches. Our method combines two modules, area candidate reduction and area candidate examination using block-unit data, to obtain high accuracy without reducing the recall rate. Our empirical results show an 85.54% precision rate, a 91.92% recall rate and a 0.89 F-measure value on average. This method is a flexible and robust approach to toponym resolution targeting an unrestricted number of areas.

  • Influence of PH3 Preflow Time on Initial Growth of GaP on Si Substrates by Metalorganic Vapor Phase Epitaxy

    Yasushi TAKANO  Takuya OKAMOTO  Tatsuya TAKAGI  Shunro FUKE  

     
    PAPER-Nanomaterials and Nanostructures

      Vol:
    E92-C No:12
      Page(s):
    1443-1448

    The initial growth of GaP on Si substrates using metalorganic vapor phase epitaxy was studied. Si substrates were exposed to a PH3 preflow for 15 s or 120 s at 830°C after they were preheated at 925°C. Atomic force microscopy (AFM) revealed that the Si surface after the 120-s preflow was much rougher than that after the 15-s preflow. After 1.5 nm of GaP deposition on the Si substrates at 830°C, GaP islands nucleated more uniformly on the substrate exposed to the 15-s preflow than on the substrate exposed to the 120-s preflow. After 3 nm of GaP deposition, layer structures were observed on a fraction of the Si surface after the 15-s preflow, while island-like structures remained on the Si surface after the 120-s preflow. After 6 nm of GaP deposition, the continuity of the GaP layers improved on both substrates; however, AFM showed pits penetrating the Si substrate with the 120-s preflow. Transmission electron microscopy of a GaP layer on the Si substrate after the 120-s preflow revealed that V-shaped pits penetrated the Si substrate. The long preflow roughened the Si surface, which facilitated pit formation during GaP growth in addition to degrading the surface morphology of GaP at the initial growth stage. Even after 50 nm of GaP deposition, pits with a density on the order of 10^7 cm^-2 remained in this sample. A 50-nm-thick flat GaP surface without pits was achieved for the sample with the 15-s PH3 preflow. A short PH3 preflow is thus necessary to produce a flat GaP surface on a Si substrate.

  • A Robust Secure Cooperative Spectrum Sensing Scheme Based on Evidence Theory and Robust Statistics in Cognitive Radio

    Nhan NGUYEN-THANH  Insoo KOO  

     
    PAPER-Spectrum Sensing

      Vol:
    E92-B No:12
      Page(s):
    3644-3652

    Spectrum sensing is a key technology in Cognitive Radio (CR) systems. Cooperative spectrum sensing using a distributed model provides improved detection of the primary user, but it opens the CR system to a new security threat: degradation of the cooperative sensing performance due to spectrum sensing data falsification generated by malicious users. Our proposed scheme, based on robust statistics, utilizes only the available past received-power data of the sensing nodes to estimate the distribution parameters under the hypotheses of primary signal presence and absence. These estimated parameters are used to perform Dempster-Shafer evidence-theory data fusion, which enables the elimination of malicious users. Furthermore, in order to enhance performance, a node reliability weight is incorporated into the data fusion scheme. Simulation results indicate that the proposed scheme provides a powerful capability for eliminating malicious users as well as a high data fusion gain under various channel conditions.
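
    As background for the fusion step, here is a sketch of Dempster's combination rule over the two sensing hypotheses (H0: primary absent, H1: primary present) plus total ignorance Theta; the example masses are illustrative only and are not the estimates produced by the proposed robust-statistics procedure.

        def ds_combine(m1, m2):
            """Dempster's rule on the frame {H0, H1}; masses are dicts over 'H0', 'H1', 'Theta'."""
            frame = ('H0', 'H1', 'Theta')
            fused = {'H0': 0.0, 'H1': 0.0, 'Theta': 0.0}
            conflict = 0.0
            for a in frame:
                for b in frame:
                    p = m1[a] * m2[b]
                    if a == 'Theta':
                        fused[b] += p            # Theta intersected with b is b
                    elif b == 'Theta' or a == b:
                        fused[a] += p
                    else:
                        conflict += p            # H0 vs H1: contradictory evidence
            return {k: v / (1.0 - conflict) for k, v in fused.items()}

        # Example: two sensing nodes; node2 assigns more mass to Theta, as the evidence of a
        # less reliable node might after discounting.
        node1 = {'H1': 0.7, 'H0': 0.1, 'Theta': 0.2}
        node2 = {'H1': 0.3, 'H0': 0.4, 'Theta': 0.3}
        fused = ds_combine(node1, node2)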

  • Fast Mode Decision Using Global Disparity Vector for Multiview Video Coding

    Dong-Hoon HAN  Yung-Ki LEE  Yung-Lyul LEE  

     
    LETTER-Image

      Vol:
    E92-A No:12
      Page(s):
    3407-3411

    Since multiview video coding (MVC) based on H.264/AVC uses a prediction scheme exploiting inter-view correlation among the multiview video, an MVC encoder compresses multiple views more efficiently than a simulcast H.264/AVC encoder. However, when the number of views to be encoded increases, the total encoding time of MVC greatly increases. To reduce the computational complexity of MVC, a fast mode decision using both macroblock-based region segmentation information and the global disparity vector among views is proposed to reduce the encoding time. The proposed method achieves on average a 1.5 to 2.9 reduction of the total encoding time with a PSNR (Peak Signal-to-Noise Ratio) degradation of about 0.05 dB.

  • Face Alignment Based on Statistical Models Using SIFT Descriptors

    Zisheng LI  Jun-ichi IMAI  Masahide KANEKO  

     
    PAPER-Processing

      Vol:
    E92-A No:12
      Page(s):
    3336-3343

    Active Shape Model (ASM) is a powerful statistical tool for image interpretation, especially for face alignment. In the standard ASM, local appearances are described by intensity profiles, and the model parameter estimation is based on the assumption that the profiles follow a Gaussian distribution; it therefore suffers from variations in pose, illumination, expression and occlusion. In this paper, an improved ASM framework, GentleBoost-based SIFT-ASM, is proposed. Local appearances of landmarks are represented by SIFT (Scale-Invariant Feature Transform) descriptors, gradient-orientation-histogram-based representations of the image neighborhood, which provide more robust and accurate guidance for the search than grey-level profiles. Moreover, GentleBoost classifiers are applied to model and search the SIFT features instead of relying on the assumption of a Gaussian distribution. Experimental results show that SIFT-ASM significantly outperforms the original ASM in aligning and localizing facial features.
