
Keyword Search Result

[Keyword] REM (1013 hits)

Results 341-360 of 1013

  • Performance Analysis of MIMO Relay Network via Propagation Measurement in L-Shaped Corridor Environment

    Namzilp LERTWIRAM  Gia Khanh TRAN  Keiichi MIZUTANI  Kei SAKAGUCHI  Kiyomichi ARAKI  

     
    PAPER-Antennas and Propagation

    Vol: E95-B No:4, Page(s): 1345-1356

    Setting relays can address the shadowing problem between a transmitter (Tx) and a receiver (Rx). Moreover, the Multiple-Input Multiple-Output (MIMO) technique has been introduced to improve wireless link capacity, and it can be applied in relay networks to enhance system performance. However, the efficiency of relaying schemes and of relay placement has not been well investigated through experiment-based studies. This paper reports a propagation measurement campaign of a MIMO two-hop relay network in the 5 GHz band in an L-shaped corridor environment with various relay locations. Furthermore, it proposes a Relay Placement Estimation (RPE) scheme to identify the optimum relay location, i.e., the point at which the network performance is highest. Analysis of channel capacity shows that relaying is beneficial over direct transmission in a strongly shadowed environment but ineffective in a non-shadowed environment. In addition, the optimum relay location estimated with the RPE scheme agrees with the location where the network achieves the highest capacity. Finally, the capacity analysis shows that two-way MIMO relaying employing network coding performs best, while the cooperative relaying scheme is not effective because shadowing weakens the signal strength of the direct link.
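
    As a rough illustration of the capacity comparison discussed above, the toy sketch below contrasts direct transmission, one-way two-hop relaying, and network-coded two-way relaying under an idealized Shannon-capacity model. The SNR values and the half-duplex time-slot accounting are assumptions for illustration only, not the paper's measurement-based analysis.

    ```python
    import math

    def c(snr_db):
        """Shannon spectral efficiency (bit/s/Hz) for a given SNR in dB."""
        return math.log2(1.0 + 10.0 ** (snr_db / 10.0))

    def compare(snr_direct_db, snr_hop1_db, snr_hop2_db):
        direct = c(snr_direct_db)
        # One-way two-hop half-duplex relaying: two time slots per message.
        one_way = 0.5 * min(c(snr_hop1_db), c(snr_hop2_db))
        # Two-way relaying with network coding: two messages are exchanged in
        # two slots (multiple access + XOR broadcast), roughly recovering the
        # half-duplex loss for the exchanged pair.
        two_way = min(c(snr_hop1_db), c(snr_hop2_db))
        return direct, one_way, two_way

    # Strong shadowing on the direct path: relaying pays off.
    print(compare(snr_direct_db=0, snr_hop1_db=20, snr_hop2_db=20))
    # No shadowing: the direct link already dominates.
    print(compare(snr_direct_db=20, snr_hop1_db=20, snr_hop2_db=20))
    ```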

  • Scenario Generation Using Differential Scenario Information

    Masayuki MAKINO  Atsushi OHNISHI  

     
    PAPER

    Vol: E95-D No:4, Page(s): 1044-1051

    A method of generating scenarios using differential scenario information is presented. The behaviors of normal scenarios with similar purposes are quite similar to each other, while the actors and data differ among these scenarios. We derive the differential information between such scenarios and apply it to generate new alternative and exceptional scenarios. Our method is illustrated with examples. This paper describes (1) a language for describing scenarios based on a simple case grammar of actions, (2) the notion of the differential scenario, and (3) the method and examples of scenario generation using the differential scenario.

  • Workflows with Passbacks and Incremental Verification of Their Correctness

    Osamu TAKAKI  Izumi TAKEUTI  Noriaki IZUMI  Koiti HASIDA  

     
    PAPER

    Vol: E95-D No:4, Page(s): 989-1002

    In this paper, we discuss a fundamental theory of incremental verification for workflows. Incremental verification is a method that helps multiple designers share and collaborate on huge workflows while maintaining their consistency. To this end, we introduce passbacks in workflows and their consistency property from the control-flow perspective. Passbacks indicate the redoing of work, and workflows with passbacks naturally represent human work. To define the consistency property above, we define the normality of workflows with passbacks and the total correctness of normal workflows based on a transition-system semantics of normal workflows. We further extend workflows to sorted workflows and define their vertical division and composition. We also extend total correctness to normal sorted workflows, for the sake of incremental verification of a large-scale workflow with passbacks via vertical division and composition.

  • Finding Incorrect and Missing Quality Requirements Definitions Using Requirements Frame

    Haruhiko KAIYA  Atsushi OHNISHI  

     
    PAPER

    Vol: E95-D No:4, Page(s): 1031-1043

    Defining quality requirements completely and correctly is more difficult than defining functional requirements because stakeholders do not state most quality requirements explicitly. We therefore propose a method to measure a requirements specification and identify the amount of quality requirements it contains. We also propose another method to recommend quality requirements to be defined in such a specification. We expect that stakeholders can identify missing and unnecessary quality requirements when the measured quality requirements differ from the recommended ones. We use a semi-formal language called X-JRDL to represent requirements specifications because it is suitable for analyzing quality requirements. We applied our methods to a requirements specification and found that they contribute to defining quality requirements more completely and correctly.

  • Toward the Decision Tree for Inferring Requirements Maturation Types

    Takako NAKATANI  Narihito KONDO  Junko SHIROGANE  Haruhiko KAIYA  Shozo HORI  Keiichi KATAMINE  

     
    PAPER

    Vol: E95-D No:4, Page(s): 1021-1030

    Requirements are elicited step by step during the requirements engineering (RE) process. However, some types of requirements are only elicited completely after the scheduled requirements elicitation process has finished, which is a problematic situation. In our study, the difficulty of eliciting various kinds of requirements is observed per component. We refer to these components as observation targets (OTs) and introduce the term “requirements maturation,” which denotes when and how requirements are elicited completely in a project. Requirements maturation is discussed for physical and logical OTs. OTs viewed from a logical viewpoint are called logical OTs, e.g., quality requirements. The requirements of physical OTs, e.g., modules, components, and subsystems, include functional and non-functional requirements. They are influenced by their requesters' environmental changes as well as by developers' technical changes. In order to infer the requirements maturation period of each OT, we need to know how much these factors influence the OTs' requirements maturation. Based on observation of actual past projects, we defined the PRINCE (Pre Requirements Intelligence Net Consideration and Evaluation) model, which aims to guide developers in observing the requirements maturation of OTs. We quantitatively analyzed actual cases and their requirements elicitation processes and extracted the essential factors that influence requirements maturation. The results of interviews with project managers were analyzed by WEKA, a data mining system, from which a decision tree was derived. This paper introduces the PRINCE model and the categories of logical OTs to be observed. The decision tree that helps developers infer the maturation type of an OT is also described. We evaluate the tree through real projects and discuss its ability to infer requirements maturation types.

  • High Uniqueness Arbiter-Based PUF Circuit Utilizing RG-DTM Scheme for Identification and Authentication Applications

    Mitsuru SHIOZAKI  Kota FURUHASHI  Takahiko MURAYAMA  Akitaka FUKUSHIMA  Masaya YOSHIKAWA  Takeshi FUJINO  

     
    PAPER

    Vol: E95-C No:4, Page(s): 468-477

    Silicon Physical Unclonable Functions (PUFs) exploit inherent characteristics caused by process variations, such as transistor size and threshold voltage, to realize inexpensive and tamper-resistant primitives for applications such as IC identification, authentication, and key generation. We have focused on the arbiter PUF, which utilizes the relative delay-time difference between equivalent paths. The conventional arbiter PUF has a technical issue: low uniqueness caused by non-uniformity in response generation. To enhance uniqueness, a novel arbiter-based PUF utilizing the Response Generation according to the Delay Time Measurement (RG-DTM) scheme has been proposed. In the conventional arbiter PUF, the response 0 or 1 is assigned according to a single threshold on the relative delay-time difference; in the RG-DTM PUF, it is assigned according to multiple thresholds on the relative delay-time difference. Both the conventional and the RG-DTM PUF were designed and fabricated in a 0.18 µm CMOS technology. The Hamming distances (HDs) between different chips, which indicate uniqueness, were calculated from 256-bit responses to identical challenges on each chip. The ideal distribution of HDs, which indicates high uniqueness, is achieved in the RG-DTM PUF using 16 thresholds on the relative delay-time difference. The generative stability, i.e., the fluctuation of responses in the same environment, and the environmental stability, i.e., the change of responses in different environments, were also evaluated. Although there is a trade-off between high uniqueness and high stability, the experimental data show that the RG-DTM PUF has a far smaller false-matching probability in identification than the conventional PUF.
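
    The difference between the conventional single-threshold response and the RG-DTM multi-threshold response can be sketched as below. The delay-difference distribution, the threshold spacing, and the alternating 0/1 bin assignment are illustrative assumptions, not the behavior of the fabricated circuit.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conventional_response(delay_diff):
        # Single threshold at zero: only the sign of the relative delay-time
        # difference decides the response bit.
        return 1 if delay_diff > 0.0 else 0

    def rg_dtm_response(delay_diff, thresholds):
        # Multiple thresholds partition the delay-difference axis into bins;
        # alternating bins map to 0/1, so the response depends on *how much*
        # one path is faster, not only on which path wins.
        bin_index = int(np.searchsorted(thresholds, delay_diff))
        return bin_index % 2

    # 16 illustrative thresholds over an assumed delay-difference range (ps).
    thresholds = np.linspace(-80, 80, 16)

    delay_diffs = rng.normal(loc=0.0, scale=40.0, size=256)  # one "chip"
    conv_bits = [conventional_response(d) for d in delay_diffs]
    rgdtm_bits = [rg_dtm_response(d, thresholds) for d in delay_diffs]
    print(sum(conv_bits), sum(rgdtm_bits))
    ```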

  • Compressive Sampling for Remote Control Systems

    Masaaki NAGAHARA  Takahiro MATSUDA  Kazunori HAYASHI  

     
    PAPER

    Vol: E95-A No:4, Page(s): 713-722

    In remote control, efficient compression or representation of control signals is essential for sending them through rate-limited channels. For this purpose, we propose an approach to sparse control signal representation using the compressive sampling technique. The problem of obtaining a sparse representation is formulated as cardinality-constrained ℓ2 optimization of the control performance, which is reducible to ℓ1-ℓ2 optimization. The low-rate random sampling employed in the proposed method, based on compressive sampling, together with the fact that the ℓ1-ℓ2 optimization can be solved effectively by a fast iterative method, enables us to generate the sparse control signal with reduced computational complexity, which is preferable in remote control systems where computation delays seriously degrade performance. We give a theoretical result for control performance analysis based on the notion of the restricted isometry property (RIP). An example is shown to illustrate the effectiveness of the proposed approach via numerical experiments.
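
    The ℓ1-ℓ2 problem mentioned above has the generic form that fast iterative shrinkage methods solve; the ISTA sketch below is one such solver for that generic form, not necessarily the authors' exact formulation, and A, b, and the regularization weight are placeholder problem data.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, n_iter=200):
        """Minimize 0.5*||A u - b||_2^2 + lam*||u||_1 (generic l1-l2 problem)."""
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        u = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ u - b)
            u = soft_threshold(u - grad / L, lam / L)
        return u

    # Toy data: a sparse signal observed through low-rate random sampling.
    rng = np.random.default_rng(1)
    n, m, k = 128, 48, 5
    u_true = np.zeros(n)
    u_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    b = A @ u_true
    u_hat = ista(A, b, lam=1e-3)
    print(np.linalg.norm(u_hat - u_true))
    ```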

  • Energy Detection Based Estimation of Channel Occupancy Rate with Adaptive Noise Estimation

    Janne J. LEHTOMAKI  Risto VUOHTONIEMI  Kenta UMEBAYASHI  Juha-Pekka MAKELA  

     
    PAPER

    Vol: E95-B No:4, Page(s): 1076-1084

    Recently, there has been growing interest in opportunistically utilizing the 2.4 GHz ISM band. Numerous spectrum occupancy measurements covering the ISM band have been performed to analyze spectrum usage. However, these campaigns have not verified the correctness of the obtained occupancy values for the highly dynamic ISM band. In this paper, we propose and verify channel occupancy rate (COR) estimation that utilizes an energy detection mechanism with a novel adaptive energy-detection threshold setting method. The results are compared with true reference COR values. Several different types of verification measurements showed that our setup can estimate the COR of 802.11 traffic well, with negligible overestimation. The results from real-time, real-life measurements also confirm that the proposed adaptive threshold setting method yields accurate thresholds even when multiple interferers are present in the received signal.
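
    A minimal sketch of the underlying idea, energy detection against a threshold with the above-threshold fraction taken as the channel occupancy rate, is given below. The paper's adaptive threshold setting is replaced here by a simple noise-floor-plus-margin rule, which is an assumption.

    ```python
    import numpy as np

    def estimate_cor(power_samples_dbm, noise_floor_dbm, margin_db=3.0):
        """Fraction of energy-detector samples above an assumed threshold."""
        threshold = noise_floor_dbm + margin_db
        occupied = power_samples_dbm > threshold
        return occupied.mean()

    rng = np.random.default_rng(2)
    noise = rng.normal(-95.0, 1.0, size=10000)    # idle samples (dBm)
    bursts = rng.normal(-70.0, 2.0, size=2500)    # 802.11-like bursts (dBm)
    samples = np.concatenate([noise, bursts])
    rng.shuffle(samples)
    print(estimate_cor(samples, noise_floor_dbm=-95.0))  # roughly 0.2
    ```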

  • Consistent Sampling and Signal Reconstruction in Noisy Under-Determined Case

    Akira HIRABAYASHI  

     
    PAPER-Digital Signal Processing

    Vol: E95-A No:3, Page(s): 631-638

    We present sampling theorems that reconstruct consistent signals from noisy underdetermined measurements. The consistency criterion requires that the reconstructed signal yield the same measurements as the original one. The main issue in underdetermined cases is the choice of a subspace L that is complementary, within the reconstruction space, to the intersection of the reconstruction space and the orthogonal complement of the sampling space, because signals are reconstructed in L. Conventional theorems determine L without taking measurement noise into account. Hence, this paper proposes choosing L such that the variance of the reconstructed signals due to noise is minimized. We first fix L arbitrarily and compute the minimum variance under the condition that the average of the reconstructed signals agrees with the noiseless reconstruction. The derived expression clearly shows that the minimum variance depends on L and leads to a condition on L that further minimizes this minimum variance. The condition indicates that such an L can be chosen if and only if L includes a subspace determined by the noise covariance matrix. Computer simulations show that the standard deviation for the proposed sampling theorem is improved by 8.72% over that for the conventional theorem.
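
    The consistency requirement itself is easy to check numerically: the reconstruction must reproduce the original measurements. The sketch below does this for a minimum-norm consistent reconstruction; it does not implement the paper's variance-minimizing choice of the complementary subspace L, and the matrices are random placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, m, r = 32, 8, 16              # signal dim, number of samples, recon dim

    S = rng.normal(size=(n, m))      # columns span the sampling space
    R = rng.normal(size=(n, r))      # columns span the reconstruction space

    x = rng.normal(size=n)           # original signal
    c = S.T @ x                      # measurements (underdetermined: m < r)

    # One consistent reconstruction: coefficients a with S^T R a = c
    # (minimum-norm solution via the pseudoinverse; the paper's point is that
    # the remaining freedom should be used to minimize noise variance).
    a = np.linalg.pinv(S.T @ R) @ c
    x_hat = R @ a

    print(np.allclose(S.T @ x_hat, c))  # consistency: same measurements
    ```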

  • Design of Predistorter with Efficient Updating Algorithm of Power Amplifier with Memory Effect

    Yasuyuki OISHI  Shigekazu KIMURA  Eisuke FUKUDA  Takeshi TAKANO  Daisuke TAKAGO  Yoshimasa DAIDO  Kiyomichi ARAKI  

     
    PAPER-Electronic Circuits

    Vol: E95-C No:3, Page(s): 382-394

    This paper describes a method of designing a predistorter (PD) for a GaN-FET power amplifier (PA) using nonlinear parameters extracted from measured IMD, which exhibits the asymmetrical peaks peculiar to a memory effect with a second-order lag. Computationally efficient equations have previously been reported by C. Rey et al. for the memory effect with a first-order lag; these equations are extended here to be applicable to the memory effect with a second-order lag. The extension yields a recursive algorithm for the cancellation signals of the PD, each of which is updated using signals at only two sampling points. The algorithm is equivalent to a memory depth of two in computational efficiency. The numbers of multiplications and additions required to update all the cancellation signals are counted, confirming that the algorithm reduces the computational load to less than half that of the memory-polynomial approaches reported in recent papers. A computer simulation has clarified that the PD improves the adjacent channel leakage power ratio (ACLR) of OFDM signals with several hundred subcarriers, corresponding to 4G mobile radio communications. It has been confirmed that a fifth-order PD is effective up to a power level close to the 1 dB compression point. The improvement in error vector magnitude (EVM) by the PD is also simulated for OFDM signals whose subcarrier channels are modulated by 16 QAM.
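
    For context, the memory-polynomial predistorter that the proposed algorithm is compared against in computational cost can be written as below. The order, memory depth, and coefficient values are placeholders; this is the baseline structure, not the proposed recursive second-order-lag algorithm.

    ```python
    import numpy as np

    def memory_polynomial_pd(x, coeffs):
        """Baseline memory-polynomial predistorter:
        y[n] = sum_{k,q} a[k,q] * x[n-q] * |x[n-q]|^(2k)  (odd orders 1,3,5,...).
        `coeffs` has shape (K, Q+1) for K odd orders and memory depth Q."""
        K, Qp1 = coeffs.shape
        y = np.zeros_like(x, dtype=complex)
        for q in range(Qp1):
            xq = np.roll(x, q)
            xq[:q] = 0.0                   # zero out the wrapped-around samples
            for k in range(K):
                y += coeffs[k, q] * xq * np.abs(xq) ** (2 * k)
        return y

    # Toy OFDM-like baseband input and placeholder coefficients (K=3, Q=1).
    rng = np.random.default_rng(4)
    x = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) / np.sqrt(2)
    coeffs = np.array([[1.0, 0.0], [-0.05, 0.01], [0.002, 0.0]])
    y = memory_polynomial_pd(x, coeffs)
    print(y[:3])
    ```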

  • Data Flow Optimization of Dynamically Coarse Grain Reconfigurable Architecture for Multimedia Applications

    Xinning LIU  Chen MEI  Peng CAO  Min ZHU  Longxing SHI  

     
    PAPER-Design Methodology

    Vol: E95-D No:2, Page(s): 374-382

    This paper proposes a novel sub-architecture to optimize the data flow of REMUS-II (REconfigurable MUltimedia System 2), a dynamically reconfigurable coarse-grained architecture. REMUS-II consists of a µPU (Micro-Processor Unit) and two RPUs (Reconfigurable Processor Units), which speed up control-intensive tasks and data-intensive tasks, respectively. The parallel computing capability and flexibility of REMUS-II make it an excellent candidate for processing multimedia applications, which require a large number of memory accesses. In this paper, we specifically optimize the data flow to deal with these performance-hazardous and energy-hungry memory accesses and to meet the bandwidth requirements of parallel computing. The RPU internal memory can work in multiple modes, such as 2D-access mode and transformation mode, according to different multimedia access patterns. This design improves performance by up to 26% compared to a traditional on-chip memory. Meanwhile, a block buffer is implemented to optimize the off-chip data flow by reducing off-chip memory accesses, by up to 43% compared to direct DDR access. Based on RTL simulation, REMUS-II can achieve 1080p@30 fps for H.264 High Profile @ Level 4 and MPEG-2 High Level at a 200 MHz clock frequency. REMUS-II occupies 23.7 mm2 of silicon in a TSMC 65 nm logic process with a 400 MHz maximum working frequency.

  • Fast AdaBoost-Based Face Detection System on a Dynamically Coarse Grain Reconfigurable Architecture

    Jian XIAO  Jinguo ZHANG  Min ZHU  Jun YANG  Longxing SHI  

     
    PAPER-Application

    Vol: E95-D No:2, Page(s): 392-402

    An AdaBoost-based face detection system is proposed on a Coarse Grain Reconfigurable Architecture (CGRA) named REMUS-II. Our work is distinguished from previous ones in three aspects. First, a new hardware-software partitioning method is proposed: the whole face detection system is divided into several parallel tasks implemented on two Reconfigurable Processing Units (RPUs) and one micro-Processor Unit (µPU) according to their relationships, and these tasks communicate with each other through a mailbox mechanism. Second, a strong classifier is treated as the smallest phase of the detection system, and every phase is executed by these tasks in order. A phase of the Haar classifier is dynamically mapped onto a Reconfigurable Cell Array (RCA) only when needed, which is quite different from traditional Field Programmable Gate Array (FPGA) approaches in which all the classifiers are fabricated statically. Third, optimized data and configuration-word pre-fetch mechanisms are employed to improve overall system performance. Implementation results show that our approach at a 200 MHz clock rate can process up to 17 frames per second on VGA-size images, with a detection rate of over 95%. Our system consumes 194 mW, and the die size of the fabricated chip is 23 mm2 in TSMC 65 nm standard-cell technology. To the best of our knowledge, this work is the first implementation of the cascade Haar classifier algorithm on a dynamically reconfigurable CGRA platform presented in the literature.
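
    The phase-by-phase cascade evaluation that is mapped onto the RCA follows the standard early-reject pattern sketched below. The feature functions, weights, and thresholds are placeholders, and the sketch shows only the control flow of a cascade, not the hardware-software partitioning or the REMUS-II mapping.

    ```python
    from typing import Callable, List, Tuple

    # A stage (one "phase" / strong classifier) is a list of weak classifiers,
    # each a (feature_fn, weak_threshold, vote_weight) triple, plus a stage
    # threshold that the weighted votes must reach.
    Weak = Tuple[Callable[[object], float], float, float]

    def run_cascade(window, stages: List[Tuple[List[Weak], float]]) -> bool:
        for weaks, stage_threshold in stages:
            score = 0.0
            for feature_fn, weak_thr, weight in weaks:
                if feature_fn(window) > weak_thr:
                    score += weight
            if score < stage_threshold:
                return False        # early reject: later phases never run
        return True                 # survived all phases -> face candidate

    # Placeholder stage: a single "feature" that just measures mean intensity.
    toy_stage = ([(lambda w: sum(w) / len(w), 0.5, 1.0)], 1.0)
    print(run_cascade([0.6, 0.7, 0.9], [toy_stage]))
    ```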

  • Improved Algorithms for Calculating Addition Coefficients in Electromagnetic Scattering by Multi-Sphere Systems

    Nguyen Tien DONG  Masahiro TANAKA  Kazuo TANAKA  

     
    PAPER-Scattering and Diffraction

    Vol: E95-C No:1, Page(s): 27-35

    Evaluation of the addition coefficients introduced by the addition theorems for vector spherical harmonics is one of the most intractable problems in electromagnetic scattering by multi-sphere systems. The derivation of analytical expressions for the addition coefficients is lengthy and complex, while the computation of the addition coefficients is time-consuming even with the reasonably fast computers available nowadays. This paper presents an efficient algorithm for calculating the addition coefficients based on the recursive relations of the scalar addition coefficients. Numerical results from the formulation derived in this paper agree with previously published results, but the proposed algorithm reduces the computational time considerably. This paper also discusses the strengths and limitations of other formulations and numerical techniques found in the literature.

  • A Basic Fuzzy-Estimation Theory for Available Operation of Extremely Complicated Large-Scale Network Systems

    Kazuo HORIUCHI  

     
    PAPER-Circuit Theory

    Vol: E95-A No:1, Page(s): 338-345

    In this paper, we describe a basic fuzzy-estimation theory based on the concept of set-valued operators, suitable for the available operation of extremely complicated large-scale network systems. Fundamental conditions for the availability of the system behaviors of such network systems are clarified in the form of a β-level fixed-point theorem for systems of fuzzy-set-valued operators. The proof of this theorem is accomplished using the concept of Hausdorff's ball measure of non-compactness introduced into the Banach space.

  • Accurate Surface Change Detection Method Using Phase of Coherence Function on SAR Imagery

    Takehiro HOSHINO  Shouhei KIDERA  Tetsuo KIRIMOTO  

     
    PAPER-Sensing

    Vol: E95-B No:1, Page(s): 263-270

    Satellite-borne SAR (synthetic aperture radar) is used for high-resolution geosurface measurements. Recently, a feature extraction method based on CCD (coherent change detection) was developed, in which a slight change of the geosurface is detected using the phase relationship between sequential complex SAR images of the same region acquired at different times. For accurate detection of surface change, the log-likelihood method has been proposed; it determines an appropriate threshold for change detection by making use of the phase characteristic of the changed area, and thus enhances the detection probability. However, this and other conventional methods do not proactively employ the phase information of the estimated coherence function, and their detection probability is often low, especially when the target has small or locally uniform surface changes. To overcome this problem, this paper proposes a novel transformation index that considers the phase difference of the coherence function. Furthermore, we introduce a pre-processing calibration method that compensates for the bias error of the coherence phase, which results mainly from the orbit error of the antenna platform. Finally, results from numerical simulations and experimental modeling of geosurface measurement verify the effectiveness of the proposed method, even in situations with low SNR (signal-to-noise ratio).
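
    The complex coherence whose phase the proposed index exploits can be estimated from two co-registered complex SAR images as sketched below. The block size and the final use of the phase in a detection index are assumptions here; only the coherence estimate itself is standard.

    ```python
    import numpy as np

    def patch_coherence(s1, s2, patch=8):
        """Complex coherence of two co-registered complex SAR images,
        estimated independently on non-overlapping (patch x patch) blocks."""
        h, w = s1.shape
        mag = np.zeros((h // patch, w // patch))
        phase = np.zeros_like(mag)
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                a = s1[i:i + patch, j:j + patch].ravel()
                b = s2[i:i + patch, j:j + patch].ravel()
                g = np.vdot(b, a) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                mag[i // patch, j // patch] = np.abs(g)
                phase[i // patch, j // patch] = np.angle(g)
        # Conventional CCD thresholds |gamma|; the proposed index also exploits
        # the phase angle of gamma.
        return mag, phase

    rng = np.random.default_rng(5)
    s1 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    noise = 0.1 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
    s2 = s1 * np.exp(1j * 0.2) + noise
    mag, phase = patch_coherence(s1, s2)
    print(mag.mean(), phase.mean())   # |gamma| near 1, phase near 0.2 rad
    ```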

  • A Novel Bayes' Theorem-Based Saliency Detection Model

    Xin HE  Huiyun JING  Qi HAN  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E94-D No:12, Page(s): 2545-2548

    We propose a novel saliency detection model based on Bayes' theorem. The model integrates the two parts of Bayes' equation to measure saliency, each of which was considered separately in previous models. The proposed model measures saliency by computing a local kernel density estimate of features in the center-surround region and a global kernel density estimate of features at each pixel across the whole image. Under the proposed model, a saliency detection method is presented that extracts the DCT (Discrete Cosine Transform) magnitude of the local region around each pixel as the feature. Experiments show that the proposed model not only performs competitively on psychological patterns and better than current state-of-the-art models on human visual fixation data, but is also robust against signal uncertainty.

  • Downlink Multi-Point Transmission Effect Using Aggregate Base Station Architecture

    Soon-Gi PARK  Dae-Young KIM  

     
    LETTER

    Vol: E94-B No:12, Page(s): 3374-3377

    Downlink multi-point transmission is studied in this paper as a capacity enhancement method for users at the cell edge and for operators. It is based on a so-called aggregate base station architecture using distributed antennas and cloud computing. Its advantages are analyzed both architecturally and by simulation. The simulation results show that the capacity may be affected by the number of cells belonging to an aggregate base station and by the parameters related to its operation.

  • Traffic Anomaly Analysis and Characteristics on a Virtualized Network Testbed

    Chunghan LEE  Hirotake ABE  Toshio HIROTSU  Kyoji UMEMURA  

     
    PAPER

    Vol: E94-D No:12, Page(s): 2353-2361

    Network testbeds have been used for network measurement and experiments. In such testbeds, resources such as CPU, memory, and I/O interfaces are shared and virtualized to maximize node utility for many users. A few studies have investigated the impact of virtualization on precise network measurement and examined Internet traffic characteristics on virtualized testbeds. Although scheduling latency and heavy loads reportedly affect precise network measurement, no clear conditions or criteria have been established. Moreover, empirical-statistical criteria and methods that pick out anomalous cases for precise network experiments are required in userland, because the virtualization technology used in the provided testbeds can hardly be replaced. In this paper, we show that ‘oversize packet spacing’, which can be caused by CPU scheduling latency, is a major cause of throughput instability on a virtualized network testbed even when no significant changes occur in well-known network metrics. These are unusual anomalies in virtualized network environments. The empirical-statistical analysis results accord with the results of previous work. If network throughput is decreased by these anomalies, the measurement results should be carefully reviewed. Our empirical approach enables such anomalous cases to be identified. We present CPU availability as an important criterion for estimating the anomalies.
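
    The 'oversize packet spacing' symptom can be checked on a packet-timestamp trace as in the sketch below. The gap threshold, a fixed multiple of the median inter-packet gap, is an illustrative assumption rather than the paper's empirical-statistical criterion.

    ```python
    import numpy as np

    def oversize_gap_ratio(timestamps, factor=10.0):
        """Fraction of inter-packet gaps that are 'oversize' relative to the
        median gap; large values hint at sender-side CPU scheduling latency."""
        gaps = np.diff(np.sort(np.asarray(timestamps)))
        if gaps.size == 0:
            return 0.0
        threshold = factor * np.median(gaps)
        return float(np.mean(gaps > threshold))

    # Toy trace: roughly 1 ms pacing with a few 30 ms scheduling stalls injected.
    rng = np.random.default_rng(6)
    gaps = np.abs(rng.normal(1e-3, 5e-5, size=2000))
    gaps[rng.choice(gaps.size, 5, replace=False)] += 30e-3
    ts = np.cumsum(gaps)
    print(oversize_gap_ratio(ts))
    ```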

  • Estimating ADSL Link Capacity by Measuring RTT of Different Length Packets

    Makoto AOKI  Eiji OKI  

     
    LETTER-Network

    Vol: E94-B No:12, Page(s): 3583-3587

    This letter proposes a practical scheme for estimating ADSL link rates. The proposed scheme allows us to estimate ADSL link rates from measurements made at the NOC using existing communication protocols and network node facilities; it imposes no heavy traffic overhead. The scheme consists of two major steps. The first step is to collect round-trip time (RTT) measurements for both long and short packets, by sending Internet Control Message Protocol (ICMP) echo request messages, and to find the minimum RTT for each packet size. The second step is to estimate the ADSL down- and up-link rates using the difference in minimum RTT between long and short packets together with the experimentally obtained correlation between ADSL down- and up-link rates. RTTs are experimentally measured on an IP network, and it is shown that the down- and up-link rates can be obtained in a simple manner.
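
    The core arithmetic of the scheme can be sketched as follows: the minimum-RTT difference between long and short ICMP probes isolates the time spent serializing the extra bytes on the access link, and the down/up split then needs the experimentally obtained relationship between the two rates, which is represented here only by an assumed ratio.

    ```python
    def estimate_adsl_rates(min_rtt_long_s, min_rtt_short_s,
                            size_long_bytes, size_short_bytes,
                            down_up_ratio=8.0):
        """Estimate ADSL down/up link rates from min-RTT of two packet sizes.

        The RTT difference is attributed to serializing the extra bytes once
        downstream (ICMP echo request) and once upstream (echo reply):
            dRTT = dBits / R_down + dBits / R_up
        `down_up_ratio` = R_down / R_up stands in for the paper's experimentally
        obtained down/up correlation (an assumption in this sketch).
        """
        d_bits = (size_long_bytes - size_short_bytes) * 8
        d_rtt = min_rtt_long_s - min_rtt_short_s
        # 1/R_down + 1/R_up = d_rtt / d_bits, with R_down = ratio * R_up:
        r_up = d_bits * (1.0 + 1.0 / down_up_ratio) / d_rtt
        r_down = down_up_ratio * r_up
        return r_down, r_up

    # Example: 1400-byte vs 64-byte pings, 14 ms vs 1.5 ms minimum RTTs.
    print(estimate_adsl_rates(14e-3, 1.5e-3, 1400, 64))
    ```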

  • A 65-nm CMOS Fully Integrated Shock-Wave Antenna Array with On-Chip Jitter and Pulse-Delay Adjustment for Millimeter-Wave Active Imaging Application

    Nguyen Ngoc MAI KHANH  Masahiro SASAKI  Kunihiro ASADA  

     
    PAPER-Device and Circuit Modeling and Analysis

    Vol: E94-A No:12, Page(s): 2554-2562

    This paper presents a 65-nm CMOS 8-antenna array transmitter operating in the 117–130 GHz range for short-range, portable millimeter-wave (mm-wave) active imaging applications. Each antenna element is a new on-chip antenna located on the top metal layer. Using on-chip transformers, the pulse output of each resistor-less mm-wave pulse generator (PG) is fed to its integrated antenna. To adjust pulse delays for pulse beam-forming, a 7-bit digitally programmable delay circuit (DPDC) is added to each PG. Moreover, in order to dynamically adjust the pulse delays among the eight shock-wave (SW) outputs, we implemented an on-chip jitter and relative-skew measuring circuit with a 20-bit digital output, which provides cumulative distribution (CDF) and probability density (PDF) functions from which the DPDC input codes are determined to align the eight antennas' output pulses. Two measured radiation peaks after relative-skew alignment are obtained at (θ; φ) angles of (-56; 0) and (+57; 0). The measurement results show that the beam-forming angles of the fully integrated antenna array can be adjusted by digital input codes and by the on-chip skew adjustment circuit for active imaging applications.
