
Keyword Search Result

[Keyword] Y (22,683 hits)

Showing results 2821-2840 of 22,683

  • The Aggregation Point Placement Problem for Power Distribution Systems

    Hideharu KOJIMA  Tatsuhiro TSUCHIYA  Yasumasa FUJISAKI  

     
    PAPER-Graphs and Networks

      Vol:
    E101-A No:7
      Page(s):
    1074-1082

    This paper discusses the collection of sensor data for power distribution systems. In current power distribution systems, this is usually performed solely by the Remote Terminal Unit (RTU) located at the root of a power distribution network. The recent rise of distributed power sources, such as photovoltaic generators, raises the demand to increase the frequency of data collection, because the output of these distributed generators varies quickly depending on the weather. Increasing the data collection frequency in turn requires shortening the time required for data collection. The paper proposes the use of aggregation points for this purpose. An aggregation point can collect sensor data concurrently with other aggregation points as well as with the RTU. The data collection time can be shortened by having the RTU receive data from the aggregation points instead of from all sensors. This approach then poses the problem of finding the optimal locations of the aggregation points. To solve this problem, the paper proposes a Mixed Integer Linear Programming (MILP) formulation, which can then be solved with off-the-shelf mathematical optimization software. The results of experiments show that the proposed approach is applicable to fairly large-scale power distribution systems.
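    The placement trade-off can be illustrated with a toy model (an assumption made for this sketch, not the paper's MILP formulation): sensors sit on a single feeder line, the RTU and each aggregation point collect their own segment concurrently, so the collection time is the longest segment, and a small instance can be solved by brute force.

    ```python
    from itertools import combinations

    def collection_time(n_sensors, aps):
        """Collection time when sensors 1..n sit on a feeder line and each
        aggregation point (AP) collects the sensors between the previous AP
        and itself; the RTU (at position 0) collects the first segment.
        All collectors work concurrently, so the time is the longest segment."""
        cuts = [0] + sorted(aps) + [n_sensors]
        return max(b - a for a, b in zip(cuts, cuts[1:]))

    def best_placement(n_sensors, k):
        """Brute-force search over all placements of k APs."""
        return min(combinations(range(1, n_sensors), k),
                   key=lambda aps: collection_time(n_sensors, aps))

    aps = best_placement(12, 2)           # split 12 sensors with 2 APs
    print(aps, collection_time(12, aps))  # (4, 8) 4 -- an even 3-way split
    ```

    An exhaustive search like this only scales to small instances; the appeal of the MILP formulation is precisely that off-the-shelf solvers handle realistic network sizes.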

  • Fast Time-Aware Sparse Trajectories Prediction with Tensor Factorization

    Lei ZHANG  Qingfu FAN  Guoxing ZHANG  Zhizheng LIANG  

     
    LETTER-Data Engineering, Web Information Systems

      Publicized:
    2018/04/13
      Vol:
    E101-D No:7
      Page(s):
    1959-1962

    Existing trajectory prediction methods suffer from data sparsity and neglect time awareness, which leads to low accuracy. To address this problem, we propose a fast time-aware sparse trajectory prediction method with tensor factorization (TSTP-TF). First, we synthesize trajectories based on trajectory entropy and add the synthesized trajectories to the original trajectory space. This alleviates the sparsity of the trajectory data and makes the new trajectory space more reliable. Then, we introduce multidimensional tensor modeling into the Markov model to add the time dimension. Tensor factorization is adopted to infer the transition probabilities of missing regions, further mitigating data sparsity. Because of the scale of the tensor, we design a divide-and-conquer tensor factorization model to reduce memory consumption and speed up decomposition. Experiments on a real dataset show that TSTP-TF improves prediction accuracy by as much as 9% and 2% compared to the baseline algorithm and the ESTP-MF algorithm, respectively.
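    The Markov backbone of such predictors can be sketched as follows (a minimal first-order model in plain Python; the paper's time dimension, tensor model, and factorization are omitted). Unseen regions return no prediction, which is exactly the sparsity gap that tensor factorization is meant to fill.

    ```python
    from collections import defaultdict

    def transition_probs(trajectories):
        """First-order Markov model P(next region | current region),
        estimated from region-sequence trajectories by counting."""
        counts = defaultdict(lambda: defaultdict(int))
        for traj in trajectories:
            for cur, nxt in zip(traj, traj[1:]):
                counts[cur][nxt] += 1
        return {c: {n: k / sum(d.values()) for n, k in d.items()}
                for c, d in counts.items()}

    def predict_next(probs, region):
        """Most probable next region; None for unseen (sparse) regions."""
        if region not in probs:
            return None
        return max(probs[region], key=probs[region].get)

    trajs = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
    model = transition_probs(trajs)
    print(predict_next(model, "B"))  # "C" (observed twice vs once for "D")
    ```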

  • Improved Wolf Pack Algorithm Based on Differential Evolution Elite Set

    Xiayang CHEN  Chaojing TANG  Jian WANG  Lei ZHANG  Qingkun MENG  

     
    LETTER-Fundamentals of Information Systems

      Publicized:
    2018/03/30
      Vol:
    E101-D No:7
      Page(s):
    1946-1949

    Although the Wolf Pack Algorithm (WPA) is a novel optimization algorithm with good performance, there is still room for improvement in its convergence. To speed up convergence and strengthen the search ability, we improve WPA with a Differential Evolution (DE) elite set strategy; the new algorithm is called WPADEES for short. WPADEES converges faster than WPA and adapts better to a variety of optimization problems. Six standard benchmark functions are used to verify the effects of these improvements. Our experiments show that WPADEES outperforms the standard WPA and other intelligent optimization algorithms, such as GA, DE, PSO, and ABC, in several situations.
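    The DE operator borrowed by the elite set strategy can be sketched in isolation (one standard DE/rand/1/bin generation on the sphere benchmark; the wolf pack machinery itself is not reproduced here). Greedy selection guarantees the population's best fitness never regresses.

    ```python
    import random

    def sphere(x):
        """Standard benchmark function: global minimum 0 at the origin."""
        return sum(v * v for v in x)

    def de_generation(pop, f=0.5, cr=0.9, fitness=sphere):
        """One DE/rand/1/bin generation: mutation, binomial crossover,
        and greedy selection (the trial replaces the target only if better)."""
        new_pop = []
        for i, target in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]
            jrand = random.randrange(len(target))  # forced crossover position
            trial = [m if (random.random() < cr or j == jrand) else t
                     for j, (m, t) in enumerate(zip(mutant, target))]
            new_pop.append(trial if fitness(trial) <= fitness(target) else target)
        return new_pop

    random.seed(0)
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
    start_best = min(map(sphere, pop))
    for _ in range(50):
        pop = de_generation(pop)
    print(min(map(sphere, pop)) <= start_best)  # True: greedy selection never regresses
    ```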

  • Identifying Core Objects for Trace Summarization by Analyzing Reference Relations and Dynamic Properties

    Kunihiro NODA  Takashi KOBAYASHI  Noritoshi ATSUMI  

     
    PAPER

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1751-1765

    Behaviors of an object-oriented system can be visualized as reverse-engineered sequence diagrams from execution traces. This approach is a valuable tool for program comprehension tasks. However, owing to the massive amount of information contained in an execution trace, a reverse-engineered sequence diagram often suffers from a scalability issue. To address this issue, many trace summarization techniques have been proposed. Most previous techniques focus on reducing the vertical size of the diagram, but decreasing the horizontal size is also very important. Nonetheless, few studies have addressed this point, so there is a strong need for further development of horizontal summarization techniques. In this paper, we present a method for identifying core objects for trace summarization by analyzing reference relations and dynamic properties. By visualizing only the interactions related to core objects, we can obtain a horizontally compact reverse-engineered sequence diagram that contains the system's key behaviors. To identify core objects, we first detect and eliminate temporary objects that are trivial to the system by analyzing reference relations and object lifetimes. Then, estimating the importance of each non-trivial object based on its dynamic properties, we identify the highly important ones (i.e., core objects). We implemented our technique as a tool and evaluated it using traces from various open-source software systems. The results showed that our technique is much more effective at horizontal reduction of a reverse-engineered sequence diagram than the state-of-the-art trace summarization technique: the horizontal compression ratio of our technique was 134.6 on average, whereas that of the state-of-the-art technique was 11.5. The runtime overhead imposed by our technique was 167.6% on average, which is relatively small compared with recent scalable dynamic analysis techniques and shows the practicality of our technique. Overall, our technique achieves a significant reduction of the horizontal size of a reverse-engineered sequence diagram with a small overhead and is expected to be a valuable tool for program comprehension.
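    The two-stage filtering idea can be sketched abstractly (the per-object data shape here, lifetime plus inbound reference count, is a hypothetical simplification; the paper's actual reference-relation analysis is far richer):

    ```python
    def core_objects(objects, min_lifetime, top_k):
        """objects: dict mapping object id -> (lifetime, inbound_refs).
        Stage 1 drops short-lived temporaries; stage 2 ranks the survivors
        by how often they are referenced and keeps the top_k as core objects."""
        survivors = {o: v for o, v in objects.items() if v[0] >= min_lifetime}
        return sorted(survivors, key=lambda o: survivors[o][1], reverse=True)[:top_k]

    trace = {"tmpIter": (3, 1), "OrderService": (900, 40),
             "tmpBuf": (5, 2), "Cache": (700, 25), "Logger": (800, 4)}
    print(core_objects(trace, min_lifetime=10, top_k=2))
    # ['OrderService', 'Cache'] -- temporaries filtered, then ranked by refs
    ```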

  • Character Feature Learning for Named Entity Recognition

    Ping ZENG  Qingping TAN  Haoyu ZHANG  Xiankai MENG  Zhuo ZHANG  Jianjun XU  Yan LEI  

     
    LETTER

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1811-1815

    Deep neural named entity recognition models automatically learn and extract entity features, avoiding the traditional models' heavy reliance on complex feature engineering and obscure domain knowledge; this has become a hot research topic in recent years. However, existing deep neural models involve only simple character feature learning and extraction methods, which limits their capability. To further explore the performance of deep neural models, we propose two character feature learning models based on a convolutional neural network and a long short-term memory network. These two models consider the local semantic and positional features of word characters. Experiments conducted on the CoNLL-2003 dataset show that the proposed models outperform traditional ones and demonstrate excellent performance.

  • Active Contours Driven by Local Rayleigh Distribution Fitting Energy for Ultrasound Image Segmentation

    Hui BI  Yibo JIANG  Hui LI  Xuan SHA  Yi WANG  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2018/02/08
      Vol:
    E101-D No:7
      Page(s):
    1933-1937

    Ultrasound image segmentation is a crucial task in many clinical applications. However, ultrasound images are difficult to segment owing to the intensity inhomogeneity caused by the ultrasound imaging technique. In this paper, to handle this inhomogeneity while accounting for the properties of ultrasound images, a Local Rayleigh Distribution Fitting (LRDF) energy term is newly introduced into the traditional level set method. The curve evolution equation is derived by energy minimization, and a self-driven uterus contour is obtained on the ultrasound images. Experimental segmentation results on synthetic images and in-vivo ultrasound images show that the proposed approach is effective and accurate, with a Dice Similarity Coefficient (DSC) of 0.95 ± 0.02.

  • Secrecy Energy Efficiency Optimization for MIMO SWIPT Systems

    Yewang QIAN  Tingting ZHANG  Haiyang ZHANG  

     
    LETTER-Communication Theory and Signals

      Vol:
    E101-A No:7
      Page(s):
    1141-1145

    In this letter, we consider a multiple-input multiple-output (MIMO) simultaneous wireless information and power transfer (SWIPT) system, in which the confidential message intended for the information receiver (IR) must be kept secret from the energy receiver (ER). Our goal is to design the optimal transmit covariance matrix so as to maximize the secrecy energy efficiency (SEE) of the system while satisfying the secrecy rate, energy harvesting, and transmit power constraints. To deal with the original non-convex optimization problem, we propose an alternating optimization (AO)-based algorithm and prove its convergence. Simulation results show that the proposed algorithm outperforms conventional design methods in terms of SEE.

  • Analysis of a Plasmonic Pole-Absorber Using a Periodic Structure Open Access

    Junji YAMAUCHI  Shintaro OHKI  Yudai NAKAGOMI  Hisamatsu NAKANO  

     
    INVITED PAPER

      Vol:
    E101-C No:7
      Page(s):
    495-500

    A plasmonic black pole (PBP) consisting of a series of touching spherical metal surfaces is analyzed using the finite-difference time-domain (FDTD) method with the periodic boundary condition. First, the wavelength characteristics of the PBP are studied under the assumption that the PBP is omnidirectionally illuminated. It is found that partial truncation of each metal sphere reduces the reflectivity over a wide wavelength range. Next, we consider the case where the PBP is illuminated with a cylindrical wave from a specific direction. It is shown that an absorptivity of more than 80% is obtained over a wavelength range of λ=500 nm to 1000 nm. Calculation regarding the Poynting vector distribution also shows that the incident wave is bent and absorbed towards the center axis of the PBP.

  • Using Scattered X-Rays to Improve the Estimation Accuracy of Attenuation Coefficients: A Fundamental Analysis

    Naohiro TODA  Tetsuya NAKAGAMI  Yoichi YAMAZAKI  Hiroki YOSHIOKA  Shuji KOYAMA  

     
    PAPER-Measurement Technology

      Vol:
    E101-A No:7
      Page(s):
    1101-1114

    In X-ray computed tomography, scattered X-rays are generally removed using a post-patient collimator located in front of the detector. In this paper, we show that scattered X-rays have the potential to improve the estimation accuracy of the attenuation coefficient in computed tomography. To simplify the problem, we reduce the geometry of computed tomography to a thin cylinder composed of a homogeneous material, so that only one attenuation coefficient needs to be estimated. We then conduct a Monte Carlo numerical experiment on improving the estimation accuracy of the attenuation coefficient by measuring the scattered X-rays with several dedicated toroidal detectors placed around the cylinder in addition to the primary X-rays. We further present a theoretical analysis that explains the experimental results, employing a model that uses a T-junction (the T-junction model) to divide the photon transport into primary and scattered components with respect to the attenuation coefficient. By connecting several T-junction models in series, we model the case of several scatter detectors. The estimation accuracy is evaluated by the variance of the efficient estimator, i.e., the Cramér-Rao lower bound. We confirm that the variance decreases as the number of scatter detectors increases, which implies that using scattered X-rays can reduce the irradiation dose for patients.

  • Cross-Layer Design for Exposed Node Reduction in Ad Hoc WLANs

    Emilia WEYULU  Masaki HANADA  Hidehiro KANEMITSU  Eun-Chan PARK  Moo Wan KIM  

     
    PAPER

      Publicized:
    2018/01/22
      Vol:
    E101-B No:7
      Page(s):
    1575-1588

    Interference in ad hoc WLANs is a common occurrence as there is no centralized access point controlling device access to the wireless channel. IEEE 802.11 WLANs use carrier sense multiple access with collision avoidance (CSMA/CA) which initiates the Request to Send/Clear to Send (RTS/CTS) handshaking mechanism to solve the hidden node problem. While it solves the hidden node problem, RTS/CTS triggers the exposed node problem. In this paper, we present an evaluation of a method for reducing exposed nodes in 802.11 ad hoc WLANs. Using asymmetric transmission ranges for RTS and CTS frames, a cross-layer design is implemented between Layer 2 and 3 of the OSI model. Information obtained by the AODV routing protocol is utilized in adjusting the RTS transmission range at the MAC Layer. The proposed method is evaluated with the NS-2 simulator and we observe significant throughput improvement, and confirm the effectiveness of the proposed method. Especially when the mobile nodes are randomly distributed, the throughput gain of the Asymmetric RTS/CTS method is up to 30% over the Standard RTS/CTS method.

  • Error Correction for Search Engine by Mining Bad Case

    Jianyong DUAN  Tianxiao JI  Hao WANG  

     
    PAPER-Natural Language Processing

      Publicized:
    2018/03/26
      Vol:
    E101-D No:7
      Page(s):
    1938-1945

    Automatic correction of users' search terms is an important aspect of improving search engine retrieval efficiency, accuracy, and user experience. In the era of big data, massive search engine logs can be analyzed and mined to uncover the hidden intent behind queries, and statistical modeling of query errors in search engine log data yields good correction results. However, when an erroneous query does not appear in the log, the information in the log cannot be used to correct it. Such undiscovered erroneous queries are called Bad Cases. This paper combines an error correction algorithm model with search engine query log mining and analysis. First, we explore Bad Cases in the query error correction process through the search engine query logs. Then we quantify the characteristics of these Bad Cases and build a model that allows search engines to automatically mine Bad Cases with these features. Finally, we apply the Bad Cases to an N-gram error correction algorithm model to examine the impact of Bad Case mining on error correction. The experimental results show that error correction based on Bad Case mining markedly improves both the precision and the recall of automatic error correction, improving the user experience and making the interaction friendlier.
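    The N-gram correction step can be sketched as follows (a minimal noisy-channel-style corrector over a tiny hypothetical query log; the paper's Bad Case mining is not reproduced). Candidates within edit distance 1 of the typed word are ranked by how well they fit the preceding word in the log's bigram counts:

    ```python
    from collections import Counter

    LOG = ["machine learning", "machine translation", "marine biology",
           "machine learning", "machine vision"]  # hypothetical query log

    WORDS = Counter(w for q in LOG for w in q.split())
    BIGRAMS = Counter(tuple(q.split()[i:i + 2]) for q in LOG
                      for i in range(len(q.split()) - 1))

    def edits1(word):
        """All strings at edit distance 1 (deletes, swaps, replaces, inserts)."""
        letters = "abcdefghijklmnopqrstuvwxyz"
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        return set([L + R[1:] for L, R in splits if R] +
                   [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1] +
                   [L + c + R[1:] for L, R in splits if R for c in letters] +
                   [L + c + R for L, R in splits for c in letters])

    def correct(prev, word):
        """Pick the in-log candidate that best fits after `prev`,
        scoring by bigram count, then unigram count."""
        cands = [w for w in edits1(word) | {word} if w in WORDS]
        if not cands:
            return word
        return max(cands, key=lambda w: (BIGRAMS[(prev, w)], WORDS[w]))

    print(correct("machine", "learnin"))  # "learning"
    ```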

  • Growth Mechanism of Polar-Plane-Free Faceted InGaN Quantum Wells Open Access

    Yoshinobu MATSUDA  Mitsuru FUNATO  Yoichi KAWAKAMI  

     
    INVITED PAPER

      Vol:
    E101-C No:7
      Page(s):
    532-536

    The growth mechanisms of three-dimensionally (3D) faceted InGaN quantum wells (QWs) on (1̄1̄22̄) GaN substrates are discussed. The structure is composed of (1̄1̄22̄), {1̄101̄}, and {1̄100} planes, and its cross-sectional shape is similar to that of 3D QWs on (0001). However, the 3D QWs on (1̄1̄22̄) and (0001) show quite different inter-facet variations of In composition. To clarify this observation, the local thicknesses of the constituent InN and GaN on the 3D GaN are fitted with a formula derived from the diffusion equation. It is suggested that the difference in the In incorporation efficiency of each crystallographic plane strongly affects the surface In adatom migration.

  • A Relaxed Bit-Write-Reducing and Error-Correcting Code for Non-Volatile Memories

    Tatsuro KOJO  Masashi TAWADA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    LETTER

      Vol:
    E101-A No:7
      Page(s):
    1045-1052

    Non-volatile memories are a promising alternative for memory design, but the data stored in them may still be corrupted by crosstalk and radiation. The stored data can be restored by using error-correcting codes, but these require extra bits to correct bit errors. One of the largest problems with non-volatile memories is that writing a bit consumes ten to a hundred times more energy than in normal memories, so reducing the number of written bits is essential. Recently, the REC code (a bit-write-reducing and error-correcting code) was proposed for non-volatile memories; it reduces the number of written bits while retaining error-correcting capability. The REC code is generated from a linear systematic error-correcting code, but that code must include the all-ones codeword 11…1, which forces a longer codeword bit length. In this letter, we propose a method to generate a relaxed REC code from a relaxed error-correcting code that does not necessarily include the all-ones codeword, so its codeword bit length can be shorter. We prove that the maximum number of flipped bits of the relaxed REC code is still theoretically bounded. Experimental results show that the relaxed REC code efficiently reduces the number of written bits.
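    The bit-write-reducing idea can be illustrated in its simplest form (storing either the word or its bitwise complement plus a one-bit flag, whichever flips fewer cells; this is only the basic flavor of write reduction, not the REC or relaxed REC construction):

    ```python
    def hamming(a, b):
        """Number of positions in which two bit tuples differ."""
        return sum(x != y for x, y in zip(a, b))

    def write_min_flips(stored, new):
        """Store either `new` or its complement, whichever flips fewer
        memory cells; a one-bit flag records the choice for decoding.
        At most half the cells are ever flipped."""
        comp = tuple(1 - b for b in new)
        if hamming(stored, comp) < hamming(stored, new):
            return comp, 1      # flag=1: the stored word is complemented
        return new, 0

    def read(word, flag):
        """Undo the complement on readout using the flag bit."""
        return tuple(1 - b for b in word) if flag else tuple(word)

    stored = (1, 1, 1, 1, 0, 0, 0, 0)
    new    = (0, 0, 0, 0, 0, 0, 0, 1)
    word, flag = write_min_flips(stored, new)
    print(word, flag, hamming(stored, word))  # 4 flips instead of 5
    ```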

  • Two High Accuracy Frequency Estimation Algorithms Based on New Autocorrelation-Like Function for Noncircular/Sinusoid Signal

    Kai WANG  Jiaying DING  Yili XIA  Xu LIU  Jinguang HAO  Wenjiang PEI  

     
    PAPER-Digital Signal Processing

      Vol:
    E101-A No:7
      Page(s):
    1065-1073

    Computing autocorrelation coefficients can effectively reduce the influence of additive white noise and thus improve estimation precision. In this paper, an autocorrelation-like function, different from the ordinary one, is defined and proven to have better linear predictive performance. Two algorithms are developed from this signal model to obtain frequency estimates, and we analyze their theoretical properties in additive white Gaussian noise. The simulation results match the theoretical values well in the sense of mean square error. Compared with existing estimators, the proposed algorithms are closer to the Cramér-Rao lower bound (CRLB). In addition, computer simulations demonstrate that the proposed algorithms provide high accuracy and good noise robustness.
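    For reference, the classical lag-1 autocorrelation estimator that such methods build on can be sketched for a complex exponential (the paper's autocorrelation-like function itself is not reproduced here): the phase of the lag-1 autocorrelation directly gives the angular frequency.

    ```python
    import cmath

    def freq_estimate(x):
        """Estimate the normalized angular frequency of a complex exponential
        x[n] = exp(j*(w*n + phi)) from the lag-1 autocorrelation:
        sum_n x[n+1] * conj(x[n]) has phase w."""
        r1 = sum(b * a.conjugate() for a, b in zip(x, x[1:]))
        return cmath.phase(r1)

    w_true = 0.3
    x = [cmath.exp(1j * (w_true * n + 0.7)) for n in range(200)]
    print(freq_estimate(x))  # 0.3 (exact in the noiseless case)
    ```

    With additive white noise each lag product is perturbed, which is why averaging over many lags, or redefining the autocorrelation function as the paper does, improves the estimate.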

  • Fast Rendezvous Scheme with a Few Control Signals for Multi-Channel Cognitive Radio

    Hayato SOYA  Osamu TAKYU  Keiichiro SHIRAI  Mai OHTA  Takeo FUJII  Fumihito SASAMORI  Shiro HANDA  

     
    PAPER

      Publicized:
    2018/01/22
      Vol:
    E101-B No:7
      Page(s):
    1589-1601

    Multi-channel cognitive radio is a powerful solution to the exhaustion of frequency spectrum resources. In a cognitive radio, although the master and slave terminals that form a communication link are free to access arbitrary channels, a mismatch between their access channels can occur. A rendezvous scheme based on frequency hopping can compensate for this mismatch by exchanging control signals through a channel selected in accordance with a certain rule. However, conventional frequency hopping schemes consider neither an access protocol covering both the rendezvous control signals and the signals caused by channel access from other systems, nor an information sharing method for reaching a consensus between the master and slave terminals. This paper proposes a modified rendezvous scheme based on learning-based channel occupancy rate (COR) estimation and describes a specific channel-access rule for the slave terminal. On the basis of this rule, the master estimates the channel selected by the slave by considering the average COR of the other systems. Since the master can narrow down the number of candidate channels, a fast rendezvous with only a few control signals is established.
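    The COR-based selection rule at the heart of such rendezvous schemes can be sketched as follows (hypothetical sensing data; the paper's learning-based estimation and hopping sequence are omitted). Because master and slave apply the same deterministic rule to similar observations, they tend to converge on the same channel without exchanging their choices:

    ```python
    def channel_occupancy_rates(observations):
        """observations[c] is a list of busy/idle (1/0) sensing results
        for channel c; the COR is the fraction of slots sensed busy."""
        return [sum(slots) / len(slots) for slots in observations]

    def rendezvous_channel(observations):
        """Deterministic rule: pick the channel with the lowest COR
        (ties broken by the lowest channel index)."""
        cor = channel_occupancy_rates(observations)
        return min(range(len(cor)), key=lambda c: cor[c])

    sensing = [[1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 1, 1]]
    print(rendezvous_channel(sensing))  # channel 1: COR 0.25 vs 0.75, 0.75
    ```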

  • Dynamic Energy Efficient Virtual Link Resource Reallocation Approach for Network Virtualization Environment

    Shanming ZHANG  Takehiro SATO  Satoru OKAMOTO  Naoaki YAMANAKA  

     
    PAPER-Network

      Publicized:
    2018/01/10
      Vol:
    E101-B No:7
      Page(s):
    1675-1684

    The energy consumption of network virtualization environments (NVEs) has become a critical issue. In this paper, we focus on reducing the data switching energy consumption of an NVE. We first analyze the data switching energy of the NVE. Then, we propose a dynamic energy efficient virtual link resource reallocation (eEVLRR) approach for the NVE. eEVLRR dynamically reallocates energy efficient substrate resources (s-resources) to virtual links as the set of embeddable s-resources changes, in order to save data switching energy. To avoid traffic interruptions during reallocation, we design a cross-layer, application-session-based forwarding model for eEVLRR that identifies each data transmission flow and forwards it along the initially specified substrate data transport path until the flow ends, without traffic interruptions. The results of performance evaluations show that eEVLRR not only keeps the s-resources allocated to virtual links continuously energy efficient, saving data switching energy, but also has positive impacts on the virtual network acceptance rate, revenues, and s-resource utilization.

  • Welch FFT Segment Size Selection Method for FFT Based Wide Band Spectrum Measurement

    Hiroki IWATA  Kenta UMEBAYASHI  Janne J. LEHTOMÄKI  Shusuke NARIEDA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2018/01/18
      Vol:
    E101-B No:7
      Page(s):
    1733-1743

    We introduce a Welch FFT segment size selection method for FFT-based wide band spectrum measurement in the context of smart spectrum access (SSA), in which statistical spectrum usage information of primary users (PUs), such as duty cycle (DC), will be exploited by secondary users (SUs). Energy detectors (EDs) based on Welch FFT can detect the presence of PU signals in a broadband environment efficiently, and DC can be estimated properly if a Welch FFT segment size is set suitably. There is a trade-off between detection performance and frequency resolution in terms of the Welch FFT segment size. The optimum segment size depends on signal-to-noise ratio (SNR) which makes practical and optimum segment size setting difficult. For this issue, we previously proposed a segment size selection method employing a relationship between noise floor (NF) estimation output and the segment size without SNR information. It can achieve accurate spectrum awareness at the expense of relatively high computational complexity since it employs exhaustive search to select a proper segment size. In this paper, we propose a segment size selection method that offers reasonable spectrum awareness performance with low computational complexity since limited search is used. Numerical evaluations show that the proposed method can match the spectrum awareness performance of the conventional method with 70% lower complexity or less.
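    The segment-size trade-off can be seen in a bare-bones Welch estimator (a pure-Python sketch with non-overlapping rectangular segments; real implementations use overlapping, windowed segments and the FFT). Larger segments give finer frequency resolution but fewer periodograms to average, hence noisier estimates:

    ```python
    import cmath, math, random

    def dft_mag2(seg):
        """Periodogram of one segment: squared DFT magnitude over length."""
        n = len(seg)
        return [abs(sum(seg[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) ** 2 / n for k in range(n)]

    def welch(x, seg_size):
        """Welch PSD estimate: average the periodograms of non-overlapping
        segments of length seg_size (rectangular window for brevity)."""
        segs = [x[i:i + seg_size]
                for i in range(0, len(x) - seg_size + 1, seg_size)]
        psds = [dft_mag2(s) for s in segs]
        return [sum(p[k] for p in psds) / len(psds) for k in range(seg_size)]

    random.seed(1)
    # Tone at normalized frequency 0.25 buried in white Gaussian noise.
    x = [math.sin(2 * math.pi * 0.25 * t) + random.gauss(0, 0.5)
         for t in range(256)]
    psd = welch(x, 32)  # 8 averaged segments, 32 frequency bins
    peak = max(range(len(psd)), key=psd.__getitem__)
    print(peak)  # bin 8 or its mirror bin 24 for this real-valued tone
    ```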

  • A Unified Analysis of the Signal Transfer Characteristics of a Single-Path FET-R-C Circuit Open Access

    Tetsuya IIZUKA  Asad A. ABIDI  

     
    INVITED PAPER

      Vol:
    E101-C No:7
      Page(s):
    432-443

    A frequently occurring subcircuit consists of a loop of a resistor (R), a field-effect transistor (FET), and a capacitor (C). The FET acts as a switch, controlled at its gate terminal by a clock voltage. This subcircuit may be acting as a sample-and-hold (S/H), as a passive mixer (P-M), or as a bandpass filter or bandpass impedance. In this work, we will present a useful analysis that leads to a simple signal flow graph (SFG), which captures the FET-R-C circuit's action completely across a wide range of design parameters. The SFG dissects the circuit into three filtering functions and ideal sampling. This greatly simplifies analysis of frequency response, noise, input impedance, and conversion gain, and leads to guidelines for optimum design. This paper focuses on the analysis of a single-path FET-R-C circuit's signal transfer characteristics including the reconstruction of the complete waveform from the discrete-time sampled voltage.

  • Column-Parallel ADCs for CMOS Image Sensors and Their FoM-Based Evaluations Open Access

    Shoji KAWAHITO  

     
    INVITED PAPER

      Vol:
    E101-C No:7
      Page(s):
    444-456

    This paper reviews architectures and topologies of column-parallel analog-to-digital converters (ADCs) used in CMOS image sensors (CISs) and discusses the performance of CISs using column-parallel ADCs based on figures of merit (FoM), considering noise models that behave differently in the low/middle and high pixel-rate regions. Various FoM accounting for different performance factors are defined. The defined FoM are applied to surveyed data on reported CISs using column-parallel ADCs, which are categorized into four types: single-slope, SAR, cyclic, and delta-sigma ADCs. The FoM defined by (noise)²(power)/(pixel rate), applied separately to the low/middle and high pixel-rate regions, explains well the frontline of CIS performance across all pixel rates. Using the FoM defined by (noise)²(power)/((intrascene dynamic range)(pixel rate)), the effectiveness of recently reported techniques for extended-dynamic-range CISs is clarified.
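    The FoM arithmetic above can be made concrete (the ADC numbers below are hypothetical, chosen only to show how the metric trades noise against power; lower FoM is better):

    ```python
    def fom_basic(noise_e, power_w, pixel_rate_hz):
        """FoM = (noise)^2 * (power) / (pixel rate); lower is better."""
        return noise_e ** 2 * power_w / pixel_rate_hz

    def fom_dr(noise_e, power_w, dyn_range, pixel_rate_hz):
        """FoM = (noise)^2 * (power) / ((intrascene dynamic range) * (pixel rate))."""
        return noise_e ** 2 * power_w / (dyn_range * pixel_rate_hz)

    # Two hypothetical sensors: B halves the read noise at twice the power.
    a = fom_basic(2.0, 0.2, 120e6)  # 2.0 e- rms, 200 mW, 120 Mpixel/s
    b = fom_basic(1.0, 0.4, 120e6)  # 1.0 e- rms, 400 mW, 120 Mpixel/s
    print(b < a)  # True: halving noise outweighs doubling power in this FoM
    ```

    Because noise enters squared, a 2x noise reduction is worth a 4x power increase under this metric, which is why the squared-noise form ranks low-noise designs so favorably.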

  • 32-Gbit/s CMOS Receivers in 300-GHz Band Open Access

    Shinsuke HARA  Kosuke KATAYAMA  Kyoya TAKANO  Ruibing DONG  Issei WATANABE  Norihiko SEKINE  Akifumi KASAMATSU  Takeshi YOSHIDA  Shuhei AMAKAWA  Minoru FUJISHIMA  

     
    PAPER

      Vol:
    E101-C No:7
      Page(s):
    464-471

    This paper presents low-noise amplifier (LNA)-less 300-GHz CMOS receivers that operate above the NMOS unity-power-gain frequency, fmax. The receivers consist of a down-conversion mixer with a doubler- or tripler-last multiplier chain that upconverts an LO1/n signal into 300 GHz. The conversion gain of the receiver with the doubler-last multiplier is -19.5 dB and its noise figure, 3-dB bandwidth, and power consumption are 27 dB, 27 GHz, and 0.65 W, respectively. The conversion gain of the receiver with the tripler-last multiplier is -18 dB and its noise figure, 3-dB bandwidth, and power consumption are 25.5 dB, 33 GHz, and 0.41 W, respectively. The receivers achieve a wireless data rate of 32 Gb/s with 16QAM. This shows the potential of the moderate-fmax CMOS technology for ultrahigh-speed THz wireless communications.
