Keyword Search Result

[Keyword] TE: 21,534 hits (showing 5001-5020)

  • Lesion Type Classification by Applying Machine-Learning Technique to Contrast-Enhanced Ultrasound Images

    Kazuya TAKAGI  Satoshi KONDO  Kensuke NAKAMURA  Mitsuyoshi TAKIGUCHI  

     
    PAPER-Biological Engineering

      Vol:
    E97-D No:11
      Page(s):
    2947-2954

    One of the major applications of contrast-enhanced ultrasound (CEUS) is lesion classification. After contrast agents are administered, it is possible to identify a lesion type from its enhancement pattern. However, CEUS image reading is not easy because there are various types of enhancement patterns even for the same type of lesion, and clear classification criteria have not yet been defined. Some studies have used conventional time intensity curves (TICs), which show the vessel dynamics of a lesion. It is possible to predict lesion type from TIC parameters such as the coefficients obtained by curve fitting, peak intensity, flow rate, and time to peak. However, these parameters do not always provide sufficient accuracy. In this paper, we prepare 1D Haar-like features that describe intensity changes in a TIC and adopt the AdaBoost machine-learning technique, which makes it easy to understand which features are useful. Hyperparameters of the weak classifiers, e.g., the step size of the Haar-like filter length and the threshold for the filter output, are optimized by searching for the values that give the best accuracy. We evaluate the proposed method using 36 focal splenic lesions in canines, 16 of which were benign and 20 malignant. The accuracies were 91.7% (33/36) when inspected by an experienced veterinarian, 75.0% (27/36) by linear discriminant analysis (LDA) using three conventional TIC parameters (time to peak, area under the curve, and peak intensity), and 91.7% (33/36) using the proposed method. A McNemar test shows the p-value between the proposed method and LDA to be less than 0.05. This result shows the statistical significance of the difference between the proposed method and the conventional TIC analysis method using LDA.
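
A minimal sketch of the pipeline described above: extract 1D Haar-like features from a TIC and train AdaBoost with scikit-learn. The filter lengths, step size, and synthetic data are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def haar_features_1d(tic, lengths=(4, 8, 16), step=2):
    """Simple 1D Haar-like features: difference between the mean intensity of
    the left and right halves of a sliding window over the TIC."""
    feats = []
    for length in lengths:
        half = length // 2
        for start in range(0, len(tic) - length + 1, step):
            left = tic[start:start + half].mean()
            right = tic[start + half:start + length].mean()
            feats.append(left - right)
    return np.asarray(feats)

# Synthetic example: 36 TICs of 64 frames each with binary labels
rng = np.random.default_rng(0)
tics = rng.random((36, 64))
labels = rng.integers(0, 2, size=36)       # 0 = benign, 1 = malignant

X = np.vstack([haar_features_1d(t) for t in tics])
clf = AdaBoostClassifier(n_estimators=50)  # default weak learner: decision stumps
clf.fit(X, labels)
print(clf.score(X, labels))
```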

  • Adaptive Sensing Period Based Distributed Medium Access Control for Cognitive Radio Networks

    Su Min KIM  Junsu KIM  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E97-B No:11
      Page(s):
    2502-2511

    In this paper, we propose distributed medium access control (MAC) protocols based on an adaptive sensing period adjustment scheme for low-cost multiple secondary users in interweave-type cognitive radio (CR) networks. The proposed MAC protocols adjust the sensing period of each secondary user based on both the primary sensing and secondary data channels in a distributed manner. Then, the secondary user with the shortest sensing period accesses the medium using request-to-send (RTS) and clear-to-send (CTS) message exchange. Three components affect the length of each user's sensing period: the sensing channel quality from the primary system, the data channel quality to the secondary receiver, and the collision probability among multiple secondary transmitters. We propose two sensing period adjustment (SPA) schemes, logarithmic SPA (LSPA) and exponential SPA (ESPA), to efficiently improve the achievable rate considering these three components. We evaluate the performance of the proposed schemes in terms of the achievable rate and other factors affecting it, such as collision probability, false alarm probability, and average sensing period.
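
The abstract does not give the LSPA/ESPA update formulas, so the sketch below only illustrates the general idea: map the three factors to a sensing period between configurable bounds, with a logarithmic or an exponential shape. All parameter names and mappings here are assumptions.

```python
import math

def sensing_period(sensing_quality, data_quality, collision_prob,
                   mode="log", t_min=1.0, t_max=100.0):
    """Map three factors (all assumed to lie in [0, 1]) to a sensing period.
    Higher channel quality and lower collision probability yield a shorter
    period, letting that secondary user access the medium sooner."""
    score = sensing_quality * data_quality * (1.0 - collision_prob)
    if mode == "log":    # LSPA-like: gentle shortening as the score grows
        period = t_max - (t_max - t_min) * math.log1p(score) / math.log1p(1.0)
    else:                # ESPA-like: aggressive shortening as the score grows
        period = t_min + (t_max - t_min) * math.exp(-5.0 * score)
    return min(max(period, t_min), t_max)

print(sensing_period(0.9, 0.8, 0.1, mode="log"))
print(sensing_period(0.9, 0.8, 0.1, mode="exp"))
```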

  • A Novel Structure of HTTP Adaptive Streaming Based on Unequal Error Protection Rateless Code

    Yun SHEN  Yitong LIU  Jing LIU  Hongwen YANG  Dacheng YANG  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E97-D No:11
      Page(s):
    2903-2911

    In this paper, we design an Unequal Error Protection (UEP) rateless code with a special coding graph and apply it to propose a novel HTTP adaptive streaming scheme based on UEP rateless code (HASUR). The designed UEP rateless code provides high diversity in decoding probability and priority for data at different importance levels, with an overhead smaller than 0.27. By adopting this UEP rateless channel coding together with scalable video source coding, HASUR ensures that the symbols carrying the basic quality layer are decoded first, guaranteeing a fluent playback experience. It also provides multiple layers to deliver the most suitable quality under fluctuating bandwidth and packet loss rate (PLR) without estimating them in advance. We evaluate HASUR against alternative solutions. Simulation results show that HASUR provides higher video quality and adapts better to bandwidth and PLR than two other commercial schemes under end-to-end transmission.
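
The paper's specific coding graph is not given in the abstract; the sketch below only shows the generic idea of unequal error protection in an LT-style rateless encoder, where base-layer symbols are sampled with a higher weight so they are recoverable from fewer received symbols. The degree distribution and weights are illustrative.

```python
import random

def uep_lt_encode(base_layer, enh_layer, n_coded, base_weight=3.0):
    """Generate coded symbols (XORs of source symbols) with base-layer symbols
    picked more often than enhancement-layer symbols."""
    symbols = base_layer + enh_layer
    weights = [base_weight] * len(base_layer) + [1.0] * len(enh_layer)
    coded = []
    for _ in range(n_coded):
        degree = random.choice([1, 2, 3, 4])   # toy degree distribution
        picks = set(random.choices(range(len(symbols)), weights=weights, k=degree))
        value = 0
        for i in picks:
            value ^= symbols[i]                # XOR of the chosen source symbols
        coded.append((sorted(picks), value))
    return coded

# Example: 8 base-layer and 8 enhancement-layer byte-valued symbols
coded = uep_lt_encode(list(range(8)), list(range(8, 16)), n_coded=24)
```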

  • Interactive Evolutionary System for Synthesizing Facial Caricature with Non-planar Expression

    Tatsuya UGAI  Keita SATO  Kaoru ARAKAWA  Hiroshi HARASHIMA  

     
    PAPER

      Vol:
    E97-A No:11
      Page(s):
    2154-2160

    A method to synthesize facial caricatures with non-planar expression is proposed. Several methods have already been proposed to synthesize facial caricatures automatically, but they mainly synthesize planar facial caricatures, which look somewhat monotonous. In order to generate an expressive facial caricature, the image should be rendered in a non-planar style, expressing the depth of the face through shading and highlighting. In this paper, a new method to express such a non-planar effect in facial caricatures is proposed by blending the grayscale information of the real face image into the planar caricature. Some methods have also been proposed to generate non-planar facial caricatures, but the proposed method can adjust the degree of non-planar expression through interactive evolutionary computing, so that the resulting expression satisfies the user's subjective criteria. Since the facial color appears to change when the grayscale information of the natural face image is blended in, the color information of the skin area is also set by interactive evolutionary computing. Experimental results show the high performance of the proposed method.
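
A minimal sketch of the blending step, assuming the caricature and the photograph are already aligned and the same size. The fixed `alpha` here stands in for the degree of non-planar expression that the paper tunes by interactive evolutionary computing.

```python
import numpy as np
from PIL import Image

def blend_shading(caricature_img, face_img, alpha=0.4):
    """Blend the grayscale (shading) information of a real face photo into a
    flat caricature to create a non-planar impression."""
    caric = np.asarray(caricature_img, dtype=np.float32)
    gray = np.asarray(face_img.convert("L"), dtype=np.float32)[..., None]
    shading = gray / max(gray.mean(), 1e-6)     # shading normalized around 1.0
    blended = caric * ((1.0 - alpha) + alpha * shading)
    return Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8))

# Usage (file names are placeholders):
# result = blend_shading(Image.open("caricature.png").convert("RGB"),
#                        Image.open("face.jpg"), alpha=0.4)
```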

  • Self-Adjustable Rate Control for Congestion Avoidance in Wireless Mesh Networks

    Youngmi BAEK  Kijun HAN  

     
    PAPER-Network

      Vol:
    E97-B No:11
      Page(s):
    2368-2377

    In this paper, we investigate the problems of the established congestion solution and then introduce a self-adjustable rate control that supports quality-of-service assurance over multi-hop wireless mesh networks. The scheme eliminates two phases of the established congestion solution and performs congestion control at the MAC layer. Each node independently collects its vector parameters and network status parameters and then performs rate control by itself, so that network congestion is eliminated. It decides its transmission rate based on a prediction model that uses a rate function incorporating a congestion risk level and a passing function. We prove that our scheme works efficiently without any negative interaction between the network layer and the data link layer. Simulation results show that the proposed scheme is more effective and performs better than the existing method.

  • Dynamic Game Approach of H2/H∞ Control for Stochastic Discrete-Time Systems

    Hiroaki MUKAIDANI  Ryousei TANABATA  Chihiro MATSUMOTO  

     
    PAPER-Systems and Control

      Vol:
    E97-A No:11
      Page(s):
    2200-2211

    In this paper, the H2/H∞ control problem for a class of stochastic discrete-time linear systems with state-, control-, and external-disturbance-dependent noise or (x, u, v)-dependent noise involving multiple decision makers is investigated. It is shown that the conditions for the existence of a strategy are given by the solvability of cross-coupled stochastic algebraic Riccati equations (CSAREs). Some algorithms for solving these equations are discussed. Moreover, weakly-coupled large-scale stochastic systems are considered as an important application, and some illustrative examples are provided to demonstrate the effectiveness of the proposed decision strategies.

  • Non-tunneling Overlay Approach for Virtual Tenant Networks in Cloud Datacenter

    Ryota KAWASHIMA  Hiroshi MATSUO  

     
    PAPER

      Vol:
    E97-B No:11
      Page(s):
    2259-2268

    Network virtualization is an essential technology for cloud datacenters that provide multi-tenancy services. SDN-enabled datacenters have introduced an edge-overlay (distributed tunneling) model to construct virtual tenant networks. The edge-overlay model generally uses L2-in-L3 tunneling protocols such as VXLAN. However, the tunneling-based edge-overlay model has performance and compatibility problems. We propose an alternative overlay approach that does not use IP tunneling. Our model leverages two methods: OpenFlow-based virtual/physical MAC address translation and host-based VLAN ID usage. The former replaces VMs' MAC addresses with those of the physical servers, which avoids frame encapsulation as well as unnecessary MAC address learning by physical switches. The latter removes the 4094-network limit of VLAN-based virtual tenant networks by allocating the entire VLAN ID space to each physical server and mapping VLAN IDs to VMs with OpenFlow controller support. Our model requires no special hardware such as OpenFlow hardware switches; only software-based virtual switches and the controller are used. In this paper, we evaluate the performance of the proposed model against the tunneling model in a 40GbE environment. The results show that the performance of VM-to-VM communication with the proposed model is close to that of physical communication and exceeds 10Gbps throughput with large TCP segments, and that the proposed model scales better with the number of VMs.
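
A toy sketch of the bookkeeping behind the two methods above: translating VM MAC addresses to physical server MAC addresses and giving each host its own VLAN ID space. The class, field names, and frame format are hypothetical; a real deployment would install equivalent rules in the virtual switches through an OpenFlow controller.

```python
class TenantNetworkMap:
    """Per-tenant mapping: VM MAC -> (host MAC, VLAN ID on that host)."""

    MAX_VLAN = 4094

    def __init__(self):
        self.vm_to_host = {}     # vm_mac -> (host_mac, vlan_id)
        self.next_vlan = {}      # host_mac -> next free VLAN ID on that host

    def register_vm(self, vm_mac, host_mac):
        vlan = self.next_vlan.get(host_mac, 1)
        if vlan > self.MAX_VLAN:
            raise RuntimeError("VLAN ID space exhausted on this host")
        self.next_vlan[host_mac] = vlan + 1
        self.vm_to_host[vm_mac] = (host_mac, vlan)
        return vlan

    def egress_rewrite(self, frame):
        """At the sending virtual switch: rewrite the destination VM MAC to the
        destination host's MAC and tag the frame with the VM's VLAN ID, so no
        encapsulation is needed and physical switches never learn VM MACs."""
        host_mac, vlan = self.vm_to_host[frame["dst_mac"]]
        return {**frame, "dst_mac": host_mac, "vlan": vlan}
```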

  • Hybrid Integration of Visual Attention Model into Image Quality Metric

    Chanho JUNG  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2014/08/22
      Vol:
    E97-D No:11
      Page(s):
    2971-2973

    Integrating a visual attention (VA) model into an objective image quality metric is a rapidly evolving area in modern image quality assessment (IQA) research because of the significant opportunities the VA information presents. So far, the literature has suggested using either a task-free saliency map or a quality-task one for integration into the quality metric. This paper presents a hybrid integration approach that takes advantage of both saliency maps. We compare our hybrid integration scheme with existing integration schemes using simple quality metrics. Results show that the proposed method achieves better prediction accuracy than the previous techniques.
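
The abstract does not specify how the two saliency maps are combined, so the sketch below uses a simple convex combination as a stand-in, followed by standard saliency-weighted pooling of a local quality map.

```python
import numpy as np

def hybrid_weighted_quality(quality_map, task_free_sal, quality_task_sal, beta=0.5):
    """Pool a per-pixel quality map (e.g., a local SSIM map) with a hybrid
    saliency weight mixing the task-free and quality-task saliency maps."""
    sal = beta * task_free_sal + (1.0 - beta) * quality_task_sal
    sal = sal / (sal.sum() + 1e-12)            # normalize the weights
    return float((quality_map * sal).sum())    # saliency-weighted pooled score
```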

  • An Interleaved Otsu Segmentation for MR Images with Intensity Inhomogeneity

    Haoqi XIONG  Jingjing GAO  Chongjin ZHU  Yanling LI  Shu ZHANG  Mei XIE  

     
    LETTER-Biological Engineering

      Vol:
    E97-D No:11
      Page(s):
    2974-2978

    MR image segmentation is a challenging problem because of intensity inhomogeneity. Many existing methods do not achieve the expected segmentation; moreover, their implementations are usually complicated. We therefore interleave an extended Otsu segmentation with bias field estimation in an energy minimization framework. With the proposed method, the optimal segmentation and the bias field estimate are obtained simultaneously through reciprocal iteration. Results on synthetic and real images not only satisfy the required classification but also show that our method is superior to the baseline methods according to a performance analysis using the JS metric.
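
A schematic of the interleaving idea only, assuming a two-class segmentation: alternate Otsu thresholding on a bias-corrected image with a crude bias field estimate obtained by heavy Gaussian smoothing. The paper couples the two steps inside a single energy minimization, which this sketch does not reproduce.

```python
import numpy as np
from skimage.filters import threshold_otsu, gaussian

def interleaved_otsu(image, n_iters=10, sigma=20):
    """Alternate two-class Otsu segmentation with a smooth bias field estimate."""
    img = image.astype(np.float64)
    bias = np.ones_like(img)
    for _ in range(n_iters):
        corrected = img / np.maximum(bias, 1e-6)   # remove the current bias estimate
        t = threshold_otsu(corrected)
        seg = corrected > t
        fg = corrected[seg].mean() if seg.any() else t
        bg = corrected[~seg].mean() if (~seg).any() else t
        recon = np.where(seg, fg, bg)              # piecewise-constant reconstruction
        bias = gaussian(img / np.maximum(recon, 1e-6), sigma=sigma)
    return seg, bias
```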

  • An Efficient Channel Estimation and CSI Feedback Method for Device-to-Device Communication in 3GPP LTE System

    Kyunghoon LEE  Wipil KANG  Hyung-Jin CHOI  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Vol:
    E97-B No:11
      Page(s):
    2524-2533

    In 3GPP (3rd Generation Partnership Project) LTE (Long Term Evolution) systems, D2D (Device-to-Device) communication has been selected as a next-generation study item. In uplink D2D communication underlaying LTE systems, uplink interference signals generated by CUEs (Cellular User Equipment) have a profound impact on the throughput of DUEs (D2D User Equipment). For that reason, various resource allocation algorithms that consider interference channels have been studied; however, these algorithms assume accurate channel estimation and feedback for the D2D-related links. To estimate the uplink channels of D2D communication, the SRS (Sounding Reference Signal) defined in the LTE uplink channel structure can be considered. However, when the number of interferers increases, the SRS-based method incurs significant overheads, such as side information, operational complexity, and the channel estimation and feedback burden on the UE. Therefore, in this paper, we propose an efficient channel estimation and CSI (Channel State Information) feedback method for D2D communication and its application in LTE systems. We verify that the proposed method achieves performance similar to the SRS-based method with lower operational complexity and overhead.

  • The Background Noise Estimation in the ELF Electromagnetic Wave Data Using Outer Product Expansion with Non-linear Filter

    Akitoshi ITAI  Hiroshi YASUKAWA  Ichi TAKUMI  Masayasu HATA  

     
    PAPER

      Vol:
    E97-A No:11
      Page(s):
    2114-2120

    This paper proposes a background noise estimation method using an outer product expansion with non-linear filters for ELF (extremely low frequency) electromagnetic (EM) waves. We previously proposed a source separation technique that uses a tensor product expansion (TPE): the background noise, which is observed in almost all input signals, is estimated using a TPE in which the absolute error (AE) is used as the error function, a method known as TPE-AE. TPE-AE has two problems: first, its results are strongly affected by Gaussian random noise, and second, the estimated signal varies widely because of the random search. To solve these problems, an outer product expansion based on a modified trimmed mean (MTM) is proposed in this paper. The results show that this technique separates the background noise from the signal more accurately than conventional methods.
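
A schematic of the underlying idea: approximate the common background of multi-channel data by a rank-1 outer product whose factors are updated with a trimmed mean instead of a least-squares mean, making the fit robust to outliers. The update rule is a stand-in for the paper's MTM formulation, not its exact algorithm.

```python
import numpy as np
from scipy.stats import trim_mean

def outer_product_background(X, n_iters=20, trim=0.2):
    """Estimate the background of X (channels x time) as a rank-1 term a * b^T,
    alternating robust updates of the channel scales a and the waveform b."""
    a = np.ones(X.shape[0])
    b = X.mean(axis=0)
    for _ in range(n_iters):
        a = np.array([trim_mean(X[i] * b, trim) / (trim_mean(b * b, trim) + 1e-12)
                      for i in range(X.shape[0])])
        b = np.array([trim_mean(X[:, t] * a, trim) / (trim_mean(a * a, trim) + 1e-12)
                      for t in range(X.shape[1])])
    return np.outer(a, b)    # estimated common background component
```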

  • Adaptive MIMO Detection for Circular Signals by Jointly Exploiting the Properties of Both Signal and Channel

    Yuehua DING  Yide WANG  Nanxi LI  Suili FENG  Wei FENG  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E97-B No:11
      Page(s):
    2413-2423

    In this paper, an adaptive expansion strategy (AES) is proposed for multiple-input/multiple-output (MIMO) detection in the presence of circular signals. By exploiting channel properties, the AES classifies MIMO channels into three types: excellent, average, and deep fading. To avoid unnecessary branch searching, the AES adopts single expansion (SE), partial expansion (PE), and full expansion (FE) for excellent, average, and deep fading channels, respectively. In the PE, the non-circularity of the signal is exploited, and widely linear processing is extended from non-circular signals to circular signals by I (or Q) component cancellation. An analytical performance analysis is given to quantify the improvement. Simulation results show that the proposed algorithm achieves quasi-optimal performance with much less complexity (hundreds of flops per symbol are saved) compared with the fixed-complexity sphere decoder (FSD) and the sphere decoder (SD).

  • MVP-Cache: A Multi-Banked Cache Memory for Energy-Efficient Vector Processing of Multimedia Applications

    Ye GAO  Masayuki SATO  Ryusuke EGAWA  Hiroyuki TAKIZAWA  Hiroaki KOBAYASHI  

     
    PAPER-Computer System

      Publicized:
    2014/08/22
      Vol:
    E97-D No:11
      Page(s):
    2835-2843

    Vector processors have significant advantages for next-generation multimedia applications (MMAs). One of the advantages is that vector processors can achieve high data transfer performance by using a high-bandwidth memory sub-system, resulting in high sustained computing performance. However, the high-bandwidth memory sub-system usually incurs enormous costs in terms of chip area, power, and energy consumption. These costs are too high for commodity computer systems, which are the main execution platform of MMAs. This paper proposes a new multi-banked cache memory for commodity computer systems, called MVP-cache, in order to expand the potential of vector architectures on MMAs. Unlike conventional multi-banked cache memories, which employ one tag array and one data array per sub-cache, MVP-cache associates one tag array with multiple independent data arrays of small-sized cache lines. In this way, MVP-cache incurs less static power consumption in its tag arrays. MVP-cache also achieves high efficiency on short vector data transfers because the data transfers of each data array are controlled independently.

  • Peculiar Characteristics of Amplification and Noise for Intensity Modulated Light in Semiconductor Optical Amplifier

    Kazuki HIGUCHI  Nobuhito TAKEUCHI  Minoru YAMADA  

     
    PAPER-Lasers, Quantum Electronics

      Vol:
    E97-C No:11
      Page(s):
    1093-1103

    The amplification characteristics of the signal and noise of intensity-modulated light in a semiconductor optical amplifier (SOA) without facet mirrors are theoretically analyzed and experimentally confirmed. We found that the amplification factor of the temporally varying intensity component is smaller than that of the continuous wave (CW) component, but increases up to that of the CW component in the high-frequency region of the SOA. These properties are peculiar to the SOA and are not seen in conventional electronic devices or semiconductor lasers. Therefore, the relative intensity noise (RIN), defined as the ratio of the mean-square intensity fluctuation to the square of the CW power, can be improved by amplification in the SOA. On the other hand, the signal-to-noise (S/N) ratio, defined as the ratio of the square of the modulated signal power to the mean-square intensity fluctuation, can either degrade or improve with amplification, depending on the combination of the modulation and noise frequencies. Experimental confirmations of these peculiar characteristics are also demonstrated.
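
Written out, the two figures of merit defined in the abstract are, in standard notation (with $\langle \delta P^{2} \rangle$ the mean-square intensity fluctuation, $P_{\mathrm{CW}}$ the CW power, and $P_{\mathrm{sig}}$ the modulated signal power):

$$\mathrm{RIN} = \frac{\langle \delta P^{2} \rangle}{P_{\mathrm{CW}}^{2}}, \qquad \mathrm{S/N} = \frac{P_{\mathrm{sig}}^{2}}{\langle \delta P^{2} \rangle}.$$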

  • Distributing Garbage Collection Costs over Multiple Requests to Improve the Worst-Case Performance of Hybrid Mapping Schemes

    Ilhoon SHIN  

     
    PAPER-Software System

      Vol:
    E97-D No:11
      Page(s):
    2844-2851

    NAND-based block devices such as memory cards and solid-state drives embed a flash translation layer (FTL) to emulate the standard block device interface and its features. The overall performance of these devices is determined mainly by the efficiency of the FTL scheme, so intensive research has been performed to improve the average performance of FTL schemes. However, worst-case performance has rarely been considered. The present study aims to improve the worst-case performance without affecting the average performance. The central concept is to distribute the garbage collection cost, which is the main source of performance fluctuations, over multiple requests. The proposed scheme comprises three modules: i) anticipated partial log block merging to distribute the garbage collection time; ii) reclaiming clean pages by moving valid pages, instead of performing repeated block merges, to bound the worst-case garbage collection time; and iii) victim selection based on the valid page count in a victim log block and the required clean page count, to avoid subsequent garbage collections. A trace-driven simulation showed that the worst-case performance was improved by up to 1,300% using the proposed garbage collection scheme, while the average performance remained similar to that of the original scheme. This improvement was achieved without additional memory overhead.
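
A toy sketch of modules ii) and iii) above: select a victim log block whose invalid pages cover the currently needed clean pages, and move its valid pages a few at a time so the copy cost is spread over multiple host requests. The block object, its fields, and the per-request budget are hypothetical.

```python
def pick_victim(log_blocks, clean_pages_needed):
    """Prefer a block with few valid pages whose reclaimable (invalid) pages
    still cover the clean pages needed, so no follow-up GC is triggered."""
    candidates = [b for b in log_blocks
                  if b.pages_per_block - b.valid_pages >= clean_pages_needed]
    pool = candidates or log_blocks
    return min(pool, key=lambda b: b.valid_pages)

def partial_merge(victim, budget_pages):
    """Copy at most `budget_pages` valid pages during this host request,
    distributing the merge cost instead of paying it all at once."""
    moved = 0
    while victim.valid_pages > 0 and moved < budget_pages:
        victim.move_one_valid_page()    # hypothetical FTL primitive
        moved += 1
    return victim.valid_pages == 0      # True once the block is fully reclaimed
```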

  • Synthesis and Photoluminescence Properties of HEu1-xGdx(MoO4)2 Nanophosphor Open Access

    Mizuki WATANABE  Kazuyoshi UEMATSU  Sun Woog KIM  Kenji TODA  Mineo SATO  

     
    INVITED PAPER

      Vol:
    E97-C No:11
      Page(s):
    1063-1067

    New HEu$_{1-x}$Gd$_{x}$(MoO$_4$)$_2$ nanophosphors were synthesized by a simple one-step ion-exchange method. These nanophosphors have a rod-like particle morphology, 0.5--15\,$\mu$m in length with outer diameters in the range of 50--500\,nm. By optimizing the composition, the highest emission intensity was obtained for the samples with $x = 0.50$ for both KEu$_{1-x}$Gd$_{x}$(MoO$_{4}$)$_{2}$ and HEu$_{1-x}$Gd$_{x}$(MoO$_{4}$)$_{2}$.

  • Reusing EPR Pairs for Change of Receiver in Quantum Repeater

    Kenichiro FURUTA  

     
    LETTER-General Fundamentals and Boundaries

      Vol:
    E97-A No:11
      Page(s):
    2283-2286

    We focus on a characteristic that is specific to the quantum repeater protocol: the quantum states generated by the protocol do not depend on the receiver. Therefore, EPR pairs generated before a change of receiver can be reused by the quantum repeater protocol after the change. The purpose of reuse is to advance the completion time of EPR-pair sharing, which is not the same as increasing the fidelity. In this paper, we construct concrete methods of reusing EPR pairs and analyze their effectiveness. In addition, we derive the conditions under which reusing EPR pairs is effective.

  • Evaluation of Agile Software Development Method for Carrier Cloud Service Platform Development

    Yoji YAMATO  Naoko SHIGEMATSU  Norihiro MIURA  

     
    LETTER-Software Engineering

      Publicized:
    2014/08/19
      Vol:
    E97-D No:11
      Page(s):
    2959-2962

    In this paper, we evaluate an agile software development method for carrier Cloud service platform development. It is generally said that agile software development is suitable for small-scale projects, but we adopted it for a development project with more than 30 members. When adopting agile development, we enabled automatic regression tests for each iteration so that we could launch our Cloud service sufficiently fast. We compared software reliability growth curves, regression test effort, and bug causes with those of waterfall development.

  • Partial Volume Correction on ASL-MRI and Its Application on Alzheimer's Disease Diagnosis

    Wenji YANG  Wei HUANG  Shanxue CHEN  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E97-D No:11
      Page(s):
    2912-2918

    Arterial spin labeling (ASL) is a non-invasive magnetic resonance imaging (MRI) method that can provide direct and quantitative measurements of the cerebral blood flow (CBF) of scanned patients. ASL can be utilized as an imaging modality to detect Alzheimer's disease (AD), as the brain atrophy of AD patients is revealed by low CBF values in certain brain regions. However, the partial volume effect (PVE), which is mainly caused by signal cross-contamination due to voxel heterogeneity and the limited spatial resolution of ASL images, often prevents CBF from being precisely measured. In this study, a novel PVE correction method based on pixel-wise processing of ASL voxels is proposed; it copes well with the blurring and loss of brain detail found in conventional PVE correction methods. Dozens of comparison experiments and statistical analyses also suggest that the proposed method is superior to other PVE correction methods for AD diagnosis based on real patient data.

  • Reducing Speech Noise for Patients with Dysarthria in Noisy Environments

    Woo Kyeong SEONG  Ji Hun PARK  Hong Kook KIM  

     
    PAPER-Speech and Hearing

      Vol:
    E97-D No:11
      Page(s):
    2881-2887

    Dysarthric speech results from damage to the central nervous system involving the articulators, and is mainly characterized by poor articulation due to irregular sub-glottal pressure, loudness bursts, phoneme elongation, and unexpected pauses during utterances. Since dysarthric speakers have physical disabilities due to the impairment of their nervous system, they cannot easily control electronic devices. For this reason, automatic speech recognition (ASR) can be a convenient interface for dysarthric speakers to control electronic devices. However, the performance of dysarthric ASR degrades severely in background noise. Thus, in this paper, we propose a noise reduction method that improves the performance of dysarthric ASR. The proposed method selectively applies either a Wiener filtering algorithm or a Kalman filtering algorithm according to the result of voiced/unvoiced classification. The performance of the proposed method is then compared with that of a conventional Wiener filtering method in terms of ASR accuracy.
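
A sketch of the frame-selective idea only. The abstract does not say which filter is applied to which class, so the assignment below is arbitrary, and the Kalman branch is reduced to a first-order recursive smoother purely for illustration; the frame length and noise PSD estimation are also assumptions.

```python
import numpy as np

def denoise_frames(frames, noise_psd, voiced_flags):
    """Apply a different noise reduction per frame based on a precomputed
    voiced/unvoiced decision."""
    out, prev = [], None
    for frame, voiced in zip(frames, voiced_flags):
        if voiced:
            # Spectral Wiener-style gain for (arbitrarily) the voiced frames.
            spec = np.fft.rfft(frame)
            power = np.abs(spec) ** 2
            gain = np.maximum(power - noise_psd, 0.0) / (power + 1e-12)
            clean = np.fft.irfft(gain * spec, n=len(frame))
        else:
            # Stand-in for Kalman filtering: simple recursive smoothing.
            clean = frame if prev is None else 0.7 * frame + 0.3 * prev
        prev = clean
        out.append(clean)
    return np.concatenate(out)
```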
