
Keyword Search Result

[Keyword] Q(6809hit)

721-740hit(6809hit)

  • Improved Radiometric Calibration by Brightness Transfer Function Based Noise & Outlier Removal and Weighted Least Square Minimization

    Chanchai TECHAWATCHARAPAIKUL  Pradit MITTRAPIYANURUK  Pakorn KAEWTRAKULPONG  Supakorn SIDDHICHAI  Werapon CHIRACHARIT  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2018/05/16
      Vol:
    E101-D No:8
      Page(s):
    2101-2114

    An improved radiometric calibration algorithm that extends the least-square minimization based algorithm of Mitsunaga and Nayar with two major ideas is presented. First, a noise and outlier removal procedure based on an analysis of the brightness transfer function is included to improve the algorithm's ability to handle noise and outliers in the least-square estimation. Second, an alternative minimization formulation based on weighted least squares is proposed to address the weakness of least-square minimization when dealing with observations with biased distributions. The performance of the proposed algorithm is demonstrated against two baseline algorithms, i.e., the classical least-square based algorithm proposed by Mitsunaga and Nayar and the state-of-the-art rank minimization based algorithm proposed by Lee et al. The results show that the proposed algorithm outperforms both baselines on both a synthetic dataset and a dataset of real-world images.
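
    The weighted least-squares idea in the second contribution can be illustrated in isolation. The sketch below shows only generic weighted least squares with numpy (down-weighting a suspected outlier before solving); the matrix, weights, and data are illustrative placeholders, not the paper's calibration model.

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_x || W^(1/2) (A x - b) ||^2 by scaling each row with its sqrt-weight."""
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

# Toy fit of a line y = x0 + x1 * t; the last observation is an outlier and is down-weighted.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([0.1, 1.1, 1.9, 10.0])
w = np.array([1.0, 1.0, 1.0, 0.01])
print(weighted_least_squares(A, b, w))   # the down-weighted outlier barely shifts the fit
```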

  • A New Interpretation of Physical Optics Approximation from Surface Equivalence Theorem

    Hieu Ngoc QUANG  Hiroshi SHIRAI  

     
    PAPER-Electromagnetic Theory

      Vol:
    E101-C No:8
      Page(s):
    664-670

    In this study, electromagnetic scattering from conducting bodies is investigated via the surface equivalence theorem. When equivalent electric and magnetic currents are formulated from the geometrical optics (GO) reflected field on the illuminated surface and the GO incident field on the shadowed surface, the asymptotically derived radiation fields are found to be the same as those obtained from the physical optics (PO) approximation.
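
    For background, the textbook physical optics (PO) approximation assigns a surface current only on the illuminated part of the conductor; the study shows that GO-based equivalent currents radiate the same far fields asymptotically. The standard PO statement (general background, not a formula quoted from the paper) is:

```latex
\vec{J}_s \simeq
\begin{cases}
  2\,\hat{n} \times \vec{H}^{\mathrm{inc}}, & \text{on the illuminated surface},\\
  0, & \text{on the shadowed surface}.
\end{cases}
```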

  • Dielectric Measurement in Liquids Using an Estimation Equation without Short Termination via the Cut-Off Circular Waveguide Reflection Method

    Kouji SHIBATA  

     
    PAPER

      Vol:
    E101-C No:8
      Page(s):
    627-636

    In this study, a theory for estimating the dielectric properties of unknown materials from three reference materials, without using a short condition, was developed. Specifically, the relationships linking the S parameter, the electrostatic capacity, the measurement instrument and the jig were determined for four equivalent circuits with the three reference materials and the unknown material inserted into the jig. An equation for estimating the complex permittivity from three reference materials without short termination was thus derived. The formula's accuracy was then numerically verified for cases in which the dielectric properties of the reference materials and the actual material differed significantly, thereby confirming the effectiveness of the proposed method. It was also found that the dielectric constant could be determined correctly even when the observation plane was moved to the SOL calibration plane on the generator side. The dielectric properties of various liquids in the 0.50, 1.0 and 2.5 GHz bands as measured using the proposed method were then compared with the corresponding conventional-method values. Finally, the validity of the proposed method was further confirmed by measurements of the frequency characteristics of the dielectric properties from 0.50 to 3.0 GHz.
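
    As loosely related background (an assumption for illustration, not necessarily the paper's exact equivalent-circuit formulation), one-port reflection measurements through a fixture are commonly modeled as a bilinear function of the material's response, so three reference materials with known properties suffice to determine the three unknown constants, after which the relation is inverted for the unknown material:

```latex
S_{11}^{\mathrm{meas}} \;=\; \frac{a\,\Gamma + b}{c\,\Gamma + 1},
\qquad
\text{known } \Gamma_1, \Gamma_2, \Gamma_3 \;\Rightarrow\; a,\, b,\, c .
```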

  • Quantized Decoder Adaptively Predicting both Optimum Clock Frequency and Optimum Supply Voltage for a Dynamic Voltage and Frequency Scaling Controlled Multimedia Processor

    Nobuaki KOBAYASHI  Tadayoshi ENOMOTO  

     
    PAPER-Electronic Circuits

      Vol:
    E101-C No:8
      Page(s):
    671-679

    To fully exploit the advantages of dynamic voltage and frequency scaling (DVFS) techniques, a quantized decoder (QNT-D) was developed. The QNT-D generates a quantized signal processing quantity (Q) from a predicted signal processing quantity (M). Q is used to produce the optimum clock frequency (opt.fc) and the optimum supply voltage (opt.VD), which are proportional to Q. To develop a DVFS-controlled motion estimation (ME) processor, we used both the QNT-D and a fast ME algorithm called A2BC (Adaptively Assigned Breaking-off Condition) to predict M for each macro-block (MB). A DVFS-controlled ME processor was fabricated using 90-nm CMOS technology. The total power dissipation (PT) of the processor was significantly reduced, varying from 38.65 to 99.5 µW, only 3.27 to 8.41% of the PT of a conventional ME processor, depending on the test video picture.
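
    The power saving follows from the standard dynamic power of CMOS logic: running each macro-block at the lowest clock frequency that still meets its predicted workload Q, and lowering the supply voltage along with it, reduces power roughly cubically. As general background (not an equation from the paper):

```latex
P_{\mathrm{dyn}} \;=\; \alpha\, C_{L}\, V_{DD}^{2}\, f_{c},
\qquad
f_{c}^{\mathrm{opt}} \propto Q .
```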

  • Ultra-Low Field MRI Food Inspection System Using HTS-SQUID with Flux Transformer

    Saburo TANAKA  Satoshi KAWAGOE  Kazuma DEMACHI  Junichi HATTA  

     
    PAPER-Superconducting Electronics

      Vol:
    E101-C No:8
      Page(s):
    680-684

    We are developing an Ultra-Low Field (ULF) Magnetic Resonance Imaging (MRI) system with a tuned high-Tc (HTS) rf-SQUID for food inspection. We previously reported that a small hole in a piece of cucumber could be detected. The acquired image was based on filtered back-projection reconstruction using a polarizing permanent magnet. However, the resolution of the image was insufficient for food inspection, and processing took a long time. The purpose of this study is to improve the image quality and shorten the processing time. We constructed a specially designed cryostat, which consists of a liquid nitrogen tank for cooling an electromagnetic polarizing coil (135 mT) at 77 K and a room-temperature bore. A Cu pickup coil was installed at the room-temperature bore and detected an NMR signal from a sample. The signal was then transferred to an HTS SQUID via an input coil. Following a proper MRI sequence, spatial frequency data at 64×32 points in k-space were obtained. A 2D-FFT (Fast Fourier Transform) method was then applied to reconstruct the 2D MR images. As a result, we successfully obtained a clear water image of the characters “TUT”, in which the narrowest width is 0.5 mm. The imaging time was also shortened by a factor of 10 compared to the previous system.
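
    The final reconstruction step (a 2D-FFT of the 64×32 k-space samples) can be sketched generically as below; the random array stands in for the acquired NMR data and is only a placeholder.

```python
import numpy as np

# Placeholder k-space data: 64 x 32 complex samples (the real system fills this
# with NMR signals acquired under the MRI phase/frequency-encoding sequence).
kspace = np.random.randn(64, 32) + 1j * np.random.randn(64, 32)

# Standard 2D inverse FFT reconstruction: shift DC to the centre, transform,
# and take the magnitude image.
image = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))
print(image.shape)   # (64, 32) magnitude image
```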

  • Multiport Signal-Flow Analysis to Improve Signal Quality of Time-Interleaved Digital-to-Analog Converters

    Youngcheol PARK  

     
    PAPER-Electronic Instrumentation and Control

      Vol:
    E101-C No:8
      Page(s):
    685-689

    This letter describes a method that characterizes and improves the performance of a time-interleaved (TI) digital-to-analog converter (DAC) system by using multiport signal-flow graphs at microwave frequencies. A commercial signal generator with two TI DACs was characterized through s-parameter measurements and was compared to the conventional method. Moreover, prefilters were applied to correct the response, resulting in an error-vector magnitude improvement of greater than 8 dB for a 64-quadrature-amplitude-modulated signal of 4.8 Gbps. As a result, the bandwidth limitation and the complex post processing of the conventional method could be minimized.
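
    The prefiltering step can be illustrated with a generic frequency-domain inverse filter: estimate the response H(f) of the DAC/channel and predistort the digital waveform by 1/H(f) inside the usable band. The function name, first-order roll-off model, and regularization constant below are assumptions for illustration; the paper obtains the response from multiport signal-flow (s-parameter) analysis of the two interleaved DACs.

```python
import numpy as np

def predistort(x, h_freq, eps=1e-3):
    """Apply a regularized inverse of the measured frequency response to x."""
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(X / (h_freq + eps)))

# Toy response: a mild low-pass roll-off across the FFT bins (assumed, not measured).
n = 256
f = np.fft.fftfreq(n)
h_freq = 1.0 / (1.0 + 1j * 4.0 * f)
x = np.random.randn(n)            # placeholder baseband samples
x_pre = predistort(x, h_freq)     # prefiltered waveform to feed the DAC
```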

  • Adaptive Beamforming Based on Compressed Sensing with Gain/Phase Uncertainties

    Bin HU  Xiaochuan WU  Xin ZHANG  Qiang YANG  Di YAO  Weibo DENG  

     
    LETTER-Digital Signal Processing

      Vol:
    E101-A No:8
      Page(s):
    1257-1262

    A new adaptive digital beamforming method based on compressed sensing (CS) for sparse receiving arrays with gain/phase uncertainties is presented. Because of the sparsity of the arriving signals, CS theory can be adopted to sample and recover the receiving signals with less data. However, owing to the gain/phase uncertainties, the sparse representation of the signal is not optimal. To eliminate the influence of the gain/phase uncertainties on the sparse representation, most existing studies focus on calibrating the gain/phase uncertainties first. To overcome the effect of the gain/phase uncertainties, a new dictionary optimization method based on the total least squares (TLS) algorithm is proposed in this paper. We transform the array signal receiving model with gain/phase uncertainties into an errors-in-variables (EIV) model, treating the gain/phase uncertainty effect as an additive error matrix. The proposed method reconstructs the data by estimating the sparse coefficients with a CS signal reconstruction algorithm and uses the TLS method to update the error matrix caused by the gain/phase uncertainties. Simulation results show that the sparse regularized total least squares algorithm can recover the receiving signals better in the presence of gain/phase uncertainties. Adaptive digital beamforming algorithms are then adopted to form the antenna beam using the recovered data.
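
    The total least squares (TLS) building block referred to above can be shown in its classical SVD form; this is plain TLS for an overdetermined system with errors in both the matrix and the observations, not the paper's full sparse-regularized reconstruction.

```python
import numpy as np

def tls_solve(A, b):
    """Classical total least squares for A x ~ b, allowing errors in both A and b."""
    C = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                        # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]

# Toy example with perturbations in both the matrix and the observations.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])
A = rng.standard_normal((50, 2))
b = A @ x_true
A_noisy = A + 0.05 * rng.standard_normal(A.shape)
b_noisy = b + 0.05 * rng.standard_normal(b.shape)
print(tls_solve(A_noisy, b_noisy))    # close to [1, -2]
```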

  • Path Loss Model Considering Blockage Effects of Traffic Signs Up to 40GHz in Urban Microcell Environments

    Motoharu SASAKI  Minoru INOMATA  Wataru YAMADA  Naoki KITA  Takeshi ONIZAWA  Masashi NAKATSUGAWA  Koshiro KITAO  Tetsuro IMAI  

     
    PAPER-Antennas and Propagation

      Publicized:
    2018/02/21
      Vol:
    E101-B No:8
      Page(s):
    1891-1902

    This paper presents the characteristics of path loss caused by traffic sign blockage. Multiple frequency bands, including high frequency bands up to 40 GHz, are analyzed on the basis of measurement results in urban microcell environments. It is shown that the measured path loss increases relative to free space path loss even on a straight line-of-sight road, and that the excess attenuation is caused by the blockage effects of traffic signs. It is also shown that the measurement area affected by the blockage becomes smaller as the frequency increases. The blocking object occupies the same area at all frequencies, but it takes up a larger portion of the Fresnel zone as the frequency increases. Therefore, when blockage occurs, the excess loss in high frequency bands becomes larger than in low frequency bands. In addition, the validity of two blockage path loss models is verified on the basis of the measurement results. The first is the 3GPP blockage model and the second is the proposed blockage model, which is an expanded version of the basic diffraction model in ITU-R P.526. It is shown that these blockage models can predict the path loss increase caused by traffic sign blockage and that their root mean square error is improved compared to that of the 3GPP two-slope model and a free space path loss model. The 3GPP blockage model is found to be more accurate at 26.4 and 37.1 GHz, while the proposed model is more accurate at 0.8, 2.2, and 4.7 GHz. These results clarify the blockage path loss due to traffic signs over a wide frequency range and verify that the 3GPP blockage model and the proposed blockage model can accurately predict it.
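
    The frequency dependence described above follows from the first Fresnel zone radius: a traffic sign of fixed size occupies a larger fraction of the zone as the wavelength shrinks. The standard expression (background, not taken from the paper) is:

```latex
r_1 \;=\; \sqrt{\frac{\lambda\, d_1 d_2}{d_1 + d_2}},
\qquad
\lambda = \frac{c}{f}
\;\;\Rightarrow\;\;
r_1 \propto \frac{1}{\sqrt{f}} .
```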

  • An Extended Generalized Minimum Distance Decoding for Binary Linear Codes on a 4-Level Quantization over an AWGN Channel

    Shunsuke UEDA  Ken IKUTA  Takuya KUSAKA  Md. Al-Amin KHANDAKER  Md. Arshad ALI  Yasuyuki NOGAMI  

     
    PAPER-Coding Theory

      Vol:
    E101-A No:8
      Page(s):
    1235-1244

    Generalized Minimum Distance (GMD) decoding is a well-known soft-decision decoding for linear codes. Previous research on GMD decoding focused mainly on unquantized AWGN channels with BPSK signaling for binary linear codes. In this paper, a study on the design of a 4-level uniform quantizer for GMD decoding is given. In addition, an extended version of a GMD decoding algorithm for a 4-level quantizer is proposed, and the effectiveness of the proposed decoding is shown by simulation.
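
    A 4-level uniform quantizer of the kind studied maps each received value to one of four levels symmetric about zero, separating reliable from unreliable hard decisions; the step size is the design parameter being optimized. The sketch below is a generic illustration, and the threshold value used in the example is an arbitrary placeholder, not the paper's optimized design.

```python
import numpy as np

def quantize_4level(y, delta):
    """Uniform 4-level quantizer with thresholds at -delta, 0, +delta.

    Returns integer levels in {0, 1, 2, 3}: 0 and 3 are confident decisions,
    1 and 2 are low-reliability decisions near zero.
    """
    return np.digitize(y, bins=[-delta, 0.0, delta])

# Toy usage: BPSK (+/-1) symbols over AWGN; delta = 0.6 is an arbitrary illustrative choice.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=10)
y = (1 - 2 * bits) + 0.5 * rng.standard_normal(10)
print(bits)
print(quantize_4level(y, delta=0.6))
```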

  • Analysis of the k-Error Linear Complexity and Error Sequence for 2p^n-Periodic Binary Sequence

    Zhihua NIU  Deyu KONG  Yanli REN  Xiaoni DU  

     
    PAPER-Cryptography and Information Security

      Vol:
    E101-A No:8
      Page(s):
    1197-1203

    The k-error linear complexity of a sequence is a fundamental concept for assessing the stability of the linear complexity. After computing the k-error linear complexity of a sequence, the bits whose modification causes the reduction in linear complexity also need to be determined. For binary sequences with period 2p^n, where p is an odd prime and 2 is a primitive root modulo p^2, we present an algorithm that computes the minimum number k such that the k-error linear complexity is not greater than a given constant c. The corresponding error sequence is also obtained.
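
    As background, the (ordinary) linear complexity that the k-error variant stabilizes is the length of the shortest LFSR generating the sequence, computable with the Berlekamp-Massey algorithm over GF(2). The sketch below shows that classical algorithm only; it is not the paper's algorithm for 2p^n-periodic sequences.

```python
def linear_complexity_gf2(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR that generates s."""
    n = len(s)
    c, b = [0] * n, [0] * n          # connection polynomial and its previous copy
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]                      # discrepancy between LFSR prediction and s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Example: a short binary sequence.
print(linear_complexity_gf2([0, 0, 1, 1, 0, 1, 1, 1]))
```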

  • Efficient Transceiver Design for Large-Scale SWIPT System with Time-Switching and Power-Splitting Receivers

    Pham-Viet TUAN  Insoo KOO  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Pubricized:
    2018/01/12
      Vol:
    E101-B No:7
      Page(s):
    1744-1751

    The combination of large-scale antenna arrays and simultaneous wireless information and power transfer (SWIPT), which can provide an enormous increase in throughput and energy efficiency, is a promising technique for next-generation wireless systems (5G). This paper investigates efficient transceiver design that minimizes the transmit power, subject to the users' required data rates and harvested energy, in a large-scale SWIPT system where the base station utilizes a very large number of antennas to transmit both data and energy to multiple users equipped with time-switching (TS) or power-splitting (PS) receive structures. We first apply the well-known semidefinite relaxation (SDR) and Gaussian randomization techniques to solve the minimum transmit power problems. However, for these large-scale SWIPT problems, this scheme, which is based on the conventional SDR method, is not suitable owing to its excessive computation cost, and a consensus alternating direction method of multipliers (ADMM) cannot be applied directly when the TS or PS ratios are involved in the optimization problem. Therefore, in the second solution, we first optimize the TS or PS ratio variables to obtain simplified problems. We then propose fast algorithms for solving these problems, in which an outer loop of sequential parametric convex approximation (SPCA) is combined with an inner loop of ADMM. Numerical simulations show the fast convergence and superiority of the proposed solutions.
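
    The baseline SDR-plus-Gaussian-randomization flow mentioned above can be sketched for a generic transmit power minimization problem. The real-valued channels, target value, and problem form below are illustrative assumptions (cvxpy is assumed available for the semidefinite program); the paper's SWIPT formulation with rate/energy constraints and TS/PS ratios is considerably richer.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_tx, n_users = 8, 3
H = rng.standard_normal((n_users, n_tx))     # placeholder real-valued channel rows
gamma = 1.0                                  # per-user quality target (noise power normalized)

# SDR: lift w w^T to a PSD matrix W and drop the rank-one constraint.
W = cp.Variable((n_tx, n_tx), PSD=True)
constraints = [cp.trace(np.outer(H[k], H[k]) @ W) >= gamma for k in range(n_users)]
problem = cp.Problem(cp.Minimize(cp.trace(W)), constraints)
problem.solve()

# Gaussian randomization: sample candidate beamformers from N(0, W),
# rescale each to meet all constraints, and keep the lowest-power one.
Wv = 0.5 * (W.value + W.value.T)
best_w, best_p = None, np.inf
for _ in range(200):
    w = rng.multivariate_normal(np.zeros(n_tx), Wv)
    gains = (H @ w) ** 2
    if np.any(gains < 1e-12):
        continue
    w = w * np.sqrt(np.max(gamma / gains))
    if w @ w < best_p:
        best_p, best_w = w @ w, w
print(problem.value, best_p)                 # SDR lower bound vs. randomized feasible power
```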

  • Phase Sensitive Amplifier Using Periodically Poled LiNbO3 Waveguides and Their Applications Open Access

    Masaki ASOBE  Takeshi UMEKI  Osamu TADANAGA  

     
    INVITED PAPER

      Vol:
    E101-C No:7
      Page(s):
    586-593

    Recent advances in phase-sensitive amplifiers (PSAs) using periodically poled LiNbO3 are reviewed. Their principles of operation and distinct features are described. Applications in optical communication are studied in terms of the inline operation and amplification of a sophisticated modulation format. Challenges for the future are also discussed.

  • A Method of Verifying Time-Response Requirements

    Yuma MATSUMOTO  Takayuki OMORI  Hiroya ITOGA  Atsushi OHNISHI  

     
    PAPER

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1725-1732

    In order to verify the correctness of functional requirements, we have been developing a method for verifying the correctness of functional requirements specifications using the Requirements Frame model. In this paper, we propose a verification method for non-functional requirements specifications, specifically time-response requirements written in a natural language. We established the verification method by extending the Requirements Frame model and have also developed a prototype system based on the method using Java. The extended Requirements Frame model and the verification method are illustrated with examples.

  • Enhancement of Video Streaming QoE by Considering Burst Loss in Wireless LANs

    Toshiro NUNOME  Yuta MATSUI  

     
    PAPER

      Publicized:
    2018/01/22
      Vol:
    E101-B No:7
      Page(s):
    1653-1660

    In order to enhance the QoE of audio and video IP transmission, this paper proposes a method for mitigating spatial quality impairment during burst loss periods over wireless networks in SCS, a QoE-based video output scheme. SCS switches between two common video output schemes: frame skipping and error concealment. The proposed method pauses video output on an undamaged frame during the burst loss period so as not to pause the output on a degraded frame. We perform an experiment with constant thresholds, the table-lookup method, and the proposed method under various network conditions. The results show that the effect of the proposed method on QoE can differ depending on the content and GOP structure.

  • Identifying Core Objects for Trace Summarization by Analyzing Reference Relations and Dynamic Properties

    Kunihiro NODA  Takashi KOBAYASHI  Noritoshi ATSUMI  

     
    PAPER

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1751-1765

    Behaviors of an object-oriented system can be visualized as reverse-engineered sequence diagrams from execution traces. This approach is a valuable tool for program comprehension tasks. However, owing to the massive amount of information contained in an execution trace, a reverse-engineered sequence diagram often suffers from a scalability issue. To address this issue, many trace summarization techniques have been proposed. Most of the previous techniques focused on reducing the vertical size of the diagram. To cope with the scalability issue, decreasing the horizontal size of the diagram is also very important; nonetheless, few studies have addressed this point, and there is a strong need for further development of horizontal summarization techniques. In this paper, we present a method for identifying core objects for trace summarization by analyzing reference relations and dynamic properties. By visualizing only the interactions related to core objects, we can obtain a horizontally compacted reverse-engineered sequence diagram that contains the system's key behaviors. To identify core objects, we first detect and eliminate temporary objects that are trivial to the system by analyzing the reference relations and lifetimes of objects. Then, by estimating the importance of each non-trivial object from its dynamic properties, we identify highly important ones (i.e., core objects). We implemented our technique in a tool and evaluated it using traces from various open-source software systems. The results showed that our technique was much more effective at horizontally reducing a reverse-engineered sequence diagram than the state-of-the-art trace summarization technique: the horizontal compression ratio of our technique was 134.6 on average, whereas that of the state-of-the-art technique was 11.5. The runtime overhead imposed by our technique was 167.6% on average, which is relatively small compared with recent scalable dynamic analysis techniques and shows the practicality of our technique. Overall, our technique achieves a significant reduction in the horizontal size of a reverse-engineered sequence diagram with a small overhead and is expected to be a valuable tool for program comprehension.

  • Analysis of a Plasmonic Pole-Absorber Using a Periodic Structure Open Access

    Junji YAMAUCHI  Shintaro OHKI  Yudai NAKAGOMI  Hisamatsu NAKANO  

     
    INVITED PAPER

      Vol:
    E101-C No:7
      Page(s):
    495-500

    A plasmonic black pole (PBP) consisting of a series of touching spherical metal surfaces is analyzed using the finite-difference time-domain (FDTD) method with the periodic boundary condition. First, the wavelength characteristics of the PBP are studied under the assumption that the PBP is omnidirectionally illuminated. It is found that partial truncation of each metal sphere reduces the reflectivity over a wide wavelength range. Next, we consider the case where the PBP is illuminated with a cylindrical wave from a specific direction. It is shown that an absorptivity of more than 80% is obtained over a wavelength range of λ=500 nm to 1000 nm. Calculation regarding the Poynting vector distribution also shows that the incident wave is bent and absorbed towards the center axis of the PBP.

  • Error Correction for Search Engine by Mining Bad Case

    Jianyong DUAN  Tianxiao JI  Hao WANG  

     
    PAPER-Natural Language Processing

      Publicized:
    2018/03/26
      Vol:
    E101-D No:7
      Page(s):
    1938-1945

    Automatic correction of users' search terms is an important aspect of improving search engine retrieval efficiency, accuracy and user experience. In the era of big data, massive search engine logs can be analyzed and mined to reveal hidden patterns and user intent, and better results can be obtained through statistical modeling of query errors in search engine log data. However, when an erroneous query cannot be found in the log, the information in the log cannot be fully exploited to correct the query. These undiscovered erroneous queries are called Bad Cases. This paper combines an error correction algorithm model with search engine query log mining and analysis. First, we explored Bad Cases in the query error correction process through the search engine query logs. Then we quantified the characteristics of these Bad Cases and built a model that allows search engines to automatically mine Bad Cases with these features. Finally, we applied the Bad Cases to an N-gram error correction algorithm model to examine the impact of Bad Case mining on error correction. The experimental results show that error correction based on Bad Case mining clearly improves the precision and recall of automatic error correction. User experience is improved and the interaction becomes more friendly.
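
    The N-gram scoring stage can be illustrated with a tiny self-contained bigram model that ranks candidate corrections of a query by smoothed log-probability. The corpus, candidate list, and function name below are placeholders; the Bad Case mining that feeds the real model is not reproduced here.

```python
import math
from collections import Counter

# Placeholder "query log" corpus used to train a bigram model.
corpus = [
    "cheap flight tickets", "cheap flights to tokyo",
    "weather in tokyo", "flight status check",
]
unigrams, bigrams = Counter(), Counter()
for line in corpus:
    toks = ["<s>"] + line.split() + ["</s>"]
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

def bigram_logprob(query, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of a candidate query."""
    toks = ["<s>"] + query.split() + ["</s>"]
    v = len(unigrams)
    return sum(
        math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * v))
        for a, b in zip(toks, toks[1:])
    )

# Rank candidate corrections of a misspelled query and keep the most probable one.
candidates = ["cheap flight tickets", "cheap fright tickets"]
print(max(candidates, key=bigram_logprob))
```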

  • Two High Accuracy Frequency Estimation Algorithms Based on New Autocorrelation-Like Function for Noncircular/Sinusoid Signal

    Kai WANG  Jiaying DING  Yili XIA  Xu LIU  Jinguang HAO  Wenjiang PEI  

     
    PAPER-Digital Signal Processing

      Vol:
    E101-A No:7
      Page(s):
    1065-1073

    Computing the autocorrelation coefficient can effectively reduce the influence of additive white noise and thus improve estimation precision. In this paper, an autocorrelation-like function, different from the ordinary one, is defined and proven to have better linear predictive performance. Two algorithms are developed for the signal model to obtain frequency estimates. We analyze the theoretical properties of the algorithms in additive white Gaussian noise. The simulation results match the theoretical values well in the sense of mean square error. Compared with existing estimators, the proposed algorithms are closer to the Cramer-Rao lower bound (CRLB). In addition, computer simulations demonstrate that the proposed algorithms provide high accuracy and good anti-noise capability.
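
    As background for autocorrelation-based frequency estimation (this is the classical lag-one estimator for a single complex exponential in white noise, not the paper's new autocorrelation-like function), the frequency can be read off the phase of the lag-one sample autocorrelation:

```python
import numpy as np

def freq_from_lag1(x, fs=1.0):
    """Estimate the frequency of a single complex exponential from the lag-1 autocorrelation."""
    r1 = np.vdot(x[:-1], x[1:])            # sum_n x[n+1] * conj(x[n])
    return np.angle(r1) * fs / (2 * np.pi)

# Toy example: complex tone at 0.12 * fs in additive white Gaussian noise.
rng = np.random.default_rng(0)
n = np.arange(1024)
x = np.exp(2j * np.pi * 0.12 * n) + 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
print(freq_from_lag1(x))                   # close to 0.12
```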

  • A 1024-QAM Capable WLAN Receiver with -56.3dB Image Rejection Ratio Using Self-Calibration Technique

    Shusuke KAWAI  Toshiyuki YAMAGISHI  Yosuke HAGIWARA  Shigehito SAIGUSA  Ichiro SETO  Shoji OTAKA  Shuichi ITO  

     
    PAPER

      Vol:
    E101-C No:7
      Page(s):
    457-463

    This paper presents a WLAN receiver in 65-nm CMOS technology capable of receiving 1024-QAM OFDM signals. Thermal-noise-based IQ frequency-independent mismatch correction and IQ frequency-dependent mismatch correction with baseband loopback are proposed for self-calibration in the receiver. The measured image rejection ratio after self-calibration is -56.3 dB. The receiver achieves an extremely low EVM of -37.1 dB even with a wide channel bandwidth of 80 MHz and is able to receive the 1024-QAM signal. The results indicate that the receiver can be extended to an 802.11ax-compliant receiver that supports higher-density modulation schemes with MIMO.
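
    The -56.3 dB figure can be put in perspective with the classic two-path IQ-mismatch model, in which the image level is set by the residual gain and phase errors; the function and the example error values below are illustrative assumptions, not measured data from the paper.

```python
import numpy as np

def image_rejection_ratio_db(gain_error_db, phase_error_deg):
    """Classic IQ-mismatch image rejection ratio (image-to-signal power ratio, in dB)."""
    g = 10 ** (gain_error_db / 20)          # amplitude mismatch ratio
    phi = np.deg2rad(phase_error_deg)       # phase mismatch
    irr = (1 - 2 * g * np.cos(phi) + g**2) / (1 + 2 * g * np.cos(phi) + g**2)
    return 10 * np.log10(irr)

# Example: about 0.02 dB gain and 0.15 deg phase error give roughly -55 dB image level.
print(image_rejection_ratio_db(0.02, 0.15))
```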

  • A Deep Reinforcement Learning Based Approach for Cost- and Energy-Aware Multi-Flow Mobile Data Offloading

    Cheng ZHANG  Zhi LIU  Bo GU  Kyoko YAMORI  Yoshiaki TANAKA  

     
    PAPER

      Publicized:
    2018/01/22
      Vol:
    E101-B No:7
      Page(s):
    1625-1634

    With the rapid increase in demand for mobile data, mobile network operators are trying to expand wireless network capacity by deploying wireless local area network (LAN) hotspots onto which they can offload mobile traffic. However, these network-centric methods usually do not serve the interests of mobile users (MUs). Taking into consideration issues such as the deadlines of different applications, monetary cost and energy consumption, how an MU decides whether to offload its traffic to a complementary wireless LAN is an important problem. Previous studies assume that the MU's mobility pattern is known in advance, which is not always true. In this paper, we study the MU's policy for minimizing its monetary cost and energy consumption without knowledge of the MU's mobility pattern. We propose to use a reinforcement learning technique called deep Q-network (DQN) so that the MU can learn the optimal offloading policy from past experience. In the proposed DQN-based offloading algorithm, the MU's mobility pattern is no longer needed. Furthermore, the MU's state of remaining data is fed directly into the convolutional neural network in the DQN without discretization. Therefore, not only does the discretization error present in previous work disappear, but the proposed algorithm also gains the ability to generalize from past experience, which is especially effective when the number of states is large. Extensive simulations are conducted to validate the proposed offloading algorithms.
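
    A heavily simplified, tabular Q-learning stand-in for the offloading decision is sketched below to show the structure of the learning loop; the toy environment, WLAN availability probability, costs, and deadline penalty are all assumptions for illustration and are not the paper's DQN or system model.

```python
import numpy as np

rng = np.random.default_rng(0)
MAX_DATA, HORIZON = 10, 15        # data units to deliver, slots before the deadline
P_WLAN = 0.4                      # assumed probability a WLAN hotspot is usable in a slot
COST_CELL, COST_WLAN = 1.0, 0.2   # assumed per-unit monetary/energy costs
PENALTY = 5.0                     # assumed penalty per unit left at the deadline

def step(remaining, t, action):
    """Toy environment: action 0 = cellular (always works), 1 = try WLAN offloading."""
    cost = 0.0
    if action == 0:
        cost, remaining = COST_CELL, remaining - 1
    elif rng.random() < P_WLAN:
        cost, remaining = COST_WLAN, remaining - 1
    if t + 1 >= HORIZON and remaining > 0:    # deadline missed
        cost += PENALTY * remaining
    return remaining, -cost

Q = np.zeros((MAX_DATA + 1, HORIZON, 2))      # Q[remaining, slot, action]
alpha, gamma, eps = 0.1, 0.95, 0.1
for _ in range(20000):
    remaining = MAX_DATA
    for t in range(HORIZON):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[remaining, t]))
        nxt, r = step(remaining, t, a)
        target = r + (gamma * np.max(Q[nxt, t + 1]) if t + 1 < HORIZON else 0.0)
        Q[remaining, t, a] += alpha * (target - Q[remaining, t, a])
        remaining = nxt
        if remaining == 0:
            break
print(int(np.argmax(Q[MAX_DATA, 0])))          # learned first-slot decision (0: cellular, 1: WLAN)
```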
