
Keyword Search Result

[Keyword] EMP(607hit)

41-60hit(607hit)

  • Combined Effects of Test Voltages and Climatic Conditions on Air Discharge Currents from ESD Generator with Two Different Approach Speeds

    Takeshi ISHIDA  Osamu FUJIWARA  

     
    PAPER-Electromagnetic Compatibility(EMC)

      Publicized:
    2020/06/08
      Vol:
    E103-B No:12
      Page(s):
    1432-1437

    Air discharge immunity testing for electronic equipment is specified in standard IEC 61000-4-2 of the International Electrotechnical Commission (IEC) under climatic conditions of temperature (T) from 15 to 35 degrees Celsius and relative humidity (RH) from 30 to 60%. This implies that air discharge testing is likely to provide significantly different test results because of the wide climatic range. To clarify the effects of the above climatic conditions on air discharge testing, we previously measured air discharge currents from an electrostatic discharge (ESD) generator with test voltages from 2kV to 15kV at an approach speed of 80mm/s under 6 combinations of T and RH in the IEC-specified and non-specified climatic ranges. The results showed that the same absolute humidity (AH), which is determined by T and RH, provides almost identical discharge current waveforms despite different T and RH, and also that the current peaks at higher test voltages decrease as the AH increases. In this study, we further examine the combined effects of test voltages, T, RH and AH on air discharges with respect to two different approach speeds of 20mm/s and 80mm/s. As a result, the approach speed of 80mm/s is confirmed to provide the same results as the previous ones under identical climatic conditions, whereas at a test voltage of 15kV under the IEC-specified climatic conditions over 30% RH, the 20mm/s approach speed yields current waveforms entirely different from those at 80mm/s despite the same AH, and the peaks are basically unaffected by the AH. Under the IEC non-specified climatic conditions with RH less than 20%, however, the peaks decrease at higher test voltages as the AH increases. These findings imply that under the same AH condition, the air discharge current peak at 80mm/s is almost unaffected by the RH, while at 20mm/s the lower the RH, the higher the peak of the air discharge current.
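
    The abstract treats absolute humidity (AH) as a quantity determined by T and RH. For reference only, the sketch below converts T and RH to AH with a standard Magnus-type approximation for saturation vapor pressure; the constants are generic textbook values and are not taken from the paper.

      # Hedged sketch: absolute humidity (g/m^3) from temperature (deg C) and relative humidity (%).
      # Magnus-type constants are generic textbook values, not from the paper.
      import math

      def absolute_humidity(temp_c: float, rh_percent: float) -> float:
          e_sat = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))  # saturation vapor pressure [hPa]
          e = e_sat * rh_percent / 100.0                               # actual vapor pressure [hPa]
          return 216.7 * e / (temp_c + 273.15)                         # water vapor density [g/m^3]

      # The corners of the IEC 61000-4-2 climatic range span a wide AH range:
      print(round(absolute_humidity(15, 30), 1))   # ~3.8 g/m^3
      print(round(absolute_humidity(35, 60), 1))   # ~23.8 g/m^3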

  • Loss Function Considering Multiple Attributes of a Temporal Sequence for Feed-Forward Neural Networks

    Noriyuki MATSUNAGA  Yamato OHTANI  Tatsuya HIRAHARA  

     
    PAPER-Speech and Hearing

      Publicized:
    2020/08/31
      Vol:
    E103-D No:12
      Page(s):
    2659-2672

    Deep neural network (DNN)-based speech synthesis became popular in recent years and is expected to soon be widely used in embedded devices and environments with limited computing resources. The key intention of these systems in poor computing environments is to reduce the computational cost of generating speech parameter sequences while maintaining voice quality. However, reducing computational costs is challenging for the two primary conventional DNN-based methods used for modeling speech parameter sequences. In feed-forward neural networks (FFNNs) with maximum likelihood parameter generation (MLPG), the MLPG reconstructs the temporal structure of the speech parameter sequences ignored by FFNNs but requires additional computational cost according to the sequence length. In recurrent neural networks, the recursive structure allows for the generation of speech parameter sequences while considering temporal structures without the MLPG, but increases the computational cost compared to FFNNs. We propose a new approach for DNNs to acquire parameters capturing the temporal structure by backpropagating the errors of multiple attributes of the temporal sequence via the loss function. This method enables FFNNs to generate speech parameter sequences while considering their temporal structure without the MLPG. We generated the fundamental frequency sequence and the mel-cepstrum sequence with our proposed method and conventional methods, and then synthesized and subjectively evaluated the speech obtained from these sequences. The proposed method enables even FFNNs that work on a frame-by-frame basis to generate speech parameter sequences by considering the temporal structure and to generate sequences perceptually superior to those from the conventional methods.
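
    The proposal backpropagates the errors of multiple attributes of the temporal sequence through the loss function. A minimal sketch is given below; the specific attributes (static values plus frame-to-frame deltas) and the equal weighting are illustrative assumptions, since the abstract does not spell them out.

      # Hedged sketch: a loss over multiple attributes of a temporal sequence.
      # The attribute set (static values and frame-to-frame deltas) and the weights
      # are illustrative assumptions, not the paper's exact formulation.
      import torch
      import torch.nn.functional as F

      def multi_attribute_loss(pred: torch.Tensor, target: torch.Tensor,
                               w_static: float = 1.0, w_delta: float = 1.0) -> torch.Tensor:
          """pred, target: (frames, dims) speech parameter sequences."""
          static_loss = F.mse_loss(pred, target)
          delta_loss = F.mse_loss(pred[1:] - pred[:-1], target[1:] - target[:-1])
          return w_static * static_loss + w_delta * delta_loss

      # A frame-by-frame FFNN trained with this loss receives gradients that reflect
      # the temporal structure (via the delta term) without needing MLPG at synthesis time.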

  • Corrected Stochastic Dual Coordinate Ascent for Top-k SVM

    Yoshihiro HIROHASHI  Tsuyoshi KATO  

     
    PAPER-Pattern Recognition

      Publicized:
    2020/08/06
      Vol:
    E103-D No:11
      Page(s):
    2323-2331

    Currently, the top-k error ratio is one of the primary measures of accuracy in multi-category classification. The top-k multiclass SVM was designed to minimize the empirical risk based on the top-k error ratio. Two SDCA-based algorithms exist for learning the top-k SVM, both of which have several properties desirable for optimization. However, both algorithms suffer from a serious disadvantage: they cannot attain optimal convergence in most cases owing to theoretical imperfections. As demonstrated through numerical simulations, if the proposed corrected SDCA algorithm is employed, optimal convergence is always achieved, in contrast to the failure of the two existing SDCA-based algorithms. Finally, our analytical results are presented to clarify the significance of these existing algorithms.
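
    As background, the top-k error ratio used as the accuracy measure here has a compact definition. The sketch below is a generic formulation (k = 5 is only an example), not code from the paper.

      # Hedged sketch: the top-k error ratio for multiclass predictions.
      # scores: (samples, classes); labels: (samples,). k = 5 is illustrative.
      import numpy as np

      def top_k_error(scores: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
          topk = np.argsort(-scores, axis=1)[:, :k]        # indices of the k highest scores per sample
          hits = (topk == labels[:, None]).any(axis=1)     # is the true label among the top k?
          return 1.0 - hits.mean()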

  • Empirical Evaluation of Mimic Software Project Data Sets for Software Effort Estimation

    Maohua GAN  Zeynep YÜCEL  Akito MONDEN  Kentaro SASAKI  

     
    PAPER-Software Engineering

      Publicized:
    2020/07/03
      Vol:
    E103-D No:10
      Page(s):
    2094-2103

    To conduct empirical research on industrial software development, it is necessary to obtain data on real software projects from industry. However, only a few such industry data sets are publicly available, and unfortunately, most of them are very old. In addition, most of today's software companies cannot make their data open, because software development involves many stakeholders and the confidentiality of its data must therefore be strongly preserved. To address this issue, this study proposes a method for artificially generating a “mimic” software project data set whose characteristics (such as mean, standard deviation and correlation coefficients) are very similar to those of a given confidential data set. Instead of using the original (confidential) data set, researchers are expected to use the mimic data set to produce results similar to those obtained from the original. The proposed method uses the Box-Muller transform for generating normally distributed random numbers, and exponential transformation and number reordering for data mimicry. To evaluate the efficacy of the proposed method, effort estimation is considered as a potential application domain for employing mimic data. Estimation models are built from eight reference data sets and their corresponding mimic data. Our experiments confirmed that models built from mimic data sets show effort estimation performance similar to that of models built from the original data sets, which indicates the capability of the proposed method to generate representative samples.
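
    The abstract names three ingredients for data mimicry: the Box-Muller transform, an exponential transformation, and number reordering. The sketch below strings those ingredients together for a single column of effort data; the log-normal assumption and the rank-based reordering are illustrative guesses, not the paper's actual procedure.

      # Hedged sketch of the mimic-data idea: Box-Muller normals, an exponential transform,
      # and reordering so the synthetic column follows the original column's rank order.
      # Parameter fitting and multi-column correlation handling are assumptions, not the paper's method.
      import numpy as np

      def box_muller(n: int, rng: np.random.Generator) -> np.ndarray:
          u1 = 1.0 - rng.random(n)                          # keep u1 in (0, 1] to avoid log(0)
          u2 = rng.random(n)
          return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)   # standard normal samples

      def mimic_column(original: np.ndarray, rng: np.random.Generator) -> np.ndarray:
          log_orig = np.log(original)                       # assume positive, roughly log-normal data
          z = box_muller(len(original), rng)
          synthetic = np.exp(log_orig.mean() + log_orig.std() * z)       # exponential transform back
          # Reorder the synthetic values to follow the original column's rank order,
          # which helps preserve correlations with other reordered columns.
          return np.sort(synthetic)[np.argsort(np.argsort(original))]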

  • A 0.6-V Adaptive Voltage Swing Serial Link Transmitter Using Near Threshold Body Bias Control and Jitter Estimation

    Yoshihide KOMATSU  Akinori SHINMYO  Mayuko FUJITA  Tsuyoshi HIRAKI  Kouichi FUKUDA  Noriyuki MIURA  Makoto NAGATA  

     
    PAPER-Electronic Circuits

      Publicized:
    2020/04/09
      Vol:
    E103-C No:10
      Page(s):
    497-504

    With increasing technology scaling and the use of lower voltages, more research interest is being shown in variability-tolerant analog front end design. In this paper, we describe an adaptive amplitude control transmitter that is operated using differential signaling to reduce the temperature variability effect. It enables low power, low voltage operation by synergy between adaptive amplitude control and Vth temperature variation control. It is suitable for high-speed interface applications, particularly cable interfaces. By installing an aggressor circuit to estimate transmitter jitter and changing its frequency and activation rate, we were able to analyze the effects of the interface block on the input buffer and thence on the entire system. We also report a detailed estimation of the receiver clock-data recovery (CDR) operation for transmitter jitter estimation. These investigations provide suggestions for widening the eye opening of the transmitter.

  • An MMT-Based Hierarchical Transmission Module for 4K/120fps Temporally Scalable Video

    Yasuhiro MOCHIDA  Takayuki NAKACHI  Takahiro YAMAGUCHI  

     
    PAPER

      Publicized:
    2020/06/22
      Vol:
    E103-D No:10
      Page(s):
    2059-2066

    High frame rate (HFR) video is attracting strong interest since it is considered a next step toward providing Ultra-High Definition video services. For instance, the Association of Radio Industries and Businesses (ARIB) standard, the latest broadcasting standard in Japan, defines a 120 fps broadcasting format. The standard stipulates temporally scalable coding and hierarchical transmission by MPEG Media Transport (MMT), in which the base layer and the enhancement layer are transmitted over different paths for flexible distribution. We have developed the first ever MMT transmitter/receiver module for 4K/120fps temporally scalable video. The module is equipped with a newly proposed encapsulation method for temporally scalable bitstreams with correct boundaries. It is also designed to be tolerant of severe network constraints, including packet loss, arrival timing offset, and delay jitter. We conducted a hierarchical transmission experiment for 4K/120fps temporally scalable video. The experiment demonstrated that the MMT module was successfully fabricated and is capable of dealing with severe network constraints. Consequently, the module has excellent potential as a means to support HFR video distribution in various network situations.

  • Exploiting Configurable Approximations for Tolerating Aging-induced Timing Violations

    Toshinori SATO  Tomoaki UKEZONO  

     
    PAPER

      Vol:
    E103-A No:9
      Page(s):
    1028-1036

    This paper proposes a technique that increases the lifetime of large-scale integration (LSI) devices. As semiconductor technology improves at miniaturizing transistors, aging effects due to bias temperature instability (BTI) seriously affect their lifetime. BTI increases the threshold voltage of transistors, thereby also increasing the delay of an electronic device and resulting in failures due to timing violations. To compensate for aging-induced timing violations, we exploit configurable approximate computing. Assuming that target circuits have exact and approximate modes, they are configured into the approximate mode if an aging sensor predicts violations. Experiments using an example circuit revealed an increase in its lifetime to >10 years.

  • Improvement on Uneven Heating in Microwave Oven by Diodes-Loaded Planar Electromagnetic Field Stirrer

    Ryosuke SUGA  Naruki SAITO  

     
    PAPER-Microwaves, Millimeter-Waves

      Publicized:
    2020/03/30
      Vol:
    E103-C No:9
      Page(s):
    388-395

    A planar electromagnetic field stirrer with periodically arranged metal patterns and diode switches is proposed for mitigating the uneven heating of an object placed in a microwave oven. The reflection phase of the proposed stirrer changes by switching the states of the diodes mounted on the stirrer, and the electromagnetic field in the microwave oven is thereby stirred. The temperature distribution of a heated object located in a microwave oven was simulated and measured using the stirrer in order to evaluate the mitigation of the uneven heating. As a result, the heated parts of the objects changed with the diode states, and the mitigation of the uneven heating was experimentally demonstrated.

  • A Comparative Study on Bandwidth and Noise for Pre-Emphasis and Post-Equalization in Visible Light Communication Open Access

    Dong YAN  Xurui MAO  Sheng XIE  Jia CONG  Dongqun HAN  Yicheng WU  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2020/02/25
      Vol:
    E103-B No:8
      Page(s):
    872-880

    This paper presents an analysis of the relationship between noise and bandwidth in visible light communication (VLC) systems. In the past few years, pre-emphasis and post-equalization techniques have been proposed to extend the bandwidth of VLC systems. However, these bandwidth extension techniques also influence the noise and sensitivity of VLC systems. In this paper, first, we build a system model of VLC transceivers and circuit models of pre-emphasis and post-equalization. Next, we theoretically compare the bandwidth and noise of three transceiver structures comprising a single pre-emphasis circuit, a single post-equalization circuit, and a combination of pre-emphasis and post-equalization circuits. Finally, we validate the presented theoretical analysis with experimental results. The results show that for the same resonant frequency, and for a high signal-to-noise ratio (S/N), VLC systems employing post-equalization or pre-emphasis have the same bandwidth extension ability. Therefore, a transceiver employing both the pre-emphasis and post-equalization techniques has a bandwidth √2 times that of systems employing only pre-emphasis or post-equalization. Based on the theoretical analysis of noise, the VLC system with only active pre-emphasis shows the lowest noise, making it a good choice for low-noise systems. The results of this paper may provide a new perspective on the noise and sensitivity of bandwidth extension techniques in VLC systems.

  • Online-Efficient Interval Test via Secure Empty-Set Check

    Katsunari SHISHIDO  Atsuko MIYAJI  

     
    PAPER-Cryptographic Techniques

      Publicized:
    2020/05/14
      Vol:
    E103-D No:7
      Page(s):
    1598-1607

    In the age of information and communications technology (ICT), various services involve not only collecting data but also using such data. It is necessary to ensure data privacy in such services while keeping computation and communication complexity efficient. In this paper, we propose the first interval test designed according to the notion of online and offline phases by executing our new empty-set check. Our protocol is proved to ensure both server and client privacy. Furthermore, neither the computational complexity of a client in the online phase nor the communication complexity from a server to a client depends on the size of the set. As a result, even in a practical situation in which one server receives requests from numerous clients, the waiting time for a client to obtain the result of an interval test can be minimized.

  • Gate Array Using Low-Temperature Poly-Si Thin-Film Transistors

    Mutsumi KIMURA  Masashi INOUE  Tokiyoshi MATSUDA  

     
    PAPER-Semiconductor Materials and Devices

      Publicized:
    2020/01/27
      Vol:
    E103-C No:7
      Page(s):
    341-344

    We have designed gate arrays using low-temperature poly-Si thin-film transistors and confirmed their correct operation. Various kinds of logic gates are prepared beforehand, contact holes are bored later, and mutual wiring is formed between the logic gates on demand. A half adder, a two-bit decoder, and a flip-flop are composed as examples. The static behaviors are evaluated, and it is confirmed that the correct waveforms are output. The dynamic behaviors are also evaluated, and it is concluded that the dynamic behaviors of the gate array are less deteriorated than those of the independent circuit.

  • Temporally Forward Nonlinear Scale Space for High Frame Rate and Ultra-Low Delay A-KAZE Matching System

    Songlin DU  Yuan LI  Takeshi IKENAGA  

     
    PAPER

      Publicized:
    2020/03/06
      Vol:
    E103-D No:6
      Page(s):
    1226-1235

    High frame rate and ultra-low delay are the most essential requirements for building excellent human-machine-interaction systems. As a state-of-the-art local keypoint detection and feature extraction algorithm, A-KAZE shows high accuracy and robustness. Nonlinear scale space is one of the most important modules in A-KAZE, but it not only incurs at least one frame of delay but also is not hardware friendly. This paper proposes a hardware-oriented nonlinear scale space for a high frame rate and ultra-low delay A-KAZE matching system. In the proposed matching system, one part of the nonlinear scale space is moved temporally forward and calculated in the previous frame (proposal #1), so that the processing delay is reduced to less than 1 ms. To improve the matching accuracy affected by proposal #1, pre-adjustment of the nonlinear scale (proposal #2) is proposed. The previous two frames are used for motion estimation to predict the motion vector between the previous frame and the current frame. For further improvement of matching accuracy, pixel-level pre-adjustment (proposal #3) is proposed. The pre-adjustment changes from block level to pixel level, and each pixel is assigned a unique motion vector. Experimental results prove that the proposed matching system shows an average matching accuracy higher than 95%, which is 5.88% higher than that of the existing high frame rate and ultra-low delay matching system. As for hardware performance, the proposed matching system processes VGA video (640×480 pixels/frame) at a speed of 784 frames/second (fps) with a delay of 0.978 ms/frame.

  • Temporal Constraints and Block Weighting Judgement Based High Frame Rate and Ultra-Low Delay Mismatch Removal System

    Songlin DU  Zhe WANG  Takeshi IKENAGA  

     
    PAPER

      Publicized:
    2020/03/18
      Vol:
    E103-D No:6
      Page(s):
    1236-1246

    A high frame rate and ultra-low delay matching system plays an increasingly important role in human-machine interactions, because it guarantees high-quality experiences for users. Existing image matching algorithms always generate mismatches, which heavily weaken the performance of human-machine-interactive systems. Although many mismatch removal algorithms have been proposed, few of them achieve real-time speed with high frame rate and low delay, because of complicated arithmetic operations and iterations. This paper proposes a temporal constraints and block weighting judgement based high frame rate and ultra-low delay mismatch removal system. The proposed method uses two temporal constraints (proposal #1 and proposal #2) to first find some true matches, and uses these true matches to generate block weighting (proposal #3). Proposal #1 finds some correct matches by checking a triangle route formed by three adjacent frames. Proposal #2 further reduces the mismatch risk by adding one more matching pass in the opposite matching direction. Finally, proposal #3 classifies the unverified matches as correct or incorrect through the weighting of each block. Software experiments show that the proposed mismatch removal system achieves state-of-the-art accuracy in mismatch removal. Hardware experiments indicate that the designed image processing core successfully achieves real-time processing of 784fps VGA (640×480 pixels/frame) video on a field programmable gate array (FPGA), with a delay of 0.858 ms/frame.
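
    Proposal #1 checks a triangle route over three adjacent frames. A minimal cycle-consistency sketch of that idea is shown below; the data layout (one match-index array per frame pair) is an assumption, not the paper's implementation.

      # Hedged sketch of a triangle-route (cycle-consistency) check across three adjacent frames.
      # match_ab[i] = index in frame B matched to keypoint i of frame A (-1 if unmatched), etc.
      from typing import List

      def triangle_consistent(i: int, match_ab: List[int], match_bc: List[int], match_ca: List[int]) -> bool:
          j = match_ab[i]                   # A -> B
          if j < 0:
              return False
          k = match_bc[j]                   # B -> C
          if k < 0:
              return False
          return match_ca[k] == i           # C -> A must close the triangle back at keypoint i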

  • Rapid Revolution Speed Control of the Brushless DC Motor for Automotive LIDAR Applications

    Hironobu AKITA  Tsunenobu KIMOTO  

     
    PAPER-Storage Technology

      Publicized:
    2020/01/10
      Vol:
    E103-C No:6
      Page(s):
    324-331

    Laser imaging detection and ranging (LIDAR) is one of the key sensors for autonomous driving. In order to improve its measurable distance, especially toward the front-side direction of the vehicle, this paper presents rapid revolution speed control of a brushless DC (BLDC) motor with a cyclostationary command signal. This increases the signal integration time for the designated direction and thus improves the signal-to-noise ratio (SNR), while maintaining the averaged revolution speed. We propose the use of pre-emphasis circuits to rapidly accelerate and decelerate the revolution speed of the motor by modifying the command signal so as to enhance the transitions of the speed. Adaptive signal processing can adjust the coefficients of the pre-emphasis filter automatically, so that it compensates for the decayed response of the motor and its controller. Experiments with a 20-W BLDC motor prove that the proposed technique enables the actual revolution speed to track the designated speed profile, ranging from 600 to 1400 revolutions per minute (rpm), during one turn.
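
    The abstract describes pre-emphasizing the command signal so that speed transitions are enhanced. The sketch below shows a generic first-order pre-emphasis of a speed command; the filter form and gain are illustrative assumptions, not the adaptive filter actually used in the paper.

      # Hedged sketch: first-order pre-emphasis of a speed-command signal.
      # The gain k and filter form are illustrative, not the paper's adaptive coefficients.
      def pre_emphasize(command, k: float = 0.5):
          out, prev = [], command[0]
          for x in command:
              out.append(x + k * (x - prev))   # boost changes in the command to speed up transitions
              prev = x
          return out

      # Example: a step from 600 rpm to 1400 rpm is momentarily overdriven,
      # compensating for the slow response of the motor and its controller.
      profile = [600] * 5 + [1400] * 5
      print(pre_emphasize(profile))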

  • Security Evaluation of Negative Iris Recognition

    Osama OUDA  Slim CHAOUI  Norimichi TSUMURA  

     
    PAPER-Biological Engineering

      Publicized:
    2020/01/29
      Vol:
    E103-D No:5
      Page(s):
    1144-1152

    Biometric template protection techniques have been proposed to address the security and privacy issues inherent to biometric-based authentication systems. However, it has been shown that the robustness of most such techniques against reversibility and linkability attacks is overestimated. Thus, a thorough security analysis of recently proposed template protection schemes has to be carried out. Negative iris recognition is an interesting iris template protection scheme based on the concept of negative databases. In this paper, we present a comprehensive security analysis of this scheme in order to validate its practical usefulness. Although the authors of negative iris recognition claim that their scheme possesses both irreversibility and unlinkability, we demonstrate that more than 75% of the original iris-code bits can be recovered using a single protected template. Moreover, we show that the negative iris recognition scheme is vulnerable to attacks via record multiplicity, where an adversary can combine several transformed templates to recover a larger proportion of the original iris-code. Finally, we demonstrate that the scheme does not possess unlinkability. The experimental results, on the public CASIA-IrisV3 Interval database, support our theory and confirm that the negative iris recognition scheme is susceptible to reversibility, linkability, and record multiplicity attacks.

  • Low Delay 4K 120fps HEVC Decoder with Parallel Processing Architecture

    Ken NAKAMURA  Daisuke KOBAYASHI  Yuya OMORI  Tatsuya OSAWA  Takayuki ONISHI  Koyo NITTA  Hiroe IWASAKI  

     
    PAPER

      Vol:
    E103-C No:3
      Page(s):
    77-84

    In this paper, we describe a novel low-delay 4K 120-fps real-time HEVC decoder with a parallel processing architecture that conforms to the HEVC main 4:2:2 10 profile. It supports the hierarchical temporal scalable streams required for Ultra High Definition high-frame-rate broadcasting and also supports low-delay and high-bitrate decoding for video transmission uses. To achieve this support, the decoding processes are parallelized and pipelined at the frame level, slice level, and coding tree unit row level. The proposed decoder was implemented on three FPGAs operated at 133 and 150 MHz, and it achieved 300-Mbps stream decoding and 37-msec end-to-end delay with our concurrently developed 4K 120-fps encoder.

  • Template-Based Monte-Carlo Test-Suite Generation for Large and Complex Simulink Models Open Access

    Takashi TOMITA  Daisuke ISHII  Toru MURAKAMI  Shigeki TAKEUCHI  Toshiaki AOKI  

     
    PAPER

      Vol:
    E103-A No:2
      Page(s):
    451-461

    MATLAB/Simulink is the de facto standard tool for the model-based development (MBD) of control software for automotive systems. A Simulink model developed in MBD for real automotive systems involves complex computation as well as tens of thousands of blocks. In this paper, we focus on decision coverage (DC), condition coverage (CC) and modified condition/decision coverage (MC/DC) criteria, and propose a Monte-Carlo test suite generation method for large and complex Simulink models. In the method, a candidate test case is generated by assigning random values to the parameters of signal templates with specific waveforms. We try to find contributable candidates in a plausible and understandable search space, specified by a set of templates. We implemented the method as a tool, and our experimental evaluation showed that the tool was able to generate test suites for industrial implementation models with higher coverages and shorter execution times than Simulink Design Verifier. Additionally, the tool includes a fast coverage measurement engine, which demonstrated better performance than Simulink Coverage in our experiments.
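
    Candidate test cases are generated by assigning random values to the parameters of signal templates with specific waveforms. The sketch below illustrates that idea with a toy template set; the template names and parameter ranges are assumptions, not the tool's actual templates.

      # Hedged sketch of template-based random test-case generation: each model input is drawn
      # from a parameterized waveform template. Template set and ranges are illustrative only.
      import random

      def sample_signal(template: str, n_steps: int) -> list:
          if template == "constant":
              v = random.uniform(-10.0, 10.0)
              return [v] * n_steps
          if template == "step":
              a, b = random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0)
              t = random.randrange(1, n_steps)
              return [a] * t + [b] * (n_steps - t)
          if template == "ramp":
              a, slope = random.uniform(-10.0, 10.0), random.uniform(-1.0, 1.0)
              return [a + slope * i for i in range(n_steps)]
          raise ValueError(template)

      def candidate_test_case(inputs: dict, n_steps: int = 100) -> dict:
          # inputs maps each model input port to the name of its assigned template
          return {port: sample_signal(tmpl, n_steps) for port, tmpl in inputs.items()}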

  • Temporal Domain Difference Based Secondary Background Modeling Algorithm

    Guowei TENG  Hao LI  Zhenglong YANG  

     
    LETTER-Communication Theory and Signals

      Vol:
    E103-A No:2
      Page(s):
    571-575

    This paper proposes a temporal domain difference based secondary background modeling algorithm for surveillance video coding. The proposed algorithm has three key technical contributions, as follows. Firstly, the LDBCBR (Long Distance Block Composed Background Reference) algorithm is proposed, which exploits IBBS (interval of background blocks searching) to weaken the temporal correlation of the foreground. Secondly, both BCBR (Block Composed Background Reference) and LDBCBR are exploited at the same time to generate a temporary background reference frame, and the secondary modeling algorithm utilizes the temporary background blocks generated by BCBR and LDBCBR to obtain the final background frame. Thirdly, monitoring the background reference frame after it is generated is also important: background blocks are updated immediately when a large change occurs, the modeling period is shortened in areas where the foreground moves frequently, and the stable background is checked regularly. The proposed algorithm is implemented on the IEEE 1857 platform, and the experimental results demonstrate that it brings a significant improvement in coding efficiency. On surveillance test sequences recommended by the China AVS (Advanced Audio Video Standard) working group, our method achieves BD-rate gains of 6.81% and 27.30% compared with BCBR and the baseline profile, respectively.
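
    The underlying idea is to treat a block as background when its co-located versions stay nearly unchanged across a temporal search interval. The sketch below illustrates such a block-stability test; the block size, interval, and threshold are illustrative assumptions, not the BCBR/LDBCBR settings.

      # Hedged sketch of background-block selection via temporal differences: a co-located block
      # that stays nearly unchanged across a search interval is taken as background.
      # Block size, interval, and threshold are illustrative assumptions.
      import numpy as np

      def background_block(frames, y, x, block=16, interval=8, thresh=2.0):
          """frames: list of 2-D grayscale arrays; returns a background estimate for block (y, x) or None."""
          ref = frames[0][y:y+block, x:x+block].astype(np.float64)
          for f in frames[interval::interval]:
              cur = f[y:y+block, x:x+block].astype(np.float64)
              if np.abs(cur - ref).mean() > thresh:     # block changed: foreground likely present
                  return None
              ref = cur
          return ref                                     # stable over the interval: treat as background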

  • Adaptive-Partial Template Update with Center-Shifting Recovery for High Frame Rate and Ultra-Low Delay Deformation Matching

    Songlin DU  Yuhao XU  Tingting HU  Takeshi IKENAGA  

     
    PAPER-Image

      Vol:
    E102-A No:12
      Page(s):
    1872-1881

    A high frame rate and ultra-low delay matching system plays an important role in various human-machine interactive applications, which demand better performance in matching deformable and out-of-plane rotating objects. Although many algorithms have been proposed for deformation tracking and matching, few of them are suitable for hardware implementation due to complicated operations and large time consumption. This paper proposes a hardware-oriented template update and recovery method for a high frame rate and ultra-low delay deformation matching system. In the proposed method, the new template is generated in real time by partially updating the template descriptor and adding new keypoints simultaneously with the pixel-wise matching process (proposal #1), which avoids the large inter-frame delay. The size and shape of the region of interest (ROI) are made flexible, and the Hamming threshold used for brute-force matching is adjusted according to pixel position and the flexible ROI (proposal #2), which solves the problem of template drift. The template is recovered from the previous one with a relative center-shifting vector when it is judged as lost via a region-wise difference check (proposal #3). Evaluation results indicate that the proposed method achieves real-time processing of 784fps at a resolution of 640×480 on a field-programmable gate array (FPGA), with a delay of 0.808ms/frame, and achieves satisfactory deformation matching results in comparison with other general methods.

  • Understanding Developer Commenting in Code Reviews

    Toshiki HIRAO  Raula GAIKOVINA KULA  Akinori IHARA  Kenichi MATSUMOTO  

     
    PAPER

      Publicized:
    2019/09/11
      Vol:
    E102-D No:12
      Page(s):
    2423-2432

    Modern code review is a well-known practice for assessing the quality of software, in which developers discuss the quality in a web-based review tool. However, this lightweight approach may risk inefficient review participation, especially when comments become either excessive (i.e., too many) or underwhelming (i.e., too few). In this study, we investigate the phenomena of reviewer commenting. Through a large-scale empirical analysis of over 1.1 million reviews from five OSS systems, we conduct an exploratory study to investigate the frequency, size, and evolution of reviewer commenting. Moreover, we also conduct a modeling study to understand the most important features that potentially drive reviewer comments. Our results show that (i) the number of comments and the number of words in the comments tend to vary among reviews and across the studied systems; (ii) reviewers change their commenting behaviour over time; and (iii) human experience and patch property aspects impact the number of comments and the number of words in the comments.

41-60hit(607hit)