
Keyword Search Result

[Keyword] scale (272 hits)

141-160 of 272 hits

  • CFAR Detector Based on Goodness-of-Fit Tests

    Xiaobo DENG  Yiming PI  Zhenglin CAO  

     
    PAPER-Sensing

    Vol: E92-B No:6   Page(s): 2209-2217

    This paper develops a complete architecture for constant false alarm rate (CFAR) detection based on a goodness-of-fit (GOF) test. The architecture begins with a logarithmic amplifier, which transforms the background distribution, whether Weibull or lognormal, into a location-scale (LS) one, some relevant properties of which are exploited to ensure CFAR. Finally, a GOF test decides whether the samples under test belong to the background or are abnormal given the background, and so should be declared a target of interest. The performance of this new CFAR scheme is investigated in both homogeneous and multiple-interfering-target environments.
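    A minimal numerical sketch of the detection flow above, assuming lognormal clutter (so the log-amplifier output is Gaussian), a simple reference-window estimate of location and scale, and a Kolmogorov-Smirnov test as the GOF stage; the false-alarm threshold and window handling are illustrative, not the paper's exact design.

      import numpy as np
      from scipy import stats

      def gof_cfar(cut_samples, reference_cells, alpha=1e-3):
          # Logarithmic amplifier: lognormal (or Weibull) clutter becomes
          # location-scale distributed after the log transform.
          z_ref = np.log(reference_cells)
          z_cut = np.log(cut_samples)
          # Location/scale estimated from the background window; standardizing
          # with these estimates is what keeps the false alarm rate constant.
          loc, scale = z_ref.mean(), z_ref.std(ddof=1)
          standardized = (z_cut - loc) / scale
          # GOF stage: do the samples under test still look like background?
          _, p_value = stats.kstest(standardized, 'norm')
          return p_value < alpha   # True -> declare a target of interest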

  • High-Speed and Ultra-Low-Voltage Divide-by-4/5 Counter for Frequency Synthesizer

    Yu-Lung LO  Wei-Bin YANG  Ting-Sheng CHAO  Kuo-Hsing CHENG  

     
    LETTER-Electronic Circuits

    Vol: E92-C No:6   Page(s): 890-893

    A high-speed and ultra-low-voltage divide-by-4/5 counter with a dynamic floating-input D flip-flop (DFIDFF) is presented in this paper. The proposed DFIDFF and control logic gates are merged to reduce the effective capacitance of internal and external nodes and to increase the operating speed of the divide-by-4/5 counter. The proposed divide-by-4/5 counter is fabricated in a 0.13-µm CMOS process. The measured maximum operating frequency and power consumption of the counter are 600 MHz and 8.35 µW at a 0.5 V supply voltage. HSPICE simulations demonstrate that the proposed counter (divide-by-4) reduces the power-delay product (PDP) by 37%, 71%, and 57% relative to the TGFF counter, Yang's counter [1], and the E-TSPC counter [2], respectively.

  • A 2.0 Vpp Input, 0.5 V Supply Delta Amplifier with A-to-D Conversion

    Yoshihiro MASUI  Takeshi YOSHIDA  Atsushi IWATA  

     
    PAPER

    Vol: E92-C No:6   Page(s): 828-834

    Recent progress in scaled CMOS technologies can enhance the signal bandwidth and clock frequency of analog-digital mixed VLSIs. However, the inevitable reduction of the supply voltage causes a signal-voltage mismatch between a non-scaled analog chip and a scaled A-D mixed chip. To overcome this problem, we present a Delta-Amplifier (DeltAMP) which can handle a larger signal amplitude than the supply voltage. DeltAMP folds a delta signal of an input voltage within a window using a virtual ground amplifier, modulation switches, and comparators. For reconstruction of the folded delta signal back to the original signal, Analog-Time-Digital conversion (ATD) was also proposed, in which pulse-width analog information obtained at the comparators in DeltAMP is converted to a digital signal by counting. A test chip of DeltAMP with ATD was designed and fabricated using a 90 nm CMOS technology. A 2 Vpp input voltage range and 50 µW power consumption were achieved in measurements with a 0.5 V supply. A high accuracy of 62 dB SNR was obtained at a signal bandwidth of 120 kHz.
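    The folding and reconstruction idea can be mimicked with a purely behavioral model (no circuit detail); the window width below is an arbitrary illustrative value.

      import numpy as np

      def fold_signal(x, window=0.25):
          # Wrap the input delta into [-window, +window); the fold count plays
          # the role of the pulse/counter information, the residue is the
          # analog delta kept inside the window.
          folds = np.floor((x + window) / (2 * window)).astype(int)
          residue = x - folds * 2 * window
          return residue, folds

      def reconstruct(residue, folds, window=0.25):
          # ATD-style reconstruction: digital fold count plus analog residue
          return residue + folds * 2 * window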

  • Improved Estimation of the Number of Competing Stations Using Scaled Unscented Filter in an IEEE 802.11 Network

    Jang Sub KIM  Ho Jin SHIN  Dong Ryeol SHIN  

     
    PAPER-Terrestrial Radio Communications

    Vol: E91-B No:11   Page(s): 3688-3694

    In this paper, a new methodology to estimate the number of competing stations in an IEEE 802.11 network is proposed. Due to the nonlinear nature of the measurement model, an iterative nonlinear filtering algorithm, called the Scaled Unscented Filter (SUF), is employed. The SUF provides a superior alternative for nonlinear filtering to the conventional Extended Kalman Filter (EKF), since it avoids the errors associated with linearization. The approach demonstrates both high accuracy and prompt reactivity to changes in the network occupancy status. In particular, the proposed algorithm shows superior performance in non-saturated conditions compared to the EKF. Numerical results demonstrate that the proposed algorithm provides a more viable method for estimating the number of competing stations in an IEEE 802.11 network than estimators based on the EKF.
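    For readers unfamiliar with the SUF, the sketch below shows the scaled unscented transform that replaces EKF-style linearization; h() stands in for the nonlinear 802.11 measurement model and the scaling parameters are common default choices, both assumptions rather than the paper's exact settings.

      import numpy as np

      def scaled_unscented_transform(x_mean, P, h, alpha=1e-3, beta=2.0, kappa=0.0):
          n = len(x_mean)
          lam = alpha**2 * (n + kappa) - n
          S = np.linalg.cholesky((n + lam) * P)
          sigma = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])   # 2n+1 sigma points
          wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
          wc = wm.copy()
          wm[0] = lam / (n + lam)
          wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
          y = np.array([h(s) for s in sigma])            # propagate through the nonlinearity
          y_mean = wm @ y
          y_cov = ((y - y_mean).T * wc) @ (y - y_mean)   # statistics without any Jacobian
          return y_mean, y_cov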

  • A Fuzzy Estimation Theory for Available Operation of Extremely Complicated Large-Scale Network Systems

    Kazuo HORIUCHI  

     
    PAPER-Nonlinear System Theory

    Vol: E91-A No:9   Page(s): 2396-2402

    In this paper, we describe a fuzzy estimation theory based on the concept of set-valued operators, suitable for the available operation of extremely complicated large-scale network systems. Fundamental conditions for the availability of the system behaviors of such network systems are clarified in the form of a β-level fixed-point theorem for a system of fuzzy-set-valued operators. The proof of this theorem is accomplished in a weak topology introduced into the Banach space.

  • Light Weight MP3 Watermarking Method for Mobile Terminals

    Koichi TAKAGI  Shigeyuki SAKAZAWA  Yasuhiro TAKISHIMA  

     
    PAPER-Engineering Acoustics

    Vol: E91-A No:9   Page(s): 2546-2554

    This paper proposes a novel MP3 watermarking method that is applicable to mobile terminals with limited computational resources. Considering that in most cases the embedded information is copyright information or metadata, which should be extracted before the audio content is played back, the watermark detection process must be executed at high speed. However, when conventional methods are used on a mobile terminal, detecting a digital watermark takes a considerable amount of time. This paper focuses on scalefactor manipulation to enable high-speed watermark embedding/detection for MP3 audio, and also proposes a manipulation method that adaptively minimizes audio quality degradation. Evaluation tests showed that the proposed method is capable of embedding 3 bits/frame of information without degrading audio quality and detecting them at very high speed. Finally, this paper describes application examples for authentication with a digital signature.
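    A purely illustrative sketch of the scalefactor-manipulation idea (one bit per frame, a fixed band index, and the MP3 frame parsing itself are all assumptions, not the paper's scheme):

      def embed_bits(scalefactors_per_frame, bits, band=2):
          # Force the parity of one scalefactor per frame to carry one bit;
          # a +1 change keeps the audible impact small.
          for frame, bit in zip(scalefactors_per_frame, bits):
              if frame[band] % 2 != bit:
                  frame[band] += 1
          return scalefactors_per_frame

      def extract_bits(scalefactors_per_frame, band=2):
          # Detection reads the parity back without decoding the audio.
          return [frame[band] % 2 for frame in scalefactors_per_frame]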

  • A Large-Scale, Flip-Flop RAM Imitating a Logic LSI for Fast Development of Process Technology

    Masako FUJII  Koji NII  Hiroshi MAKINO  Shigeki OHBAYASHI  Motoshige IGARASHI  Takeshi KAWAMURA  Miho YOKOTA  Nobuhiro TSUDA  Tomoaki YOSHIZAWA  Toshikazu TSUTSUI  Naohiko TAKESHITA  Naofumi MURATA  Tomohiro TANAKA  Takanari FUJIWARA  Kyoko ASAHINA  Masakazu OKADA  Kazuo TOMITA  Masahiko TAKEUCHI  Shigehisa YAMAMOTO  Hiromitsu SUGIMOTO  Hirofumi SHINOHARA  

     
    PAPER

    Vol: E91-C No:8   Page(s): 1338-1347

    We propose a new large-scale logic test element group (TEG), called a flip-flop RAM (FF-RAM), to improve the total process quality before and during initial mass production. It is designed to be as convenient as an SRAM for measurement and to imitate a logic LSI. We implemented a 10-Mgate FF-RAM using our 65-nm CMOS process. The FF-RAM enables us to make fail-bit maps (FBMs) of logic cells because of its SRAM-like cell array structure. The FF-RAM also has an additional structure to detect open and short failures of the upper metal layers. The test results show that it can detect failure locations and layers effortlessly using FBMs. We measured and analyzed it for both the cell arrays and the upper metal layers, and the results provided many important clues for improving our processes. We also measured the neutron-induced soft error rate (SER) of the FF-RAM, which is becoming a serious problem as transistors become smaller, and compared the results to those of previous generations: 180 nm, 130 nm, and 90 nm. With this TEG, we can considerably shorten the development period for advanced CMOS technology.

  • Robust Object-Based Watermarking Using Feature Matching

    Viet-Quoc PHAM  Takashi MIYAKI  Toshihiko YAMASAKI  Kiyoharu AIZAWA  

     
    PAPER-Application Information Security

    Vol: E91-D No:7   Page(s): 2027-2034

    We present a robust object-based watermarking algorithm that uses the scale-invariant feature transform (SIFT) in conjunction with a data embedding method based on the Discrete Cosine Transform (DCT). The message is embedded in the DCT domain of randomly generated blocks in the selected object region. To recognize the object region after it has been distorted, its SIFT features are registered in advance. In the detection scheme, we extract SIFT features from the distorted image and match them with the registered ones. We then recover the distorted object region from the transformation parameters obtained from the SIFT matching result, after which the embedded message can be detected. Experimental results demonstrate that the proposed algorithm is very robust to distortions such as JPEG compression, scaling, rotation, shearing, aspect-ratio change, and image filtering.
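    A rough sketch of the detection-side geometry recovery, using OpenCV's SIFT and a RANSAC homography; the registered keypoint coordinates/descriptors and the DCT-domain message extraction are assumed to be handled elsewhere.

      import cv2
      import numpy as np

      def recover_object_region(distorted_img, registered_pts, registered_desc, region_shape):
          # registered_pts: (x, y) coordinates of the SIFT features stored at embedding time
          sift = cv2.SIFT_create()
          kp, desc = sift.detectAndCompute(distorted_img, None)
          matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(registered_desc, desc, k=2)
          good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # ratio test
          src = np.float32([registered_pts[m.queryIdx] for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 5.0)   # distorted -> original frame
          h, w = region_shape
          return cv2.warpPerspective(distorted_img, H, (w, h))   # region ready for DCT detection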

  • The Interaction of Art, Technology and Customers in Picture Making

    John J. MCCANN  Yoichi MIYAKE  

     
    INVITED PAPER

    Vol: E91-A No:6   Page(s): 1369-1382

    Human interest in pictures dates back to 14,000 BC. Pictures can be drawn by hand or imaged by optical means. Over time, pictures have changed from being rare and unique to being ubiquitous and common; they have changed from treasures to transients. This paper summarizes many picture technologies and discusses their dynamic range and their color and tone-scale rendering. It also discusses the interactions between advances in technology and the interests of their users over time. It is the combination of technology and society's usage that has shaped imaging since its beginning and continues to do so.

  • Junction Depth Dependence of the Gate Induced Drain Leakage in Shallow Junction Source/Drain-Extension Nano-CMOS

    Seung-Hyun SONG  Jae-Chul KIM  Sung-Woo JUNG  Yoon-Ha JEONG  

     
    PAPER

    Vol: E91-C No:5   Page(s): 761-766

    This study describes the dependence of the surface electric field on the junction depth of the source/drain-extension, and the suppression of gate-induced drain leakage (GIDL) in a fully depleted, shallow-junction, gate-overlapped source/drain-extension (SDE). The GIDL can be reduced by reducing the shallow junction depth of the drain-extension. The total space charge is a function of the junction depth in a fully depleted shallow-junction drain-extension, and the surface potential is proportional to this charge. Because the GIDL is proportional to the surface potential, the GIDL is a function of the junction depth in a fully depleted shallow-junction drain-extension. Therefore, the GIDL is suppressed in a fully depleted shallow-junction drain-extension by reducing the surface potential. Negative substrate bias and halo doping can also suppress the GIDL. The GIDL characteristic under negative substrate bias is contrary to that predicted by other GIDL models.

  • CombNET-III with Nonlinear Gating Network and Its Application in Large-Scale Classification Problems

    Mauricio KUGLER  Susumu KUROYANAGI  Anto Satriyo NUGROHO  Akira IWATA  

     
    PAPER-Pattern Recognition

    Vol: E91-D No:2   Page(s): 286-295

    Modern applications of pattern recognition generate very large amounts of data, which require large computational effort to process. However, the majority of methods intended for large-scale problems merely adapt standard classification methods without considering whether those algorithms are appropriate for large-scale problems. CombNET-II was one of the first methods specifically proposed for this kind of task. Recently, an extension of this model, named CombNET-III, was proposed. The main modifications over the previous model were the substitution of the expert networks by Support Vector Machines (SVM) and the development of a general probabilistic framework. Although the previous model's performance and flexibility were improved, the low accuracy of the gating network was still compromising CombNET-III's classification results. In addition, due to the use of SVM-based experts, the computational complexity is higher than that of CombNET-II. This paper proposes a new two-layered gating network structure that reduces the compromise between the number of clusters and accuracy, increasing the model's performance with only a small increase in complexity. This high-accuracy gating network also enables the removal of low-confidence expert networks from the decoding procedure. This, together with a new, faster strategy for calculating multiclass SVM outputs, significantly reduces the computational complexity. Experimental results on problems with a large number of categories show that the proposed model outperforms the original CombNET-III while presenting a computational complexity more than one order of magnitude smaller. Moreover, when applied to a database with a large number of samples, it outperformed all compared methods, confirming the proposed model's flexibility.
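    Schematically, the decoding rule described above (gate, prune low-confidence experts, combine) can be written as follows; the gating model, the SVM experts, and the threshold are placeholders, not the paper's implementation.

      import numpy as np

      def combnet_decode(x, gating_proba, experts, threshold=0.05):
          # gating_proba(x): cluster probabilities from the two-layered gating network
          # experts[k](x): class posteriors from the k-th SVM expert
          g = gating_proba(x)
          active = np.where(g >= threshold)[0]          # drop low-confidence experts
          if active.size == 0:
              active = np.array([g.argmax()])           # fall back to the best cluster
          posterior = sum(g[k] * experts[k](x) for k in active)
          return int(np.argmax(posterior))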

  • DFP: Data Forwarding Protocol to Provide End-to-End Reliable Delivery Service in Large-Scale Wireless Sensor Networks

    Joo-Sang YOUN  Chul-Hee KANG  

     
    PAPER

    Vol: E90-B No:12   Page(s): 3383-3391

    Reliable end-to-end delivery service is one of the most important issues for wireless sensor networks in large-scale deployments. In this paper, a reliable data transport protocol, called the Data Forwarding Protocol (DFP), is proposed to improve the end-to-end delivery rate with minimum transport overhead for recovering from data loss in large-scale wireless sensor environments consisting of low-speed mobile sensor nodes. The key idea behind this protocol is the establishment of multi-split connections on an end-to-end route through Agent Hosts (AHs), which play the role of a virtual source or sink node. In addition, DFP applies local error control and local flow control mechanisms to the multi-split connections according to the network state. Extensive simulations are carried out with the ns-2 simulator. The simulation results demonstrate that DFP not only provides up to 30% more reliable data delivery but also reduces the number of retransmissions caused by data loss, compared with a TCP-like end-to-end approach.

  • A Basic Theory for Available Operation of Extremely Complicated Large-Scale Network Systems

    Kazuo HORIUCHI  

     
    PAPER-Systems Theory and Control

    Vol: E90-A No:10   Page(s): 2232-2238

    In this paper, we describe a basic theory based on the concept of set-valued operators, suitable for the available operation of extremely complicated large-scale network systems. Fundamental conditions for the availability of the system behaviors of such network systems are clarified in the form of a fixed-point theorem for a system of set-valued operators. The proof of this theorem is accomplished using the concept of Hausdorff's ball measure of non-compactness.

  • Evaluation of Reliable Multicast Applications for Large-Scale Contents Delivery

    Teruji SHIROSHITA  Shingo KINOSHITA  Takahiko NAGATA  Tetsuo SANO  Yukihiro NAKAMURA  

     
    PAPER

    Vol: E90-B No:10   Page(s): 2738-2745

    Reliable multicast has been applied to large-scale content delivery systems for distributing digital content to a large number of users without data loss. Reliable content distribution is indispensable for software updates and management-data sharing in actual delivery services. This paper evaluates the implementation and performance of RMTP, a reliable multicast protocol for bulk-data transfer, through the development of content delivery systems. The software configuration is also examined, including operational functions such as delivery scheduling. Furthermore, the applicability of reliable multicast to emerging broadband networks is discussed based on the experimental results. Through the deployment of the protocol and the software, performance estimation has played a key role in constructing the delivery systems as well as in designing the communication protocol.

  • Performance Analysis of Large-Scale IP Networks Considering TCP Traffic

    Hiroyuki HISAMATSU  Go HASEGAWA  Masayuki MURATA  

     
    PAPER-Network Management/Operation

    Vol: E90-B No:10   Page(s): 2845-2853

    In this paper, we propose a novel analysis method for large-scale networks that takes into account the behavior of the TCP congestion control mechanism. In the analysis, we model the behavior of TCP at the end hosts and of the network links as independent systems, and combine them into a single system in order to analyze the entire network. Using this approach, we can analyze a large-scale network, i.e. one with over 100/1,000/10,000 routers/hosts/links and 100,000 TCP connections, very rapidly. In particular, unlike ns-2 simulation, the calculation time of our analysis is independent of the network bandwidth and propagation delay. Specifically, we can derive the utilization of the network links, the packet loss ratio at the link buffers, the round-trip time (RTT) and throughput of TCP connections, and the location and degree of network congestion. We validate our approximate analysis by comparing the analytic results with simulation results. We also show that our analysis method appropriately captures the behavior of TCP connections in a large-scale network.
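    The flavor of the end-host/link decomposition can be illustrated with a toy single-bottleneck fixed point; the TCP throughput formula and the M/M/1/K loss model below are simplified stand-ins for the paper's equations, not its actual model.

      import math

      def analyze_link(n_flows, capacity_pkts, rtt, buf_pkts, iters=200):
          loss = 1e-4
          for _ in range(iters):
              # End-host model: per-flow TCP throughput as a function of RTT and loss
              per_flow = min(1.22 / (rtt * math.sqrt(loss)), capacity_pkts / n_flows)
              # Link model: utilization and a crude buffer-overflow loss estimate
              rho = min(n_flows * per_flow / capacity_pkts, 0.999)
              loss_new = (1 - rho) * rho**buf_pkts / (1 - rho**(buf_pkts + 1))
              loss = 0.5 * loss + 0.5 * loss_new        # damped fixed-point iteration
          return rho, loss                              # link utilization, packet loss ratio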

  • A Multi-Scale Adaptive Grey World Algorithm

    Bing LI  De XU  Moon Ho LEE  Song-He FENG  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E90-D No:7   Page(s): 1121-1124

    The grey world algorithm is a well-known color constancy algorithm. It is based on the Grey-World assumption, i.e., that the average reflectance of surfaces in the world is achromatic. The algorithm is simple and has a low computational cost. However, for images containing only several colors, the light source color cannot be estimated correctly using the grey world algorithm. In this paper, we propose a Multi-scale Adaptive Grey World algorithm (MAGW). First, multi-scale images are obtained by wavelet transformation and the illumination color is estimated from the images at different scales. Then, according to the estimated illumination color, the original image is mapped into an image under a canonical illumination, under the supervision of an adaptive reliability function based on the image entropy. Experimental results show that our algorithm is effective and also has a low computational cost.
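    A compressed sketch of the multi-scale estimation step, using PyWavelets approximations and a plain von Kries correction; the entropy-based adaptive reliability function is omitted and the wavelet and level choices are assumptions.

      import numpy as np
      import pywt

      def multiscale_grey_world(img_rgb, levels=3):
          # Grey-world illuminant estimate at several wavelet approximation scales
          scale_means = []
          approx = img_rgb.astype(float)
          for _ in range(levels):
              approx = np.stack(
                  [pywt.dwt2(approx[..., c], 'db2')[0] for c in range(3)], axis=-1)
              scale_means.append(approx.reshape(-1, 3).mean(axis=0))
          illum = np.mean(scale_means, axis=0)          # combine per-scale estimates
          gain = illum.mean() / illum                   # diagonal (von Kries) correction
          return np.clip(img_rgb * gain, 0, 255).astype(np.uint8)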

  • Experimental Verification of Power Supply Noise Modeling for EMI Analysis through On-Board and On-Chip Noise Measurements

    Kouji ICHIKAWA  Yuki TAKAHASHI  Makoto NAGATA  

     
    PAPER

    Vol: E90-C No:6   Page(s): 1282-1290

    Power supply noise waveforms are acquired in the voltage domain by an on-chip monitor at resolutions of 0.3 ns/1.2 mV, in a digital test circuit consisting of 0.18-µm CMOS standard logic cells. Concurrently, the magnetic field variation on a printed circuit board (PCB) due to the power supply current of the test circuit is measured by an off-chip magnetic probing technique. An equivalent circuit model that unifies the on- and off-chip impedance networks of the entire test setup for EMI analysis is used to calculate the on-chip voltage-mode power supply noise from the off-chip magnetic field measurements. We have confirmed excellent consistency in the frequency components of the power supply noise up to 300 MHz between those derived by on-chip direct sensing and by the off-chip magnetic probing technique. These results not only validate the state-of-the-art EMI analysis methodology but also promise its connectivity with on-chip power supply integrity analysis at the integrated circuit level, for the first time in both technical fields.

  • Improved Turbo Equalization Schemes Robust to SNR Estimation Errors

    Qiang LI  Wai Ho MOW  Zhongpei ZHANG  Shaoqian LI  

     
    PAPER-Wireless Communication Technologies

    Vol: E90-B No:6   Page(s): 1454-1459

    An improved Max-Log-MAP (MLM) turbo equalization algorithm, called Scaled Max-Log-MAP (SMLM) iterative equalization, is presented. Simulations show that the scheme can dramatically outperform the MLM while being insensitive to SNR mismatch. Unfortunately, its performance is still much worse than that of the Log-MAP (LM) with exact SNR over high-loss channels. Accordingly, we also propose a new SNR estimation algorithm based on the reliability values of the soft extrinsic output of the SMLM decoder. Using the new scheme, we obtain performance close to that of the LM with ideal knowledge of the SNR.
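    The core SMLM modification, scaling the Max-Log-MAP extrinsic information before it is exchanged, can be sketched as one turbo iteration; the SISO equalizer/decoder, the interleaver functions, and the scale factor 0.7 are placeholders.

      def turbo_iteration(rx, la_dec, equalizer_mlm, decoder_mlm,
                          interleave, deinterleave, s=0.7):
          # Equalizer works on channel-ordered LLRs, decoder on code-ordered LLRs
          l_eq = equalizer_mlm(rx, a_priori=interleave(la_dec))
          le_eq = s * (l_eq - interleave(la_dec))        # scaled extrinsic to the decoder
          l_dec = decoder_mlm(a_priori=deinterleave(le_eq))
          le_dec = s * (l_dec - deinterleave(le_eq))     # scaled extrinsic fed back
          return le_dec, l_dec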

  • A Modified Gaussian Filter for the Arbitrary Scale LP Enlargement Method

    Shuai YUAN  Akira TAGUCHI  Masahide ABE  Masayuki KAWAMATA  

     
    LETTER-Image

    Vol: E90-A No:5   Page(s): 1115-1120

    In this paper, we use a modified Gaussian filter to improve the enlargement accuracy of the arbitrary-scale LP enlargement method, which is based on the Laplacian pyramid representation (the so-called "LP method"). The parameters of the proposed algorithm are derived through theoretical analysis and experimental estimation. Experimental results show that the proposed modified Gaussian filter is effective for the arbitrary-scale LP enlargement method.
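    The role of the Gaussian filter in LP-style enlargement can be sketched as a generic 2x Laplacian-pyramid enlargement (this is not the paper's modified filter or its tuned parameters):

      import cv2
      import numpy as np

      def lp_enlarge_2x(img, sigma=1.0):
          low = cv2.GaussianBlur(img.astype(np.float32), (5, 5), sigma)
          detail = img.astype(np.float32) - low          # Laplacian (detail) layer
          up = cv2.resize(img.astype(np.float32), None, fx=2, fy=2,
                          interpolation=cv2.INTER_LINEAR)
          up_detail = cv2.resize(detail, None, fx=2, fy=2,
                                 interpolation=cv2.INTER_LINEAR)
          return np.clip(up + up_detail, 0, 255).astype(np.uint8)   # re-inject detail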

  • An Embedding Scheme for Binary and Grayscale Watermarks by Spectrum Spreading and Its Performance Analysis

    Ming-Chiang CHENG  Kuen-Tsair LAY  

     
    PAPER-Image

    Vol: E90-A No:3   Page(s): 670-681

    Digital watermarking is a technique that aims at hiding a message signal in a multimedia signal for copyright claims, authentication, device control, or broadcast monitoring. In this paper, we focus on embedding watermarks into still images, where the watermarks themselves can be binary sequences or grayscale images. We propose to scramble the watermark bits with pseudo-noise (PN) or orthogonal codes before they are embedded into an image. We also incorporate error correction coding (ECC) into the watermarking scheme, anticipating a reduction of the watermark bit error rate (WBER). Due to the similarity between PN/orthogonal-coded watermarking and spread spectrum communication, it is natural that, following derivations similar to those for the data BER in digital communications, we derive explicit quantitative relationships regarding the tradeoff between the WBER, the watermark capacity (i.e. the number of watermark bits), and the distortion suffered by the original image, which is measured in terms of the embedded image's signal-to-noise ratio (abbreviated as ISNR). These quantitative relationships are compactly summarized into a so-called tradeoff triangle, which constitutes the major contribution of this paper. For the embedding of grayscale watermarks, an unequal error protection (UEP) scheme is proposed to provide different degrees of robustness for watermark bits of different degrees of significance. In this UEP scheme, optimal strength factors for embedding the different watermark bits are sought so that the mean squared error suffered by the extracted watermark, which is itself a grayscale image, is minimized while a specified ISNR is maintained.
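    A stripped-down sketch of the PN-spread, DCT-domain embedding and correlation detection the analysis refers to; block selection, zig-zag ordering, ECC, and the UEP strength optimization are omitted, and the coefficient range and strength factor are illustrative.

      import numpy as np
      from scipy.fft import dctn, idctn

      def embed_bit(block, bit, pn, alpha=2.0):
          # Spread one watermark bit (0/1) over a run of DCT coefficients
          c = dctn(block, norm='ortho')
          c.flat[20:20 + len(pn)] += alpha * (2 * bit - 1) * pn
          return idctn(c, norm='ortho')

      def detect_bit(block, pn):
          c = dctn(block, norm='ortho')
          return int(np.dot(c.flat[20:20 + len(pn)], pn) > 0)   # correlation receiver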
