Kwisung YOO Hoon LEE Gunhee HAN
The cable length in wired serial data communication is limited because the limited bandwidth of a long cable introduces ISI (Inter-Symbol Interference). A line equalizer can be used at the receiver to extend the channel bandwidth. This paper proposes a low-power, small-area analog adaptive line equalizer for 100-Mb/s operation on UTP (Unshielded Twisted Pair) cable up to 100 m. The proposed adaptive line equalizer is fabricated in a 0.35-µm CMOS process, consumes 19 mW, and occupies only 0.07 mm². Measurement results show that the prototype can operate at a data rate of 100 Mb/s on a 100-m cable and 155 Mb/s on a 50-m cable.
Hiroyuki TOMIYAMA Hiroaki TAKADA Nikil D. DUTT
Energy consumption has become one of the most critical constraints in the design of portable multimedia systems. For media applications, the address buses between the processor and data memory consume a considerable amount of energy due to their large capacitance and frequent accesses. This paper studies the impact of memory data organization on address bus energy. Our experiments show that address bus activity can be reduced by 50% by exploring memory data organization and encoding the address buses.
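As a concrete illustration of address-bus encoding (the abstract does not name a specific scheme; Gray coding is used here only as a classic example), the sketch below counts bus bit transitions for a sequential access pattern with and without Gray encoding:

```python
def transitions(addresses):
    """Count total bit flips on the bus across consecutive addresses."""
    return sum(bin(a ^ b).count("1") for a, b in zip(addresses, addresses[1:]))

def gray(n):
    """Binary-to-Gray conversion: consecutive integers differ in one Gray bit."""
    return n ^ (n >> 1)

seq = list(range(64))                      # 64 sequential data accesses
plain = transitions(seq)                   # raw binary addresses
coded = transitions([gray(a) for a in seq])  # Gray-encoded addresses
```

For sequential accesses, Gray encoding guarantees exactly one bit flip per access, roughly halving the switching activity of the raw binary bus in this example.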
Deepshikha GARG Fumiyuki ADACHI
In this paper, space-time transmit diversity (STTD) decoding combined with minimum mean square error (MMSE) equalization is presented for the MC-CDMA downlink and uplink in the presence of multiple receive antennas. The equalization weights that minimize the MSE for each subcarrier are derived. Computer simulations show that STTD decoding combined with MMSE equalization and Mr-antenna diversity reception using the derived weights provides the same diversity order as 2Mr-antenna receive diversity with MMSE equalization, albeit with a 3-dB performance penalty, and is always better than the no-diversity case. The uplink BER performance can also be improved with STTD, but an error floor remains. However, with two receive antennas in addition to two-antenna STTD, the BER floor can be reduced to around 10^-5 even for the uplink.
Estimating the fractional errors of a Synthetic Aperture Radar (SAR) processor before implementation is a critical part of the design process. The contribution of this paper is to identify the chief sources and types of these errors and to suggest a technique for estimating the overall fractional errors of a space-based SAR processor using the Range-Doppler Algorithm (RDA). Simulations are also performed on the Experimental SAR (E-SAR) processor to examine the practicability and efficiency of the technique; the results are discussed, and solutions for the problems encountered are recommended. The technique can thus be used to estimate the fractional errors of the space-based SAR processor using the RDA.
Guo-rui FENG Ling-ge JIANG Chen HE
According to cryptography, a watermarking system is secure as long as it satisfies Kerckhoffs' principle. In this letter, two novel preprocessing techniques, the encrypted orthogonal transformation and an improved version of it, are presented for the watermarking field; they can enhance the security of a watermarking scheme. Compared to discrete cosine transform watermarking algorithms, this method offers similar robustness but higher security.
Kenji TAKATSUKASA Shinya MATSUFUJI Yoshihiro TANADA
This paper formulates functions generating four kinds of binary sequence sets of length 2^n with a zero correlation zone, which have been discussed for approximately synchronized CDMA systems free of co-channel interference and multipath influence. They are logic functions of a binary vector of order n, expressed by EXOR and AND operations.
Shigueo NOMURA Keiji YAMANAKA Osamu KATAI Hiroshi KAWAKAMI
We present a novel adaptive method to improve the binarization quality of degraded word color images. The objective of this work is to solve a nonlinear problem concerning binarization quality, that is, to achieve edge enhancement and noise reduction in images. The digitized data used in this work were extracted automatically from real-world photos. The motion of objects relative to a static camera and bad environmental conditions caused serious quality problems in those images. Conventional methods, such as the nonlinear adaptive filter method proposed by Mo or Otsu's method, cannot produce satisfactory binarization results for these types of degraded images. Among other problems, we note mainly that contrast (between shapes and backgrounds) varies greatly within each degraded image due to non-uniform illumination. The proposed method is based on the automatic extraction of background information, such as the luminance distribution, to adaptively control the intensity levels without the need for any manual fine-tuning of parameters. Consequently, the new method can avoid noise and inappropriate shapes in the output binary images. Otsu's method is also applied for automatic threshold selection when classifying pixels into background and shape pixels. To demonstrate the efficiency and feasibility of the new adaptive method, we present results obtained by the binarization system. The results were satisfactory, and we conclude that they can serve as input data for further processing such as segmentation or extraction of characters. Furthermore, the method helps to increase the eventual efficiency of a recognition system for poor-quality word images, such as number-plate photos with non-uniform illumination and low contrast.
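As a reference point for the threshold-selection step mentioned above, Otsu's method can be sketched as follows. This is only the global-threshold component; the paper's adaptive background-extraction stage is not reproduced here, and the toy pixel data are illustrative:

```python
def otsu_threshold(pixels):
    """Pick the threshold maximizing between-class variance (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = sum(hist[:t])          # background class weight
        w1 = total - w0             # shape class weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, 256)) / w1
        var_between = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# a toy "word image" row: dark background pixels and bright shape pixels
pixels = [10, 12, 11, 9, 13, 200, 210, 205, 198]
t = otsu_threshold(pixels)
binary = [1 if p >= t else 0 for p in pixels]
```

The method is fully automatic: the threshold falls between the two intensity clusters with no manually tuned parameter, which is why it serves as the final classification step after the adaptive preprocessing.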
Yasuyuki SUGAYA Kenichi KANATANI
Feature point tracking over a video sequence fails when the points go out of the field of view or behind other objects. In this paper, we extend such interrupted tracking by imposing the constraint that under the affine camera model all feature trajectories should be in an affine space. Our method consists of iterations for optimally extending the trajectories and for optimally estimating the affine space, coupled with an outlier removal process. Using real video images, we demonstrate that our method can restore a sufficient number of trajectories for detailed 3-D reconstruction.
Hiroshi TAKAHASHI Shigeshi ABIKO Kenichi TASHIRO Kaoru AWAKA Yutaka TOYONOH Rimon IKENO Shigetoshi MURAMATSU Yasumasa IKEZAKI Tsuyoshi TANAKA Akihiro TAKEGAMA Hiroshi KIMIZUKA Hidehiko NITTA Miki KOJIMA Masaharu SUZUKI James Lowell LARIMER
A new high-speed, low-power digital signal processor (DSP) core, the C55x, was developed for next-generation applications such as 3G cellular phones, PDAs, digital still cameras (DSCs), audio, video, embedded modems, DVD, and so on. To support such MIPS-hungry applications, the instruction-fetch packet size was increased from 16 bits to 32 bits compared with the world's most popular C54x DSP core, while maintaining complete software compatibility with legacy DSP code. An on-chip instruction buffer queue (IBQ) automatically unpacks the packets and issues multiple instructions in parallel for efficient use of circuit resources. The efficiency of this parallelism has been further improved by additional hardware such as a second 17 × 17-bit MAC, a 16-bit ALU, and three temporary registers that can be used for simple computations. Four 40-bit accumulators make it possible to execute more operations per cycle with dramatically reduced overall power consumption. This new architecture achieves twice the instructions-per-cycle (IPC) efficiency of the previous DSP core on typical applications at the same clock frequency. The new DSP core was designed for TI's two 130-nm technologies, one with high VT for low-leakage, middle-performance operation at 1.5 V, and the other with low VT for high-performance, low-VDD operation at 1.2 V, to provide the best choice for any application from a single layout database. With the low-leakage process, the DSP core operates at over 200 MHz with 188 µA/MHz active power (at 75% dual MAC + 25% ADD) and less than 1.63 µA standby current. The high-performance process provides 300-MHz operation with 169 µA/MHz active power and less than 680 µA standby current. The new core was designed using a semi-custom approach (ASIC + custom library) with a 5-level Cu metal system and the low-k dielectric fluorosilicate glass (FSG); the core contains about one million transistors.
The overall balance of its power, performance, area, and leakage current (PPAL) makes it well suited to most next-generation applications. In this paper, we discuss the features of the new DSP core, including circuit design techniques for high speed and low power, and present an example product.
Kritsada SRIPHAEW Thanaruk THEERAMUNKONG
Mining generalized frequent patterns for generalized association rules is an important process in knowledge discovery systems. In this paper, we propose a new approach for efficiently mining all frequent patterns using a novel set enumeration algorithm with two types of constraints on generalized itemset relationships, called the subset-superset and ancestor-descendant constraints. We also show how to mine the smaller set of generalized closed frequent itemsets instead of the large set of conventional generalized frequent itemsets. To this end, we develop two algorithms, SET and cSET, for mining generalized frequent itemsets and generalized closed frequent itemsets, respectively. In a number of experiments, the proposed algorithms outperform previous well-known algorithms in both computational time and memory utilization. Furthermore, experiments with real datasets indicate that mining generalized closed frequent itemsets offers greater computational savings, since the number of generalized closed frequent itemsets is much smaller than the number of generalized frequent itemsets.
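The closedness argument can be illustrated with a brute-force toy example. This is only a definition-level sketch, not the SET/cSET set-enumeration algorithms, and the transactions below are made up:

```python
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """Brute-force enumeration of frequent itemsets (reference only)."""
    items = sorted({i for t in transactions for i in t})
    freq = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            sup = sum(1 for t in transactions if set(cand) <= t)
            if sup >= minsup:
                freq[cand] = sup
    return freq

def closed_itemsets(freq):
    """An itemset is closed iff no proper superset has the same support."""
    return {s: v for s, v in freq.items()
            if not any(set(s) < set(o) and v == w for o, w in freq.items())}

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a"}]
freq = frequent_itemsets(transactions, minsup=2)
closed = closed_itemsets(freq)
```

Even on this tiny dataset the closed set is strictly smaller than the frequent set while losing no support information, which is the source of the computational savings reported in the abstract.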
In this paper we study the classical firing squad synchronization problem on a model of fault-tolerant cellular automata that may contain defective cells. Several fault-tolerant, time-efficient synchronization algorithms are developed based on a simple freezing-thawing technique. It is shown that, under some constraints on the distribution of defective cells, any cellular array of length n with p defective cell segments can be synchronized in 2n - 2 + p steps.
Takayuki KAWASHIMA Yoshihiro SASAKI Kenta MIURA Naoki HASHIMOTO Akiyoshi BABA Hiroyuki OHKUBO Yasuo OHTERA Takashi SATO Wataru ISHIKAWA Tsutomu AOYAMA Shojiro KAWAKAMI
Autocloning is a method for fabricating multi-dimensional structures by stacking corrugated films while preserving their shape. Its productivity, robustness against perturbation, and flexibility regarding materials and lattice types make autocloning suitable for mass production of photonic crystals, and we therefore aim to industrialize autocloned photonic crystals. We have recently begun marketing polarization beam splitters for optical telecommunication based on 2D photonic crystals, and are developing devices that use these splitters, such as isolators and beam combiners. The applications of the splitters are also extending to multi-section devices and to visible-range devices. Meanwhile, development of optical integrated circuits utilizing autocloned photonic crystals is in progress; low-loss propagation and several functions have been demonstrated.
Akiko GOMYO Jun USHIDA Masayuki SHIRANE Masatoshi TOKUSHIMA Hirohito YAMADA
Low-loss optical coupling structures between photonic crystal waveguides and channel waveguides were investigated. It was emphasized that impedance matching of the guided modes of those waveguides, as well as field-profile matching, is essential to achieving low-loss optical coupling. We developed an impedance matching theory for Bloch waves and applied it to the design of low-loss optical coupling structures. It was demonstrated that the optical coupling loss between a photonic crystal waveguide and a Si-channel waveguide was reduced to as low as 0.7 dB by introducing an interface structure for impedance matching between the two waveguides.
Koji YAMADA Tai TSUCHIZAWA Toshifumi WATANABE Jun-ichi TAKAHASHI Emi TAMECHIKA Mitsutoshi TAKAHASHI Shingo UCHIYAMA Hiroshi FUKUDA Tetsufumi SHOJI Sei-ichi ITABASHI Hirofumi MORITA
A silicon (Si) wire waveguiding system fabricated on silicon-on-insulator (SOI) substrates is one of the most promising platforms for highly integrated, ultra-small optical circuits, or microphotonics devices. The cross-section of the waveguide's core is about 300 nm square, and the minimum bending radius is a few micrometers. Recently, crucial problems involving propagation loss and coupling with external circuits have been resolved, and functional devices using silicon wire waveguides are now being tested. In this paper, we describe our recent progress on, and the future prospects of, microphotonics devices based on the silicon-wire waveguiding system.
Hyeongseok YU Byung Wook KIM Jun-Dong CHO
In this paper, an area-efficient VLSI architecture for a decision feedback equalizer accommodating 64/256-QAM modulation is derived. Thanks to its regular structure, the architecture can be implemented efficiently in VLSI using EDA tools. The method employs a time-multiplexed design scheme, so-called folding, which executes multiple operations on a single functional unit. In addition, we define a new folding set by grouping adjacent filter taps whose data transfers share the same processing sequence between blocks, and we perform internal data-bit optimization. In this way, computational complexity is reduced by performance optimization, and silicon area is reduced through operator sharing. Moreover, through a comparison of the performance and convergence time of various LMS coefficient-updating algorithms (LMS, data-signed LMS, error-signed LMS, and sign-sign LMS), we identify the LMS scheme best suited to low-complexity, high-performance, high-order (64 and 256) QAM applications for the presented fractionally spaced decision feedback equalizer. We simulated the proposed design scheme using Synopsys and SPW.
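Among the coefficient-update rules compared, sign-sign LMS is the cheapest in hardware because every multiplication in the update reduces to a sign test and a conditional add/subtract, at the cost of slower, noisier convergence. A minimal sketch of the two extremes on a toy 2-tap identification problem (illustrative only; the channel, step sizes, and loop below are assumptions, not the paper's equalizer):

```python
import random

def sign(v):
    return (v > 0) - (v < 0)

def lms_update(w, x, e, mu):
    """Standard LMS: full multiplications, w_i += mu * e * x_i."""
    return [wi + mu * e * xi for wi, xi in zip(w, x)]

def sign_sign_lms_update(w, x, e, mu):
    """Sign-sign LMS: only the signs of error and data are used, so each
    tap update is just an add or subtract of the step size."""
    return [wi + mu * sign(e) * sign(xi) for wi, xi in zip(w, x)]

def identify(update, mu, steps=4000, seed=1):
    """Toy noise-free identification of a 2-tap channel h."""
    random.seed(seed)
    h = [0.5, -0.25]                         # unknown channel (assumed)
    w = [0.0, 0.0]
    x = [0.0, 0.0]
    for _ in range(steps):
        x = [random.uniform(-1, 1)] + x[:1]  # shift in a new input sample
        d = h[0] * x[0] + h[1] * x[1]        # desired output
        e = d - (w[0] * x[0] + w[1] * x[1])  # a-priori error
        w = update(w, x, e, mu)
    return w

w_lms = identify(lms_update, mu=0.1)
w_ss = identify(sign_sign_lms_update, mu=0.002)
```

Both rules converge toward the true taps, but sign-sign hovers in a wider band around them, illustrating the complexity/performance trade-off the paper evaluates across the LMS variants.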
Ahmed SWILEM Kousuke IMAMURA Hideo HASHIMOTO
In this paper, we propose two fast codebook generation algorithms for entropy-constrained vector quantization. The first algorithm uses an angular constraint to reduce the search area and accelerate the search process in codebook design; it employs the projection angles of the vectors onto a reference line. The second algorithm uses a suitable hyperplane to partition the codebook and image data. These algorithms significantly accelerate the codebook design process. Experimental results on image block data show that the new algorithms perform better than previously known methods.
Aun HAIDER Harsha SIRISENA Krzysztof PAWLIKOWSKI
Using the Proportional-Integral-Derivative (PID) principle of classical feedback control theory, this paper develops two general congestion control algorithms for routers implementing Active Queue Management (AQM) while supporting TCP/IP traffic flows. The general designs of the non-interacting (N-PID) and interacting (I-PID) congestion control algorithms are tailored to practical network scenarios using the Ziegler-Nichols guidelines for tuning such controllers. Discrete-event simulations in ns are performed to evaluate the performance of an existing F-PID algorithm and the new N-PID and I-PID algorithms. The performance of N-PID and I-PID is compared mutually as well as with F-PID, revealing that N-PID and I-PID have a higher speed of response but lower stability margins than F-PID. Accurate tracking of the target queue size by the PID congestion control algorithms, together with high link utilization, low loss rate, and low queuing delay, is also demonstrated.
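The non-interacting (parallel-form) PID law underlying such AQM controllers can be sketched on a toy single-queue model. The gains, offered load, and queue dynamics below are illustrative assumptions, not the paper's ns scenarios or Ziegler-Nichols-tuned values:

```python
class PIDController:
    """Non-interacting (parallel) PID: u = Kp*e + Ki*sum(e) + Kd*(e - e_prev)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt=1.0):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy router queue: 120 pkts/step offered, link drains 100 pkts/step;
# the controller sets the drop probability to hold the queue at 50 packets
target, q = 50.0, 0.0
pid = PIDController(kp=0.002, ki=0.0005, kd=0.001)
for _ in range(500):
    p = min(max(pid.update(q - target), 0.0), 1.0)  # clamp drop probability
    q = max(q + 120.0 * (1.0 - p) - 100.0, 0.0)     # queue evolution
```

The integral term drives the steady-state queue error to zero (here the drop probability settles near 1/6, the value that matches arrivals to the link rate), which is the "accurate following of the target queue size" property the paper demonstrates.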
Satoshi KAWATA Satoru SHOJI Hong-Bo SUN
Lasers have become established as a unique nanoprocessing tool due to their intrinsic three-dimensional (3D) fabrication capability and excellent compatibility with various functional materials. Here we report two methods that have proved particularly promising for tailoring 3D photonic crystals (PhCs): pinpoint writing via two-photon photopolymerization and multibeam interferential patterning. In two-photon fabrication, a finely quantified pixel-writing scheme and a method of pre-compensating for polymerization-induced shrinkage enable high-reproducibility, high-fidelity prototyping; well-defined diamond-lattice PhCs demonstrate the arbitrary 3D processing capability of the two-photon technology. In the interference patterning method, we proposed and utilized a two-step exposure approach, which not only increases the number of achievable lattice types but also expands the freedom in tuning the lattice constant.
This paper proposes new frame synchronizers that can achieve frame synchronization in the presence of a frequency offset. In particular, a maximum likelihood (ML) algorithm for joint frame synchronization and frequency estimation is developed for additive white Gaussian noise (AWGN) channels, and the result is then extended to frequency-selective channels. Computer simulations demonstrate that the proposed schemes outperform existing methods when a frequency offset exists.
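One well-known way to make a frame-sync metric robust to an unknown frequency offset, sketched below, is differential correlation: products of consecutive samples turn the offset into a constant phasor, which the magnitude operation then removes. This is an illustrative stand-in for offset-robust synchronization, not the paper's ML metric, and the sync word and burst are made up:

```python
import cmath

def diff_corr_sync(r, s):
    """Offset-robust frame sync: correlate the differentially processed
    received samples r[k]*conj(r[k-1]) against the differential sync word.
    The unknown offset contributes only a constant phasor, so the metric
    magnitude peaks at the true sync-word position."""
    L = len(s)
    ds = [s[k] * s[k - 1].conjugate() for k in range(1, L)]
    best_n, best_m = 0, -1.0
    for n in range(len(r) - L + 1):
        dr = [r[n + k] * r[n + k - 1].conjugate() for k in range(1, L)]
        m = abs(sum(a * b.conjugate() for a, b in zip(dr, ds)))
        if m > best_m:
            best_m, best_n = m, n
    return best_n

# BPSK burst with a sync word starting at sample 8, hit by a frequency offset
sync = [1, 1, -1, 1, -1, -1, 1, -1]
data = [1, -1, 1, 1, -1, 1, -1, -1] + sync + [1, -1, 1, -1]
offset = 0.3  # rad/sample, unknown to the receiver
r = [d * cmath.exp(1j * offset * n) for n, d in enumerate(data)]
pos = diff_corr_sync(r, [complex(b) for b in sync])
```

A plain coherent correlator would smear under this offset, whereas the differential metric still locates the sync word; the paper's ML formulation handles the same impairment jointly with frequency estimation.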
A novel modified midtread quantizer is proposed for number-controlled oscillator frequency quantization in digital phase-locked loops (DPLLs). We show that DPLLs employing the proposed quantizer provide significantly improved cycle-slip performance compared to those employing conventional midtread or midrise quantizers, especially when the number of quantization bits is small and the magnitude of the input signal frequency, normalized by the quantization interval, is less than 0.5.
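The proposed modified quantizer itself is not reproduced here, but the two conventional quantizers it is compared against can be sketched directly (delta is the quantization interval; note that midtread reproduces zero exactly, while midrise places zero on a decision boundary):

```python
import math

def midtread(x, delta):
    """Midtread quantizer: zero is a reproduction level (round to nearest)."""
    return delta * round(x / delta)

def midrise(x, delta):
    """Midrise quantizer: zero lies on a decision boundary, so the output
    is always a half-interval away from a level boundary."""
    return delta * (math.floor(x / delta) + 0.5)
```

For small normalized inputs (|x| < delta/2) midtread outputs exactly zero while midrise is forced to ±delta/2, which is why the choice of quantizer shape matters most when the normalized input frequency magnitude is below 0.5.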