We analyze the Lagarias-Odlyzko low-density attack precisely, and show that it can be applied to the Chor-Rivest and the Okamoto-Tanaka-Uchiyama cryptosystems, which have been considered secure against the low-density attack. According to our analysis, these schemes turn out to be no longer secure against the low-density attack.
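For reference (these are the standard definitions, not results of this paper): the density of a subset-sum instance with weights a_1, ..., a_n is

\[
d \;=\; \frac{n}{\log_2 \max_i a_i},
\]

and the Lagarias-Odlyzko lattice reduction recovers the hidden subset for almost all instances when d < 0.6463, a bound later improved to d < 0.9408 by Coster et al.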
Yasuhito ASANO Tsuyoshi ITO Hiroshi IMAI Masashi TOYODA Masaru KITSUREGAWA
Compact encodings of the web graph are required in order to keep the graph in main memory and to perform operations on it efficiently. In this paper, we propose a new compact encoding of the web graph. It is 10% more compact than Link2 used in the Connectivity Server of Altavista, 20% more compact than the encoding proposed by Guillaume et al. in 2002, and comparable to the latter in terms of extraction time.
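As background only (the abstract does not spell out the proposed encoding, so this is a generic sketch, not the authors' scheme): web-graph adjacency lists are commonly compressed by sorting the destinations, storing gaps between consecutive destinations, and packing each gap into variable-length bytes, which exploits the strong locality of web links.

```python
def encode_adjacency_list(dests):
    """Gap-encode a sorted adjacency list, then pack each gap as a varint
    (7 data bits per byte, high bit = continuation). A generic sketch,
    not the encoding proposed in the paper."""
    out = bytearray()
    prev = 0
    for d in sorted(dests):
        gap = d - prev          # non-negative because the list is sorted
        prev = d
        while True:
            byte = gap & 0x7F
            gap >>= 7
            out.append(byte | (0x80 if gap else 0))
            if not gap:
                break
    return bytes(out)

# Example: neighbours of some page; locality keeps the gaps (and bytes) small.
print(encode_adjacency_list([1000, 1001, 1003, 1050]).hex())
```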
Hiroyuki TOMIYAMA Hiroaki TAKADA Nikil D. DUTT
Energy consumption has become one of the most critical constraints in the design of portable multimedia systems. For media applications, the address buses between the processor and data memory consume a considerable amount of energy due to their large capacitance and frequent accesses. This paper studies the impact of memory data organization on address bus energy. Our experiments show that address bus activity can be reduced by 50% by exploring memory data organization together with address bus encoding.
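As a rough illustration of why data placement matters (a toy model of ours, not the paper's experimental setup): the switching activity of the address bus is simply the Hamming distance between consecutive addresses, so interleaving two arrays that are accessed alternately causes far fewer transitions than placing them far apart in the address space.

```python
def bus_transitions(addresses):
    """Total number of bit flips on the address bus for an access sequence."""
    flips = 0
    for a, b in zip(addresses, addresses[1:]):
        flips += bin(a ^ b).count("1")
    return flips

N = 256
# Layout 1: arrays A and B placed 64 KB apart, accessed alternately A[i], B[i].
far = [addr for i in range(N) for addr in (0x0000 + 4 * i, 0x10000 + 4 * i)]
# Layout 2: A and B interleaved element by element in memory.
interleaved = [addr for i in range(N) for addr in (8 * i, 8 * i + 4)]

print(bus_transitions(far), bus_transitions(interleaved))
```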
Zhibin PAN Koji KOTANI Tadahiro OHMI
Vector quantization (VQ) features a very heavy encoding process. In previous work, an efficient encoding algorithm using a mean pyramid was developed. To improve it further, a fast search algorithm is proposed in this letter. Specifically, four major modifications are made. First, the original codebook is rearranged along the sorted real sums to reduce the search scope, and the lower and upper bounds are then updated dynamically. Second, the sum is used instead of the mean, which includes round-off error, so as to avoid a possible mismatched winner. Third, a sum pyramid is constructed by 2-pixel merging rather than 4-pixel merging so as to generate more in-between levels. Fourth, the Cauchy-Schwarz inequality is introduced to bridge the Euclidean and Manhattan distances so that the difference check between two vectors can be pre-conducted using only the much lighter Manhattan distance computation. Experimental results show that the proposed algorithm is more search-efficient.
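A minimal sketch of the underlying rejection idea, under simplifications of ours: by Cauchy-Schwarz, the difference of vector sums bounds the Euclidean distance, so codewords whose sums differ too much from the input's sum can be discarded without computing the full distance. The letter's algorithm additionally uses sum pyramids and a Manhattan-distance pre-check, which are omitted here.

```python
import numpy as np

def nearest_codeword(x, codebook):
    """Full-search-equivalent VQ encoding with a cheap sum-based rejection test.
    By Cauchy-Schwarz, (sum(x) - sum(c))^2 <= k * ||x - c||^2 for k-dim vectors,
    so a codeword whose sum differs too much cannot beat the current best."""
    k = x.shape[0]
    sums = codebook.sum(axis=1)
    sx = x.sum()
    order = np.argsort(np.abs(sums - sx))   # try codewords with the closest sums first
    best_i, best_d2 = -1, np.inf
    for i in order:
        if (sums[i] - sx) ** 2 >= k * best_d2:
            break                           # sums are sorted, so all later ones fail too
        d2 = np.sum((x - codebook[i]) ** 2)
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i

rng = np.random.default_rng(0)
codebook = rng.random((256, 16))
x = rng.random(16)
assert nearest_codeword(x, codebook) == np.argmin(((codebook - x) ** 2).sum(axis=1))
```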
This paper presents a novel wavelet compression technique to increase the compression of images. Based on the zerotree entropy coding method, the technique initially uses only two symbols (significant and zerotree) to compress the image data at each level. In addition, a sign bit is used for newly significant coefficients to indicate whether they are positive or negative. In contrast to the isolated-zero symbols used in conventional zerotree algorithms, the proposed algorithm changes them to significant coefficients and records their locations; they are then treated just like other significant coefficients. This decreases the number of symbols and hence the number of bits needed to represent them. Finally, the algorithm outputs the isolated-zero coordinates, which are used to restore the original values during reconstruction. A noticeably high compression ratio is achieved for most images without degrading image quality.
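A toy sketch of the kind of two-symbol significance pass the abstract describes (per-coefficient only; the actual coder works on zerotrees of wavelet coefficients across levels, which is omitted here):

```python
def significance_pass(coeffs, threshold):
    """Emit one of two symbols per coefficient: 'S' plus a sign bit if its
    magnitude reaches the threshold, otherwise 'Z'. A toy per-coefficient
    version of the two-symbol pass; the tree structure is not modelled."""
    symbols = []
    for c in coeffs:
        if abs(c) >= threshold:
            symbols.append(("S", 0 if c >= 0 else 1))   # sign bit for a newly significant coeff
        else:
            symbols.append(("Z", None))
    return symbols

print(significance_pass([34, -3, 0, -57, 12, 7], threshold=16))
```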
Satoshi KOMATSU Masahiro FUJITA
The power dissipated on off-chip buses has become a significant part of the overall power dissipation in microprocessor-based digital systems. This paper presents irredundant address bus encoding methods that reduce signal transitions on instruction address buses by using adaptive codebook methods. These methods are based on the temporal and spatial locality of instruction addresses. Since applications tend to JUMP/BRANCH to limited sets of addresses, the proposed encoding methods assign the codes with the least signal transitions to the addresses of past JUMP/BRANCH operations. In addition, being irredundant, our methods are easily applicable to conventional digital systems. The reduced signal transitions on the instruction address buses result in a reduction of the total power dissipation of digital systems. Experimental results show that our methods can reduce signal transitions by an average of 88%.
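A rough sketch of the adaptive-codebook idea under simplifying assumptions of ours (the paper's concrete code assignment, codebook update policy, and irredundant decoder-side disambiguation are not reproduced): recently seen branch targets are kept in a small table, a hit is driven onto the bus as a low-transition code, and a miss sends the raw address and updates the table.

```python
def encode_trace(addresses, codebook_size=16):
    """Toy model of an adaptive-codebook bus encoder: addresses found in a small
    LRU codebook of recent branch targets are sent as a one-transition code
    (previous bus value with a single line toggled); misses drive the raw address
    and enter the codebook. How the decoder tells the two cases apart is omitted."""
    codebook = []            # most recently used entry kept at the end
    bus_prev, transitions = 0, 0
    for a in addresses:
        if a in codebook:
            idx = codebook.index(a)
            word = bus_prev ^ (1 << idx)          # hit: toggle a single bus line
            codebook.remove(a)
        else:
            word = a                               # miss: drive the raw address
            if len(codebook) == codebook_size:
                codebook.pop(0)
        codebook.append(a)
        transitions += bin(bus_prev ^ word).count("1")
        bus_prev = word
    return transitions

trace = [0x4000, 0x4004, 0x4008, 0x1F00, 0x4010, 0x1F00, 0x4018, 0x1F00]
print(encode_trace(trace))
```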
This paper presents a novel digit-level algorithm for motion estimation (ME) and its hardware implementations. It uses most-significant-digit-first (MSD-first) processing and on-line arithmetic ME components. A dedicated array architecture is also proposed for applications requiring high-throughput ME. Various fast search algorithms have been presented in the literature to reduce complexity, but they sacrifice motion vector (MV) quality. Our MSD-first ME decomposes the summation of absolute differences (SAD) and the comparison operations to the digit level, processing the MSD plane first. The comparisons are interleaved into the SADs to distinguish the MV as early as possible. The algorithm precisely identifies impossible candidates and removes their remaining operations, saving 47.4% to 64.3% of the SAD computations in full-search block matching (FSBM) ME. In the past, the high implementation cost of the redundant number system prevented the practical use of on-line arithmetic, and the removal of redundant SADs results in an irregular data flow in system-level integration. In this paper, we propose novel architecture designs to solve these problems. Moreover, the architecture requires only one memory access per pixel to lower memory bandwidth, by means of extensive data parallelism and a particular memory addressing, while keeping the controller simple. A 4 × 4 array processor is implemented with a 0.35 µm 1P4M CMOS cell library, achieving a 2.84 ns cycle time with 1510 gates. It can support 83 M FSBM operations per second. After normalization, our implementation supports 2.67 times the SAD operations per unit area (estimated by gate count) of conventional two's-complement designs. MSD-first ME can also be combined with other ME algorithms to improve their performance.
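To make the elimination idea concrete, here is a conventional row-level partial-SAD early-termination sketch (not the digit-level MSD-first decomposition of the paper): a candidate is abandoned as soon as its partially accumulated SAD already exceeds the best SAD found so far, which is the same principle of removing the remaining operations of impossible candidates.

```python
import numpy as np

def full_search_me(cur_block, ref, block_xy, search_range=8):
    """Full-search block matching with partial-SAD early termination: the SAD of a
    candidate is accumulated row by row and the candidate is dropped as soon as
    the partial sum exceeds the best SAD so far."""
    bx, by = block_xy
    N = cur_block.shape[0]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + N > ref.shape[0] or x + N > ref.shape[1]:
                continue
            sad = 0
            for r in range(N):                    # accumulate one row at a time
                sad += np.abs(cur_block[r].astype(int) - ref[y + r, x:x + N].astype(int)).sum()
                if sad >= best_sad:
                    break                         # impossible candidate: skip the remaining rows
            else:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```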
Koji INOUE Vasily G. MOSHNYAGA Kazuaki MURAKAMI
In this paper, we propose an instruction encoding scheme to reduce the power consumption of instruction ROMs. The power consumption of an instruction ROM strongly depends on the switching activity of its bit-lines due to their large load capacitance. In our approach, the binary patterns to be assigned as op-codes are determined based on the frequency of instructions in order to reduce the number of bit-line discharges. Simulation results show that our approach can reduce bit-line switching by 40% compared with a conventional organization.
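A small sketch of the assignment idea under an assumption of ours about the ROM (that each stored '1' discharges a bit-line when the word is read; the paper's cost model may differ): op-codes are sorted by the number of discharging bits and handed out to instructions in descending order of execution frequency.

```python
def assign_opcodes(freq, width=8):
    """Assign op-codes so that frequently executed instructions get the codes
    with the fewest 1-bits, assuming each stored '1' discharges a bit-line."""
    codes = sorted(range(2 ** width), key=lambda c: bin(c).count("1"))
    ranked = sorted(freq, key=freq.get, reverse=True)          # most frequent first
    return {insn: codes[i] for i, insn in enumerate(ranked)}

# Hypothetical instruction-frequency profile for illustration only.
freq = {"lw": 3120, "addi": 2980, "beq": 1510, "sw": 1490, "jal": 430, "mul": 120}
encoding = assign_opcodes(freq)
discharges = sum(freq[i] * bin(c).count("1") for i, c in encoding.items())
print(encoding, discharges)
```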
Junho JEONG Jitae SHIN Doug Young SUH
In the past, techniques for enhancing the end-to-end quality of a networked application were studied separately in each individual layer. Examples of such techniques include error resilience/concealment methods in the application layer, FEC/ARQ in the transport layer, and Quality of Service (QoS) techniques in the network layer. However, an integrated approach that looks across all related layers had yet to be investigated. This paper proposes an approach that combines priority-aware video packetization, adaptively applied layered FEC, and QoS-controlled networks such as IntServ and DiffServ in order to provide efficient end-to-end quality for layered streaming video. The combination is more efficient in terms of a simple network pricing mechanism, that is, in obtaining the best end-to-end quality under a given total cost constraint. Our proposed approach in DiffServ, with video packet (VP) data-splitting and layered FEC, guarantees minimal service quality in a scalable and cost-effective manner without introducing resource reservation. For video, we also propose performance metrics such as the corrupted frame ratio and frozen frame ratio (CFR, FFR). These complement objective metrics such as PSNR and also serve as measures of subjective perception. Our approach, which combines related layers such as video coding, transport, and network, yields results that prove more cost-effective and practical than relying on network QoS support alone.
This paper describes an effective technique for coding QCIF video sequences based on a JPEG2000 codec. In the proposed method, multiple frames are combined into one large picture, which enables the images to be coded more efficiently. Image quality is further improved by combining the frames symmetrically. The video sequence is coded efficiently by converting the temporal correlation of the video sequence into spatial correlation. We demonstrate the effectiveness of the method by encoding QCIF video sequences with JPEG2000.
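A minimal sketch of combining four QCIF frames into one larger picture before feeding it to a JPEG2000 encoder; mirroring the right and bottom neighbours is one way to join the frames "symmetrically" so the tile boundaries stay smooth. The paper's actual arrangement may differ.

```python
import numpy as np

def combine_frames(f0, f1, f2, f3):
    """Tile four QCIF frames (176x144) into one 352x288 picture, mirroring the
    right and bottom neighbours so that adjacent tiles meet symmetrically."""
    top = np.hstack([f0, np.fliplr(f1)])
    bottom = np.hstack([np.flipud(f2), np.flipud(np.fliplr(f3))])
    return np.vstack([top, bottom])

frames = [np.random.randint(0, 256, (144, 176), dtype=np.uint8) for _ in range(4)]
big = combine_frames(*frames)
assert big.shape == (288, 352)   # the composite picture is then coded with JPEG2000
```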
Hyochang NAM Jong KIM Sung Je HONG Sunggu LEE
For checkpointing to be practical, it has to introduce low overhead for the targeted application. As a means of reducing this overhead, this paper proposes a probabilistic checkpointing method which uses block encoding to detect the memory area modified between two consecutive checkpoints. Since the proposed technique uses block encoding to detect the modified area, the possibility of aliasing exists in the encoded words. However, this paper shows that the aliasing probability is near zero when an 8-byte encoded word is used. The performance of the proposed technique is analyzed and measured experimentally. An analytic model that predicts the checkpointing overhead is first constructed, and this model is used to estimate the block size that gives the best performance for a given target program. In most cases, medium block sizes, i.e., 128 or 256 bytes, show the best performance. The proposed technique has also been implemented on Unix-based systems, and its performance has been measured in real environments. According to the experimental results, the proposed technique reduces the overhead by 11.7% in the best case and increases it by only 0.5% in the worst case in comparison with page-based incremental checkpointing.
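A minimal sketch of the block-encoding step, using an 8-byte hash as the encoded word (the paper's specific encoding function is not reproduced here): blocks whose encoded word matches the previous checkpoint are assumed unmodified, which is exactly where the small aliasing probability comes from.

```python
import hashlib

BLOCK_SIZE = 256   # bytes; the paper finds 128- or 256-byte blocks work best

def encode_blocks(memory):
    """Compute an 8-byte encoded word for each block of the memory image."""
    return [hashlib.blake2b(memory[i:i + BLOCK_SIZE], digest_size=8).digest()
            for i in range(0, len(memory), BLOCK_SIZE)]

def modified_blocks(prev_words, memory):
    """Blocks whose encoded word changed since the last checkpoint. A modified
    block that happens to hash to the same 8-byte word is missed (aliasing,
    probability on the order of 2**-64 per block)."""
    return [i for i, w in enumerate(encode_blocks(memory)) if w != prev_words[i]]
```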
Shin-ichi WAKABAYASHI Hitomi MORIYA Asako BABA Yoshinori TAKEUCHI
We have developed optical encoding devices for processing femtosecond pulses. These devices are based on spectral separation devices and light modulators with fiber gratings. Experiments were performed to encode a light pulse in the spectral domain, exploiting the fact that a femtosecond light pulse has a very broad spectrum. An input femtosecond light pulse is decomposed into a series of wavelength components, and each wavelength component, with a narrow spectrum of less than 1 nm width, is successfully extracted into a single-mode fiber. The light modulators corresponding to the wavelength components are assigned to the 1st bit, the 2nd bit, the 3rd bit, and so on.
An adaptive-rate communication system based on the N-MSK modulation technique is described. Two examples of the system, one using a 2-MSK adaptive modulation scheme and the other a 4-MSK adaptive modulation scheme, are presented and analysed in a slow fading channel whose attenuation obeys either a Rayleigh or a lognormal distribution. The proposed adaptive-rate communication system is able to track slow variations of the channel attenuation, and the average system throughput at a given BER is therefore increased.
In this paper, two efficient codebook search algorithms for vector quantization (VQ) are presented. The first fast search algorithm utilizes the energy compaction property of orthogonal transformation: in the transform domain, the algorithm uses geometrical relations between the input vector and the codewords to discard many unlikely codewords. The second algorithm, which transforms the principal components only, is proposed to alleviate the computational overhead and storage requirements; it utilizes the relation between the principal components and the input vector. Since both proposed algorithms reject only codewords that cannot be the nearest codeword, they produce the same output as the conventional full search algorithm. Simulation results confirm the effectiveness of the proposed algorithms.
In this paper, we address the problem of compatibility class encoding and propose a novel algorithm for finding a good functional decomposition, with application to LUT-based FPGA synthesis. By exploring the design space, we concentrate on extracting a set of components that can be merged into the minimum number of multiple-output CLBs or LUTs, such that the decomposition constructed from these components is also minimal. In particular, to explore more degrees of freedom, we introduce pliable encoding, which takes over when conventional rigid encoding fails to find a satisfactory decomposition. Experimental results on a large set of MCNC91 logic synthesis benchmarks show that our method is quite promising.
An encoding procedure that represents integers by binary vectors (codewords) in such a way that addition is replaced by the OR operation applied to these vectors is described. The codeword of the sum is constructed using the decoding algorithm. As a result, many of the transformations can be realized using parallel processing, and the method can be considered a competitor to existing computer arithmetic.
Shouhei NISHI Isamu SAEKI Hideki TODE Koso MURAKAMI
Increasing the capacity and intelligence of the next-generation Internet requires the application of optical technologies to switching nodes as well as transmission lines, and the development of advanced network architectures with end-to-end connection setup processing at the source node and autonomous routing at intermediate nodes. In this paper, we design a new CDM-based switching scheme and node configurations suitable for a photonic IP switching system, in which a set of multiple-encoding CDM codes is utilized as routing information. In addition, we evaluate the BER characteristics of the multiple-encoding CDM system by simulation. Under the condition that the chip duration of a certain code is a multiple of that of another code, the BER characteristics of the multiple-encoding system are shown to coincide with those of the single-encoding system using the longer code.
Kunihiro ASADA Makoto IKEDA Satoshi KOMATSU
This paper summarizes power reduction methods applicable to VLSI bus systems, in terms of signal swing reduction, effective capacitance reduction, and signal transition reduction, which have been studied in the authors' research group. For each method, the basic concept is briefly reviewed along with some examples of its application. A future perspective is also described in the conclusion.
Ken-ichi IWATA Masakatu MORII Tomohiko UYEMATSU Eiji OKAMOTO
Many Ziv-Lempel algorithms share a common property: slow encoding and fast decoding. This paper proposes a simple improved Ziv-Lempel algorithm that encodes a large amount of data quickly as well as compactly by using a multiple-processor system.
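To illustrate the encoding/decoding asymmetry the abstract starts from (a generic naive LZ77 sketch, not the improved algorithm of the paper): the encoder must search the sliding window for the longest match at every position, while the decoder merely copies previously output bytes.

```python
def lz77_encode(data, window=4096, min_match=3):
    """Naive LZ77: the O(window) match search per position is what makes
    encoding slow."""
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):        # the slow part: match search
            k = 0
            while i + k < len(data) - 1 and j + k < i and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            out.append((best_off, best_len, data[i + best_len]))   # (offset, length, next literal)
            i += best_len + 1
        else:
            out.append((0, 0, data[i]))                            # literal only
            i += 1
    return out

def lz77_decode(tokens):
    """Decoding is just copying: no searching, hence much faster."""
    buf = bytearray()
    for off, length, lit in tokens:
        for _ in range(length):
            buf.append(buf[-off])
        buf.append(lit)
    return bytes(buf)

data = b"abracadabra abracadabra abracadabra"
assert lz77_decode(lz77_encode(data)) == data
```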
Takatoshi SUGIYAMA Masahiro UMEHIRA
This paper proposes a novel FEC (forward error correction) scheme for high-speed wireless systems aimed at mobile computing applications. The proposed scheme combines inner nonredundant error correction with outer parallel-encoding random FEC for differentially detected QPSK (quadrature phase shift keying) signals. The paper first examines the error patterns after differential detection with nonredundant error correction and reveals that particular double symbol errors occur with relatively high probability. To mitigate the outer FEC performance degradation caused by these double symbol errors, the proposed scheme uses I- and Q-channel serial-to-parallel conversion on the transmitting side and parallel-to-serial conversion on the receiving side. As a result, simple codes can be used for the outer parallel-encoding random FEC without interleaving. Computer simulation results show that the proposed scheme employing one-bit-correcting BCH coding obtains a required Eb/No improvement of 1.2 dB at a Pe of 10^-5, compared to the same scheme with interleaving of equal memory size, in an AWGN environment. Moreover, in a Rician fading environment, where directional beam antennas are assumed to be used to mitigate the degradation due to severe multipath signals, an overall Eb/No improvement of 3.0 dB at a Pe of 10^-5 is achieved compared to simple differential detection, under the conditions of a delay spread of 5 ns, a carrier-to-multipath signal power ratio of 20 dB, and a Doppler frequency of 150 Hz in the 20 GHz band.