Haruo KOBAYASHI Kensuke KOBAYASHI Masanao MORIMURA Yoshitaka ONAYA Yuuich TAKAHASHI Kouhei ENOMOTO Hideyuki KOGURE
This paper presents an explicit analysis of the output error power in wideband sampling systems with finite aperture time in the presence of sampling jitter. Sampling jitter and finite aperture time limit the ability of wideband sampling systems to capture high-frequency signals with high precision. Sampling jitter skews data acquisition timing points, which causes large errors when acquiring high-frequency (large slew rate) signals. A finite sampling-window aperture acts as a low-pass filter and hence degrades the high-frequency performance of sampling systems. In this paper, we discuss these effects explicitly, not only in the case where either sampling jitter or finite aperture time exists but also in the case where they exist together, for any aperture window function (whose Fourier transform exists) and Gaussian-distributed sampling jitter. These results help the designer of wideband sampling data acquisition systems determine how much sampling jitter and aperture time are tolerable for a specified SNR. Some experimental measurement results as well as simulation results are provided as validation of the analytical results.
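The two effects discussed above have well-known closed forms for simple cases: jitter-limited SNR for a full-scale sinusoid, and the sinc-type rolloff of a rectangular aperture window. The sketch below (our own illustration with example numbers, not the paper's general derivation) evaluates both.

```python
import math

def jitter_snr_db(f_in, sigma_t):
    """SNR limit (dB) set by Gaussian sampling jitter of rms value sigma_t
    for a full-scale sinusoid at frequency f_in (standard result:
    SNR = -20*log10(2*pi*f_in*sigma_t))."""
    return -20.0 * math.log10(2.0 * math.pi * f_in * sigma_t)

def aperture_gain_db(f_in, tau):
    """Magnitude response (dB) of a rectangular aperture window of width tau:
    |sin(pi*f*tau)/(pi*f*tau)| -- the low-pass behavior noted above."""
    x = math.pi * f_in * tau
    return 20.0 * math.log10(abs(math.sin(x) / x)) if x else 0.0

# example: 1 GHz input, 1 ps rms jitter, 10 ps aperture
print(round(jitter_snr_db(1e9, 1e-12), 1))   # ~44.0 dB
print(aperture_gain_db(1e9, 10e-12))         # small negative attenuation
```

The example shows the familiar trade-off: at 1 GHz, picosecond-level jitter already caps the SNR near 44 dB, while a 10 ps aperture costs only a fraction of a dB.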
Kenji ITO Shuji TASAKA Yutaka ISHIBASHI
This paper experimentally studies the effect of packet scheduling algorithms at routers on media synchronization quality in live audio and video transmission. In the experiment, we deal with four packet scheduling algorithms: First-In First-Out, Priority Queueing, Class-Based Queueing and Weighted Fair Queueing. We assess both intra-stream and inter-stream synchronization quality, with and without media synchronization control. The paper clarifies the features of each algorithm from a media synchronization point of view. A comparison of the experimental results shows that Weighted Fair Queueing is the most efficient of the four packet scheduling algorithms for continuous media.
In recent years, several inverse solutions of magnetoencephalography (MEG) have been proposed. Among them, the multiple signal classification (MUSIC) method utilizes spatio-temporal information obtained from magnetic fields. The conventional MUSIC method is, however, sensitive to Gaussian noise and a sufficiently large signal-to-noise ratio (SNR) is required to estimate the number of sources and to specify the precise locations of electrical neural activities. In this paper, a new algorithm for solving the inverse problem using the fourth order MUSIC (FO-MUSIC) method is proposed. We apply it to the MEG source estimation problem. Numerical simulations demonstrate that the proposed FO-MUSIC algorithm is more robust against Gaussian noise than the conventional MUSIC algorithm.
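For readers unfamiliar with MUSIC itself, the sketch below shows the conventional second-order MUSIC pseudospectrum in a generic narrowband array setting (our own toy example with a uniform linear array; it is neither the FO-MUSIC variant proposed in the paper nor an MEG forward model).

```python
import numpy as np

def music_spectrum(R, A, n_sources):
    """Conventional MUSIC: project steering vectors A (one per grid point)
    onto the noise subspace of the sample covariance R; peaks of the
    reciprocal projection power locate the sources."""
    w, V = np.linalg.eigh(R)                   # ascending eigenvalues
    En = V[:, :R.shape[0] - n_sources]         # noise-subspace eigenvectors
    proj = En.conj().T @ A
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

M, theta_true = 8, 20.0                        # 8 sensors, source at 20 deg
a = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(th)))
rng = np.random.default_rng(0)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.1 * (rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200)))
X = np.outer(a(theta_true), s) + noise
R = X @ X.conj().T / 200                       # sample covariance
grid = np.arange(-90, 90.5, 0.5)
A = np.stack([a(th) for th in grid], axis=1)
P = music_spectrum(R, A, n_sources=1)
print(grid[np.argmax(P)])                      # peak near 20 degrees
```

The FO-MUSIC method of the paper replaces the second-order covariance with fourth-order cumulants, which vanish for Gaussian noise; that is the source of the robustness demonstrated in the simulations.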
Ichiro TAKASHIMA Riichi KAJIWARA Toshio IIJIMA
The concept of a "standardized brain" is familiar in modern functional neuro-imaging techniques including PET and fMRI, but it has never been adopted for optical imaging studies that deal with a regional cortical area rather than the whole brain. In this paper, we propose a "standardized barrel cortex" for rodents, and present a method for mapping optically detected neural activity onto the standard cortex. The standard cortex is defined as a set of simple cortical columns, which are modeled on the cytoarchitectonic patterns of cell aggregates in cortical layer IV of the barrel cortex. Referring to its underlying anatomical structure, the method warps the surface image of individual cortices to fit the standard cortex. The cortex is warped using a two-dimensional free-form deformation technique with direct manipulation. Since optical imaging provides a map of neural activity on the cortical surface, the warping consequently remaps it on the standard cortex. Data presented in this paper show that somatosensory evoked neural activity is successfully represented on the standardized cortex, suggesting that the combination of optical imaging with our method is a promising approach for investigating the functional architecture of the cortex.
Takayoshi TAKEHARA Hideki TODE Koso MURAKAMI
Realizing large-capacity, high-speed communications with guaranteed Quality of Service (QoS) in IP networks has recently become a pressing requirement. This paper focuses on Multi-Protocol Label Switching (MPLS), a technique for satisfying these requirements. In the future, congestion and faults on a Label Switched Path (LSP) are expected to seriously affect services, because various applications will be densely served over a large area. In MPLS, however, methods to solve these problems have not been established. Therefore, this study proposes a concrete traffic engineering method to avoid heavy congestion and, at the same time, endeavors to realize a fault-tolerant network through autonomous restoration, or self-healing.
Feng GAO Huijuan ZHAO Yukari TANIKAWA Yukio YAMADA
The Generalized Pulse Spectrum Technique (GPST) is a method for solving inverse problems of wave-propagation and diffusion-dominated phenomena, and it has therefore been widely applied to image reconstruction in time-resolved diffuse optical tomography. With a standard GPST for the simultaneous reconstruction of absorption and scattering coefficients, the products of the gradients of the Green's function and the photon-density flux, based on the photon-diffusion equation, are required to calculate the diffusion-related Jacobian matrix. The difficulties are twofold: the computation is time-consuming, and the field is singular near the source. The latter makes the algorithm severely insensitive to scattering changes deep inside tissue. To cope with these difficulties, we propose in this paper a modified GPST algorithm that involves only the Green's function and the photon-density flux themselves in the scattering-related matrix. Our simulated and experimental reconstructions show that the modified algorithm can significantly improve the quality of the scattering image and accelerate the reconstruction process, without evident degradation of the absorption image.
Sungjae KIM Hyungwoo LEE Juho KIM
We present an efficient heuristic algorithm to reduce glitch power dissipation in CMOS digital circuits. In this paper, gate sizing is classified into three types and buffer insertion into two types. The proposed algorithm combines the three types of gate sizing and the two types of buffer insertion into a single optimization process to maximize glitch reduction. The efficiency of our algorithm has been verified on the LGSynth91 benchmark circuits with a 0.5 µm standard cell library. Experimental results show an average of 69.98% glitch reduction and 28.69% power reduction, which are much better than the results of gate sizing and buffer insertion performed independently.
Shintaro HISATAKE Yoshihiro KUROKAWA Takahiro KAWAMOTO Wakao SASAKI
We propose a frequency stabilization system for laser diodes (LDs), in which the major parameters in the stabilization process can be controlled in response to the monitored frequency noise characteristics on a real-time basis. The performance of this system was tested by stabilizing a 35 mW visible LD. The center frequency of the LD was stabilized by negative electrical feedback based on the Pound-Drever-Hall technique. The linewidth of the LD was reduced by applying optical feedback from a resonant confocal Fabry-Perot (CFP) cavity. The controlling parameters, especially the gain levels and frequency responses of the negative electrical feedback loop, can be manipulated to remove instantaneous frequency noise by monitoring the power spectral density (PSD) of the frequency error signals on a real-time basis. The achieved PSD of the frequency noise of a sample LD stabilized by the present system was less than 1×10⁵ Hz²/Hz for Fourier frequencies < 10 MHz. The reduced linewidth was estimated to be narrower than 400 kHz. The achieved minimum square root of the Allan variance was 3.9×10⁻¹¹ at τ = 0.1 msec.
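The reported PSD and linewidth figures are mutually consistent under the standard relation for purely white frequency noise, where the Lorentzian FWHM linewidth is Δν = π·S_ν. A quick back-of-envelope check (our own, reading the reported PSD as 1×10⁵ Hz²/Hz and assuming it is white):

```python
import math

S_nu = 1e5                    # frequency-noise PSD in Hz^2/Hz (white-noise assumption)
linewidth = math.pi * S_nu    # Lorentzian FWHM for white frequency noise, in Hz
print(round(linewidth / 1e3))  # ~314 kHz, below the 400 kHz estimate
```

A computed value around 314 kHz sits just under the sub-400-kHz linewidth estimate, as expected when the noise floor dominates.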
Jinjung KIM Yunho CHOI Chongho LEE Duckjin CHUNG
In this paper, a hardware-oriented Genetic Algorithm (GA) is proposed to save hardware resources and reduce the execution time of a GA processor (GAP). Based on the steady-state model rather than the continuous generation model, the proposed GA uses a modified tournament selection together with a special survival condition: the offspring replaces the worse-fit parent only when the offspring's fitness is better. The proposed algorithm improves convergence speed by more than 30% over the conventional algorithm. Finally, by employing efficient pipeline parallelization and a handshaking protocol in the proposed GAP, a computation speed-up of above 30% can be achieved over a survival-based GA that runs one million crossovers per second (1 MHz), when device speed and application size are taken into account on the prototype. It could be used for high-speed processing such as the central processor of evolvable hardware, robot control, and many other optimization problems.
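A software sketch of the steady-state step with tournament selection and the survival condition described above (ours, in Python on a onemax toy problem; the paper's contribution is the hardware realization, not this algorithmic skeleton):

```python
import random

def steady_state_step(pop, fitness, rng):
    """One steady-state GA step: two tournament selections, one-point
    crossover, and the survival condition -- the offspring replaces the
    worse-fit parent only when its fitness is better."""
    def tournament():
        a, b = rng.randrange(len(pop)), rng.randrange(len(pop))
        return a if fitness(pop[a]) >= fitness(pop[b]) else b
    i, j = tournament(), tournament()
    cut = rng.randrange(1, len(pop[i]))          # one-point crossover
    child = pop[i][:cut] + pop[j][cut:]
    worse = i if fitness(pop[i]) <= fitness(pop[j]) else j
    if fitness(child) > fitness(pop[worse]):     # survival condition
        pop[worse] = child

rng = random.Random(7)
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(20)]
fitness = sum                                    # onemax: count of 1 bits
start = max(map(fitness, pop))
for _ in range(500):
    steady_state_step(pop, fitness, rng)
print(start, max(map(fitness, pop)))
```

Because a slot is overwritten only by a fitter individual, the best fitness in the population is monotonically non-decreasing, which is what makes the steady-state model attractive for a pipelined hardware datapath.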
When a randomized algorithm elects a leader on anonymous networks, initial information (called an initial condition in this paper) of some sort is always needed. In this paper, we study common properties of initial conditions that enable a randomized algorithm to elect a leader. In previous papers, the author introduced the notion of transformation between initial conditions using distributed algorithms. Using this notion of transformation, we investigate the properties of initial conditions for leader election. We define an initial condition C to be p(N)-complete if there exists some randomized algorithm that elects a leader with probability p(N) on any size-N network satisfying C. We show that p(N)-completeness divides into four types as follows. 1. p(N)=1: For any 1-complete initial condition, there exists a deterministic distributed algorithm that can compute the size of the network for any initial information satisfying the initial condition. 2. inf p(N)>0: For any p(N)-complete initial condition with inf p(N)>0, there exists a deterministic distributed algorithm that can compute an upper bound on the size of the network for any initial information satisfying the initial condition. 3. p(N) converges to 0: The set of p(N)-complete initial conditions varies depending on the decrease rate of p(N). 4. p(N) decreases exponentially: Any initial condition is p(N)-complete.
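As a concrete flavor of randomized election on an anonymous network, the sketch below (ours, not from the paper) simulates the classic one-round scheme: each anonymous node draws a random identifier, and the round succeeds iff the maximum draw is unique. With an identifier space much larger than the network size, the success probability is bounded away from 0, as in the second completeness class above.

```python
import random

def one_round_election(n, id_space, rng):
    """Each of n anonymous nodes draws a random identifier; the round
    succeeds iff the maximum draw is unique (that node becomes leader)."""
    draws = [rng.randrange(id_space) for _ in range(n)]
    return draws.count(max(draws)) == 1

rng = random.Random(1)
trials = 5000
rate = sum(one_round_election(8, 2 ** 16, rng) for _ in range(trials)) / trials
print(rate)  # close to 1 when the identifier space is much larger than n
```

The failure probability is at most the collision probability among n draws, roughly n(n-1)/(2·id_space), so knowing even an upper bound on N suffices to choose an adequate identifier space.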
Tomokazu NAGAO Kazuki MATSUZAKI Miho TAKAHASHI Yoshiharu IMAZEKI Haruyuki MINAMITANI
The confocal laser scanning microscope (CLSM) is capable of delivering high axial resolution, and with this instrument even thin layers of cells can be imaged in good quality. Therefore, the intracellular uptake and distribution properties of the photosensitizer zinc coproporphyrin III tetrasodium salt (Zn CP-III) in human lung small cell carcinoma (Ms-1) cells were examined using CLSM. In particular, the uptake of Zn CP-III in the cytoplasm, plasma membrane, and nucleus was individually evaluated for the first time from fluorescence images obtained by CLSM. The results show that the Zn CP-III content in the three cellular areas correlates with the extracellular Zn CP-III concentration and the time of incubation with Zn CP-III. Furthermore, the cytoplasmic fluorescence was found to be approximately two times higher than that in the nucleus under all uptake conditions. In addition, the cellular accumulation of Zn CP-III was compared with photodynamic cytotoxicity. The photocytotoxicity was to a great extent dependent on the uptake of the photosensitizer. The site of Ms-1 cells damaged by photodynamic therapy was the plasma membrane. However, the content of Zn CP-III accumulated in the cytoplasm was the highest among the three areas, implying that, besides the direct damage to the plasma membrane, oxidative damage to cellular components arising from cytoplasmic Zn CP-III may also play an important role in photocytotoxicity. The quantitative information obtained in this study will be useful for further investigation of the photocytotoxicity as well as the uptake mechanism of the photosensitizer.
WanKyoo CHOI IlYong CHUNG SungJoo LEE
Previous research has measured the effort required to understand and adapt components based on component complexity, a general criterion related to the intrinsic quality of the component to be adapted and understood. Such work, however, does not consider the significance of the measurement attributes, and the user must judge the reusability of similar components alone. Therefore, in this paper, we propose a new method that measures the DOR (Degree Of Reusability) of components while taking the significance of the measurement attributes into account. We calculate the relative significance of the attributes using rough set theory and integrate the significance with the measurement values using Sugeno's fuzzy integral. Lastly, we apply our method to source code components and show through statistical techniques that it can be used as an ordinal and ratio scale.
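A minimal sketch of the Sugeno fuzzy integral used in the aggregation step (our own illustration; the fuzzy measure here is a simple additive one built from per-attribute significances, not significances derived from rough sets as in the paper):

```python
def sugeno_integral(scores, measure):
    """Sugeno fuzzy integral of attribute scores in [0,1] w.r.t. a fuzzy
    measure g: max over i of min(h(x_(i)), g({x_(1)..x_(i)})), with the
    scores taken in decreasing order."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    upper, best = set(), 0.0
    for i in order:
        upper.add(i)                       # grow the upper-level set
        best = max(best, min(scores[i], measure(frozenset(upper))))
    return best

# illustration: additive measure from hypothetical per-attribute significances
sig = [0.5, 0.2, 0.3]
measure = lambda s: sum(sig[i] for i in s)
dor = sugeno_integral([0.9, 0.4, 0.7], measure)
print(dor)  # 0.7
```

The integral never exceeds the score of any attribute it leans on, so a highly significant but poorly scored attribute caps the aggregate, which is the behavior one wants from a reusability indicator.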
Patrick BRINDEL Bruno DANY Delphine ROUVILLAIN Bruno LAVIGNE Patricia GUERBER Elodie BALMEFREZOL Olivier LECLERC
In this paper, we review recent developments in the field of optical regeneration for both ultra-long-haul transmission and terrestrial networking applications. Different techniques (2R/3R) using nonlinear properties of materials and/or devices are proposed, such as saturable absorbers or InP-based interferometer structures showing regenerative capabilities. Principles of operation as well as system experiments are described.
In this paper, four coupled chaotic circuits generating four-phase quasi-synchronization of chaos are proposed. By tuning the coupling parameter, chaotic wandering over the phase states characterized by the four-phase synchronization occurs. In order to analyze chaotic wandering, dependent variables corresponding to phases of solutions in subcircuits are introduced. Combining the variables with hysteresis decision of the phase states enables statistical analysis of chaotic wandering.
Under cutoff and threshold priority policies, we mathematically formulate a prioritized channel allocation problem, which is combinatorial in nature. We then reduce the problem using the concept of a pattern, and apply a simulated annealing approach to the reduced problem. Computational experiments show that our method works very well and that the cutoff priority policy outperforms both the non-prioritized complete sharing policy and the threshold priority policy.
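A generic simulated-annealing skeleton of the kind applied above (ours; the pattern-based problem reduction and the channel allocation model of the paper are not reproduced), demonstrated on a toy integer minimization:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, alpha=0.95, iters=2000, seed=0):
    """Generic simulated annealing: accept worsening moves with
    probability exp(-delta/t) under a geometric cooling schedule."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= alpha                       # geometric cooling
    return best, best_c

# toy problem: drive an integer vector toward zero by unit moves
cost = lambda v: sum(z * z for z in v)
def neighbor(v, rng):
    w = list(v)
    w[rng.randrange(len(w))] += rng.choice((-1, 1))
    return w

_, best_c = anneal(cost, neighbor, [5, -3, 7])
print(best_c)
```

The early hot phase lets the search escape poor local structure, and the cold phase behaves greedily; in the paper, the neighborhood would range over the reduced pattern space rather than unit integer moves.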
Hassan ABOLHASSANI Hui CHEN Zenya KOONO
This paper reports on cliché
Deukjo HONG Jaechul SUNG Shiho MORIAI Sangjin LEE Jongin LIM
In this paper, we discuss impossible differential cryptanalysis of the block cipher Zodiac. The main design principles of Zodiac are simplicity and efficiency. However, the diffusion layer in its round function is too simple to offer enough security, and impossible differential cryptanalysis exploits this weakness. Our attack using a 14-round impossible characteristic recovers the 128-bit master key of the full 16-round Zodiac faster than exhaustive search. The efficiency of the attack compared with exhaustive search increases as the key size increases.
Antialiasing is one of the challenging problems to be solved for high-fidelity image synthesis in 3D graphics. In this paper a rasterization processor capable of single-pass full-screen antialiasing is presented. To implement a H/W-accelerated single-pass antialiasing rasterization processor at a reasonable H/W cost and with minimal processing performance degradation, our work focuses mainly on an efficient H/W implementation of a modified version of the A-buffer algorithm. For efficient handling of the partial-pixel fragments of the rasterization phase, a new partial-pixel-merging scheme and a simple and efficient new dynamic memory management scheme are proposed. For the final blending of partial pixels without loss of generality, a parallel subpixel blender is introduced. To study the feasibility of the proposed design as a practical rasterization processor, a prototype processor has been designed using a 0.35 µm EML technology. It operates at 100 MHz at 3.3 V and achieves a rendering performance of 25M to 80M pixel-fragments/sec depending on scene complexity.
Deterministic execution testing has been considered a promising way to test concurrent programs because of its ability to replay a program's execution. Since, however, deterministic execution requires that a synchronization event sequence to be replayed be feasible and valid, it is not directly applicable to situations in which synchronization sequences that are valid but infeasible must be taken into account. Resolving this problem is very important because a program may still meet its specification even though not all valid sequences are feasible. In this paper, we present a new approach to deterministic execution for testing concurrent systems. The proposed approach makes use of the notion of event independence and constructs an automaton which accepts all the sequences semantically equivalent to a given event sequence to be replayed. Consequently, we can allow a program to be executed according to event sequences other than the given (possibly infeasible) sequence if they are accepted by the automaton.
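The equivalence in question is Mazurkiewicz-style: two sequences are equivalent when one can be turned into the other by swapping adjacent independent events. The small enumeration sketch below (ours; the paper constructs an accepting automaton rather than an explicit set) makes the equivalence class concrete.

```python
def equivalent_sequences(seq, independent):
    """All sequences reachable from seq by repeatedly swapping adjacent
    independent events -- the set a replay automaton would accept.
    BFS over adjacent transpositions."""
    seen = {tuple(seq)}
    frontier = [tuple(seq)]
    while frontier:
        nxt = []
        for s in frontier:
            for i in range(len(s) - 1):
                if independent(s[i], s[i + 1]):
                    t = s[:i] + (s[i + 1], s[i]) + s[i + 2:]
                    if t not in seen:
                        seen.add(t)
                        nxt.append(t)
        frontier = nxt
    return seen

# hypothetical events: 'a' and 'b' touch different objects (independent),
# while 'c' depends on both, so it cannot move before either
indep = lambda x, y: {x, y} == {"a", "b"}
eqs = equivalent_sequences(("a", "b", "c"), indep)
print(sorted(eqs))  # two interleavings: abc and bac
```

Accepting both interleavings is exactly what lets the replayer succeed even when the recorded order ("a" before "b") happens to be infeasible in a particular run.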
Tomoharu SHIBUYA Kohichi SAKANIWA
A lower bound for the generalized Hamming weight of linear codes is proposed. The proposed bound is a generalization of the bound we previously presented and gives good estimates of the generalized Hamming weights of Reed-Muller codes, some one-point algebraic geometry codes, and arbitrary cyclic codes. Moreover, the proposed bound contains the BCH bound as a special case. The relation between the proposed bound and conventional bounds is also investigated.
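For reference, the r-th generalized Hamming weight d_r is the minimum support size over all r-dimensional subcodes; bounds such as the one above lower-bound this quantity. A brute-force sketch (ours, feasible only for tiny codes) computes it exactly for the [7,4] Hamming code, whose weight hierarchy is the standard example 3, 5, 6, 7.

```python
import itertools

def ghw(G, r):
    """r-th generalized Hamming weight of the small binary code generated
    by G: minimum support size over all r-dimensional subcodes
    (brute force over r-tuples of codewords)."""
    n = len(G[0])
    zero = tuple(0 for _ in range(n))
    words = {zero}
    for row in G:                            # enumerate the full row space
        words |= {tuple(a ^ b for a, b in zip(w, row)) for w in words}
    best = n
    for combo in itertools.combinations(sorted(words - {zero}), r):
        span = {zero}
        for w in combo:
            span |= {tuple(a ^ b for a, b in zip(s, w)) for s in span}
        if len(span) != 2 ** r:              # combo not linearly independent
            continue
        support = sum(any(w[i] for w in combo) for i in range(n))
        best = min(best, support)
    return best

# [7,4] Hamming code: d_1 = 3 (minimum distance), d_2 = 5
G = [(1,0,0,0,0,1,1), (0,1,0,0,1,0,1), (0,0,1,0,1,1,0), (0,0,0,1,1,1,1)]
print(ghw(G, 1), ghw(G, 2))  # 3 5
```

Here d_1 recovers the ordinary minimum distance, illustrating why bounds on d_r generalize bounds such as the BCH bound on d_1.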