
Keyword Search Result

[Keyword] SC (4570 hits)

Showing results 3561-3580 of 4570

  • Channel State Dependent Resource Scheduling for Wireless Message Transport with Framed ALOHA-Reservation Access Protocol

    Masugi INOUE  

     
    PAPER

      Vol:
    E83-A No:7
      Page(s):
    1338-1346

    Channel-state-dependent (CSD) radio-resource scheduling algorithms for wireless message transport using a framed ALOHA-reservation access protocol are presented. In future wireless systems that provide Mbps-class high-speed links in high frequency bands, burst packet errors, which persist over a number of consecutive packets, would cause serious performance degradation. CSD resource scheduling algorithms exploit channel-state information to increase overall throughput. These algorithms were comparatively evaluated in terms of average allocation-plus-transfer delay, average throughput, variance in throughput, and resource utilization. Computer simulation results showed that the CSD mechanism is particularly effective for equal-sharing (ES)-based algorithms, and that CSD-ES provides low allocation-plus-transfer delay, high average throughput, low variance in throughput, and efficient utilization of radio resources.
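
    The channel-state-dependent equal-sharing idea can be illustrated with a small sketch. The following Python fragment is a hypothetical model, not the paper's algorithm: the terminal list, the `channel_good` flags, and the slot count are invented for illustration. In each frame, reservation slots are shared as equally as possible, but terminals whose channel is currently in a burst-error state are skipped so that their slots go to terminals with good channels.

    ```python
    def csd_equal_share(terminals, slots_per_frame):
        """Distribute reservation slots equally, but only among terminals
        whose channel is currently reported as 'good' (a channel-state-dependent
        variant of equal sharing).  Returns {terminal_id: allocated_slots}."""
        eligible = [t for t in terminals if t["channel_good"] and t["backlog"] > 0]
        allocation = {t["id"]: 0 for t in terminals}
        if not eligible:
            return allocation
        # Hand out one slot at a time, round robin, so the share stays equal.
        i = 0
        for _ in range(slots_per_frame):
            t = eligible[i % len(eligible)]
            if allocation[t["id"]] < t["backlog"]:
                allocation[t["id"]] += 1
            i += 1
        return allocation

    # Hypothetical frame: terminal B is in a burst-error state and is skipped.
    terminals = [
        {"id": "A", "backlog": 5, "channel_good": True},
        {"id": "B", "backlog": 5, "channel_good": False},
        {"id": "C", "backlog": 2, "channel_good": True},
    ]
    print(csd_equal_share(terminals, slots_per_frame=6))
    ```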

  • Seismic Events Discrimination Using a New FLVQ Clustering Model

    Payam NASSERY  Karim FAEZ  

     
    PAPER-Pattern Recognition

      Vol:
    E83-D No:7
      Page(s):
    1533-1539

    In this paper, the LVQ (Learning Vector Quantization) model and its variants are used as clustering tools to discriminate natural seismic events (earthquakes) from artificial ones (nuclear explosions). The study is based on six spectral features of the P-wave spectra computed from short-period teleseismic recordings. The conventional LVQ proposed by Kohonen and the Fuzzy LVQ (FLVQ) models proposed by Sakuraba and Bezdek are tested on a set of 26 earthquakes and 24 nuclear explosions using the leave-one-out testing strategy. The preliminary experimental results show that the shapes, the number, and the overlaps of the clusters play an important role in seismic classification. The results also show how an improper partitioning of the feature space strongly weakens both the clustering and recognition phases. To improve the numerical results, a new combined FLVQ algorithm is employed. The algorithm is composed of two nested sub-algorithms. The inner sub-algorithm generates a well-defined fuzzy partitioning of the feature space with fuzzy reference vectors. To achieve this, a cost function is defined in terms of the number, the shapes, and the overlaps of the fuzzy reference vectors, and the update rule minimizes this cost function in a stepwise learning procedure. The outer sub-algorithm, in turn, searches for an optimum number of clusters at each step. For this outer optimization we use two different criteria: the first is based on a newly defined "fuzzy entropy," while the second is a performance index obtained by generalizing the Huntsberger formula for the learning rate using the concept of fuzzy distance. The experimental results of the new model show a promising improvement in the error rate, an acceptable convergence time, and greater flexibility in forming decision boundaries.
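
    A minimal sketch of a fuzzy-LVQ-style update step may help fix ideas. It is not the paper's combined nested algorithm: the membership formula, the fuzziness exponent `m`, the learning rate `lr`, and the toy data are assumptions made only for illustration. Prototypes of the correct class are pulled toward each sample and the others pushed away, each weighted by its fuzzy membership.

    ```python
    import numpy as np

    def fuzzy_lvq_step(prototypes, labels, x, y, lr=0.05, m=2.0):
        """One fuzzy-LVQ-style update: every prototype moves by an amount
        weighted by its fuzzy membership for sample x; prototypes of the
        correct class move toward x, the others move away.  A simplified
        sketch, not the paper's combined nested algorithm."""
        d = np.linalg.norm(prototypes - x, axis=1) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum()                          # fuzzy memberships (FCM-style)
        sign = np.where(labels == y, 1.0, -1.0)
        prototypes += lr * (sign * u)[:, None] * (x - prototypes)
        return prototypes

    # Tiny illustrative run with two classes in 2-D.
    rng = np.random.default_rng(0)
    protos = rng.normal(size=(4, 2))
    proto_labels = np.array([0, 0, 1, 1])
    for _ in range(200):
        y = int(rng.integers(2))
        x = rng.normal(loc=3.0 * y, size=2)   # class 1 centred near (3, 3)
        protos = fuzzy_lvq_step(protos, proto_labels, x, y)
    print(protos.round(2))
    ```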

  • Local Area Characterization of TTF-TCNQ Evaporated Films by Scanning Probe Microscope

    Kazuhiro KUDO  Masaaki IIZUKA  Shigekazu KUNIYOSHI  Kuniaki TANAKA  

     
    LETTER-Ultra Thin Film

      Vol:
    E83-C No:7
      Page(s):
    1069-1070

    We have developed a new type of electrical probing system based on an atomic force microscope. This method enables us to measure simultaneously the surface topography and surface potential of thin films containing crystal grains. The observed local potential changes give insight into conduction through the grains and their boundaries.

  • Red EL Properties of OLED Having Hole Blocking Layer

    Hyeong-Gweon KIM  Tatsuo MORI  Teruyoshi MIZUTANI  Duck-Chool LEE  

     
    PAPER-Electro Luminescence

      Vol:
    E83-C No:7
      Page(s):
    1012-1016

    In this study, we prepared a red organic light-emitting diode (OLED) doped with a fluorescent dye (Sq) and inserted 1,3-bis(5-(p-t-butylphenyl)-1,3,4-oxadiazol-2-yl)benzene (OXD7) and/or tris(8-hydroxyquinoline)aluminum (Alq3) layers between the emission layer and the cathode in order to increase the electroluminescent (EL) efficiency. The effect of these inserted layers was observed and the EL mechanism was examined. The hole transport layer was N,N'-diphenyl-N,N'-bis(3-methylphenyl)-1,1'-biphenyl-4,4'-diamine (TPD); the host material of the emission layer was Alq3; the guest material was Sq. When Alq3 was inserted between the emission layer and the cathode, the emission efficiency increased; highly pure red emission, however, was not attainable with Alq3. On the other hand, the insertion of OXD7 between the two layers blocked and accumulated holes. Because this increased the recombination probability of electrons and holes, the luminance characteristics and emission efficiency were improved while maintaining a highly pure red color.

  • A Cell Scheduler for Non-Real-Time Traffic with Service Fairness in ATM Networks

    Wen-Tsuen CHEN  Rong-Ruey LEE  

     
    PAPER-Switching

      Vol:
    E83-B No:7
      Page(s):
    1465-1473

    Non-real-time (NRT) services such as nrt-VBR, ABR, and UBR traffic are intended for data applications. Although NRT services do not have stringent QoS requirements for cell transfer delay and cell delay variation, ATM networks should provide NRT services while considering other criteria that ensure good performance, such as cell loss ratio (CLR), buffer size requirement, and service fairness. Service fairness means that the network should treat all connections fairly; that is, connections with low arrival rates should not be discriminated against. In addition, given a fixed buffer size for a connection, reducing the maximum number of cells in the buffer during the lifetime of the connection can lead to a low CLR due to buffer overflow. These criteria should therefore be considered as much as possible when designing a cell scheduler for NRT services. Most conventional cell scheduling schemes, however, are appropriate for one performance criterion but inappropriate for another. In this work, we present a novel cell scheduling scheme, called buffer minimized and service fairness (BMSF), to schedule NRT services in ATM networks. By using probability constraints and selecting the connection whose buffer currently holds the most cells to transmit first, BMSF attains a satisfactory performance with respect to maximum buffer size requirement, CLR, and service fairness in terms of the maximum buffer size and cell waiting delay criteria. Simulation results demonstrate that BMSF performs better than several conventional schemes in terms of these criteria, particularly when NRT services have diverse arrival rates. The proposed BMSF scheme can thus feasibly schedule NRT services in ATM networks.
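
    The "serve the longest buffer first" rule mentioned above can be sketched in a few lines. The fragment below is only an illustration under assumed Bernoulli cell arrivals with invented rates; the probability-constraint part of BMSF is omitted.

    ```python
    import random
    from collections import deque

    def longest_queue_first(queues):
        """Pick the connection whose per-connection buffer currently holds the
        most cells; ties are broken by the lowest connection index.  This shows
        only the 'serve the longest buffer first' rule from the abstract."""
        best = max(range(len(queues)), key=lambda i: len(queues[i]))
        return best if queues[best] else None

    random.seed(1)
    queues = [deque() for _ in range(3)]          # one FIFO per NRT connection
    arrival_prob = [0.2, 0.5, 0.8]                # diverse arrival rates (assumed)

    for slot in range(20):
        for i, p in enumerate(arrival_prob):      # Bernoulli cell arrivals
            if random.random() < p:
                queues[i].append(f"cell-{slot}")
        chosen = longest_queue_first(queues)
        if chosen is not None:
            queues[chosen].popleft()              # transmit one cell per slot
    print("remaining cells per connection:", [len(q) for q in queues])
    ```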

  • Performance Analysis of Packet-Level Scheduling in an IP-over-ATM Network with QoS Control

    Chie DOU  Cheng-Tien LIN  Shu-Wei WANG  Kuo-Cheng LEU  

     
    PAPER-Internet

      Vol:
    E83-B No:7
      Page(s):
    1534-1543

    In this paper, we study the performance of packet-level scheduling in IP-over-ATM networks with QoS control; that is, we assume the traffic is composed of multiple classes. We analyze the performance of different queue mappings between traffic types and the available traffic classes (priority queues). Since the cells of a given packet are not bound to be transmitted back-to-back when multiple traffic classes are used, it is of interest to know the packet delay characteristics as well as the cell delay characteristics of the respective traffic types under different queue mappings. Closed-form solutions for the mean cell waiting time and the mean packet waiting time of individual traffic types under different queue mappings are presented. The numerical results obtained in this paper are helpful in understanding the behavior of IP-over-ATM networks that adopt packet-level scheduling and QoS control.

  • A New Image Sensor with Space Variant Sampling Control on a Focal Plane

    Yasuhiro OHTSUKA  Takayuki HAMAMOTO  Kiyoharu AIZAWA  

     
    PAPER

      Vol:
    E83-D No:7
      Page(s):
    1331-1337

    We propose a new sampling control scheme for an image sensor array. Unlike random-access pixel sensors, the proposed sensor can read out spatially variant sampled pixels at high speed without inputting a pixel address for each access. The sensor has a memory array that stores the sampling positions, and the sampling positions can be changed dynamically by rewriting this memory, so arbitrary spatially varying sampling patterns can be realized. A prototype with 64 × 64 pixels was fabricated in a 0.7 µm CMOS process.
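
    The sampling-position memory can be modelled in software as follows. This is a conceptual sketch, not the sensor circuit: the frame size, the foveated pattern, and the `read_sampled` helper are illustrative assumptions.

    ```python
    import numpy as np

    def read_sampled(frame, position_memory):
        """Read out only the pixels listed in the on-chip sampling-position
        memory (modelled here as an array of (row, col) pairs).  The pattern
        can be changed between frames simply by rewriting this memory."""
        rows, cols = position_memory[:, 0], position_memory[:, 1]
        return frame[rows, cols]

    # Illustrative 64x64 frame and a foveated pattern: dense in the centre,
    # sparse elsewhere (coordinates are invented for the example).
    frame = np.arange(64 * 64).reshape(64, 64)
    dense = [(r, c) for r in range(24, 40) for c in range(24, 40)]       # every pixel
    sparse = [(r, c) for r in range(0, 64, 8) for c in range(0, 64, 8)]  # 1 in 8
    position_memory = np.array(dense + sparse)

    samples = read_sampled(frame, position_memory)
    print(len(position_memory), "positions read,", samples[:5], "...")
    ```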

  • Scheduling DAGs on Message Passing m-Processor Systems

    Sanjeev BASKIYAR  

     
    PAPER-Computer Systems

      Vol:
    E83-D No:7
      Page(s):
    1497-1507

    Scheduling directed acyclic task graphs (DAGs) onto multiprocessors is known to be an intractable problem. Although there have been several heuristic algorithms for scheduling DAGs onto multiprocessors, few address mapping onto a given number of completely connected processors with the objective of minimizing the finish time. We present an efficient algorithm called ClusterMerge to statically schedule directed acyclic task graphs onto a homogeneous, completely connected MIMD system with a given number of processors. The algorithm clusters the tasks of a DAG using a longest-path heuristic and then iteratively merges these clusters until their number equals the number of available processors. Each resulting cluster is then scheduled on a separate processor. Using simulations, we demonstrate that ClusterMerge produces schedules with the same or lower execution times than those of other researchers, while using fewer processors. We also discuss pitfalls in the various approaches to defining the longest path in a directed acyclic task graph.
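
    The merging phase can be illustrated with a simplified sketch. The fragment below merges the two cheapest clusters until the cluster count equals the processor count; it is only a stand-in for ClusterMerge's merging step under invented task costs, and it ignores the communication and longest-path information the real algorithm uses.

    ```python
    def merge_to_m_clusters(clusters, m):
        """Iteratively merge clusters until exactly m remain, each merge
        combining the two clusters with the smallest total computation cost.
        A simplified stand-in for the merging phase of ClusterMerge."""
        def cost(cluster):
            return sum(w for _, w in cluster)

        clusters = [list(c) for c in clusters]
        while len(clusters) > m:
            clusters.sort(key=cost)
            smallest = clusters.pop(0)
            next_smallest = clusters.pop(0)
            clusters.append(smallest + next_smallest)   # run on one processor
        return clusters

    # Hypothetical clusters produced by a longest-path clustering phase:
    # each task is (name, computation_cost).
    clusters = [
        [("t1", 4), ("t4", 3)],
        [("t2", 2)],
        [("t3", 5), ("t6", 1)],
        [("t5", 2), ("t7", 2)],
        [("t8", 6)],
    ]
    for i, c in enumerate(merge_to_m_clusters(clusters, m=3)):
        print(f"processor {i}:", c)
    ```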

  • Efficient Fair Queueing for ATM Networks Using Uniform Round Robin

    Norio MATSUFURU  Kouji NISHIMURA  Reiji AIBARA  

     
    PAPER-Switching

      Vol:
    E83-B No:6
      Page(s):
    1330-1341

    In this paper, we study efficient scheduling algorithms that are suitable for ATM networks. In ATM networks, all packets have a fixed small length of 53 bytes and are transmitted at a very high rate, so the time complexity of a scheduling algorithm is quite important. Most scheduling algorithms proposed so far have a complexity of O(log N) per packet, where N denotes the number of connections sharing the link. In contrast, weighted round robin (WRR) has the advantage of O(1) complexity; however, its delay property deteriorates as N increases. To solve this problem, we propose two new variants of WRR, uniform round robin (URR) and idling uniform round robin (I-URR). Both disciplines provide end-to-end delay and fairness bounds that are independent of N. The complexity of URR, however, increases slightly with N, whereas I-URR has O(1) complexity per packet. I-URR also works as a traffic shaper, so it can significantly alleviate congestion in the network. We also introduce a hierarchical WRR discipline (H-WRR), which consists of different WRR servers with I-URR as the root server. H-WRR efficiently accommodates both guaranteed and best-effort connections while maintaining O(1) complexity per packet. If several connections reserve the same bandwidth, H-WRR provides them with delay bounds close to those of weighted fair queueing.
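
    The uniform-spreading idea behind URR can be illustrated with a "smooth" weighted round robin, shown below. This is not the authors' construction: the weights and frame length are invented, and the credit-based selection rule is only one common way to spread a connection's slots almost evenly over a frame rather than serving them back-to-back as naive WRR does.

    ```python
    def smooth_wrr_frame(weights):
        """Build one frame of a 'smooth' weighted round robin in which each
        connection's transmission slots are spread almost uniformly over the
        frame.  Illustrates the uniform-spreading idea behind URR; it is not
        the authors' exact construction."""
        current = {c: 0 for c in weights}
        total = sum(weights.values())
        frame = []
        for _ in range(total):
            for c, w in weights.items():
                current[c] += w
            chosen = max(current, key=current.get)   # highest running credit
            current[chosen] -= total
            frame.append(chosen)
        return frame

    # Three connections with weights 5, 2 and 1 share an 8-slot frame.
    print(smooth_wrr_frame({"A": 5, "B": 2, "C": 1}))
    # -> ['A', 'B', 'A', 'A', 'C', 'A', 'B', 'A']  (A's slots are interleaved
    #     instead of the A A A A A B B C burst produced by naive WRR)
    ```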

  • Role-Based Autonomous and Collaborative Mechanism for Cooperative Behavior

    Yoshihiko SAKASHITA  Tetsuo IDEGUCHI  Fumiaki SATO  Tadanori MIZUNO  

     
    PAPER-Artificial Intelligence, Cognitive Science

      Vol:
    E83-D No:6
      Page(s):
    1255-1265

    Collaborative working environments have been proposed in which computing supports human activities by presenting related information and items. The purpose of this study is to propose a collaboration mechanism and environments constrained by roles. The basic principle in this study is that all members should behave autonomously, yet collaborate with an understanding of the surrounding environment. We have already presented a distributed collaborative computing architecture called Noah, which incorporates the concept of a field with a tuple space. On top of this mechanism, we designed role-based cooperative work environments based on the collaboration and coordination mechanism, and applied them to typical models in the industrial systems domain.

  • Parallelism-Independent Scheduling Method

    Kirilka NIKOLOVA  Atusi MAEDA  Masahiro SOWA  

     
    PAPER

      Vol:
    E83-A No:6
      Page(s):
    1138-1150

    All existing scheduling algorithms order the instructions of a program so that it can be executed in minimal time only for one fixed number of processors. In this paper we propose a new scheduling method, called the Parallelism-Independent Scheduling Method, which enables the scheduled program to be executed on parallel computers with any degree of parallelism in near-optimal time. We propose three parallelism-independent algorithms, which have the following phases: obtaining a parallel schedule with a list scheduling heuristic; optimizing the parallel schedule by rearranging the tasks in each level so that they can be executed efficiently with different degrees of parallelism; serializing the parallel schedule; and inserting markers for the parallel execution limits. The three algorithms differ in their optimization phase. To demonstrate the efficiency of our algorithms, we ran simulations with random directed acyclic graphs of different sizes and degrees of parallelism, and compared the resulting schedule lengths with those obtained by the critical path algorithm applied separately for each degree of parallelism.
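
    The serialization-with-markers idea can be sketched as follows. The level structure, the marker token, and the executor are hypothetical simplifications; the paper's three optimization phases are not modelled here.

    ```python
    def serialize_with_markers(levels):
        """Serialize a level-ordered parallel schedule into a single task list,
        inserting a barrier marker after each level.  An executor with any
        number of processors can then take tasks up to the next marker and run
        them in parallel -- a simplified illustration of the
        parallelism-independent idea."""
        stream = []
        for level in levels:
            stream.extend(level)
            stream.append("|BARRIER|")     # parallel-execution-limit marker
        return stream

    def execute(stream, num_processors):
        """Replay the serialized schedule on an arbitrary processor count."""
        ready, steps = [], []
        for item in stream:
            if item == "|BARRIER|":
                while ready:                             # pack the level into waves
                    steps.append(ready[:num_processors])
                    ready = ready[num_processors:]
            else:
                ready.append(item)
        return steps

    levels = [["t1"], ["t2", "t3", "t4"], ["t5", "t6"]]   # illustrative DAG levels
    print(execute(serialize_with_markers(levels), num_processors=2))
    # -> [['t1'], ['t2', 't3'], ['t4'], ['t5', 't6']]
    ```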

  • Evaluation of PARAdeg of Acyclic SWITCH-Less Program Nets

    Qi-Wei GE  Kenji ONAGA  

     
    LETTER

      Vol:
    E83-A No:6
      Page(s):
    1186-1191

    PARAdeg has been defined as a measure of the parallelism inherent in a program net. Studies on the computation of PARAdeg have been carried out, but a quantitative evaluation of how well PARAdeg reflects the parallelism of program nets has not. In this paper, we perform this evaluation by applying a genetic algorithm to measure firing completion times when PARAdeg processors, as well as fewer and more processors, are provided for 400 program nets. Our experimental results show that the firing completion times decrease rapidly as the number of processors increases up to PARAdeg and only slowly beyond it, which implies that PARAdeg is a reasonable standard for measuring the parallelism of program nets.

  • High Speed and High Accuracy Rough Classification for Handwritten Characters Using Hierarchical Learning Vector Quantization

    Yuji WAIZUMI  Nei KATO  Kazuki SARUTA  Yoshiaki NEMOTO  

     
    PAPER-Biocybernetics, Neurocomputing

      Vol:
    E83-D No:6
      Page(s):
    1282-1290

    We propose a rough classification system using Hierarchical Learning Vector Quantization (HLVQ) for large-scale classification problems that involve many categories. The HLVQ in the proposed system divides the categories hierarchically in the feature space, building a tree and multiplying the nodes down the hierarchy. The feature space is divided by a few codebook vectors in each layer, and adjacent feature subspaces overlap at their borders. HLVQ classification is both fast and accurate owing to the hierarchical architecture and the overlapping technique. In a classification experiment on ETL9B, the largest database of handwritten characters in Japan (it contains a total of 607,200 samples from 3036 categories), the speed and accuracy of classification by HLVQ were found to be higher than those of the Self-Organizing feature Map (SOM) and Learning Vector Quantization methods. We demonstrate that the proposed system, which uses multiple codebook vectors for each category under HLVQ, achieves higher speed and accuracy than systems that use average vectors.
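
    A minimal sketch of hierarchical rough classification is given below, assuming a toy two-layer tree over 2-D features; the codebook vectors and category names are invented, and the overlap handling of HLVQ is omitted.

    ```python
    import numpy as np

    class Node:
        """A node of a hierarchical LVQ tree: a few codebook vectors, each
        routing to one child (either another Node or a leaf category list)."""
        def __init__(self, codebooks, children):
            self.codebooks = np.asarray(codebooks, dtype=float)
            self.children = children          # one child per codebook vector

    def rough_classify(node, x):
        """Descend the tree by picking the nearest codebook vector at every
        layer; the leaf gives the reduced candidate set that a fine classifier
        would then examine.  (HLVQ's border overlap is omitted here.)"""
        while isinstance(node, Node):
            d = np.linalg.norm(node.codebooks - x, axis=1)
            node = node.children[int(np.argmin(d))]
        return node                           # list of candidate categories

    # Tiny illustrative two-layer tree over 2-D features.
    leaf_a, leaf_b = ["cat-1", "cat-2"], ["cat-3"]
    leaf_c, leaf_d = ["cat-4", "cat-5"], ["cat-6", "cat-7"]
    left = Node([[0, 0], [0, 2]], [leaf_a, leaf_b])
    right = Node([[5, 0], [5, 2]], [leaf_c, leaf_d])
    root = Node([[0, 1], [5, 1]], [left, right])

    print(rough_classify(root, np.array([4.8, 1.7])))   # -> ['cat-6', 'cat-7']
    ```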

  • Frontiers Related with Automatic Shaping of Photonic Crystals

    Osamu HANAIZUMI  Kenta MIURA  Makito SAITO  Takashi SATO  Shojiro KAWAKAMI  Eiichi KURAMOCHI  Satoshi OKU  

     
    INVITED PAPER-Switches and Novel Devices

      Vol:
    E83-C No:6
      Page(s):
    912-919

    Photonic crystals have optical properties characterized by photonic bandgaps, large anisotropy, and high dispersion, which can be applied to various optical devices. We have proposed the autocloning method for fabricating 2D or 3D photonic crystals and are developing novel structures and functions based on them. Autocloning is a simple process that combines sputter deposition and sputter etching and is well suited to industrial fabrication. We have already demonstrated devices and functions such as polarization splitters and surface-normal waveguides. In this paper, we describe our latest work on photonic crystals utilizing the autocloning technology. Phase plates and polarization-selective gratings for optical pick-ups are demonstrated using TiO2/SiO2 photonic crystals. A technology for introducing CdS into 3D photonic crystals has also been developed, and photoluminescence from the introduced CdS is observed, which is the first step toward luminescent devices with 3D confinement or high polarization controllability.

  • An Ordered-Deme Genetic Algorithm for Multiprocessor Scheduling

    Bong-Joon JUNG  Kwang-Il PARK  Kyu Ho PARK  

     
    PAPER-Algorithms

      Vol:
    E83-D No:6
      Page(s):
    1207-1215

    In static multiprocessor scheduling, heuristic algorithms have been widely used. Although they gain execution speed, most of them yield unpromising solutions because they search only part of the solution space. In this paper, we propose a scheduling algorithm based on the genetic algorithm (GA), a well-known stochastic search method. The proposed algorithm, named the ordered-deme GA (OGA), is based on the multiple-subpopulation GA, in which a global population is divided into several subpopulations (demes) and each deme evolves independently. To find better schedules, the OGA orders the demes from highest to lowest and migrates both the best and the worst individuals at the same time. In addition, the OGA adaptively assigns a different mutation probability to each deme to improve its search capability. We compare the OGA with well-known heuristic algorithms and other GAs on random task graphs and on task graphs from real numerical problems. The results indicate that the OGA finds better schedules than the others in most cases, although it is slower in terms of execution time.
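
    One plausible reading of the ordered-deme migration step is sketched below. The fitness function (schedule length approximated by a plain sum), the migration directions, and the per-deme mutation probabilities are assumptions made only for illustration; the exact policy in the paper may differ.

    ```python
    import random

    def fitness(individual):                           # schedule length: lower is better
        return sum(individual)

    def migrate(demes):
        """One migration step between ordered demes (index 0 = highest deme).
        The best individual of each deme is copied up toward the highest deme
        and the worst is pushed down toward the lowest -- one reading of
        'migrating both the best and the worst at the same time'."""
        for i in range(1, len(demes)):                 # best moves up
            best = min(demes[i], key=fitness)
            worst_up = max(demes[i - 1], key=fitness)
            demes[i - 1][demes[i - 1].index(worst_up)] = list(best)
        for i in range(len(demes) - 1):                # worst moves down
            worst = max(demes[i], key=fitness)
            demes[i + 1][random.randrange(len(demes[i + 1]))] = list(worst)

    random.seed(0)
    mutation_prob = [0.01, 0.05, 0.20]                 # higher demes explore less (assumed)
    demes = [[[random.randint(1, 9) for _ in range(5)] for _ in range(4)]
             for _ in mutation_prob]
    migrate(demes)
    print([min(map(fitness, d)) for d in demes])       # best fitness per deme
    ```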

  • Reorder Buffer Structure with Shelter Buffer for Out-of-Order Issue Superscalar Processors

    Mun-Suek CHANG  Choung-Shik PARK  Sang-Bang CHOI  

     
    PAPER

      Vol:
    E83-A No:6
      Page(s):
    1091-1099

    A reorder buffer is usually employed to maintain instruction completion in the correct order in a superscalar pipeline with out-of-order issue. In this paper, we propose a reorder buffer structure with a shelter buffer for out-of-order issue superscalar processors, both to handle stagnation efficiently and to reduce the required buffer size. A remarkable performance improvement can be obtained with only one or two shelter entries. Simulation results show that when the reorder buffer size is between 8 and 32, the performance gain obtained from the shelter is noticeable. A shelter buffer of size 4 yields no improvement over one of size 2, which means that a shelter buffer of size 2 is large enough to handle most of the stagnation. With a shelter buffer of size 2, the reorder buffer can be reduced by 44% in the Whetstone, 50% in the FFT, 60% in the FM, and 75% in the Linpack benchmark programs without any loss of throughput. Execution time is also improved by 19.78% in Whetstone, 19.67% in FFT, 23.93% in FM, and 8.65% in Linpack when the shelter buffer is used.

  • Data-Driven Implementation of Highly Efficient TCP/IP Handler to Access the TINA Network

    Hiroshi ISHII  Hiroaki NISHIKAWA  Yuji INOUE  

     
    PAPER-Software Platform

      Vol:
    E83-B No:6
      Page(s):
    1355-1362

    This paper discusses and clarifies the effectiveness of a data-driven implementation of a protocol handling system for accessing the TINA (Telecommunications Information Networking Architecture) network and the Internet. TINA is a networking architecture that provides networking services and management ubiquitously for users and networks. Many TINA-related ACTS (Advanced Communication Technologies and Services) projects have been organized in Europe. In Japan, the TINA Trial (TTT), which applied TINA architectures to ATM network management and services, was carried out by NTT and several manufacturers from April 1997 to April 1999. In these studies and trials, much effort was devoted to the development of software based on the service and network architectures being standardized in TINA-C (the TINA Consortium). To make the TINA environment universally available on both the customer and network sides, we have to consider how to deploy the TINA environment to the user side and how to use the access transmission capacity as efficiently as possible. Recent technology can easily download applications and environments from the network side to the user side by means of, e.g., Java. In accessing the network, there are several possible bottlenecks in information exchange on the customer side, such as PC processing capability, access protocol handling capability, and intra-house wiring bandwidth. In parallel with the TINA software architecture study, the authors have been studying the various requirements for a hardware platform for the TINA network. Those studies clarified that the stream-oriented data-driven processor the authors have been developing offers high reliability and strong multiprocessing and multimedia information processing capability. Building on them, this paper first shows through mathematical and emulation studies that a von Neumann-based protocol handler is ineffective for multiprocessing. We then show by emulation that our data-driven protocol handling can realize access protocol handling effectively, and describe the result of the first step of an implementation of data-driven TCP/IP protocol handling. This result proves that our TCP/IP hub based on a data-driven processor is applicable not only to TINA/CORBA networks but also to ordinary Internet access. Finally, we show a possible customer-premises network configuration that resolves the bottleneck in accessing the TINA network through ATM access.

  • Generalization of Threshold Signature and Authenticated Encryption for Group Communications

    Ching-Te WANG  Chin-Chen CHANG  Chu-Hsing LIN  

     
    PAPER-Information Security

      Vol:
    E83-A No:6
      Page(s):
    1228-1237

    In this paper, we propose a generalization of threshold signatures and authenticated encryption for group communications. The concept of a (t, n) threshold signature with (k, l) shared verification is implemented in group-oriented cryptosystems. In this system, any t members can represent a group to sign a message, and any k verifiers can represent another group to authenticate the signature. By integrating the cryptographic techniques of data encryption, digital signatures, and message recovery, a group-oriented authenticated encryption scheme with (k, l) shared verification is also proposed. The message expansion and communication cost are also reduced in our schemes.
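
    The threshold mechanism underlying such (t, n) schemes can be illustrated with Shamir secret sharing, shown below. This is only the generic building block that lets any t members act for a group; it is not the signature or authenticated-encryption scheme proposed in the paper, and the prime modulus and key value are arbitrary choices for the example.

    ```python
    import random

    P = 2**61 - 1          # a Mersenne prime; illustrative field modulus

    def share(secret, t, n):
        """Split `secret` into n shares so that any t of them reconstruct it
        (Shamir's scheme).  The paper's (t, n) signature with (k, l) shared
        verification builds group signing and verification on top of such
        threshold sharing."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over GF(P)."""
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, P - 2, P)) % P
        return secret

    group_key = 123456789
    shares = share(group_key, t=3, n=5)
    print(reconstruct(shares[:3]) == group_key)              # any 3 of 5 suffice
    print(reconstruct(random.sample(shares, 3)) == group_key)
    ```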

  • A New Approach to Ultrasonic Liver Image Classification

    Jiann-Shu LEE  Yung-Nien SUN  Xi-Zhang LIN  

     
    PAPER-Medical Engineering

      Vol:
    E83-D No:6
      Page(s):
    1301-1308

    In this paper, we propose a new method for classifying diffuse liver diseases from sonograms, covering the normal liver, hepatitis, and cirrhosis, from a new point of view: scale. The new system uses a multiscale analysis tool, the wavelet transform, to analyze ultrasonic liver images. A new set of features consisting of second-order statistics derived from the wavelet-transformed images is employed. From these features, we found that the third scale is the representative scale for classifying the considered liver diseases, and that the horizontal wavelet transform improves the representativeness of the corresponding features. Experimental results show that our method achieves a correct classification rate of about 88%, which is superior to other measures such as co-occurrence matrices, the Fourier power spectrum, and the texture spectrum. This implies that our feature set captures the granularity of the sonogram more effectively. It should be pointed out that our features are powerful for discriminating normal livers from cirrhosis, since no samples were misclassified between the normal-liver and cirrhosis sets. In addition, the experimental results also verify the usefulness of scale, because our multiscale feature set gains an eighteen-percent advantage over the direct use of the statistical features. This means that the wavelet transform at proper scales can effectively increase the distances among the statistical feature clusters of different liver diseases.
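
    A rough sketch of multiscale second-order feature extraction is given below. The Haar transform, the energy/variance statistics, and the random region of interest are stand-ins chosen for illustration; the paper's actual wavelet and second-order statistics may differ.

    ```python
    import numpy as np

    def haar_dwt2(img):
        """One level of a separable 2-D Haar transform: returns the
        approximation and the horizontal/vertical/diagonal detail subbands."""
        a = (img[0::2, :] + img[1::2, :]) / 2.0
        d = (img[0::2, :] - img[1::2, :]) / 2.0
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    def multiscale_features(img, levels=3):
        """Per-scale texture features from the detail subbands.  Energy and
        variance serve here as simple stand-ins for the second-order
        statistics used in the paper; the third scale is the one reported as
        most discriminative."""
        feats, ll = [], img.astype(float)
        for _ in range(levels):
            ll, lh, hl, hh = haar_dwt2(ll)
            for band in (lh, hl, hh):
                feats.extend([np.mean(band ** 2), np.var(band)])
        return np.array(feats)

    roi = np.random.default_rng(0).normal(size=(64, 64))   # stand-in for a liver ROI
    print(multiscale_features(roi, levels=3).shape)        # (18,) = 3 scales x 3 bands x 2
    ```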

  • Construction of Complex-Valued Wavelets and Its Applications to Scattering Problems

    Jeng-Long LEOU  Jiunn-Ming HUANG  Shyh-Kang JENG  Hsueh-Jyh LI  

     
    PAPER-Fiber-Optic Transmission

      Vol:
    E83-B No:6
      Page(s):
    1298-1307

    This paper introduces the construction of a family of complex-valued scaling functions and wavelets with symmetry/antisymmetry, compact support, and orthogonality from the Daubechies polynomial, and applies them to electromagnetic scattering problems. For simplicity, only the two extreme cases in the family, maximum-localized and minimum-localized complex-valued wavelets, are investigated. Regularities in the root locations of the Daubechies polynomial under spectral factorization are also presented for constructing these two extreme classes of complex-valued wavelets. When wavelets are used as basis functions to solve electromagnetic scattering problems by the method of moments (MoM), they often lead to sparse matrix equations. We compare the sparsity of the MoM matrices obtained with real-valued Daubechies wavelets, minimum-localized complex-valued Daubechies wavelets, and maximum-localized complex-valued Daubechies wavelets. The research summarized in this paper shows that wavelets with a smaller signal width result in a sparser MoM matrix, especially when the scatterer has many corners.
