
Keyword Search Result

[Keyword] SC (4570 hits)

Results 2701-2720 of 4570

  • Discrete Hessian Matrix for L-Convex Functions

    Satoko MORIGUCHI  Kazuo MUROTA  

     
    PAPER

    Vol: E88-A No:5, Page(s): 1104-1108

    L-convex functions are nonlinear discrete functions on integer points that are computationally tractable in optimization. In this paper, a discrete Hessian matrix and a local quadratic expansion are defined for L-convex functions. We characterize L-convex functions in terms of the discrete Hessian matrix and the local quadratic expansion.
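
    As a hedged illustration (notation and normalization here are this note's assumptions, not necessarily the paper's exact definition), a natural discrete Hessian for g: Z^n → R is built from second-order differences along unit vectors:

        \[
        H_{ij}(p) \;=\; g(p + e_i + e_j) - g(p + e_i) - g(p + e_j) + g(p),
        \qquad 1 \le i, j \le n,
        \]

    where e_i denotes the i-th unit vector. Conditions on the sign pattern and row sums of such a matrix are the kind of criteria by which L-convexity can be characterized.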

  • Scale-Space Processing of Point-Sampled Geometry for Efficient 3D Object Segmentation

    Hamid LAGA  Hiroki TAKAHASHI  Masayuki NAKAJIMA  

     
    PAPER

    Vol: E88-D No:5, Page(s): 963-970

    In this paper, we present a novel framework for analyzing and segmenting point-sampled 3D objects. Our algorithm computes a decomposition of a given point-set surface into meaningful components, which are delimited by line features and deep concavities. Central to our method is the extension of scale-space theory to three-dimensional space, which allows feature analysis and classification at different scales. A new surface classifier is then computed and used in an anisotropic diffusion process via partial differential equations (PDEs). The algorithm avoids misclassifications due to fuzzy and incomplete line features. It operates directly on points and requires no vertex connectivity information. We demonstrate and discuss its performance on a collection of point-sampled 3D objects, including CAD and natural models. Applications include 3D shape matching and retrieval, surface reconstruction, and feature-preserving simplification.
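
    A minimal sketch of the scale-space construction on raw points (this is an isotropic toy, not the authors' classifier-driven anisotropic diffusion; the k-nearest-neighbor weighting and the NumPy/SciPy usage are assumptions of the sketch):

        import numpy as np
        from scipy.spatial import cKDTree

        def scale_space_smooth(points, k=12, sigma=0.05, steps=3):
            """Build a scale space of a point set by repeated Gaussian-weighted
            averaging over k-nearest neighborhoods (one discrete diffusion step
            per level); no vertex connectivity is needed."""
            levels = [np.asarray(points, dtype=float)]
            for _ in range(steps):
                current = levels[-1]
                tree = cKDTree(current)
                dists, idx = tree.query(current, k=k)          # (n, k) neighbors
                weights = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))
                weights /= weights.sum(axis=1, keepdims=True)
                # Each point moves toward the weighted centroid of its neighborhood.
                levels.append(np.einsum('nk,nkd->nd', weights, current[idx]))
            return levels

    A per-point feature measure can then be derived, for instance from the displacement between consecutive scales; the paper instead drives an anisotropic diffusion with a surface classifier so that line features and concavities are preserved rather than blurred.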

  • Estimating the Performance of a Large Enterprise Network for Updating Routing Information

    Yukio OGAWA  Teruhiro HIRATA  Kouji TAKAMURA  Keiichi YAMAHA  Satomu SAITOU  Kouichi IWANAGA  Tsutomu KOITA  

     
    PAPER-Network Management/Operation

    Vol: E88-B No:5, Page(s): 2054-2061

    We have developed an experimental approach that allows us to estimate the performance of a large-scale enterprise network in updating routing information. This approach was applied to the integration of the UFJ Bank network system on January 15, 2002. Its main characteristic is the application of a formula that represents the delays in updating routing information that accompany reductions in CPU resources. The procedure consists of two steps: one is to estimate the reduction in available CPU resources caused by the forwarding of data packets at a router, and the other is to estimate the CPU resources required to reply to a query about a new route and subsequently update the routing information. These steps were applied to estimate the performance of the network in terms of routing information convergence. The results of our experiments on the network showed that updating the routing information was possible as long as the average CPU utilization at the routers during any five-minute period was less than 40%. We were able to apply this guideline and thus confirm the stability of the UFJ Bank network.
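
    A hypothetical sketch of the two-step estimate described above; the linear cost model, the coefficients, and the function names are illustrative assumptions, not the paper's actual formula:

        def cpu_used_by_forwarding(pps, cost_per_packet_pct=0.004):
            """Step 1: CPU share (0-1) consumed by forwarding `pps` packets/s."""
            return min(1.0, pps * cost_per_packet_pct / 100.0)

        def routing_update_delay(routes, cpu_free, per_route_cost_s=0.002):
            """Step 2: time to answer a route query and install `routes` entries,
            assuming the work scales inversely with the remaining CPU share."""
            if cpu_free <= 0.0:
                return float("inf")      # no headroom: convergence stalls
            return routes * per_route_cost_s / cpu_free

        forwarding = cpu_used_by_forwarding(pps=5000)          # -> 0.20 (20%)
        print(routing_update_delay(20000, 1.0 - forwarding))   # -> 50.0 seconds

    The paper's empirical guideline plays the role of the admission threshold here: convergence remained feasible while the five-minute average CPU utilization at the routers stayed below 40%.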

  • Acquisition and Maintenance of Knowledge for Online Navigation Suggestions

    Juan D. VELASQUEZ  Richard WEBER  Hiroshi YASUDA  Terumasa AOKI  

     
    PAPER-Artificial Intelligence and Cognitive Science

    Vol: E88-D No:5, Page(s): 993-1003

    The Internet has become an important medium for effective marketing and efficient operations for many institutions. Visitors to a particular web site leave behind valuable information on their preferences, requirements, and demands regarding the offered products and/or services. Understanding these requirements online, i.e., during a particular visit, is both a difficult technical challenge and a tremendous business opportunity. Web sites that can provide effective online navigation suggestions to their visitors can exploit the potential inherent in the data such visits generate every day. However, identifying, collecting, and maintaining the knowledge that navigation suggestions are based on is far from trivial. We propose a methodology for acquiring and maintaining this knowledge efficiently using data mart and web mining technology. Its effectiveness has been shown in an application for a bank's web site.

  • Efficient Wavelet-Based Image Retrieval Using Coarse Segmentation and Fine Region Feature Extraction

    Yongqing SUN  Shinji OZAWA  

     
    PAPER-Image Processing and Video Processing

    Vol: E88-D No:5, Page(s): 1021-1030

    Semantic image segmentation and appropriate region content description are crucial issues for region-based image retrieval (RBIR). In this paper, a novel region-based image retrieval method is proposed that performs fast coarse image segmentation and fine region feature extraction using the decomposition property of the image wavelet transform. First, coarse image segmentation is conducted efficiently in the Low-Low (LL) frequency subband of the wavelet transform. Second, the feature vector of each segmented region is extracted hierarchically from all the wavelet frequency subbands, finely capturing the distinctive features (e.g., semantic texture) inside each region. Experimental results show the efficiency and effectiveness of the proposed method for region-based image retrieval.
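
    A hedged sketch of the two stages (PyWavelets and scikit-learn are assumptions of this sketch, and clustering LL intensities is a toy stand-in for the paper's segmentation criterion and feature vector):

        import numpy as np
        import pywt
        from sklearn.cluster import KMeans

        def _nearest(labels, shape):
            # Nearest-neighbor resize of the LL-band label map to a subband shape.
            ys = np.arange(shape[0]) * labels.shape[0] // shape[0]
            xs = np.arange(shape[1]) * labels.shape[1] // shape[1]
            return labels[np.ix_(ys, xs)]

        def segment_and_describe(gray, levels=2, n_regions=4):
            """Coarse segmentation in the LL subband, then per-region energies
            gathered hierarchically from every detail subband."""
            coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), 'db2', level=levels)
            ll = coeffs[0]
            labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(
                ll.reshape(-1, 1)).reshape(ll.shape)
            feats = np.zeros((n_regions, 1 + 3 * levels))
            for r in range(n_regions):
                feats[r, 0] = ll[labels == r].mean()
                col = 1
                for (lh, hl, hh) in coeffs[1:]:            # finer and finer scales
                    mask = _nearest(labels, lh.shape) == r
                    for band in (lh, hl, hh):
                        feats[r, col] = np.sqrt((band[mask] ** 2).mean())
                        col += 1
            return labels, feats

    Segmenting in the small LL band keeps the coarse stage cheap, while the per-region subband energies approximate the kind of fine texture description that region matching relies on.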

  • Strong Identification Based on a Hard-on-Average Problem

    Pino CABALLERO-GIL  

     
    PAPER

    Vol: E88-A No:5, Page(s): 1117-1121

    The aim of this work is to investigate the possibility of designing zero-knowledge identification schemes based on hard-on-average problems. It includes a new two-party identification protocol whose security relies on a discrete mathematics problem classified as DistNP-Complete under the average-case analysis, the so-called Distributional Matrix Representability Problem. Thanks to the use of the search version of the mentioned problem, the zero-knowledge property is formally proved by black-box simulation, and consequently the security of the proposed scheme is actually guaranteed. Furthermore, with the proposal of a new zero-knowledge proof based on a problem never used before for this purpose, the set of tools for designing cryptographic applications is enlarged.

  • Scalable QoS Assurances with Measurement-Based Admission Control

    Chaiwat OOTTAMAKORN  Dennis BUSHMITCH  

     
    PAPER-Network

    Vol: E88-B No:5, Page(s): 2023-2034

    Among recent trends in Quality of Service (QoS) provisioning in the Internet is the Differentiated Services architecture, termed DiffServ. The successful deployment of DiffServ to provide premium QoS guarantees to network traffic requires an effective admission control mechanism, which needs to be scalable and relatively simple to implement. In this paper we present a QoS network framework with novel and effective measurement-based resource management and admission control mechanisms. The mechanism is based on the characteristics of measured arrival and departure traffic, which are captured via passive monitoring. We implement the mechanism at the edge routers of a DiffServ domain. The admission control mechanism is executed only at the edge routers and does not require any signaling between inner routers. It does not depend on the underlying network topology or on any specification of the cross traffic present in the domain; the mechanism is therefore scalable. In addition, the proposed approach does not require any traffic policing mechanism at the entrance of the network. This approach can provide statistical QoS guarantees to a variety of service classes within a DiffServ domain. We show that the proposed framework can provide a high degree of network resource sharing among multiple traffic classes while satisfying their QoS requirements. To evaluate the effectiveness of the proposed framework, we perform a set of simulations on a number of bursty video traffic sources.
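
    A toy version of the admission test an edge router could run on measured aggregates; the Gaussian-tail approximation and the parameter names are assumptions of this sketch, not the paper's mechanism:

        import math

        def admit(measured_mean_bps, measured_var, new_flow_peak_bps,
                  class_capacity_bps, violation_prob=1e-4):
            """Admit a new flow if the measured aggregate plus the new flow's
            peak still fits the class share with a Gaussian-tail safety margin."""
            # Approximate tail bound: P[rate > c] <= eps when mean + z*sigma <= c.
            z = math.sqrt(-2.0 * math.log(violation_prob))
            required = (measured_mean_bps + z * math.sqrt(measured_var)
                        + new_flow_peak_bps)
            return required <= class_capacity_bps

        # 60 Mb/s mean and 5 Mb/s std-dev measured passively for the class;
        # a new flow with a 2 Mb/s peak on a 100 Mb/s class share is admitted.
        print(admit(60e6, (5e6) ** 2, 2e6, 100e6))   # -> True

    Because only passively measured edge statistics enter the test, nothing needs to be signaled to or configured on the inner routers, which is what makes the mechanism scale.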

  • Practical Issues Related to Disk Scheduling for Video-On-Demand Services

    Ilhoon SHIN  Kern KOH  Youjip WON  

     
    PAPER-Multimedia Systems for Communications

    Vol: E88-B No:5, Page(s): 2156-2164

    This paper discusses several practical issues related to the provision of video-on-demand (VOD) services, focusing on the retrieval of video data from disk on the server. First, with regard to system design, the pros and cons of cycle-based scheduling algorithms for VOD servers are compared, and an appropriate policy for each system configuration is presented. Second, we present a way to tune the cycle-based scheduling algorithm so that it maximizes profit. Third, a method to overcome the drawbacks of cycle-based scheduling algorithms is proposed, and its cost is analyzed.
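
    A minimal sketch of cycle-based retrieval (block sizes, the timing model, and the class names are assumptions of the sketch): in each cycle the server reads one block per active stream, visiting requests in SCAN order to keep seek overhead low.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Stream:
            stream_id: int
            next_block_pos: int           # disk position of the next video block

        def schedule_cycle(streams: List[Stream]) -> List[int]:
            """One service cycle: serve every admitted stream exactly once,
            in SCAN (elevator) order over the requested block positions."""
            return [s.stream_id
                    for s in sorted(streams, key=lambda s: s.next_block_pos)]

        # Admission for a cycle of length T: sum_i (seek_i + block_i / rate)
        # must stay below T, or some stream misses its playback deadline.
        print(schedule_cycle([Stream(1, 9200), Stream(2, 150), Stream(3, 5400)]))
        # -> [2, 3, 1]

    The familiar trade-off is that a longer cycle admits more streams but increases start-up latency and buffer demand; trade-offs of this kind are what the tuning and cost analysis in the paper address.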

  • The Long Length DHT Design with a New Hardware Efficient Distributed Arithmetic Approach and Cyclic Preserving Partitioning

    Hun-Chen CHEN  Tian-Sheuan CHANG  Jiun-In GUO  Chein-Wei JEN  

     
    PAPER-Electronic Circuits

    Vol: E88-C No:5, Page(s): 1061-1069

    This paper presents a long-length discrete Hartley transform (DHT) design with a new hardware-efficient distributed arithmetic (DA) approach. The new DA design approach not only explores the constant property of the coefficients, as conventional DA does, but also exploits their cyclic property. To apply this approach efficiently to the long-length DHT, we first decompose the long-length DHT algorithm into short ones using the prime factor algorithm (PFA), and further reformulate it using the Agarwal-Cooley algorithm so that all the partitioned short DHTs still retain the cyclic property. We also exploit computation sharing on the ROM contents to reduce the hardware cost, at the cost of slower computation. Comparison with existing designs shows that the proposed design can reduce the area-delay product by 52% to 91%, based on a 0.35 µm CMOS cell library.
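
    For reference, the standard length-N DHT that the DA hardware evaluates (the definition is textbook material and not specific to this design):

        \[
        H(k) \;=\; \sum_{n=0}^{N-1} x(n)\,\mathrm{cas}\!\left(\frac{2\pi nk}{N}\right),
        \qquad \mathrm{cas}(\theta) = \cos\theta + \sin\theta .
        \]

    After the PFA and Agarwal-Cooley reindexing, the short transforms inherit a cyclic structure, so coefficient patterns repeat under rotations of the input ordering; this is, roughly, the sharing that the cyclic-preserving partitioning makes available to the DA ROMs.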

  • Automatic Repair Mechanism of Secret Sharing Storage System

    Daisuke HAYASHI  Toshiyuki MIYAMOTO  Shinji DOI  Sadatoshi KUMAGAI  

     
    PAPER-Database

    Vol: E88-D No:5, Page(s): 971-975

    For mission-critical and safety-critical systems such as medical, financial, or administrative information systems, a secure and reliable storage system is indispensable. The main purpose of our research is to develop a highly secure and highly reliable storage system. We have proposed a storage system that utilizes a secret sharing scheme. The storage system is called the Secret Sharing Storage System. So far, we have developed a prototype of the storage system. In this paper, we propose an automatic repair mechanism, and an interval decision method for this system.
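
    The abstract does not name the sharing scheme; a (k, n) threshold scheme in the style of Shamir is the classic choice and is sketched below purely as an illustration (the prime and the API are assumptions of this sketch):

        import random

        P = 2_147_483_647          # prime field for this toy demo

        def split(secret, n, k):
            """Evaluate a random degree-(k-1) polynomial with constant term
            `secret` at x = 1..n; each (x, y) pair is one share."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def combine(shares):
            """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = split(123456789, n=5, k=3)
        print(combine(shares[:3]) == 123456789)   # any 3 of the 5 shares suffice

    With a scheme of this kind, an automatic repair mechanism can regenerate a lost server's share from any k healthy shares, which is the flavor of repair the paper automates.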

  • Antenna Design by Using Method of Moments

    Kunio SAWAYA  

     
    INVITED PAPER

    Vol: E88-B No:5, Page(s): 1766-1773

    The theory of the method of moments (MoM), which has been widely used as a numerical technique for analyzing the characteristics of antennas and scatterers, is described. First, the steps of MoM to solve integral equations for conducting wires and planes are presented. It is pointed out that MoM combined with Galerkin's method yields highly accurate results. The importance of ensuring the continuity condition of current on conducting bodies is emphasized and numerical examples for a conducting structure involving junctions of wire segments and planar segments are presented. Finally, MoM for dielectric scatterers including recent developments is described.
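
    In standard MoM notation (the paper's own notation may differ), the unknown current is expanded in basis functions and tested with the same functions (Galerkin), which turns the integral equation into a dense linear system:

        \[
        J \approx \sum_{n=1}^{N} I_n f_n, \qquad
        Z_{mn} = \langle f_m,\; L f_n \rangle, \qquad
        V_m = \langle f_m,\; E^{\mathrm{inc}} \rangle, \qquad
        \mathbf{Z}\,\mathbf{I} = \mathbf{V},
        \]

    where L is the integro-differential operator of the governing integral equation. The continuity condition emphasized in the paper constrains the choice of the f_n so that current flows without artificial charge accumulation across wire-to-surface junctions.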

  • An Optimization Process for Hybrid Dual-Stage Raman/EDF Amplifiers When Kerr-Nonlinearity, Double Rayleigh Backscattering Noise and OSNR are Important

    Andrew Che-On CHAN  Malin PREMARATNE  

     
    PAPER-Optical Fibers, Cables and Fiber Devices

    Vol: E88-C No:5, Page(s): 912-919

    In this paper, a detailed model of a hybrid dual-stage Raman/erbium-doped fiber (EDF) amplifier is presented. This model takes into account the impact of double Rayleigh backscattering (DRB) noise, amplified spontaneous emission (ASE) noise, and Kerr-nonlinearity-induced impairments in the amplification process. Using this model, we present a comprehensive analysis of the operation of hybrid dual-stage Raman/EDF amplifiers under these impairments. We show that, under fixed total gain conditions for the amplifier module, high Raman gain introduces increased DRB noise into the amplified signals, whereas low Raman gain introduces high ASE noise power through the EDF amplifier. Therefore a balance between the Raman amplifier gain and the EDF amplifier gain is required for optimal operation. These observations are then combined into an optimization process, which could be applied to improve the design of hybrid dual-stage Raman/EDF amplifiers.

  • On the Security and the Efficiency of Multi-Signature Schemes Based on a Trapdoor One-Way Permutation

    Kei KAWAUCHI  Mitsuru TADA  

     
    PAPER

    Vol: E88-A No:5, Page(s): 1274-1282

    Many multi-signature schemes in which the signers use their respective moduli in the signature generation process have been proposed to date. FDH-based schemes were proposed by Mitomi et al. and Lysyanskaya et al., and PSS-based schemes by Kawauchi et al. and Komano et al. The FDH-based schemes have the advantage that the signature size is independent of the number of signers. However, since the signature generation algorithm is deterministic, they suffer from a poor reduction rate. Consequently, the signers must use keys large enough to maintain security. On the other hand, the PSS-based schemes achieve good reduction rates because their signature generation algorithms are probabilistic. However, the size of the random component exceeds the security parameter, and the signature size therefore grows by the total size of the random components used by the signers. This means that, if the random components are smaller, the growth of the signature size can be kept smaller. In this paper, we propose a new probabilistic multi-signature scheme, which can be proven secure even though smaller random components are used. We compare the proposed scheme with the two existing schemes and conclude that the proposed scheme is, in this sense, optimal.

  • A Heuristic Algorithm for One-Machine Just-In-Time Scheduling Problem with Periodic Time Slots

    Eishi CHIBA  Kunihiko HIRAISHI  

     
    PAPER

    Vol: E88-A No:5, Page(s): 1192-1199

    The just-in-time scheduling problem is the problem of finding an optimal schedule such that each job finishes exactly at its due date. We study the problem under a realistic assumption called periodic time slots. In this paper, we prove that this problem cannot be approximated, assuming P≠NP. Next, we present a heuristic algorithm for the single-machine case. The key idea is a reduction of the problem to a network flow problem. The heuristic algorithm is fast because its main part, the computation of a minimum-cost flow, dominates the total running time. Our algorithm runs in O(n^3) time in the worst case, where n is the number of jobs. We then show some simulation results. Finally, we identify cases in which our algorithm returns an optimal schedule and cases in which it is a factor-1.5 approximation algorithm, and we also give an approximation ratio that depends on the upper bound of the set-up times.
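
    The reduction to a flow problem is the paper's key idea; the concrete graph encoding below (jobs on one side, periodic slots on the other, arc costs equal to the deviation from the due date) and the use of networkx are assumptions of this sketch, which also ignores set-up times:

        import networkx as nx

        def jit_schedule(due, slot_times):
            """Assign each job to one periodic time slot so as to minimize the
            total |slot time - due date| via a min-cost-flow formulation."""
            G = nx.DiGraph()
            G.add_node('s', demand=-len(due))
            G.add_node('t', demand=len(due))
            for j, d in enumerate(due):
                G.add_edge('s', ('job', j), capacity=1, weight=0)
                for k, t in enumerate(slot_times):
                    G.add_edge(('job', j), ('slot', k), capacity=1, weight=abs(t - d))
            for k in range(len(slot_times)):
                G.add_edge(('slot', k), 't', capacity=1, weight=0)
            flow = nx.min_cost_flow(G)
            return {j: k for j in range(len(due))
                    for k in range(len(slot_times))
                    if flow[('job', j)][('slot', k)] == 1}

        print(jit_schedule(due=[3, 7, 7], slot_times=[2, 4, 6, 8]))
        # -> e.g. {0: 1, 1: 2, 2: 3}: each job lands one time unit from its due date

    The min-cost-flow computation dominates the running time, which is what gives the heuristic its overall O(n^3) worst-case bound.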

  • An Efficient Algorithm to Reduce the Inflations in Multi-Supertask Environment by Using a Transient Behavior Prediction Method

    Da-Ren CHEN  Chiun-Chieh HSU  

     
    PAPER

    Vol: E88-A No:5, Page(s): 1181-1191

    The supertask approach was proposed by Moir and Ramamurthy as a means of supporting non-migratory tasks in Pfair-scheduled systems. In this approach, tasks bound to the same processor are combined into a single server task, called a supertask, which is scheduled as an ordinary Pfair task. When a supertask is scheduled, one of its component tasks is selected for execution. In previous work, Holman et al. showed that component-task deadlines can be guaranteed by inflating each supertask's utilization, and their experimental results showed that the required inflation factors should be small in practice. However, the average inflation produced by their rules is much greater than that actually required by the supertasks. In this paper, we first propose the notion of transient behavior prediction for supertasks, which predicts the latest possible finish time of the subtasks that belong to supertasks. On the basis of this notion, we present an efficient schedulability algorithm for Pfair supertasks with which the deadlines of all component tasks can be guaranteed. In addition, we propose a task merging process that combines unschedulable supertasks with some Pfair tasks, so that the newly formed supertask can be scheduled in the system. Finally, we propose new reweighting functions that can be used when the previous two methods fail; they produce smaller inflation factors than the previous work does. To demonstrate the efficacy of the supertasking approach, we present experimental evaluations of our algorithm, which substantially decreases the number of reweightings and the size of the inflation when there are many supertasks in the Pfair-scheduled system.
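
    For context, the standard Pfair windows against which such predictions are made (these definitions are from the Pfair literature, not a contribution of this paper): a task T of weight wt(T) = e/p is divided into unit-time subtasks T_1, T_2, ..., with

        \[
        r(T_i) = \left\lfloor \frac{i-1}{wt(T)} \right\rfloor, \qquad
        d(T_i) = \left\lceil \frac{i}{wt(T)} \right\rceil ,
        \]

    and every subtask must execute inside its window [r(T_i), d(T_i)). Predicting the latest feasible finish time of a supertask's subtasks relative to these windows is what allows component-task deadlines to be checked without resorting to large utilization inflation.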

  • Immersive Multi-Projector Display on Hybrid Screens with Human-Scale Haptic Interface

    Seungzoo JEONG  Naoki HASHIMOTO  Makoto SATO  

     
    PAPER

    Vol: E88-D No:5, Page(s): 888-893

    Many immersive displays developed in previous research are strongly influenced by the design concept of the CAVE, the origin of immersive displays. From the viewpoint of human-scale interactive systems for virtual environments (VE), existing immersive systems do not fully exploit the potential of human senses. Such displays require a more complicated structure for flexible extension and are more restrictive of the user's movement. We therefore propose a novel multi-projector display for immersive VE with a haptic interface for more flexible and dynamic interaction. The display part of our system, named "D-vision," has a hybrid curved screen that combines flat and curved screens. It renders images seamlessly in real time and generates high-quality stereovision using a PC cluster and a two-pass technique. Furthermore, a human-scale string-based haptic device is integrated with the D-vision for a more interactive and immersive VE. In this paper, we give an overview of the D-vision and the technologies used for the human-scale haptic interface.

  • Zero-Knowledge Proof for the Independent Set Problem

    Pino CABALLERO-GIL  

     
    LETTER

    Vol: E88-A No:5, Page(s): 1301-1302

    An efficient computational Zero-Knowledge Proof of Knowledge whose security relies on the NP-completeness of the Independent Set Problem is presented here. The proposed algorithm is constructed from a bit commitment scheme based on the hardness of the Discrete Logarithm Problem, which guarantees the fulfillment of the soundness, completeness, and computational zero-knowledge properties, and allows avoiding the use of the Graph Isomorphism Problem, which is present in every previously known Zero-Knowledge Proof for the Independent Set Problem.
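
    The letter only states that the commitment relies on the Discrete Logarithm Problem; a Pedersen-style commitment is one standard instantiation and is sketched here with toy parameters (the actual scheme and parameter sizes in the letter may differ):

        import random

        # p = 2q + 1 is a safe prime; g and h generate the order-q subgroup of
        # quadratic residues.  In practice p is ~2048 bits and h is chosen so
        # that nobody knows log_g(h); these tiny values are for illustration.
        p, q = 2039, 1019
        g, h = 4, 9

        def commit(bit, r=None):
            """Commit to a bit: C = g^bit * h^r mod p (hiding and binding)."""
            r = random.randrange(q) if r is None else r
            return (pow(g, bit, p) * pow(h, r, p)) % p, r

        def open_ok(commitment, bit, r):
            """Opening check run by the verifier in each round of the proof."""
            return commitment == (pow(g, bit, p) * pow(h, r, p)) % p

        c, r = commit(1)
        print(open_ok(c, 1, r), open_ok(c, 0, r))   # -> True False

    In proofs of this kind, such commitments hide a randomized encoding of the witness (here, the independent set) until the verifier's challenge selects which parts the prover must open.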

  • On the Polynomial Time Computability of Abstract Ray-Tracing Problems

    Shuji ISOBE  Tetsuo KURIYAMA  Masahiro MAMBO  Hiroki SHIZUYA  

     
    PAPER

    Vol: E88-A No:5, Page(s): 1209-1213

    The abstract ray-tracing problem asks, for a given scene consisting of a light source, a light receiver and finitely many obstacles in a space, and a given positive integer ε > 0, whether a ray going out from the light source could reach the light receiver with intensity at least ε. The problem is known to be PSPACE-hard, and it is very unlikely that there exists an efficient algorithm to solve the problem without adding any restriction. In this paper, we show that the problem can be solved in polynomial time under some weak practical restrictions.

  • Thermally Robust Nickel Silicide Process for Nano-Scale CMOS Technology

    Soon-Young OH  Jang-Gn YUN  Bin-Feng HUANG  Yong-Jin KIM  Hee-Hwan JI  Sang-Bum HUH  Han-Seob CHA  Ui-Sik KIM  Jin-Suk WANG  Hi-Deok LEE  

     
    PAPER-Si Devices and Processes

    Vol: E88-C No:4, Page(s): 651-655

    A novel NiSi technology with a bi-layer Co/TiN capping structure is proposed for a highly thermally immune Ni silicide process. Much better thermal immunity of the Ni silicide was confirmed for post-silicidation furnace annealing up to 700°C for 30 min by introducing the Co/TiN bi-layer capping. The proposed structure is successfully applied to a nano-scale CMOSFET with a gate length of 80 nm. The sheet resistance of the nano-scale gate poly shows little degradation even after high-temperature furnace annealing at 650°C for 30 min. The Ni/Co/TiN structure is very promising for nano-scale MOSFET technology, which needs ultra-shallow junctions and high-temperature post-silicidation processes.

  • Scalable Packet Classification Using Condensate Bit Vector

    Pi-Chung WANG  Hung-Yi CHANG  Chia-Tai CHAN  Shuo-Cheng HU  

     
    PAPER

    Vol: E88-B No:4, Page(s): 1440-1447

    Packet classification is important in fulfilling the requirements of differentiated services in next-generation networks. One of the interesting hardware solutions proposed to solve the packet classification problem is the bit vector algorithm. Unlike other hardware solutions such as ternary CAM, it utilizes memory efficiently and achieves excellent performance for medium-size policy databases; however, it exhibits poor worst-case performance with a potentially large number of policies. In this paper, we propose an improved bit-vector algorithm named Condensate Bit Vector, which can be adapted to the large policy databases of backbone networks. Experiments showed that our proposed algorithm drastically improves storage requirements and search speed as compared to the original algorithm.
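
    A minimal sketch of the underlying bit-vector scheme (the condensing step that gives the proposed algorithm its name is not shown; the data layout and names are assumptions of this sketch): each field lookup returns a bit vector of the rules whose range covers the packet's value, and ANDing the per-field vectors yields the set of matching rules.

        from bisect import bisect_right

        def build_field(ranges):
            """Per-field precomputation: elementary-interval boundaries plus,
            for each interval, a bit vector of the rules covering it."""
            bounds = sorted({b for lo, hi in ranges for b in (lo, hi + 1)})
            vectors = []
            for left in bounds:
                vec = 0
                for i, (lo, hi) in enumerate(ranges):
                    if lo <= left <= hi:
                        vec |= 1 << i
                vectors.append(vec)
            return bounds, vectors

        def classify(packet, fields):
            """AND the per-field vectors; with rules stored in priority order,
            the lowest set bit of the result is the best match."""
            result = ~0
            for (bounds, vectors), value in zip(fields, packet):
                idx = bisect_right(bounds, value) - 1
                result &= vectors[idx] if idx >= 0 else 0
            return result

        # Two-field toy rules: (src range, dst range); rule index = priority.
        rules = [((0, 63), (0, 255)), ((32, 95), (128, 255)), ((0, 255), (0, 255))]
        fields = [build_field([r[d] for r in rules]) for d in range(2)]
        print(bin(classify((40, 200), fields)))   # -> 0b111: all three rules match

    The worst-case cost of this scheme is the width of these vectors, which grows linearly with the number of rules; the Condensate Bit Vector is aimed at reducing that storage and the associated memory accesses for backbone-scale databases.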
