
Keyword Search Result

[Keyword] OMP (3945 hits)

Results 2781-2800 of 3945

  • An Access Control Model for the Adhocracy Organization Using RBAC

    Won Bo SHIM  Seog PARK  

     
    PAPER-Protocols etc.
    Vol: E86-A No:1  Page(s): 165-175

    Access control checks whether a user holds an access right to a resource and then decides whether the access is to be allowed or denied. Typical access control models are the Discretionary Access Control model, the Mandatory Access Control model, and the Role-Based Access Control (RBAC) model. Today, the RBAC model has become popular and is recognized as an effective method. Until now, however, the RBAC model has been adequate only for bureaucratic organizations, in which roles are standardized and the organizational hierarchy is stable. Team-based access control models designed for team-based organizations have been proposed, but they do not reflect the features of an adhocracy organization such as a company task force team: organic, temporary, non-standardized, changeable, and obscure in its hierarchical relationships. This study identifies the characteristics that distinguish an adhocracy organization from the existing bureaucratic organization and shows why existing access control models cause problems there. Finally, a revised RBAC model is proposed to solve those problems and is analyzed against the main evaluation criteria.
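
    As a point of reference, the basic role-based check described above fits in a few lines; the sketch below uses illustrative users, roles, and permissions, and does not show the adhocracy-specific extensions (temporary teams, shifting hierarchies) the paper proposes.

```python
# Minimal RBAC: users obtain permissions only through assigned roles.
role_permissions = {
    "team_leader": {("report", "write"), ("report", "read")},
    "member":      {("report", "read")},
}
user_roles = {"alice": {"team_leader"}, "bob": {"member"}}

def check_access(user, resource, action):
    """Allow iff some role assigned to the user grants (resource, action)."""
    return any((resource, action) in role_permissions[r]
               for r in user_roles.get(user, ()))

assert check_access("alice", "report", "write")
assert not check_access("bob", "report", "write")
```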

  • Stereo Matching between Three Images by Iterative Refinement in PVS

    Makoto KIMURA  Hideo SAITO  Takeo KANADE  

     
    PAPER-Image Processing, Image Pattern Recognition
    Vol: E86-D No:1  Page(s): 89-100

    In the fields of computer vision and computer graphics, Image-Based Rendering (IBR) methods are often used to synthesize images of a real scene. Image synthesis by IBR requires dense, correct matching points between the images, but it requires neither 3D geometry reconstruction nor camera calibration in Euclidean geometry. A reconstructed 3D model, on the other hand, can easily reveal occlusions in the images. In this paper, we propose an approach that reconstructs 3D shape in a voxel space named Projective Voxel Space (PVS). Since PVS is defined by projective geometry, it requires only weak calibration. PVS is determined by rectification of the epipolar lines in three images; the three rectified images are orthogonal projections of the scene in PVS, so projection-related processing in PVS is simple. In both PVS and Euclidean geometry, a point in an image lies on the projection ray of a point on an object surface in the scene, so another image either contains the correct matching point (no occlusion) or contains no matching point at all (occlusion). This serves as a constraint on the search for matching points and object surfaces. Taking advantage of the simplicity of projection in PVS, correlation values between image points are computed and then iteratively refined using this constraint, and the shapes of the objects in the scene are finally acquired in PVS. The reconstructed shape in PVS is not geometrically similar to the 3D shape in Euclidean geometry, but it yields consistent matching points across the three images and also indicates the existence of occluded points. It is therefore sufficient for image synthesis by IBR.
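
    The iterative refinement lends itself to a toy illustration: a 1-D analogue with two scanlines, a correlation volume over candidate disparities, and a support/inhibition update in the spirit of cooperative stereo. This is only a loose analogue of the paper's three-view PVS formulation, run on made-up data.

```python
import numpy as np

# Toy 1-D analogue: correlation values over candidate disparities are
# iteratively refined; agreement among neighbors lends support, while
# competing disparities along the same ray inhibit each other.
left = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1], float)
right = np.roll(left, 2)                 # true disparity is 2

n, D = len(left), 5
C = np.zeros((n, D))
for d in range(D):                       # initial correlation volume
    C[:, d] = np.exp(-np.abs(left - np.roll(right, -d)))

for _ in range(10):                      # iterative refinement
    support = np.stack([np.convolve(C[:, d], np.ones(3), "same")
                        for d in range(D)], axis=1)
    inhibition = C.sum(axis=1, keepdims=True)   # uniqueness along a ray
    C = C * support / inhibition

print(C.argmax(axis=1))                  # settles on disparity 2
```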

  • Digit-Recurrence Algorithm for Computing Reciprocal Square-Root

    Naofumi TAKAGI  Daisuke MATSUOKA  Kazuyoshi TAKAGI  

     
    PAPER-VLSI Design Technology and CAD
    Vol: E86-A No:1  Page(s): 221-228

    A digit-recurrence algorithm is proposed for computing the reciprocal square root, which appears frequently in multimedia and graphics applications. The reciprocal square root is computed by iterating carry-propagation-free additions, shifts, and multiplications by one digit. Different versions of the algorithm are possible depending on the radix, the redundancy factor of the digit set, and so on. Details of a radix-2 version and a radix-4 version are shown, together with designs of a floating-point reciprocal square-root circuit based on them.
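
    The flavor of such a recurrence can be sketched in software: radix 2, signed digits in {-1, 0, 1}, with the scaled residual W_j = 2^j (1 - a*x_j^2) updated by shifts and one-digit multiplications. The greedy digit selection below is a stand-in for a real hardware selection function, and floats stand in for redundant arithmetic.

```python
def recip_sqrt(a, n=40):
    """Radix-2 digit recurrence for 1/sqrt(a), with a in [0.25, 1)."""
    x, w = 1.0, 1.0 - a                   # W_0 = 1 - a * x_0^2
    for j in range(1, n + 1):
        step = 2.0 ** -j
        # greedily pick the digit that keeps the residual smallest
        d = min((-1, 0, 1),
                key=lambda t: abs(2*w - 2*a*x*t - a*t*t*step))
        w = 2*w - 2*a*x*d - a*d*d*step    # W_j from W_{j-1}
        x = x + d * step                  # append digit d * 2^-j
    return x

print(recip_sqrt(0.7), 0.7 ** -0.5)       # agree to ~11 digits
```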

  • Effectiveness of Power Control for Approximately Synchronized CDMA System

    Satoshi WAKOH  Hideyuki TORII  Makoto NAKAMURA  

     
    PAPER
    Vol: E86-B No:1  Page(s): 88-95

    Approximately synchronized CDMA (AS-CDMA) can reduce the inter-channel interference within a cell to zero, which is an advantage over conventional DS-CDMA. However, the inter-cell interference of the AS-CDMA cellular system had not been sufficiently examined, and consequently neither had its overall performance. In this paper, the factors that affect the inter-cell interference of the AS-CDMA cellular system are examined theoretically and evaluated by computer simulation. The results show that transmission power control is effective in reducing the inter-cell interference. In addition, the overall performance of the AS-CDMA cellular system is clarified for the first time and found to be higher than that of the conventional DS-CDMA cellular system.

  • Software Obfuscation on a Theoretical Basis and Its Implementation

    Toshio OGISO  Yusuke SAKABE  Masakazu SOSHI  Atsuko MIYAJI  

     
    PAPER-Protocols etc.
    Vol: E86-A No:1  Page(s): 176-186

    Software obfuscation is a promising approach to protecting the intellectual property rights and secret information of software in untrusted environments. Unfortunately, previous software obfuscation techniques share a major drawback: they have no theoretical basis, so it is unclear how effective they are. We therefore propose new software obfuscation techniques based on the difficulty of interprocedural analysis of software programs. Their essence is a new complexity problem: precisely determining the address that a function pointer points to in the presence of arrays of function pointers. We show that this problem is NP-hard, which provides a theoretical basis for our obfuscation techniques. Furthermore, we have implemented a prototype tool that obfuscates C programs according to the proposed techniques; we describe the implementation and discuss the experimental results.
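
    The construct at the heart of the hardness result can be rendered as a Python analogue of the C function-pointer arrays the paper targets (illustrative code, not the authors' tool output): which callee an indirect call reaches depends on run-time data, so a precise static analysis must consider every possibility.

```python
# Dispatch through an array of "function pointers" with a computed
# index: statically resolving which function each call reaches is the
# problem the paper shows to be NP-hard.
def step_a(s): return s + 1
def step_b(s): return s * 2
def step_c(s): return s - 3

TABLE = (step_a, step_b, step_c)

def run(state, key):
    for ch in key:                  # the index depends on input data
        state = TABLE[(state + ord(ch)) % len(TABLE)](state)
    return state

print(run(7, "obfuscate"))
```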

  • Use of Montgomery Trick in Precomputation of Multi-Scalar Multiplication in Elliptic Curve Cryptosystems

    Katsuyuki OKEYA  Kouichi SAKURAI  

     
    PAPER-Asymmetric Ciphers
    Vol: E86-A No:1  Page(s): 98-112

    We develop efficient precomputation methods for multi-scalar multiplication on elliptic curve cryptosystems (ECC). Recall that multi-scalar multiplication is required in some elliptic curve cryptosystems, including signature verification in the ECDSA scheme. One known fast method is the simultaneous method, which consists of two stages: the precomputation stage computes the points used by the evaluation stage, and the evaluation stage computes the multi-scalar multiplication from the precomputed points. The evaluation stage is fast because the number of additions is small; however, a large window width forces an enormous number of points to be computed in the precomputation stage, and hence an abundance of computationally expensive inversions. As is well known, the precomputation stage therefore takes considerable time. Using the Montgomery trick, our proposed method reduces the number of inversions from O(2^(2w)) to O(w) for a window width w. In addition, our method computes uP and vQ first and then computes uP+vQ, where P and Q are elliptic points; this procedure removes unused precomputed points. Compared with the method without the Montgomery trick, our method is 3.6 times faster in the precomputation stage for the simultaneous sliding-window NAF method with window width w=3 and 160-bit scalars, under the assumption that I/M=30 and S/M=0.8, where I, M, and S denote the computational costs of inversion, multiplication, and squaring on a finite field, respectively.
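
    The Montgomery trick itself is simple to state: n field inversions are traded for one inversion plus 3(n-1) multiplications. Below is a sketch over a prime field; the prime and the test values are illustrative, and in the paper the trick is applied to the coordinates arising in the precomputation stage.

```python
p = 2**255 - 19                        # any prime field will do here

def batch_inverse(xs):
    """Invert every element of xs mod p using a single modular inversion."""
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):         # prefix[i+1] = x_0 * ... * x_i
        prefix[i + 1] = prefix[i] * x % p
    inv = pow(prefix[n], -1, p)        # the one real inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):     # peel each inverse off the product
        out[i] = inv * prefix[i] % p
        inv = inv * xs[i] % p
    return out

xs = [3, 7, 11, 12345]
assert all(x * y % p == 1 for x, y in zip(xs, batch_inverse(xs)))
```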

  • Improvement of CT Image Degraded by Quantum Mottle Using Singularity Detection

    Yi-Qiang YANG  Nobuyuki NAKAMORI  Yasuo YOSHIDA  

     
    PAPER-Medical Engineering
    Vol: E86-D No:1  Page(s): 123-130

    To improve CT images degraded by radiographic noise such as quantum mottle, we propose a method based on the wavelet transform modulus sum (WTMS). The noisy and regular parts of a signal can be distinguished by tracing the evolution of its WTMS across scales. Our results show that the proposed method removes most of the quantum mottle in the projections of the Shepp-Logan phantom while preserving the simulated cranium well. The denoised CT images show a good signal-to-noise ratio in the region of interest. We have also investigated the relation between the number of X-ray photons and the quality of images reconstructed from denoised projections. The experimental results suggest that this method can reduce the patient's dose to about 1/10 with the same visual quality.
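
    For orientation, a generic wavelet-shrinkage pass over a synthetic 1-D projection is sketched below with PyWavelets. This is plain soft thresholding with the universal threshold, not the paper's WTMS singularity detection, and the signal is a made-up stand-in.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + (t > 0.5)       # smooth part + an edge
noisy = clean + 0.2 * rng.standard_normal(t.size)   # noise stand-in

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise level estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                        for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")
print(np.abs(denoised - clean).mean(), np.abs(noisy - clean).mean())
```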

  • Optimization of Path Bandwidth Allocation for Large-Scale Telecommunication Networks

    Sheng Ye HUANG  Wu YE  Sui Li FENG  

     
    LETTER-Network
    Vol: E85-B No:12  Page(s): 2960-2962

    The optimization of path bandwidth allocation in large-scale telecommunication networks is studied. A decomposition-coordination scheme is introduced into the global optimization, which aims at minimizing the worst-case call blocking probabilities in the network. Both the space and time complexity are reduced, while the accuracy remains almost the same as that of direct optimization.

  • Design Exploration of an Industrial Embedded Microcontroller: Performance, Cost and Software Compatibility

    Ing-Jer HUANG  Li-Rong WANG  Yu-Min WANG  Tai-An LU  

     
    PAPER-VLSI Design
    Vol: E85-A No:12  Page(s): 2624-2635

    This paper presents a case study in synthesizing the industrial embedded microcontroller HT48100 and analyzing the performance, cost, and software compatibility of its implementation alternatives, using PIPER-II, a hardware/software co-design system for microcontrollers and microprocessors. The synthesis tool accepts an instruction set architecture (behavioral) specification as input, and produces pipelined RTL designs with their simulators as well as reordering constraints that guide the compiler backend in optimizing code for the synthesized designs. A compiler backend is provided to optimize the application software according to the reordering constraints. The study shows that the co-design approach helped the original design team analyze architectural properties, identify inefficient architecture features, and explore possible architectural improvements and their impact on both hardware and software. Feasible future upgrades for the microcontroller family were also identified.

  • Application of a Word-Based Text Compression Method to Japanese and Chinese Texts

    Shigeru YOSHIDA  Takashi MORIHARA  Hironori YAHAGI  Noriko ITANI  

     
    PAPER-Information Theory
    Vol: E85-A No:12  Page(s): 2933-2938

    16-bit Asian-language codes cannot be compressed well by conventional text compression schemes based on 8-bit sampling. Previously, we reported the application of a word-based text compression method using 16-bit sampling to Japanese texts. This paper describes our further efforts in applying a word-based method with a static canonical Huffman encoder to both Japanese and Chinese texts. The method supports a multilingual environment, since the word dictionary and the canonical Huffman code table are simply replaced for the language at hand. A computer simulation showed that the method is effective for both languages: the compression ratio was a little under 0.5 without the Markov context and around 0.4 when the first-order Markov context was taken into account.
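
    The word-based pipeline amounts to: tokenize, count, build a static Huffman code, encode. The sketch below tokenizes with a simple regex and builds a plain (non-canonical) Huffman code; the paper's 16-bit-sampling word dictionary for Japanese and Chinese is not reproduced.

```python
import heapq, itertools, re
from collections import Counter

def huffman_code(freqs):
    """Return a symbol -> bitstring map for a static Huffman code."""
    tie = itertools.count()                  # tie-breaker for equal weights
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

text = "the cat sat on the mat and the cat ran"
words = re.findall(r"\S+|\s+", text)         # words and separators as tokens
code = huffman_code(Counter(words))
bits = "".join(code[w] for w in words)
print(len(bits), "bits vs", 8 * len(text), "bits uncompressed")
```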

  • Effectiveness of Receiver-Side Compensation against FBG Dispersion-Induced SNR Degradation in Long-Haul WDM Optical Networks

    Hideki MAEDA  Masatoyo SUMIDA  Tsutomu KUBO  Takamasa IMAI  

     
    LETTER-Fiber-Optic Transmission
    Vol: E85-B No:12  Page(s): 2943-2945

    We clarify the effectiveness of receiver-side compensation in offsetting the electrical signal-to-noise ratio (SNR) degradation induced by fiber Bragg grating (FBG) dispersion in a 10 Gb/s, 8-channel wavelength-division multiplexing (WDM) 6,400 km transmission system. Receiver-side compensation greatly reduces the SNR degradation. The allowable accumulated FBG dispersion is -400 to 1000 ps/nm for the worst arrangement, a single FBG at the transmitter, which is about half the accumulated fiber dispersion permissible with receiver-side compensation.

  • A Parallel Algorithm for the Stack Breadth-First Search

    Takaaki NAKASHIMA  Akihiro FUJIWARA  

     
    LETTER-Computational Complexity Theory
    Vol: E85-D No:12  Page(s): 1955-1958

    Parallelizing P-complete problems is known to be difficult. In this paper, we consider the parallelizability of the stack breadth-first search (stack BFS) problem, which is proved to be P-complete. We first propose the longest path length (LPL) as a measure of the P-completeness of stack BFS. Next, using this measure, we propose an efficient parallel algorithm for stack BFS. For an input graph of size n and LPL l, the complexity of the algorithm shows that stack BFS is in the class NC^(k+1) if l = O(log^k n), where k is a positive integer. In addition, the algorithm is cost optimal if l = O(n^ε), where 0 < ε < 1.

  • An Efficient Algorithm Finding Simple Disjoint Decompositions Using BDDs

    Yusuke MATSUNAGA  

     
    PAPER-Logic Synthesis
    Vol: E85-A No:12  Page(s): 2715-2724

    Functional decomposition is an essential technique in logic synthesis and is especially important for FPGA design. Bertacco and Damiani proposed an efficient algorithm for finding simple disjoint decompositions using Binary Decision Diagrams (BDDs); however, their algorithm is not complete and does not find all decompositions. This paper presents a complete theory of simple disjoint decomposition and describes an efficient BDD-based algorithm.

  • A Compiler Generation Method for HW/SW Codesign Based on Configurable Processors

    Shinsuke KOBAYASHI  Kentaro MITA  Yoshinori TAKEUCHI  Masaharu IMAI  

     
    PAPER-Hardware/Software Codesign
    Vol: E85-A No:12  Page(s): 2586-2595

    This paper proposes a compiler generation method for PEAS-III (Practical Environment for ASIP development), a configurable processor development environment for application-domain-specific embedded systems. With the PEAS-III system, not only the HDL description of a target processor but also its compiler can be generated, so execution cycle counts and dynamic power consumption can be evaluated rapidly. In an experiment, two processors and their derivatives were designed using PEAS-III. The results show that the trade-offs among area, performance, and power consumption were analyzed in about twelve hours, and the optimal processor was selected under the design constraints using the generated compilers and processors.

  • Dispersion Compensation for Ultrashort Light Pulse CDMA Communication Systems

    Yasutaka IGARASHI  Hiroyuki YASHIMA  

     
    PAPER-Fiber-Optic Transmission
    Vol: E85-B No:12  Page(s): 2776-2784

    We investigate dispersion compensation using dispersion-compensating fibers (DCFs) for ultrashort light pulse code division multiple access (CDMA) communication systems in a multi-user environment. We employ a fiber link consisting of a standard single-mode fiber (SMF) connected to two different types of DCF. Fiber dispersion can be effectively reduced by appropriately adjusting the length ratios of the DCFs to the SMF. Several criteria for dispersion compensation are proposed and their performance is compared. We theoretically derive the bit error rate (BER) of ultrashort light pulse CDMA systems, including the effects of dispersion and multiple-access interference (MAI). Moreover, we reveal for the first time the mutual relations among BER performance, fiber dispersion, MAI, the number of chips, the signal bandwidth, and the transmission distance. The results show that our compensation strategy improves system performance drastically.
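
    Nulling both the accumulated dispersion and the dispersion slope of an SMF span with two DCF types reduces to a 2x2 linear system in the DCF lengths. The fiber coefficients below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

L_smf = 50.0                                        # km
D = {"smf": 17.0, "dcf1": -90.0, "dcf2": -120.0}    # dispersion, ps/nm/km
S = {"smf": 0.06, "dcf1": -0.30, "dcf2": -0.60}     # slope, ps/nm^2/km

# Solve D1*L1 + D2*L2 = -D_smf*L_smf and S1*L1 + S2*L2 = -S_smf*L_smf.
A = np.array([[D["dcf1"], D["dcf2"]],
              [S["dcf1"], S["dcf2"]]])
b = -np.array([D["smf"], S["smf"]]) * L_smf
L1, L2 = np.linalg.solve(A, b)
print(f"DCF1: {L1:.2f} km, DCF2: {L2:.2f} km")      # ~8.33 km and ~0.83 km
```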

  • Bi-Partition of Shared Binary Decision Diagrams

    Munehiro MATSUURA  Tsutomu SASAO  Jon T. BUTLER  Yukihiro IGUCHI  

     
    PAPER-Logic Synthesis
    Vol: E85-A No:12  Page(s): 2693-2700

    A shared binary decision diagram (SBDD) represents a multiple-output function, with nodes shared among the BDDs representing the various outputs. A partitioned SBDD consists of two or more SBDDs that share nodes; the separate SBDDs are optimized independently, often resulting in fewer nodes than a single SBDD. We show a method for partitioning a single SBDD into two parts that reduces the node count. Among the benchmark functions tested, node reductions of up to 23% are realized.

  • Predictive Geometry Compression of 3-D Mesh Models Using a Joint Prediction

    Jeong-Hwan AHN  Yo-Sung HO  

     
    LETTER-Multimedia Systems
    Vol: E85-B No:12  Page(s): 2966-2970

    In this letter, we address geometry coding of 3-D mesh models. Using a joint prediction, the encoder predicts vertex positions in layer traversal order. The joint prediction algorithm eliminates redundancy among vertex positions using both the position and angle values of neighboring triangles; the prediction errors are then encoded with a uniform quantizer and an entropy coder. The proposed scheme demonstrates improved coding efficiency on various VRML test data.
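
    The position part of such a prediction is typically the parallelogram rule, sketched below with a uniform quantizer; the paper's joint prediction additionally folds in angle values of neighboring triangles, which is not shown, and all coordinates here are made up.

```python
import numpy as np

def predict_vertex(v1, v2, v_opp):
    """Parallelogram rule: complete the parallelogram over edge (v1, v2)."""
    return v1 + v2 - v_opp

v1 = np.array([0.0, 0.0, 0.0])
v2 = np.array([1.0, 0.0, 0.0])
v_opp = np.array([0.4, 0.8, 0.0])          # third vertex of a known triangle
actual = np.array([0.62, -0.75, 0.05])     # new vertex to encode

step = 1 / 1024                            # uniform quantizer step
residual = np.round((actual - predict_vertex(v1, v2, v_opp)) / step)
decoded = predict_vertex(v1, v2, v_opp) + residual * step
print(residual.astype(int), np.abs(decoded - actual).max())  # error <= step/2
```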

  • Using Similarity Parameters for Supervised Polarimetric SAR Image Classification

    Junyi XU  Jian YANG  Yingning PENG  Chao WANG  Yuei-An LIOU  

     
    PAPER-Sensing
    Vol: E85-B No:12  Page(s): 2934-2942

    In this paper, a new method is proposed for the supervised classification of ground cover types using polarimetric synthetic aperture radar (SAR) data. The concept of a similarity parameter between two scattering matrices is introduced to characterize the target scattering mechanism. Four similarity parameters of each pixel are used for classification: the similarities between the pixel and a plane, a dihedral, a helix, and a wire. The total received power of each pixel is also used, since the similarity parameters are independent of the spans of the target scattering matrices. The supervised classification is based on principal component analysis, which is applied to each training data set in the feature space to obtain the corresponding feature transform vector; the inner product of two vectors is used as the distance measure in classification. The classification result of the new scheme is shown and compared with the results of principal component analysis using other decomposition coefficients, demonstrating the effectiveness of the similarity parameters.
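
    One common form of the similarity parameter is a span-normalized squared inner product of the two scattering matrices; the sketch below uses that form with textbook canonical matrices for the plane, dihedral, wire, and helix. Treat both the formula and the canonical matrices as assumptions rather than the paper's exact definitions.

```python
import numpy as np

def similarity(s1, s2):
    """Normalized squared inner product; lies in [0, 1], independent of span."""
    num = abs(np.vdot(s1, s2)) ** 2
    return num / (np.vdot(s1, s1).real * np.vdot(s2, s2).real)

CANONICAL = {
    "plane":    np.array([[1, 0], [0, 1]], complex),
    "dihedral": np.array([[1, 0], [0, -1]], complex),
    "wire":     np.array([[1, 0], [0, 0]], complex),
    "helix":    0.5 * np.array([[1, 1j], [1j, -1]], complex),
}

pixel = np.array([[1.0, 0.1j], [0.1j, -0.8]])   # made-up measurement
features = {k: round(similarity(pixel, v), 3) for k, v in CANONICAL.items()}
print(features)    # four similarity features, used alongside total power
```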

  • A Computation Reduced MMSE Adaptive Array Antenna Using Space-Temporal Simultaneous Processing Equalizer

    Yoshihiro ICHIKAWA  Koji TOMITSUKA  Shigeki OBOTE  Kenichi KAGOSHIMA  

     
    PAPER
    Vol: E85-B No:12  Page(s): 2622-2629

    When an adaptive array antenna (AAA) with the minimum mean square error (MMSE) criterion is used in a multipath environment where the received signal level varies, the AAA converges poorly because of the distortion of the desired wave, so equalization in both the space and time domains is needed. A tapped-delay-line adaptive array antenna (TDL-AAA) and an AAA with a linear equalizer (AAA-LE) have been proposed as simple space-temporal equalizers, but the AAA-LE has not employed the recursive least squares (RLS) algorithm. In this paper, we propose a space-temporal simultaneous processing equalizer (ST-SPE), an AAA-LE adapted by the RLS algorithm. We propose fixing the first tap weight of the linear equalizer; the necessity of this is derived from a normal equation under the MMSE criterion, and the configuration achieves space-temporal simultaneous equalization with the RLS algorithm. The ST-SPE reduces the computational complexity of space-temporal joint equalization compared with the TDL-AAA, while achieving almost the same performance in multipath environments with a minimum-phase condition, such as line-of-sight (LOS) channels.
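
    The adaptation engine is standard exponentially weighted RLS. A minimal sketch follows, framed as generic system identification; the paper's space-temporal configuration, in which the first equalizer tap is held fixed, is not reproduced.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One RLS step: weights w, inverse correlation P, input x, desired d."""
    Px = P @ x
    k = Px / (lam + np.vdot(x, Px).real)   # gain vector
    e = d - np.vdot(w, x)                  # a-priori error
    w = w + k * np.conj(e)
    P = (P - np.outer(k, np.conj(x)) @ P) / lam
    return w, P

rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.1])
w, P = np.zeros(3), 100 * np.eye(3)
for _ in range(200):
    x = rng.standard_normal(3)
    w, P = rls_update(w, P, x, np.vdot(w_true, x))
print(np.round(w, 3))                      # converges to w_true
```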

  • A Genetic Algorithm for the Minimization of OPKFDDs

    Migyoung JUNG  Gueesang LEE  Sungju PARK  Rolf DRECHSLER  

     
    LETTER-VLSI Design Technology and CAD
    Vol: E85-A No:12  Page(s): 2943-2945

    OPKFDDs (Ordered Pseudo-Kronecker Functional Decision Diagrams) are a data structure that provides a compact representation of Boolean functions. The size of an OPKFDD depends on the variable ordering and on the decomposition type chosen at each node. Finding an optimal representation is very hard: the size of the search space is n!·3^(2^n - 1), where n is the number of input variables. To cope with this huge search space, a genetic algorithm is proposed for generating OPKFDDs with a minimal number of nodes.
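
    A GA for this search might pair a variable-order gene with decomposition-type genes. The skeleton below simplifies to one type per variable (an OKFDD-style restriction, much smaller than the per-node choice of OPKFDDs) and uses a stub cost function, since evaluating real node counts needs a decision-diagram package.

```python
import random

N, POP, GENS = 8, 30, 50
TYPES = (0, 1, 2)                 # Shannon, positive Davio, negative Davio

def cost(order, types):
    # STUB: stands in for the node count of the resulting diagram.
    return sum((p - i) ** 2 for i, p in enumerate(order)) + sum(types)

def make():
    return random.sample(range(N), N), [random.choice(TYPES) for _ in range(N)]

def crossover(a, b):
    cut = random.randrange(1, N)  # order crossover + uniform type mixing
    head = a[0][:cut]
    order = head + [v for v in b[0] if v not in head]
    types = [random.choice(pair) for pair in zip(a[1], b[1])]
    return order, types

def mutate(ind):
    order, types = ind
    i, j = random.sample(range(N), 2)
    order[i], order[j] = order[j], order[i]       # swap two variables
    types[random.randrange(N)] = random.choice(TYPES)

pop = [make() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=lambda ind: cost(*ind))
    elite = pop[:POP // 2]                        # keep the better half
    children = [crossover(*random.sample(elite, 2)) for _ in elite]
    for child in children:
        mutate(child)
    pop = elite + children
print(min(cost(*ind) for ind in pop))
```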
