
Keyword Search Result

[Keyword] (42807 hits)

Showing 6821-6840 of 42807 hits

  • Reciprocity Theorems and Their Application to Numerical Analysis in Grating Theory

    Junichi NAKAYAMA  Yasuhiko TAMURA  

     
    PAPER

      Vol:
    E100-C No:1
      Page(s):
    3-10

    This paper deals with the diffraction of a monochromatic plane wave by a periodic grating. We discuss the problem of how to obtain a numerical diffraction efficiency (NDE) satisfying the reciprocity theorem for diffraction efficiencies, because diffraction efficiencies are the main subject of diffraction theory. First, this paper introduces a new formula that decomposes an NDE into two components: an even component and an odd one. The former satisfies the reciprocity theorem for diffraction efficiencies, but the latter does not. Therefore, the even component of an NDE becomes an answer to our problem, whereas the odd component of an NDE represents an unwanted error. Using this decomposition formula, we then obtain another new formula that decomposes the conventional energy error into two components. One is the energy error made by the even components of NDEs. The other is the energy error constructed by the unwanted odd ones, which may be used as a reciprocity criterion for a numerical solution. This decomposition formula reveals a drawback of the conventional energy balance. The total energy error is newly introduced as a stricter condition for a desirable solution. We point out theoretically that the reciprocal wave solution, an approximate solution satisfying reciprocity for wave fields, gives another answer to our problem. Numerical examples are given for the diffraction of a TM plane wave by a very rough periodic surface with perfect conductivity. In the case of a numerical solution by the image integral equation of the second kind, we found that the energy error is much reduced by using the even component of an NDE as an approximate diffraction efficiency or by using a reciprocal wave solution.

  • Information Hiding and Its Criteria for Evaluation Open Access

    Keiichi IWAMURA  Masaki KAWAMURA  Minoru KURIBAYASHI  Motoi IWATA  Hyunho KANG  Seiichi GOHSHI  Akira NISHIMURA  

     
    INVITED PAPER

      Publicized:
    2016/10/07
      Vol:
    E100-D No:1
      Page(s):
    2-12

    Within information hiding technology, digital watermarking is one of the most important technologies for copyright protection of digital content. Many digital watermarking schemes have been proposed in academia. However, these schemes are not used in practice because they are not practical; one reason for this is that their evaluation criteria are loosely defined. To make the evaluation more concrete and improve the practicality of digital watermarking, watermarking schemes must use common evaluation criteria. To realize such criteria, we organized the Information Hiding and its Criteria for Evaluation (IHC) Committee to create useful, globally accepted evaluation criteria for information hiding technology. The IHC Committee improves its evaluation criteria every year and holds a competition for digital watermarking based on state-of-the-art evaluation criteria. In this paper, we describe the activities of the IHC Committee and its evaluation criteria for digital watermarking of still images, videos, and audio.

  • Theoretical Limit of the Radiation Efficiency for Electrically Small Self-Resonant Spherical Surface Antennas

    Keisuke FUJITA  Hiroshi SHIRAI  

     
    PAPER

      Vol:
    E100-C No:1
      Page(s):
    20-26

    The theoretical maximum radiation efficiency of electrically small spherical surface antennas is derived in this study. The current on the antenna surface is described in terms of vector spherical harmonics, and the radiated and dissipated powers are calculated to obtain the radiation efficiency. It is found that the non-resonant TM1m mode shows the best radiation efficiency, and that a proper combination of TM10 and TE10 modes establishes a resonant spherical surface antenna whose radiation efficiency is bounded by the values of the non-resonant TM10 and TE10 modes. As a practical example of spherical surface antennas, the radiation efficiency of spherical helix antennas has also been computed to check the validity of our formulation.

  • Initial Value Problem Formulation TDBEM with 4-D Domain Decomposition Method and Application to Wake Fields Analysis

    Hideki KAWAGUCHI  Thomas WEILAND  

     
    PAPER

      Vol:
    E100-C No:1
      Page(s):
    37-44

    The Time Domain Boundary Element Method (TDBEM) has advantages in the analysis of transient electromagnetic fields (wake fields) induced by a charged particle beam with a curved trajectory in a particle accelerator. On the other hand, the TDBEM has the disadvantages of large memory requirements and long computation times compared with the Finite Difference Time Domain (FDTD) method or the Finite Integration Technique (FIT). This paper presents a comparison of the FDTD method and a 4-D domain decomposition method for the TDBEM based on an initial value problem formulation for a curved-trajectory electron beam, and an application to a full-model simulation of the bunch compressor section of high-energy particle accelerators.

  • Low-Temperature Polycrystalline-Silicon Thin-Film Transistors Fabricated by Continuous-Wave Laser Lateral Crystallization and Metal/Hafnium Oxide Gate Stack on Nonalkaline Glass Substrate

    Tatsuya MEGURO  Akito HARA  

     
    PAPER-Semiconductor Materials and Devices

      Vol:
    E100-C No:1
      Page(s):
    94-100

    Enhancing the performance of low-temperature (LT) polycrystalline-silicon (poly-Si) thin-film transistors (TFTs) requires high-quality poly-Si films. One of the authors (A.H.) has already demonstrated a continuous-wave (CW) laser lateral crystallization (CLC) method to improve the crystalline quality of thin poly-Si films, using a diode-pumped solid-state CW laser. Another candidate method to increase the on-current and decrease the subthreshold swing (s.s.) is the use of a high-k gate stack. In this paper, we discuss the performance of top-gate CLC LT poly-Si TFTs with sputtered metal/hafnium oxide (HfO2) gate stacks on nonalkaline glass substrates. A mobility of 180 cm2/Vs is obtained for n-ch TFTs, which is considerably higher than those of previously reported n-ch LT poly-Si TFTs with high-k gate stacks; it is, however, lower than the one obtained with a plasma-enhanced chemical-vapor-deposited SiO2 gate stack. For p-ch TFTs, a mobility of 92 cm2/Vs and an s.s. of 98 mV/dec were obtained. This s.s. value is smaller than those of previously reported p-ch LT poly-Si TFTs with high-k gate stacks. The evaluation of a fabricated complementary metal-oxide-semiconductor inverter showed a switching threshold voltage of 0.8 V and a gain of 38 at an input voltage of 2.0 V; moreover, full-swing inverter operation was successfully confirmed at the low input voltage of 1.0 V. This shows the feasibility of CLC LT poly-Si TFTs with a sputtered HfO2 gate dielectric on nonalkaline glass substrates.

  • Prefiltering and Postfiltering Based on Global Motion Compensation for Improving Coding Efficiency in H.264 and HEVC Codecs

    Ho Hyeong RYU  Kwang Yeon CHOI  Byung Cheol SONG  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2016/10/07
      Vol:
    E100-D No:1
      Page(s):
    160-165

    In this paper, we propose a filtering approach based on global motion estimation (GME) and global motion compensation (GMC) for pre- and postprocessing of video codecs. For preprocessing, a group of pictures (GOP), the basic unit for GMC, and reference frames are first defined for an input video sequence. Next, GME and GMC are sequentially performed for every frame in each GOP. Finally, a block-based adaptive temporal filter is applied between the GMC frames before video encoding. For postprocessing at the decoder end, every decoded frame is inversely motion-compensated using the transmitted global motion information. The holes generated during inverse motion compensation can be filled with the reference frames. The experimental results show that the proposed algorithm provides higher Bjontegaard-delta peak signal-to-noise ratios (BD-PSNRs) of 0.63 and 0.57 dB on average compared with conventional H.264 and HEVC platforms, respectively.

  • Assessing the Bug-Prediction with Re-Usability Based Package Organization for Object Oriented Software Systems

    Mohsin SHAIKH  Ki-Seong LEE  Chan-Gun LEE  

     
    PAPER-Software Engineering

      Publicized:
    2016/10/07
      Vol:
    E100-D No:1
      Page(s):
    107-117

    Packages are re-usable components that enable faster and more effective software maintenance. To promote re-use in object-oriented systems and make maintenance tasks easier, packages should be organized into a compact design. Therefore, understanding and assessing package organization is essential for maintenance concerns such as re-usability and changeability. We believe that additional investigations of prevalent basic design principles, such as those defined by R.C. Martin, are required to explore different aspects of package organization. In this study, we propose a package-organization framework based on reachable components that measures a re-usability index. The package re-usability index measures the common effect of a change propagating over the dependent elements of a package in an object-oriented design paradigm. A detailed quality assessment on different versions of open source software systems is presented, which evaluates the capability of the proposed package re-usability index and other traditional package-level metrics to predict fault-proneness in software. The experimental study shows that the proposed index captures different aspects of package design that can be practically integrated with best practices of software development. Furthermore, the results provide insights into the organization of feasible software designs to counter potential faults arising from complex package dependencies.

  • FOREWORD Open Access

    Akinori ITO  

     
    FOREWORD

      Vol:
    E100-D No:1
      Page(s):
    1-1
  • General Bounds for Small Inverse Problems and Its Applications to Multi-Prime RSA

    Atsushi TAKAYASU  Noboru KUNIHIRO  

     
    PAPER

      Vol:
    E100-A No:1
      Page(s):
    50-61

    In 1999, Boneh and Durfee introduced the small inverse problem, which solves the bivariate modular equation x(N+y)≡1 (mod e). Absolute values of solutions for x and y are bounded above by X=N^δ and Y=N^β, respectively. They solved the problem for β=1/2 in the context of small secret exponent attacks on RSA and proposed a polynomial time algorithm that works when δ<(7-2√7)/6≈0.284. In the same work, the bound was further improved to δ<1-1/√2≈0.292. Thus far, the small inverse problem has also been analyzed for an arbitrary β. Generalizations of Boneh and Durfee's lattices to obtain the stronger bound yielded the bound δ<1-√β. However, the algorithm works only when β≥1/4. When 0<β<1/4, there have been several works where the authors claimed their results are the best. In this paper, we revisit the problem for an arbitrary β. First, we summarize the previous results for 0<β<1/4. We reveal that some results are not valid and show that de Weger's algorithms provide the best bounds. Next, we propose an improved algorithm to solve the problem for 0<β<1/4. Our algorithm works when δ<1-2(√(β(3+4β))-β)/3. Our algorithm construction is based on combinations of Boneh and Durfee's two forms of lattices and is more natural compared with previous works. As a cryptographic application, we introduce small secret exponent attacks on Multi-Prime RSA with small prime differences.

  • Digital Multiple Notch Filter Design with Nelder-Mead Simplex Method

    Qiusheng WANG  Xiaolan GU  Yingyi LIU  Haiwen YUAN  

     
    PAPER-Digital Signal Processing

      Vol:
    E100-A No:1
      Page(s):
    259-265

    Multiple notch filters are used to suppress narrow-band or sinusoidal interferences in digital signals. In this paper, we propose a novel optimization-based design technique for an infinite impulse response (IIR) multiple notch filter. It is based on the Nelder-Mead simplex method. Firstly, the system function of the desired notch filter is constructed to form the objective function of the optimization technique. Secondly, the design parameters of the desired notch filter are optimized by the Nelder-Mead simplex method. A weight function is also introduced to improve the amplitude response of the notch filter. Thirdly, the convergence and amplitude response of the proposed technique are compared with those of other Nelder-Mead-based design methods and the cascade-based design method. Finally, the practicability of the proposed notch filter design technique is demonstrated by some practical applications.
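    The design loop sketched in this abstract (build the system function, then let Nelder-Mead tune its parameters against a weighted amplitude-response error) can be illustrated as follows. This is a generic sketch using SciPy's Nelder-Mead implementation, not the authors' method: the notch frequencies, the single pole-radius parameter, the frequency grid, and the uniform weight are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def notch_response(omegas, r, w_grid):
    """Amplitude response of a cascade of second-order notch sections:
    zeros on the unit circle at each notch frequency, poles at radius r."""
    b, a = np.array([1.0]), np.array([1.0])
    for w0 in omegas:
        b = np.convolve(b, [1.0, -2.0 * np.cos(w0), 1.0])
        a = np.convolve(a, [1.0, -2.0 * r * np.cos(w0), r * r])
    num = np.exp(-1j * np.outer(w_grid, np.arange(len(b)))) @ b
    den = np.exp(-1j * np.outer(w_grid, np.arange(len(a)))) @ a
    return np.abs(num / den)

def objective(params, omegas, w_grid, weight, desired):
    r = params[0]
    if not 0.0 < r < 1.0:          # poles must stay inside the unit circle
        return 1e9
    h = notch_response(omegas, r, w_grid)
    return float(np.sum(weight * (h - desired) ** 2))

omegas = [0.2 * np.pi, 0.5 * np.pi]      # hypothetical notch frequencies
w_grid = np.linspace(0.0, np.pi, 512)
desired = np.ones_like(w_grid)           # unity passband ...
for w0 in omegas:                        # ... with a null at each notch
    desired[np.argmin(np.abs(w_grid - w0))] = 0.0
weight = np.ones_like(w_grid)            # uniform weight for simplicity

res = minimize(objective, x0=[0.5], args=(omegas, w_grid, weight, desired),
               method='Nelder-Mead')
r_opt = res.x[0]
```

    In a fuller design the weight function would emphasize the passband edges, which is what the paper's weighting is introduced for.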

  • FOREWORD Open Access

    Motoyuki SATO  Akira HIROSE  

     
    FOREWORD

      Vol:
    E100-C No:1
      Page(s):
    1-2
  • A Ranking Approach to Source Retrieval of Plagiarism Detection

    Leilei KONG  Zhimao LU  Zhongyuan HAN  Haoliang QI  

     
    LETTER-Data Engineering, Web Information Systems

      Publicized:
    2016/09/29
      Vol:
    E100-D No:1
      Page(s):
    203-205

    This paper addresses the issue of source retrieval in plagiarism detection. The task of source retrieval is to retrieve all plagiarized sources of a suspicious document from a source document corpus whilst minimizing retrieval costs. Classification-based methods have achieved the best performance in current source retrieval research. This paper argues that it is more effective to cast the problem as ranking and to employ learning-to-rank methods for source retrieval. Specifically, it employs RankBoost and Ranking SVM to obtain the candidate plagiarism source documents. Experimental results on the dataset of PAN@CLEF 2013 Source Retrieval show that the ranking-based methods significantly outperform the classification-based baselines. We argue that treating source retrieval as a ranking problem is better than treating it as a classification problem.

  • Home Base-Aware Store-Carry-Forward Routing Using Location-Dependent Utilities of Nodes

    Tomotaka KIMURA  Yutsuki KAYAMA  Tetsuya TAKINE  

     
    PAPER-Network

      Vol:
    E100-B No:1
      Page(s):
    17-27

    We propose a home base-aware store-carry-forward routing scheme using location-dependent utilities of nodes, which adopts different message forwarding strategies depending on the location where nodes encounter each other. Our routing scheme first attempts to deliver each message to its home base, the area with the highest potential for the presence of the destination node in the near future. Once a message copy reaches its home base, message dissemination is limited to the home base, and nodes with message copies wait to encounter the destination node. To realize our routing scheme, we use two different utilities of nodes depending on location: outside the home base of a message, nodes approaching the home base have high utility values, while within the home base, nodes staying in the home base have high utility values. By using these utilities properly, nodes with message copies will catch the destination node “by ambush” in the home base of the destination node. Through simulation experiments, we demonstrate the effectiveness of our routing scheme.

  • Efficient Analysis of Diffraction Grating with 10000 Random Grooves by Difference-Field Boundary Element Method Open Access

    Jun-ichiro SUGISAKA  Takashi YASUI  Koichi HIRAYAMA  

     
    PAPER

      Vol:
    E100-C No:1
      Page(s):
    27-36

    A numerical investigation revealed the relation between the groove randomness of actual-size diffraction gratings and their diffraction efficiencies. The diffraction gratings we treat in this study have around 10000 grooves. When the illumination wavelength is 600 nm, the entire grating size becomes 16.2 mm. The simulation was performed using the difference-field boundary element method (DFBEM). The DFBEM treats the vectorial field with a small amount of memory that is independent of the grating size. We first describe the applicability of the DFBEM to a considerably large-sized structure; regularly aligned grooves and a random shallow-groove structure are calculated by the DFBEM and compared with the results given by standard BEM and the scalar-wave approximation, respectively. Finally, we show the relation between the degree of randomness and the diffraction efficiencies for two orthogonal linear polarizations. The relation provides information for determining the tolerance of fabrication errors in the groove structure and for measuring the structural randomness by acquiring the irradiance of the diffracted waves.

  • Oblivious Polynomial Evaluation in the Exponent, Revisited

    Naoto ITAKURA  Kaoru KUROSAWA  Kazuki YONEYAMA  

     
    PAPER

      Vol:
    E100-A No:1
      Page(s):
    26-33

    There are two extensions of oblivious polynomial evaluation (OPE), OPEE (oblivious polynomial evaluation in the exponent) and OPEE2. At TCC 2015, Hazay showed two OPEE2 protocols. In this paper, we first show that her first OPEE2 protocol does not run in polynomial time if the computational DH assumption holds. We next present a constant round OPEE protocol under the DDH assumption.

  • Two-Side Agreement Learning for Non-Parametric Template Matching

    Chao ZHANG  Takuya AKASHI  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2016/10/07
      Vol:
    E100-D No:1
      Page(s):
    140-149

    We address the problem of measuring matching similarity in terms of template matching. A novel method called two-side agreement learning (TAL) is proposed, which learns the implicit correlation between two sets of multi-dimensional data points. TAL learns from a matching exemplar to construct a symmetric tree-structured model. Two points from the source set and the target set agree to form a two-side agreement (TA) pair if both points fall into the same leaf cluster of the model. In the training stage, unsupervised weak hyper-planes of each node are learned first. Then, tree selection based on a cost function yields the final model. In the test stage, points are propagated down to the leaf nodes and TA pairs are observed to quantify the similarity. Using TAL reduces the ambiguity of similarity, which is hard to define objectively, and leads to more convergent results. Experiments show its effectiveness against state-of-the-art methods, both qualitatively and quantitatively.

  • Evaluation of Spin-Coated Alumina Passivation Layer for Point-Contacted Rear Electrode Passivation of Silicon Solar Cells

    Ryosuke WATANABE  Tsubasa KOYAMA  Yoji SAITO  

     
    PAPER-Semiconductor Materials and Devices

      Vol:
    E100-C No:1
      Page(s):
    101-107

    We fabricated silicon solar cells with spin-coated sol-gel alumina passivation layers on the rear side. Spin-coated alumina passivation films have moderate passivation quality and are inferior to atomic-layer-deposited passivation films. However, the low-cost, low-temperature sol-gel deposition process is still beneficial for cells using commercially available Cz silicon wafers. Thus, we consider the applicability of the spin-coated alumina passivation layer for rear-side passivation. The dependence of cell efficiency on the contact spacing and contact diameter of the rear electrode was investigated by both experiments and numerical calculation. The experimental results indicated that the conversion efficiency of the cell is enhanced from 9.1% to 11.1% by optimizing the aperture ratio and contact spacing of the rear passivation layers. Numerical calculation indicated that a small contact diameter with a low aperture ratio of the rear passivation layer is preferable to achieve good cell performance under our experimental conditions. We confirmed the effectiveness of the spin-coated alumina passivation films for rear-surface passivation of low-cost silicon solar cells.

  • A Fast Single Image Haze Removal Method Based on Human Retina Property

    Xin NING  Weijun LI  Wenjie LIU  

     
    LETTER-Pattern Recognition

      Publicized:
    2016/10/13
      Vol:
    E100-D No:1
      Page(s):
    211-214

    In this letter, a novel and highly efficient algorithm is proposed for haze removal from only a single input image. The proposed algorithm is built on the atmospheric scattering model. Firstly, the global atmospheric light is estimated and a coarse atmospheric veil is inferred based on statistics of the dark channel prior. Secondly, the coarse atmospheric veil is refined by using a fast Tri-Gaussian filter based on human retina properties. To avoid halo artefacts, we then redefine the scene albedo. Finally, the haze-free image is derived by inverting the atmospheric scattering model. Results on some challenging foggy images demonstrate that the proposed method can not only improve the contrast and visibility of the restored image but also expedite the process.
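    The pipeline this abstract describes (estimate atmospheric light, infer a coarse veil from the dark channel prior, then invert the scattering model I = J·t + A·(1-t)) can be sketched with the generic dark-channel-prior steps below. This is not the authors' method: it omits their Tri-Gaussian retina-based refinement and scene-albedo redefinition, and the patch size and constants are conventional assumptions rather than values from the letter.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over color channels, then a local minimum filter.
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_atmospheric_light(img, dc, top=0.001):
    # Per-channel maximum among the brightest fraction of dark-channel pixels.
    n = max(1, int(dc.size * top))
    idx = np.argpartition(dc.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].max(axis=0)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """img: float RGB array in [0, 1]; returns a haze-free estimate."""
    dc = dark_channel(img, patch)
    A = estimate_atmospheric_light(img, dc)
    # Coarse transmission map (complement of the "atmospheric veil");
    # omega < 1 keeps a trace of haze for natural-looking results.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.maximum(t, t0)          # avoid division blow-up in dense haze
    # Invert the atmospheric scattering model I = J*t + A*(1-t).
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

    The letter's contribution sits between the coarse veil and the inversion: replacing the usual (slow) veil refinement with a fast Tri-Gaussian filter.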

  • The Computation Reduction in Object Detection via Composite Structure of Modified Integral Images

    Daeha LEE  Jaehong KIM  Ho-Hee KIM  Soon-Ja KIM  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2016/10/04
      Vol:
    E100-D No:1
      Page(s):
    229-233

    Object detection is the first step in object recognition, and its results affect all subsequent steps. However, object detection has heavy resource requirements in terms of computing power and memory, and if an image is enlarged, the computational load required for object detection also increases. An integral-image-based method guarantees fast object detection: once an integral image is generated, the speed of the object detection procedure remains fixed, regardless of the pattern region size. However, generating the integral image itself becomes an even greater burden as the image is enlarged. In this paper, we propose directional-integral-image-based object detection. A directional integral image gives a direction to the integral image, which can then be calculated from various directions. Furthermore, many unnecessary calculations, which typically occur when a partial integral image is used for object detection, can be avoided. Therefore, the amount of computation is reduced compared with methods using conventional integral images. In comparative experiments, the proposed method required 40% fewer computations.
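    For reference, the classic (non-directional) integral image that this letter builds on can be sketched as follows; the function names are illustrative, and the proposed directional variant is not reproduced here. The key property the abstract relies on is that any rectangular region sum costs four lookups, independent of region size.

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[:y, :x]; the extra zero row/column lets
    # region sums be expressed without boundary checks.
    # Assumes an integer-valued single-channel image.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_sum(ii, y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1] from four lookups, independent of size.
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

    The cost the letter attacks is the cumulative-sum pass itself, which grows with image size; the directional composite structure reduces that precomputation.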

  • FOREWORD

    Masahiro MAMBO  

     
    FOREWORD

      Vol:
    E100-A No:1
      Page(s):
    1-2