
Keyword Search Result

[Keyword] TE (21534 hits)

10881-10900 hits (of 21534)

  • Accuracy Improvement of Pulmonary Nodule Detection Based on Spatial Statistical Analysis of Thoracic CT Scans

    Hotaka TAKIZAWA  Shinji YAMAMOTO  Tsuyoshi SHIINA  

     
    PAPER

      Vol:
    E90-D No:8
      Page(s):
    1168-1174

    This paper describes a novel method for discriminating pulmonary nodules based on statistical analysis of thoracic computed tomography (CT) scans. Our previous Computer-Aided Diagnosis (CAD) system can detect pulmonary nodules in CT scans but, at the same time, yields many false positives. To reduce these false positives, the method proposed here exploits the relationships between pulmonary nodules, false positives, and image features in CT scans. The trend of variation in these relationships is acquired through statistical analysis of a set of training CT scans. In testing, the method uses this trend to predict the appearance of pulmonary nodules and false positives in a CT scan, and improves the accuracy of the previous CAD system by modifying its output on the basis of the prediction. The method is applied to 218 actual thoracic CT scans containing 386 actual pulmonary nodules. Receiver operating characteristic (ROC) analysis is used to evaluate the results. The area under the ROC curve (Az) is improved, with statistical significance, from 0.918 to 0.931.
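    The Az figure quoted above equals the probability that a randomly chosen true nodule receives a higher detector score than a randomly chosen false positive (the Mann-Whitney U interpretation of the area under the ROC curve). A minimal sketch of that computation, with invented detector scores:

    ```python
    def roc_az(nodule_scores, fp_scores):
        """Area under the ROC curve (Az) via the Mann-Whitney statistic.

        Counts the fraction of (nodule, false-positive) score pairs in
        which the nodule outranks the false positive; ties count 0.5.
        """
        wins = 0.0
        for s_pos in nodule_scores:
            for s_neg in fp_scores:
                if s_pos > s_neg:
                    wins += 1.0
                elif s_pos == s_neg:
                    wins += 0.5
        return wins / (len(nodule_scores) * len(fp_scores))

    # Toy scores: a detector that ranks nodules mostly above false positives.
    az = roc_az([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
    print(round(az, 3))
    ```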

  • Development of the Lead-Free Brush Material for the High-Load Starter

    Ryoichi HONBO  Youichi MURAKAMI  Koichiro SAWA  Hiroyuki WAKABAYASHI  Naruhiko INAYOSHI  Kyoji INUKAI  Takeshi SHIMOYAMA  Naoki MORITA  

     
    PAPER-Electromechanical Devices and Components

      Vol:
    E90-C No:8
      Page(s):
    1634-1642

    This paper reports the development of a lead-free brush material for a high-load starter. These brushes are used under much more extreme conditions than other starter brushes -- at a PV-value (the product of brush contact pressure and sliding velocity) approximately three times higher, and at double the electrical current density. The major technical requirement of this development was to decrease the electrical wear of brushes caused by commutation sparking. We developed a brush material that reduces electrical wear by adding zinc phosphate. Because zinc phosphate improves both lubricity at high temperature and the contact stability of brushes, the developed brush generates fewer commutation sparks. The life of the developed brush is about 1.5 times longer than that of conventional brushes containing lead.

  • A Variable-Length Coding Adjustable for Compressed Test Application

    Hideyuki ICHIHARA  Toshihiro OHARA  Michihiro SHINTANI  Tomoo INOUE  

     
    PAPER-Dependable Computing

      Vol:
    E90-D No:8
      Page(s):
    1235-1242

    Test compression/decompression using variable-length coding is an efficient method for reducing the test application cost, i.e., the test application time and the size of storage on an LSI tester. However, some coding techniques result in slow test application, so a long test application time is required despite a high compression ratio. In this paper, we first clarify the fact that the test application time depends on both the compression ratio and the lengths of the codewords, and then propose a new Huffman-based coding method that achieves a small test application time in a given test environment. The proposed coding method adjusts both the compression ratio and the minimum codeword length to the test environment. Experimental results show that the proposed method can achieve a small test application time while keeping the compression ratio high.
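    The interplay between compression ratio and codeword length can be seen with a plain Huffman coder over fixed-size blocks of test data. The sketch below uses a hypothetical, heavily zero-biased test set (as real test cubes often are); the paper's adjustable coding is more elaborate than this baseline:

    ```python
    import heapq
    from collections import Counter

    def huffman_code(symbols):
        """Build a Huffman code (symbol -> bitstring) from a symbol sequence."""
        freq = Counter(symbols)
        heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
        heapq.heapify(heap)
        uid = len(heap)  # tie-breaker so dicts are never compared
        while len(heap) > 1:
            n1, _, c1 = heapq.heappop(heap)
            n2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + b for s, b in c1.items()}
            merged.update({s: "1" + b for s, b in c2.items()})
            heapq.heappush(heap, (n1 + n2, uid, merged))
            uid += 1
        return heap[0][2]

    # Hypothetical test set: 4-bit scan-chain blocks, mostly all-zero.
    blocks = ["0000"] * 12 + ["1111"] * 3 + ["0101"] * 1
    code = huffman_code(blocks)
    encoded_bits = sum(len(code[b]) for b in blocks)
    original_bits = 4 * len(blocks)
    longest = max(len(c) for c in code.values())
    print(encoded_bits, original_bits, longest)
    ```

    A high compression ratio (here 20 bits versus 64) comes from short codewords for frequent blocks; the paper's observation is that decode time also depends on the codeword lengths, which this baseline does not control.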

  • Analysis of Test Generation Complexity for Stuck-At and Path Delay Faults Based on τk-Notation

    Chia Yee OOI  Thomas CLOUQUEUR  Hideo FUJIWARA  

     
    PAPER-Complexity Theory

      Vol:
    E90-D No:8
      Page(s):
    1202-1212

    In this paper, we discuss the relationship between the test generation complexity for path delay faults (PDFs) and that for stuck-at faults (SAFs) in combinational and sequential circuits using the recently introduced τk-notation. We also introduce a class of easily testable cyclic sequential circuits, namely two-column distributive state-shiftable finite state machine realizations (2CD-SSFSMs). Finally, we discuss the relevant conjectures and unsolved problems related to test generation for sequential circuits with PDFs under different clock schemes and test generation models.

  • Inverse Motion Compensation for DCT Block with Unrestricted Motion Vectors

    Min-Cheol HWANG  Seung-Kyun KIM  Sung-Jea KO  

     
    LETTER

      Vol:
    E90-D No:8
      Page(s):
    1199-1201

    Existing methods for inverse motion compensation (IMC) in the DCT domain do not consider the unrestricted motion vector (UMV); they handle the UMV in the spatial domain after the inverse DCT (IDCT). We propose an IMC method that deals with the UMV directly in the DCT domain, without the IDCT/DCT pair required by the existing methods. The computational complexity of the proposed method is about half that of the brute-force method operating in the spatial domain. Experimental results show that the proposed method efficiently reduces the processing time while maintaining similar visual quality.

  • Pruned Resampling: Probabilistic Model Selection Schemes for Sequential Face Recognition

    Atsushi MATSUI  Simon CLIPPINGDALE  Takashi MATSUMOTO  

     
    PAPER

      Vol:
    E90-D No:8
      Page(s):
    1151-1159

    This paper proposes probabilistic pruning techniques for a Bayesian video face recognition system. The system selects the most probable face model using model posterior distributions, which can be calculated using a Sequential Monte Carlo (SMC) method. A combination of two new pruning schemes at the resampling stage significantly boosts computational efficiency by comparison with the original online learning algorithm. Experimental results demonstrate that this approach achieves better performance in terms of both processing time and ID error rate than a contrasting approach with a temporal decay scheme.
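    As a rough illustration of pruning at the resampling stage, the sketch below drops particles (candidate face models) whose weights fall under a threshold before systematic resampling. The threshold rule, model names, and weights are invented for illustration; they are not the paper's two specific pruning schemes:

    ```python
    import random

    def pruned_resample(particles, weights, threshold=1e-3):
        """Resample an SMC particle set, first pruning negligible weights.

        Discarding low-weight model hypotheses before resampling is one
        generic way to cut the per-frame cost of Sequential Monte Carlo
        model selection (a sketch, not the paper's exact schemes).
        """
        kept = [(p, w) for p, w in zip(particles, weights) if w >= threshold]
        total = sum(w for _, w in kept)
        norm = [w / total for _, w in kept]
        # Systematic resampling over the surviving particles.
        n = len(particles)
        positions = [(random.random() + i) / n for i in range(n)]
        resampled, cum, j = [], norm[0], 0
        for u in positions:
            while u > cum:
                j += 1
                cum += norm[j]
            resampled.append(kept[j][0])
        return resampled

    random.seed(0)
    parts = ["model_A", "model_B", "model_C", "model_D"]
    ws = [0.7, 0.29, 0.0099, 0.0001]   # model_D falls under the threshold
    out = pruned_resample(parts, ws, threshold=1e-3)
    print(len(out), "model_D" in out)
    ```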

  • Managing Contradictions in Multi-Agent Systems

    Ruben FUENTES-FERNANDEZ  Jorge J. GOMEZ-SANZ  Juan PAVON  

     
    PAPER-Distributed Cooperation and Agents

      Vol:
    E90-D No:8
      Page(s):
    1243-1250

    The specification of a Multi-Agent System (MAS) involves the identification of a large number of entities and their relationships. This is a non-trivial task that requires managing different views of the system. Many problems in this task originate in the presence of contradictory goals and tasks, inconsistencies, and unexpected behaviours. Such troublesome configurations should be detected and prevented during the development process so that alternative ways of coping with them can be studied. In this paper, we present methods and tools that support the management of contradictions during the analysis and design of a MAS. Contradiction management in a MAS has to consider both individual (i.e. agent) and social (i.e. organizational) aspects, as well as their dynamics. These issues have already been considered in the social sciences, and more concretely in Activity Theory, a social framework for the study of interactions in activity systems. Our approach applies knowledge from Activity Theory to MAS, especially its base of contradiction patterns. This requires formalizing the social theory so that it is applicable in a software engineering context, and adapting it to agent-oriented methodologies. It then becomes possible to check for occurrences of contradiction patterns in a MAS specification and to provide solutions to those situations. The technique has been validated by implementing an assistant for the INGENIAS Development Kit and has been tested on several case studies. This paper presents part of one of these experiments, for a web application.

  • Quality Evaluation for Document Relation Discovery Using Citation Information

    Kritsada SRIPHAEW  Thanaruk THEERAMUNKONG  

     
    PAPER-Data Mining

      Vol:
    E90-D No:8
      Page(s):
    1225-1234

    Assessment of discovered patterns is an important issue in the field of knowledge discovery. This paper presents an evaluation method that utilizes citation (reference) information to assess the quality of discovered document relations. With the concept of transitivity as direct/indirect citations, a series of evaluation criteria is introduced to define the validity of discovered relations. Two kinds of validity, called soft validity and hard validity, are proposed to express the quality of the discovered relations. For the purpose of impartial comparison, the expected validity is statistically estimated based on the generative probability of each relation pattern. The proposed evaluation is investigated using more than 10,000 documents obtained from a research publication database. With frequent itemset mining as a process to discover document relations, the proposed method was shown to be a powerful way to evaluate the relations in four aspects: soft/hard scoring, direct/indirect citation, relative quality over the expected value, and comparison to human judgment.
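    The notion of direct/indirect citations as transitive links can be sketched as a bounded-depth reachability check over a citation graph. The soft score below (fraction of linked pairs in a discovered relation) is a simplified stand-in for the paper's criteria, and the graph and document names are invented:

    ```python
    def cites_within(graph, src, dst, k):
        """True if dst is reachable from src via at most k citation hops."""
        frontier, seen = {src}, {src}
        for _ in range(k):
            nxt = set()
            for d in frontier:
                for ref in graph.get(d, ()):
                    if ref == dst:
                        return True
                    if ref not in seen:
                        seen.add(ref)
                        nxt.add(ref)
            frontier = nxt
        return False

    def relation_validity(graph, docs, k=2):
        """Fraction of document pairs linked by a direct or indirect
        (up to k-hop, either direction) citation.  A 'hard' notion would
        require every pair to be linked; this soft score is a sketch."""
        pairs = [(a, b) for i, a in enumerate(docs) for b in docs[i + 1:]]
        linked = sum(1 for a, b in pairs
                     if cites_within(graph, a, b, k)
                     or cites_within(graph, b, a, k))
        return linked / len(pairs)

    # Hypothetical citation graph: doc -> list of referenced docs.
    g = {"p1": ["p2"], "p2": ["p3"], "p4": []}
    print(relation_validity(g, ["p1", "p2", "p3"], k=2))  # every pair linked
    print(relation_validity(g, ["p1", "p4"], k=2))        # no citation path
    ```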

  • Image Magnification by a Compact Method with Preservation of Preferential Components

    Akira HIRABAYASHI  

     
    PAPER

      Vol:
    E90-A No:8
      Page(s):
    1534-1541

    Bicubic interpolation is one of the standard approaches to image magnification, since it is easy to compute and requires neither a priori knowledge nor a complicated model. Despite this convenience, images enlarged by bicubic interpolation are blurry, particularly for large magnification factors. This can be explained by four constraints underlying bicubic interpolation. Hence, by relaxing or replacing these constraints, we propose a new magnification method that performs better than bicubic interpolation while retaining its compactness. One of the constraints concerns the optimization criterion, which we replace with a criterion requiring that all pixel values be reproduced and that preferential components of the input image be perfectly reconstructed. We show that, by choosing low-frequency components or edge-enhancement components in the DCT basis as the preferential components, the proposed method outperforms bicubic interpolation with the same, or even a smaller, amount of computation.
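    The idea of perfectly preserving low-frequency DCT components during magnification can be sketched in one dimension: keep the input's DCT-II coefficients as the low band of a longer output and synthesize the rest from them. The paper's 2-D formulation and its choice of preferential components are more elaborate than this sketch:

    ```python
    import math

    def dct_magnify(x, M):
        """Enlarge a 1-D signal to length M by reusing its DCT-II
        coefficients as the low-frequency components of the output.

        When M equals len(x) this reduces to the ordinary inverse DCT,
        so all original sample values are reproduced; for M > len(x) the
        low-frequency content is preserved exactly while interpolating.
        """
        N = len(x)
        # Unnormalised DCT-II analysis.
        C = [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                 for n in range(N))
             for k in range(N)]
        # Synthesis on the longer grid, using only the N known coefficients.
        return [C[0] / N + (2.0 / N) * sum(
                    C[k] * math.cos(math.pi * (m + 0.5) * k / M)
                    for k in range(1, N))
                for m in range(M)]

    # A constant signal must stay constant under magnification.
    y = dct_magnify([5.0, 5.0, 5.0, 5.0], 8)
    print(len(y), max(abs(v - 5.0) for v in y) < 1e-9)
    ```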

  • Robust F0 Estimation Based on Complex LPC Analysis for IRS Filtered Noisy Speech

    Keiichi FUNAKI  Tatsuhiko KINJO  

     
    PAPER

      Vol:
    E90-A No:8
      Page(s):
    1579-1586

    This paper proposes a novel robust fundamental frequency (F0) estimation algorithm based on complex-valued speech analysis of the analytic speech signal. Since an analytic signal provides spectra only over positive frequencies, the spectrum can be estimated accurately at low frequencies. Consequently, F0 estimation using the residual signal extracted by complex-valued speech analysis can be expected to perform better than estimation from the residual of conventional real-valued LPC analysis. In this paper, the autocorrelation function weighted by the AMDF is adopted as the F0 estimation criterion, and four signals are evaluated for F0 estimation: the speech signal, the analytic speech signal, the LPC residual, and the complex LPC residual. The speech signals used in the experiments were IRS-filtered speech corrupted by additive white Gaussian noise or pink noise at levels of 10, 5, 0, and -5 dB. The experimental results demonstrate that the proposed algorithm based on the complex LPC residual performs better than the other methods in noisy environments.
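    The criterion "autocorrelation weighted by the AMDF" can be illustrated on a synthetic signal. The exact weighting form is not fixed by the abstract, so the division by (AMDF + 1) below is an assumption, as are the search-range parameters:

    ```python
    import math

    def f0_by_weighted_acf(x, fs, fmin=150.0, fmax=400.0):
        """Pick the lag maximizing ACF(lag) / (AMDF(lag) + 1).

        Weighting the autocorrelation by the inverse AMDF sharpens the
        periodicity peak; the paper's exact weighting may differ, so
        treat this form (and the lag search range) as illustrative.
        """
        n = len(x)
        best_lag, best_score = 0, -float("inf")
        for lag in range(int(fs / fmax), int(fs / fmin) + 1):
            acf = sum(x[i] * x[i + lag] for i in range(n - lag)) / (n - lag)
            amdf = sum(abs(x[i] - x[i + lag]) for i in range(n - lag)) / (n - lag)
            score = acf / (amdf + 1.0)
            if score > best_score:
                best_lag, best_score = lag, score
        return fs / best_lag

    # Synthetic 200 Hz "voiced" signal with two harmonics, sampled at 8 kHz.
    fs = 8000
    x = [math.sin(2 * math.pi * 200 * t / fs)
         + 0.4 * math.sin(2 * math.pi * 400 * t / fs) for t in range(800)]
    f0 = f0_by_weighted_acf(x, fs)
    print(round(f0))
    ```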

  • Players Clustering Based on Graph Theory for Tactics Analysis Purpose in Soccer Videos

    Hirofumi KON  Miki HASEYAMA  

     
    PAPER

      Vol:
    E90-A No:8
      Page(s):
    1528-1533

    In this paper, a new method for clustering players in order to analyze games in soccer videos is proposed. The proposed method classifies players who are closely related in terms of soccer tactics into one group. From a tactical viewpoint, the players in one group are located near each other, so the Euclidean distance between players is an effective measure for clustering. However, distance alone is not sufficient to extract tactics-based groups. Therefore, we utilize a modified version of a community extraction method, which finds community structure by dividing an undirected graph. Using this method in addition to the distance enables accurate clustering of players.
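    The first step, linking players by Euclidean distance, can be sketched as a threshold graph. The paper then divides such a graph with a modified community extraction method; the sketch below only takes connected components of the distance graph, and the player names, coordinates, and threshold are invented:

    ```python
    import math

    def player_groups(positions, max_dist=15.0):
        """Cluster players by linking any two within max_dist metres and
        taking connected components of the resulting undirected graph.

        This only illustrates the distance-graph construction step; the
        paper applies a (modified) community extraction method to divide
        the graph, not plain connected components.
        """
        names = list(positions)
        adj = {n: set() for n in names}
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if math.dist(positions[a], positions[b]) <= max_dist:
                    adj[a].add(b)
                    adj[b].add(a)
        groups, seen = [], set()
        for n in names:
            if n in seen:
                continue
            stack, comp = [n], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(adj[v] - comp)
            seen |= comp
            groups.append(sorted(comp))
        return groups

    # Hypothetical pitch coordinates (metres): a back line and two forwards.
    pos = {"DF1": (10, 20), "DF2": (10, 32), "DF3": (12, 44),
           "FW1": (70, 25), "FW2": (74, 35)}
    groups = player_groups(pos)
    print(groups)
    ```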

  • Generation of Training Data by Degradation Models for Traffic Sign Symbol Recognition

    Hiroyuki ISHIDA  Tomokazu TAKAHASHI  Ichiro IDE  Yoshito MEKADA  Hiroshi MURASE  

     
    PAPER

      Vol:
    E90-D No:8
      Page(s):
    1134-1141

    We present a novel training method for recognizing traffic sign symbols. Symbol images captured by a car-mounted camera suffer from various forms of image degradation, so similarly degraded images should be used as training data. Our method artificially generates such training data from the original templates of traffic sign symbols. Degradation models and a GA-based algorithm that simulates actually captured images are established. The proposed method enables us to obtain training data for all categories without exhaustively collecting them. Experimental results show the effectiveness of the proposed method for traffic sign symbol recognition.

  • Boundary Detection in Echocardiographic Images Using Markovian Level Set Method

    Jierong CHENG  Say-Wei FOO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E90-D No:8
      Page(s):
    1292-1300

    Owing to the large amount of speckle noise and the ill-defined edges present in echocardiographic images, computer-based boundary detection of the left ventricle has proved to be a challenging problem. In this paper, a Markovian level set method for boundary detection in long-axis echocardiographic images is proposed. It combines a Markov random field (MRF) model, which makes use of local statistics, with the level set method, which handles topological changes, to detect a continuous and smooth boundary. Experimental results show that higher accuracy can be achieved with the proposed method than with two related MRF-based methods.

  • Compact and Broadband Circularly Polarized Microstrip Antenna with Ring-Slot on Ground Plane

    Masataka YASUKAWA  

     
    LETTER-Antennas and Propagation

      Vol:
    E90-B No:8
      Page(s):
    2179-2181

    For a microstrip antenna (MSA) with a ring-shaped slot formed on the ground plane, downsizing of the microstrip patch and expansion of the circularly polarized bandwidth have been achieved. The dimensions of the patch are 6.8 mm × 7.4 mm, and a minimum axial ratio (AR) of 0.6 dB is obtained at 6.1 GHz. In addition, the AR is less than 3 dB over a relative bandwidth of 3.5%. The bandwidth of the proposed MSA is twice that of conventional single-feed circularly polarized MSAs, while its size is only half that of conventional MSAs.

  • Multiple-Length Variable-Weight Optical Orthogonal Codes for Supporting Multirate Multimedia Services in Optical CDMA Networks

    Nasaruddin  Tetsuo TSUJIOKA  

     
    SURVEY PAPER-Systems and Technologies

      Vol:
    E90-B No:8
      Page(s):
    1968-1978

    Future optical code division multiple access (CDMA) networks should be designed for multirate and fully integrated multimedia services. In conventional schemes, multilength optical orthogonal codes (OOCs) are designed to support multirate systems, while variable-weight OOCs are designed to support differentiated quality of service (QoS) for multimedia applications. In this paper, a novel class of optical signature codes, multiple-length variable-weight optical orthogonal codes (MLVW-OOCs), is proposed to support multirate, integrated multimedia services in optical CDMA networks. The proposed MLVW-OOCs have the features that variable-weight codes are easy to construct and can be expanded into multiple-length codes. A construction method for designing MLVW-OOCs with up to three levels of codes is discussed. The designed MLVW-OOCs can support differentiated requirements on data rate and QoS for several types of services in the network. A code analysis for obtaining the cross-correlation constraints, i.e., for computing the multiple access interference (MAI), over several levels of codes is also presented. The cross-correlation constraints of the proposed codes are better than those of conventional codes such as multilength OOCs. Finally, the bit error probability of the two-level MLVW-OOC is evaluated analytically. The results show that the proposed MLVW-OOCs can provide differentiated bit error probability performance for several combinations of data rate and QoS.
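    The correlation constraints central to OOC design can be checked directly on candidate codewords. The two weight-3, length-13 codewords below are illustrative examples, not taken from the paper's construction; for a (13, 3, 1)-OOC both the out-of-phase autocorrelation and the cyclic cross-correlation must not exceed 1:

    ```python
    def cyclic_xcorr_max(a, b):
        """Maximum cyclic cross-correlation between two equal-length
        0/1 codewords (the quantity bounded by an OOC's lambda_c)."""
        n = len(a)
        return max(sum(a[i] * b[(i + s) % n] for i in range(n))
                   for s in range(n))

    def auto_sidelobe_max(a):
        """Maximum out-of-phase cyclic autocorrelation (lambda_a bound)."""
        n = len(a)
        return max(sum(a[i] * a[(i + s) % n] for i in range(n))
                   for s in range(1, n))

    # Two hypothetical weight-3 codewords of length 13.
    c1 = [1 if i in (0, 1, 4) else 0 for i in range(13)]
    c2 = [1 if i in (0, 2, 7) else 0 for i in range(13)]
    xc = cyclic_xcorr_max(c1, c2)
    a1 = auto_sidelobe_max(c1)
    a2 = auto_sidelobe_max(c2)
    print(xc, a1, a2)
    ```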

  • Efficient Traffic Management for Reverse Traffic Channels in High-Speed CDMA Systems

    Woon-Young YEO  Hyejeong LEE  Dong-Ho CHO  

     
    LETTER-Wireless Communication Technologies

      Vol:
    E90-B No:8
      Page(s):
    2163-2167

    We point out the unstable operation of reverse traffic management in the cdma2000 1xEV-DO system, and propose a new rate control scheme that controls the reverse traffic load more precisely. The proposed scheme is modeled as a multidimensional Markov process and compared with the conventional scheme. The analysis results show that the proposed rate control scheme has a lower overload probability and higher reverse link throughput than the conventional one.
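    The flavor of such a Markov analysis can be conveyed with a one-dimensional birth-death chain in which "overload" means too many active reverse channels. The paper's model is multidimensional, so treat this as a simplified stand-in with invented parameters:

    ```python
    def overload_probability(arrival, service, capacity, threshold):
        """Stationary overload probability of a birth-death chain.

        State n = number of active high-rate reverse channels; births
        occur at rate 'arrival', deaths at rate n * 'service'.  This
        1-D M/M/c-style chain only sketches the idea; the paper's model
        is a multidimensional Markov process.
        """
        # Unnormalised stationary probabilities pi_n via detailed balance:
        # pi_n * n * service = pi_{n-1} * arrival.
        pi = [1.0]
        for n in range(1, capacity + 1):
            pi.append(pi[-1] * arrival / (n * service))
        total = sum(pi)
        return sum(pi[threshold:]) / total

    # Hypothetical numbers: offered load of 2 Erlangs, at most 8
    # channels, "overload" if 6 or more are active.
    p = overload_probability(arrival=2.0, service=1.0, capacity=8, threshold=6)
    print(round(p, 4))
    ```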

  • A Challenge to Access/Backbone Integrated Network

    Hironari MATSUDA  Takuya KAMINOGOU  Tadahiko YASUI  

     
    SURVEY PAPER-Systems and Technologies

      Vol:
    E90-B No:8
      Page(s):
    1960-1967

    Integration of the access and backbone networks is expected to become indispensable in the future. We analyze current and future optical networks and describe the promising technologies; GMPLS architecture in backbone networks and WDM-PON architecture in access networks will play the most important roles. We overview recent studies on the access/backbone integrated network that aim to achieve guaranteed QoS. We also describe a developed system architecture as a milestone toward the access/backbone integrated network.

  • Simulation Study of Factors That Determine Write Margins in Patterned Media

    Naoki HONDA  Kiyoshi YAMAKAWA  Kazuhiro OUCHI  

     
    PAPER

      Vol:
    E90-C No:8
      Page(s):
    1594-1598

    Shift margins in the down-track and cross-track directions and the skew-angle margin were investigated by micromagnetic simulation with a shielded planar head for patterned media with an areal density of 1 Tbit/in². The shift margins were quantitatively estimated using parameters of the head field and the magnetic properties of the media. To obtain a larger down-track shift margin, it is essential to use a head with a higher field gradient and a medium with a small field width between the saturation and nucleation fields; to obtain larger cross-track shift and skew-angle margins, a head with a narrower cross-track field distribution is essential.

  • Fabrication of FePt Films for Magnetic Recording Media

    Fulin WEI  Zheng YANG  

     
    PAPER

      Vol:
    E90-C No:8
      Page(s):
    1570-1576

    FePt films for magnetic recording media were fabricated by RF sputtering. Efforts were made to reduce the grain size, to lower the ordering temperature, and to control the crystalline orientation. It was found that an appropriate amount of alumina decreased the grain size and weakened the interparticle exchange coupling, and that a 15% N2 partial pressure reduced the transformation temperature from the fcc to the fct phase. To control the texture of the FePt films, two methods were used. First, the addition of W to the Cr underlayer and the use of Mo and Pt as intermediate layers changed the texture of the FePt films from (200) to (001). Second, (Pt/Fe)n multilayers were fabricated to produce the FePt (001) orientation.

  • An Application of Neural Network Equalization to Polytopic Multiplexing Holography and Reduction of Interpixel Interference

    Hisashi OSAWA  Naoki KAWAUE  Yoshihiro OKAMOTO  Yasuaki NAKAMURA  Hirotaka OCHI  Shoji MARUKAWA  

     
    PAPER

      Vol:
    E90-C No:8
      Page(s):
    1612-1618

    Neural network equalization for polytopic multiplexing holography is studied to reduce interpixel interference. The bit error rate performance of a bilinear or bicubic interpolator followed by a neural network equalizer is obtained by computer simulation. The results show that the neural network equalizer provides an SNR improvement of about 1.0 dB over conventional equalization.
