
Author Search Result

[Author] Ying SU (13 hits)

  • Digital Controller for Single-Phase DCM Boost PFC Converter with High Power Factor over Wide Input Voltage and Load Range

    Daying SUN  Weifeng SUN  Qing WANG  Miao YANG  Shen XU  Shengli LU  

     
    PAPER-Electronic Circuits

      Vol:
    E97-C No:4
      Page(s):
    377-385

    A new digital controller for a single-phase boost power factor correction (PFC) converter operating in discontinuous conduction mode (DCM) is presented to achieve a high input power factor over a wide input voltage and load range. A duty cycle modulation method is proposed to reduce the line harmonic distortion and improve the power factor. A loop regulation scheme is adopted to further improve the system stability and the power factor simultaneously. Meanwhile, a novel digital pulse width modulator (DPWM) based on the delay-locked loop technique is realized to improve the regulation linearity of the duty cycle and reduce the regulation deviation. The single-phase DCM boost PFC converter with the proposed digital controller has been implemented on a field programmable gate array (FPGA). Experimental results indicate that the proposed digital controller achieves a power factor above 0.99 over a wide input voltage and load range, an output voltage deviation of less than 3 V, and a peak conversion efficiency of 96.2% at full load.
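
    As a numerical illustration of why duty-cycle modulation raises the power factor of a DCM boost PFC stage, the sketch below uses the standard averaged DCM boost input-current model and a simple modulation law d ∝ √(1 − vin/Vo); the component values and the modulation law are illustrative assumptions, not the controller described in the paper.

```python
# Illustrative sketch (not the paper's controller): averaged DCM boost model,
#   i_in ≈ vin * d^2 * Ts / (2 * L) * Vo / (Vo - vin),
# with a constant duty cycle versus the modulation d = d0 * sqrt(1 - vin/Vo),
# which makes i_in proportional to vin and thus the power factor close to unity.
import numpy as np

Vo, Vm = 390.0, 311.0                 # assumed output voltage and line peak voltage (V)
L, Ts = 100e-6, 1.0 / 100e3           # assumed boost inductance (H) and switching period (s)
theta = np.linspace(1e-3, np.pi - 1e-3, 1000)
vin = Vm * np.sin(theta)              # rectified line voltage over one half cycle

def avg_input_current(d):
    return vin * d**2 * Ts / (2.0 * L) * Vo / (Vo - vin)

def power_factor(i):
    p_avg = np.mean(vin * i)
    return p_avg / (np.sqrt(np.mean(vin**2)) * np.sqrt(np.mean(i**2)))

d_const = 0.25 * np.ones_like(theta)            # fixed duty cycle
d_mod = 0.25 * np.sqrt(1.0 - vin / Vo)          # simple modulation law

print("PF with constant duty cycle :", round(power_factor(avg_input_current(d_const)), 4))
print("PF with modulated duty cycle:", round(power_factor(avg_input_current(d_mod)), 4))
```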

  • Microstructure Analysis of Annealing Effect on CoCrPt Thin Film Media by XRD

    Ding JIN  Ying SU  Jian Ping WANG  Hao GONG  

     
    PAPER

      Vol:
    E83-C No:9
      Page(s):
    1473-1477

    Post-annealing treatments of CoCrPt magnetic thin films were carried out under different thermal conditions by varying the duration of the annealing procedure. Coercivity (Hc) was improved in the annealed samples compared with the as-deposited ones, with values as high as 5.2 kOe attained. To clarify the mechanism by which annealing affects the magnetic properties, the X-ray diffraction (XRD) spectra of these samples and their magnetic properties were carefully studied. The Co and Cr lattice parameters were calculated separately from different crystal lattice planes. It was found that the a-axis lattice spacing of the Co hexagonal structure increases monotonically with annealing time. The variation in the prominence of the Co hcp peaks may be due to redistribution of Cr or Pt within the crystal grains and their boundaries. Combined with the grain-size analysis of the Co-rich regions from X-ray diffraction peak broadening, which was not entirely consistent with results from others' TEM and AFM studies, Cr diffusion is suggested to be the governing factor in the short annealing-time regime, while Co-rich grain growth should also be invoked to explain the variation of magnetic properties under longer post-annealing.
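
    The lattice-parameter extraction mentioned above follows from Bragg's law and the hexagonal plane-spacing formula; the sketch below works through one such calculation for an hcp-Co (100) reflection. The X-ray wavelength and peak position are illustrative assumptions, not values taken from the paper.

```python
# Worked example (illustrative values, not data from the paper): extracting the
# hcp-Co a-axis lattice parameter from one XRD reflection.
# Bragg's law: lambda = 2 d sin(theta); for a hexagonal lattice,
#   1/d^2 = (4/3) (h^2 + h k + k^2) / a^2 + l^2 / c^2,
# so the (100) reflection gives d = a * sqrt(3) / 2, i.e. a = 2 d / sqrt(3).
import math

wavelength = 1.5406            # Cu K-alpha wavelength in angstroms (assumed source)
two_theta_100 = 41.7           # hypothetical Co (100) peak position in degrees

theta = math.radians(two_theta_100 / 2.0)
d_100 = wavelength / (2.0 * math.sin(theta))   # interplanar spacing of the (100) planes
a = 2.0 * d_100 / math.sqrt(3.0)               # a-axis lattice parameter

print("d(100) = %.4f A, a = %.4f A" % (d_100, a))
```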

  • Selective Pseudo-Labeling Based Subspace Learning for Cross-Project Defect Prediction

    Ying SUN  Xiao-Yuan JING  Fei WU  Yanfei SUN  

     
    LETTER-Software Engineering

      Publicized:
    2020/06/10
      Vol:
    E103-D No:9
      Page(s):
    2003-2006

    Cross-project defect prediction (CPDP) has recently become a hot research topic; it utilizes data from an existing source project to construct a prediction model and predicts whether software instances from a target project are defect-prone. However, bridging the distribution difference between projects is challenging. To minimize the data distribution differences between projects and predict unlabeled target instances, we present a novel approach called selective pseudo-labeling based subspace learning (SPSL). SPSL learns a common subspace by using both labeled source instances and pseudo-labeled target instances. The accuracy of pseudo-labeling is promoted by an iterative selective pseudo-labeling strategy: the pseudo-labeled instances from the target project are iteratively updated by selecting the instances with high confidence from two pseudo-labeling techniques. Experiments are conducted on the AEEEM dataset, and the results show that SPSL is effective for CPDP.
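
    A minimal sketch of an iterative selective pseudo-labeling loop is given below, with PCA standing in for the common-subspace learning and logistic regression for the classifier; the actual SPSL subspace objective and its two-technique selection rule are more elaborate than this.

```python
# Minimal sketch (not the paper's SPSL): PCA stands in for the common-subspace
# learning and logistic regression for the classifier; target pseudo-labels are
# refreshed each round and only the confident ones are reused for training.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def selective_pseudo_label(X_src, y_src, X_tgt, n_components=10, n_iters=5, conf_thresh=0.9):
    y_pseudo, conf = None, None
    for _ in range(n_iters):
        # learn a common subspace from source plus target data
        pca = PCA(n_components=min(n_components, X_src.shape[1]))
        Z = pca.fit_transform(np.vstack([X_src, X_tgt]))
        Z_src, Z_tgt = Z[:len(X_src)], Z[len(X_src):]

        # train on labeled source instances plus confidently pseudo-labeled target instances
        X_train, y_train = Z_src, y_src
        if y_pseudo is not None:
            keep = conf >= conf_thresh
            X_train = np.vstack([Z_src, Z_tgt[keep]])
            y_train = np.concatenate([y_src, y_pseudo[keep]])

        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        proba = clf.predict_proba(Z_tgt)
        y_pseudo, conf = proba.argmax(axis=1), proba.max(axis=1)
    return y_pseudo

# synthetic data standing in for the source and target projects
rng = np.random.default_rng(0)
X_s, y_s = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)
X_t = rng.normal(loc=0.3, size=(150, 20))
print(selective_pseudo_label(X_s, y_s, X_t)[:10])
```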

  • Joint Domain Adaption and Pseudo-Labeling for Cross-Project Defect Prediction

    Fei WU  Xinhao ZHENG  Ying SUN  Yang GAO  Xiao-Yuan JING  

     
    LETTER-Software Engineering

      Publicized:
    2021/11/04
      Vol:
    E105-D No:2
      Page(s):
    432-435

    Cross-project defect prediction (CPDP) has been a hot research topic in recent years. The inconsistent data distribution between source and target projects and the lack of labels for most target instances make defect prediction challenging. Researchers have developed several CPDP methods, but the prediction performance still needs to be improved. In this paper, we propose a novel approach called Joint Domain Adaption and Pseudo-Labeling (JDAPL). The network architecture consists of a feature mapping sub-network that maps source and target instances into a common subspace, followed by a classification sub-network and an auxiliary classification sub-network. The classification sub-network makes use of the label information of labeled instances to generate pseudo-labels. The auxiliary classification sub-network learns to reduce the distribution difference and improve the accuracy of the pseudo-labels for unlabeled instances through loss maximization. Network training is guided by an adversarial scheme. Extensive experiments are conducted on 10 projects of the AEEEM and NASA datasets, and the results indicate that our approach achieves better performance than the baselines.
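
    The sketch below shows one way such a two-classifier adversarial scheme can be wired up in PyTorch, loosely following maximum-classifier-discrepancy-style training on made-up tensors; the actual JDAPL architecture, losses and training schedule are not reproduced here.

```python
# Rough sketch (not the authors' exact JDAPL): feature mapper + main classifier +
# auxiliary classifier, trained adversarially in the spirit of maximum classifier
# discrepancy; the main classifier also yields pseudo-labels for target instances.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 16))  # feature mapping sub-network
clf = nn.Linear(16, 2)   # classification sub-network
aux = nn.Linear(16, 2)   # auxiliary classification sub-network

opt_f = torch.optim.Adam(feat.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(list(clf.parameters()) + list(aux.parameters()), lr=1e-3)

x_src, y_src = torch.randn(64, 20), torch.randint(0, 2, (64,))   # labeled source batch
x_tgt = torch.randn(64, 20)                                      # unlabeled target batch

def discrepancy(z):
    # disagreement between the two classifiers on the same features
    return (F.softmax(clf(z), dim=1) - F.softmax(aux(z), dim=1)).abs().mean()

for step in range(200):
    # classifier step: fit the source labels and maximize disagreement on target
    z_src = feat(x_src)
    loss_cls = F.cross_entropy(clf(z_src), y_src) + F.cross_entropy(aux(z_src), y_src)
    loss_c = loss_cls - discrepancy(feat(x_tgt).detach())
    opt_c.zero_grad()
    opt_f.zero_grad()
    loss_c.backward()
    opt_c.step()

    # feature-mapper step: minimize the same disagreement to align the domains
    loss_f = discrepancy(feat(x_tgt))
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()

pseudo_labels = clf(feat(x_tgt)).argmax(dim=1)   # pseudo-labels for the target batch
```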

  • Cross-Project Defect Prediction via Semi-Supervised Discriminative Feature Learning

    Danlei XING  Fei WU  Ying SUN  Xiao-Yuan JING  

     
    LETTER-Software Engineering

      Publicized:
    2020/07/07
      Vol:
    E103-D No:10
      Page(s):
    2237-2240

    Cross-project defect prediction (CPDP) is a feasible solution for building an accurate prediction model without enough historical data. Although existing CPDP methods that use only labeled data to build the prediction model achieve good results, there is still much room to improve prediction performance. In this paper we propose a Semi-Supervised Discriminative Feature Learning (SSDFL) approach for CPDP. SSDFL first transfers knowledge from source and target data into a common space by using a fully-connected neural network to mine potential similarities between source and target data. Next, we reduce the differences in both the marginal and conditional distributions between the mapped source and target data. We also introduce discriminative feature learning to make full use of label information, so that instances from the same class are close to each other and instances from different classes are distant from each other. Extensive experiments are conducted on 10 projects from the AEEEM and NASA datasets, and the experimental results indicate that our approach obtains better prediction performance than the baselines.
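
    The sketch below illustrates the three loss ingredients described above on already-mapped features, with simple mean matching standing in for the marginal and conditional distribution differences; the paper's network and exact loss formulation are not reproduced.

```python
# Illustration of the three loss ingredients (not the paper's network or exact
# losses), computed on already-mapped features: marginal alignment, conditional
# (class-wise) alignment using pseudo-labels, and a discriminative term that keeps
# classes compact and separated. Simple mean matching stands in for the
# distribution-difference measures; two classes (defective / clean) are assumed.
import numpy as np

def marginal_gap(Zs, Zt):
    # distance between the two domain means
    return float(np.linalg.norm(Zs.mean(0) - Zt.mean(0)) ** 2)

def conditional_gap(Zs, ys, Zt, yt_pseudo):
    # distance between per-class means, using pseudo-labels on the target side
    return float(sum(np.linalg.norm(Zs[ys == c].mean(0) - Zt[yt_pseudo == c].mean(0)) ** 2
                     for c in np.unique(ys) if np.any(yt_pseudo == c)))

def discriminative_term(Z, y):
    # within-class scatter minus the distance between the two class centers
    centers = {c: Z[y == c].mean(0) for c in np.unique(y)}
    within = sum(float(((Z[y == c] - centers[c]) ** 2).sum()) for c in centers)
    c0, c1 = centers.values()
    return within - float(np.linalg.norm(c0 - c1) ** 2)

rng = np.random.default_rng(1)
Zs, ys = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)
Zt, yt = rng.normal(loc=0.5, size=(80, 8)), rng.integers(0, 2, 80)
print(marginal_gap(Zs, Zt), conditional_gap(Zs, ys, Zt, yt), discriminative_term(Zs, ys))
```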

  • Multiagent Cooperating Learning Methods by Indirect Media Communication

    Ruoying SUN  Shoji TATSUMI  Gang ZHAO  

     
    PAPER-Neural Networks and Bioengineering

      Vol:
    E86-A No:11
      Page(s):
    2868-2878

    Reinforcement Learning (RL) is an efficient learning method for solving problems in which learning agents have no a priori knowledge about the environment. Ant Colony System (ACS) provides an indirect communication method among cooperating agents, which is efficient for solving combinatorial optimization problems. Based on the indirect-communication cooperation of ACS and the reinforcement-value update policy of RL, this paper proposes the Q-ACS multiagent cooperative learning method, which can be applied to both Markov Decision Processes (MDPs) and combinatorial optimization problems. The advantage of the Q-ACS method is that the learning agents share episodes beneficial to the exploitation of accumulated knowledge and utilize the learned reinforcement values efficiently. Further, taking visit counts into account, this paper proposes the T-ACS multiagent learning method, whose merit is that the learning agents share better policies beneficial to exploration during the learning process. Meanwhile, regarding Q-ACS and T-ACS as homogeneous multiagent learning methods, and in light of indirect media communication among heterogeneous multiple agents, this paper presents a heterogeneous multiagent RL method, D-ACS, which combines the learning policies of Q-ACS and T-ACS and uses different update policies for the reinforcement values. The agents in our methods cooperate simply by exchanging information in the form of reinforcement values updated in a model common to all agents. Having the advantages of actively exploring an unknown environment and effectively exploiting learned knowledge, the proposed methods can solve both problems with MDPs and combinatorial optimization problems effectively. The results of experiments on a hunter game and the traveling salesman problem demonstrate that our methods perform competitively with representative methods in each domain.
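
    A toy illustration of the shared mechanism, agents that communicate only indirectly through one common table of reinforcement values updated with a Q-learning-style rule, is sketched below; the specific Q-ACS/T-ACS selection and update rules are not reproduced.

```python
# Toy illustration (not the paper's Q-ACS/T-ACS rules): several agents cooperate
# only indirectly, by reading and updating one shared table of reinforcement
# values with a Q-learning-style rule, on a small chain environment.
import random

random.seed(0)
N, GOAL = 8, 7                     # states 0..7 on a chain, goal at state 7
ACTIONS = (-1, 1)                  # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}   # common model shared by all agents
alpha, gamma = 0.5, 0.95

def run_episode(eps):
    s = 0
    while s != GOAL:
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:                       # greedy on the shared values, random tie-break
            a = max(ACTIONS, key=lambda act: (Q[(s, act)], random.random()))
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # the update goes into the shared table: the only communication channel
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# three agents with different exploration rates take turns on the shared table
for ep in range(300):
    run_episode(eps=(0.05, 0.1, 0.2)[ep % 3])
print("learned value of the start state:", round(max(Q[(0, a)] for a in ACTIONS), 3))
```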

  • Fronthaul Constrained Coordinated Transmission in Cloud-Based 5G Radio Access Network: Energy Efficiency Perspective

    Ying SUN  Yang WANG  Yuqing ZHONG  

     
    PAPER-Network

      Publicized:
    2017/02/08
      Vol:
    E100-B No:8
      Page(s):
    1343-1351

    The cloud radio access network (C-RAN) is enjoying unprecedented popularity in the evolution of the current RAN towards 5G. One of the essential benefits of C-RAN is that it facilitates cooperative transmission to enhance capacity and energy performance. In this paper, we argue that conventional symmetric coordination, in which all antennas participate in transmission, does not necessarily lead to an energy-efficient C-RAN. Further, current assessments of energy consumption should be modified to match this shifted paradigm in network architecture. Towards this end, this paper proposes an asymmetric coordination scheme to optimize the energy efficiency of C-RAN. Specifically, asymmetric coordination is approximated and formulated as a joint antenna selection and power allocation problem, which is then solved by a proposed sequential-iterative algorithm. A modular power consumption model is also developed to convert the computational complexity of coordination into baseband power consumption. Simulations verify that the proposed asymmetric coordination effectively enhances system energy efficiency.
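
    The sketch below conveys the flavor of joint antenna selection and power allocation for energy efficiency with a simple greedy rule, an equal power split and a single-user rate model, all of which are illustrative assumptions rather than the sequential-iterative algorithm of the paper.

```python
# Greedy sketch (not the paper's sequential-iterative algorithm): antennas are
# switched on in order of channel strength as long as the ratio of sum rate to
# total circuit-plus-transmit power keeps improving; equal power allocation and a
# simple log2(1 + SNR) rate model are illustrative assumptions.
import math
import random

random.seed(0)
gains = [random.uniform(0.2, 1.0) for _ in range(8)]   # per-antenna channel gains
P_FIX, P_ANT, P_TX, NOISE = 10.0, 1.0, 4.0, 0.1        # watts / noise power (illustrative)

def energy_efficiency(active):
    p_each = P_TX / len(active)                         # equal power split
    rate = sum(math.log2(1.0 + gains[i] * p_each / NOISE) for i in active)
    power = P_FIX + P_ANT * len(active) + P_TX          # modular power model stand-in
    return rate / power                                  # bit/s/Hz per watt

active, best = [], 0.0
for i in sorted(range(len(gains)), key=lambda i: -gains[i]):
    ee = energy_efficiency(active + [i])
    if ee <= best:
        break                                            # adding more antennas no longer pays off
    active, best = active + [i], ee
print("selected antennas:", active, "| energy efficiency:", round(best, 3))
```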

  • Noise-Analysis Based Threshold-Choosing Algorithm in Motion Estimation

    Xiaoying GAN  Shiying SUN  Wentao SONG  Bo LIU  

     
    LETTER-Multimedia Systems for Communications

      Vol:
    E88-B No:4
      Page(s):
    1753-1755

    A novel threshold-choosing method for the threshold-based skip mechanism is presented, in which the threshold is obtained from an analysis of the noise variance induced by the video device. Simulation results show that the proposed method can remarkably reduce computation time with only a marginal performance penalty.
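
    One simple way to derive such a threshold from the device noise variance is sketched below, assuming zero-mean Gaussian sensor noise so that the frame difference of a static block has a known expected sum of absolute differences (SAD); the block size, margin and the derivation itself are illustrative, not the letter's exact analysis.

```python
# Hedged illustration (not the letter's exact analysis): for a static block, the
# per-pixel difference between two noisy frames is ~ N(0, 2*sigma^2), whose mean
# absolute value is 2*sigma/sqrt(pi); scaling by the block area gives the expected
# SAD of a motionless block, and a small margin on top of it gives a skip threshold.
import math
import numpy as np

def skip_threshold(sigma, block=16, margin=1.2):
    return margin * block * block * 2.0 * sigma / math.sqrt(math.pi)

# quick check against a simulated static block corrupted by sensor noise
rng = np.random.default_rng(0)
sigma = 2.0
clean = rng.integers(0, 256, (16, 16)).astype(float)
prev = clean + rng.normal(0.0, sigma, (16, 16))    # previous frame observation
cur = clean + rng.normal(0.0, sigma, (16, 16))     # current frame, no motion
sad = np.abs(cur - prev).sum()
print("SAD of a static noisy block:", round(sad, 1), "| skip threshold:", round(skip_threshold(sigma), 1))
```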

  • On the Security of an Identity-Based Proxy Signature Scheme in the Standard Model

    Ying SUN  Yong YU  Xiaosong ZHANG  Jiwen CHAI  

     
    LETTER-Cryptography and Information Security

      Vol:
    E96-A No:3
      Page(s):
    721-723

    Observing that the security of existing identity-based proxy signature schemes had only been proven in the random oracle model, Cao et al. proposed the first direct construction of an identity-based proxy signature scheme secure in the standard model, making use of the identity-based signature due to Paterson and Schuldt. They also provided a security proof to show that their construction is secure against forgery attacks without resorting to random oracles. Unfortunately, in this letter, we demonstrate that their scheme is vulnerable to insider attacks. Specifically, after a private-key extraction query, an adversary, behaving as a malicious original signer or a malicious proxy signer, is able to violate the unforgeability of the scheme.

  • Convergence of the Q-ae Learning on Deterministic MDPs and Its Efficiency on the Stochastic Environment

    Gang ZHAO  Shoji TATSUMI  Ruoying SUN  

     
    PAPER-Algorithms and Data Structures

      Vol:
    E83-A No:9
      Page(s):
    1786-1795

    Reinforcement Learning (RL) is an efficient method for solving Markov Decision Processes (MDPs) without a priori knowledge about the environment, and its methods can be classified as exploitation-oriented or exploration-oriented. Q-learning is a representative RL method classified as exploration-oriented. It is guaranteed to obtain an optimal policy; however, Q-learning needs numerous trials to learn it because there is no action-selecting mechanism in Q-learning. To accelerate the learning rate of Q-learning and realize both exploitation and exploration during the learning process, the Q-ee learning system has been proposed, which uses a pre-action-selector, an action-selector and back propagation of Q values to improve the performance of Q-learning. However, Q-ee learning is suitable only for deterministic MDPs, and its convergence guarantee for deriving an optimal policy has not been proved. In this paper, after discussing different exploration methods and replacing the pre-action-selector in Q-ee learning, we introduce a method that implements active exploration of an environment, Active Exploration Planning (AEP), into the learning system, which we call Q-ae learning. With this replacement, Q-ae learning not only maintains the advantages of Q-ee learning but is also adapted to stochastic environments. Moreover, under deterministic MDPs, this paper presents the convergence condition, with its proof, for an agent to obtain the optimal policy with Q-ae learning. Further, through discussion and experiments, it is shown that by adjusting the relation between the learning factor and the discount rate, the exploration of an environment can be controlled in a stochastic environment. Experimental results on the exploration rate and the correctness of learned policies also illustrate the efficiency of Q-ae learning in stochastic environments.
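
    A minimal stand-in for combining Q-learning with an active exploration mechanism is sketched below: action selection adds a bonus that favors rarely tried state-action pairs. This only illustrates the exploration/exploitation balance; it is not the AEP pre-action-selector or the convergence construction analyzed in the paper.

```python
# Minimal stand-in (not AEP itself): Q-learning in which action selection adds an
# exploration bonus for rarely tried state-action pairs, so the agent deliberately
# explores early and exploits the learned Q values later.
from collections import defaultdict

GOAL = 5                                  # chain of states 0..5, goal at state 5
ACTIONS = (-1, 1)
Q = defaultdict(float)                    # state-action values
visits = defaultdict(int)                 # state-action visit counts
alpha, gamma, kappa = 0.5, 0.9, 0.3       # kappa scales the exploration bonus

def select_action(s):
    # exploit Q but add a bonus that shrinks as a state-action pair is tried more
    return max(ACTIONS, key=lambda a: Q[(s, a)] + kappa / (1 + visits[(s, a)]))

for episode in range(100):
    s = 0
    while s != GOAL:
        a = select_action(s)
        visits[(s, a)] += 1
        s2 = min(max(s + a, 0), GOAL)     # deterministic transition
        r = 1.0 if s2 == GOAL else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
print("learned value of the start state:", round(max(Q[(0, a)] for a in ACTIONS), 3))
```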

  • Further Analysis of a Practical Hierarchical Identity-Based Encryption Scheme

    Ying SUN  Yong YU  Yi MU  

     
    LETTER-Information Network

      Vol:
    E95-D No:6
      Page(s):
    1690-1693

    Hu, Huang and Fan proposed a fully secure hierarchical identity-based encryption scheme (IEICE Trans. Fundamentals, Vol.E92-A, No.6, pp.1494–1499, 2009) that achieves constant-size ciphertexts and a tight security reduction. Unfortunately, Park and Lee (IEICE Trans. Fundamentals, Vol.E93-A, No.6, pp.1269–1272, 2010) found that the security proof of Hu et al.'s scheme is incorrect; that is, the security of Hu et al.'s scheme cannot be reduced to their claimed q-ABDHE assumption. However, it remained unclear whether Hu et al.'s scheme is still secure. In this letter, we provide an attack showing that the scheme is not secure against chosen-plaintext attack.

  • Security Analysis of a Distributed Reprogramming Protocol for Wireless Sensor Networks

    Yong YU  Jianbing NI  Ying SUN  

     
    LETTER-Information Network

      Vol:
    E96-D No:8
      Page(s):
    1875-1877

    Reprogramming for wireless sensor networks is essential for uploading new code or altering the functionality of existing code. To overcome the weakness of the centralized approach of traditional solutions, He et al. proposed the notion of distributed reprogramming, in which multiple authorized network users are able to reprogram sensor nodes without involving the base station. They also gave a novel distributed reprogramming protocol called SDRP based on identity-based signatures, and provided a comprehensive security analysis of their protocol. In this letter, unfortunately, we demonstrate that SDRP is insecure, as the protocol fails to provide authenticity and integrity of code images, the most important security requirement of a secure reprogramming protocol.

  • RTP-Q: A Reinforcement Learning System with Time Constraints Exploration Planning for Accelerating the Learning Rate

    Gang ZHAO  Shoji TATSUMI  Ruoying SUN  

     
    PAPER-Artificial Intelligence and Knowledge

      Vol:
    E82-A No:10
      Page(s):
    2266-2273

    Reinforcement learning is an efficient method for solving Markov Decision Processes in which an agent improves its performance by using scalar reward values, offering a high capability for reactive and adaptive behaviors. Q-learning is a representative reinforcement learning method that is guaranteed to obtain an optimal policy but needs numerous trials to achieve it. The k-Certainty Exploration Learning System realizes active exploration of an environment, but its learning process is separated into two phases, and estimated values are not derived while the environment is being identified. The Dyna-Q architecture makes fuller use of a limited amount of experience and achieves a better policy with fewer environment interactions while identifying the environment, by learning and planning within constrained time; however, its exploration is not active. This paper proposes the RTP-Q reinforcement learning system, which adapts an efficient method for exploring an environment into time-constrained exploration planning and combines it with an integrated system of learning, planning and reacting, aiming for the best of both methods. By improving the exploration of the environment and refining the model of the environment, the RTP-Q learning system accelerates the learning rate for obtaining an optimal policy. The results of experiments on navigation tasks demonstrate that the RTP-Q learning system is efficient.
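
    Since the paper builds on the Dyna-style integration of learning, planning and reacting, the sketch below shows that general pattern with a bounded number of planning updates per real step standing in for time-constrained exploration planning; the specific RTP-Q exploration rules are not reproduced.

```python
# Sketch of the underlying Dyna-style pattern (not the specific RTP-Q rules):
# every real step updates Q from experience, refines a learned model of the
# environment, and then performs a fixed, time-bounded number of planning updates
# from that model.
import random

random.seed(0)
N, GOAL = 10, 9
ACTIONS = (-1, 1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
model = {}                                   # learned model: (s, a) -> (s', r)
alpha, gamma, eps, PLAN_STEPS = 0.5, 0.95, 0.1, 5

def q_update(s, a, r, s2):
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])

for episode in range(150):
    s = 0
    while s != GOAL:
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:                                # greedy with random tie-break
            a = max(ACTIONS, key=lambda act: (Q[(s, act)], random.random()))
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        q_update(s, a, r, s2)                # learning from the real experience
        model[(s, a)] = (s2, r)              # refining the model of the environment
        for _ in range(PLAN_STEPS):          # planning under a fixed step budget
            ps, pa = random.choice(list(model))
            ps2, pr = model[(ps, pa)]
            q_update(ps, pa, pr, ps2)
        s = s2
print("learned value of the start state:", round(max(Q[(0, a)] for a in ACTIONS), 3))
```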