
Author Search Result

[Author] Chenxi LI (2 hits)

  • Extended Personalized Individual Semantics with 2-Tuple Linguistic Preference for Supporting Consensus Decision Making

    Haiyan HUANG  Chenxi LI  

    PAPER-Fundamentals of Information Systems

    Publicized: 2017/11/22
    Vol: E101-D No:2
    Page(s): 387-395

    Considering that different people differ in their linguistic preferences, and in order to determine the consensus state when using Computing with Words (CWW) to support consensus decision making, this paper first proposes an interval composite scale based 2-tuple linguistic model, which realizes translation from words to interval numerical values and retranslation from interval numerical values back to words. Second, it proposes an interval composite scale based personalized individual semantics model (ICS-PISM), which can provide different linguistic representation models for different decision-makers. Finally, it proposes a consensus decision making model with ICS-PISM, which includes a semantic translation and retranslation phase during the decision process and determines the consensus state of the whole decision process. The proposed models take into full consideration that human language contains vague expressions and that real-world preferences are usually uncertain, and they provide efficient computation models to support consensus decision making.
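
    A minimal sketch of the classic 2-tuple linguistic translation and retranslation functions that interval composite scale extensions build on is given below; the term set, scale granularity, and function names are illustrative assumptions, not the paper's actual ICS-PISM formulation.

      # Sketch of the basic 2-tuple linguistic model (assumed term set and names;
      # the paper's interval composite scale model extends this idea).
      TERMS = ["very_low", "low", "medium", "high", "very_high"]  # S = {s_0, ..., s_4}

      def to_two_tuple(beta):
          """Translate a numeric assessment beta in [0, g] into a 2-tuple (s_i, alpha)."""
          g = len(TERMS) - 1
          beta = max(0.0, min(float(beta), float(g)))
          i = int(round(beta))          # index of the closest linguistic term
          alpha = beta - i              # symbolic translation in [-0.5, 0.5)
          return TERMS[i], alpha

      def from_two_tuple(term, alpha):
          """Retranslate a 2-tuple (s_i, alpha) back into its numeric value i + alpha."""
          return TERMS.index(term) + alpha

      print(to_two_tuple(2.7))              # ('high', -0.3) up to float rounding
      print(from_two_tuple("high", -0.3))   # 2.7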

  • A Study of Qualitative Knowledge-Based Exploration for Continuous Deep Reinforcement Learning

    Chenxi LI  Lei CAO  Xiaoming LIU  Xiliang CHEN  Zhixiong XU  Yongliang ZHANG  

    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2017/07/26
    Vol: E100-D No:11
    Page(s): 2721-2724

    As an important method for solving sequential decision-making problems, reinforcement learning learns the policy for a task through interaction with the environment, but it has difficulty scaling to large-scale problems. One of the reasons is the exploration-exploitation dilemma, which may lead to inefficient learning. We present an approach that addresses this shortcoming by introducing qualitative knowledge into reinforcement learning, using cloud control systems to represent 'if-then' rules. We use these rules as a heuristic exploration strategy to guide action selection in deep reinforcement learning. Empirical evaluation results show that our approach yields a significant improvement in the learning process.
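
    As an illustration of the rule-guided exploration idea described above, the following is a minimal sketch in which hand-written 'if-then' heuristics are consulted before falling back to random exploration in an epsilon-greedy policy; the rule format, state fields, and thresholds are assumptions for illustration, and the paper itself represents such rules with cloud control systems rather than fixed thresholds.

      import random

      def rule_based_action(state):
          """Return an action suggested by qualitative 'if-then' rules, or None if no rule fires."""
          # Illustrative rule for a cart-pole-like task: push toward the side the pole leans.
          angle = state.get("pole_angle", 0.0)
          if angle > 0.05:
              return 1   # push right
          if angle < -0.05:
              return 0   # push left
          return None    # no rule fires; fall back to random exploration

      def select_action(q_values, state, epsilon=0.1):
          """Epsilon-greedy selection in which the explore branch is guided by the rules."""
          if random.random() < epsilon:
              suggested = rule_based_action(state)
              if suggested is not None:
                  return suggested
              return random.randrange(len(q_values))
          return max(range(len(q_values)), key=lambda a: q_values[a])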