
Author Search Result

[Author] Chaopeng LI (2 hits)

  • Max-Min-Degree Neural Network for Centralized-Decentralized Collaborative Computing

    Yiqiang SHENG  Jinlin WANG  Chaopeng LI  Weining QI  

     
    PAPER
    Vol: E99-B No:4  Page(s): 841-848

    In this paper, we propose an undirected model of learning systems, named the max-min-degree neural network, to realize centralized-decentralized collaborative computing. The basic idea of the proposal is a max-min-degree constraint, which generalizes the k-degree constraint (k being a user-defined degree of neurons) in order to reduce the communication cost. The max-min-degree constraint requires the degree of each neuron to lie between k_min and k_max. Accordingly, the Boltzmann machine is a special case of the proposal with k_min = k_max = n, where n is the fully connected degree of neurons. Evaluations show that the proposal achieves a much lower communication cost than a state-of-the-art deep learning model. The price of this improvement is slower convergence with respect to data size, which matters little for big-data processing.
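
    As an illustration of the max-min-degree constraint (not the paper's own construction, which the abstract does not specify), the following Python sketch samples a random undirected topology over n neurons in which every degree lies in [k_min, k_max]; the function name and the rejection-sampling strategy are assumptions made for this example.

    ```python
    import random

    def max_min_degree_graph(n, k_min, k_max, seed=0, max_tries=100000):
        """Sample an undirected connectivity over n neurons whose degrees
        all lie in [k_min, k_max]. Hypothetical construction for
        illustration; the paper does not describe how its topology
        is generated."""
        rng = random.Random(seed)
        edges = set()
        degree = [0] * n
        tries = 0
        # Add random edges until every neuron meets the k_min floor,
        # never letting any endpoint exceed the k_max ceiling.
        while min(degree) < k_min:
            tries += 1
            if tries > max_tries:
                raise ValueError("could not satisfy the degree constraints")
            u, v = rng.sample(range(n), 2)
            e = (min(u, v), max(u, v))
            if e in edges or degree[u] >= k_max or degree[v] >= k_max:
                continue
            edges.add(e)
            degree[u] += 1
            degree[v] += 1
        return edges, degree

    # Example: 12 neurons, each with between 2 and 4 neighbors.
    edges, degree = max_min_degree_graph(12, k_min=2, k_max=4)
    assert all(2 <= d <= 4 for d in degree)
    ```

    Fewer edges per neuron means fewer weights to exchange between computing nodes, which is the mechanism behind the communication-cost reduction the abstract claims.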

  • k-Degree Layer-Wise Network for Geo-Distributed Computing between Cloud and IoT

    Yiqiang SHENG  Jinlin WANG  Haojiang DENG  Chaopeng LI  

     
    PAPER
    Vol: E99-B No:2  Page(s): 307-314

    In this paper, we propose a novel architecture for a deep learning system, named the k-degree layer-wise network, to realize efficient geo-distributed computing between the Cloud and the Internet of Things (IoT). Geo-distributed computing extends the Cloud to the geographical edge of the network, in the neighborhood of IoT devices. The proposal rests on two ideas: a k-degree constraint and a layer-wise constraint. The k-degree constraint requires the degree of each vertex on the h-th layer to be exactly k(h), extending existing deep belief networks and controlling the communication cost. The layer-wise constraint requires the layer-wise degrees to decrease monotonically from the input layer toward the output layer, gradually reducing the dimensionality of the data. We prove that the k-degree layer-wise network is sparse, whereas a typical deep neural network is dense. In an evaluation on the M-distributed MNIST database, the proposal outperforms a state-of-the-art model in terms of communication cost, learning time, and scalability.
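
    To make the two constraints concrete, here is a minimal Python sketch (not the paper's construction) that wires consecutive layers under a simplified reading in which k(h) is the out-degree of every vertex on layer h; the function name, layer sizes, and degree schedule are illustrative assumptions.

    ```python
    import random

    def layer_wise_masks(layer_sizes, k, seed=0):
        """Build sparse bipartite masks between consecutive layers so that
        every vertex on layer h has exactly k[h] outgoing connections.
        Hypothetical construction; the paper's exact wiring is not given."""
        # Layer-wise constraint: degrees decrease monotonically layer by layer.
        assert all(k[h] >= k[h + 1] for h in range(len(k) - 1))
        rng = random.Random(seed)
        masks = []
        for h in range(len(layer_sizes) - 1):
            n_in, n_out = layer_sizes[h], layer_sizes[h + 1]
            mask = [[0] * n_out for _ in range(n_in)]
            for i in range(n_in):
                # k-degree constraint: vertex i connects to exactly k[h]
                # vertices on the next layer (capped by the layer width).
                for j in rng.sample(range(n_out), min(k[h], n_out)):
                    mask[i][j] = 1
            masks.append(mask)
        return masks

    # Example: a 784-256-64-10 network with layer-wise degrees 32 > 16 > 8.
    masks = layer_wise_masks([784, 256, 64, 10], k=[32, 16, 8])
    sparse_edges = sum(sum(map(sum, m)) for m in masks)
    dense_edges = 784 * 256 + 256 * 64 + 64 * 10
    print(sparse_edges, "sparse edges vs", dense_edges, "dense edges")
    ```

    With these illustrative numbers the sparse wiring keeps roughly 30,000 of the 218,000 dense connections, which conveys the sparsity property the abstract proves, though the paper's actual figures may differ.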