The search functionality is under construction.

Keyword Search Result

[Keyword]: showing 5101-5120 of 42807 hits

  • Energy Efficient Mobile Positioning System Using Adaptive Particle Filter

    Yoojin KIM  Yongwoon SONG  Hyukjun LEE  

     
    LETTER-Measurement Technology

    Vol: E101-A No:6  Page(s): 997-999

    Accurate yet energy-efficient position estimation is increasingly important as the number of mobile computing systems grows rapidly. A particle filter is a key algorithm for estimating and tracking the position of an object that exhibits non-linear movement behavior; however, it requires substantial computation resources and energy. In this paper, we propose a scheme that dynamically adjusts the number of particles according to the accuracy of the reference signal for positioning, reducing energy consumption by 37% on a Cortex-A7.
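The central idea, scaling the particle budget with the quality of the positioning reference signal, can be sketched in a few lines. This is a minimal illustration under assumed parameters: the linear rule, the bounds `n_min`/`n_max`, and the normalized accuracy score are hypothetical, not the paper's actual policy.

```python
def adaptive_particle_count(signal_accuracy, n_min=100, n_max=1000):
    """Pick a particle budget for one filter step.

    signal_accuracy: normalized quality of the positioning reference
    signal in [0, 1] (1.0 = highly accurate, so few particles suffice).
    The linear mapping and the bounds are illustrative assumptions.
    """
    frac = max(0.0, min(1.0, signal_accuracy))  # clamp to [0, 1]
    return int(n_max - frac * (n_max - n_min))
```

A particle filter would call this once per update and resample down (or spawn up) to the returned count, spending computation, and hence energy, only when the reference signal is unreliable.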

  • Compact CAR: Low-Overhead Cache Replacement Policy for an ICN Router

    Atsushi OOKA  Suyong EUM  Shingo ATA  Masayuki MURATA  

     
    PAPER-Network System

    Publicized: 2017/12/18  Vol: E101-B No:6  Page(s): 1366-1378

    Information-centric networking (ICN) has gained attention from network research communities due to its ability to disseminate content efficiently. The in-network caching function in ICN plays an important role in realizing this design goal. However, most research on in-network caching has focused on where to cache rather than how to cache: the former is known as content deployment in the network, and the latter as cache replacement in an ICN router. Although cache replacement has been intensively researched in the context of web caching and content delivery networks, the conventional approaches cannot be directly applied to ICN because of the fine granularity of chunks in ICN, which changes the access patterns. In this paper, we argue that ICN requires a novel cache replacement algorithm to fulfill the requirements set for high-performance ICN routers. Our solution, Compact CLOCK with Adaptive Replacement (Compact CAR), is a novel cache replacement algorithm that satisfies these requirements. The evaluation results show that the cache memory consumption required to achieve a desired performance can be reduced by 90% compared to conventional approaches such as FIFO and CLOCK.
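For readers unfamiliar with the baseline being improved on, here is a minimal sketch of the classic CLOCK replacement policy that Compact CAR builds on. This is the textbook algorithm, not the paper's Compact CAR itself, which additionally adapts between CLOCK lists while keeping per-entry metadata very small.

```python
class ClockCache:
    """Textbook CLOCK replacement: one reference bit per entry and a
    rotating hand that gives referenced entries a second chance."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []   # list of (key, ref_bit) pairs
        self.hand = 0
        self.index = {}   # key -> slot position

    def access(self, key):
        """Return True on a cache hit, False on a miss (with insertion)."""
        if key in self.index:
            pos = self.index[key]
            self.slots[pos] = (key, 1)  # set reference bit on hit
            return True
        if len(self.slots) < self.capacity:
            self.index[key] = len(self.slots)
            self.slots.append((key, 1))
            return False
        while True:  # rotate the hand until an unreferenced victim is found
            k, ref = self.slots[self.hand]
            if ref == 0:
                del self.index[k]
                self.slots[self.hand] = (key, 1)
                self.index[key] = self.hand
                self.hand = (self.hand + 1) % self.capacity
                return False
            self.slots[self.hand] = (k, 0)  # clear bit: second chance
            self.hand = (self.hand + 1) % self.capacity
```

The single reference bit per entry is what makes CLOCK-family policies attractive for memory-constrained hardware such as an ICN router's cache controller.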

  • Requirement Modeling Language for the Dynamic Node Integration Problem of Telecommunication Network

    Yu NAKAYAMA  Kaoru SEZAKI  

     
    PAPER-Network

    Publicized: 2017/12/01  Vol: E101-B No:6  Page(s): 1379-1387

    Efficiently locating nodes and allocating demand has long been a significant problem for telecommunication network carriers. Most location models have focused on where to locate nodes and how to assign increasing demand in optical access networks. However, the population in industrialized countries will decline over the coming decades. Recent advances in optical amplifier technology have enabled node integration, in which an excess telecommunication node is closed and integrated into another node. Node integration in low-demand areas will improve the efficiency of access networks in the approaching age of depopulation. A dynamic node integration problem (DNIP) has been developed to produce optimal plans for node integration. A limitation of the DNIP, however, is that it cannot consider the requirements of network carriers: in practice, carriers often want to specify how each node is managed, regardless of the mathematical optimality of the solution. This paper proposes a requirement modeling language (RML) for the DNIP, with which the requirements of network carriers can be described. The described statements are used in solving the DNIP, so the calculated optimal solution always satisfies the requirements. The validity of the proposed method was evaluated with computer simulations in a case study.

  • Multi-Feature Sensor Similarity Search for the Internet of Things

    Suyan LIU  Yuanan LIU  Fan WU  Puning ZHANG  

     
    PAPER-Network

    Publicized: 2017/12/08  Vol: E101-B No:6  Page(s): 1388-1397

    The tens of billions of devices expected to be connected to the Internet will include so many sensors that the demand for sensor-based services is rising. The task of effectively utilizing the enormous numbers of sensors deployed is daunting. The need for automatic sensor identification has expanded the need for research on sensor similarity searches. The Internet of Things (IoT) features massive non-textual dynamic data, which is raising the critical challenge of efficiently and effectively searching for and selecting the sensors most related to a need. Unfortunately, single-attribute similarity searches are highly inaccurate when searching among similar attribute values. In this paper, we propose a group-fitting correlation calculation algorithm (GFC) that can identify the most similar clusters of sensors. The GFC method considers multiple attributes (e.g., humidity, temperature) to calculate sensor similarity; thus, it performs more accurate searches than do existing solutions.

  • An Approach for Virtual Network Function Deployment Based on Pooling in vEPC

    Quan YUAN  Hongbo TANG  Yu ZHAO  Xiaolei WANG  

     
    PAPER-Network

    Publicized: 2017/12/08  Vol: E101-B No:6  Page(s): 1398-1410

    Network function virtualization improves the flexibility of infrastructure resource allocation, but the use of commodity facilities raises new challenges for system reliability. To meet the carrier-class reliability demanded of the 5G mobile core, several studies have tackled backup schemes for virtual network function deployment. However, the existing backup schemes usually sacrifice the efficiency of resource allocation and prevent the sharing of infrastructure resources. To resolve the tension between the demands of high reliability and efficient resource allocation in mobile networks, this paper proposes an approach to the pooled deployment of virtualized network functions in a virtual EPC (vEPC) network. First, taking pooling of VNFs into account, we design a virtual network topology for the vEPC. Second, a node-splitting algorithm is proposed to make the best use of substrate network resources. Finally, we realize dynamic adjustment of pooling across different domains. Compared to the conventional virtual topology design and mapping method (JTDM), this approach achieves fine-grained management and overall scheduling of node resources, guarantees system reliability, and optimizes the global view of the network. Experiments on a network topology instance from SNDlib show that the approach can reduce the total resource cost of the virtual network and increase the request acceptance ratio while satisfying the system's stringent reliability requirements.

  • Compact Controlled Reception Pattern Antenna (CRPA) Array Based on Mu-Zero Resonance (MZR) Antenna

    Jae-Gon LEE  Taek-Sun KWON  Bo-Hee CHOI  Jeong-Hae LEE  

     
    PAPER-Antennas and Propagation

    Publicized: 2017/12/20  Vol: E101-B No:6  Page(s): 1427-1433

    In this paper, a compact controlled reception pattern antenna (CRPA) array based on a mu-zero resonance (MZR) antenna is proposed for the global positioning system (GPS). The MZR antenna can be miniaturized by designing its structure based on a mu-negative (MNG) transmission line. The MNG transmission line can be implemented with a gap structure for the series capacitance and a shorting via for a short-ended boundary condition. The CRPA array, which operates in the L1 (1.57542GHz) and L2 (1.2276GHz) bands, is designed as a cylinder with a diameter of 127mm (5 inches) and a height of 20mm, and is composed of seven radiating elements. To design a compact CRPA array with high performance, namely an impedance matching (VSWR) value of less than 2, isolation between array elements below -12dB, an axial ratio below 5dB, and a circular polarization (CP) gain above -1dBic in the L1 band and above -3dBic in the L2 band, we employ two orthogonal MZR antennas, a superstrate, and chip couplers. The performance of the CRPA array is verified and compared through analytic analysis, full-wave simulation, and measurements.

  • MIMO Radar Waveforms Using Orthogonal Complementary Codes with Doppler-Offset

    Takaaki KISHIGAMI  Hidekuni YOMO  Naoya YOSOKU  Akihiko MATSUOKA  Junji SATO  

     
    PAPER-Sensing

    Publicized: 2017/12/20  Vol: E101-B No:6  Page(s): 1503-1512

    This paper proposes multiple-input multiple-output (MIMO) radar waveforms consisting of Doppler-offset orthogonal complementary codes (DO-OCC) to raise the Doppler resilience of MIMO radar systems. The DO-OCC waveforms have low cross-correlation among multiplexed waves and a low autocorrelation peak sidelobe level (PSL) even under Doppler shift. They are verified by computer simulations and measurements. Computer simulations show that the peak sidelobe ratio (PSR) of the DO-OCC exceeds 60dB and the desired-to-undesired signal power ratio (DUR) is over 60dB when the Doppler shift is 0.048 rad per pulse repetition interval (PRI). Experimental measurements verify that the PSR of the DO-OCC is over 40dB and the DUR is over 50dB when the Doppler shift is 0.05 rad per PRI, and that the DO-OCC waveforms maintain direction-of-arrival (DOA) estimation accuracy for moving targets at almost the same level as for static targets. These results prove the effectiveness of the proposed MIMO waveforms in achieving Doppler tolerance while maintaining orthogonality and autocorrelation properties.
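The sidelobe-cancellation property that complementary codes provide can be checked in a few lines. The sketch below uses a standard length-4 Golay complementary pair, an illustrative example rather than the paper's actual DO-OCC code set: the sidelobes of the two autocorrelations cancel exactly when summed.

```python
def autocorr(seq):
    """Aperiodic autocorrelation of a real sequence at non-negative lags."""
    n = len(seq)
    return [sum(seq[i] * seq[i + k] for i in range(n - k)) for k in range(n)]

# A standard length-4 Golay complementary pair.
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]

# Individually each sequence has nonzero sidelobes, but the pair's
# autocorrelations sum to a single main peak of 2N = 8 at lag 0.
combined = [x + y for x, y in zip(autocorr(a), autocorr(b))]
```

The DO-OCC design adds Doppler offsets on top of this so that the cancellation, and the cross-correlation orthogonality between the multiplexed transmit waveforms, degrades gracefully under Doppler shift.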

  • Waveguide Butt-Joint Germanium Photodetector with Lateral PIN Structure for 1600nm Wavelengths Receiving

    Hideki ONO  Takasi SIMOYAMA  Shigekazu OKUMURA  Masahiko IMAI  Hiroki YAEGASHI  Hironori SASAKI  

     
    PAPER-Optoelectronics

    Vol: E101-C No:6  Page(s): 409-415

    We report good responsivity at a wavelength of 1600nm in a Ge photodetector that has a lateral p-i-n structure and a butt-joint coupling structure, fabricated with conventional complementary metal oxide semiconductor processes. We experimentally verified responsivities of 0.82A/W and 0.71A/W for the best and worst polarizations, respectively. The butt-joint lateral p-i-n structure is found to be polarization-independent compared with vertical ones. Although the cut-off frequency was 2.3-2.4GHz at a reverse bias of 3V, a clearly open eye diagram at 10Gbps was obtained with a reverse bias over 12V. These results are promising for photodetectors receiving the long-wavelength downstream signals required by next-generation optical access network systems.

  • Direct Update of XML Documents with Data Values Compressed by Tree Grammars

    Kenji HASHIMOTO  Ryunosuke TAKAYAMA  Hiroyuki SEKI  

     
    PAPER-Formal Approaches

    Publicized: 2018/03/16  Vol: E101-D No:6  Page(s): 1467-1478

    One of the most promising compression methods for XML documents is the one that translates a given document to a tree grammar that generates it. A feature of this compression is that the internal structures are kept in production rules of the grammar. This enables us to directly manipulate the tree structure without decompression. However, previous studies assume that a given XML document does not have data values because they focus on direct retrieval and manipulation of the tree structure. This paper proposes a direct update method for XML documents with data values and shows the effectiveness of the proposed method based on experiments conducted on our implemented tool.

  • Counting Algorithms for Recognizable and Algebraic Series

    Bao Trung CHU  Kenji HASHIMOTO  Hiroyuki SEKI  

     
    PAPER-Formal Approaches

    Publicized: 2018/03/16  Vol: E101-D No:6  Page(s): 1479-1490

    Formal series are a natural extension of formal languages obtained by associating each word with a value called a coefficient or a weight. Among them, recognizable series and algebraic series can be regarded as extensions of regular languages and context-free languages, respectively. The coefficient of a word w can represent quantities such as the cost of an operation on w or the probability that w is emitted. One possible application of formal series is string counting in the quantitative analysis of software. In this paper, we define counting problems for formal series and propose algorithms for them. The membership problem for an automaton or a grammar corresponds to the problem of computing the coefficient of a given word in a given series. Accordingly, we define the counting problem for formal series in the following two ways. For a formal series S and a natural number d, we define CC(S,d) to be the sum of the coefficients of all the words of length d in S, and SC(S,d) to be the number of words of length d that have non-zero coefficients in S. We show that for a given recognizable series S and a natural number d, CC(S,d) can be computed in O(η log d) time, where η is an upper bound on the time needed for a single state-transition matrix operation, and that if the state-transition matrices of S commute under multiplication, SC(S,d) can be computed in time polynomial in d. We extend these notions to tree series and discuss how to compute them efficiently. We also propose an algorithm that computes CC(S,d) in time quadratic in d for an algebraic series S. We report the CPU time of the proposed algorithm for computing CC(S,d) for several context-free grammars as S, one of which represents the syntax of the C language. To examine the applicability of the proposed algorithms to string counting for vulnerability analysis, we also present results on string counting for the Kaluza benchmark.
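The O(η log d) claim for CC(S,d) on a recognizable series follows from writing the sum over all length-d words as a single matrix power: if S(w) = init · M(w1)···M(wd) · final, then CC(S,d) = init · (Σa Ma)^d · final, and the power is computed by repeated squaring. A small sketch under that standard formulation (the concrete series in the usage note is a toy example, not taken from the paper):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, d):
    """Raise M to the d-th power with O(log d) multiplications."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while d:
        if d & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        d >>= 1
    return R

def cc(init, trans_by_letter, final, d):
    """CC(S,d) for a recognizable series S = (init, {M_a}, final):
    sum the per-letter transition matrices, raise the sum to the d-th
    power, and sandwich it between the initial and final vectors."""
    n = len(init)
    T = [[sum(M[i][j] for M in trans_by_letter) for j in range(n)]
         for i in range(n)]
    Td = mat_pow(T, d)
    return sum(init[i] * Td[i][j] * final[j]
               for i in range(n) for j in range(n))
```

For example, with init=[1], one 1x1 identity matrix per letter of a two-letter alphabet, and final=[1], every word has coefficient 1, so cc returns the number of length-d words, 2^d.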

  • Static Dependency Pair Method in Functional Programs

    Keiichirou KUSAKARI  

     
    PAPER-Formal Approaches

    Publicized: 2018/03/16  Vol: E101-D No:6  Page(s): 1491-1502

    We have previously introduced the static dependency pair method, which proves termination by analyzing the static recursive structure of various extensions of term rewriting systems for handling higher-order functions. The key is to formalize recursive structures based on the notion of strong computability, which was introduced for proving the termination of typed λ-calculi. To bring the static dependency pair method closer to existing functional programs, we also extend the method to term rewriting models in which functional abstractions with patterns are permitted. Since the static dependency pair method is not sound in general, we formulate a class, namely accessibility, in which the method works well. The static dependency pair method is a very natural form of reasoning; therefore, our extension differs only slightly from previous results. On the other hand, the soundness proof is dramatically more difficult.

  • Online Linear Optimization with the Log-Determinant Regularizer

    Ken-ichiro MORIDOMI  Kohei HATANO  Eiji TAKIMOTO  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2018/03/01  Vol: E101-D No:6  Page(s): 1511-1520

    We consider online linear optimization over symmetric positive semi-definite matrices, which has various applications including online collaborative filtering. The problem is formulated as a repeated game between the algorithm and the adversary, where in each round t the algorithm and the adversary choose matrices Xt and Lt, respectively, and the algorithm then suffers a loss given by the Frobenius inner product of Xt and Lt. The goal of the algorithm is to minimize the cumulative loss. We can employ a standard framework called Follow the Regularized Leader (FTRL) for designing algorithms, in which we need to choose an appropriate regularization function to obtain a good performance guarantee. We show that the log-determinant regularization works better than other popular regularization functions when the loss matrices Lt are all sparse. Using this property, we show that our algorithm achieves an optimal performance guarantee for online collaborative filtering. The technical contribution of the paper is a new technique for deriving performance bounds that exploits the strong convexity of the log-determinant with respect to the loss matrices, whereas in previous analyses strong convexity was defined with respect to a norm. Intuitively, skipping the norm analysis yields the improved bound. Moreover, we apply our method to online linear optimization over vectors and show that FTRL with the Burg entropy regularizer, the analogue of the log-determinant regularizer in the vector case, works well.

  • Evaluation of Register Number Abstraction for Enhanced Instruction Register Files

    Naoki FUJIEDA  Kiyohiro SATO  Ryodai IWAMOTO  Shuichi ICHIKAWA  

     
    PAPER-Computer System

    Publicized: 2018/03/14  Vol: E101-D No:6  Page(s): 1521-1531

    Instruction set randomization (ISR) is a cost-effective obfuscation technique that modifies or enhances the relationship between instructions and machine language. An instruction register file (IRF), a list of frequently used instructions, can be used for ISR by providing a way to access those instructions indirectly. This study examines an IRF that integrates a positional register, which was proposed as a supplementary unit of the IRF, for the sake of tamper resistance. According to our evaluation, with a new design for the contents of the positional register, the measure of tamper resistance increased by up to 8.2%, which corresponds to a 32.2% increase in the size of the IRF. The number of logic elements added by the positional register was 3.5% of the baseline processor.

  • Optimization of Body Biasing for Variable Pipelined Coarse-Grained Reconfigurable Architectures

    Takuya KOJIMA  Naoki ANDO  Hayate OKUHARA  Ng. Anh Vu DOAN  Hideharu AMANO  

     
    PAPER-Computer System

    Publicized: 2018/03/09  Vol: E101-D No:6  Page(s): 1532-1540

    Variable Pipeline Cool Mega Array (VPCMA) is a low-power coarse-grained reconfigurable architecture (CGRA) based on the concept of CMA (Cool Mega Array). It provides a pipeline structure in the PE array that can be configured to fit target algorithms and required performance. VPCMA also uses Silicon On Thin Buried oxide (SOTB) technology, a type of Fully Depleted Silicon On Insulator (FDSOI), so its body bias voltage can be controlled to balance performance and leakage power. In this paper, we study the optimization of the VPCMA body bias while simultaneously considering its variable pipeline structure. Evaluations show that, for the studied applications, average energy reductions of 17.75% and 10.49% can be achieved compared to the zero-bias case (no body bias control) and the uniform case (control of the whole PE array), respectively, while respecting performance constraints. Moreover, with appropriate body bias control, the achievable performance range can be extended, enabling broader trade-off analyses between consumption and performance. By considering dynamic as well as static power, a more appropriate pipeline structure and body bias voltage can be obtained. In addition, when control of VDD is integrated, higher performance can be achieved with a steady increase in power. These promising results show that an adequate optimization technique for body bias control that simultaneously considers pipeline structures not only enables further power reduction than previous methods, but also allows more possibilities for trade-off analysis.

  • The Pre-Testing for Virtual Robot Development Environment

    Hyun Seung SON  R. Young Chul KIM  

     
    PAPER-Software Engineering

    Publicized: 2018/03/01  Vol: E101-D No:6  Page(s): 1541-1551

    Traditional tests are planned and designed in the early stages of development, but test cases can be executed only after the source code has been implemented. Because of this time gap between the design stage and the testing stage, by the time a software design error is found it is too late. To solve this problem, this paper proposes a virtual pre-testing process. The virtual pre-testing process can find software and testing errors before the development stage by automatically generating and executing test cases with modeling and simulation (M&S) in a virtual environment. The first part of this method creates test cases from a state transition tree derived from a state diagram, covering states, transitions, instruction pairs, and all paths. The second part models and simulates a virtual target, and then pre-tests the target with the test cases. In other words, the generated test cases are automatically transformed into an event list, which simultaneously executes the test cases against the simulated target within a virtual environment. As a result, design and test errors can be found in the early stages of the development cycle, reducing development time and cost as much as possible.
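The test-generation step, unfolding a state diagram into a transition tree whose leaf paths become test cases, can be approximated with a breadth-first traversal that expands each transition once. This is a simplified sketch; the traversal and the toy state machines in the tests are illustrative assumptions, not the paper's exact construction.

```python
from collections import deque

def transition_tree_tests(transitions, initial):
    """Derive input sequences covering every transition once by
    unfolding the state diagram into a transition tree.

    transitions: {(state, event): next_state}
    Returns a list of event sequences; each corresponds to a root-to-leaf
    path of the tree, i.e. one generated test case.
    """
    tests, seen = [], set()
    queue = deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        extended = False
        for (s, ev), nxt in sorted(transitions.items()):
            if s == state and (s, ev) not in seen:
                seen.add((s, ev))          # expand each transition once
                queue.append((nxt, path + [ev]))
                extended = True
        if not extended and path:
            tests.append(path)             # leaf of the tree: a test case
    return tests
```

For a two-state toggle machine, the single generated test presses the button twice, covering both transitions; for a branching machine, each branch yields its own test case.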

  • Pain Intensity Estimation Using Deep Spatiotemporal and Handcrafted Features

    Jinwei WANG  Huazhi SUN  

     
    PAPER-Pattern Recognition

    Publicized: 2018/03/12  Vol: E101-D No:6  Page(s): 1572-1580

    Automatically recognizing pain and estimating pain intensity is an emerging research area with promising applications in the medical and healthcare field; it plays a crucial role in the diagnosis and treatment of patients who have limited ability to communicate verbally, and it remains a challenge in pattern recognition. Recently, deep learning has achieved impressive results in many domains. However, deep architectures require a significant amount of labeled data for training, and they may fail to outperform conventional handcrafted features when data are insufficient, a problem also faced by pain detection. Furthermore, the latest studies show that handcrafted features may provide information complementary to deep-learned features; hence, combining them may improve performance. Motivated by these considerations, in this paper we propose a method based on the combination of deep spatiotemporal and handcrafted features for pain intensity estimation. We use C3D, a deep 3-dimensional convolutional network that takes a continuous sequence of video frames as input, to extract spatiotemporal facial features; C3D models the appearance and motion of videos simultaneously. For handcrafted features, we extract geometric information by computing the distance between the normalized facial landmarks of each frame and those of the mean face shape, and we extract appearance information using histogram of oriented gradients (HOG) features around the normalized facial landmarks of each frame. Two levels of SVRs are trained using the spatiotemporal, geometric, and appearance features to obtain estimation results. We tested the proposed method on the UNBC-McMaster shoulder pain expression archive database and obtained experimental results that outperform the current state of the art.

  • Domain Adaptation Based on Mixture of Latent Words Language Models for Automatic Speech Recognition Open Access

    Ryo MASUMURA  Taichi ASAMI  Takanobu OBA  Hirokazu MASATAKI  Sumitaka SAKAUCHI  Akinori ITO  

     
    PAPER-Speech and Hearing

    Publicized: 2018/02/26  Vol: E101-D No:6  Page(s): 1581-1590

    This paper proposes a novel domain adaptation method that can utilize out-of-domain text resources and partially domain-matched text resources in language modeling. A major problem in domain adaptation is that it is hard to obtain adequate adaptation effects from out-of-domain text resources. To tackle this problem, our idea is to carry out model merging in a latent variable space created from latent words language models (LWLMs). The latent variables in LWLMs are represented as specific words selected from the observed word space, so LWLMs can share a common latent variable space. This enables flexible mixture modeling that takes the latent variable space into consideration. This paper presents two types of mixture modeling, i.e., LWLM mixture models and LWLM cross-mixture models. The LWLM mixture models perform mixture modeling in the latent word space to mitigate the domain mismatch problem. Furthermore, in the LWLM cross-mixture models, LMs individually constructed from partially matched text resources are split into two element models, each of which can be subjected to mixture modeling. For both approaches, this paper also describes methods to optimize the mixture weights using a validation data set. Experiments show that mixing in the latent word space achieves performance improvements for both the target domain and out-of-domain data compared with mixing in the observed word space.

  • Submodular Based Unsupervised Data Selection

    Aiying ZHANG  Chongjia NI  

     
    PAPER-Speech and Hearing

    Publicized: 2018/03/14  Vol: E101-D No:6  Page(s): 1591-1604

    Automatic speech recognition (ASR) and keyword search (KWS) have increasingly found their way into our everyday lives, and their successes can be attributed to many factors, among which the large scale of speech data used for acoustic modeling is key. However, it is difficult and time-consuming to acquire large amounts of transcribed speech data for some languages, especially low-resource languages. Thus, under low-resource conditions, it becomes important to choose which data to transcribe for acoustic modeling in order to improve the performance of ASR and KWS. In terms of acoustic data for acoustic modeling, there are two options: using target language data, and using large amounts of data from other source languages for cross-lingual transfer. In this paper, we propose approaches for efficiently selecting acoustic data for acoustic modeling. For target language data, a submodular-based unsupervised data selection approach is proposed, which selects more informative and representative utterances for manual transcription. For other source languages' data, a submodular multilingual data selection approach based on utterances highly misclassified as the target language, and a knowledge-based group multilingual data selection approach, are proposed. Using the selected multilingual data to train a multilingual deep neural network for cross-lingual transfer improves the performance of ASR and KWS in the target language. Our proposed multilingual data selection approaches also outperform a language-identification-based multilingual data selection approach. In this paper, we further analyze and compare the influence of the language factor and the acoustic factor on the performance of ASR and KWS. The influence of different scales of target language data on the performance of ASR and KWS under mono-lingual and cross-lingual conditions is also compared and analyzed, and some significant conclusions are drawn.
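The unsupervised selection step can be illustrated with the standard greedy algorithm for monotone submodular maximization: repeatedly pick the utterance with the largest marginal gain. The coverage objective and the phone-set representation below are illustrative stand-ins, not the paper's actual submodular function.

```python
def greedy_submodular_select(utterances, budget):
    """Greedy maximization of a set-cover objective (monotone submodular).

    utterances: {utterance_id: set of acoustic units it contains}
    (unit sets here are hypothetical phone labels). Each round picks the
    utterance that adds the most not-yet-covered units, a stand-in for
    selecting the most informative utterances to transcribe.
    """
    covered, chosen = set(), []
    for _ in range(budget):
        best, best_gain = None, 0
        for uid, units in utterances.items():
            if uid in chosen:
                continue
            gain = len(units - covered)   # marginal gain of adding uid
            if gain > best_gain:
                best, best_gain = uid, gain
        if best is None:                  # nothing adds new coverage
            break
        chosen.append(best)
        covered |= utterances[best]
    return chosen
```

Because the objective is monotone submodular, this greedy rule carries the classic (1 - 1/e) approximation guarantee, which is why greedy selection is the usual choice for this family of data selection problems.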

  • Deblocking Artifact of Satellite Image Based on Adaptive Soft-Threshold Anisotropic Filter Using Wavelet

    RISNANDAR  Masayoshi ARITSUGI  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2018/02/26  Vol: E101-D No:6  Page(s): 1605-1620

    New deblocking (blocking artifact reduction) algorithms based on a nonlinear adaptive soft-threshold anisotropic filter in the wavelet domain are proposed. Our deblocking algorithm uses soft-thresholding, adaptive wavelet directions, an adaptive anisotropic filter, and estimation. The novelties of this paper are an adaptive soft threshold for deblocking and an optimal intersection of confidence intervals (OICI) method for deblocking artifact estimation. The soft-threshold values adapt to the different thresholds of flat areas, texture areas, and blocking artifacts. The OICI is a reconstruction technique for estimated deblocking artifacts that improves the acceptable quality level of the estimate and reduces the execution time of deblocking artifact estimation compared to other methods. Our adaptive OICI method outperforms other adaptive deblocking methods. Our estimated deblocking algorithms achieve up to 98% improvement in MSE, up to 89% in RMSE, and up to 99% in MAE. We also achieved up to a 77.98% reduction in the computational time of deblocking artifact estimation compared to other methods. We estimated shift-and-add algorithms using Euler++ (E++) and fourth-order Runge-Kutta++ (RK4++) algorithms, which iterate an ordinary differential equation integration method one step at a time. Experimental results showed that our E++ and RK4++ algorithms reduce computational time for shift-and-add, and that the RK4++ algorithm is superior to the E++ algorithm.
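The basic building block here, soft-thresholding of wavelet coefficients, is simple to state. The paper's contribution is making the threshold adaptive per region (flat, texture, blocking artifact), which this sketch does not reproduce; it shows only the fixed-threshold shrinkage operator the adaptive scheme tunes.

```python
def soft_threshold(coeffs, t):
    """Soft-threshold a list of wavelet coefficients: shrink every
    magnitude by t and zero out anything whose magnitude is below t.
    A single fixed t is used here; the adaptive scheme would choose a
    different t for flat, texture, and blocking-artifact regions."""
    out = []
    for c in coeffs:
        mag = abs(c) - t
        out.append(0.0 if mag <= 0 else (mag if c > 0 else -mag))
    return out
```

Applied to the detail subbands of a blocky image, this suppresses the small, artifact-dominated coefficients while preserving (slightly shrunken) strong edges.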

  • Horizontal Partition for Scalable Control in Software-Defined Data Center Networks

    Shaojun ZHANG  Julong LAN  Chao QI  Penghao SUN  

     
    LETTER-Information Network

    Publicized: 2018/03/07  Vol: E101-D No:6  Page(s): 1691-1693

    Distributed control plane architectures have been employed in software-defined data center networks to improve the scalability of the control plane. However, since the flow space is partitioned by assigning switches to different controllers, the network topology is also partitioned, and the rule setup process has to invoke multiple controllers. Moreover, control load balancing based on switch migration is heavyweight. In this paper, we propose a lightweight load partition method that decouples the flow space from the network topology. The flow space is partitioned with hosts rather than switches as carriers, which supports fine-grained and lightweight load balancing. Moreover, switches no longer need to be assigned to different controllers; instead, all of them are kept under the control of every controller, so each flow request can be processed by exactly one controller in a centralized style. Evaluations show that our scheme reduces rule setup costs and achieves lightweight load balancing.
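The host-based partition can be sketched as a consistent assignment of hosts to controllers, so that every flow request (keyed by its source host) lands on exactly one controller while all controllers retain the full switch topology. The hashing rule below is an illustrative assumption, not the scheme's published mapping.

```python
import hashlib

def controller_for_host(host_ip, num_controllers):
    """Map a host to one controller deterministically by hashing its
    address: the flow space is partitioned by hosts, not switches, so no
    switch ever needs to be migrated between controllers."""
    digest = hashlib.sha256(host_ip.encode()).hexdigest()
    return int(digest, 16) % num_controllers

def controller_for_flow(src_ip, num_controllers):
    """The source host of a flow request determines its owning
    controller, so the whole rule setup is handled by one controller."""
    return controller_for_host(src_ip, num_controllers)
```

Because the mapping depends only on the host address, rebalancing load means reassigning hosts (cheap state) rather than migrating switches (heavy state), which is the lightweight property the letter claims.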
