
Keyword Search Result

[Keyword] software (508 hits)

Results 21-40 of 508

  • Usage Log-Based Testing of Embedded Software and Identification of Dependencies among Environmental Components

    Sooyong JEONG  Sungdeok CHA  Woo Jin LEE  

     
    LETTER-Software Engineering

    Publicized: 2021/07/28  Vol: E104-D No:11  Page(s): 2011-2014

    Embedded software often interacts with multiple inputs from various sensors whose dependencies are complex or only partially known to developers. With incomplete information on these dependencies, testing is likely to be insufficient for detecting errors. We propose a method to enhance the testing coverage of embedded software by identifying subtle and often neglected dependencies using information contained in usage logs. Usage logs, traditionally used primarily for investigative purposes following accidents, can also make a useful contribution during the testing of embedded software. Our approach first develops a behavioral model for each environmental input individually, performs compositional analysis while identifying feasible but untested dependencies from the usage log, and generates additional test cases that correspond to untested or insufficiently tested dependencies. Experimental evaluation was performed on an Android application named Gravity Screen as well as an Arduino-based wearable glove app. Whereas a conventional CTM-based testing technique achieved average branch coverage of 26% and 68% on these applications, respectively, the proposed technique achieved 100% coverage on both.
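
    A minimal sketch of the log-mining step is given below; the log format, the event names, the time-window notion of a candidate dependency, and the tested-pair set are all illustrative assumptions, not the models used in the paper.

```python
from itertools import combinations

# Hypothetical usage log: each entry is (timestamp in seconds, sensor event).
usage_log = [
    (0.0, "accelerometer_on"),
    (0.4, "proximity_near"),
    (0.9, "screen_off"),
    (5.2, "proximity_far"),
    (5.3, "screen_on"),
]

# Dependencies already covered by the existing test suite (assumed).
tested_pairs = {("accelerometer_on", "screen_off")}

def observed_pairs(log, window=1.0):
    """Treat two events occurring within `window` seconds as a candidate dependency."""
    pairs = set()
    for (t1, e1), (t2, e2) in combinations(log, 2):
        if abs(t2 - t1) <= window and e1 != e2:
            pairs.add(tuple(sorted((e1, e2))))
    return pairs

# Feasible-but-untested dependencies become candidates for new test cases.
untested = observed_pairs(usage_log) - {tuple(sorted(p)) for p in tested_pairs}
for pair in sorted(untested):
    print("generate test case covering:", pair)
```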

  • Lightweight Operation History Graph for Traceability on Program Elements

    Takayuki OMORI  Katsuhisa MARUYAMA  Atsushi OHNISHI  

     
    PAPER-Software System

    Publicized: 2020/12/15  Vol: E104-D No:3  Page(s): 404-418

    History data of edit operations are more informative than the data stored in version control systems because they provide detailed information on how source code was changed. At the same time, the large number of recorded edit operations discourages developers and researchers from getting even a rough understanding of the changes. To assist with this task, it is desirable that they can easily obtain traceability links for changed program elements across the two source code snapshots before and after a code change. In this paper, we propose a graph representation called Operation History Graph (OHG), which presents code change information with such traceability links inferred from the history of edit operations. An OHG instance is generated by parsing any source code snapshot restored from the edit history and combining the resultant abstract syntax trees (ASTs) into a single graph structure. To improve the performance of building graph instances, we avoid simply maintaining every program element: program elements representing the inner structure of methods, as well as unchanged elements, are omitted. In addition, we adopt a lightweight static analysis for type name resolution to reduce the memory required by the analysis while preserving its accuracy. Moreover, we assign a specific ID to each node and edge in the graph instance so that parts of the graph data can be stored separately and loaded on demand. These decisions make it feasible to build, manipulate, and store the graph with limited computing resources. To demonstrate the usefulness of the proposed operation history graph and to verify whether the detected traceability links are sufficient to reveal actual changes of program elements, we implemented tools to generate and manipulate OHG instances. The evaluation of graph generation performance shows that our tool requires fewer computing resources than a tool the authors previously proposed. Moreover, the evaluation of traceability shows that OHG provides traceability links with sufficient accuracy compared to the baseline approach using GumTree.
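
    The traceability idea, linking the same program element across two snapshots, can be pictured with a small name-based matcher; the real OHG is built from edit operations and ASTs, so the matching rule and element records below are simplifications for illustration.

```python
# Program elements of two snapshots, keyed by a qualified name (assumed stable basis for matching).
before = {"Foo#bar(int)": {"id": "n1", "lines": (10, 24)},
          "Foo#baz()":    {"id": "n2", "lines": (26, 30)}}
after  = {"Foo#bar(int)": {"id": "n7", "lines": (12, 28)},
          "Foo#qux()":    {"id": "n8", "lines": (30, 35)}}

def traceability_links(before, after):
    """Pair element IDs that refer to the same qualified name in both snapshots."""
    links = []
    for name in before.keys() & after.keys():
        links.append((before[name]["id"], after[name]["id"], name))
    return links

print(traceability_links(before, after))   # -> [('n1', 'n7', 'Foo#bar(int)')]
```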

  • End-to-End SDN/NFV Orchestration of Multi-Domain Transport Networks and Distributed Computing Infrastructure for Beyond-5G Services Open Access

    Carlos MANSO  Pol ALEMANY  Ricard VILALTA  Raul MUÑOZ  Ramon CASELLAS  Ricardo MARTÍNEZ  

     
    INVITED PAPER-Network

    Publicized: 2020/09/11  Vol: E104-B No:3  Page(s): 188-198

    The need of telecommunications operators to reduce capital and operational expenditures in networks whose traffic is continuously growing has led them to search for new ways to simplify and automate their procedures. Because of the different transport network segments and multiple layers, the deployment of end-to-end services is a complex task. Moreover, because multiple vendors coexist, the control plane has not been fully homogenized, making end-to-end connectivity provisioning a manual and slow process and making the allocation of computing resources across the entire network difficult. The massive capacity requested by data centers and the new 5G connectivity services call for a better solution to orchestrate the transport network and the distributed computing resources. This article presents and demonstrates a network slicing solution together with end-to-end service orchestration for transport networks. The network slicing solution permits the coexistence of virtual networks (one per service) over the same physical network to meet the specific service requirements. The network orchestrator enables automated end-to-end services across multi-layer, multi-domain network segments using the standard Transport API (TAPI) data model for both the L0 and L2 layers. Together, both solutions make it possible to keep up with beyond-5G services and the growing demand for network and computing resources.

  • An Exploratory Study of Copyright Inconsistency in the Linux Kernel

    Shi QIU  Daniel M. GERMAN  Katsuro INOUE  

     
    PAPER-Software Engineering

    Publicized: 2020/11/17  Vol: E104-D No:2  Page(s): 254-263

    Software copyright grants the copyright owner the exclusive right to determine whether and under what conditions others can modify, reuse, or redistribute the software. For Free and Open Source Software (FOSS), it is very important to identify the copyright owner, who can control those activities through license compliance. A copyright notice is a few sentences, usually placed as a comment in the header of a source file or in a license document of a FOSS project, and it is an important clue for establishing the ownership of the project. Repositories of FOSS projects contain rich and varied information on development, including the source code contributors, who are also an important clue for establishing ownership. In this paper, as a first step toward understanding copyright ownership, we explore the state of software copyright in the Linux kernel, a typical example of FOSS, by analyzing and comparing two kinds of datasets: copyright notices in source files and source code contributors in the software repositories. The discrepancy between the two kinds of analysis results is defined as a copyright inconsistency. The analysis results indicate that copyright inconsistencies are prevalent in the Linux kernel. We have also found that code reuse, affiliation changes, refactoring, support functions, and others' contributions potentially contribute to the occurrence of copyright inconsistencies in the Linux kernel. This study exposes the difficulty of managing software copyright in FOSS and highlights the usefulness of future work addressing software copyright problems.
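
    A minimal way to surface such inconsistencies is to compare copyright holders extracted from file headers with authors reported by git; the regular expression and the target path below are illustrative assumptions, not the paper's extraction pipeline.

```python
import re
import subprocess

COPYRIGHT_RE = re.compile(r"Copyright\s+(?:\(C\)\s+)?[\d,\s-]*\s*(.+)", re.IGNORECASE)

def notice_holders(path):
    """Collect copyright holders named in the header comments of a source file."""
    holders = set()
    with open(path, errors="ignore") as f:
        for line in f.readlines()[:50]:          # headers usually sit near the top
            m = COPYRIGHT_RE.search(line)
            if m:
                holders.add(m.group(1).strip().rstrip("*/").strip())
    return holders

def commit_authors(path):
    """Collect authors who touched the file according to the repository history."""
    out = subprocess.run(["git", "log", "--format=%an", "--", path],
                         capture_output=True, text=True).stdout
    return {name for name in out.splitlines() if name}

if __name__ == "__main__":
    target = "kernel/sched/core.c"              # hypothetical file inside a Linux checkout
    notices, authors = notice_holders(target), commit_authors(target)
    print("authors with no copyright notice:   ", authors - notices)
    print("notice holders never seen as authors:", notices - authors)
```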

  • Quantitative Evaluation of Software Component Behavior Discovery Approach

    Cong LIU  

     
    LETTER

    Publicized: 2020/05/21  Vol: E104-D No:1  Page(s): 117-120

    During the execution of software systems, their execution data can be recorded. By fully exploiting these data, software practitioners can discover behavioral models describing the actual execution of the underlying software system. However, the recorded unstructured execution data may be highly complex, for example spanning several days. Applying existing discovery techniques to such data results in spaghetti-like models with no clear structure and little value for comprehension. Starting from the observation that a software system is composed of a set of logical components, Liu et al. [1] propose to decompose the software behavior discovery problem into smaller independent ones by discovering a behavioral model per component. However, the effectiveness of that approach has not been fully evaluated and compared with existing approaches. In this paper, we evaluate the quality (in terms of understandability/complexity) of the discovered component behavior models in a quantitative manner. The evaluation shows that this approach reduces the complexity of the discovered models and yields a better understanding.

  • Practical Video Authentication Scheme to Analyze Software Characteristics

    Wan Yeon LEE  

     
    LETTER-Data Engineering, Web Information Systems

    Publicized: 2020/09/30  Vol: E104-D No:1  Page(s): 212-215

    We propose a video authentication scheme to verify whether a given video file was recorded by a camera device or modified with a video editing tool. The proposed scheme prepares software characteristics of camera devices and video editing tools in advance and compares them with the metadata of the given video file. Through a practical implementation, we show that the proposed scheme offers fast analysis, high accuracy, and full automation.
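
    The comparison step can be pictured as matching metadata fields against prepared software signatures; the field names and signature values below are invented for illustration and are not the scheme's actual database.

```python
# Hypothetical signature database: metadata patterns typical of cameras vs. editing tools.
CAMERA_SIGNATURES = {"com.apple.quicktime.make": {"Apple", "Samsung"},
                     "encoder": {"CameraFirmware"}}
EDITOR_SIGNATURES = {"encoder": {"Lavf", "Adobe Premiere", "HandBrake"}}

def classify(metadata: dict) -> str:
    """Return 'edited' if any field matches an editing-tool signature,
    'camera-original' if it matches a camera signature, else 'unknown'."""
    for field, values in EDITOR_SIGNATURES.items():
        if any(v in str(metadata.get(field, "")) for v in values):
            return "edited"
    for field, values in CAMERA_SIGNATURES.items():
        if any(v in str(metadata.get(field, "")) for v in values):
            return "camera-original"
    return "unknown"

print(classify({"encoder": "Lavf58.29.100"}))           # -> edited
print(classify({"com.apple.quicktime.make": "Apple"}))  # -> camera-original
```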

  • A Machine Learning Method for Automatic Copyright Notice Identification of Source Files

    Shi QIU  Daniel M. GERMAN  Katsuro INOUE  

     
    LETTER-Software Engineering

    Publicized: 2020/09/18  Vol: E103-D No:12  Page(s): 2709-2712

    For Free and Open Source Software (FOSS), identifying copyright notices is important. However, both the collaborative manner of FOSS project development and the large number of source files make this task difficult. In this paper, we aim to automatically identify copyright notices in source files using machine learning techniques. The evaluation experiment shows that our method outperforms FOSSology, the only existing method, which is based on regular expressions.
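
    One plausible realization of such a classifier is a bag-of-words model over candidate comment lines; the tiny training set and the choice of TF-IDF with logistic regression are assumptions for illustration, not necessarily the features or learner used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled lines: 1 = copyright notice, 0 = other comment text.
lines = [
    "Copyright (C) 2018 Example Corp. All rights reserved.",
    "Copyright 2015-2019 Jane Doe <jane@example.org>",
    "This file implements the scheduler entry points.",
    "TODO: refactor this loop once the API stabilizes.",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(lines, labels)

print(clf.predict(["copyright (c) 2020 acme ltd", "fix typo in error message"]))
```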

  • Field-Trial Experiments of an IoT-Based Fiber Networks Control and Management-Plane Early Disaster Recovery via Narrow-Band and Lossy Links System (FRENLL)

    Sugang XU  Goshi SATO  Masaki SHIRAIWA  Katsuhiro TEMMA  Yasunori OWADA  Noboru YOSHIKANE  Takehiro TSURITANI  Toshiaki KURI  Yoshinari AWAJI  Naruto YONEMOTO  Naoya WADA  

     
    PAPER

    Publicized: 2020/05/14  Vol: E103-B No:11  Page(s): 1214-1225

    Large-scale disasters can lead to severe damage to or destruction of optical transport networks, including both the data plane (D-plane) and the control and management plane (C/M-plane). In addition to D-plane recovery, quick recovery of the C/M-plane network in modern software-defined networking (SDN)-based optical fiber networks is essential, not only for emergency control of the surviving optical network resources but also for quick collection of information on network damage and survivability so that an optimal recovery plan can be decided as early as possible. With the advent of Internet of Things (IoT) technologies, low-energy-consumption, low-cost IoT devices have become more common. Corresponding long-distance networking technologies such as low-power wide-area (LPWA) and LPWA-based mesh (LPWA-mesh) networks promise wide-coverage sensing and environmental data collection. We are therefore motivated to take an infrastructure-less IoT approach to provide long-distance, low-power, and inexpensive wireless connectivity and to create an emergency C/M-plane network for early disaster recovery. In this paper, we investigate the feasibility of fiber network C/M-plane recovery using an IoT-based extremely narrow-band and lossy links system (FRENLL). For the first time, we demonstrate a field-trial experiment of a long-latency, loss-tolerant SDN C/M-plane that takes advantage of widely available IoT resources and easy-to-create wireless mesh networks to enable timely recovery of the C/M-plane after a disaster.

  • Empirical Evaluation of Mimic Software Project Data Sets for Software Effort Estimation

    Maohua GAN  Zeynep YÜCEL  Akito MONDEN  Kentaro SASAKI  

     
    PAPER-Software Engineering

    Publicized: 2020/07/03  Vol: E103-D No:10  Page(s): 2094-2103

    To conduct empirical research on industrial software development, it is necessary to obtain data from real software projects in industry. However, only a few such industry data sets are publicly available, and unfortunately most of them are very old. In addition, most of today's software companies cannot open their data, because software development involves many stakeholders and data confidentiality must be strongly preserved. To that end, this study proposes a method for artificially generating a “mimic” software project data set whose characteristics (such as averages, standard deviations, and correlation coefficients) are very similar to those of a given confidential data set. Instead of using the original (confidential) data set, researchers are expected to use the mimic data set to produce results similar to those obtained from the original. The proposed method uses the Box-Muller transform to generate normally distributed random numbers, and exponential transformation and number reordering for data mimicry. To evaluate the efficacy of the proposed method, effort estimation is considered as a potential application domain for the mimic data. Estimation models are built from 8 reference data sets and their corresponding mimic data. Our experiments confirmed that models built from mimic data sets show effort estimation performance similar to that of models built from the original data sets, which indicates the capability of the proposed method to generate representative samples.
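
    The generation pipeline named in the abstract (Box-Muller sampling, an exponential transformation, and reordering) can be sketched as follows; the way the target statistics are matched here is a simplification of the proposed method, and the effort values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_muller(n):
    """Generate n standard normal samples from uniform pairs (Box-Muller transform)."""
    u1, u2 = rng.random(n), rng.random(n)
    return np.sqrt(-2 * np.log(u1)) * np.cos(2 * np.pi * u2)

def mimic(original):
    """Produce a synthetic column resembling `original`: sample normals, map them to a
    positive skewed scale with an exponential transform, then reorder the synthetic
    values to follow the rank order of the original values."""
    z = box_muller(len(original))
    synthetic = np.exp(np.log(original).mean() + np.log(original).std() * z)
    ranks = original.argsort().argsort()       # rank of each original value
    return np.sort(synthetic)[ranks]

effort = np.array([120.0, 340.0, 95.0, 800.0, 410.0, 60.0])  # hypothetical person-hours
fake = mimic(effort)
print(fake.round(1), fake.mean().round(1), effort.mean().round(1))
```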

  • Optimal Rejuvenation Policies for Non-Markovian Availability Models with Aperiodic Checkpointing

    Junjun ZHENG  Hiroyuki OKAMURA  Tadashi DOHI  

     
    PAPER-Dependable Computing

    Publicized: 2020/07/16  Vol: E103-D No:10  Page(s): 2133-2142

    In this paper, we present non-Markovian availability models that capture the dynamics of the behavior of an operational software system undergoing aperiodic time-based software rejuvenation and checkpointing. Two availability models with rejuvenation are considered, taking into account the procedure that follows completion of the rollback recovery operation. We then investigate whether there exists an optimal rejuvenation schedule that maximizes the steady-state system availability. The availability is derived by means of the phase expansion technique, because the resulting models are not trivial stochastic models such as semi-Markov processes or Markov regenerative processes, and thus are hard to solve with common approaches such as the Laplace-Stieltjes transform and embedded Markov chain techniques. Numerical experiments are conducted to determine the optimal rejuvenation trigger timing that maximizes the steady-state system availability for each availability model and to compare the two models.
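
    Because the models are solved numerically, the search for the best trigger timing reduces to evaluating steady-state availability over a grid of candidate timings; the availability function below is a toy stand-in, not the paper's phase-expansion solution.

```python
import numpy as np

def availability(trigger):
    """Toy steady-state availability: short triggers accumulate rejuvenation overhead,
    long triggers accumulate aging-related downtime (both terms are illustrative)."""
    rejuvenation_overhead = 0.5 / trigger        # amortized overhead per hour of uptime
    aging_downtime = 0.0004 * trigger            # downtime grows with the trigger interval
    uptime = 1.0
    return uptime / (uptime + rejuvenation_overhead + aging_downtime)

grid = np.linspace(5, 200, 400)                  # candidate trigger timings (hours)
values = np.array([availability(t) for t in grid])
best = grid[values.argmax()]
print(f"optimal trigger = {best:.1f} h, availability = {values.max():.4f}")
```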

  • 3D-HEVC Virtual View Synthesis Based on a Reconfigurable Architecture

    Lin JIANG  Xin WU  Yun ZHU  Yu WANG  

     
    PAPER-Multimedia Systems for Communications

    Publicized: 2019/11/12  Vol: E103-B No:5  Page(s): 618-626

    For high-definition (HD) videos, the 3D High Efficiency Video Coding (3D-HEVC) reference algorithm incurs dramatically high computation loads. Therefore, given the demand for real-time processing of HD video, a hardware implementation is necessary. In this paper, a reconfigurable architecture is proposed that supports both median-filtering and mean-filtering preprocessing so as to handle depth maps of different scene depths. The architecture sends different instructions to the corresponding processing elements according to the scenario: the mean filter is used to process near-range images, and the median filter is used to process long-range images. The simulation results show that the designed architecture achieves an average PSNR of 34.55dB for the tested images. The hardware design of the proposed virtual view synthesis system operates at a maximum clock frequency of 160MHz on the BEE4 platform, which is equipped with four Virtex-6 FF1759 LX550T Field-Programmable Gate Arrays (FPGAs), outputting 720p (1024×768) video at 124fps.
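
    The preprocessing choice and the PSNR figure can be illustrated with a short sketch; the depth threshold used to switch between the two filters, and the toy images, are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def preprocess_depth(depth, near_threshold=128):
    """Mean-filter near-range depth maps, median-filter long-range ones
    (assuming smaller values indicate a nearer scene; the threshold is illustrative)."""
    if depth.mean() < near_threshold:
        return uniform_filter(depth.astype(float), size=3)
    return median_filter(depth, size=3)

def psnr(reference, synthesized, peak=255.0):
    mse = np.mean((reference.astype(float) - synthesized.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
depth = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)   # toy depth map
filtered = preprocess_depth(depth)

# PSNR between a reference view and a (toy) synthesized view.
synthesized = np.clip(depth + rng.normal(0, 4, depth.shape), 0, 255)
print(f"PSNR: {psnr(depth, synthesized):.2f} dB")
```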

  • Cost-Sensitive and Sparse Ladder Network for Software Defect Prediction

    Jing SUN  Yi-mu JI  Shangdong LIU  Fei WU  

     
    LETTER-Software Engineering

    Publicized: 2020/01/29  Vol: E103-D No:5  Page(s): 1177-1180

    Software defect prediction (SDP) plays a vital role in allocating testing resources reasonably and ensuring software quality. When there are not enough labeled historical modules, many semi-supervised SDP methods have been proposed; these methods utilize the limited labeled modules and the abundant unlabeled modules simultaneously. Nevertheless, most of them use traditional features rather than powerful deep feature representations. Besides, the cost of misclassifying defective modules is higher than that of misclassifying defect-free ones, and the number of defective modules available for training is small. Taking the above issues into account, we propose a cost-sensitive and sparse ladder network (CSLN) for SDP. We first introduce the semi-supervised ladder network to extract deep feature representations. We then introduce cost-sensitive learning, which sets different misclassification costs for defective-prone and defect-free-prone instances, to alleviate the class imbalance problem. A sparse constraint is added on the hidden nodes of the ladder network when the number of hidden nodes is large, which enables the model to find robust structures in the data. Extensive experiments on the AEEEM dataset show that the CSLN outperforms several state-of-the-art semi-supervised SDP methods.
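
    The cost-sensitive ingredient can be pictured as a class-weighted loss; the cost ratio and toy data below are assumptions, and the ladder-network architecture itself is not reproduced here.

```python
import numpy as np

def weighted_cross_entropy(y_true, y_prob, cost_defective=5.0, cost_clean=1.0):
    """Binary cross-entropy where missing a defective module (label 1)
    costs more than misclassifying a defect-free one (label 0)."""
    eps = 1e-7
    y_prob = np.clip(y_prob, eps, 1 - eps)
    weights = np.where(y_true == 1, cost_defective, cost_clean)
    losses = -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    return np.mean(weights * losses)

y_true = np.array([1, 0, 0, 0, 1])
y_prob = np.array([0.3, 0.2, 0.1, 0.4, 0.8])   # predicted probability of "defective"
print(weighted_cross_entropy(y_true, y_prob))
```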

  • Design and Implementation of 10Gbps Software PPPoE Router for IoT Smart Home Network

    Ping DU  Akihiro NAKAO  Satoshi MIKI  Makoto INOUE  

     
    PAPER-Network

    Publicized: 2019/10/08  Vol: E103-B No:4  Page(s): 422-430

    In the coming smart-home era, more and more household electrical appliances are generating sensor data and transmitting them over home networks, which are often connected to the Internet through Point-to-Point Protocol over Ethernet (PPPoE) for authentication and accounting. However, to our knowledge, no high-speed commercial PPPoE router for home networks is yet available. In this paper, we first introduce and evaluate our programmable platform FLARE-DPDK, which eases the programming of network functions. We then describe our effort to build a compact 10Gbps software FLARE PPPoE router on a commercial mini-PC. In our implementation, the control plane is implemented with Linux PPPoE software for signaling control such as authentication. The data plane is implemented on the FLARE-DPDK platform, where we fetch packets from the physical network interfaces directly, bypassing the Linux kernel, and distribute them to multiple CPU cores for parallel processing. We verified our software PPPoE router in both lab and production network environments. The experimental results show that our FLARE software PPPoE router achieves much higher throughput than a commercial PPPoE router tested in a production environment.
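
    The per-packet data-plane work amounts to recognizing and unwrapping PPPoE session headers; the minimal parser below follows the RFC 2516 header layout and is only an illustration of that logic, not the FLARE-DPDK implementation.

```python
import struct

def parse_pppoe_session(frame: bytes):
    """Parse an Ethernet frame carrying a PPPoE session packet (EtherType 0x8864).
    Returns (session_id, ppp_protocol, payload) or None if it is not PPPoE session data."""
    if len(frame) < 22 or struct.unpack("!H", frame[12:14])[0] != 0x8864:
        return None
    ver_type, code, session_id, length = struct.unpack("!BBHH", frame[14:20])
    if ver_type != 0x11 or code != 0x00:        # version 1, type 1, session data
        return None
    ppp_protocol = struct.unpack("!H", frame[20:22])[0]
    return session_id, ppp_protocol, frame[22:22 + length - 2]

# Hand-built frame: dst/src MAC, EtherType 0x8864, PPPoE v1/t1, session 7, PPP IPv4 (0x0021).
frame = (b"\xff" * 6 + b"\x00" * 6 + b"\x88\x64" +
         b"\x11\x00" + b"\x00\x07" + b"\x00\x06" + b"\x00\x21" + b"abcd")
print(parse_pppoe_session(frame))
```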

  • Software Development Effort Estimation from Unstructured Software Project Description by Sequence Models

    Tachanun KANGWANTRAKOOL  Kobkrit VIRIYAYUDHAKORN  Thanaruk THEERAMUNKONG  

     
    PAPER

    Publicized: 2020/01/14  Vol: E103-D No:4  Page(s): 739-747

    Most existing methods for effort estimation in software development are manual, labor-intensive, and subjective, resulting in overestimation that causes bids to fail and underestimation that causes financial loss. This paper investigates the effectiveness of sequence models for estimating development effort, in man-months, from software project data. Four architectures are compared in terms of man-month error: (1) averaged word vectors with a multi-layer perceptron (MLP), (2) averaged word vectors with support vector regression (SVR), (3) a gated recurrent unit (GRU) sequence model, and (4) a long short-term memory (LSTM) sequence model. The approach is evaluated using two datasets: ISEM (1,573 English software project descriptions) and ISBSG (9,100 software project records), where the former is raw text and the latter is a structured data table describing the characteristics of software projects. The LSTM sequence model achieves the lowest and the second-lowest mean absolute errors, 0.705 and 14.077 man-months for the ISEM and ISBSG datasets, respectively. The MLP model achieves the lowest mean absolute error on the ISBSG dataset, 14.069 man-months.
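
    A minimal version of the LSTM regressor described above might look like the following Keras sketch; the vocabulary size, sequence length, layer sizes, and dummy data are assumptions, and real training would use the tokenized project descriptions.

```python
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN = 5000, 200          # assumed vocabulary size and description length

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1)        # predicted effort in man-months
])
model.compile(optimizer="adam", loss="mae")   # mean absolute error, as reported in the paper

# Dummy stand-ins for tokenized project descriptions and effort labels.
x = np.random.randint(0, VOCAB, size=(32, MAXLEN))
y = np.random.uniform(1, 36, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:2], verbose=0).ravel())
```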

  • Compiler Software Coherent Control for Embedded High Performance Multicore

    Boma A. ADHI  Tomoya KASHIMATA  Ken TAKAHASHI  Keiji KIMURA  Hironori KASAHARA  

     
    PAPER

    Vol: E103-C No:3  Page(s): 85-97

    The advancement of multicore technology has made processors with hundreds or even thousands of cores on a single chip possible. However, at such scales, a hardware-based cache coherency mechanism becomes overwhelmingly complicated, hot, and expensive. Therefore, we propose a software coherence scheme managed by a parallelizing compiler for shared-memory multicore systems without a hardware cache coherence mechanism. Our proposed method is simple and efficient, and it is built into the OSCAR automatic parallelizing compiler. The OSCAR compiler parallelizes coarse-grain tasks, analyzes stale data and line sharing in the program, and then solves those problems by simple program restructuring and data synchronization. Using our proposed method, we compiled 10 benchmark programs from SPEC2000, SPEC2006, the NAS Parallel Benchmarks (NPB), and MediaBench II. The compiled binaries were then run on the Renesas RP2, an 8-core SH-4A processor, and on a custom 8-core Altera Nios II system on an Altera Arria 10 FPGA. The cache coherence hardware on the RP2 processor is only available for up to 4 cores and can also be turned off for a non-coherent cache mode. The Nios II multicore system has no hardware cache coherence mechanism at all, so running a parallel program is difficult without compiler support. The proposed method performed as well as or better than the hardware cache coherence scheme while still producing results as correct as those of the hardware mechanism. The method allows a massive array of shared-memory CPU cores in an HPC setting, or a simple non-coherent multicore embedded CPU, to be programmed easily. For example, on the RP2 processor, the proposed software-controlled non-coherent-cache (NCC) method achieved a 2.6 times speedup for SPEC 2000 “equake” with 4 cores against sequential execution, whereas MESI hardware coherence control achieved only a 2.5 times speedup with 4 cores. The software coherence control also achieved a 4.4 times speedup with 8 cores, where no hardware coherence mechanism is available.

  • Software Process Capability Self-Assessment Support System Based on Task and Work Product Characteristics: A Case Study of ISO/IEC 29110 Standard

    Apinporn METHAWACHANANONT  Marut BURANARACH  Pakaimart AMSURIYA  Sompol CHAIMONGKHON  Kamthorn KRAIRAKSA  Thepchai SUPNITHI  

     
    PAPER-Software Engineering

    Publicized: 2019/10/17  Vol: E103-D No:2  Page(s): 339-347

    A key driver of software business growth in developing countries is the survival of software small and medium-sized enterprises (SMEs). Product quality is a critical factor that can determine the future of the business by building customer confidence, so software development agencies need to meet international standards for their development process. In practice, consultants and assessors are usually employed as the primary solution, which can strain the budget of small businesses. Self-assessment tools for the software development process can potentially reduce the time and cost of formal assessment for software SMEs. However, the existing support methods and tools are largely insufficient in terms of process coverage and semi-automated evaluation. This paper proposes a knowledge-based approach to developing a self-assessment and gap analysis support system for the ISO/IEC 29110 standard. The approach has the advantage that insights from domain experts and the standard are captured in a knowledge base in the form of decision tables that can be flexibly managed. Our knowledge base is unique in that the task lists and work products defined in the standard are broken down into task and work product characteristics, respectively. Their relations provide the links between task lists and work products, which helps users better understand the standard and supports self-assessment. A prototype support system was developed to assess the software development capability level of agencies based on the ISO/IEC 29110 standard. A preliminary evaluation study showed that, compared to the traditional self-assessment method, the system improves the task coverage and reduces the time and effort of users who are inexperienced in applying the ISO/IEC 29110 standard.
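
    The decision-table idea can be illustrated with a tiny lookup that maps observed work-product characteristics to a coverage verdict; the characteristics, rules, and the "PM.1" label below are invented for illustration and do not reproduce the ISO/IEC 29110 knowledge base.

```python
# Each rule maps a set of required characteristics to an assessed capability outcome;
# rules are checked in order, most specific first.
DECISION_TABLE = [
    ({"requirements_documented", "requirements_reviewed"}, "PM.1 covered"),
    ({"requirements_documented"},                          "PM.1 partially covered"),
    (set(),                                                "PM.1 not covered"),
]

def assess(observed: set) -> str:
    """Return the verdict of the first rule whose conditions are all satisfied."""
    for conditions, verdict in DECISION_TABLE:
        if conditions <= observed:
            return verdict
    return "unassessed"

print(assess({"requirements_documented"}))
print(assess({"requirements_documented", "requirements_reviewed", "traceability_kept"}))
```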

  • Towards Blockchain-Based Software-Defined Networking: Security Challenges and Solutions

    Wenjuan LI  Weizhi MENG  Zhiqiang LIU  Man-Ho AU  

     
    INVITED PAPER

    Publicized: 2019/11/08  Vol: E103-D No:2  Page(s): 196-203

    Software-Defined Networking (SDN) enables flexible deployment and innovation of new networking applications by decoupling and abstracting the control and data planes. It has radically changed the concept of and the way of building and managing networked systems, and it has reduced the barriers to entry for new players in the service markets. It is considered a promising solution for providing the scale and versatility necessary for IoT. However, SDN also faces many challenges; for example, the centralized control plane can be a single point of failure. With the advent of blockchain technology, blockchain-based SDN has become an emerging architecture for securing distributed network environments. Motivated by this, in this work we summarize the generic framework of blockchain-based SDN, discuss security challenges and relevant solutions, and provide insights on future developments in this field.

  • Study on the Vulnerabilities of Free and Paid Mobile Apps Associated with Software Library

    Takuya WATANABE  Mitsuaki AKIYAMA  Fumihiro KANEI  Eitaro SHIOJI  Yuta TAKATA  Bo SUN  Yuta ISHII  Toshiki SHIBAHARA  Takeshi YAGI  Tatsuya MORI  

     
    PAPER-Network Security

    Publicized: 2019/11/22  Vol: E103-D No:2  Page(s): 276-291

    This paper reports a large-scale study that aims to understand how mobile application (app) vulnerabilities are associated with software libraries. We analyze both free and paid apps. Studying paid apps was quite meaningful because it helped us understand how differences in app development/maintenance affect the vulnerabilities associated with libraries. We analyzed 30k free and paid apps collected from the official Android marketplace. Our extensive analyses revealed that approximately 70%/50% of vulnerabilities of free/paid apps stem from software libraries, particularly from third-party libraries. Somewhat paradoxically, we found that more expensive/popular paid apps tend to have more vulnerabilities. This comes from the fact that more expensive/popular paid apps tend to have more functionality, i.e., more code and libraries, which increases the probability of vulnerabilities. Based on our findings, we provide suggestions to stakeholders of mobile app distribution ecosystems.

  • Towards Minimizing RAM Requirement for Implementation of Grain-128a on ARM Cortex-M3

    Yuhei WATANABE  Hideki YAMAMOTO  Hirotaka YOSHIDA  

     
    PAPER

    Vol: E103-A No:1  Page(s): 2-10

    As Internet-connected services have emerged, there is a growing need for use cases in which a lightweight cryptographic primitive must meet both a constrained hardware implementation requirement and a constrained embedded software requirement. One example of such a use case is the PKES (Passive Keyless Entry and Start) system in the automotive domain. From this perspective, an interesting direction is to investigate how small the memory (RAM/ROM) requirement of ARM implementations of hardware-oriented stream ciphers can be. In this paper, we propose implementation techniques for memory-optimized implementations of lightweight hardware-oriented stream ciphers, including Grain-128a, which is specified in ISO/IEC 29167-13 for RFID protocols. Our techniques include a data-dependency analysis that takes a close look at how and at which timing certain variables are updated, and an approach that takes into account the structure of the registers on the target microcontroller. To minimize RAM size, we reduce the number of general-purpose registers used to compute Grain-128a's update and pre-output values. We present the results of our memory-optimized implementations of Grain-128a, one of which requires 84 RAM bytes on the ARM Cortex-M3.

  • Frequency Efficient Subcarrier Spacing in Multicarrier Backscatter Sensors System Open Access

    Jin MITSUGI  Yuki SATO  Yuusuke KAWAKITA  Haruhisa ICHIKAWA  

     
    PAPER-Digital Signal Processing

    Vol: E102-A No:12  Page(s): 1834-1841

    Backscatter wireless communications offer advantages such as batteryless operation, small form factor, and sensors exempt from radio regulation. The major challenge for backscatter wireless communications is synchronized multicarrier data collection, which can be realized by rejecting mutual harmonics among backscatters. This paper analyzes the mutual interference of digitally modulated multicarrier backscatter and finds that interference from higher-frequency subcarriers to lower-frequency subcarriers, which does not occur in analog-modulated multicarrier backscatter, is harmful for densely populated subcarriers. This reverse interference distorts the harmonics replica and deteriorates the performance of the existing method, which rejects mutual interference among subcarriers with a 5dB processing gain. To solve this problem, this paper analyzes the relationship between subcarrier spacing and reverse interference, and reveals that alternate channel spacing, with a channel separation of twice the subcarrier bandwidth, provides reasonably dense subcarrier allocation while alleviating reverse interference. The idea is examined with prototype sensors in a wired experiment and in an indoor propagation experiment. The results reveal that with alternate channel spacing the reverse interference becomes practically negligible, and the existing interference rejection method achieves its original processing gain of 5dB, reducing the packet error rate to one hundredth.
