
Keyword Search Result

[Keyword] Q (6809 hits)

Results 1521-1540 of 6809

  • Negative Surveys with Randomized Response Techniques for Privacy-Aware Participatory Sensing

    Shunsuke AOKI  Kaoru SEZAKI  

     
    PAPER-Network

    Vol: E97-B No:4   Page(s): 721-729

    Participatory sensing is an emerging system that allows the growing number of smartphone users to effectively share fine-grained statistical information collected by themselves. The system relies on participants' active contribution, including intentional input of data. However, privacy concerns may hinder the spread of participatory sensing applications: resource-constrained mobile phones cannot easily rely on complicated encryption schemes, so a privacy-preserving participatory sensing scheme with low computational complexity is needed. Moreover, an environment that reassures participants and encourages their participation is strongly required, because the quality of the statistical data depends on the active contribution of general users. In this article, we present the MNS-RRT algorithm, a combination of negative surveys and randomized response techniques, for preserving privacy in participatory sensing with a high level of data integrity. Using our method, participatory sensing applications can deal with data having two choices in a dimension. We evaluated how this scheme preserves privacy while ensuring data integrity.
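
    As a concrete illustration of the randomized response component, the Python sketch below implements the classic Warner-style estimator, a building block of such schemes rather than the paper's MNS-RRT algorithm itself; the 30% true rate and p = 0.8 are hypothetical:

      import random

      def randomized_response(truth, p):
          # Report the true answer with probability p, the opposite otherwise.
          return truth if random.random() < p else not truth

      def estimate_proportion(reports, p):
          # E[reported 'yes' rate] = p*pi + (1-p)*(1-pi), so an unbiased
          # estimate is pi = (lam + p - 1) / (2p - 1).
          lam = sum(reports) / len(reports)
          return (lam + p - 1) / (2 * p - 1)

      random.seed(0)
      truths = [random.random() < 0.3 for _ in range(10000)]
      reports = [randomized_response(t, 0.8) for t in truths]
      print(round(estimate_proportion(reports, 0.8), 3))  # close to 0.3

    The aggregator recovers accurate statistics without learning any individual's true answer, which is the property the paper combines with negative surveys.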

  • Some Results on Generalized Quasi-Cyclic Codes over $\mathbb{F}_q+u\mathbb{F}_q$

    Jian GAO  Fang-Wei FU  Linzhi SHEN  Wenli REN  

     
    LETTER-Coding Theory

    Vol: E97-A No:4   Page(s): 1005-1011

    Generalized quasi-cyclic (GQC) codes with arbitrary lengths over the ring $\mathbb{F}_{q}+u\mathbb{F}_{q}$, where $u^2=0$, $q=p^n$, $n$ a positive integer and $p$ a prime number, are investigated. By the Chinese Remainder Theorem, structural properties and the decomposition of GQC codes are given. For 1-generator GQC codes, minimum generating sets and lower bounds on the minimum distance are given.
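
    The ring in question is $R=\mathbb{F}_q+u\mathbb{F}_q \cong \mathbb{F}_q[u]/(u^2)$. A minimal Python sketch of its arithmetic for the prime case q = p (p = 3 is an arbitrary example value), storing a + ub as the pair (a, b):

      P = 3  # the prime p

      def add(x, y):
          return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

      def mul(x, y):
          # (a1 + u*b1)(a2 + u*b2) = a1*a2 + u*(a1*b2 + a2*b1), since u^2 = 0
          return ((x[0] * y[0]) % P, (x[0] * y[1] + x[1] * y[0]) % P)

      u = (0, 1)
      print(mul(u, u))            # (0, 0): u is nilpotent
      print(mul((1, 2), (2, 1)))  # (2, 2), i.e. 2 + 2u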

  • Linear Complexity of Pseudorandom Sequences Derived from Polynomial Quotients: General Cases

    Xiaoni DU  Ji ZHANG  Chenhuang WU  

     
    PAPER-Information Theory

    Vol: E97-A No:4   Page(s): 970-974

    We determine the linear complexity of binary sequences derived from the polynomial quotient modulo $p$ defined by $F(u)\equiv \frac{f(u)-f_p(u)}{p} \pmod{p}$, $0 \le F(u) \le p-1$, $u\ge 0$, where $f_p(u)\equiv f(u) \pmod{p}$, for general polynomials $f(x)\in \mathbb{Z}[x]$. The linear complexity equals one of the values $\{p^2-p,\ p^2-p+1,\ p^2-1,\ p^2\}$ if 2 is a primitive root modulo $p^2$, depending on whether $p\equiv 1$ or $3 \pmod 4$ and on the number of solutions of $f'(u)\equiv 0 \pmod{p}$, where $f'(x)$ is the derivative of $f(x)$. Furthermore, we extend the constructions to $d$-ary sequences for prime $d\mid(p-1)$ with $d$ a primitive root modulo $p^2$.
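
    A short Python sketch of the quotient $F(u)$ and a derived binary sequence; the threshold-at-$p/2$ binarization is a common choice in this literature and an assumption here, as is the example polynomial:

      def poly_quotient(f, u, p):
          # F(u) = ((f(u) - f_p(u)) / p) mod p, with f_p(u) = f(u) mod p
          v = f(u)
          return ((v - v % p) // p) % p

      def binary_sequence(f, p):
          # one period (length p^2) of a binary sequence derived from F
          return [0 if poly_quotient(f, u, p) < p / 2 else 1
                  for u in range(p * p)]

      p = 5
      f = lambda x: x**3 + 2 * x + 1  # an arbitrary f(x) in Z[x]
      print(binary_sequence(f, p)[:15])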

  • Joint CPFSK Modulation and Physical-Layer Network Coding in Two-Way Relay Channels

    Nan SHA  Yuanyuan GAO  Xiaoxin YI  Wenlong LI  Weiwei YANG  

     
    LETTER-Communication Theory and Signals

    Vol: E97-A No:4   Page(s): 1021-1023

    A joint continuous phase frequency shift keying (CPFSK) modulation and physical-layer network coding (PNC) scheme, termed CPFSK-PNC, is proposed for two-way relay channels (TWRCs). This letter discusses signal detection for the CPFSK-PNC scheme, with emphasis on the maximum-likelihood sequence detection (MLSD) algorithm for the relay receiver. The end-to-end error performance of the proposed CPFSK-PNC scheme is evaluated through simulations.

  • Textual Approximation Methods for Time Series Classification: TAX and l-TAX Open Access

    Abdulla Al MARUF  Hung-Hsuan HUANG  Kyoji KAWAGOE  

     
    PAPER

    Vol: E97-D No:4   Page(s): 798-810

    A lot of work has been conducted on time series classification and similarity search over the past decades. However, the accuracy of time series classification is still insufficient for applications such as ubiquitous or sensor systems. In this paper, a novel textual approximation of a time series, called TAX, is proposed to achieve highly accurate time series classification. l-TAX, an extended version of TAX that shows promising classification accuracy over TAX and other existing methods, is also proposed. We also provide a comprehensive comparison between TAX and l-TAX, and discuss the benefits of both methods. Both TAX and l-TAX transform a time series into a textual structure using existing document retrieval methods and bioinformatics algorithms. In TAX, a time series is represented as a document-like structure, whereas in l-TAX it is represented as a sequence of textual symbols. This paper provides a comprehensive overview of the textual approximation and the techniques used by TAX and l-TAX.
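
    For intuition, a generic SAX-style symbolization (z-normalize, piecewise-aggregate, quantize into letters) is sketched below in Python; it is in the spirit of textual time-series approximation but is not the paper's TAX or l-TAX algorithm:

      import numpy as np

      def symbolize(series, segments=8, alphabet="abcd"):
          x = np.asarray(series, dtype=float)
          x = (x - x.mean()) / (x.std() + 1e-12)   # z-normalize
          paa = x[: len(x) // segments * segments].reshape(segments, -1).mean(axis=1)
          breakpoints = [-0.6745, 0.0, 0.6745]     # N(0,1) quartiles
          return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa)

      print(symbolize(np.sin(np.linspace(0, 2 * np.pi, 64))))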

  • Magnetic Field Homogeneity of Birdcage Coil for 4T MRI System with No Lumped Circuit Elements

    Ryotaro SUGA  Kazuyuki SAITO  Masaharu TAKAHASHI  Koichi ITO  

     
    PAPER-Antennas and Propagation

    Vol: E97-B No:4   Page(s): 791-797

    In recent years, magnetic resonance imaging (MRI) systems that operate at up to 3T have come into clinical use in Japan. To meet the requirements of higher image quality and shorter imaging times, devices that use higher magnetic fields (> 3T) and high-power electromagnetic (EM) wave pulses have been developed. The EM wave frequency rises in proportion to the static magnetic field strength, which raises the issue of variation in the capacitance used in the radio frequency (RF) coil of an MRI system. In addition, the increased power causes withstand-voltage problems, and these factors lead to a non-uniform magnetic field inside the RF coil. We therefore proposed a birdcage coil without lumped circuit elements for MRI systems in a previous study. However, that birdcage coil is difficult to fabricate, so a simply structured birdcage coil with no lumped circuit elements is desired. In this paper, we propose a simply structured birdcage coil with no lumped circuit elements for a 4T MRI system, and we investigate the input impedance and magnetic field distribution of the proposed coil by FDTD calculations and measurements. The results confirm that the proposed birdcage coil matches the performance of a conventional birdcage coil that includes several capacitors.
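
    For reference, the RF (Larmor) frequency scales linearly with the static field, $f_0=\frac{\gamma}{2\pi}B_0$, where $\gamma/2\pi \approx 42.58$ MHz/T for protons; a 4T system therefore operates near $42.58 \times 4 \approx 170$ MHz, versus roughly 128 MHz at 3T. This frequency rise is what makes lumped capacitors in the coil problematic.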

  • Performance Comparison of Subjective Assessment Methods for Stereoscopic 3D Video Quality

    Taichi KAWANO  Kazuhisa YAMAGISHI  Takanori HAYASHI  

     
    PAPER-Network

    Vol: E97-B No:4   Page(s): 738-745

    The International Telecommunication Union has standardized many subjective assessment methods for stereoscopic three-dimensional (3D) and 2D video quality, and the same methods are used for both. The assessment time, stability, and discrimination ability (i.e., the ability to identify differences in video quality) are important factors in subjective assessment methods. These factors have been studied extensively for 2D video quality but not sufficiently for 3D video quality. To address this, we conduct subjective quality assessments for 3D and 2D videos using the absolute category rating (ACR), degradation category rating (DCR), and double stimulus continuous quality-scale (DSCQS) methods defined in ITU Recommendations. We first investigate the Pearson's correlation coefficients and Spearman's rank correlation coefficients between different pairings of the three methods to clarify which method is most efficient in terms of assessment time. The different pairings of the three methods exhibit high coefficients, indicating that the order relation of the mean opinion scores (MOSs) and the distances between MOSs are almost the same across methods. Therefore, for general investigation of quality characteristics, the ACR method is most efficient because it has the shortest assessment time. Next, we analyze the stability of these subjective assessment methods. We clarify that the confidence intervals (CIs) of the MOSs for 3D video are almost the same as those for 2D video and that the stability of the DCR method is higher than that of the other methods; the DSCQS method has the smallest CIs for high-quality video. Finally, we investigate the discrimination ability of these subjective assessment methods. The results show that the DCR method performs better than the others in terms of the number of paired MOSs with a significant difference for low-quality video, whereas the DSCQS method performs better for high-quality video.
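
    A minimal Python sketch of the two basic quantities compared in such studies, the MOS and its confidence interval; the normal-approximation 95% CI shown here is a standard choice, not necessarily the paper's exact procedure, and the ratings are hypothetical:

      import math
      from statistics import mean, stdev

      def mos_with_ci(ratings, z=1.96):
          m = mean(ratings)
          half = z * stdev(ratings) / math.sqrt(len(ratings))
          return m, (m - half, m + half)

      # hypothetical ACR ratings on the usual 5-point scale
      ratings = [4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5, 4, 3, 4, 4, 4, 5, 3, 4, 4]
      m, (lo, hi) = mos_with_ci(ratings)
      print(f"MOS = {m:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")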

  • Rapid Acquisition Assisted by Navigation Data for Inter-Satellite Links of Navigation Constellation

    Xian-Bin LI  Yue-Ke WANG  Jian-Yun CHEN  Shi-ce NI  

     
    PAPER-Navigation, Guidance and Control Systems

    Vol: E97-B No:4   Page(s): 915-922

    Introducing inter-satellite ranging and communication links in a Global Navigation Satellite System (GNSS) can improve its performance. In view of the highly dynamic characteristics of the inter-satellite link (ISL) signal of a navigation constellation and the requirement for rapid but reliable acquisition, we utilize navigation data, a resource unique to navigation satellites, to assist signal acquisition. In this paper, we introduce a method that uses navigation data for signal acquisition from three aspects: search space, search algorithm, and detector structure. First, an iterative method to calculate the search space is presented. Then the most efficient algorithm is selected by comparing the computational complexity of different search algorithms. Finally, with the navigation data, we also propose a method to keep the detection probability constant by adjusting the number of non-coherent integrations. An analysis shows that, with the assistance of navigation data, we can significantly reduce the computing cost of ISL signal acquisition, while effectively enhancing acquisition speed and stabilizing the detection probability.
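
    The constant-detection-probability idea can be sketched in a textbook chi-square detection model (an assumption of this sketch, not the paper's derivation): choose the smallest number K of non-coherent accumulations whose detection probability meets a target at a fixed false-alarm rate.

      from scipy.stats import chi2, ncx2

      def noncoherent_times(snr_per_block, pd_target=0.95, pfa=1e-3, k_max=64):
          # H0: statistic ~ chi2(2K); H1: noncentral chi2(2K, K*snr)
          for k in range(1, k_max + 1):
              thr = chi2.ppf(1 - pfa, df=2 * k)
              pd = 1 - ncx2.cdf(thr, df=2 * k, nc=k * snr_per_block)
              if pd >= pd_target:
                  return k, pd
          return None

      print(noncoherent_times(snr_per_block=4.0))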

  • Stem Cell Quantity Determination in Artificial Culture Bone by Ultrasonic Testing

    Naomi YAGI  Tomomoto ISHIKAWA  Yutaka HATA  

     
    PAPER-Ultrasonics

    Vol: E97-A No:4   Page(s): 913-922

    This paper describes an ultrasonic system that estimates the cell quantity in an artificial culture bone, which is useful for appropriate treatment with a composite of this material and bone marrow stromal cells. For this system, we examine two approaches for analyzing the ultrasound waves transmitted through the cultured bone containing stem cells to estimate the cell quantity: multiple regression and fuzzy inference. We employ two characteristics of the obtained wave for each method: the amplitude, measured directly from the obtained wave, and the frequency, calculated by the cross-spectrum method. The results confirm that the fuzzy inference method yields accurate estimates of the cell quantity in artificial culture bone. Using this ultrasonic estimation system, orthopaedic surgeons can choose composites that contain a favorable number of cells before implantation.

  • Probabilistic Range Querying over Gaussian Objects Open Access

    Tingting DONG  Chuan XIAO  Yoshiharu ISHIKAWA  

     
    PAPER

    Vol: E97-D No:4   Page(s): 694-704

    Probabilistic range query is an important type of query in the area of uncertain data management. A probabilistic range query returns all the data objects within a specific range from the query object with a probability no less than a given threshold. In this paper, we assume that each uncertain object stored in the database is associated with a multi-dimensional Gaussian distribution, which describes the probability distribution that the object appears in the multi-dimensional space. A query object is either a certain object or an uncertain object modeled by a Gaussian distribution. We propose several filtering techniques and an R-tree-based index to efficiently support probabilistic range queries over Gaussian objects. Extensive experiments on real data demonstrate the efficiency of our proposed approach.
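
    A simple Monte Carlo check of the query predicate in Python; the filtering and indexing are the paper's contribution, and this sketch only evaluates one object's appearance probability with made-up parameters:

      import numpy as np

      def within_range_probability(mu, cov, query, r, n=100000):
          # P(||X - query|| <= r) for X ~ N(mu, cov), by sampling
          rng = np.random.default_rng(0)
          samples = rng.multivariate_normal(mu, cov, size=n)
          dists = np.linalg.norm(samples - np.asarray(query), axis=1)
          return float(np.mean(dists <= r))

      prob = within_range_probability([1.0, 0.5], [[0.5, 0.1], [0.1, 0.3]],
                                      [0.0, 0.0], r=2.0)
      print(prob, prob >= 0.5)  # object qualifies if prob >= threshold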

  • A New Non-data Aided Frequency Offset Estimation Method for OFDM Based Device-to-Device Systems

    Kyunghoon WON  Dongjun LEE  Wonjun HWANG  Hyung-Jin CHOI  

     
    PAPER-Wireless Communication Technologies

    Vol: E97-B No:4   Page(s): 896-904

    D2D (Device-to-Device) communication has received considerable attention in recent years as one of the key technologies for future communication systems. Among typical D2D communication systems, FlashLinQ (FLQ) adopted single-tone OFDM (Orthogonal Frequency Division Multiplexing) transmission, which enables wide-sense discovery and distributed channel-aware link scheduling. Although FLQ basically assumes synchronization based on a CES (Common External Source), a means to support devices that cannot use a CES is still necessary. In most OFDM systems, CFO (Carrier Frequency Offset) induces ICI (Inter-Channel Interference), which drastically degrades overall system performance. Especially in D2D systems, ICI can be amplified by the different path losses between links, so precise estimation and correction of the CFO is very important. Many CFO estimation algorithms, both DA (Data-Aided) and NDA (Non-Data-Aided), have been proposed for OFDM systems, but D2D systems impose several constraints on frequency synchronization. Therefore, in this paper, we propose a new NDA CFO estimation method for OFDM-based D2D systems. The proposed method is based on the characteristics of the single-tone OFDM signal and is composed of two estimation stages: initial estimation and feedback estimation. In the initial estimation, the CFO is estimated from two correlation results within a symbol, and the estimation range is set adaptively by the distance between the two correlation windows. In the feedback estimation, the distance between the two correlation results is gradually increased by re-using the estimated CFO and the correlation results, so a more precise CFO estimate is obtained. A numerical analysis and performance evaluation verify that the proposed method has a large estimation range and achieves precise estimation performance compared to conventional methods.
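
    The initial stage builds on the standard delay-and-correlate estimator, whose unambiguous range shrinks (and precision grows) as the window spacing D increases, exactly the trade-off the feedback stage exploits. A minimal generic Python sketch, not the paper's exact receiver; the tone and sample rate are made up:

      import numpy as np

      def cfo_estimate(x, D, fs):
          # phase of sum x*[n] x[n+D] grows as 2*pi*f*D/fs;
          # unambiguous range is +/- fs / (2*D)
          z = np.sum(np.conj(x[:-D]) * x[D:])
          return np.angle(z) * fs / (2 * np.pi * D)

      fs, f_off, n = 15360, 150.0, 2048
      t = np.arange(n) / fs
      rng = np.random.default_rng(1)
      noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
      x = np.exp(2j * np.pi * f_off * t) + noise
      print(cfo_estimate(x, D=16, fs=fs))  # close to 150.0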

  • New Metrics for Prioritized Interaction Test Suites

    Rubing HUANG  Dave TOWEY  Jinfu CHEN  Yansheng LU  

     
    PAPER-Software Engineering

    Vol: E97-D No:4   Page(s): 830-841

    Combinatorial interaction testing has been well studied in recent years and has been widely applied in practice. It generally aims at generating an effective test suite (an interaction test suite) in order to identify faults caused by parameter interactions. Due to constraints in practical applications (e.g., limited testing resources), for example in combinatorial interaction regression testing, prioritized interaction test suites (called interaction test sequences) are often employed, and many strategies have been proposed to guide interaction test suite prioritization. It is therefore important to be able to evaluate the interaction test sequences created by different strategies. A well-known metric is the Average Percentage of Combinatorial Coverage (APCCλ for short), which assesses the rate at which a given interaction test sequence S covers the interactions of a strength λ (the level of interaction among parameters). However, APCCλ has two drawbacks: firstly, it requires that all test cases in S be executed and that all possible λ-wise parameter-value combinations be covered by S; and secondly, it evaluates an interaction test sequence at a single strength λ rather than at multiple strengths, so it is not a comprehensive evaluation. To overcome the first drawback, we propose an enhanced metric, Normalized APCCλ (NAPCC), to replace APCCλ. To overcome the second drawback, we propose three new metrics: the Average Percentage of Strengths Satisfied (APSS); the Average Percentage of Weighted Multiple Interaction Coverage (APWMIC); and the Normalized APWMIC (NAPWMIC). These metrics comprehensively assess a given interaction test sequence by considering interaction coverage at different strengths. Empirical studies show that the proposed metrics can distinguish different interaction test sequences, and hence can be used to compare different test prioritization strategies.
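
    One plausible reading of such a coverage-rate metric (the paper's exact normalization may differ) is the average, over prefixes of the sequence, of the fraction of λ-wise parameter-value combinations already covered, sketched in Python:

      from itertools import combinations, product

      def apcc(test_sequence, param_values, lam):
          k = len(param_values)
          total = sum(1 for idx in combinations(range(k), lam)
                      for _ in product(*(param_values[i] for i in idx)))
          covered, rates = set(), []
          for test in test_sequence:
              for idx in combinations(range(k), lam):
                  covered.add((idx, tuple(test[i] for i in idx)))
              rates.append(len(covered) / total)
          return sum(rates) / len(rates)

      # 3 binary parameters, strength 2, a 4-test sequence
      params = [[0, 1], [0, 1], [0, 1]]
      seq = [(0, 0, 0), (1, 1, 1), (0, 1, 1), (1, 0, 0)]
      print(round(apcc(seq, params, 2), 3))  # 0.562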

  • An Improved Video Identification Scheme Based on Video Tomography

    Qing-Ge JI  Zhi-Feng TAN  Zhe-Ming LU  Yong ZHANG  

     
    PAPER-Image Processing and Video Processing

    Vol: E97-D No:4   Page(s): 919-927

    In recent years, with the popularization of video collection devices and the development of the Internet, it has become easy to copy original digital videos and to distribute illegal copies quickly through the Internet. Upholding copyright law has become a critical task, and the problem requires a technical solution. Copy detection, or video identification, is therefore a challenging problem of increasing importance. The problem addressed here is to identify a given video clip in a given set of video sequences. In this paper, an extension of the video identification approach based on video tomography is presented. First, the feature extraction process is modified to enhance the reliability of the shot signature while keeping its size unchanged. Then, a new similarity measure between two shot signatures is proposed to address the problem the original approach faces when the query shot is short. In addition, the query scope is extended from a single shot to a clip (several consecutive shots) by defining a new similarity between two clips and describing a search algorithm that saves much of the computation cost. Experimental results show that the proposed approach is more suitable for identifying short shots than the original approach. The clip query approach performs well in the experiments and also shows strong robustness to data loss.

  • New Constructions of Perfect 8-QAM+/8-QAM Sequences

    Chengqian XU  Xiaoyu CHEN  Kai LIU  

     
    LETTER-Coding Theory

    Vol: E97-A No:4   Page(s): 1012-1015

    This letter presents new methods for transforming perfect ternary sequences into perfect 8-QAM+ sequences. Firstly, based on perfect ternary sequences with even period, two mappings that map two ternary variables to an 8-QAM+ symbol are employed to construct new perfect 8-QAM+ sequences; in this case, the proposed construction generalizes the existing one. Then, based on perfect ternary sequences with odd period, perfect 8-QAM sequences are generated. Compared with perfect 8-QAM+ sequences, the resulting sequences have no energy loss.
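
    "Perfect" here means that all out-of-phase periodic autocorrelations vanish, which is easy to test numerically. The Python sketch checks a Zadoff-Chu sequence, a standard perfect polyphase example used only to exercise the check (it is neither ternary nor 8-QAM+):

      import cmath

      def periodic_autocorrelation(seq, shift):
          n = len(seq)
          return sum(seq[t] * seq[(t + shift) % n].conjugate()
                     for t in range(n))

      def is_perfect(seq, tol=1e-9):
          return all(abs(periodic_autocorrelation(seq, s)) < tol
                     for s in range(1, len(seq)))

      n, u = 5, 1  # Zadoff-Chu sequence of odd length 5, root 1
      zc = [cmath.exp(-1j * cmath.pi * u * t * (t + 1) / n) for t in range(n)]
      print(is_perfect(zc))  # True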

  • Computationally Efficient Estimation of Squared-Loss Mutual Information with Multiplicative Kernel Models

    Tomoya SAKAI  Masashi SUGIYAMA  

     
    LETTER-Fundamentals of Information Systems

    Vol: E97-D No:4   Page(s): 968-971

    Squared-loss mutual information (SMI) is a robust measure of the statistical dependence between random variables. The sample-based SMI approximator called least-squares mutual information (LSMI) has been demonstrated to be useful in various machine learning tasks such as dimension reduction, clustering, and causal inference. The original LSMI approximates the pointwise mutual information using a kernel model, a linear combination of kernel basis functions located on paired data samples. Although LSMI was proved to achieve the optimal approximation accuracy asymptotically, its approximation capability is limited when the sample size is small, due to an insufficient number of kernel basis functions. Increasing the number of kernel basis functions can mitigate this weakness, but a naive implementation of this idea significantly increases the computation cost. In this article, we show that the computational complexity of LSMI with the multiplicative kernel model, which locates kernel basis functions on unpaired data samples so that the number of basis functions is the sample size squared, is the same as that for the plain kernel model. We experimentally demonstrate that LSMI with the multiplicative kernel model is more accurate than that with plain kernel models in small-sample cases, with only a mild increase in computation time.
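
    A minimal 1-D Python sketch of the standard plain-kernel LSMI estimator (Gaussian kernels on the n paired samples); the bandwidth and regularizer are arbitrary here, and this is the baseline model, not the multiplicative-kernel variant the article studies:

      import numpy as np

      def lsmi(x, y, sigma=1.0, lam=1e-3):
          n = len(x)
          Kx = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
          Ky = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))
          H = (Kx.T @ Kx) * (Ky.T @ Ky) / n ** 2   # E over p(x)p(y) of phi phi^T
          h = np.mean(Kx * Ky, axis=0)             # E over p(x,y) of phi
          theta = np.linalg.solve(H + lam * np.eye(n), h)
          return 0.5 * h @ theta - 0.5             # SMI estimate

      rng = np.random.default_rng(0)
      x = rng.standard_normal(300)
      print(lsmi(x, x + 0.1 * rng.standard_normal(300)))  # dependent: > 0
      print(lsmi(x, rng.standard_normal(300)))            # independent: ~ 0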

  • Cross-Correlation between a p-Ary m-Sequence and Its All Decimated Sequences for $d=\frac{(p^{m}+1)(p^{m}+p-1)}{p+1}$

    Yongbo XIA  Shaoping CHEN  Tor HELLESETH  Chunlei LI  

     
    PAPER-Information Theory

    Vol: E97-A No:4   Page(s): 964-969

    Let m ≥ 3 be an odd positive integer, n=2m, and p an odd prime. For the decimation factor $d=\frac{(p^{m}+1)(p^{m}+p-1)}{p+1}$, the cross-correlation between the p-ary m-sequence $\{tr_1^n(\alpha^t)\}$ and all of its decimated sequences $\{tr_1^n(\alpha^{dt+l})\}$ is investigated, where $0 \le l < \gcd(d, p^n-1)$ and α is a primitive element of $\mathbb{F}_{p^n}$. It is shown that the cross-correlation function takes values in $\{-1,\ -1\pm ip^{m} \mid i=1,2,\ldots,p\}$. The result presented in this paper settles a conjecture proposed by Kim et al. in the 2012 IEEE International Symposium on Information Theory proceedings (pp.1014-1018), and also improves their result.
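
    The quantity studied is the standard cross-correlation $C_d(l)=\sum_t \omega^{s(t)-s(dt+l)}$ with $\omega=e^{2\pi i/p}$, which is easy to evaluate directly; the toy sequence in the Python sketch below is not an m-sequence and only exercises the definition:

      import cmath

      def cross_correlation(s, d, l, p):
          N = len(s)
          w = cmath.exp(2j * cmath.pi / p)
          return sum(w ** (s[t] - s[(d * t + l) % N]) for t in range(N))

      s = [0, 1, 2, 1, 0, 2, 2, 1]  # a toy 3-ary sequence of period 8
      print(cross_correlation(s, d=3, l=1, p=3))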

  • QoS Analysis for Service Composition by Human and Web Services Open Access

    Donghui LIN  Toru ISHIDA  Yohei MURAKAMI  Masahiro TANAKA  

     
    PAPER

    Vol: E97-D No:4   Page(s): 762-769

    The availability of more and more Web services provides great variety for users designing service processes. However, there are situations where services or service processes cannot meet users' requirements in functional QoS dimensions (e.g., translation quality in a machine translation service). In such cases, composing Web services and human tasks is expected to be a possible alternative solution. However, analyses of such practical efforts have rarely been reported in previous research, most of which focuses on the technology of embedding human tasks in software environments. Therefore, this study analyzes the effects of composing Web services and human activities, using a case study in the language service domain with large-scale experiments. From the experiments and analysis, we find that (1) service implementation variety can be greatly increased by composing Web services and human activities to satisfy users' QoS requirements; (2) the functional QoS of a Web service can be significantly improved by introducing human activities with limited cost and execution time, provided the human activities are of a certain quality; and (3) multiple QoS attributes of a composite service are affected in different ways by different qualities of human activities.

  • Multimode Image Clustering Using Optimal Image Descriptor Open Access

    Nasir AHMED  Abdul JALIL  

     
    PAPER

    Vol: E97-D No:4   Page(s): 743-751

    Manifold-learning-based image clustering models are usually employed at the local level to deal with images sampled from a nonlinear manifold. Multimode patterns in image data matrices can vary from nominal to significant due to images with different expression, pose, illumination, or occlusion variations. We show that manifold-learning-based image clustering models are unable to achieve well-separated images at the local level for image datasets with significant multimode data patterns, because the gray-level image features used in these clustering models cannot capture the local neighborhood structure effectively for multimode image datasets. In this study, we use a criterion based on a nearest-neighborhood-quality (NNQ) measure to improve the local neighborhood structure in terms of correct nearest neighbors of images. We found Gist to be the optimal image descriptor among the HOG, Gist, SUN, SURF, and TED image descriptors, based on the overall maximum NNQ measure on 10 benchmark image datasets. We observed significant performance improvements for recently reported clustering models such as Spectral Embedded Clustering (SEC) and Nonnegative Spectral Clustering with Discriminative Regularization (NSDR) using the proposed approach. Experimentally, overall performance improvements of 10.5% (clustering accuracy) and 9.2% (normalized mutual information) on 13 benchmark image datasets are observed for the SEC and NSDR clustering models. Further, the overall computational cost of the SEC model is reduced to 19%, and clustering performance for challenging outdoor natural image databases is significantly improved by using the proposed NNQ-measure-based optimal image representations.
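
    The paper's exact NNQ definition is not reproduced here; one natural instantiation scores, for each image, the fraction of its k nearest neighbors (in the chosen descriptor space) that share its class, sketched in Python:

      import numpy as np

      def neighborhood_quality(features, labels, k=5):
          X, y = np.asarray(features, float), np.asarray(labels)
          d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
          np.fill_diagonal(d, np.inf)              # exclude self
          nn = np.argsort(d, axis=1)[:, :k]        # k nearest neighbors
          return float(np.mean(y[nn] == y[:, None]))

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(3, 1, (50, 16))])
      y = np.array([0] * 50 + [1] * 50)
      print(neighborhood_quality(X, y))  # near 1 for well-separated clusters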

  • Probabilistic Frequent Itemset Mining on a GPU Cluster Open Access

    Yusuke KOZAWA  Toshiyuki AMAGASA  Hiroyuki KITAGAWA  

     
    PAPER

    Vol: E97-D No:4   Page(s): 779-789

    Probabilistic frequent itemset mining, which discovers frequent itemsets from uncertain data, has attracted much attention due to inherent uncertainty in the real world. Many algorithms have been proposed to tackle this problem, but their performance is not satisfactory because handling uncertainty incurs high processing cost. To accelerate such computation, we utilize GPUs (Graphics Processing Units). Our previous work accelerated an existing algorithm with a single GPU. In this paper, we extend the work to employ multiple GPUs. Proposed methods minimize the amount of data that need to be communicated among GPUs, and achieve load balancing as well. Based on the methods, we also present algorithms on a GPU cluster. Experiments show that the single-node methods realize near-linear speedups, and the methods on a GPU cluster of eight nodes achieve up to a 7.1 times speedup.
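
    The expensive kernel such methods accelerate is the frequentness probability: if transaction i contains an itemset independently with probability probs[i], the support follows a Poisson binomial distribution, and P(support >= minsup) follows from a standard O(n x minsup) dynamic program. A CPU sketch in Python of the computation the GPUs parallelize:

      def frequentness_probability(probs, minsup):
          # dp[j] = P(support == j); dp[minsup] absorbs P(support >= minsup)
          dp = [1.0] + [0.0] * minsup
          for p in probs:
              dp[minsup] += dp[minsup - 1] * p
              for j in range(minsup - 1, 0, -1):
                  dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
              dp[0] *= 1 - p
          return dp[minsup]

      print(frequentness_probability([0.9, 0.8, 0.5, 0.4, 0.9, 0.3], minsup=3))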

  • No-Reference Quality Metric of Blocking Artifacts Based on Color Discontinuity Analysis

    Leida LI  Hancheng ZHU  Jiansheng QIAN  Jeng-Shyang PAN  

     
    LETTER-Image Processing and Video Processing

    Vol: E97-D No:4   Page(s): 993-997

    This letter presents a no-reference blocking artifact measure based on an analysis of color discontinuities in the YUV color space. Color shift and color disappearance are first analyzed in JPEG images. For color-shifting and color-disappearing areas, blocking artifact scores are obtained by computing the gradient differences across block boundaries in the U component and the Y component, respectively. An overall quality score is then produced as the average of the local ones. Extensive simulations and comparisons demonstrate the efficiency of the proposed method.
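
    A simplified Python stand-in for the boundary-gradient idea on one channel (JPEG's 8x8 grid); the normalization by interior gradients is an assumption of this sketch, not the letter's exact formula:

      import numpy as np

      def boundary_gradient_score(channel, block=8):
          # ratio of gradients straddling block edges to gradients elsewhere
          g = np.abs(np.diff(channel.astype(float), axis=1))
          cols = np.arange(g.shape[1])
          at_edge = (cols % block) == block - 1
          return g[:, at_edge].mean() / (g[:, ~at_edge].mean() + 1e-9)

      img = (np.arange(64) // 8)[None, :] * 5.0 + np.zeros((64, 64))
      img += np.random.default_rng(0).normal(0, 0.1, img.shape)
      print(boundary_gradient_score(img))  # large value: strong blockiness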

Results 1521-1540 of 6809