
Keyword Search Result

[Keyword] (42807hit)

6041-6060hit(42807hit)

  • A Study on Video Generation Based on High-Density Temporal Sampling

    Yukihiro BANDOH  Seishi TAKAMURA  Atsushi SHIMIZU  

     
    LETTER

      Publicized:
    2017/06/14
      Vol:
    E100-D No:9
      Page(s):
    2044-2047

    In current video encoding systems, the acquisition process is independent of the encoding process. To compensate for this independence, pre-filters are applied before the encoder. However, conventional pre-filters are designed under constraints on the temporal resolution, so they are not sufficiently optimized in terms of coding efficiency. By relaxing the restriction on the temporal resolution of current video encoding systems, it becomes possible to generate a video signal better suited to the encoding process. This paper proposes a video generation method with an adaptive temporal filter that exploits a temporally over-sampled signal. The filter is designed using dynamic programming. Experimental results show that the proposed method reduces the encoding rate by 3.01% on average compared with a constant mean filter.
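
    As a rough, self-contained illustration of the idea (not the authors' dynamic-programming design), the sketch below generates encoder-input frames from a temporally over-sampled clip, comparing the constant mean filter with a toy adaptive variant that picks, per output frame, the candidate temporal kernel minimizing a crude temporal-prediction-residual proxy. The kernels, cost function, and data are illustrative assumptions.

    ```python
    import numpy as np

    def constant_mean_filter(oversampled, factor):
        """Baseline: average every `factor` consecutive captured frames into one output frame."""
        t, h, w = oversampled.shape
        grouped = oversampled[: t - t % factor].reshape(-1, factor, h, w)
        return grouped.mean(axis=1)

    def adaptive_temporal_filter(oversampled, factor,
                                 candidates=((1.0,), (0.5, 0.5), (0.25, 0.5, 0.25))):
        """Toy adaptive variant: per output frame, pick the candidate temporal kernel whose
        result differs least from the previous output frame (a crude proxy for the temporal
        prediction residual an encoder would have to code). Not the paper's DP design."""
        baseline = constant_mean_filter(oversampled, factor)
        out = [baseline[0]]
        for i in range(1, len(baseline)):
            group = oversampled[i * factor:(i + 1) * factor]
            best = None
            for kern in candidates:
                k = np.asarray(kern, dtype=float)
                k /= k.sum()
                frame = np.tensordot(k, group[: len(k)], axes=(0, 0))
                cost = np.mean((frame - out[-1]) ** 2)
                if best is None or cost < best[0]:
                    best = (cost, frame)
            out.append(best[1])
        return np.stack(out)

    # Synthetic clip captured at 4x the target frame rate (40 noisy 16x16 frames -> 10 frames).
    rng = np.random.default_rng(0)
    clip = rng.normal(size=(40, 16, 16)) + np.linspace(0, 1, 40)[:, None, None]
    print(adaptive_temporal_filter(clip, factor=4).shape)  # (10, 16, 16)
    ```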

  • Spectral Distribution of Wigner Matrices in Finite Dimensions and Its Application to LPI Performance Evaluation of Radar Waveforms

    Jun CHEN  Fei WANG  Jianjiang ZHOU  Chenguang SHI  

     
    LETTER-Digital Signal Processing

      Vol:
    E100-A No:9
      Page(s):
    2021-2025

    Recent research on assessing low probability of interception (LPI) radar waveforms is mainly based on the limiting spectral properties of Wigner matrices. Since the dimension of actual operating data is constrained by the sampling frequency, it is necessary to study the finite-dimensional theory of Wigner matrices. This paper derives a closed-form expression of the spectral cumulative distribution function (CDF) for Wigner matrices of finite size. The expression involves no derivatives or integrals and can therefore be computed easily. We then apply it to quantifying the LPI performance of radar waveforms, using the Kullback-Leibler divergence (KLD) in the quantification process. Simulation results show that the proposed LPI metric, which accounts for the finite sample size and the signal-to-noise ratio, is more effective and practical.
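
    The paper's closed-form finite-size CDF is not reproduced here, but the following Monte Carlo sketch (NumPy assumed) illustrates the underlying point: for small matrix dimensions the empirical spectral distribution of a Wigner matrix deviates measurably from the limiting semicircle law, and the gap can be quantified with the Kullback-Leibler divergence.

    ```python
    import numpy as np

    def wigner_eigs(n, rng):
        """Eigenvalues of an n x n real symmetric Wigner matrix, scaled by 1/sqrt(n)."""
        a = rng.normal(size=(n, n))
        return np.linalg.eigvalsh((a + a.T) / np.sqrt(2.0)) / np.sqrt(n)

    def semicircle_pdf(x):
        """Limiting (n -> infinity) spectral density of the Wigner ensemble (radius 2)."""
        return np.where(np.abs(x) <= 2.0,
                        np.sqrt(np.maximum(4.0 - x**2, 0.0)) / (2.0 * np.pi), 0.0)

    def kld(p, q, eps=1e-12):
        """Kullback-Leibler divergence D(p || q) between two discrete distributions."""
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p /= p.sum()
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)))

    rng = np.random.default_rng(1)
    edges = np.linspace(-2.5, 2.5, 41)
    centers = 0.5 * (edges[:-1] + edges[1:])
    reference = semicircle_pdf(centers)
    for n in (8, 32, 128):
        eigs = np.concatenate([wigner_eigs(n, rng) for _ in range(200)])
        hist, _ = np.histogram(eigs, bins=edges)
        print(f"n={n:4d}  KLD(empirical || semicircle) = {kld(hist, reference):.4f}")
    ```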

  • A Formal Model to Enforce Trustworthiness Requirements in Service Composition

    Ning FU  Yingfeng ZHANG  Lijun SHAN  Zhiqiang LIU  Han PENG  

     
    PAPER-Software System

      Publicized:
    2017/06/20
      Vol:
    E100-D No:9
      Page(s):
    2056-2067

    With the in-depth development of service computing, it has become clear that when constructing service applications in an open, dynamic network environment, greater attention must be paid to trustworthiness, on the premise that the required functions are realized. Trustworthy computing requires theories for business process modeling that cover both behavior and trustworthiness. In this paper, a calculus for ensuring the satisfaction of trustworthiness requirements in service-oriented systems is proposed. We investigate a calculus called QPi for representing both the behavior and the trustworthiness properties of concurrent systems. QPi combines the pi-calculus with a constraint semiring, which is advantageous when problems with multi-dimensional properties must be tackled. The notion of quantified bisimulation of processes provides a measure of the degree of equivalence of processes based on the bisimulation distance. Properties of bisimulation and bisimilarity related to QPi are also discussed. A specific modeling example illustrates the effectiveness of the algebraic method.

  • Imperceptible On-Screen Markers for Mobile Interaction on Public Large Displays

    Goshiro YAMAMOTO  Luiz SAMPAIO  Takafumi TAKETOMI  Christian SANDOR  Hirokazu KATO  Tomohiro KURODA  

     
    PAPER

      Publicized:
    2017/06/14
      Vol:
    E100-D No:9
      Page(s):
    2027-2036

    We present a novel method that enables users to experience mobile interaction with digital content on external displays by embedding markers imperceptibly on the screen. Our method consists of two parts: marker embedding on external displays and marker detection. To embed markers, similar to previous work, we display complementary colors in alternating frames; the colors are selected in the L*a*b* color space to make the markers harder for humans to perceive. Our marker detection process does not require mobile devices to be synchronized with the display, although certain constraints on the relation between the camera and display update rates must be fulfilled. We conducted three experiments. The results show that 1) selecting complementary colors in the a*b* color plane maximizes imperceptibility, 2) our method is extremely robust with static content and can handle animated content up to certain optical-flow levels, and 3) our method works well for small movements, although large movements can lead to loss of tracking.
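
    A minimal sketch of the complementary-color embedding step, assuming scikit-image for the RGB/L*a*b* conversions; the shift direction and magnitude are placeholders, and the actual marker pattern and detection pipeline of the paper are not reproduced.

    ```python
    import numpy as np
    from skimage.color import rgb2lab, lab2rgb  # assumes scikit-image is installed

    def complementary_frames(rgb, delta=6.0):
        """Split an RGB frame (floats in [0, 1]) into two frames shifted by +/-delta along a
        fixed direction in the a*b* plane. Averaged over two consecutive frames the shifts
        roughly cancel, so the pattern is hard to perceive, while a camera that samples
        individual frames can recover the difference."""
        lab = rgb2lab(rgb)
        shift = np.zeros_like(lab)
        shift[..., 1] = delta    # a* channel
        shift[..., 2] = -delta   # b* channel
        lo = [0.0, -128.0, -128.0]
        hi = [100.0, 127.0, 127.0]
        frame_plus = lab2rgb(np.clip(lab + shift, lo, hi))
        frame_minus = lab2rgb(np.clip(lab - shift, lo, hi))
        return frame_plus, frame_minus

    img = np.full((4, 4, 3), 0.5)               # a flat gray test frame
    f_plus, f_minus = complementary_frames(img)
    print(np.abs((f_plus + f_minus) / 2 - img).max())  # temporal average stays close to the original
    ```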

  • Image Restoration of JPEG Encoded Images via Block Matching and Wiener Filtering

    Yutaka TAKAGI  Takanori FUJISAWA  Masaaki IKEHARA  

     
    PAPER-Image

      Vol:
    E100-A No:9
      Page(s):
    1993-2000

    In this paper, we propose a method for removing the block noise that appears in JPEG (Joint Photographic Experts Group) encoded images. We iteratively perform 3D Wiener filtering and correction of the coefficients. In the Wiener filtering, we perform block matching for each patch to collect patches that are highly similar to the reference patch. After Wiener filtering, the collected patches are returned to their original locations and aggregated. We compare the performance of the proposed method with several conventional methods and show that the proposed method achieves excellent performance.
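
    A simplified sketch of one block-matching plus transform-domain Wiener-shrinkage step (SciPy assumed); the iterative coefficient correction and the final weighted aggregation are omitted, and the patch size, search range, and noise variance are illustrative values rather than the paper's settings.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn  # assumes SciPy is available

    def block_match(img, ref_yx, patch=8, search=16, n_best=8):
        """Return the top-left corners of the n_best patches (by SSD) around a reference patch."""
        y0, x0 = ref_yx
        ref = img[y0:y0 + patch, x0:x0 + patch]
        candidates = []
        for y in range(max(0, y0 - search), min(img.shape[0] - patch, y0 + search) + 1):
            for x in range(max(0, x0 - search), min(img.shape[1] - patch, x0 + search) + 1):
                ssd = np.sum((img[y:y + patch, x:x + patch] - ref) ** 2)
                candidates.append((ssd, y, x))
        candidates.sort(key=lambda c: c[0])
        return [(y, x) for _, y, x in candidates[:n_best]]

    def wiener_shrink_group(img, locations, patch=8, noise_var=25.0):
        """Stack the matched patches and apply empirical Wiener shrinkage in the 3D DCT domain."""
        group = np.stack([img[y:y + patch, x:x + patch] for y, x in locations]).astype(float)
        coeffs = dctn(group, norm="ortho")
        gain = coeffs**2 / (coeffs**2 + noise_var)   # per-coefficient empirical Wiener gain
        return idctn(coeffs * gain, norm="ortho"), locations

    rng = np.random.default_rng(0)
    img = np.tile(np.linspace(0, 255, 64), (64, 1)) + rng.normal(0, 5, (64, 64))
    filtered, locs = wiener_shrink_group(img, block_match(img, (20, 20)))
    print(filtered.shape)  # (8, 8, 8): eight filtered 8x8 patches, ready to be put back and aggregated
    ```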

  • Estimation of Dense Displacement by Scale Invariant Polynomial Expansion of Heterogeneous Multi-View Images

    Kazuki SHIBATA  Mehrdad PANAHPOUR TEHERANI  Keita TAKAHASHI  Toshiaki FUJII  

     
    LETTER

      Publicized:
    2017/06/14
      Vol:
    E100-D No:9
      Page(s):
    2048-2051

    Several applications for 3-D visualization require dense correspondence detection for displacement estimation among heterogeneous multi-view images. Due to differences in resolution (sampling density) and field of view among the images, estimating dense displacement is not straightforward. Therefore, we propose a scale-invariant polynomial expansion method that can estimate dense displacement between two heterogeneous views. Evaluation on heterogeneous images verifies the accuracy of our approach.
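
    As a point of reference, the snippet below runs OpenCV's standard polynomial-expansion optical flow (Farnebäck), i.e. the non-scale-invariant baseline that the proposed method generalizes; handling heterogeneous resolutions and fields of view is exactly what this baseline does not do, so the two views here share the same scale.

    ```python
    import numpy as np
    import cv2  # assumes OpenCV (opencv-python) is installed

    # Two synthetic grayscale views: the second is the first shifted by a few pixels.
    rng = np.random.default_rng(0)
    view_a = (rng.random((128, 128)) * 255).astype(np.uint8)
    view_a = cv2.GaussianBlur(view_a, (9, 9), 2.0)
    view_b = np.roll(view_a, shift=(3, 5), axis=(0, 1))

    # Dense per-pixel displacement field (dx, dy) from polynomial-expansion optical flow.
    flow = cv2.calcOpticalFlowFarneback(view_a, view_b, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
    print(f"estimated mean displacement: dx={flow[..., 0].mean():.1f}, "
          f"dy={flow[..., 1].mean():.1f} (ground truth: 5, 3)")
    ```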

  • Articulatory Modeling for Pronunciation Error Detection without Non-Native Training Data Based on DNN Transfer Learning

    Richeng DUAN  Tatsuya KAWAHARA  Masatake DANTSUJI  Jinsong ZHANG  

     
    PAPER-Speech and Hearing

      Publicized:
    2017/05/26
      Vol:
    E100-D No:9
      Page(s):
    2174-2182

    Aiming at detecting pronunciation errors produced by second-language learners and providing corrective feedback related to articulation, we develop effective articulatory models based on deep neural networks (DNN). Articulatory attributes are defined for the manner and place of articulation. Since non-native speech data are difficult to collect on a large scale, several transfer-learning-based modeling methods are explored to train these models efficiently without such data. We first investigate three closely related secondary tasks aimed at effective learning of DNN articulatory models. We also propose to exploit large speech corpora of the native and target languages to model inter-language phenomena. This kind of transfer learning provides a better feature representation of non-native speech. Related-task transfer and language transfer learning are further combined at the network level. Compared with the conventional DNN used as the baseline, all proposed methods improved the performance. In the native attribute recognition task, the network-level combination method reduced the recognition error rate by more than 10% relative for all articulatory attributes. The method was also applied to pronunciation error detection in Mandarin Chinese pronunciation learning by native Japanese speakers, and achieved relative improvements of up to 17.0% in detection accuracy and up to 19.9% in F-score, which is also better than the lattice-based combination.
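
    A minimal transfer-learning sketch in PyTorch, only to illustrate the general recipe of reusing pretrained feature layers and training a new articulatory-attribute head; the layer sizes, number of attribute classes, and freezing strategy are hypothetical and do not reflect the paper's network-level combination of related-task and language transfer.

    ```python
    import torch
    import torch.nn as nn

    # Shared acoustic feature extractor, assumed to be pretrained on large native-speech corpora.
    feature_net = nn.Sequential(
        nn.Linear(40, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
    )

    # New output head for one articulatory attribute group (7 place-of-articulation classes is
    # a hypothetical number, not taken from the paper).
    place_head = nn.Linear(256, 7)

    # Transfer: reuse (and here simply freeze) the pretrained layers, train only the new head.
    for p in feature_net.parameters():
        p.requires_grad = False
    model = nn.Sequential(feature_net, place_head)

    optimizer = torch.optim.Adam(place_head.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    frames = torch.randn(32, 40)            # a batch of 40-dim acoustic frames (stand-in data)
    labels = torch.randint(0, 7, (32,))     # stand-in attribute labels
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```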

  • Centralized Contention Based MAC for OFDMA WLAN

    Gunhee LEE  Cheeha KIM  

     
    LETTER-Information Network

      Publicized:
    2017/06/06
      Vol:
    E100-D No:9
      Page(s):
    2219-2223

    The IEEE 802.11 wireless local area network (WLAN) is the most widely deployed communication standard in the world. Currently, the IEEE 802.11ax draft standard is one of the most advanced and promising among future wireless network standards. However, the suggested uplink-OFDMA (UL-OFDMA) random access method, based on trigger frame-random access (TF-R) from task group ax (TGax), does not yet provide satisfactory system performance. To enhance the UL-OFDMA capability of the IEEE 802.11ax draft standard, we propose a centralized contention-based MAC (CC-MAC) and describe its operation in detail. In this paper, we analyze the performance of CC-MAC by solving its Markov chain model and evaluate its BSS throughput against other methods, such as DCF and TF-R, by computer simulation. Our results show that CC-MAC is a scalable and efficient scheme for improving system performance in UL-OFDMA random access situations in IEEE 802.11ax.
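
    The sketch below is a generic Monte Carlo model of trigger-frame-style UL-OFDMA random access (each station independently picks one resource unit; an RU is useful only if exactly one station picked it). It is not CC-MAC or the paper's Markov chain analysis, but it shows the kind of RU-utilization behavior such schemes aim to improve.

    ```python
    import random

    def ofdma_ra_utilization(n_stations, n_rus, n_rounds=10000, seed=0):
        """Fraction of resource units (RUs) carrying exactly one transmission per round,
        when every contending station independently picks one RU at random. RUs with a
        collision or with no transmission carry no payload."""
        rng = random.Random(seed)
        useful = 0
        for _ in range(n_rounds):
            picks = [rng.randrange(n_rus) for _ in range(n_stations)]
            useful += sum(1 for ru in range(n_rus) if picks.count(ru) == 1)
        return useful / (n_rounds * n_rus)

    for n in (4, 9, 18, 36):
        print(f"{n:2d} stations, 9 RUs: RU utilization = {ofdma_ra_utilization(n, 9):.2f}")
    ```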

  • A Hybrid Approach via SRG and IDE for Volume Segmentation

    Li WANG  Xiaoan TANG  Junda ZHANG  Dongdong GUAN  

     
    LETTER-Computer Graphics

      Publicized:
    2017/06/09
      Vol:
    E100-D No:9
      Page(s):
    2257-2260

    Volume segmentation is of great significance for feature visualization and feature extraction; essentially, volume segmentation can be viewed as generalized clustering. This paper proposes a hybrid approach to volume segmentation via symmetric region growing (SRG) and information diffusion estimation (IDE): the volume dataset is first over-segmented into a series of subsets by SRG, and the subsets are then clustered by K-means using a distance metric derived from IDE. Experiments illustrate the superiority of the hybrid approach, which achieves better segmentation performance.
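
    A toy sketch of the over-segment-then-cluster pipeline: region-level feature vectors (standing in for SRG subsets) are grouped by a plain K-means, with Euclidean distance standing in for the IDE-derived metric. SRG and IDE themselves are not implemented here; all data and features are synthetic.

    ```python
    import numpy as np

    def kmeans(features, k, iters=50, seed=0):
        """Plain K-means on per-region feature vectors; Euclidean distance stands in for
        the IDE-derived metric of the paper."""
        rng = np.random.default_rng(seed)
        centers = features[rng.choice(len(features), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.stack([features[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels

    # Stand-in for an SRG over-segmentation: 60 small regions, each summarized by a feature
    # vector (here: mean intensity and mean gradient magnitude).
    rng = np.random.default_rng(1)
    region_features = np.concatenate([
        rng.normal([0.2, 0.1], 0.05, (20, 2)),   # background-like regions
        rng.normal([0.7, 0.3], 0.05, (20, 2)),   # one structure
        rng.normal([0.5, 0.8], 0.05, (20, 2)),   # another structure
    ])
    print(np.bincount(kmeans(region_features, k=3)))  # roughly 20 regions per cluster
    ```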

  • A Polynomial Time Pattern Matching Algorithm on Graph Patterns of Bounded Treewidth

    Takayoshi SHOUDAI  Takashi YAMADA  

     
    PAPER

      Vol:
    E100-A No:9
      Page(s):
    1764-1772

    This paper deals with the problem of deciding whether a given graph structure appears as a pattern within the structure of a given graph. A graph pattern is a triple p=(V,E,H), where (V,E) is a graph and H is a set of variables, which are ordered lists of vertices in V. A variable can be replaced with an arbitrary connected graph by a kind of hyperedge replacement. A substitution is a collection of such replacements. The graph pattern matching problem (GPMP) is the computational problem of deciding whether or not a given graph G is obtained from a given graph pattern p by a substitution. In this paper, we show that GPMP for a graph pattern p and a graph G is solvable in polynomial time if the length of every variable in p is 2, p is of bounded treewidth, and G is connected.

  • Group Signature with Deniability: How to Disavow a Signature

    Ai ISHIDA  Keita EMURA  Goichiro HANAOKA  Yusuke SAKAI  Keisuke TANAKA  

     
    PAPER

      Vol:
    E100-A No:9
      Page(s):
    1825-1837

    Group signatures are a class of digital signatures with enhanced privacy. By using this type of signature, a user can sign a message on behalf of a specific group without revealing his identity, but in the case of a dispute, an authority can expose the identity of the signer. However, we do not always need to know the specific identity of the signer of a signature. In this paper, we propose the notion of deniable group signatures, where the authority can issue a proof showing that a specified user is NOT the signer of a signature, without revealing the actual signer. We point out that existing efficient non-interactive zero-knowledge proof systems cannot be straightforwardly applied to prove such a statement. We circumvent this problem by giving a fairly practical construction through extending the Groth group signature scheme (ASIACRYPT 2007). In particular, a denial proof in our scheme consists of 96 group elements, which is about twice the size of a signature in the Groth scheme. The proposed scheme is provably secure under the same assumptions as those of the Groth scheme.

  • Packed Compact Tries: A Fast and Efficient Data Structure for Online String Processing

    Takuya TAKAGI  Shunsuke INENAGA  Kunihiko SADAKANE  Hiroki ARIMURA  

     
    PAPER

      Vol:
    E100-A No:9
      Page(s):
    1785-1793

    We present a new data structure called the packed compact trie (packed c-trie) which stores a set S of k strings of total length n in n log σ + O(k log n) bits of space and supports fast pattern matching queries and updates, where σ is the alphabet size. Assume that α = log_σ n letters are packed in a single machine word on the standard word RAM model, and let f(k,n) denote the query and update times of the dynamic predecessor/successor data structure of our choice which stores k integers from universe [1,n] in O(k log n) bits of space. Then, given a string of length m, our packed c-tries support pattern matching queries and insert/delete operations in $O(\frac{m}{\alpha} f(k,n))$ worst-case time and in $O(\frac{m}{\alpha} + f(k,n))$ expected time. Our experiments show that our packed c-tries are faster than the standard compact tries (a.k.a. Patricia trees) on real data sets. We also discuss applications of our packed c-tries.
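
    For orientation, here is a plain compact trie (the Patricia-tree baseline the packed c-trie is compared against) with insert and exact-match lookup; word-level packing, the predecessor/successor structure, and the stated time bounds are not reflected in this sketch.

    ```python
    class Node:
        __slots__ = ("children", "terminal")
        def __init__(self):
            self.children = {}   # first character of the edge label -> (edge label, child)
            self.terminal = False

    class CompactTrie:
        """Standard compact trie (Patricia tree): unary paths are merged into labeled edges,
        so the number of nodes is O(k) for k stored strings."""
        def __init__(self):
            self.root = Node()

        def insert(self, s):
            node = self.root
            while True:
                if not s:
                    node.terminal = True
                    return
                if s[0] not in node.children:
                    leaf = Node()
                    leaf.terminal = True
                    node.children[s[0]] = (s, leaf)
                    return
                label, child = node.children[s[0]]
                p = 0
                while p < len(label) and p < len(s) and label[p] == s[p]:
                    p += 1
                if p == len(label):              # the whole edge matches: descend
                    node, s = child, s[p:]
                    continue
                mid = Node()                     # split the edge at the divergence point
                mid.children[label[p]] = (label[p:], child)
                node.children[s[0]] = (label[:p], mid)
                node, s = mid, s[p:]

        def contains(self, s):
            node = self.root
            while s:
                if s[0] not in node.children:
                    return False
                label, child = node.children[s[0]]
                if not s.startswith(label):
                    return False
                node, s = child, s[len(label):]
            return node.terminal

    trie = CompactTrie()
    for w in ["trie", "tree", "packed", "pack"]:
        trie.insert(w)
    print(trie.contains("pack"), trie.contains("packed"), trie.contains("pac"))  # True True False
    ```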

  • Constructing Subspace Membership Encryption through Inner Product Encryption

    Shuichi KATSUMATA  Noboru KUNIHIRO  

     
    PAPER

      Vol:
    E100-A No:9
      Page(s):
    1804-1815

    Subspace membership encryption (SME), a generalization of inner product encryption (IPE), was recently formalized by Boneh, Raghunathan, and Segev at Asiacrypt 2013. The main motivation for SME was that traditional predicate encryption did not yield function privacy, a security notion introduced by Boneh et al. at Crypto 2013 that captures the privacy of the predicate associated with the secret key. Although they gave a generic construction of SME based on any IPE, we show that their construction of SME for small attribute spaces was incorrect and provide an attack that breaks attribute hiding security, a baseline security notion for predicate encryption that captures the privacy of the attribute associated with the ciphertext. We then propose a generalized construction of SME and prove that attribute hiding security cannot be achieved even in the newly defined setting. Finally, we further extend our generalized construction of SME and propose an SME scheme that achieves the attribute hiding property even when the attribute space is small. In exchange, our proposed scheme does not yield function privacy and the construction is rather inefficient. Although we did not succeed in constructing an SME scheme that yields both function privacy and attribute hiding security, ours is the first attribute hiding SME scheme whose attribute space is polynomial in the security parameter, and we formalize a richer framework for constructing SMEs and discover a trade-off-like relationship between the two security notions.

  • Automatic Optic Disc Boundary Extraction Based on Saliency Object Detection and Modified Local Intensity Clustering Model in Retinal Images

    Wei ZHOU  Chengdong WU  Yuan GAO  Xiaosheng YU  

     
    LETTER-Image

      Vol:
    E100-A No:9
      Page(s):
    2069-2072

    Accurate optic disc localization and segmentation are two main steps in designing automated screening systems for diabetic retinopathy. In this paper, a novel optic disc detection approach based on saliency object detection and a modified local intensity clustering model is proposed. It consists of two stages: in the first stage, a saliency detection technique is applied to the enhanced retinal image to locate the optic disc. In the second stage, the optic disc boundary is extracted by the modified Local Intensity Clustering (LIC) model with an oval-shaped constraint. The performance of the proposed approach is tested on the public DIARETDB1 database. Compared with state-of-the-art approaches, the experimental results show the advantages and effectiveness of the proposed approach.

  • Partially Wildcarded Ciphertext-Policy Attribute-Based Encryption and Its Performance Evaluation

    Go OHTAKE  Kazuto OGAWA  Goichiro HANAOKA  Shota YAMADA  Kohei KASAMATSU  Takashi YAMAKAWA  Hideki IMAI  

     
    PAPER

      Vol:
    E100-A No:9
      Page(s):
    1846-1856

    Attribute-based encryption (ABE) enables flexible data access control based on attributes and policies. In ciphertext-policy ABE (CP-ABE), a secret key is associated with a set of attributes and a policy is associated with a ciphertext. If the set of attributes satisfies the policy, the ciphertext can be decrypted. CP-ABE can be applied to a variety of services such as access control for file sharing systems and content distribution services. However, a CP-ABE scheme usually has higher encryption and decryption costs than conventional public-key encryption schemes due to its flexible policy setting. In particular, wildcards, which indicate that certain attributes are irrelevant to the ciphertext policy, are not essential for some services. In this paper, we propose a partially wildcarded CP-ABE scheme with lower encryption and decryption costs. In our scheme, the user's attributes are separated into those requiring wildcards and those not requiring wildcards. Our scheme combines a CP-ABE scheme with wildcard functionality and an efficient CP-ABE scheme without wildcard functionality. We show that our scheme is provably secure under the DBDH assumption. Then, we compare our scheme with conventional CP-ABE schemes and describe a content distribution service as an application of our scheme. We also implement our scheme on a PC and measure the processing time. The results show that our scheme reduces all of the costs of key generation, encryption, and decryption.

  • Provably Secure Structured Signature Schemes with Tighter Reductions

    Naoto YANAI  Tomoya IWASAKI  Masaki INAMURA  Keiichi IWAMURA  

     
    PAPER

      Vol:
    E100-A No:9
      Page(s):
    1870-1881

    Structured signatures are digital signatures in which the relationship between signers is guaranteed in addition to the validity of the data generated individually by each signer, and they are expected to be useful for digital rights management. Nevertheless, to the best of our knowledge, no existing scheme has a tight security reduction. Loosely speaking, this means that the security is degraded against an adversary who obtains a large number of signatures. Since content is widely distributed in general, achieving a tighter reduction is desirable. Based on this background, we propose the first structured signature scheme with a tight security reduction in conventional public-key cryptography, and one with a rigorous reduction proof in ID-based cryptography, via our new proof method. Moreover, the security of our schemes can be proven under the CDH assumption, which is among the most standard assumptions. Our schemes are also based on bilinear maps, which can be implemented with well-known cryptographic libraries.

  • Card-Based Protocols Using Regular Polygon Cards

    Kazumasa SHINAGAWA  Takaaki MIZUKI  Jacob C.N. SCHULDT  Koji NUIDA  Naoki KANAYAMA  Takashi NISHIDE  Goichiro HANAOKA  Eiji OKAMOTO  

     
    PAPER

      Vol:
    E100-A No:9
      Page(s):
    1900-1909

    Cryptographic protocols enable participating parties to compute any function of their inputs without leaking any information beyond the output. A card-based protocol is a cryptographic protocol implemented with physical cards. In this paper, to construct protocols with small numbers of shuffles, we introduce a new type of card, the regular polygon card, and a new protocol, oblivious conversion. Using our cards, we construct an addition protocol on non-binary inputs with only one shuffle and two cards. Furthermore, using our oblivious conversion protocol, we construct the first protocol for general functions in which the number of shuffles is linear in the number of inputs.

  • Speech Enhancement with Impact Noise Activity Detection Based on the Kurtosis of an Instantaneous Power Spectrum

    Naoto SASAOKA  Naoya HAMAHASHI  Yoshio ITOH  

     
    PAPER-Digital Signal Processing

      Vol:
    E100-A No:9
      Page(s):
    1942-1950

    In a speech enhancement system targeting impact noise, it is important to detect impact noise activity. However, because impact noise occurs suddenly, it is not always easy to detect. We propose a method for impact noise activity detection based on the kurtosis of an instantaneous power spectrum. The continuous duration of typical impact noise is shorter than that of speech, and the power of such impact noise varies dramatically. Consequently, the distribution of the instantaneous power spectrum of impact noise differs from that of speech. The proposed detector takes advantage of kurtosis, which reflects the sharpness and tails of the distribution. Simulation results show that the proposed noise activity detection improves the performance of the speech enhancement system.
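
    A minimal sketch (SciPy assumed) of computing the per-frame kurtosis of the instantaneous power spectrum, the feature the detector is built on; how the statistic is thresholded and integrated into the enhancement system follows the paper and is not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import stft
    from scipy.stats import kurtosis

    def spectral_kurtosis_per_frame(x, fs, nperseg=256):
        """Kurtosis of the instantaneous power spectrum of each STFT frame."""
        _, times, zxx = stft(x, fs=fs, nperseg=nperseg)
        power = np.abs(zxx) ** 2             # instantaneous power spectrum, one column per frame
        return times, kurtosis(power, axis=0, fisher=True)

    fs = 16000
    n = np.arange(fs)
    rng = np.random.default_rng(0)
    speech_like = 0.3 * np.sin(2 * np.pi * 150 * n / fs) * (1 + 0.5 * np.sin(2 * np.pi * 3 * n / fs))
    signal = speech_like + 0.01 * rng.normal(size=fs)
    signal[8000:8032] += np.hanning(32)       # a 2 ms impact-like burst in the middle

    times, k = spectral_kurtosis_per_frame(signal, fs)
    burst_frame = int(np.argmin(np.abs(times - 8000 / fs)))
    print(f"kurtosis at the burst frame: {k[burst_frame]:.1f}, "
          f"median over the other frames: {np.median(np.delete(k, burst_frame)):.1f}")
    ```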

  • On the Key Parameters of the Oscillator-Based Random Source

    Chenyang GUO  Yujie ZHOU  

     
    PAPER-Nonlinear Problems

      Vol:
    E100-A No:9
      Page(s):
    1956-1964

    This paper presents a mathematical model of the oscillator-based true random number generator (TRNG) to study how several key parameters influence the randomness of the output sequence. The output of the model is close enough to that of the real TRNG design that the model can be used to generate random bits in place of analog simulation for research purposes. This takes less time than analog simulation and makes it more convenient for researchers to vary key parameters of the design. The authors give a method to improve the existing oscillator-based TRNG design to cope with possible bias in the key parameters. The design is fabricated in a 55-nm CMOS process.
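
    A toy numerical model of an oscillator-based TRNG, in the spirit of (but not identical to) the paper's model: a slow, jittery clock samples the phase of a fast oscillator, and the jitter-to-fast-period ratio is the key parameter governing output randomness. All parameter values below are illustrative.

    ```python
    import numpy as np

    def oscillator_trng_bits(n_bits, fast_period=1.0, slow_period=103.7, jitter_std=0.8, seed=0):
        """Toy model of an oscillator-based TRNG: a slow, jittery sampling clock latches the
        state of a fast oscillator. The output bit is the fast oscillator's half-period
        (high/low) at each sampling instant; accumulated jitter relative to the fast period
        is the entropy source."""
        rng = np.random.default_rng(seed)
        sample_times = np.cumsum(slow_period + jitter_std * rng.normal(size=n_bits))
        phase = (sample_times % fast_period) / fast_period
        return (phase < 0.5).astype(int)

    bits = oscillator_trng_bits(100000)
    print(f"bias: P(1) = {bits.mean():.3f}, transition rate = {np.mean(bits[1:] != bits[:-1]):.3f}")

    # Shrinking the jitter relative to the fast period degrades randomness (visible structure):
    print(oscillator_trng_bits(20, jitter_std=0.01))
    ```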

  • A Compact Tree Representation of an Antidictionary

    Takahiro OTA  Hiroyoshi MORITA  

     
    PAPER-Information Theory

      Vol:
    E100-A No:9
      Page(s):
    1973-1984

    In both the theoretical analysis and the practical use of an antidictionary coding algorithm, an important problem is how to encode the antidictionary of an input source. This paper proposes a compact tree representation of an antidictionary built from a circular string for an input source. We use a tree-encoding technique from compression via substring enumeration to encode the tree representation of the antidictionary. Moreover, we propose a new two-pass universal antidictionary coding algorithm based on the proposed tree representation. We prove that the proposed algorithm is asymptotically optimal for a stationary ergodic source.
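
    For concreteness, a brute-force construction of an antidictionary (the set of minimal forbidden words) of a short binary string; note it works on an ordinary linear string and does not build the paper's circular-string antidictionary, tree representation, or substring-enumeration-based encoding.

    ```python
    def all_words(alphabet, length):
        """All words of the given length over the alphabet, in lexicographic order."""
        if length == 0:
            yield ""
            return
        for prefix in all_words(alphabet, length - 1):
            for c in alphabet:
                yield prefix + c

    def antidictionary(s, alphabet="01"):
        """Brute-force antidictionary of a (linear) string s: the set of minimal forbidden
        words, i.e. words absent from s whose longest proper prefix and suffix both occur
        in s. Exponential in len(s), so only practical for short demo strings."""
        present = {""} | {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
        result = []
        for length in range(1, len(s) + 1):
            for w in all_words(alphabet, length):
                if w not in present and w[1:] in present and w[:-1] in present:
                    result.append(w)
        return result

    print(antidictionary("0110"))  # ['00', '010', '101', '111']
    ```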
