
Keyword Search Result

[Keyword] ASE(2849hit)

561-580hit(2849hit)

  • History-Pattern Encoding for Large-Scale Dynamic Multidimensional Datasets and Its Evaluations

    Masafumi MAKINO  Tatsuo TSUJI  Ken HIGUCHI  

     
    PAPER

      Publicized:
    2016/01/14
      Vol:
    E99-D No:4
      Page(s):
    989-999

    In this paper, we present a new encoding/decoding method for dynamic multidimensional datasets and its implementation scheme. Our method encodes an n-dimensional tuple into a pair of scalar values even if n is sufficiently large. The method also encodes and decodes tuples using only shift and AND/OR register instructions. One of the most serious problems in multidimensional-array-based tuple encoding is that the size of an encoded result may often exceed the machine word size for large-scale tuple sets. This problem is efficiently resolved in our scheme. We confirmed the advantages of our scheme by analytical and experimental evaluations. The experimental evaluations compared our prototype system with other systems: (1) a system based on a similar encoding scheme called history-offset encoding, and (2) the PostgreSQL RDBMS. In most cases, our system significantly outperformed the other systems in both storage and retrieval costs.
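    The claim that encoding and decoding need only shift and AND/OR instructions can be illustrated with a much simpler fixed-width bit-packing sketch in Python; this is not the paper's history-pattern method (which also handles dynamically growing dimensions and word-size overflow), and the per-dimension bit widths are hypothetical.

        # Minimal fixed-width bit-packing sketch, NOT the paper's history-pattern encoding.
        BITS = [10, 12, 8]                                      # assumed bits per dimension
        SHIFTS = [sum(BITS[i + 1:]) for i in range(len(BITS))]
        MASKS = [(1 << b) - 1 for b in BITS]

        def encode(values):
            """Pack an n-dimensional tuple into one integer using shift/OR only."""
            code = 0
            for value, shift in zip(values, SHIFTS):
                code |= value << shift
            return code

        def decode(code):
            """Recover the tuple using shift/AND only."""
            return tuple((code >> s) & m for s, m in zip(SHIFTS, MASKS))

        assert decode(encode((5, 1000, 200))) == (5, 1000, 200)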

  • Autonomous Decentralized Database System Self Configuration Technology for High Response

    Carlos PEREZ-LEGUIZAMO  

     
    PAPER

      Vol:
    E99-B No:4
      Page(s):
    794-802

    In recent years, society has experienced several changes in how it consumes. The diversity and customization of products and services mean that consumer needs change continuously. Hence, the database systems that support e-business processes are required to be timely and adaptable to changing preferences. The Autonomous Decentralized Database System (ADDS) has been proposed to satisfy the enhanced requirements of current online e-business applications. The autonomy and decentralization of its subsystems help achieve short response times in highly competitive situations, and an autonomous Coordination Mobile Agent (CMA) has been proposed to achieve flexibility in a highly dynamic environment. However, a problem in ADDS is that as the number of sites increases, the distribution and harmonization of product information among the sites become difficult. As a result, many users cannot be served quickly and system timeliness becomes inadequate. To solve this problem, a self-configuration technology is proposed that dynamically reconfigures the system to the evolving situation in order to achieve high response. A simulation shows the effectiveness of the proposed technology in a large-scale system. Finally, an implementation of this technology is presented.

  • An On-Chip Monitoring Circuit with 51-Phase PLL-Based Frequency Synthesizer for 8-Gb/s ODR Single-Ended Signaling Integrity Analysis

    Pil-Ho LEE  Yu-Jeong HWANG  Han-Yeol LEE  Hyun-Bae LEE  Young-Chan JANG  

     
    BRIEF PAPER

      Vol:
    E99-C No:4
      Page(s):
    440-443

    An on-chip monitoring circuit using a sub-sampling scheme, which consists of a 6-bit flash analog-to-digital converter (ADC) and a 51-phase phase-locked loop (PLL)-based frequency synthesizer, is proposed to analyze the signal integrity of a single-ended 8-Gb/s octal data rate (ODR) chip-to-chip interface with a source synchronous clocking scheme.

  • Diamond Cellular Network —Optimal Combination of Small Power Basestations and CoMP Cellular Networks —

    Hidekazu SHIMODAIRA  Gia Khanh TRAN  Kei SAKAGUCHI  Kiyomichi ARAKI  Shinobu NANBA  Satoshi KONISHI  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E99-B No:4
      Page(s):
    917-927

    Coordinated Multi-Point (CoMP) transmission has long been known for its ability to improve cell-edge throughput. However, in a CoMP cellular network, fixed CoMP clustering results in cluster edges where system performance degrades due to non-coordinated clusters. To solve this problem, previous studies proposed dynamic clustering schemes; however, such schemes require a complex backhaul topology and are infeasible with current network technologies. In this paper, small power base stations (BSs) are introduced instead of dynamic clustering to solve the cluster-edge problem in CoMP cellular networks. This new cell topology is called the diamond cellular network, since the resulting cell structure looks like a diamond pattern. For this topology, we derive the optimal locations of the small power base stations and the optimal resource allocation between the CoMP base station and the small power base stations so as to maximize the proportional fair utility function. With the proposed architecture, a more than 150% improvement in the 5% outage throughput is achieved in the case of perfect user scheduling, and a nearly 100% improvement is achieved in the case of successive proportional fair user scheduling, compared with conventional single-cell networks.
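    As a rough illustration of the resource-allocation part only (not the paper's derivation or system model), the Python sketch below grid-searches the fraction of radio resources assigned to the CoMP base station versus the small power base stations so as to maximise the proportional fair utility, i.e. the sum of log throughputs; the per-user spectral efficiencies and the user association rule are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        se_comp = rng.uniform(0.5, 4.0, size=30)    # synthetic spectral efficiency when served by the CoMP cluster
        se_small = rng.uniform(0.5, 6.0, size=30)   # synthetic spectral efficiency when served by a small power BS
        served_by_small = se_small > se_comp        # toy user association

        def pf_utility(alpha):
            """Proportional fair utility when a fraction alpha of resources goes to the CoMP BS."""
            throughput = np.where(served_by_small, (1 - alpha) * se_small, alpha * se_comp)
            return np.sum(np.log(throughput))

        alphas = np.linspace(0.01, 0.99, 99)
        best = alphas[np.argmax([pf_utility(a) for a in alphas])]
        print(f"best CoMP resource share ~ {best:.2f}")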

  • A Variant of Park-Lee Identity-Based Encryption System

    Jong Hwan PARK  Dong Hoon LEE  

     
    PAPER-Cryptography and Information Security

      Vol:
    E99-A No:3
      Page(s):
    720-732

    Recently, Park and Lee suggested a new framework, called ‘two-equation-revocation’, for realizing an Identity-Based Encryption (IBE) trapdoor, and proposed a new IBE system that makes use of a Map-To-Point hash function. In this paper, we present a variant of the PL system by giving a simple way to remove the Map-To-Point hash function from it. Our variant is proven secure under non-standard security assumptions, which results in a degradation of security. In exchange, our variant has several efficiency advantages over the PL system: (1) it provides receiver anonymity, (2) it has no correctness error, (3) it has shorter ciphertexts, and (4) it has faster encryption. As a result (when security assumptions and security losses are not taken into account), our variant is as efficient as the Boneh-Boyen and Sakai-Kasahara IBE systems, which are considered the most practical ones.

  • Nanophotonic Devices Based on Semiconductor Quantum Nanostructures Open Access

    Kazuhiro KOMORI  Takeyoshi SUGAYA  Takeru AMANO  Keishiro GOSHIMA  

     
    INVITED PAPER

      Vol:
    E99-C No:3
      Page(s):
    346-357

    In this study, our recent research activities on nanophotonic devices based on semiconductor quantum nanostructures are reviewed. We have developed a technique for the nanofabrication of high-quality, high-density semiconductor quantum dots (QDs). On the basis of this core technology, we have studied next-generation nanophotonic devices fabricated using high-quality QDs, including (1) a high-performance QD laser for long-wavelength optical communications, (2) high-efficiency compound-type solar cell structures, and (3) single-QD devices for future applications related to quantum information. These devices are expected to be used in high-speed optical communication systems, high-performance renewable energy systems, and future high-security quantum computation and communication systems.

  • A Most Resource-Consuming Disease Estimation Method from Electronic Claim Data Based on Labeled LDA

    Yasutaka HATAKEYAMA  Takahiro OGAWA  Hironori IKEDA  Miki HASEYAMA  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2015/11/30
      Vol:
    E99-D No:3
      Page(s):
    763-768

    In this paper, we propose a method to estimate the most resource-consuming disease from electronic claim data based on Labeled Latent Dirichlet Allocation (Labeled LDA). The proposed method models each electronic claim, through its medical procedures, as a mixture of resource-consuming diseases. Thus, the most resource-consuming disease can be estimated automatically by applying Labeled LDA to the electronic claim data. Although our method uses a simple scheme, it is the first attempt at estimating the most resource-consuming disease.
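    To make the modelling idea concrete, the Python sketch below runs a much-reduced Labeled LDA (collapsed Gibbs sampling with topic assignments restricted to each claim's disease labels) on toy data; the claims, labels, vocabulary, and hyperparameters are all invented for illustration, and the paper's actual model and data will differ.

        import numpy as np

        # Toy data: each "claim" is a list of medical-procedure token ids, labelled
        # with the candidate resource-consuming diseases coded on that claim.
        docs   = [[0, 1, 1, 2], [2, 3, 3, 4], [0, 1, 4, 4]]
        labels = [[0, 1],       [1, 2],       [0, 2]]
        V, K, ALPHA, BETA, ITERS = 5, 3, 0.5, 0.1, 200

        rng = np.random.default_rng(0)
        ndk, nkw, nk = np.zeros((len(docs), K)), np.zeros((K, V)), np.zeros(K)
        z = []
        for d, doc in enumerate(docs):
            zd = [int(rng.choice(labels[d])) for _ in doc]
            z.append(zd)
            for w, k in zip(doc, zd):
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

        for _ in range(ITERS):                       # collapsed Gibbs sampling
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    k = z[d][i]
                    ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                    cand = labels[d]                 # Labeled LDA: topics restricted to the claim's labels
                    p = (ndk[d, cand] + ALPHA) * (nkw[cand, w] + BETA) / (nk[cand] + V * BETA)
                    k = int(rng.choice(cand, p=p / p.sum()))
                    z[d][i] = k
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

        # Take the dominant label of each claim as its most resource-consuming disease.
        print(np.argmax(ndk, axis=1))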

  • Single Image Super Resolution by l2 Approximation with Random Sampled Dictionary

    Takanori FUJISAWA  Taichi YOSHIDA  Kazu MISHIBA  Masaaki IKEHARA  

     
    PAPER-Image

      Vol:
    E99-A No:2
      Page(s):
    612-620

    In this paper, we propose an example-based single-image super resolution (SR) method using l2 approximation with self-sampled image patches. Example-based super resolution methods reconstruct high-resolution image patches as a linear combination of atoms in an overcomplete dictionary. This reconstruction normally requires a pair of dictionaries trained on a tremendous number of low- and high-resolution image pairs from prepared image databases. In our method, we build the dictionary by randomly sampling patches from the input image itself and eliminate the training process. This dictionary exploits the self-similarity of the image and no longer depends on external image sets, which concern the storage space or the accuracy of the referenced image sets. In addition, we formulate the approximation of the input image as an l2-norm minimization problem, instead of the commonly used sparse approximation such as l1-norm regularization. The l2 approximation has a computational-cost advantage because it requires only the solution of an inverse problem. Experiments show that the proposed method drastically reduces the computational time for SR while providing performance comparable to conventional example-based SR methods that use an l1 approximation and dictionary training.
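    The core of the l2 formulation is a closed-form ridge regression against a dictionary of patches sampled from the input image itself. The Python sketch below shows only that step (sampling a random patch dictionary and solving (D^T D + lambda*I) a = D^T p); the mapping from coefficients to high-resolution patches and all parameter values are omitted or assumed.

        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))                  # stand-in for the input image
        PATCH, N_ATOMS, LAM = 5, 128, 0.1           # assumed sizes

        def random_patch_dictionary(image, size, n_atoms):
            """Sample patches from the input image itself (no external training set)."""
            h, w = image.shape
            atoms = []
            for _ in range(n_atoms):
                y, x = rng.integers(0, h - size), rng.integers(0, w - size)
                atoms.append(image[y:y + size, x:x + size].ravel())
            return np.stack(atoms, axis=1)          # columns are dictionary atoms

        def l2_coefficients(D, patch, lam):
            """Closed-form ridge solution of argmin_a ||D a - p||^2 + lam ||a||^2."""
            gram = D.T @ D + lam * np.eye(D.shape[1])
            return np.linalg.solve(gram, D.T @ patch)

        D = random_patch_dictionary(img, PATCH, N_ATOMS)
        p = img[10:10 + PATCH, 20:20 + PATCH].ravel()
        a = l2_coefficients(D, p, LAM)
        print("patch approximation error:", float(np.linalg.norm(D @ a - p)))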

  • An Integrative Modelling Language for Agent-Based Simulation of Traffic

    Alberto FERNÁNDEZ-ISABEL  Rubén FUENTES-FERNÁNDEZ  

     
    PAPER-Information Network

      Publicized:
    2015/10/27
      Vol:
    E99-D No:2
      Page(s):
    406-414

    Traffic is a key aspect of everyday life. As with other complex phenomena, its study has found a basic tool in simulation. However, the use of simulations faces important limitations. Building them requires considering different aspects of traffic (e.g. urbanism, car features, and individual drivers) with their specific theories, which must be integrated to provide a coherent model. There is also a variety of simulation platforms with different requirements. Many of these problems demand multi-disciplinary teams, where the different backgrounds can hinder the communication and validation of simulations. The Model-Driven Engineering (MDE) of simulations has been proposed in other fields to address these issues. Such approaches develop graphical Modelling Languages (MLs) that researchers use to model their problems, and then semi-automatically generate simulations from those models. Working in this way promotes communication, platform independence, incremental development, and reutilisation. This paper presents the first steps towards an MDE framework for traffic simulations. It introduces a tailored, extensible ML for domain experts. The ML is focused on human actions, so it adopts an Agent-Based Modelling perspective. Regarding traffic aspects, it includes concepts commonly found in the related literature, following the Driver-Vehicle-Environment model. The language can also accommodate additional theories through its extension mechanisms. The approach is supported by an infrastructure developed using Eclipse MDE projects: the ML is specified with Ecore, and model editor and code generator tools are provided. A case study illustrates how these elements are used to develop a simulation based on a driver's behaviour theory for a specific target platform.

  • Using Trust of Social Ties for Recommendation

    Liang CHEN  Chengcheng SHAO  Peidong ZHU  Haoyang ZHU  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2015/10/30
      Vol:
    E99-D No:2
      Page(s):
    397-405

    Nowadays, with the development of online social networks (OSNs), a mass of online social information has been generated, which has triggered research on social recommendation. Collaborative filtering, one of the most popular techniques in social recommendation, faces several challenges, such as data sparsity, cold-start users, and prediction quality. The motivation of our work is to deal with these challenges by effectively combining collaborative filtering with social information. The trust relationship has been identified as a useful means of using social information to improve the quality of recommendation. In this paper, we propose a trust-based recommendation approach which uses GlobalTrust (GT) to represent the trust value among users as neighboring nodes. A matrix factorization based on singular value decomposition is used to build a trust network on the GT values. The recommendation results are obtained through a modified random walk algorithm called GlobalTrustWalker. Through experiments on a sparse real-world dataset, we demonstrate that the proposed approach can better utilize users' social trust information and improve the recommendation accuracy for cold-start users.
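    GlobalTrust and GlobalTrustWalker are defined in the paper itself; the Python sketch below only illustrates the general shape of such a pipeline, substituting a PageRank-style power iteration for the global trust score and a trust-weighted average for the rating predictor. The trust matrix, ratings, and parameters are invented.

        import numpy as np

        # Toy directed trust matrix: T[i, j] = 1 if user i trusts user j.
        T = np.array([[0, 1, 0, 1],
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 1, 0, 0]], dtype=float)

        def global_trust(T, damping=0.85, iters=100):
            """PageRank-style power iteration, used here as a stand-in for GlobalTrust."""
            n = T.shape[0]
            row_sums = T.sum(axis=1, keepdims=True)
            P = T / np.where(row_sums == 0, 1, row_sums)
            gt = np.full(n, 1.0 / n)
            for _ in range(iters):
                gt = (1 - damping) / n + damping * (P.T @ gt)
            return gt

        gt = global_trust(T)

        # Predict a cold-start user's rating as a trust-weighted average of the
        # ratings given by the users they trust (ratings are synthetic).
        ratings = np.array([np.nan, 4.0, 3.0, 5.0])
        user = 0
        trusted = np.where(T[user] > 0)[0]
        weights = gt[trusted]
        print("predicted rating:", float(weights @ ratings[trusted] / weights.sum()))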

  • Frequency-Domain Differential Coding Schemes under Frequency-Selective Fading Environment in Adaptive Baseband Radio

    Jin NAKAZATO  Daiki OKUYAMA  Yuki MORIMOTO  Yoshio KARASAWA  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E99-B No:2
      Page(s):
    488-498

    In our previous paper, we presented the concept of “Baseband Radio” as an ideal form of future wireless communication. Furthermore, to enhance its adaptability, the adaptive baseband radio was discussed as the ultimate communication system; it integrates the functions of cognitive radio and software-defined radio. In this paper, two transmission schemes that take advantage of adaptive baseband radio are introduced and the results of a performance evaluation are presented. The first is a scheme based on DSFBC for realizing higher reliability; it allows the flexible use of frequency bands over a wide range of white space. The second is a low-power-density communication scheme with spectrum spreading by means of frequency-domain differential coding, so that the secondary system does not seriously interfere with primary-user systems that have been assigned the same frequency band.
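    Frequency-domain differential coding can be illustrated, very roughly, by differentially encoding PSK symbols across adjacent frequency bins and decoding from the phase difference after the channel, which works as long as the channel response varies slowly from bin to bin. The Python sketch below is a generic toy of that idea, not the paper's scheme; the framing, channel model, and noise level are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 256
        bits = rng.integers(0, 2, N - 1)
        d = np.exp(1j * np.pi * bits)               # DBPSK symbols: +1 / -1

        # Differential encoding across frequency bins: X[k] = X[k-1] * d[k-1].
        X = np.empty(N, dtype=complex)
        X[0] = 1.0
        for k in range(1, N):
            X[k] = X[k - 1] * d[k - 1]

        # Frequency-selective but slowly varying channel plus noise (toy model).
        H = np.exp(1j * 2 * np.pi * np.cumsum(rng.normal(0, 0.01, N)))
        Y = H * X + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

        # Differential decoding: the common channel phase of adjacent bins cancels.
        detected = (np.real(Y[1:] * np.conj(Y[:-1])) < 0).astype(int)
        print("bit errors:", int(np.sum(detected != bits)))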

  • An Effective Range Ambiguity Resolution for LEO Satellite with Unknown Phase Deviation

    Seung Won CHO  Sang Jeong LEE  

     
    PAPER-Satellite Communications

      Vol:
    E99-B No:2
      Page(s):
    533-541

    Ranging is commonly used to measure the distance to a satellite, since it is one of the quickest and most effective methods of finding a satellite's position. In general, the ranging ambiguity is easily resolved using the major tone and subsequent ambiguity-resolving tones. However, an unknown induced phase error can interfere with resolving the ranging ambiguity. This paper suggests an effective and practical method to resolve the ranging ambiguity without changing the originally planned ranging tone frequencies when an unknown non-linear phase error exists. Specifically, the present study derives simple equations for finding the phase error from the physical relationship between the measured major and minor tones. Furthermore, a technique to select the optimal ambiguity integer and correct the phase error is provided. A numerical analysis is performed using real measurements from a low earth orbit (LEO) satellite to show the method's suitability and effectiveness. It can be seen that a non-ambiguous range is acquired after compensating for the unknown phase error.
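    For readers unfamiliar with tone ranging, the Python sketch below shows the basic ambiguity-resolution idea without any phase error: the major tone fixes the range only up to an integer number of half-wavelengths, and the lower-frequency minor tones select that integer. The tone frequencies, search range, and noiseless phases are hypothetical and much simpler than the paper's setting.

        import numpy as np

        C = 299_792_458.0
        F_MAJOR = 100e3                              # hypothetical major tone (Hz)
        F_MINORS = [20e3, 4e3, 800.0, 160.0, 40.0]   # hypothetical ambiguity-resolving tones

        def tone_phase(range_m, f):
            """Round-trip phase of a ranging tone, in [0, 2*pi)."""
            return (2 * np.pi * f * 2 * range_m / C) % (2 * np.pi)

        true_range = 1_234_567.0                     # metres (synthetic)
        phi_major = tone_phase(true_range, F_MAJOR)
        phi_minors = [tone_phase(true_range, f) for f in F_MINORS]

        half_wavelength = C / (2 * F_MAJOR)          # ambiguity interval of the major tone
        best_n, best_err = None, np.inf
        for n in range(0, 2500):                     # candidate ambiguity integers
            cand = (phi_major / (2 * np.pi) + n) * half_wavelength
            err = sum(abs(np.angle(np.exp(1j * (tone_phase(cand, f) - p))))
                      for f, p in zip(F_MINORS, phi_minors))
            if err < best_err:
                best_n, best_err = n, err

        resolved = (phi_major / (2 * np.pi) + best_n) * half_wavelength
        print(f"range error after ambiguity resolution: {abs(resolved - true_range):.3e} m")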

  • Determining Image Base of Firmware Files for ARM Devices

    Ruijin ZHU  Yu-an TAN  Quanxin ZHANG  Fei WU  Jun ZHENG  Yuan XUE  

     
    PAPER-Software System

      Publicized:
    2015/11/06
      Vol:
    E99-D No:2
      Page(s):
    351-359

    Disassembly, a principal reverse-engineering tool, is the process of recovering the equivalent assembly instructions of a program's machine code from its binary representation. However, when disassembling a firmware file, the disassembly cannot be performed well if the image base is unknown. In this paper, we propose an innovative method to determine the image base of a firmware file that uses the ARM/Thumb instruction set. First, based on the characteristics of the function entry table (FET) for an ARM processor, an algorithm called FIND-FET is proposed to identify the function entry tables. Second, by using the most common function-prologue instructions and the FETs, the FIND-BASE algorithm is proposed to determine candidate image bases by counting the matched functions and to choose the one with the most matched FETs as the final result. The algorithms are applied to firmware files collected from the Internet, and the results indicate that they can effectively find the image base for the majority of the example firmware files.
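    FIND-FET and FIND-BASE themselves are the paper's contribution; the Python fragment below only sketches the counting idea behind the second step: for each candidate image base, check how many 32-bit words of a presumed function entry table would then point at a plausible Thumb function prologue (PUSH {..., LR}), and keep the base with the most hits. The table location, pointer width, prologue pattern, and search granularity are all assumptions.

        import struct

        def looks_like_thumb_prologue(firmware, offset):
            """Very rough prologue test: Thumb PUSH {..., LR} encodes as 0xB5xx (stored little-endian)."""
            if offset % 2 or offset + 2 > len(firmware):
                return False
            return firmware[offset + 1] == 0xB5

        def find_base(firmware, fet_offset, fet_entries, base_step=0x1000, base_max=0x2000_0000):
            """Count prologue hits for each candidate image base and return the best candidate."""
            pointers = [struct.unpack_from("<I", firmware, fet_offset + 4 * i)[0] & ~1  # clear Thumb bit
                        for i in range(fet_entries)]
            best_base, best_hits = 0, -1
            for base in range(0, base_max, base_step):
                hits = sum(1 for p in pointers
                           if base <= p < base + len(firmware)
                           and looks_like_thumb_prologue(firmware, p - base))
                if hits > best_hits:
                    best_base, best_hits = base, hits
            return best_base, best_hits

        # Usage (hypothetical file and offsets):
        # base, hits = find_base(open("fw.bin", "rb").read(), fet_offset=0x100, fet_entries=64)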

  • Frequency Division Multiplexed Radio-on-Fiber Link Employing an Electro-Absorption Modulator Integrated Laser Diode for a Cube Satellite Earth Station

    Seiji FUKUSHIMA  Takayuki SHIMAKI  Kota YAMASHITA  Taishi FUNASAKO  Tomohiro HACHINO  

     
    PAPER

      Vol:
    E99-C No:2
      Page(s):
    212-218

    Recent small cube satellites use higher frequency bands, such as the Ku-band, for higher-throughput communications, which requires a high-frequency link in the earth station as well. As one solution, we propose the use of a bidirectional radio-on-fiber link employing a wavelength multiplexing scheme. It was numerically shown that the response linearity of the electro-absorption modulator integrated laser (EML) is sufficient and that the spurious emissions are low enough or can be reduced by radio-frequency filters. From the frequency response and single-sideband phase noise measurements, the EML was shown to be suitable for use in a radio-on-fiber system for the cube satellite earth station.

  • vCanal: Paravirtual Socket Library towards Fast Networking in Virtualized Environment

    Dongwoo LEE  Changwoo MIN  Young IK EOM  

     
    PAPER-Software System

      Publicized:
    2015/11/11
      Vol:
    E99-D No:2
      Page(s):
    360-369

    Virtualization is no longer an emerging research area since the virtual processor and memory operate as efficiently as the physical ones. However, I/O performance is still restricted by the virtualization overhead caused by the costly and complex I/O virtualization mechanism, in particular by massive exits occurring on the guest-host switch and redundant processing of the I/O stacks at both guest and host. A para-virtual device driver may reduce the number of exits to the hypervisor, whereas the network stacks in the guest OS are still duplicated. Previous work proposed a socket-outsourcing technique that bypasses the redundant guest network stack by delivering the network request directly to the host. However, even by bypassing the redundant network paths in the guest OS, the obtained performance was still below 60% of the native device, since notifications of completion still depended on the hypervisor. In this paper, we propose vCanal, a novel network virtualization framework, to improve the performance of network access in the virtual machine toward that of the native machine. Implementation of vCanal reached 96% of the native TCP throughput, increasing the UDP latency by only 4% compared to the native latency.

  • Offline Selective Data Deduplication for Primary Storage Systems

    Sejin PARK  Chanik PARK  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2015/10/26
      Vol:
    E99-D No:2
      Page(s):
    370-382

    Data deduplication is a technology that eliminates redundant data to save storage space. Most previous studies on data deduplication target backup storage, where the deduplication ratio and throughput are important. However, data deduplication on primary storage has recently been receiving attention; in this case, I/O latency should be considered equally with the deduplication ratio. Unfortunately, data deduplication causes high sequential-read-latency problems. When a file is created, the file system allocates physically contiguous blocks to support low sequential-read latency. However, the data deduplication process rearranges the block mapping information to eliminate duplicate blocks. Because of this rearrangement, the physical sequentiality of blocks in a file is broken. This makes a sequential-read request slower because it operates like a random-read operation. In this paper, we propose a selective data deduplication scheme for primary storage systems. A selective scheme can achieve a high deduplication ratio and a low I/O latency by applying different data-chunking methods to the files, according to their file access characteristics. In the proposed system, file accesses are characterized by recent access time and the access frequency of each file. No chunking is applied to update-intensive files since they are meaningless in terms of data deduplication. For sequential-read-intensive files, we apply big chunking to preserve their sequentiality on the media. For random-read-intensive files, small chunking is used to increase the deduplication ratio. Experimental evaluation showed that the proposed method achieves a maximum of 86% of an ideal deduplication ratio and 97% of the sequential-read performance of a native file system.
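    As a small illustration of the selective policy described above (not the paper's exact thresholds or statistics), the Python sketch below maps per-file access statistics to a chunking decision: no chunking for update-intensive files, big chunks for sequential-read-intensive files, and small chunks for random-read-intensive files. The statistic names and threshold values are hypothetical.

        # Hypothetical thresholds; the paper derives its policy from measured access patterns.
        UPDATE_HEAVY_WRITES = 100
        SEQ_READ_RATIO = 0.8
        BIG_CHUNK, SMALL_CHUNK = 1 << 20, 4 << 10   # 1 MiB vs 4 KiB

        def chunking_policy(stats):
            """Pick a chunking mode per file from simple access statistics (sketch only)."""
            if stats["writes"] > UPDATE_HEAVY_WRITES:
                return None                          # update-intensive: deduplication not worthwhile
            if stats["seq_reads"] / max(stats["reads"], 1) >= SEQ_READ_RATIO:
                return BIG_CHUNK                     # sequential-read-intensive: preserve on-media sequentiality
            return SMALL_CHUNK                       # random-read-intensive: maximise deduplication ratio

        print(chunking_policy({"writes": 3, "reads": 50, "seq_reads": 45}))   # -> 1048576 (big chunks)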

  • Threshold-Based Distributed Continuous Top-k Query Processing for Minimizing Communication Overhead

    Kamalas UDOMLAMLERT  Takahiro HARA  Shojiro NISHIO  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2015/11/11
      Vol:
    E99-D No:2
      Page(s):
    383-396

    In this paper, we propose a communication-efficient continuous top-k query processing method for distributed local nodes over which the data are horizontally partitioned. A designated coordinator server takes the role of issuing queries from users to the local nodes and delivering the results to the users. The final results are requested via top-k subscriptions, which let the local nodes know which data and updates need to be returned. Our proposed method makes use of the previously posed active queries to identify a small set of needed top-k subscriptions. In addition, using the nodes' pre-indexed skylines, the number of local nodes that must be subscribed to can be significantly reduced. As a result, only a small number of subscriptions are sent to a small number of local nodes, which lowers the communication overhead. Furthermore, to handle dynamic data updates, we also propose a method that prevents nodes from reporting needless updates, together with maintenance procedures that preserve consistency. The results of experiments measuring the volume of transferred data show that our proposed method significantly outperforms previously proposed methods.
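    The pruning intuition can be shown with a one-shot (non-continuous) toy in Python: for a linear scoring query with non-negative weights, the best score a node can ever contribute is attained on its skyline, so the coordinator can probe nodes in decreasing order of their skyline bound and skip the rest once the current k-th score is higher. The data, query weights, and skyline computation below are synthetic stand-ins; the paper additionally handles subscriptions, updates, and consistency maintenance.

        import heapq
        import numpy as np

        rng = np.random.default_rng(1)
        K = 5
        w = np.array([0.7, 0.3])                           # query weights (maximise w.x)
        nodes = [rng.random((40, 2)) for _ in range(8)]    # horizontally partitioned data (synthetic)

        def skyline(points):
            """Points not dominated by any other point (maximisation on every attribute)."""
            return np.array([p for p in points
                             if not any(np.all(q >= p) and np.any(q > p) for q in points)])

        skylines = [skyline(n) for n in nodes]             # assumed to be pre-indexed at each node
        bounds = [float(np.max(s @ w)) for s in skylines]  # best possible score per node

        topk, contacted = [], 0
        for i in np.argsort(bounds)[::-1]:                 # probe the most promising nodes first
            if len(topk) == K and bounds[i] <= topk[0]:
                break                                      # remaining nodes cannot improve the top-k
            contacted += 1
            for score in nodes[i] @ w:
                if len(topk) < K:
                    heapq.heappush(topk, float(score))
                else:
                    heapq.heappushpop(topk, float(score))
        print(f"k-th best score {topk[0]:.3f} after contacting {contacted} of {len(nodes)} nodes")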

  • Purchase Behavior Prediction in E-Commerce with Factorization Machines

    Chen CHEN  Chunyan HOU  Jiakun XIAO  Xiaojie YUAN  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2015/10/01
      Vol:
    E99-D No:1
      Page(s):
    270-274

    Purchase behavior prediction is one of the most important issues for the precision marketing of e-commerce companies. This letter presents our solution to the purchase behavior prediction problem in e-commerce, specifically the task of the 2014 Big Data Contest of the China Computer Federation. The goal of this task is to predict which users will make a purchase based on the users' historical data. Traditional recommendation methods encounter two crucial problems in this scenario. First, the task only requires predicting which users will make a purchase, not which items should be recommended to which users. Second, the large-scale dataset poses a big challenge for building the empirical model. Feature engineering and factorization models shed some light on these problems. We propose to use a Factorization Machines model on top of multi-class, high-dimensional engineered features. Experimental results on a real-world dataset demonstrate the advantages of our proposed method.
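    The second-order Factorization Machines prediction can be written with the usual O(kn) identity: y(x) = w0 + w.x + 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]. The Python sketch below evaluates that formula on a synthetic one-hot feature vector; the feature layout, sizes, and parameters are invented, and model training is omitted.

        import numpy as np

        def fm_predict(x, w0, w, V):
            """Second-order factorization machine score for one feature vector x."""
            linear = w0 + x @ w
            interactions = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
            return linear + interactions

        rng = np.random.default_rng(0)
        n_features, k = 20, 4                    # hypothetical feature count and latent dimensionality
        x = np.zeros(n_features)
        x[[1, 7, 13]] = 1.0                      # one-hot style features, e.g. user segment, category, weekday
        w0 = 0.1
        w = rng.normal(0, 0.1, n_features)
        V = rng.normal(0, 0.1, (n_features, k))
        prob = 1.0 / (1.0 + np.exp(-fm_predict(x, w0, w, V)))   # purchase probability via a sigmoid
        print(f"predicted purchase probability: {prob:.3f}")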

  • Application Authentication System with Efficiently Updatable Signature

    Kazuto OGAWA  Go OHTAKE  

     
    PAPER

      Publicized:
    2015/10/21
      Vol:
    E99-D No:1
      Page(s):
    69-82

    Broadcasting and communications networks can be used together to offer hybrid broadcasting services that incorporate a variety of personalized information from communications networks in TV programs. To enable these services, many different applications have to be run on a user terminal, and it is necessary to establish an environment where any service provider can create applications and distribute them to users. The danger is that malicious service providers might distribute applications which may cause user terminals to take undesirable actions. To prevent such applications from being distributed, we propose an application authentication protocol for hybrid broadcasting and communications services. Concretely, we modify a key-insulated signature scheme and apply it to this protocol. In the protocol, a broadcaster distributes a distinct signing key to each service provider that the broadcaster trusts. As a result, users can verify that an application is reliable. If a signed application causes an undesirable action, a broadcaster can revoke the privileges and permissions of the service provider. In addition, the broadcaster can update the signing key. That is, our protocol is secure against leakage of the signing key by the broadcaster and service providers. Moreover, a user terminal uses only one verification key for verifying a signature, so the memory needed for storing the verification key in the user terminal is very small. With our protocol, users can securely receive hybrid services from broadcasting and communications networks.

  • Sub-Band Noise Reduction in Multi-Channel Digital Hearing Aid

    Qingyun WANG  Ruiyu LIANG  Li JING  Cairong ZOU  Li ZHAO  

     
    LETTER-Speech and Hearing

      Publicized:
    2015/10/14
      Vol:
    E99-D No:1
      Page(s):
    292-295

    Since digital hearing aids are sensitive to time delay and power consumption, the computational complexity of noise reduction must be reduced as much as possible. Consequently, complicated algorithms based on time-frequency analysis are very difficult to implement in digital hearing aids. This paper presents an improved noise reduction algorithm with greatly reduced computational complexity for multi-channel digital hearing aids. First, the sub-band sound pressure level (SPL) is calculated in real time. Then, based on the calculated sub-band SPL, the noise in each sub-band is estimated and the probability of speech is computed. Finally, the a posteriori and a priori signal-to-noise ratios are estimated and a gain function is derived to reduce the noise adaptively. By replacing the FFT and IFFT transforms with the already available sub-band SPL, the proposed algorithm greatly reduces the computational load. Experiments on a prototype digital hearing aid show that the time delay is nearly half that of the traditional adaptive Wiener filtering and spectral subtraction algorithms, while the SNR improvement and PESQ score remain satisfactory. Compared with the modulation-frequency-based noise reduction algorithm used in many commercial digital hearing aids, the proposed algorithm achieves not only an SNR improvement of more than 5 dB but also a lower time delay and power consumption.
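    The gain computation described above follows the familiar pattern of per-sub-band noise estimation plus a Wiener-type gain driven by a priori and a posteriori SNR estimates. The Python sketch below runs that generic pattern on synthetic sub-band power envelopes; it is not the paper's algorithm, and the speech-presence rule, smoothing constants, and gain floor are assumed values.

        import numpy as np

        rng = np.random.default_rng(0)
        FRAMES, BANDS = 200, 8
        noise_pow = np.abs(rng.normal(1.0, 0.1, (FRAMES, BANDS)))           # toy sub-band noise power
        speech_pow = np.zeros((FRAMES, BANDS)); speech_pow[60:140, 2:5] = 6.0
        noisy = noise_pow + speech_pow                                       # observed sub-band power (SPL proxy)

        ALPHA_N, ALPHA_DD, G_MIN = 0.95, 0.98, 0.1        # smoothing constants and gain floor (assumed)
        noise_est = noisy[0].copy()
        snr_prio = np.ones(BANDS)
        gains = np.empty_like(noisy)
        for t in range(FRAMES):
            speech_probable = noisy[t] > 3.0 * noise_est                     # crude speech-presence decision
            noise_est = np.where(speech_probable, noise_est,                 # update noise only in non-speech bands
                                 ALPHA_N * noise_est + (1 - ALPHA_N) * noisy[t])
            snr_post = np.maximum(noisy[t] / noise_est - 1.0, 0.0)           # a posteriori (excess) SNR
            snr_prio = ALPHA_DD * snr_prio + (1 - ALPHA_DD) * snr_post       # simplified decision-directed a priori SNR
            gains[t] = np.maximum(snr_prio / (1.0 + snr_prio), G_MIN)        # Wiener-type gain per sub-band
        print("mean gain in speech bands:", round(float(gains[60:140, 2:5].mean()), 2),
              "| in noise-only bands:", round(float(gains[:, 6:].mean()), 2))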
