
Keyword Search Result

[Keyword] (42807 hits)

8861-8880 hits (42807 hits)

  • QCN/DC: Quantized Congestion Notification with Delay-Based Congestion Detection in Data Center Networks

    Kenta MATSUSHIMA  Yuki TANISAWA  Miki YAMAMOTO  

     
    PAPER-Network System
    Vol: E98-B No:4  Page(s): 585-595

    A data center network is composed of high-speed Ethernet confined to the limited area of a data center building, so its RTT is extremely small, on the order of microseconds. To regulate data center network delay, a large part of which is queuing delay, QCN has been proposed for layer-2 congestion control in IEEE 802.1Qau. QCN controls the sender's transmission rate based on congestion feedback from a congested switch, and it adopts probabilistic feedback transmission to reduce the control overhead. When the number of flows through a bottleneck link increases, some flows might receive no feedback even during the congestion phase because the feedback is probabilistic, and the queue length might fluctuate significantly. In this paper, we propose a new delay-based congestion detection and control method. The proposed delay-based congestion control cooperates with conventional QCN so as to detect and react to congestion that QCN misses.
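    The delay-based fallback can be pictured with a small sketch. The following Python is only an illustration of the general idea (a delay-triggered rate decrease alongside explicit feedback), not the algorithm from the paper; the class name, thresholds, and constants are assumptions.

```python
class DelayAssistedRateControl:
    """Illustrative sender-side rate control: explicit congestion feedback
    (as in QCN) plus a delay-based fallback for flows that receive no feedback."""

    def __init__(self, line_rate_bps, base_rtt_s, delay_threshold_s=50e-6,
                 decrease_factor=0.5, recovery_step_bps=10e6):
        self.line_rate = line_rate_bps     # link capacity (upper bound on rate)
        self.rate = line_rate_bps          # current sending rate
        self.base_rtt = base_rtt_s         # propagation-only RTT estimate
        self.delay_threshold = delay_threshold_s
        self.decrease_factor = decrease_factor
        self.recovery_step = recovery_step_bps

    def on_congestion_feedback(self, feedback_value):
        # Explicit feedback from a congested switch: multiplicative decrease
        # scaled by a feedback value in [0, 1].
        self.rate *= max(0.0, 1.0 - self.decrease_factor * feedback_value)

    def on_rtt_sample(self, rtt_s):
        # Delay-based fallback: estimated queuing delay = measured RTT - base RTT.
        queuing_delay = max(0.0, rtt_s - self.base_rtt)
        if queuing_delay > self.delay_threshold:
            # Congestion inferred from delay even without explicit feedback.
            self.rate *= self.decrease_factor
        else:
            # Additive recovery while the queue stays short.
            self.rate = min(self.line_rate, self.rate + self.recovery_step)
```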

  • High-Speed Design of Conflictless Name Lookup and Efficient Selective Cache on CCN Router

    Atsushi OOKA  Shingo ATA  Kazunari INOUE  Masayuki MURATA  

     
    PAPER-Network
    Vol: E98-B No:4  Page(s): 607-620

    Content-centric networking (CCN) is an innovative network architecture that is being considered as a successor to the Internet. In recent years, CCN has received increasing attention from all over the world because its novel technologies (e.g., caching, multicast, aggregating requests) and communication based on names that act as addresses for content have the potential to resolve various problems facing the Internet. To implement these technologies, however, requires routers with performance far superior to that offered by today's Internet routers. Although many researchers have proposed various router components, such as caching and name lookup mechanisms, there are few router-level designs incorporating all the necessary components. The design and evaluation of a complete router is the primary contribution of this paper. We provide a concrete hardware design for a router model that uses three basic tables — forwarding information base (FIB), pending interest table (PIT), and content store (CS) — and incorporates two entities that we propose. One of these entities is the name lookup entity, which looks up a name address within a few cycles from content-addressable memory by use of a Bloom filter; the other is the interest count entity, which counts interest packets that require certain content and selects content worth caching. Our contributions are (1) presenting a proper algorithm for looking up and matching name addresses in CCN communication, (2) proposing a method to process CCN packets in a way that achieves high throughput and very low latency, and (3) demonstrating feasible performance and cost on the basis of a concrete hardware design using distributed content-addressable memory.
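    To illustrate just the pre-screening idea behind the name lookup entity, the sketch below shows a software Bloom filter rejecting names that cannot be in a table before an exact-match lookup is attempted. It is a rough software analogue, not the paper's hardware design; the hash choices, sizes, and table layout are assumptions, and it does exact matching on a full name rather than CCN longest-prefix matching.

```python
import hashlib

class BloomFilter:
    """Small software Bloom filter used to pre-screen name lookups."""

    def __init__(self, num_bits=1 << 16, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))


# Pre-screen FIB lookups: only consult the exact-match table (standing in for
# content-addressable memory in hardware) when the filter says the name might be present.
fib = {"/video/movies": "face0", "/sensor/temp": "face1"}
fib_filter = BloomFilter()
for prefix in fib:
    fib_filter.add(prefix)

name = "/video/movies"
if fib_filter.might_contain(name):
    print(fib.get(name, "miss"))
```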

  • Analysis and Performance Improvement of Independent Electric Coupled Resonance WPT System with Impedance Transformer

    Cheng YANG  Koichi TSUNEKAWA  

     
    PAPER-Antennas and Propagation
    Vol: E98-B No:4  Page(s): 630-637

    Wireless power transfer (WPT) based on electric coupled resonance can tolerate a great deal of variability in antenna separation. In this paper, we propose an independent electric coupled resonance WPT system to further increase such systems' power transfer distance and ensure flexibility in antenna location. The proposed system's power transfer function, critical coupling point, and resonance frequency splitting are investigated via the equivalent circuit, simulation, and experiment. Moreover, the input impedance characteristic of the two electric coupled resonance antennas is analyzed as a function of transfer distance. In the under-coupled region, an appropriate impedance matching method is required to achieve effective power transfer. Here, we propose a fixed-configuration matching loop with a series-connected variable capacitance that can be added to both the source and load antennas. Experimental results demonstrate that the proposed matching loop converts the two antennas' input impedance to the feed port impedance very well over varying transfer distances; these results are in good agreement with the simulation results.

  • Through Chip Interface Based Three-Dimensional FPGA Architecture Exploration

    Li-Chung HSU  Masato MOTOMURA  Yasuhiro TAKE  Tadahiro KURODA  

     
    PAPER
    Vol: E98-C No:4  Page(s): 288-297

    This paper presents work on integrating a wireless 3-D interconnection interface, the ThruChip Interface (TCI), into a three-dimensional field-programmable gate array (3-D FPGA) exploration tool (TPR). TCI is an emerging 3-D IC integration solution because of its advantages in cost, flexibility, and reliability, together with performance and energy dissipation comparable to through-silicon vias (TSVs). Since the communication bandwidth of TCI is much higher than that of FPGA internal logic signals, a time-division multiplexing (TDM) scheme is adopted to fully utilize this bandwidth. The experimental results show path delay reductions over a 2-D FPGA of 25% on average and 58% at maximum when five layers are used in the TCI-based 3-D FPGA architecture. Although the performance of the TCI-based 3-D FPGA architecture is on average 8% below that of a TSV-based 3-D FPGA, the TCI-based architecture reduces the active area consumed by vertical communication channels by 42% on average compared to the TSV-based architecture, and hence yields a better delay-area product.
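    The TDM idea can be modeled in a few lines of software, purely as an illustration; the actual architecture multiplexes FPGA routing signals onto a TCI channel in hardware. Several slow signals share one fast vertical link by taking turns in fixed time slots.

```python
def tdm_multiplex(signal_samples):
    """Interleave N parallel signals into one serial stream of time slots.
    signal_samples: list of N equal-length lists, one per logic signal."""
    return [value
            for time_step in zip(*signal_samples)   # one sample from each signal
            for value in time_step]                 # emit them in fixed slot order

def tdm_demultiplex(stream, num_signals):
    """Recover the N parallel signals from the serial slot stream."""
    return [stream[i::num_signals] for i in range(num_signals)]

signals = [[0, 1, 1], [1, 0, 1], [0, 0, 1]]   # three slow parallel signals
stream = tdm_multiplex(signals)               # one fast serial channel
assert tdm_demultiplex(stream, 3) == signals  # the receiver restores them
```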

  • A Low-Latency DMR Architecture with Fast Checkpoint Recovery Scheme

    Go MATSUKAWA  Yohei NAKATA  Yasuo SUGURE  Shigeru OHO  Yuta KIMI  Masafumi SHIMOZAWA  Shuhei YOSHIDA  Hiroshi KAWAGUCHI  Masahiko YOSHIMOTO  

     
    PAPER
    Vol: E98-C No:4  Page(s): 333-339

    This paper presents a novel architecture for a fault-tolerant, dual modular redundancy (DMR) system using a checkpoint recovery approach. The architecture exploits an SRAM with simultaneous-copy and instantaneous-compare functions. It can perform low-latency data copying between the dual cores and can therefore carry out fast backup and rollback. Furthermore, it reduces power consumption during the data comparison process compared to a cyclic redundancy check (CRC). Evaluation results show that, compared with conventional checkpoint/restart DMR, the proposed architecture reduces the cycle overhead by 97.8%, keeping the execution-cycle overhead as low as 3.28% even if a one-time fault occurs while executing the task. The proposed architecture provides high reliability for systems with real-time requirements.
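    A minimal software analogue of the checkpoint/compare/rollback cycle is sketched below. The paper implements this in hardware with a special SRAM; the class and its behavior here are illustrative assumptions only.

```python
import copy

class DmrCheckpointRunner:
    """Run the same step on two replicas, compare, and roll back on mismatch."""

    def __init__(self, initial_state):
        self.core_a = copy.deepcopy(initial_state)
        self.core_b = copy.deepcopy(initial_state)
        self.checkpoint = copy.deepcopy(initial_state)

    def run_step(self, step_fn, inputs):
        state_a = step_fn(copy.deepcopy(self.core_a), inputs)
        state_b = step_fn(copy.deepcopy(self.core_b), inputs)
        if state_a == state_b:
            # No fault detected: commit the result and take a new checkpoint.
            self.core_a, self.core_b = state_a, copy.deepcopy(state_a)
            self.checkpoint = copy.deepcopy(state_a)
            return True
        # Mismatch: a transient fault hit one replica; roll both back and retry.
        self.core_a = copy.deepcopy(self.checkpoint)
        self.core_b = copy.deepcopy(self.checkpoint)
        return False
```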

  • Removing Boundary Effect of a Patch-Based Super-Resolution Algorithm

    Aram KIM  Junhee PARK  Byung-Uk LEE  

     
    LETTER-Image Processing and Video Processing
    Publicized: 2015/01/09  Vol: E98-D No:4  Page(s): 976-979

    In a patch-based super-resolution algorithm, a low-resolution patch is influenced by surrounding patches due to blurring. We propose to remove this boundary effect by subtracting the blur contributed by the surrounding high-resolution patches, which enables more accurate sparse representation. We demonstrate the improved performance through experiments. The proposed algorithm can be applied to most patch-based super-resolution algorithms to achieve additional improvement.

  • GHOST Sensor: A Proactive Cyber Attack Monitoring Platform

    Masashi ETO  Tomohide TANAKA  Koei SUZUKI  Mio SUZUKI  Daisuke INOUE  Koji NAKAO  

     
    PAPER-Attack Monitoring & Detection
    Publicized: 2014/12/04  Vol: E98-D No:4  Page(s): 788-795

    A number of network monitoring sensors, such as honeypots and web crawlers, have been deployed to observe increasingly sophisticated cyber attacks, and several large-scale network monitoring projects built on these technologies have been launched to fight cyber threats on the Internet. These projects, however, face problems such as the difficulty of covering a wide darknet address range, the burden of honeypot operation, and the blacklisting of honeypot addresses. To address these problems, this paper proposes a novel proactive cyber attack monitoring platform called the GHOST sensor, which enables effective utilization of physical and logical resources, such as sensor hardware and monitored IP addresses, and improves the efficiency of attack information collection. The GHOST sensor dynamically allocates targeted IP addresses to appropriate sensors so that the sensors can flexibly monitor attacks according to the profile of each attacker. Through an evaluation in an experimental environment, this paper demonstrates the efficiency of attack observation and resource utilization.
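    A toy version of the allocation idea might look like the following, where monitored address/attacker pairs are assigned to a sensor type according to a crude attacker profile. This is only a guess at the general mechanism for illustration, not the GHOST sensor's actual policy; all names and profile fields are assumptions.

```python
def choose_sensor(attacker_profile):
    """Illustrative policy: send interactive attackers that deliver payloads to a
    high-interaction honeypot and simple scanners to a lightweight responder."""
    if attacker_profile.get("established_tcp") and attacker_profile.get("payload_bytes", 0) > 0:
        return "high_interaction_honeypot"
    return "low_interaction_responder"

allocation = {}   # (monitored dst IP, attacker src IP) -> sensor handling the traffic

def on_packet(dst_ip, src_ip, attacker_profiles):
    """Allocate the monitored address to a sensor the first time this attacker hits it."""
    key = (dst_ip, src_ip)
    if key not in allocation:
        allocation[key] = choose_sensor(attacker_profiles.get(src_ip, {}))
    return allocation[key]
```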

  • Authorization Conflict Problems in Combining RIF Rules with RDF Data

    Jaehoon KIM  

     
    PAPER-Data Engineering, Web Information Systems
    Publicized: 2014/09/05  Vol: E98-D No:4  Page(s): 863-871

    Resource Description Framework (RDF) access control suffers from an authorization conflict problem caused by RDF inference. When an access authorization is specified, it can lie in conflict with other access authorizations that have the opposite security sign as a result of RDF inference. In our former study, we analyzed the authorization conflict problem caused by subsumption inference, which is the key inference in RDF. The Rule Interchange Format (RIF) is a Web standard rule language recommended by W3C, and can be combined with RDF data. Therefore, as in RDF inference, an authorization conflict can be caused by RIF inference. In addition, this authorization conflict can arise as a result of the interaction of RIF inference and RDF inference rather than of RIF inference alone. In this paper, we analyze the authorization conflict problem caused by RIF inference and suggest an efficient authorization conflict detection algorithm. The algorithm exploits the graph labeling-based algorithm proposed in our earlier paper. Through experiments, we show that the performance of the graph labeling-based algorithm is outstanding for large RDF data.

  • An Original Entry Point Detection Method with Candidate-Sorting for More Effective Generic Unpacking

    Ryoichi ISAWA  Daisuke INOUE  Koji NAKAO  

     
    PAPER-Information Network
    Publicized: 2015/01/06  Vol: E98-D No:4  Page(s): 883-893

    Many malware programs emerging from the Internet are compressed and/or encrypted by a wide variety of packers to deter code analysis, thus making it necessary to perform unpacking first. To do this task efficiently, Guo et al. proposed a generic unpacking system named Justin that provides original entry point (OEP) candidates. Justin executes a packed program, and then it extracts written-and-executed points caused by the decryption of the original binary until it determines the OEP has appeared, taking those points as candidates. However, for several types of packers, the system can provide comparatively large sets of candidates or fail to capture the OEP. For more effective generic unpacking, this paper presents a novel OEP detection method featuring two mechanisms. One identifies the decrypting routine by tracking relations between writing instructions and written areas. This is based on the fact that the decrypting routine is the generator for the original binary. In case our method fails to detect the OEP, the other mechanism sorts candidates based on the most likely candidate so that analysts can reach the correct one quickly. With experiments using a dataset of 753 samples packed by 25 packers, we confirm that our method can be more effective than Justin's heuristics, in terms of detecting OEPs and reducing candidates. After that, we also propose a method combining our method with one of Justin's heuristics.
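    A simplified picture of written-then-executed tracking, the basis of this style of generic unpacking, is sketched below. It is an illustrative model rather than Justin's or the authors' exact mechanism; the hooks and data structures are assumptions.

```python
class WrittenThenExecutedTracker:
    """Toy model of write/execute tracking used by generic unpackers."""

    def __init__(self):
        self.written = set()          # addresses written at run time
        self.writer_of = {}           # written address -> address of the writing instruction
        self.oep_candidates = []      # (executed address, writing instruction) pairs

    def on_write(self, insn_addr, target_addr):
        # Record run-time writes; the writing instruction hints at the decrypting routine.
        self.written.add(target_addr)
        self.writer_of[target_addr] = insn_addr

    def on_execute(self, insn_addr):
        if insn_addr in self.written:
            # Code generated at run time is now being executed: a candidate
            # entry point of the decrypted original binary.
            self.oep_candidates.append((insn_addr, self.writer_of[insn_addr]))
```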

  • Techniques for Measuring Business Process Based on Business Values

    Jihyun LEE  Sungwon KANG  

     
    PAPER-Office Information Systems, e-Business Modeling
    Publicized: 2014/12/26  Vol: E98-D No:4  Page(s): 911-921

    The ultimate purpose of a business process is to promote business values. Thus, any process that fails to enhance or promote business values should be improved or adjusted so that those values can be achieved. An organization should therefore be able to confirm whether a business value has been achieved; furthermore, to cope with changes in the business environment, it should be able to define the necessary measures on the basis of business values. This paper proposes techniques for measuring a business process based on business values, which can be used to monitor and control business activities with a focus on the attainment of business values. To show the feasibility of the techniques, we compare their monitoring and controlling capabilities with those of the current fulfillment process of a company. The results show that the proposed techniques are effective in linking business values to relevant processes and in integrating each measurement result in accordance with the management level.

  • Advantages and Drawbacks of Smartphones and Tablets for Visually Impaired People —— Analysis of ICT User Survey Results ——

    Tetsuya WATANABE  Toshimitsu YAMAGUCHI  Kazunori MINATANI  

     
    PAPER-Rehabilitation Engineering and Assistive Technology
    Publicized: 2014/12/26  Vol: E98-D No:4  Page(s): 922-929

    A survey was conducted on the use of ICT by visually impaired people. Among 304 respondents, 81 used smartphones and 44, tablets. Blind people used feature phones at a higher rate and smartphones and tablets at lower rates than people with low vision. The most popular smartphone model was iPhone and the most popular tablet model was iPad. While almost all blind users used the speech output accessibility feature and only a few of them used visual features, low vision users used both visual features such as Zoom, Large text, and Invert colors and speech output at high rates both on smartphones and tablets. The most popular text entry methods were different between smartphones and tablets. For smartphones flick and numeric keypad input were popular among low vision users while voice input was the most popular among blind users. For tablets a software QWERTY keyboard was the most popular among both blind and low vision users. The advantages of smartphones were access to geographical information, quick Web browsing, voice input, and extensibility for both blind and low vision users, object recognition for blind users, and readability for low vision users. Tablets also work as a vision aid for people with low vision. The drawbacks of smartphones and tablets were text entry and touch operation difficulties and inaccessible apps for both blind and low vision users, problems in speech output for blind users, and problems in readability for low vision users. Researchers and makers of operating systems (OS) and apps should assume responsibility for solving these problems.

  • A Spatially Correlated Mixture Model for Image Segmentation

    Kosei KURISU  Nobuo SUEMATSU  Kazunori IWATA  Akira HAYASHI  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2015/01/06  Vol: E98-D No:4  Page(s): 930-937

    Finite mixture modeling has been widely used in image segmentation. In its simplest form, the spatial correlation among neighboring pixels is not taken into account, and the segmentation results can be severely degraded by image noise. We propose a spatially correlated mixture model in which the mixing proportions of the finite mixture model are governed by a set of underlying functions defined on the image space. The spatial correlation among pixels is introduced by placing a Gaussian process prior on the underlying functions. The spatial correlation can thus be set rather directly and flexibly by choosing the covariance function of the Gaussian process prior. The effectiveness of our model is demonstrated by experiments on synthetic and real images.
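    One standard way to write down such a model (a sketch consistent with the abstract, not necessarily the paper's exact parameterization) is to make the pixel-wise mixing proportions a softmax of latent functions that carry Gaussian-process priors:

```latex
p(y_i \mid x_i) = \sum_{k=1}^{K} \pi_k(x_i)\, \mathcal{N}\bigl(y_i \mid \mu_k, \Sigma_k\bigr),
\qquad
\pi_k(x) = \frac{\exp\bigl(g_k(x)\bigr)}{\sum_{j=1}^{K} \exp\bigl(g_j(x)\bigr)},
\qquad
g_k \sim \mathcal{GP}\bigl(0, \kappa(x, x')\bigr),
```

    where $y_i$ is the observation at pixel location $x_i$ and the covariance function $\kappa$ controls how strongly nearby pixels prefer the same mixture component.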

  • Robust Visual Tracking Using Sparse Discriminative Graph Embedding

    Jidong ZHAO  Jingjing LI  Ke LU  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2015/01/19  Vol: E98-D No:4  Page(s): 938-947

    For robust visual tracking, the main challenge of a subspace representation model is the difficulty of handling the various appearances of the target object. Traditional subspace learning tracking algorithms neglect the discriminative correlation between different multi-view target samples and the effectiveness of sparse subspace learning. To learn a better subspace representation model, we design a discriminative graph to model both the labeled target samples with various appearances and the updated foreground and background samples, which are selected using an incremental updating scheme. The proposed discriminative graph structure not only explicitly captures multi-modal intraclass correlations within the labeled samples but also obtains a balance between the within-class local manifold and the global discriminative information from foreground and background samples. Based on the discriminative graph, we achieve a sparse embedding using the L2,1-norm, which is incorporated to select relevant features and learn the transformation in a unified framework. In the tracking procedure, subspace learning is embedded into a Bayesian inference framework using compound motion estimation and a discriminative observation model, which makes localization significantly more effective and accurate. Experiments on several videos demonstrate that the proposed algorithm is robust to varying appearances, especially in dynamically changing and cluttered situations, and performs better than alternatives reported in the recent literature.
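    For reference, the L2,1 norm that induces row sparsity (and hence feature selection) and a generic form of the resulting graph-embedding objective are written below. This is a sketch of the kind of problem described, not the paper's exact formulation: $X$ denotes the samples, $L$ the Laplacian of the discriminative graph, $W$ the learned transformation, and any normalization constraint on $W$ is omitted.

```latex
\|W\|_{2,1} = \sum_{i=1}^{d} \Bigl( \sum_{j=1}^{m} W_{ij}^{2} \Bigr)^{1/2},
\qquad
\min_{W} \; \operatorname{tr}\bigl(W^{\top} X L X^{\top} W\bigr) + \lambda \,\|W\|_{2,1}.
```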

  • Fault Localization Using Failure-Related Contexts for Automatic Program Repair

    Ang LI  Xiaoguang MAO  Yan LEI  Tao JI  

     
    LETTER-Software Engineering
    Publicized: 2015/01/08  Vol: E98-D No:4  Page(s): 955-959

    Fault localization is essential for effective program repair. However, preliminary studies have shown that existing fault localization approaches do not take the requirements of automatic repair into account and therefore restrict repair performance. To address this issue, this paper presents the first study on designing fault localization approaches for automatic program repair: we propose a fault localization approach that uses failure-related contexts to improve automatic program repair. The proposed approach first utilizes a program slicing technique to construct a failure-related context, then evaluates the suspiciousness of each element in this context, and finally passes the evaluation results to automatic program repair techniques for repairing the faulty programs. The experimental results demonstrate that the proposed approach is effective in improving automatic repair performance.
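    The suspiciousness-evaluation step could, for example, use a standard spectrum-based metric restricted to the slice. The sketch below uses the Ochiai formula purely as an illustration, since the abstract does not name the metric, and the data layout is an assumption.

```python
import math

def ochiai(failed_cover, failed_total, passed_cover):
    """Standard Ochiai suspiciousness score for one program element."""
    if failed_total == 0 or (failed_cover + passed_cover) == 0:
        return 0.0
    return failed_cover / math.sqrt(failed_total * (failed_cover + passed_cover))

def rank_context(slice_elements, coverage, failed_total):
    """Score only the statements inside the failure-related context (the slice),
    then hand the ranked list to the repair tool.
    coverage: statement -> {"failed": runs covering it that failed,
                            "passed": runs covering it that passed}."""
    scores = {
        stmt: ochiai(cov["failed"], failed_total, cov["passed"])
        for stmt, cov in coverage.items()
        if stmt in slice_elements            # restrict scoring to the slice
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```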

  • Supporting Jogging at an Even Pace by Synchronizing Music Playback Speed with Runner's Pace

    Tetsuro KITAHARA  Shunsuke HOKARI  Tatsuya NAGAYASU  

     
    LETTER-Human-computer Interaction
    Publicized: 2015/01/09  Vol: E98-D No:4  Page(s): 968-971

    In this paper, we propose a jogging support system that plays back background music while synchronizing its tempo with the user's jogging pace. Keeping an even pace is important in jogging, but it is not easy because of tiredness. Our system conveys the variation of the runner's pace by changing the playback speed of the music according to that variation. Because the runner must keep an even pace in order to hear the music at its normal speed, he or she is spontaneously encouraged to do so. Experimental results show that our system reduced the variation of jogging pace.
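    The pace-to-playback-speed mapping can be as simple as the sketch below, which is a guess at one straightforward implementation rather than the authors' system; the target cadence and the averaging over recent steps are assumptions.

```python
def playback_speed(step_intervals_s, target_pace_spm=160.0):
    """Map the runner's recent cadence to a music playback-rate multiplier.
    step_intervals_s: recent times between footfalls, e.g. from an accelerometer."""
    if not step_intervals_s:
        return 1.0
    current_spm = 60.0 / (sum(step_intervals_s) / len(step_intervals_s))
    # Faster than the target pace -> music speeds up, slower -> it slows down,
    # so the music sounds normal only when the pace is even and on target.
    return current_spm / target_pace_spm
```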

  • Weighted-Combining Calibration on Multiuser MIMO Systems with Implicit Feedback Open Access

    Hayato FUKUZONO  Tomoki MURAKAMI  Riichi KUDO  Yasushi TAKATORI  Masato MIZOGUCHI  

     
    PAPER-Wireless Communication Technologies
    Vol: E98-B No:4  Page(s): 701-713

    Implicit feedback is an approach that utilizes uplink channel state information (CSI) for downlink transmit beamforming in multiple-input multiple-output (MIMO) systems, relying on over-the-air channel reciprocity. Implicit feedback improves throughput efficiency because the overhead of feeding back CSI whenever the over-the-air channel responses change is avoided. However, implicit feedback requires calibration of the circuitry responses contained in the uplink CSI, because the actual downlink and uplink channel responses do not match owing to the different transmit and receive circuitry chains. This paper presents our proposed calibration scheme, weighted-combining calibration (WCC), which offers improved calibration accuracy. In WCC, an access point (AP) calculates multiple calibration coefficients from ratios of downlink and uplink CSI and then combines the coefficients with minimum mean square error (MMSE) weights. The weights are derived using a linear approximation in the high signal-to-noise power ratio (SNR) regime. The analytical mean square error (MSE) of the calibration coefficients for WCC and for comparison schemes is expressed based on this linear approximation. Computer simulations show that the analytical MSE matches the simulated one when the linear approximation holds, and that WCC improves both the MSE and the signal-to-interference-plus-noise power ratio (SINR). Indoor experiments are performed on a multiuser MIMO system with implicit feedback based on orthogonal frequency division multiplexing (OFDM), built using measurement hardware. The experimental results verify that channel reciprocity can be exploited in the developed multiuser MIMO-OFDM system and that WCC is also effective in indoor environments.
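    In generic form, the combining step looks like the following. This is a sketch only: each coefficient estimate is a ratio of downlink and uplink CSI, and the weights are written here as ordinary inverse-variance weights, whereas the paper derives its MMSE weights from a high-SNR linear approximation.

```latex
\hat{c}_m = \frac{\hat{h}^{\mathrm{dl}}_m}{\hat{h}^{\mathrm{ul}}_m},
\qquad
\hat{c}_{\mathrm{comb}} = \sum_{m=1}^{M} w_m\, \hat{c}_m,
\qquad
w_m = \frac{1/\sigma_m^{2}}{\sum_{m'=1}^{M} 1/\sigma_{m'}^{2}},
```

    where $\hat{c}_m$ is the $m$-th calibration-coefficient estimate and $\sigma_m^{2}$ its (approximate) error variance.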

  • Novel Synchronization and BER Improvement Method for Public Safety Mobile Communication Systems Employing Heterogeneous Cognitive Radio

    Masafumi MORIYAMA  Takeo FUJII  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies
    Vol: E98-B No:4  Page(s): 736-745

    In this paper, a novel synchronization method is proposed for a heterogeneous cognitive radio that combines public safety mobile communication systems (PMCSs) with commercial mobile wireless communication systems (CMWCSs). The proposed method enables self-synchronization of the PMCSs as well as co-synchronization of PMCSs and CMWCSs. Here, self-synchronization means that each system obtains its own timing synchronization, while co-synchronization means that a system correctly recognizes data transmitted from other systems. We especially focus on PMCS self-synchronization because it is one of the most difficult parts of our proposed cognitive radio, which improves PMCS communication quality. The proposed method is intended for systems employing differentially encoded π/4-shift QPSK modulation. Synchronization is achieved by correlating envelopes calculated from a PMCS's received signals with subsidiary information (SI) sent via a CMWCS. The performance of the proposed synchronization method is evaluated by computer simulation. Moreover, because this SI can also be used to improve the bit error rate (BER) of PMCSs, BER improvement and efficient SI transmission methods are derived, and their performance is evaluated.
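    A bare-bones version of envelope correlation for timing recovery is sketched below, purely as an illustration; the actual scheme operates on differentially encoded π/4-shift QPSK signals and the SI format defined in the paper, and the normalization here is an assumption.

```python
import numpy as np

def find_sync_offset(received, side_info):
    """Slide the known side-information envelope over the received envelope
    and return the lag with the highest correlation (received must be longer)."""
    env = np.abs(np.asarray(received))
    env = (env - env.mean()) / (env.std() + 1e-12)
    ref = np.asarray(side_info, dtype=float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    corr = np.correlate(env, ref, mode="valid")
    return int(np.argmax(corr))   # sample offset where the SI best aligns
```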

  • A New Approach to Identify User Authentication Methods toward SSH Dictionary Attack Detection

    Akihiro SATOH  Yutaka NAKAMURA  Takeshi IKENAGA  

     
    PAPER-Authentication
    Publicized: 2014/12/04  Vol: E98-D No:4  Page(s): 760-768

    A dictionary attack against SSH is a common security threat. Many methods rely on network traffic to detect SSH dictionary attacks because the connections of remote login, file transfer, and TCP/IP forwarding are visibly distinct from those of attacks. However, these methods incorrectly judge the connections of automated operation tasks as those of attacks due to their mutual similarities. In this paper, we propose a new approach to identify user authentication methods on SSH connections and to remove connections that employ non-keystroke based authentication. This approach is based on two perspectives: (1) an SSH dictionary attack targets a host that provides keystroke based authentication; and (2) automated tasks through SSH need to support non-keystroke based authentication. Keystroke based authentication relies on a character string that is input by a human; in contrast, non-keystroke based authentication relies on information other than a character string. We evaluated the effectiveness of our approach through experiments on real network traffic at the edges in four campus networks, and the experimental results showed that our approach provides high identification accuracy with only a few errors.

  • Efficient Data Possession Auditing for Real-World Cloud Storage Environments

    Da XIAO  Lvyin YANG  Chuanyi LIU  Bin SUN  Shihui ZHENG  

     
    PAPER-Cloud Security
    Publicized: 2014/12/04  Vol: E98-D No:4  Page(s): 796-806

    Provable Data Possession (PDP) schemes enable users to efficiently check the integrity of their data in the cloud. Support for massive and dynamic sets of data and adaptability to third-party auditing are two key factors that affect the practicality of existing PDP schemes. We propose a secure and efficient PDP system called IDPA-MF-PDP by exploiting the characteristics of real-world cloud storage environments. The cost of auditing massive and dynamic sets of data is dramatically reduced by utilizing a multiple-file PDP scheme (MF-PDP) based on the data update patterns of cloud storage. The deployment and operational costs of third-party auditing and the risks of information leakage are reduced by an auditing framework based on integrated data possession auditors (DPAs), instantiated by trusted hardware and tamper-evident audit logs. The interaction protocols between the user, the cloud server, and the DPA integrate MF-PDP with the auditing framework. Analytical and experimental results demonstrate that IDPA-MF-PDP provides the same level of security as the original PDP scheme while reducing the computation and communication overhead on the DPA from linear in the size of the data to nearly constant. The performance of the system is bounded by disk I/O capacity.

  • A Distributed and Cooperative NameNode Cluster for a Highly-Available Hadoop Distributed File System

    Yonghwan KIM  Tadashi ARARAGI  Junya NAKAMURA  Toshimitsu MASUZAWA  

     
    PAPER-Computer System
    Publicized: 2014/12/26  Vol: E98-D No:4  Page(s): 835-851

    Recently, Hadoop has attracted much attention from engineers and researchers as an emerging and effective framework for Big Data. HDFS (Hadoop Distributed File System) can manage a huge amount of data with high performance and reliability using only commodity hardware. However, HDFS requires a single master node, called a NameNode, to manage the entire namespace (or all the i-nodes) of a file system. This creates a SPOF (Single Point Of Failure), because the file system becomes inaccessible when the NameNode fails, and it also creates a performance bottleneck, since every access request to the file system has to contact the NameNode. Hadoop 2.0 resolves the SPOF problem by introducing manual failover based on two NameNodes, Active and Standby. However, it still has the bottleneck problem, since all access requests have to contact the Active NameNode during ordinary execution. It may also lose the advantage of using commodity hardware, since the two NameNodes have to share highly reliable, sophisticated storage. In this paper, we propose a new HDFS architecture to resolve all the problems mentioned above.

8861-8880 hits (42807 hits)