
Author Search Result

[Author] Ying-Dar LIN (11 hits)

1-11 of 11 hits
  • Application Performance Profiling in Android Dalvik Virtual Machines

    Hung-Cheng CHANG  Kuei-Chung CHANG  Ying-Dar LIN  Yuan-Cheng LAI  

     
    PAPER-Software System

    Publicized: 2016/01/25  Vol: E99-D No:5  Page(s): 1296-1303

    Most Android applications are written in Java and run on a Dalvik virtual machine. For smartphone vendors and users who wish to know the performance of an application on a particular smartphone but cannot obtain its source code, we propose a new technique, Dalvik Profiler for Applications (DPA), to profile an Android application on a Dalvik virtual machine without source-code support. Within the Dalvik virtual machine, we determine the entry and exit locations of a method, log its execution time, and analyze the log to determine the performance of the application. Our experimental results show an error ratio of less than 5% relative to the baseline tool Traceview, which instruments source code. The results also reveal some interesting behaviors of applications and smartphones: some smartphones with higher hardware specifications perform 1.5 times worse than phones with lower specifications. DPA is now publicly available as an open-source tool.
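
    As a rough illustration of the profiling idea described above, the sketch below (in Python, with a hypothetical log format; DPA itself works inside the Dalvik VM) pairs method entry and exit events and accumulates per-method execution time.

    ```python
    from collections import defaultdict

    def profile(log):
        """log: iterable of (timestamp_us, 'enter'|'exit', method_name)."""
        stack = []                     # currently active method invocations
        totals = defaultdict(int)      # method -> accumulated inclusive time (us)
        calls = defaultdict(int)       # method -> number of invocations
        for ts, event, method in log:
            if event == "enter":
                stack.append((method, ts))
            else:                      # 'exit'
                name, start = stack.pop()
                assert name == method, "unbalanced enter/exit events"
                totals[method] += ts - start
                calls[method] += 1
        return {m: (totals[m], calls[m]) for m in totals}

    # Example: nested calls A -> B; inclusive times are 70 us for A, 30 us for B.
    trace = [(0, "enter", "A"), (10, "enter", "B"), (40, "exit", "B"), (70, "exit", "A")]
    print(profile(trace))
    ```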

  • Accelerating Web Content Filtering by the Early Decision Algorithm

    Po-Ching LIN  Ming-Dao LIU  Ying-Dar LIN  Yuan-Cheng LAI  

     
    PAPER-Contents Technology and Web Information Systems

    Vol: E91-D No:2  Page(s): 251-257

    Real-time content analysis is typically a bottleneck in Web filtering. To accelerate the filtering process, this work presents a simple but effective early decision algorithm that analyzes only part of the Web content. The algorithm makes the filtering decision, either to block or to pass the Web content, as soon as it is confident, with high probability, that the content belongs to a banned or an allowed category. Experiments show that the algorithm needs to examine only around one-fourth of the Web content on average, while accuracy remains fairly good: 89% for banned content and 93% for allowed content. The algorithm can complement other Web filtering approaches, such as URL blocking, to filter Web content with high accuracy and efficiency. Text classifiers in other applications can also follow the early-decision principle to speed up their processing.
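
    A minimal sketch of the early-decision idea, assuming a hypothetical per-chunk scoring function and thresholds rather than the paper's actual classifier: the content is scanned chunk by chunk, and the decision is made as soon as either category is sufficiently likely.

    ```python
    import math

    def early_decision(chunks, score_chunk, threshold=0.95):
        """chunks: pieces of the page; score_chunk(chunk) returns a
        log-likelihood-ratio contribution (positive favors 'banned')."""
        llr = 0.0
        for examined, chunk in enumerate(chunks, start=1):
            llr += score_chunk(chunk)
            p_banned = 1.0 / (1.0 + math.exp(-llr))
            if p_banned >= threshold:
                return "block", examined       # decided early: banned
            if p_banned <= 1.0 - threshold:
                return "pass", examined        # decided early: allowed
        # examined everything without reaching the confidence bar
        return ("block" if llr > 0 else "pass"), len(chunks)

    # Example with a toy scorer that counts occurrences of a banned keyword.
    pages = ["lorem ipsum", "casino casino casino", "dolor sit amet"]
    decision, used = early_decision(pages, lambda c: 2.0 * c.count("casino") - 0.5)
    print(decision, used)   # block 2
    ```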

  • Reconfigurable Multi-Resolution Performance Profiling in Android Applications

    Ying-Dar LIN  Kuei-Chung CHANG  Yuan-Cheng LAI  Yu-Sheng LAI  

     
    PAPER-Fundamentals of Information Systems

    Vol: E96-D No:9  Page(s): 2039-2046

    Applications on embedded devices run under tight constraints on computation and energy resources, so it is important that they are aware of the energy constraint and execute efficiently. Existing execution-time and energy profiling tools can help developers identify application bottlenecks, but they need a large amount of space to store detailed profiling data at runtime, which is a heavy demand on embedded devices. In this article, a reconfigurable multi-resolution profiling (RMP) approach is proposed to handle this issue on embedded devices. RMP first instruments all profiling points into the source code of the target application and framework; developers can then narrow down the cause of a bottleneck by adjusting the profiling scope step by step with the configuration tool, without recompiling the profiled targets. RMP has been implemented as an open-source tool for Android systems. Experimental results show that the log space RMP requires for a web browser application is 25 times smaller than that of the Android debug class, and its execution-time profiling error rate is 24 times lower. Moreover, the CPU and memory overheads of RMP are only 5% and 6.53%, respectively, for the browsing scenario.
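
    The sketch below illustrates, with hypothetical names and data structures, the reconfigurable part of this idea: profiling points stay instrumented, but each one logs only if its scope is enabled in the current configuration, so the resolution can be changed at run time without recompilation.

    ```python
    import time

    config = {"enabled_scopes": {"app"}}       # e.g. {"app", "framework", "method"}
    trace_log = []

    def profile_point(scope, tag):
        """Instrumented everywhere; records an event only if its scope is enabled."""
        if scope in config["enabled_scopes"]:
            trace_log.append((time.monotonic_ns(), scope, tag))

    def set_resolution(scopes):
        """Narrow or widen the profiling scope at run time, no recompilation."""
        config["enabled_scopes"] = set(scopes)

    profile_point("framework", "View.draw")    # ignored under the current config
    profile_point("app", "Activity.onCreate")  # recorded
    set_resolution({"app", "method"})          # refine the scope step by step
    profile_point("method", "parseHtml")       # now recorded as well
    print(trace_log)
    ```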

  • Service-Sensitive Routing in DiffServ/MPLS Networks

    Nai-Bin HSU  Ying-Dar LIN  Mao-Huang LI  Tsern-Huei LEE  

     
    PAPER-Internet

    Vol: E84-B No:10  Page(s): 2871-2879

    This study investigates the unfairness that arises when QoS routing does not consider the mix of traffic classes. Unfairness is mainly caused by routing different flows of the same class through paths whose traffic mixtures across service classes differ greatly. A new routing scheme, Service-sensitive Routing (SSR), which takes the traffic mixture of the various service classes into account, is therefore proposed. To determine the QoS route for a flow request, SSR considers not only the available bandwidth and delay of the candidate paths but also the mix of traffic classes on them. Additionally, the hybrid-granularity routing decision in the SSR scheme is scalable and well suited to Differentiated Services and MPLS networks. Extensive simulations show that SSR effectively reduces the variance of the average queuing delay, by approximately 20% to 35% for a moderate offered load compared to shortest-path routing. Furthermore, the scheme reduces the fractional reward loss and the bandwidth blocking probability.
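
    A rough sketch of this kind of mixture-aware path selection, under assumed path attributes and a hypothetical penalty function (not SSR's actual cost model): among feasible candidates, prefer the path on which the requesting flow's class is least represented.

    ```python
    def select_path(candidates, flow_class, bw_req, delay_bound):
        """candidates: list of dicts with 'avail_bw', 'delay' and 'class_load'
        (a mapping from service class to load already carried on the path)."""
        feasible = [p for p in candidates
                    if p["avail_bw"] >= bw_req and p["delay"] <= delay_bound]

        def mix_penalty(path):
            total = sum(path["class_load"].values()) or 1.0
            # fraction of the path's traffic already belonging to this flow's class
            return path["class_load"].get(flow_class, 0.0) / total

        # prefer paths where this class is under-represented; break ties on delay
        return min(feasible, key=lambda p: (mix_penalty(p), p["delay"]), default=None)

    paths = [
        {"avail_bw": 40, "delay": 12, "class_load": {"EF": 30, "AF": 10}},
        {"avail_bw": 35, "delay": 15, "class_load": {"EF": 5, "AF": 45}},
    ]
    print(select_path(paths, "EF", bw_req=10, delay_bound=20))  # picks the second path
    ```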

  • Co-DRR: An Integrated Uplink and Downlink Scheduler for Bandwidth Management over Wireless LANs

    Huan-Yun WEI  Ching-Chuang CHIANG  Ying-Dar LIN  

     
    PAPER-Network

    Vol: E90-B No:8  Page(s): 2022-2033

    Bandwidth management over wired bottleneck links has been an effective way to utilize network resources. For the rapidly emerging IEEE 802.11b Wireless LAN (WLAN), the limited WLAN bandwidth becomes a new bottleneck and likewise requires bandwidth management. Most existing solutions focus exclusively on optimizing multimedia traffic or on pure downlink or pure uplink fairness, or are incompatible with IEEE 802.11b. This study proposes cooperative deficit round robin (co-DRR), an IEEE 802.11b-compatible, host-based fair scheduling algorithm built on the deficit round robin (DRR) and distributed-DRR (DDRR) schemes, in which the uplink and downlink quantum calculations cooperate to control uplink and downlink bandwidth simultaneously. Co-DRR uses the standard PCF mode so that the contention-free period can compensate for unfairness arising in the contention period. Numerical results demonstrate that co-DRR scales up to 100 mobile hosts even under a high bit error rate (0.0001) while achieving long-term uplink/downlink fairness (CoV<0.01) among competing mobile hosts.
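
    For reference, the sketch below shows one round of plain deficit round robin, the base mechanism that co-DRR extends with cooperating uplink/downlink quantum calculations; the queue contents and quantum values are illustrative assumptions.

    ```python
    from collections import deque

    def drr_round(queues, deficits, quanta, send):
        """queues: host -> deque of packet sizes (bytes); deficits/quanta: host -> bytes;
        send(host, size) transmits one packet."""
        for host, q in queues.items():
            if not q:
                continue
            deficits[host] += quanta[host]        # earn this round's quantum
            while q and q[0] <= deficits[host]:   # send while the credit suffices
                size = q.popleft()
                deficits[host] -= size
                send(host, size)
            if not q:
                deficits[host] = 0                # an emptied queue keeps no credit

    queues = {"h1": deque([500, 1500]), "h2": deque([1200])}
    deficits = {"h1": 0, "h2": 0}
    quanta = {"h1": 1000, "h2": 1000}
    drr_round(queues, deficits, quanta, lambda h, s: print("send", h, s))
    # Round 1: h1 sends the 500-byte packet; h2 must wait for more credit.
    ```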

  • Optimal Ranging Algorithms for Medium Access Control in Hybrid Fiber Coax Networks

    Frank Yeong-Sung LIN  Wei-Ming YIN  Ying-Dar LIN  Chih-Hao LIN  

     
    PAPER-Network

    Vol: E85-B No:10  Page(s): 2319-2326

    The ranging algorithm allows active stations to measure their distances to the headend for synchronization purposes in Hybrid Fiber Coax (HFC) networks. A practicable mechanism for resolving contention among numerous stations is to randomly delay the transmission of their control messages. Since a shorter contention cycle increases slot throughput, this study develops three mechanisms, fixed random delay, variable random delay, and optimal random delay, to minimize the contention cycle time. Simulations demonstrate that optimal random delay effectively minimizes the contention cycle time and approaches the theoretical optimum throughput of 0.18 of pure ALOHA. Sensitivity analysis further shows that over-estimating the number of active stations affects the contention cycle time less than under-estimating it, although both degrade slot throughput. Two estimation schemes, maximum likelihood and average likelihood, are therefore presented to estimate the number of active stations in each contention resolution round. Simulations show that the proposed estimation schemes are effective even when the estimate in the initial contention round is inaccurate.
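
    The toy simulation below (not the paper's model) illustrates the random-delay contention idea: each active station delays its ranging message by a random number of slots, and a slot succeeds only if exactly one station chose it; the delay window is the knob that the optimal-random-delay scheme tunes each round.

    ```python
    import random
    from collections import Counter

    def contention_round(n_stations, delay_window):
        """Each station picks a random slot in [0, delay_window); a slot succeeds
        only if exactly one station picked it."""
        picks = Counter(random.randrange(delay_window) for _ in range(n_stations))
        successes = sum(1 for count in picks.values() if count == 1)
        return successes / delay_window           # slot throughput of this round

    random.seed(1)
    active_stations = 50
    # A larger window means fewer collisions but a longer contention cycle;
    # the optimal-random-delay scheme tunes this trade-off every round.
    for window in (25, 50, 100, 200):
        print(window, contention_round(active_stations, window))
    ```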

  • Two-Phase Minislot Scheduling Algorithm for HFC QoS Services Provisioning

    Wei-Ming YIN  Chia-Jen WU  Ying-Dar LIN  

     
    PAPER-Fiber-Optic Transmission

    Vol: E85-B No:3  Page(s): 582-593

    Data-Over-Cable Service Interface Specifications v1.1 (DOCSIS v1.1), developed for data transmission over Hybrid Fiber Coaxial (HFC) networks, defines five upstream services for supporting per-flow Quality of Service (QoS). The cable modem termination system (CMTS) must periodically grant upstream transmission opportunities to the QoS flows based on their QoS parameters; however, packets may violate their QoS requirements when several flows demand the same interval for transmission. This study proposes a two-phase minislot scheduling algorithm, consisting of a scheduling sequence determination phase and a minislot assignment phase, to reduce the QoS violation rate. In the scheduling sequence determination phase, the flow whose packets are least likely to violate QoS is scheduled first. Then, in the minislot assignment phase, the scheduler allocates to each flow the available interval where the likelihood of packet violation is smallest. Simulation results demonstrate that the scheduling algorithm reduces the QoS violation rate by 35-80% compared with a first-come-first-served, random-selection algorithm, and increases utilization by 25% as well. The two-phase minislot scheduling algorithm works within the DOCSIS v1.1 framework.
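
    A schematic version of the two phases, using a hypothetical data model and violation-risk function rather than the scheduler's real DOCSIS state: phase 1 orders the flows, and phase 2 greedily gives each flow the free interval least likely to cause a violation.

    ```python
    def schedule(flows, free_intervals, violation_risk):
        """flows: flow ids; free_intervals: list of (start, length) in minislots;
        violation_risk(flow, interval) -> probability in [0, 1]."""
        # Phase 1: determine the scheduling sequence; the flow least likely to
        # violate its QoS anywhere is scheduled first.
        order = sorted(flows, key=lambda f: min(violation_risk(f, iv)
                                                for iv in free_intervals))
        assignment = {}
        for flow in order:
            if not free_intervals:
                break
            # Phase 2: give this flow the interval with the smallest risk.
            best = min(free_intervals, key=lambda iv: violation_risk(flow, iv))
            assignment[flow] = best
            free_intervals.remove(best)
        return assignment

    risk = {("A", (0, 4)): 0.1, ("A", (4, 4)): 0.6,
            ("B", (0, 4)): 0.5, ("B", (4, 4)): 0.7}
    print(schedule(["A", "B"], [(0, 4), (4, 4)], lambda f, iv: risk[(f, iv)]))
    ```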

  • kP2PADM: An In-Kernel Architecture of P2P Management Gateway

    Ying-Dar LIN  Po-Ching LIN  Meng-Fu TSAI  Tsao-Jiang CHANG  Yuan-Cheng LAI  

     
    PAPER-Computer Systems

    Vol: E91-D No:10  Page(s): 2398-2405

    Managing the increasing traffic from instant messengers and P2P applications is becoming more important. We present kP2PADM, an in-kernel architecture for a P2P management gateway built upon open-source packages with several modifications and design techniques. First, the in-kernel design streamlines the data path through the gateway. Second, a dual-queue buffer eliminates head-of-line blocking among multiple connections. Third, a connection cache reduces useless reconnection attempts from peers. Fourth, a fast-pass mechanism avoids slowing down TCP transmission. The in-kernel design approximately doubles the throughput of an equivalent user-space design. Internal benchmarks also analyze the impact of each function on performance.
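
    As one small illustration of these techniques, the sketch below shows a connection cache in the spirit described above, with an assumed key, timeout, and policy rather than kP2PADM's actual in-kernel behavior: recently closed peer connections are remembered so that immediate reconnection attempts can be short-circuited.

    ```python
    import time

    class ConnectionCache:
        """Remembers recently closed peer connections so that immediate
        reconnection attempts can be dropped without further processing."""

        def __init__(self, ttl=30.0):
            self.ttl = ttl
            self._closed = {}            # (src_ip, dst_ip, dst_port) -> close time

        def record_close(self, key):
            self._closed[key] = time.monotonic()

        def should_short_circuit(self, key):
            closed_at = self._closed.get(key)
            if closed_at is None:
                return False
            if time.monotonic() - closed_at > self.ttl:
                del self._closed[key]    # entry expired; treat as a fresh connection
                return False
            return True                  # a reconnection attempt arriving too soon

    cache = ConnectionCache()
    cache.record_close(("10.0.0.2", "140.113.1.1", 4662))
    print(cache.should_short_circuit(("10.0.0.2", "140.113.1.1", 4662)))   # True
    ```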

  • Embedded TaintTracker: Lightweight Run-Time Tracking of Taint Data against Buffer Overflow Attacks

    Yuan-Cheng LAI  Ying-Dar LIN  Fan-Cheng WU  Tze-Yau HUANG  Frank C. LIN  

     
    PAPER

    Vol: E94-D No:11  Page(s): 2129-2138

    A buffer overflow attack occurs when a program writes data outside its allocated memory in an attempt to compromise a system. Approximately forty percent of all software vulnerabilities over the past several years are attributed to buffer overflow. Taint tracking is a technique for preventing buffer overflow attacks. Previous taint-tracking studies ran the victim's program on an emulator that dynamically instruments the code to track the propagation of tainted data in memory and to check whether malicious code is executed. The critical problem of this approach is its heavy performance overhead: about 60% of the overhead comes from the emulator, and the remaining 40% from dynamic instrumentation and the maintenance of taint information. This article proposes a new taint-style system, Embedded TaintTracker, which eliminates the overhead of the emulator and dynamic instrumentation by building the checking mechanism into the operating system (OS) kernel and moving the instrumentation from run time to compilation time. Results show that the proposed system outperforms the previous work, TaintCheck, by at least 8 times in terms of throughput degradation and is about 17.5 times faster than TaintCheck when browsing 1 KB web pages.
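
    The toy model below illustrates taint tracking in general, not Embedded TaintTracker's kernel implementation: bytes from untrusted input are marked tainted, taint propagates on copies, and a control-flow target is checked before it is used.

    ```python
    class TaintMemory:
        """Toy byte-granularity memory with a taint bit per address."""

        def __init__(self):
            self.mem = {}        # address -> byte value
            self.taint = set()   # addresses currently holding tainted data

        def write_input(self, addr, data):
            """Taint source: bytes arriving from untrusted input."""
            for i, byte in enumerate(data):
                self.mem[addr + i] = byte
                self.taint.add(addr + i)

        def copy(self, dst, src, length):
            """Propagation: a memcpy-like operation carries taint along."""
            for i in range(length):
                self.mem[dst + i] = self.mem.get(src + i, 0)
                if src + i in self.taint:
                    self.taint.add(dst + i)
                else:
                    self.taint.discard(dst + i)

        def check_jump(self, addr):
            """Sink check: refuse to use tainted data as a control-flow target."""
            if addr in self.taint:
                raise RuntimeError("tainted value used as control-flow target")

    m = TaintMemory()
    m.write_input(0x1000, b"\x41" * 8)   # attacker-controlled bytes
    m.copy(0x2000, 0x1000, 8)            # an overflow copies them over a saved address
    m.check_jump(0x2000)                 # raises RuntimeError: attack detected
    ```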

  • Two-Stage Dynamic Uplink Channel and Slot Assignment for GPRS

    Yu-Ching HSU  Ying-Dar LIN  Mei-Yan CHIANG  

     
    PAPER-Network

    Vol: E86-B No:9  Page(s): 2694-2700

    General packet radio service (GPRS) uses a two-stage mechanism to allocate uplink radio resources to mobile stations (MSs). In stage 1, the base station (BS) assigns several packet data channels (PDCHs) to an MS, and a PDCH may be assigned to multiple MSs. In stage 2, the BS then selects one of the MSs multiplexed on a PDCH to use the radio resource. This paper examines how to maintain load balance among PDCHs in stage 1 and proposes several selection schemes to lower the mis-selection rate in stage 2. Our simulation results show that poor load balancing and poor selection schemes lead to lower system throughput and a non-negligible increase in packet queuing delay. Among the stage-2 selection policies, round robin with linearly-accumulated adjustment (RRLAA) has the lowest mis-selection rate and outperforms the policy without any heuristic by up to 50%.
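
    A simple illustration of the stage-1 load-balancing idea, with hypothetical data structures (RRLAA's stage-2 adjustment is not reproduced here): a newly arriving MS is assigned the PDCHs that currently multiplex the fewest MSs.

    ```python
    def assign_pdchs(pdch_load, n_needed):
        """pdch_load: dict pdch_id -> number of MSs already multiplexed on it.
        Picks the least-loaded PDCHs for a newly arriving MS."""
        chosen = sorted(pdch_load, key=pdch_load.get)[:n_needed]
        for ch in chosen:
            pdch_load[ch] += 1
        return chosen

    load = {0: 3, 1: 1, 2: 2, 3: 1}
    print(assign_pdchs(load, 2))   # the two least-loaded PDCHs: [1, 3]
    print(load)                    # {0: 3, 1: 2, 2: 2, 3: 2}
    ```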

  • Bandwidth Brokers of Instantaneous and Book-Ahead Requests for Differentiated Services Networks

    Ying-Dar LIN  Cheng-Hsien CHANG  Yu-Ching HSU  

     
    PAPER-Network

    Vol: E85-B No:1  Page(s): 278-283

    Quality of Service (QoS) reservations in Differentiated Services (DiffServ) networks can be classified into two types: book-ahead (BA) requests and instantaneous requests (IRs). When an admitted BA request becomes active, some ongoing IRs are dropped if the bandwidth is insufficient to support both the IRs and the BA request. The admission control should therefore predict the lifetime, i.e., the look-ahead time, of the IRs to prevent admitted IRs from being dropped, and should check whether the available bandwidth during the look-ahead time is sufficient for an incoming IR. We propose an application-aware look-ahead admission control for IRs, which determines the look-ahead time for specific types of IR applications. An admitted BA request might also block subsequent ones that could bring more effective revenue; we therefore propose a deferrable model of admission control for BA requests. Simulation results indicate that the application-aware look-ahead admission control reduces the dropping probability and the wasted revenue of IRs by up to 10 times and 30%, respectively. Moreover, the deferrable model indeed yields more effective revenue from BA requests.
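
    A schematic look-ahead admission check for an IR, in which the per-application look-ahead table and all names are assumptions for illustration: the IR is admitted only if, at every point in its predicted lifetime, the capacity left over by booked BA requests and ongoing IRs can still accommodate it.

    ```python
    # Hypothetical per-application look-ahead times in seconds.
    LOOKAHEAD = {"voip": 180.0, "video": 1800.0}

    def admit_ir(app_type, bw_req, now, capacity, ba_reservations, ir_load):
        """ba_reservations: list of (start, end, bandwidth) for admitted BA requests;
        ir_load: bandwidth already used by admitted IRs."""
        horizon = now + LOOKAHEAD.get(app_type, 600.0)
        # BA load only increases at reservation start times, so checking 'now' and
        # every BA start inside the look-ahead window covers the peak load.
        check_points = [now] + [s for s, _, _ in ba_reservations if now < s <= horizon]
        for t in check_points:
            ba_load = sum(bw for s, e, bw in ba_reservations if s <= t < e)
            if capacity - ba_load - ir_load < bw_req:
                return False    # the IR would be dropped when that BA becomes active
        return True

    # A 2 Mb/s video IR at t=0 on a 10 Mb/s link with a 6 Mb/s BA request at t=900.
    print(admit_ir("video", 2.0, 0.0, 10.0, [(900.0, 3600.0, 6.0)], ir_load=3.0))
    ```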