Keyword Search Result

[Keyword] browser (7 hits)

1-7 of 7 hits
  • To Get Lost is to Learn the Way: An Analysis of Multi-Step Social Engineering Attacks on the Web (Open Access)

    Takashi KOIDE  Daiki CHIBA  Mitsuaki AKIYAMA  Katsunari YOSHIOKA  Tsutomu MATSUMOTO  

     
    PAPER

    Vol: E104-A No:1  Page(s): 162-181

    Web-based social engineering (SE) attacks manipulate users into performing specific actions, such as downloading malware or exposing personal information. To lure users more effectively, some SE attacks, which we call multi-step SE attacks, consist of a sequence of web pages starting from a landing page and require browser interactions at each page. Moreover, different browser interactions executed on the same page often branch into multiple sequences that redirect users to different SE attacks. Although common systems analyze only landing pages or conduct browser interactions limited to a specific attack, little effort has been made to follow such sequences of web pages to collect multi-step SE attacks. We propose STRAYSHEEP, a system that automatically crawls sequences of web pages and detects diverse multi-step SE attacks. We evaluate the effectiveness of STRAYSHEEP's three modules (landing-page collection, web crawling, and SE detection) in terms of the rate of collected landing pages leading to SE attacks, the efficiency of web crawling in reaching more SE attacks, and the accuracy of attack detection. Our experimental results indicate that STRAYSHEEP reaches 20% more SE attacks than crawling Alexa top sites and search results for trending words, crawls five times more efficiently than a simple crawling module, and detects SE attacks with 95.5% accuracy. We demonstrate that STRAYSHEEP collects a variety of SE attacks rather than being limited to a specific attack, and we clarify the techniques attackers use to trick users into browser interactions that redirect them to attacks.
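
    A minimal sketch (in Python) of the crawling idea described in this abstract: follow every candidate browser interaction on each page, breadth-first, up to a fixed depth, and flag pages that look like SE attacks. The helpers load_page, enumerate_interactions, perform_interaction, and looks_like_se_page are hypothetical placeholders for a real browser driver and a detection module; they are not part of STRAYSHEEP, and page objects are assumed to expose a url attribute.

    from collections import deque

    MAX_DEPTH = 3  # how many browser interactions to follow from the landing page

    def crawl_multi_step(landing_url, load_page, enumerate_interactions,
                         perform_interaction, looks_like_se_page):
        """Breadth-first exploration of interaction branches from a landing page."""
        detected = []    # pages flagged as social engineering attacks
        visited = set()  # URLs already explored, to avoid loops
        queue = deque([(load_page(landing_url), 0)])
        while queue:
            page, depth = queue.popleft()
            if page.url in visited:
                continue
            visited.add(page.url)
            if looks_like_se_page(page):  # e.g. fake warning, survey scam, forced download
                detected.append(page.url)
            if depth >= MAX_DEPTH:
                continue
            # Every interaction (link, button, overlay, ...) may branch to a
            # different sequence of pages, so all of them are followed.
            for interaction in enumerate_interactions(page):
                next_page = perform_interaction(page, interaction)
                if next_page is not None:
                    queue.append((next_page, depth + 1))
        return detected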

  • Evasive Malicious Website Detection by Leveraging Redirection Subgraph Similarities

    Toshiki SHIBAHARA  Yuta TAKATA  Mitsuaki AKIYAMA  Takeshi YAGI  Kunio HATO  Masayuki MURATA  

     
    PAPER

    Publicized: 2018/10/30  Vol: E102-D No:3  Page(s): 430-443

    Many users are exposed to threats of drive-by download attacks through the Web. Attackers compromise vulnerable websites discovered by search engines and redirect clients to malicious websites created with exploit kits. Security researchers and vendors have tried to prevent the attacks by detecting malicious data, i.e., malicious URLs, web content, and redirections. However, attackers conceal parts of malicious data with evasion techniques to circumvent detection systems. In this paper, we propose a system for detecting malicious websites without collecting all malicious data. Even if we cannot observe parts of malicious data, we can always observe compromised websites. Since vulnerable websites are discovered by search engines, compromised websites have similar traits. Therefore, we built a classifier by leveraging not only malicious but also compromised websites. More precisely, we convert all websites observed at the time of access into a redirection graph and classify it by integrating similarities between its subgraphs and redirection subgraphs shared across malicious, benign, and compromised websites. As a result of evaluating our system with crawling data of 455,860 websites, we found that the system achieved a 91.7% true positive rate for malicious websites containing exploit URLs at a low false positive rate of 0.1%. Moreover, it detected 143 more evasive malicious websites than the conventional content-based system.
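
    A toy sketch of the redirection-subgraph idea: build a directed redirection graph, extract the subgraph within a few hops of an entry page, and compare it with a known pattern. The Jaccard similarity over edge sets used here is an illustrative stand-in for the paper's similarity computation, and the URLs are made up.

    def ego_edges(edges, root, radius=3):
        """Edges of the redirection subgraph reachable from `root` within `radius` hops.
        `edges` maps a URL to the set of URLs it redirects to."""
        frontier, seen, subgraph = {root}, {root}, set()
        for _ in range(radius):
            nxt = set()
            for src in frontier:
                for dst in edges.get(src, ()):
                    subgraph.add((src, dst))
                    if dst not in seen:
                        seen.add(dst)
                        nxt.add(dst)
            frontier = nxt
        return subgraph

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    # Compare the subgraph around an observed entry page with a known malicious pattern.
    observed = {"entry": {"ad"}, "ad": {"gate"}, "gate": {"exploit-kit"}}
    known_bad = {"entry": {"gate"}, "gate": {"exploit-kit"}}
    print(jaccard(ego_edges(observed, "entry"), ego_edges(known_bad, "entry")))  # 0.25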

  • Building a Scalable Web Tracking Detection System: Implementation and the Empirical Study

    Yumehisa HAGA  Yuta TAKATA  Mitsuaki AKIYAMA  Tatsuya MORI  

     
    PAPER-Privacy

    Publicized: 2017/05/18  Vol: E100-D No:8  Page(s): 1663-1670

    Web tracking is widely used to track users' behavior on websites. While web tracking creates new opportunities for e-commerce, it also carries risks such as privacy infringement, so analyzing such risks in the wild is important for making users' privacy exposure transparent. This work aims to understand how web tracking has been adopted by prominent websites and how resilient it is to ad-blocking techniques. Web tracking-enabled websites collect information called web browser fingerprints, which can be used to identify users. We develop a scalable system that detects fingerprinting by using both dynamic and static analyses. If a tracking site makes use of many strong fingerprints, the site is likely to be resilient to ad-blocking techniques. We also analyze the connectivity of third-party tracking sites, which are linked from multiple websites. This link analysis allows us to extract groups of associated tracking sites and understand how influential these sites are. Based on analyses of 100,000 websites, we quantify the potential risks of web tracking-enabled websites. We reveal that 226 websites adopt fingerprints that cannot be detected by most off-the-shelf anti-tracking tools, and that a major, resilient third-party tracking site is linked from 50.0% of the top 100,000 popular websites.
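
    A heavily simplified sketch of static fingerprinting detection in the spirit of this abstract: scan script source for API names commonly associated with browser fingerprinting and count how many distinct ones appear. The API list and the scoring are illustrative assumptions, not the paper's feature set.

    import re

    # API names often associated with browser fingerprinting (illustrative subset).
    FINGERPRINT_APIS = [
        r"toDataURL",                          # canvas fingerprinting
        r"getImageData",
        r"navigator\.plugins",                 # plugin enumeration
        r"navigator\.userAgent",
        r"screen\.(width|height|colorDepth)",
        r"AudioContext",                       # audio fingerprinting
        r"getTimezoneOffset",
    ]

    def fingerprint_score(script_source):
        """Number of distinct fingerprint-related APIs referenced in a script."""
        return sum(1 for pattern in FINGERPRINT_APIS if re.search(pattern, script_source))

    sample = "var c=document.createElement('canvas');var d=c.toDataURL();var p=navigator.plugins;"
    print(fingerprint_score(sample))  # 2: canvas export plus plugin enumeration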

  • Practice and Evaluation of Pagelet-Based Client-Side Rendering Mechanism

    Hao HAN  Yinxing XUE  Keizo OYAMA  Yang LIU  

     
    PAPER-Software Engineering

    Vol: E97-D No:8  Page(s): 2067-2083

    The rendering mechanism plays an indispensable role in browser-based Web applications. It generates active webpages dynamically and provides human-readable layout through template engines, which are used as a standard programming model to separate business logic and data computation from webpage presentation. Owing to advances in rich application technologies, the client-side rendering mechanism has been widely adopted, and this adoption brings not only various merits but also new problems. In this paper, we propose and construct “pagelet”, a segment-based template engine for developing flexible and extensible Web applications. By presenting the principles, practice, and usage experience of pagelet, we conduct a comprehensive analysis of the possible advantages and disadvantages of the client-side rendering mechanism from the viewpoints of both developers and end-users.
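
    A toy illustration of the segment-based idea behind pagelets: a page is composed of independent segments, each pairing a template fragment with its own data, rendered separately and then assembled. The Pagelet class and the string-based templating are assumptions made for illustration, not the paper's pagelet API.

    from string import Template

    class Pagelet:
        """One independently renderable segment of a page: a template plus its data."""
        def __init__(self, template, data):
            self.template = Template(template)
            self.data = data

        def render(self):
            return self.template.substitute(self.data)

    def render_page(pagelets):
        # Each segment can be re-rendered on its own when its data changes,
        # without touching the rest of the page.
        return "\n".join(p.render() for p in pagelets)

    page = [
        Pagelet("<header><h1>$title</h1></header>", {"title": "News"}),
        Pagelet("<ul>$items</ul>", {"items": "<li>Item 1</li><li>Item 2</li>"}),
        Pagelet("<footer>$year</footer>", {"year": "2014"}),
    ]
    print(render_page(page))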

  • Information-Flow-Based Access Control for Web Browsers

    Sachiko YOSHIHAMA  Takaaki TATEISHI  Naoshi TABUCHI  Tsutomu MATSUMOTO  

     
    PAPER-Authentication and Authorization Techniques

    Vol: E92-D No:5  Page(s): 836-850

    The emergence of Web 2.0 technologies such as Ajax and mashups has revealed the weakness of the same-origin policy [1], the current de facto standard for the Web browser security model. We propose a new browser security model, based on information-flow-based access control (IBAC), that allows fine-grained access control in client-side Web applications for secure mashups and user-generated content, copes with the dynamic nature of client-side Web applications, and accurately determines the privilege of scripts in the event-driven programming model.
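
    A minimal sketch of the information-flow idea: each value carries the set of origins that influenced it, labels are joined when values are combined, and a sink such as a cross-origin send is allowed only if the policy permits every origin in the label. The Labeled class and the policy format are illustrative assumptions, not the IBAC model's actual interfaces.

    class Labeled:
        """A value tagged with the set of origins that have influenced it."""
        def __init__(self, value, origins):
            self.value = value
            self.origins = frozenset(origins)

        def combine(self, other, op):
            # The result is influenced by both operands, so its label is the
            # join (union) of the two labels.
            return Labeled(op(self.value, other.value), self.origins | other.origins)

    def allowed(labeled, destination, policy):
        """Permit a flow to `destination` only if the policy allows it for every
        origin that has influenced the value."""
        return all(destination in policy.get(origin, set()) for origin in labeled.origins)

    policy = {"https://a.example": {"https://a.example"},
              "https://gadget.example": {"https://a.example", "https://gadget.example"}}
    secret = Labeled("token", {"https://a.example"})
    widget = Labeled("text", {"https://gadget.example"})
    mixed = secret.combine(widget, lambda x, y: x + y)
    print(allowed(mixed, "https://evil.example", policy))   # False
    print(allowed(mixed, "https://a.example", policy))      # True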

  • Design of SMIL Browser Functionality in Mobile Phones

    Satoshi HIEDA  Yoshinori SAIDA  Hiroshi CHISHIMA  Naoki SATO  Yukikazu NAKAMOTO  

     
    PAPER-Terminals

    Vol: E87-B No:2  Page(s): 342-349

    SMIL is a markup language that enables us to describe multimedia content. This paper proposes a design model of SMIL browser functionality for mobile phones, whose resources are limited. We introduce SMIL Component, a software architecture that attaches to a pre-installed generic web browser, and an event-based SMIL scheduler, which is part of SMIL Component, to provide multimedia presentation scheduling functionality. This design reduces the amount of memory that SMIL Component consumes and gives SMIL Component high portability across various web browsers. We implement SMIL Component and evaluate its RAM usage and presentation delays. As a result, we conclude that SMIL Component is practical for MMS presentations on a mobile phone.
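
    A small sketch of an event-based presentation scheduler in the spirit of this abstract: begin and end events for each media element are kept in a single time-ordered queue and dispatched in order, so no per-element timer is needed. The class and method names are illustrative assumptions, not the SMIL Component API.

    import heapq

    class PresentationScheduler:
        """Dispatches media begin/end events in time order from one event queue."""
        def __init__(self):
            self._events = []   # heap of (time_ms, sequence_no, description)
            self._seq = 0       # tie-breaker so simultaneous events keep insertion order

        def schedule_media(self, name, begin_ms, duration_ms):
            for t, what in ((begin_ms, "begin " + name),
                            (begin_ms + duration_ms, "end " + name)):
                heapq.heappush(self._events, (t, self._seq, what))
                self._seq += 1

        def run(self):
            # A real scheduler would sleep until each event time; here the events
            # are simply popped in order and printed.
            while self._events:
                t, _, what = heapq.heappop(self._events)
                print("%6d ms: %s" % (t, what))

    # A <par>-like block: image and audio start together, text follows the image.
    s = PresentationScheduler()
    s.schedule_media("image", begin_ms=0, duration_ms=3000)
    s.schedule_media("audio", begin_ms=0, duration_ms=5000)
    s.schedule_media("text", begin_ms=3000, duration_ms=2000)
    s.run()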

  • Multimedia HTML Layout Method

    Toshimitsu SUZUKI  Kazumi SAITO  Sadao YASHIRO  Takahide MURAMOTO  

     
    PAPER

    Vol: E79-B No:8  Page(s): 1076-1082

    We propose a graphical user interface (GUI) that provides users with multimedia information, including dynamic images. On the Internet, there are many WWW browsers that read the Hypertext Markup Language (HTML). As various browsers extend the HTML tags and attributes independently to expand and/or improve layout, HTML compatibility between browsers is lost. We have developed a WWW browser to solve this problem. Our browser presents all multimedia information, including text, images, and dynamic images, as blocks and renders them without the need to extend the HTML specifications. It independently interprets and draws HTML objects using a layout manager, which has a layout rule and manages the hierarchical data structure and the block data of HTML documents. The browser also allows layout-rule changes. The layout manager efficiently displays information while checking the available display area size. The browser is structured so that the portion that manages the formatting of the document is separated from the portion that displays the individual parts. In this browser, the layout rule allows text to be placed around an image without the need to modify the existing HTML contents. It is also relatively easy to change the presentation of multiple screens, such as a two-page book-like layout or the conventional single-page scroll-bar format, by changing the layout rule. The incorporation of media decoders into the browser enables the display of various multimedia information, such as sounds, pictures, and moving images.
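
    A toy sketch of block-based layout under a layout rule: every piece of content is a block with a width and height, and the layout manager flows blocks into rows while checking the available display width. Changing the rule (for example, the number of columns) changes only the manager, not the content. All names here are illustrative; this is not the paper's implementation.

    def layout_blocks(blocks, display_width, columns=1):
        """Place (name, width, height) blocks left to right, wrapping to a new row
        whenever the next block would not fit in the available column width."""
        column_width = display_width // columns
        placements, x, y, row_height = [], 0, 0, 0
        for name, width, height in blocks:
            if x > 0 and x + width > column_width:   # block does not fit: new row
                x, y = 0, y + row_height
                row_height = 0
            placements.append((name, x, y))
            x += width
            row_height = max(row_height, height)
        return placements

    # Text and image alike are treated as blocks and placed by the same layout rule.
    content = [("image", 120, 80), ("text-1", 150, 20),
               ("text-2", 150, 20), ("text-3", 300, 20)]
    for name, x, y in layout_blocks(content, display_width=300):
        print(name, "at", (x, y))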