
Author Search Result

[Author] Takeo HARIU (10 hits)

  • Client Honeypot Multiplication with High Performance and Precise Detection

    Mitsuaki AKIYAMA  Takeshi YAGI  Youki KADOBAYASHI  Takeo HARIU  Suguru YAMAGUCHI  

     
    PAPER-Attack Monitoring & Detection

    Vol: E98-D No:4    Page(s): 775-787

    We investigated client honeypots for detecting and circumstantially analyzing drive-by download attacks. A client honeypot requires both improved inspection performance and in-depth analysis for inspecting and discovering malicious websites. However, the OS overhead of operating recent client honeypots cannot be ignored when improving honeypot multiplication performance. We propose a client honeypot system that combines multi-OS and multi-process honeypot approaches, and we implemented this system to evaluate its performance. The process sandbox mechanism, a security measure for our multi-process approach, provides a virtually isolated environment for each web browser. It prevents a compromised browser process from altering the system by redirecting its file and registry I/O. To solve the inconsistency of the file/registry view caused by I/O redirection, our process sandbox mechanism enables the web browser and its corresponding plug-ins to share a virtual system view. This allows multiple processes to run simultaneously on a single OS without interfering with one another. In a field trial, we confirmed that our multi-process approach was three or more times faster than a single-process one, and our multi-OS approach improved system performance linearly with the number of honeypot instances. In addition, our long-term investigation indicated that 72.3% of exploitations target browser-helper processes. If a honeypot restricts all process creation events, it cannot identify an exploitation targeting a browser-helper process. In contrast, our process sandbox mechanism permits the creation of browser-helper processes, so it can identify these types of exploitation without producing false negatives. Thus, our proposed system with these multiplication approaches improves performance efficiency and enables in-depth analysis on high-interaction systems.
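    To illustrate the I/O-redirection idea behind the process sandbox, the following Python sketch maps file accesses of each honeypot instance into a per-instance shadow directory that a browser process and its helper processes share. It is only a conceptual illustration under assumed paths and class names, not the authors' implementation (which also covers registry access and operates at the OS level).

      # Conceptual sketch only (assumed paths/names; not the authors'
      # implementation): per-instance I/O redirection that maps file
      # writes from a browser process and its helper processes into a
      # shared shadow directory, so each honeypot instance keeps a
      # consistent virtual system view without altering the real system.
      import os
      import shutil

      class ProcessSandbox:
          def __init__(self, instance_id, shadow_root="/tmp/honeypot_shadow"):
              # One shadow directory per browser instance; plug-in and
              # helper processes of the same instance share it.
              self.root = os.path.join(shadow_root, instance_id)
              os.makedirs(self.root, exist_ok=True)

          def redirect(self, path):
              # Map an absolute path requested by the sandboxed process
              # into the shadow directory.
              return os.path.join(self.root, path.lstrip("/"))

          def open(self, path, mode="r"):
              target = self.redirect(path)
              if "r" in mode and not os.path.exists(target) and os.path.exists(path):
                  # First read of an unmodified file: populate the shadow
                  # copy from the real system so reads stay consistent.
                  os.makedirs(os.path.dirname(target), exist_ok=True)
                  shutil.copy(path, target)
              elif "w" in mode or "a" in mode:
                  os.makedirs(os.path.dirname(target), exist_ok=True)
              return open(target, mode)

      # The browser process and a helper process (e.g., a PDF plug-in) of
      # instance "inst-01" share the same sandbox and thus the same view.
      sandbox = ProcessSandbox("inst-01")
      with sandbox.open("/etc/hosts.test", "w") as f:
          f.write("alteration stays inside the shadow directory\n")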

  • Understanding Characteristics of Phishing Reports from Experts and Non-Experts on Twitter Open Access

    Hiroki NAKANO  Daiki CHIBA  Takashi KOIDE  Naoki FUKUSHI  Takeshi YAGI  Takeo HARIU  Katsunari YOSHIOKA  Tsutomu MATSUMOTO  

     
    PAPER-Information Network

    Publicized: 2024/03/01    Vol: E107-D No:7    Page(s): 807-824

    The increase in phishing attacks through email and short message service (SMS) has shown no signs of deceleration. The first step in combating the ever-increasing number of phishing attacks is to collect and characterize more phishing cases that reach end users. Without understanding these characteristics, anti-phishing countermeasures cannot evolve. In this study, we propose an approach that uses Twitter as a new observation point to immediately collect and characterize phishing cases delivered via email and SMS that evade countermeasures and reach users. Specifically, we propose CrowdCanary, a system capable of structurally and accurately extracting phishing information (e.g., URLs and domains) from tweets posted by users who have actually discovered or encountered phishing. In three months of live operation, CrowdCanary identified 35,432 phishing URLs out of 38,935 phishing reports. We confirmed that 31,960 (90.2%) of these phishing URLs were later detected by the anti-virus engine, demonstrating that CrowdCanary is superior to existing systems in both the accuracy and the volume of threat extraction. We also analyzed the users who shared phishing threats by utilizing the extracted phishing URLs and categorized them into two distinct groups: experts and non-experts. As a result, we found that CrowdCanary could collect information specific to non-expert reports, such as phishing shared only through the targeted company's brand name in the tweet, phishing attacks that appear only in an image attached to the tweet, and information about the landing page before the redirect. Furthermore, we conducted a detailed analysis of the collected information on phishing sites and discovered that certain biases exist in the domain names and hosting servers of phishing sites, revealing new characteristics useful for detecting unknown phishing sites.
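    As a rough illustration of the kind of indicator extraction such a system performs, the following Python sketch pulls candidate URLs and domains out of report tweets, including common defanged forms (hxxp, [.]). The regex and refanging rules are assumptions for illustration only and are not the actual CrowdCanary pipeline.

      # Minimal sketch, not the actual CrowdCanary pipeline: pull candidate
      # phishing URLs and domains out of report tweets. Real reports often
      # defang indicators (hxxps, [.]), so the sketch refangs them first.
      import re

      URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

      def refang(text):
          # Undo common defanging used in threat-sharing tweets.
          return (text.replace("hxxp", "http")
                      .replace("[.]", ".")
                      .replace("(.)", ".")
                      .replace("[:]", ":"))

      def extract_indicators(tweet_text):
          text = refang(tweet_text)
          urls = URL_RE.findall(text)
          domains = sorted({re.sub(r"^https?://", "", u, flags=re.I).split("/")[0]
                            for u in urls})
          return urls, domains

      tweet = "Phishing alert: hxxps://login-example[.]test/verify targets Brand X"
      print(extract_indicators(tweet))
      # (['https://login-example.test/verify'], ['login-example.test'])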

  • BotProfiler: Detecting Malware-Infected Hosts by Profiling Variability of Malicious Infrastructure Open Access

    Daiki CHIBA  Takeshi YAGI  Mitsuaki AKIYAMA  Kazufumi AOKI  Takeo HARIU  Shigeki GOTO  

     
    PAPER

    Vol: E99-B No:5    Page(s): 1012-1023

    Ever-evolving malware makes it difficult to prevent hosts from being infected. Botnets in particular are one of the most serious threats to cyber security, since they consist of a large number of malware-infected hosts. Many countermeasures against malware infection, such as generating network-based signatures or templates, have been investigated. Such templates introduce regular expressions to detect polymorphic attacks conducted by attackers. A potential problem with such templates, however, is that they sometimes falsely regard benign communications as malicious, resulting in false positives, due to an inherent aspect of regular expressions. Since the cost of responding to malware infection is quite high, the number of false positives should be kept to a minimum. Therefore, we propose a system that generates templates causing fewer false positives than a conventional system in order to detect malware-infected hosts more accurately. Our key idea is that malicious infrastructures, such as malware samples and command-and-control servers, tend to be reused rather than created from scratch. We verify this idea and propose a new system that profiles the variability of substrings in HTTP requests, which makes it possible to identify invariable keywords derived from the same malicious infrastructure and to generate more accurate templates. The results of implementing our system and validating it with real traffic data indicate that it reduced false positives by up to two-thirds compared with the conventional system and even increased the detection rate of infected hosts.
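    The following Python sketch illustrates the key idea of profiling variability (it is not the published BotProfiler code): HTTP request paths from the same malicious infrastructure are tokenized, invariant tokens are kept as literal keywords, and only variable fields are generalized, yielding a template that is less likely to match benign traffic than a fully generic regular expression. The tokenization and threshold are illustrative assumptions.

      # Minimal sketch of the key idea (not the published BotProfiler code):
      # keep invariant substrings as literal keywords and generalize only
      # the variable parts of requests sharing the same infrastructure.
      import re
      from collections import Counter

      def build_template(requests, var_threshold=2):
          # Tokenize each request path on common delimiters, keeping them.
          token_lists = [re.split(r"([/?=&])", r) for r in requests]
          assert len({len(t) for t in token_lists}) == 1, "sketch assumes aligned tokens"
          template = []
          for position in zip(*token_lists):
              values = Counter(position)
              if len(values) < var_threshold:
                  template.append(re.escape(position[0]))   # invariant keyword
              else:
                  template.append(r"[^/?=&]+")              # variable field
          return "".join(template)

      reqs = [
          "/gate.php?id=8f3a&cmd=update",
          "/gate.php?id=77b0&cmd=update",
          "/gate.php?id=c2d1&cmd=update",
      ]
      print(build_template(reqs))
      # e.g. /gate\.php\?id=[^/?=&]+\&cmd=update (exact escaping varies by Python version)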

  • Safe and Secure Services Based on NGN

    Tomoo FUKAZAWA  Takemi NISASE  Masahisa KAWASHIMA  Takeo HARIU  Yoshihito OSHIMA  

     
    INVITED PAPER

    Vol: E91-D No:5    Page(s): 1226-1233

    The Next Generation Network (NGN), whose standardization has progressed alongside its development, is expected to create new services that converge fixed and mobile networks. This paper introduces the basic security requirements for NGN and explains the standardization activities, in particular the requirements for the security function described in Y.2701, discussed in ITU-T SG-13. In addition to the basic NGN security function, requirements for NGN authentication are described from three aspects: security, deployability, and service. As examples of authentication implementation, three profiles (fixed, nomadic, and mobile) are defined in this paper. The "fixed profile" is typically for fixed-line subscribers, the "nomadic profile" basically utilizes WiFi access points, and the "mobile profile" provides ideal NGN mobility for mobile subscribers. All three profiles satisfy the security requirements. The three profiles are then compared from the viewpoint of deployability and service requirements. After showing that none of the three profiles can fulfill all of the requirements, we propose that NGN providers use multiple profiles. As service and application examples, two promising NGN applications are proposed. The first is a strong authentication mechanism that makes web applications safer and more secure, even against password theft; it is based on the NGN ID federation function. The second provides an easy peer-to-peer broadband virtual private network service aimed at safe and secure communication for personal/SOHO (small office, home office) users, based on NGN SIP (session initiation protocol) session control.

  • MineSpider: Extracting Hidden URLs Behind Evasive Drive-by Download Attacks

    Yuta TAKATA  Mitsuaki AKIYAMA  Takeshi YAGI  Takeo HARIU  Shigeki GOTO  

     
    PAPER-Web security

    Publicized: 2016/01/13    Vol: E99-D No:4    Page(s): 860-872

    Drive-by download attacks force users to automatically download and install malware by redirecting them to malicious URLs that exploit vulnerabilities in the user's web browser. In addition, several evasion techniques, such as code obfuscation and environment-dependent redirection, are used in combination with drive-by download attacks to prevent detection. In environment-dependent redirection, attackers profile information on the user's environment, such as the name and version of the browser and browser plugins, and launch a drive-by download attack on only certain targets by changing the destination URL. When malicious-content detection and collection techniques such as honeyclients do not match the specific environment of the attack target, they are not redirected and therefore cannot detect the attack. It is thus necessary to improve analysis coverage while countering these adversarial evasion techniques. We propose a method for exhaustively analyzing JavaScript code relevant to redirections and extracting the destination URLs in the code. Our method facilitates the detection of attacks by extracting a large number of URLs while controlling the analysis overhead by excluding code not relevant to redirections. We implemented our method in a browser emulator called MINESPIDER that automatically extracts potential URLs from websites. We validated it by using communication data with malicious websites captured over a three-year period. The experimental results demonstrated that, within a few seconds, MINESPIDER extracted 30,000 new URLs from malicious websites that conventional methods had missed.
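    As a highly simplified illustration of extracting destination URLs from redirection code (MINESPIDER itself analyzes and executes redirection-relevant code in a browser emulator), the following Python sketch statically scans JavaScript for common redirection statements and collects their string-literal destinations. The regular expression and sample code are assumptions for illustration.

      # Highly simplified sketch (not the MINESPIDER implementation):
      # statically scan JavaScript for redirection statements and pull
      # out string-literal destination URLs.
      import re

      REDIRECT_SINKS = re.compile(
          r"""(?:location(?:\.href)?\s*=|window\.open\(|document\.location\s*=)
              \s*["']([^"']+)["']""",
          re.VERBOSE)

      def extract_redirect_urls(js_code):
          return REDIRECT_SINKS.findall(js_code)

      sample = '''
      if (navigator.userAgent.indexOf("MSIE 8") > -1) {
          location.href = "http://malicious.example/exploit1";
      } else {
          window.open("http://malicious.example/exploit2");
      }
      '''
      print(extract_redirect_urls(sample))
      # ['http://malicious.example/exploit1', 'http://malicious.example/exploit2']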

  • Analysis of Blacklist Update Frequency for Countering Malware Attacks on Websites

    Takeshi YAGI  Junichi MURAYAMA  Takeo HARIU  Sho TSUGAWA  Hiroyuki OHSAKI  Masayuki MURATA  

     
    PAPER-Internet

    Vol: E97-B No:1    Page(s): 76-86

    We propose a method for determining how frequently to monitor the activities of a malware download site used for malware attacks on websites. In recent years, there has been an increase in attacks that exploit vulnerabilities in web applications to infect websites with malware and misuse those websites as attack platforms. One scheme for countering such attacks is to blacklist malware download sites and filter out access to them from user websites. However, a malware download site is often constructed from an ordinary website that has been maliciously manipulated by an attacker. Once the malware has been deleted from the malware download site, this scheme must be able to unblacklist that site to prevent normal user websites from being falsely detected as malware download sites. However, if a malware download site is frequently monitored for the presence of malware, the attacker may sense this monitoring and relocate the malware to a different site. This means that an attack will not be detected until the newly generated malware download site is discovered. In response to these problems, we clarify how attacker behavior changes attack-detection accuracy. This is done by modeling attacker behavior, specifying a state-transition model with respect to the blacklisting of a malware download site, and analyzing these models with both synthetically generated attack patterns and attack patterns measured in an operational network. From this analysis, we derive the optimal monitoring frequency that maximizes the true detection rate while minimizing the false detection rate.
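    The following toy Monte Carlo sketch in Python illustrates the trade-off that the paper analyzes with a state-transition model; it is not the authors' model, and all parameters (malware lifetime, tip-off probability) are invented for illustration. Probing a blacklisted site more often shortens the time a cleaned site stays wrongly blacklisted, but each probe risks alerting the attacker, who then relocates the malware.

      # Toy simulation with invented parameters (not the paper's model):
      # trade-off between missed attacks (attacker tipped off and relocates
      # the malware) and stale blacklist entries (a cleaned site stays
      # blocked until the next probe).
      import random

      def simulate(probe_interval, mean_malware_lifetime=30.0,
                   tipoff_prob_per_probe=0.02, trials=10000):
          tipped_off = 0
          stale_days = 0.0
          for _ in range(trials):
              cleaned_at = random.expovariate(1.0 / mean_malware_lifetime)
              day, relocated = probe_interval, False
              while day < cleaned_at:
                  if random.random() < tipoff_prob_per_probe:
                      relocated = True          # attacker noticed the probe
                      break
                  day += probe_interval
              if relocated:
                  tipped_off += 1
              else:
                  # First probe at or after cleanup unblacklists the site;
                  # until then it was falsely kept on the blacklist.
                  stale_days += day - cleaned_at
          return tipped_off / trials, stale_days / trials

      random.seed(0)
      for interval in (1, 7, 30):
          missed, stale = simulate(interval)
          print(f"probe every {interval:2d} days: "
                f"P(attacker relocates)={missed:.2f}, "
                f"avg stale-blacklist days={stale:.1f}")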

  • Intelligent High-Interaction Web Honeypots Based on URL Conversion Scheme

    Takeshi YAGI  Naoto TANIMOTO  Takeo HARIU  Mitsutaka ITOH  

     
    PAPER-Internet

    Vol: E94-B No:5    Page(s): 1339-1347

    Vulnerabilities in web applications expose computer networks to security threats. For example, attackers use a large number of normal user websites as hopping sites for attacking other websites and user terminals; these hopping sites are illegally operated using malware distributed by abusing vulnerabilities in the web applications on them. Thus, the security threats resulting from vulnerabilities in web applications prevent service providers from constructing secure networking environments. To protect websites from attacks based on the vulnerabilities of web applications, security vendors and service providers collect attack information using web honeypots, which masquerade as vulnerable systems. To collect all accesses resulting from attacks, including further network attacks by malware such as downloaders, vendors and providers use high-interaction web honeypots, which are composed of vulnerable systems with surveillance functions. However, conventional high-interaction web honeypots can collect only limited information and malware from attacks whose destination URL paths do not match the path structure of the web honeypot, since such attacks fail. To solve this problem, we propose a scheme in which the destination URLs of these attacks are corrected by determining the correct path from the path structure of the web honeypot. Our Internet investigation revealed that 97% of attacks are failures. However, we confirmed that approximately 50% of these attacks will succeed with our proposed scheme. With this scheme, we can use much more information to protect websites than with conventional high-interaction web honeypots because we can collect complete information and malware from these attacks.
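    The following Python sketch illustrates one way such URL correction could work (the published scheme's details differ): the requested path of a failed attack is rewritten to the honeypot path that shares the longest trailing sequence of path components, on the assumption that exploit codes target a specific vulnerable script regardless of where the application is installed. The paths are invented examples.

      # Minimal sketch of the URL-conversion idea (details of the published
      # scheme differ): rewrite a failed attack's requested path to the
      # honeypot path with the longest trailing component match.
      def convert_url(requested_path, honeypot_paths):
          req = requested_path.strip("/").split("/")
          best, best_len = None, 0
          for candidate in honeypot_paths:
              cand = candidate.strip("/").split("/")
              match = 0
              while (match < min(len(req), len(cand))
                     and req[-1 - match] == cand[-1 - match]):
                  match += 1
              if match > best_len:
                  best, best_len = candidate, match
          return best if best_len > 0 else None

      honeypot_paths = [
          "/wordpress/wp-content/plugins/gallery/upload.php",
          "/cms/admin/config.php",
      ]
      attack = "/blog/wp-content/plugins/gallery/upload.php"
      print(convert_url(attack, honeypot_paths))
      # /wordpress/wp-content/plugins/gallery/upload.php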

  • Design of Provider-Provisioned Website Protection Scheme against Malware Distribution

    Takeshi YAGI  Naoto TANIMOTO  Takeo HARIU  Mitsutaka ITOH  

     
    PAPER

    Vol: E93-B No:5    Page(s): 1122-1130

    Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites that have already received exploit codes. In addition, to collect information for detecting malware downloads, we use web honeypots, which automatically extract the communication records of exploit codes. According to the results of experiments using a prototype, our scheme can filter attacks automatically, enabling service providers to offer secure and cost-effective network environments.
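    A minimal Python sketch of this detection idea follows; it is not the prototype from the paper. A website is marked as recently exploited when a honeypot-derived record shows it received an exploit code, and later downloads served from that site within an assumed suspicion window are flagged.

      # Illustrative sketch only (not the prototype from the paper): flag
      # downloads served from websites that recently received exploit codes,
      # based on web-honeypot-derived records. The 7-day window is assumed.
      import time

      EXPLOITED_AT = {}                    # site -> time exploit code observed
      GRACE_PERIOD = 7 * 24 * 3600         # assumed 7-day suspicion window

      def record_exploit(site, timestamp=None):
          EXPLOITED_AT[site] = timestamp if timestamp is not None else time.time()

      def should_filter_download(site, timestamp=None):
          now = timestamp if timestamp is not None else time.time()
          seen = EXPLOITED_AT.get(site)
          return seen is not None and now - seen <= GRACE_PERIOD

      record_exploit("compromised.example", timestamp=1_000_000)
      print(should_filter_download("compromised.example", timestamp=1_000_000 + 3600))  # True
      print(should_filter_download("benign.example", timestamp=1_000_000 + 3600))       # False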

  • Evaluations and Analysis of Malware Prevention Methods on Websites

    Takeshi YAGI  Junichi MURAYAMA  Takeo HARIU  Hiroyuki OHSAKI  

     
    PAPER-Internet

    Vol: E96-B No:12    Page(s): 3091-3100

    With the diffusion of web services driven by the emergence of cloud computing, a large number of websites have been used by attackers as hopping sites to attack other websites and user terminals, because many vulnerable websites are constructed and managed by unskilled users. To construct hopping sites, many attackers force victims to download malware by exploiting vulnerabilities in web applications. To protect websites from these malware infection attacks, conventional methods, such as anti-virus software, filter files from attackers by using pattern files generated from known malware files collected by security vendors. In addition, certain anti-virus software uses a behavior-blocking approach, which monitors malicious file activities and modifications. These methods can detect malware files that are already known, but it is difficult for them to detect malware that differs from known malware. It is also difficult to define malware, since legitimate software files can become malicious depending on the situation. We previously proposed an access filtering method based on the communication opponents of attacks, i.e., the other servers and terminals that connect to our web honeypots, which collect malware infection attacks against websites by using actual vulnerable web applications. In this blacklist-based method, the URLs and IP addresses used in malware infection attacks collected by the web honeypots are placed on a blacklist, and accesses to and from websites are filtered against it. To reveal the effects in an actual attack situation on the Internet, we evaluated the detection ratios of anti-virus software, our method, and a composite of both methods. Our evaluation revealed that anti-virus software detected approximately 50% of malware files, our method detected approximately 98% of attacks, and the composite of the two methods detected approximately 99% of attacks.
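    The composite check evaluated above can be sketched as follows in Python; this is a hedged illustration, not the authors' implementation. A download is blocked if either an anti-virus-style signature matches the file or the source host appears on the honeypot-derived blacklist; the hashes, hosts, and signature set are placeholders.

      # Hedged sketch of the composite check (not the authors' code):
      # block a download if an anti-virus-style hash signature matches
      # OR the source host is on the honeypot-derived blacklist.
      import hashlib

      KNOWN_MALWARE_SHA256 = {
          # placeholder entry standing in for anti-virus signatures
          "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
      }
      HONEYPOT_BLACKLIST = {"malware-dist.example", "203.0.113.7"}

      def is_blocked(file_bytes, source_host):
          digest = hashlib.sha256(file_bytes).hexdigest()
          by_signature = digest in KNOWN_MALWARE_SHA256     # anti-virus style
          by_blacklist = source_host in HONEYPOT_BLACKLIST  # honeypot-derived
          return by_signature or by_blacklist

      print(is_blocked(b"test", "malware-dist.example"))   # True (blacklist hit)
      print(is_blocked(b"test", "cdn.example"))            # True (hash of b"test" is listed)
      print(is_blocked(b"benign file", "cdn.example"))     # False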

  • Identifying Evasive Code in Malicious Websites by Analyzing Redirection Differences

    Yuta TAKATA  Mitsuaki AKIYAMA  Takeshi YAGI  Takeo HARIU  Kazuhiko OHKUBO  Shigeki GOTO  

     
    PAPER-Mobile Application and Web Security

    Publicized: 2018/08/22    Vol: E101-D No:11    Page(s): 2600-2611

    Security researchers and vendors detect malicious websites based on several website features extracted by honeyclient analysis. However, web-based attacks continue to become more sophisticated as countermeasure techniques develop. Attackers detect the honeyclient and evade analysis using sophisticated JavaScript code. The evasive code indirectly identifies vulnerable clients by abusing the differences among JavaScript implementations. Attackers deliver malware only to targeted clients on the basis of the evasion results while avoiding honeyclient analysis. Therefore, we face the problem that honeyclients cannot analyze such malicious websites. Nevertheless, we can observe the nature of the evasion: the results of accessing malicious websites with targeted clients differ from those obtained with honeyclients. In this paper, we propose a method of extracting evasive code by leveraging these differences to investigate current evasion techniques. Our method analyzes HTTP transactions of the same website obtained using two types of clients, a real browser as a targeted client and a browser emulator as a honeyclient. As a result of evaluating our method with 8,467 JavaScript samples executed on 20,272 malicious websites, we discovered previously unknown evasion techniques that abuse the differences among JavaScript implementations. These findings will contribute to improving the analysis capabilities of conventional honeyclients.
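    The differential idea can be illustrated with the following Python sketch (the paper's method additionally pinpoints the JavaScript code responsible): the URL sequences observed when the same website is visited with a real browser and with a browser emulator are compared, and requests that appear only in the real-browser trace indicate redirections hidden from the honeyclient. The transaction logs are invented examples.

      # Minimal sketch of the differential comparison (not the paper's full
      # method): requests seen only by the real browser point to
      # redirections that evasive code hid from the browser emulator.
      real_browser_urls = [
          "http://landing.example/",
          "http://landing.example/check.js",
          "http://exploit.example/payload",     # reached only by the real browser
      ]
      emulator_urls = [
          "http://landing.example/",
          "http://landing.example/check.js",
      ]

      evasion_suspects = [u for u in real_browser_urls if u not in set(emulator_urls)]
      print(evasion_suspects)   # ['http://exploit.example/payload']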