
Keyword Search Result

[Keyword] discovery (66 hits)

Showing results 41-60 of 66

  • Multi-Floor Semantically Meaningful Localization Using IEEE 802.11 Network Beacons

    Uzair AHMAD  Brian J. D'AURIOL  Young-Koo LEE  Sungyoung LEE  

     
    PAPER

      Vol:
    E91-B No:11
      Page(s):
    3450-3460

    This paper presents a new methodology, Beacognition, for real-time discovery of the associations between a signal space and arbitrarily defined regions, termed Semantically Meaningful Areas (SMAs), in the corresponding physical space. It lets end users develop semantically meaningful location systems using standard 802.11 network beacons as they roam through their environment. The key idea is to discover the unique associations using a beacon popularity model. The popularity measurements are then used to localize mobile devices. Beacon popularity is computed using an election algorithm, and a new recognition model is presented to perform the localization task. We have implemented such a location system in a five-story campus building. The comparative results show significant improvement in localization, achieving on average an 83% SMA and an 88% floor recognition rate with less than one minute of training time per SMA.
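The election idea above can be sketched as a simple voting scheme: during training, each beacon accumulates "votes" for the areas where it is observed, and at localization time the observed beacons vote for their most popular areas. This is a hypothetical minimal sketch; the function names, the voting rule, and the scoring are assumptions, not the paper's exact Beacognition model.

```python
from collections import Counter, defaultdict

def train_popularity(scans):
    """scans: list of (area_label, [beacon ids seen in one scan]).
    Returns a per-beacon Counter of votes for each area."""
    popularity = defaultdict(Counter)
    for area, beacons in scans:
        for b in beacons:
            popularity[b][area] += 1
    return popularity

def localize(popularity, observed_beacons):
    """Each observed beacon votes for its most popular area;
    the area with the most votes wins."""
    votes = Counter()
    for b in observed_beacons:
        if b in popularity:
            area, _ = popularity[b].most_common(1)[0]
            votes[area] += 1
    return votes.most_common(1)[0][0] if votes else None

# Toy training data: two SMAs, three access points.
scans = [("lab", ["ap1", "ap2"]), ("lab", ["ap1"]), ("hall", ["ap3", "ap2"])]
model = train_popularity(scans)
print(localize(model, ["ap1", "ap2"]))  # -> lab
```

A real system would weight votes by signal strength and scan frequency rather than raw co-occurrence counts.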

  • Handling Dynamic Weights in Weighted Frequent Pattern Mining

    Chowdhury Farhan AHMED  Syed Khairuzzaman TANBEER  Byeong-Soo JEONG  Young-Koo LEE  

     
    PAPER-Knowledge Discovery and Data Mining

      Vol:
    E91-D No:11
      Page(s):
    2578-2588

    Even though weighted frequent pattern (WFP) mining is more effective than traditional frequent pattern mining because it can consider the different semantic significances (weights) of items, existing WFP algorithms assume that each item has a fixed weight. In real-world scenarios, however, the weight (price or significance) of an item can vary with time. Reflecting such changes in item weight is necessary in several mining applications, such as retail market data analysis and web click stream analysis. In this paper, we introduce the concept of a dynamic weight for each item and propose an algorithm, DWFPM (dynamic weighted frequent pattern mining), that makes use of this concept. Our algorithm can handle situations where the weight (price or significance) of an item varies dynamically. It exploits a pattern-growth mining technique to avoid the level-wise candidate set generation-and-test methodology. Furthermore, it requires only one database scan, making it suitable for stream data mining. An extensive performance analysis shows that our algorithm is efficient and scalable for WFP mining using dynamic weights.
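The core measure behind dynamic-weight mining can be illustrated as follows: a pattern's weighted support is computed against the item weights in force when each supporting transaction occurred. This is a hedged sketch only; the averaging rule and data layout are assumptions, not DWFPM's exact definitions.

```python
def weighted_support(pattern, transactions, weights_by_period):
    """pattern: set of items; transactions: list of (period, set of items);
    weights_by_period: {period: {item: weight}}.
    Sums, over transactions containing the pattern, the pattern's mean
    item weight *at that transaction's time period*."""
    total = 0.0
    for period, items in transactions:
        if pattern <= items:
            w = weights_by_period[period]
            total += sum(w[i] for i in pattern) / len(pattern)
    return total

# Item "a" becomes more significant in period 2, "b" less so.
txns = [(1, {"a", "b"}), (2, {"a", "b", "c"}), (2, {"b"})]
weights = {1: {"a": 0.5, "b": 1.0, "c": 0.2},
           2: {"a": 0.9, "b": 0.4, "c": 0.2}}
print(weighted_support({"a", "b"}, txns, weights))  # (0.5+1.0)/2 + (0.9+0.4)/2
```

With fixed weights the second term would reuse period 1's values; letting it track the current period is exactly the dynamic-weight idea.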

  • UDP Large-Payload Capability Detection for DNSSEC

    Kenji RIKITAKE  Koji NAKAO  Shinji SHIMOJO  Hiroki NOGAWA  

     
    PAPER-Network Security

      Vol:
    E91-D No:5
      Page(s):
    1261-1273

    The Domain Name System (DNS) is a major target of network security attacks because of its weak authentication. The security extension DNSSEC introduces public-key authentication, but it is still in the deployment phase. DNSSEC assumes that IP fragmentation is allowed when its messages are exchanged as large UDP payloads. IP fragments, however, are often blocked by network packet filters for administrative reasons, and the blockage can prevent fast exchange of DNSSEC messages. In this paper, we propose a scheme to detect the UDP large-payload transfer capability between two DNSSEC hosts. The proposed detection scheme requires no new DNS or DNSSEC protocol elements, so it can be deployed solely by modifying the application software and configuration. The scheme probes the end-to-end communication capability between two DNS hosts by transferring a large UDP DNS message, allowing faster capability detection. Using the probed results, the DNS software can choose the maximum transmission unit (MTU) at the application level. Implementation test results show that the proposed scheme shortens the detection and transition time on fragment-blocked transports.
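The application-level decision the probe enables can be sketched as: if a large UDP response is confirmed to arrive end-to-end, advertise a large payload size; otherwise stay under one unfragmented packet. The concrete sizes below are common defaults and header lengths, not values mandated by the paper.

```python
def choose_udp_payload(probe_ok, path_mtu=1500, ip_header=20, udp_header=8):
    """Pick the application-level UDP payload size from a probe result.
    probe_ok: True if a large UDP DNS message crossed the path intact."""
    if probe_ok:
        return 4096  # large payload confirmed to pass; a common EDNS0 buffer size
    # Fragments are blocked somewhere: fit in a single unfragmented packet.
    return path_mtu - ip_header - udp_header

print(choose_udp_payload(True), choose_udp_payload(False))  # 4096 1472
```

The point of the scheme is that this choice is made per peer from a real end-to-end probe, rather than from a static configuration.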

  • A Proposal of TLS Implementation for Cross Certification Model

    Tadashi KAJI  Takahiro FUJISHIRO  Satoru TEZUKA  

     
    PAPER-Implementation

      Vol:
    E91-D No:5
      Page(s):
    1311-1318

    Today, TLS is widely used for building secure communication systems, and it relies on PKI for server and/or client authentication. However, its usual PKI environment, known as the "multiple trust anchors environment," poses a problem in ubiquitous networks: the verifier has to maintain a huge number of CA certificates, because the growing number of terminals connected to the network brings a growing number of CAs. Most terminals in a ubiquitous network will not have enough memory to hold so many CA certificates. An alternative PKI environment, the "cross certification environment," is therefore better suited to ubiquitous networks. But because current TLS is designed for the multiple trust anchors model, it cannot work efficiently under the cross certification model. This paper proposes a TLS implementation method that supports the cross certification model efficiently. Our proposal reduces the size of the messages exchanged between the TLS client and the TLS server during the handshake. It is therefore suitable for implementing TLS in terminals that lack computing power and memory in a ubiquitous network.

  • A Randomness Based Analysis on the Data Size Needed for Removing Deceptive Patterns

    Kazuya HARAGUCHI  Mutsunori YAGIURA  Endre BOROS  Toshihide IBARAKI  

     
    PAPER-Algorithm Theory

      Vol:
    E91-D No:3
      Page(s):
    781-788

    We consider a data set in which each example is an n-dimensional Boolean vector labeled as true or false. A pattern is a co-occurrence of a particular value combination of a given subset of the variables. If a pattern appears frequently in the true examples and infrequently in the false examples, we consider it a good pattern. In this paper, we discuss the problem of determining the data size needed for removing "deceptive" good patterns; in a data set of a small size, many good patterns may appear superficially, simply by chance, independently of the underlying structure. Our hypothesis is that, in order to remove such deceptive good patterns, the data set should contain a greater number of examples than that at which a random data set contains few good patterns. We justify this hypothesis by computational studies. We also derive a theoretical upper bound on the needed data size in view of our hypothesis.
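The hypothesis above can be illustrated numerically: in purely random labeled Boolean data, some value combinations of k variables look like "good" patterns by chance alone, and the count of such deceptive patterns shrinks as the data set grows. This is an assumption-laden sketch, not the paper's exact experiment; the 0.4/0.15 thresholds are arbitrary choices.

```python
import itertools
import random

def count_chance_good_patterns(n_vars, n_true, n_false, k, rng):
    """Generate random true/false examples, then count k-variable value
    combinations that look 'good' (frequent in true, rare in false)."""
    true_ex = [tuple(rng.randint(0, 1) for _ in range(n_vars)) for _ in range(n_true)]
    false_ex = [tuple(rng.randint(0, 1) for _ in range(n_vars)) for _ in range(n_false)]
    good = 0
    for subset in itertools.combinations(range(n_vars), k):
        for values in itertools.product((0, 1), repeat=k):
            match = lambda ex: all(ex[i] == v for i, v in zip(subset, values))
            freq_true = sum(map(match, true_ex)) / n_true
            freq_false = sum(map(match, false_ex)) / n_false
            if freq_true >= 0.4 and freq_false <= 0.15:
                good += 1
    return good

rng = random.Random(0)
small = count_chance_good_patterns(8, 20, 20, 2, rng)
large = count_chance_good_patterns(8, 500, 500, 2, rng)
print(small, large)  # typically fewer deceptive patterns at the larger size
```

Since the data carry no structure at all, every pattern counted here is deceptive, which is why the count at the larger size is a useful baseline for the needed data size.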

  • On the Interplay of Service Proximity and Ubiquity

    Shafique Ahmad CHAUDHRY  Ali Hammad AKBAR  Ki-Hyung KIM  

     
    PAPER

      Vol:
    E90-B No:12
      Page(s):
    3470-3479

    The IEEE 802.15.4 standard for Low Power Wireless Personal Area Networks (LoWPANs) has emerged as a promising technology for bringing the envisioned ubiquitous paradigm into realization. Considerable efforts are being made to integrate LoWPANs with other wired and wireless IP networks, in order to exploit the pervasive nature of IP technologies and their existing infrastructure. Providing service discovery and network selection in such pervasive environments imposes heavy communication and processing overhead on networks with highly constrained resources. Localizing communication by accessing the closest services increases the total network capacity and extends the network lifetime. We present a hierarchical service discovery architecture based on SSLP, in which directory proxy agents act as caches for the directory agent, localizing service discovery communication and enabling access to the closest services. We also propose algorithms that ensure service users are connected to the closest proxy agent and thereby access the closest service in their vicinity. The results show that our architecture and algorithms help find the closest services, reduce the traffic overhead of service discovery, decrease service discovery time, and save nodes' energy considerably in 6LoWPANs.

  • A Landmark-Based Scalable Semantic Resource Discovery Scheme

    Saehoon KANG  Younghee LEE  Dongman LEE  Hee Yong YOUN  

     
    LETTER-Networks

      Vol:
    E90-D No:6
      Page(s):
    986-989

    In this paper, we propose an efficient resource discovery scheme for large-scale ubiquitous computing environments, which supports scalable semantic searches and load balancing among resource discovery resolvers. Here, the resources are described based on the concepts defined in the ontological hierarchy. To semantically search the resources in a scalable manner, we propose a semantic vector space and semantic resource discovery network in which the resources are organized based on their respective semantic distances. Most importantly, landmarks are introduced for the first time to reduce the dimensionality of the vector space. Computer simulation with CAN verifies the effectiveness of the proposed scheme.
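The landmark idea can be sketched concretely: instead of comparing full high-dimensional semantic vectors, each resource is replaced by its short vector of distances to a few fixed landmarks, and queries are matched in that reduced space. The coordinates and distance function below are illustrative assumptions, not the paper's ontology-derived semantic vectors.

```python
import math

def dist(p, q):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def to_landmark_space(point, landmarks):
    """Reduce a point to its vector of distances to the landmarks."""
    return tuple(dist(point, l) for l in landmarks)

def nearest(query, resources, landmarks):
    """Find the resource whose landmark vector is closest to the query's."""
    qv = to_landmark_space(query, landmarks)
    return min(resources,
               key=lambda name: dist(to_landmark_space(resources[name], landmarks), qv))

# Toy 4-D "semantic" vectors reduced to 2-D via two landmarks.
resources = {"printer": (1, 0, 0, 0), "scanner": (0, 1, 0, 0), "camera": (0, 0, 1, 1)}
landmarks = [(1, 0, 0, 0), (0, 0, 1, 1)]
print(nearest((0.9, 0.1, 0, 0), resources, landmarks))  # -> printer
```

With d landmarks the comparison cost per resource drops from the full space's dimensionality to d, which is the scalability gain the scheme targets.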

  • WLAN Discovery Scheme Delay Analysis and Its Enhancement for 3GPP WLAN Interworking Networks

    Zhigang CAO  Junfeng JIANG  Pingyi FAN  

     
    LETTER-Network

      Vol:
    E90-B No:6
      Page(s):
    1523-1527

    In this letter, we first analyze the delay of the WLAN discovery scheme specified for 3GPP/WLAN interworking networks. Theoretical analysis indicates that the delay of the discovery scheme given by 3GPP increases linearly with the number of WLAN channels that must be scanned. To reduce the discovery delay, we then propose an effective WLAN discovery scheme that uses the cellular network to broadcast information about available WLANs, greatly reducing the number of WLAN channels users must scan. The effectiveness of the proposed scheme is demonstrated by analysis and simulation.
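The linear relationship the letter analyzes reduces to simple arithmetic: total discovery delay scales with the number of channels scanned, so a cellular-broadcast hint that shrinks the scan set shrinks the delay proportionally. The per-channel dwell time below is an assumed constant, not a value from the letter.

```python
def discovery_delay(channels_to_scan, dwell_per_channel_ms=100):
    """Delay grows linearly with the number of channels scanned."""
    return channels_to_scan * dwell_per_channel_ms

full_scan = discovery_delay(13)  # blind scan of all 13 802.11b/g channels
assisted = discovery_delay(2)    # only the channels the cellular network advertises
print(full_scan, assisted)       # 1300 200
```

In a real terminal the dwell time differs between active and passive scanning, but the linear dependence on the channel count is the same.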

  • Secure Route Discovery Protocol for Ad Hoc Networks

    YoungHo PARK  Hwangjun SONG  KyungKeun LEE  CheolSoo KIM  SangGon LEE  SangJae MOON  

     
    LETTER-Mobile Information Network and Personal Communications

      Vol:
    E90-A No:2
      Page(s):
    539-541

    A secure and efficient route discovery protocol is proposed for ad hoc networks, where only one-way hash functions are used to authenticate nodes in the ROUTE REQUEST, while additional public-key cryptography is used to guard against active attackers disguising a node in the ROUTE REPLY.
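The division of labor described above, cheap one-way hashes on the flooded ROUTE REQUEST and public-key cryptography only on the ROUTE REPLY, can be sketched for the hash side as a running chain that each forwarder extends. The chaining rule below is an illustrative assumption, not the protocol's exact construction.

```python
import hashlib

def extend_request(auth, node_id):
    """Each forwarding node folds its ID into the running one-way hash."""
    return hashlib.sha256(auth + node_id.encode()).digest()

def verify_path(seed, path, auth):
    """The destination recomputes the chain over the claimed node list."""
    h = seed
    for node in path:
        h = extend_request(h, node)
    return h == auth

# Simulate a ROUTE REQUEST flooding through nodes A, B, C.
seed = b"route-request-nonce"
h = seed
for node in ["A", "B", "C"]:
    h = extend_request(h, node)

print(verify_path(seed, ["A", "B", "C"], h))  # True: path authenticated
print(verify_path(seed, ["A", "C", "B"], h))  # False: node list was altered
```

Because hashing is orders of magnitude cheaper than signing, only the single ROUTE REPLY pays the public-key cost.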

  • A Localized Route Discovery for On-Demand Routing Protocols in Event-Driven Wireless Sensor Networks

    Dong-Hyun CHAE  Kyu-Ho HAN  Kyung-Soo LIM  Sae-Young AHN  Sun-Shin AN  

     
    PAPER-Network

      Vol:
    E89-B No:10
      Page(s):
    2828-2840

    In this paper, the problem of Redundant Duplicated RREQ Network-wide Flooding (RDRNF), induced by multiple sensor nodes during route discovery in event-driven wireless sensor networks, is described. To reduce the number of signaling messages during the route discovery phase, a novel extension to on-demand ad hoc routing protocols, named the Localized Route Discovery Extension (LRDE), is proposed. LRDE reduces energy consumption during route discovery. A heuristically and temporarily selected Path Set-up Coordinator (PSC) acts as a route request broker that alleviates redundant route request flooding. LRDE also makes the route path aggregation-compatible, so the PSC can effectively perform data aggregation along the routing path constructed by LRDE. The simulation results reveal that significant energy is conserved by reducing signaling overhead and performing data aggregation when LRDE is applied to on-demand routing protocols.

  • Gossip-Based Service Discovery in Mobile Ad Hoc Networks

    Choonhwa LEE  Sumi HELAL  Wonjun LEE  

     
    LETTER-Network

      Vol:
    E89-B No:9
      Page(s):
    2621-2624

    This letter presents a new gossip-based ad hoc service discovery protocol that uses a novel decentralized, peer-to-peer mechanism to provide mobile devices with the ability to advertise and discover services in an efficient way. Our performance study shows that the proposed protocol appropriately addresses the need of proximal service discovery over a dynamic wireless medium.

  • Design of a Mobile Application Framework with Context Sensitivities

    Hyung-Min YOON  Woo-Shik KANG  Oh-Young KWON  Seong-Hun JEONG  Bum-Seok KANG  Tack-Don HAN  

     
    PAPER-Mobile Computing

      Vol:
    E89-D No:2
      Page(s):
    508-515

    New service concepts are emerging in which mobile devices with a diverse range of embedded sensors share contexts over a wireless network infrastructure. To promote these services on mobile devices, we propose a method that can efficiently detect a context provider by partitioning the location, time, speed, and discovery sensitivities.

  • HiPeer: A Highly Reliable P2P System

    Giscard WEPIWE  Plamen L. SIMEONOV  

     
    PAPER-Peer-to-Peer Computing

      Vol:
    E89-D No:2
      Page(s):
    570-580

    The paper presents HiPeer, a robust resource distribution and discovery algorithm that can be used for fast and fault-tolerant location of resources in P2P network environments. HiPeer defines a concentric multi-ring overlay networking topology, on which dynamic network management methods are deployed. In terms of performance, HiPeer delivers a number of lowest bounds. We demonstrate that for any De Bruijn digraph of degree d ≥ 2 and diameter D_DB, HiPeer constructs a highly reliable network in which each node maintains a routing table with at most 2d+2 entries, independent of the number N of nodes in the system. Further, we show that any existing resource in the network can be found within at most D_HiPeer = log_d(N(d-1)+d) - 1 overlay hops. This result is as close to the Moore bound [1] as the query path length in other outstanding P2P proposals based on De Bruijn digraphs. Thus, we argue that HiPeer defines a highly connected network with connectivity d and the lowest yet known lookup bound D_HiPeer. Moreover, we show that any node's join or leave operation in HiPeer implies a constant expected reorganization cost on the order of O(d) control messages.
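The two stated bounds are easy to check numerically. Reading the lookup bound as D_HiPeer = ⌈log_d(N(d-1)+d)⌉ - 1 (the ceiling is our interpretation of the abstract's formula), the routing table stays at 2d+2 entries however large N grows, while the hop bound grows only logarithmically.

```python
import math

def routing_table_entries(d):
    """Per-node routing state: at most 2d+2 entries, independent of N."""
    return 2 * d + 2

def lookup_bound(n, d):
    """Hop bound, interpreted as ceil(log_d(N(d-1)+d)) - 1."""
    return math.ceil(math.log(n * (d - 1) + d, d)) - 1

print(routing_table_entries(4))    # 10 entries regardless of network size
print(lookup_bound(10_000, 4))     # 7 hops for N = 10000, d = 4
print(lookup_bound(1_000_000, 4))  # 10 hops: 100x more nodes, 3 more hops
```

The contrast between constant state and logarithmic lookups is the trade-off the De Bruijn construction buys.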

  • Acquisition and Maintenance of Knowledge for Online Navigation Suggestions

    Juan D. VELASQUEZ  Richard WEBER  Hiroshi YASUDA  Terumasa AOKI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E88-D No:5
      Page(s):
    993-1003

    The Internet has become an important medium for effective marketing and efficient operations for many institutions. Visitors of a particular web site leave behind valuable information on their preferences, requirements, and demands regarding the offered products and/or services. Understanding these requirements online, i.e., during a particular visit, is both a difficult technical challenge and a tremendous business opportunity. Web sites that can provide effective online navigation suggestions to their visitors can exploit the potential inherent in the data such visits generate every day. However, identifying, collecting, and maintaining the necessary knowledge that navigation suggestions are based on is far from trivial. We propose a methodology for acquiring and maintaining this knowledge efficiently using data mart and web mining technology. Its effectiveness has been shown in an application for a bank's web site.

  • Fast Algorithms for Mining Generalized Frequent Patterns of Generalized Association Rules

    Kritsada SRIPHAEW  Thanaruk THEERAMUNKONG  

     
    PAPER-Databases

      Vol:
    E87-D No:3
      Page(s):
    761-770

    Mining generalized frequent patterns of generalized association rules is an important process in a knowledge discovery system. In this paper, we propose a new approach for efficiently mining all frequent patterns using a novel set enumeration algorithm with two types of constraints on two generalized itemset relationships, called the subset-superset and ancestor-descendant constraints. We also show a method to mine a smaller set of generalized closed frequent itemsets instead of mining a large set of conventional generalized frequent itemsets. To this end, we develop two algorithms, called SET and cSET, for mining generalized frequent itemsets and generalized closed frequent itemsets, respectively. In a number of experiments, the proposed algorithms outperform the previous well-known algorithms in both computation time and memory utilization. Furthermore, the experiments with real datasets indicate that mining generalized closed frequent itemsets offers greater merit in computational cost, since the number of generalized closed frequent itemsets is much smaller than the number of generalized frequent itemsets.
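Why the closed set is smaller can be shown on a tiny example: an itemset is closed iff no proper superset has the same support, and every frequent itemset's support is recoverable from the closed ones. This brute-force enumeration is for intuition only; SET and cSET use a far more efficient set-enumeration search with the constraints described above.

```python
from itertools import combinations

def support(itemset, transactions):
    """Number of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def frequent_itemsets(transactions, minsup):
    """Enumerate all itemsets meeting the minimum support (brute force)."""
    items = sorted(set().union(*transactions))
    return [frozenset(c)
            for k in range(1, len(items) + 1)
            for c in combinations(items, k)
            if support(frozenset(c), transactions) >= minsup]

def closed_only(frequents, transactions):
    """Keep itemsets with no proper superset of identical support."""
    return [s for s in frequents
            if not any(s < t and support(s, transactions) == support(t, transactions)
                       for t in frequents)]

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "b"}, {"c"}]
freq = frequent_itemsets(txns, 2)
closed = closed_only(freq, txns)
print(len(freq), len(closed))  # 4 2
```

Here {a} and {b} are pruned because {a, b} has the same support, which is exactly the redundancy cSET avoids materializing.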

  • CLOCK: Clustering for Common Knowledge Extraction in a Set of Transactions

    Sang Hyun OH  Won Suk LEE  

     
    PAPER-Databases

      Vol:
    E86-D No:9
      Page(s):
    1845-1855

    Association mining extracts common relationships among a finite number of categorical data objects in a set of transactions. However, if the data objects are not categorical and are potentially unlimited, the association mining approach cannot be employed. Clustering, on the other hand, is suitable for modeling a large number of non-categorical data objects as long as a distance measure exists among them. Although clustering has conventionally been used to classify the objects of a data set into groups of similar objects based on data similarity, it can also be used to extract the properties of similar data objects that commonly appear across a set of transactions. In this paper, a new clustering method, CLOCK, is proposed to find common knowledge, such as the frequent ranges of similar objects, in a set of transactions. The common knowledge of the data objects in the transactions can be represented by the occurrence frequency of similar data objects across transactions as well as the common repetitive ratio of similar data objects within each transaction. Furthermore, the proposed method addresses how to maintain the identified common knowledge as a summarized profile. As a result, any difference between a newly collected transaction and the common knowledge of past transactions can be easily identified.

  • Decentralized Meta-Data Strategies: Effective Peer-to-Peer Search

    Sam JOSEPH  Takashige HOSHIAI  

     
    INVITED PAPER

      Vol:
    E86-B No:6
      Page(s):
    1740-1753

    Gnutella's service announcement in March 2000 stirred worldwide interest in the P2P model. Fundamentally, the P2P model dispenses with the broker, the centralized management server that until now has figured so importantly in prevailing business models, and offers a new approach that enables peers such as end terminals to discover and locate other suitable peers on their own, without going through an intermediary server. The wealth of content made available by peer-to-peer systems such as Gnutella and Freenet has spurred many authors to consider how meta-data might be used to support more effective search in a distributed environment. This paper reviews a number of these systems and attempts to identify common themes. At this time, the major division between the different approaches is the use of a hash-based routing scheme.

  • Discovering Knowledge from Graph Structured Data by Using Refutably Inductive Inference of Formal Graph Systems

    Tetsuhiro MIYAHARA  Tomoyuki UCHIDA  Takayoshi SHOUDAI  Tetsuji KUBOYAMA  Kenichi TAKAHASHI  Hiroaki UEDA  

     
    PAPER

      Vol:
    E84-D No:1
      Page(s):
    48-56

    We present a new method for discovering knowledge from structured data represented by graphs, in the framework of Inductive Logic Programming. A graph, or network, is widely used for representing relations between various data and for expressing small and easily understandable hypotheses. An analysis system that directly manipulates graphs is therefore useful for knowledge discovery. Our method uses Formal Graph System (FGS) as a knowledge representation language for graph structured data. FGS is a kind of logic programming system that deals with graphs directly, just like first-order terms. Our method employs a refutably inductive inference algorithm as its learning algorithm. A refutably inductive inference algorithm is a special type of inductive inference algorithm with refutability of hypothesis spaces, which makes it suitable for knowledge discovery. We give a sufficiently large hypothesis space, the set of weakly reducing FGS programs, and show that this hypothesis space is refutably inferable from complete data. We have designed and implemented a prototype of a knowledge discovery system, KD-FGS, which is based on our method and acquires knowledge directly from graph structured data. Finally, we discuss the applicability of our method to graph structured data, with experimental results on some graph-theoretical notions.

  • A Study of Collaborative Discovery Processes Using a Cognitive Simulator

    Kazuhisa MIWA  

     
    PAPER-Artificial Intelligence, Cognitive Science

      Vol:
    E83-D No:12
      Page(s):
    2088-2097

    We discuss human collaborative discovery processes using a production system model as a cognitive simulator. We have developed an interactive production system architecture to construct the simulator. Two production systems interactively find targets while sharing only the experimental results; neither knows the hypothesis the other system holds. Through this kind of interaction, we verify whether the performance of two systems interactively finding targets exceeds that of two systems finding targets independently. If we confirm the superiority of collaborative discovery, we regard it as emergence by interaction. The results are: (1) generally speaking, collaboration does not produce the emergence defined above, and (2) as the difference between the hypothesis-testing strategies the two systems use grows larger, the benefit of the interaction gradually increases.

  • Inductive Logic Programming: From Logic of Discovery to Machine Learning

    Hiroki ARIMURA  Akihiro YAMAMOTO  

     
    INVITED PAPER

      Vol:
    E83-D No:1
      Page(s):
    10-18

    Inductive Logic Programming (ILP) is the study of machine learning systems that use clausal theories in first-order logic as a representation language. In this paper, we survey the theoretical foundations of ILP from the viewpoints of the Logic of Discovery and Machine Learning, and try to unify these two views with the support of the modern theory of Logic Programming. First, we define several hypothesis construction methods in ILP and give their proof-theoretic foundations by treating each as a procedure that completes incomplete proofs. Next, we discuss the design of individual learning algorithms using these hypothesis construction methods. We review known results on learning logic programs in computational learning theory, and show that these algorithms are instances of a generic learning strategy with proof completion methods.
