
Author Search Result

[Author] Semih YUMUSAK (4 hits)

1-4 of 4 hits
  • Classification of Linked Data Sources Using Semantic Scoring

    Semih YUMUSAK  Erdogan DOGDU  Halife KODAZ  

     
    PAPER

    Publicized: 2017/09/15
    Vol: E101-D No:1
    Page(s): 99-107

    Linked data sets are created using Semantic Web technologies; they are usually large, and the number of such datasets is growing. Query execution is therefore costly, and knowing the content of such datasets should help in targeted querying. Our aim in this paper is to classify linked data sets by their knowledge content. Earlier projects such as LOD Cloud, LODStats, and SPARQLES analyze linked data sources in terms of content, availability, and infrastructure; in these projects, linked data sets are classified and tagged principally using the VoID vocabulary. Although all linked data sources listed in these projects appear to be classified or tagged, there are only a limited number of studies on automated tagging and classification of newly arriving linked data sets. Here, we focus on automated classification of linked data sets using semantic scoring methods. We collected the SPARQL endpoints of 1,328 unique linked datasets from the Datahub, LOD Cloud, LODStats, SPARQLES, and SpEnD projects. We then queried the textual descriptions of resources in these data sets through their rdfs:comment and rdfs:label property values. We analyzed these texts with document analysis techniques, treating every SPARQL endpoint as a separate document. To this end, we used the WordNet semantic relations library combined with an adapted term frequency-inverse document frequency (tf-idf) analysis of the words and their semantic neighbours. From the WordNet database, we extracted information about the comment/label objects in the linked data sources using the hypernym, hyponym, homonym, meronym, region, topic, and usage semantic relations. We obtained significant results for the hypernym and topic relations: they yield words that identify data sets, which can be used for automatic classification and tagging of linked data sources. Using these words, we experimented with different classifiers and scoring methods, which improved classification accuracy.
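
    The core steps described above — collecting rdfs:label/rdfs:comment texts per SPARQL endpoint, expanding terms with WordNet relations such as hypernyms, and scoring them with tf-idf — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the example endpoint, the query limit, and the use of SPARQLWrapper, NLTK, and scikit-learn are assumptions.

```python
# Minimal sketch: fetch rdfs:label/rdfs:comment texts from a SPARQL endpoint,
# expand tokens with WordNet hypernyms, and score the result with tf-idf.
from SPARQLWrapper import SPARQLWrapper, JSON
from nltk.corpus import wordnet as wn          # requires nltk.download("wordnet")
from sklearn.feature_extraction.text import TfidfVectorizer

QUERY = """
SELECT ?text WHERE {
  { ?s <http://www.w3.org/2000/01/rdf-schema#label> ?text }
  UNION
  { ?s <http://www.w3.org/2000/01/rdf-schema#comment> ?text }
} LIMIT 500
"""

def endpoint_document(endpoint_url):
    """Collect label/comment literals from one endpoint into a single 'document'."""
    sparql = SPARQLWrapper(endpoint_url)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return " ".join(b["text"]["value"] for b in results["results"]["bindings"])

def expand_with_hypernyms(text):
    """Append WordNet hypernym lemmas of each token to the document."""
    tokens = text.lower().split()
    expanded = list(tokens)
    for tok in tokens:
        for synset in wn.synsets(tok)[:1]:          # first sense only, for brevity
            for hyper in synset.hypernyms():
                expanded.extend(lemma.name() for lemma in hyper.lemmas())
    return " ".join(expanded)

# Hypothetical endpoint list; the paper collects 1,328 endpoints.
endpoints = ["http://dbpedia.org/sparql"]
docs = [expand_with_hypernyms(endpoint_document(url)) for url in endpoints]
tfidf = TfidfVectorizer().fit_transform(docs)   # rows: endpoints, columns: expanded terms
```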

  • SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines

    Semih YUMUSAK  Erdogan DOGDU  Halife KODAZ  Andreas KAMILARIS  Pierre-Yves VANDENBUSSCHE  

     
    PAPER

    Publicized: 2017/01/17
    Vol: E100-D No:4
    Page(s): 758-767

    Linked data endpoints are online query gateways to semantically annotated linked data sources. These data sources are queried using the standard SPARQL query language. Although a linked data endpoint (i.e. SPARQL endpoint) is a basic Web service, it provides a platform for federated online querying and data linking methods. For linked data consumers, SPARQL endpoint availability and discovery are crucial for live querying and semantic information retrieval. Current studies show that the availability of linked datasets is very low, while the locations of linked data endpoints change frequently. There are linked data repositories that collect and list the available linked data endpoints or resources. Around half of the endpoints listed in existing repositories are observed to be inaccessible (temporarily or permanently offline). These endpoint URLs are shared through repository websites such as Datahub.io; however, they are weakly maintained and revised only by their publishers. In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a “search keyword” discovery process that finds keywords relevant to the linked data domain and specifically to SPARQL endpoints. The collected search keywords are then used to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). Using this method, we discovered most of the SPARQL endpoints currently listed in existing endpoint repositories, as well as a significant number of new SPARQL endpoints. We analyze our findings in detail in comparison to the Datahub collection.
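
    The monitoring half of such a pipeline is easy to illustrate: a candidate endpoint URL counts as available if it answers a trivial SPARQL ASK query. The sketch below is a simplification under stated assumptions; it uses the requests library with a plain HTTP GET, and omits the search-engine keyword discovery that SpEnD actually performs.

```python
# Minimal sketch: probe candidate SPARQL endpoint URLs with a trivial ASK query
# and record which ones are currently reachable (online/offline monitoring).
import requests

ASK_QUERY = "ASK { ?s ?p ?o }"

def is_endpoint_alive(url, timeout=10):
    """Return True if the URL answers a SPARQL ASK query with an HTTP 200."""
    try:
        resp = requests.get(
            url,
            params={"query": ASK_QUERY},
            headers={"Accept": "application/sparql-results+json"},
            timeout=timeout,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False

# Hypothetical candidate list, e.g. URLs harvested from search-engine result pages.
candidates = [
    "http://dbpedia.org/sparql",
    "http://example.org/sparql",   # placeholder; likely offline
]
status = {url: is_endpoint_alive(url) for url in candidates}
print(status)
```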

  • A Methodology on Converting 10-K Filings into a Machine Learning Dataset and Its Applications

    Mustafa Sami KACAR  Semih YUMUSAK  Halife KODAZ  

     
    PAPER

    Publicized: 2022/10/12
    Vol: E106-D No:4
    Page(s): 477-487

    Companies listed on the stock exchange are required to share their annual reports with the U.S. Securities and Exchange Commission (SEC) within the first three months following the fiscal year. These reports, namely 10-K filings, are made publicly available by the SEC through the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) database. 10-K filings use standard file formats (XBRL, HTML, PDF) to publish the companies' financial reports. Although the file formats provide a standard structure, the content and metadata of the financial reports (e.g. tag names) are not strictly bound to a pre-defined schema. This study proposes a data collection and preprocessing method to semantify the financial reports and use the collected data for further analysis (i.e. machine learning). The analysis of eight different datasets created during the study is presented using the proposed data transformation methods. As a use case, five different machine learning algorithms were applied to these datasets to predict whether the corresponding company is in the S&P 500 index. The strong machine learning results indicate that the dataset generation methodology is successful and that the datasets are ready for further use.
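
    A rough idea of the preprocessing step — flattening the loosely structured facts of an XBRL 10-K instance document into a feature row — is sketched below. The file name and the selected concept names are hypothetical, and lxml is an assumed choice; the paper's own pipeline and its eight datasets are not reproduced here.

```python
# Minimal sketch: flatten facts from a locally saved XBRL 10-K instance document
# into a {concept: value} row that could feed a machine learning dataset.
from lxml import etree

# Hypothetical concept names; real filings expose thousands of us-gaap concepts.
CONCEPTS = {"Revenues", "NetIncomeLoss", "Assets"}

def filing_to_row(path):
    """Parse one XBRL instance file, keeping the first value seen per concept."""
    tree = etree.parse(path)
    row = {}
    for element in tree.iter():
        if not isinstance(element.tag, str):       # skip comments and PIs
            continue
        local_name = etree.QName(element).localname  # drop the namespace prefix
        if local_name in CONCEPTS and element.text and element.text.strip():
            row.setdefault(local_name, element.text.strip())
    return row

# Usage (assuming a downloaded filing saved as 'example_10k.xml'):
# row = filing_to_row("example_10k.xml")
# Each row becomes one sample; S&P 500 membership serves as the class label.
```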

  • Price Rank Prediction of a Company by Utilizing Data Mining Methods on Financial Disclosures

    Mustafa Sami KACAR  Semih YUMUSAK  Halife KODAZ  

     
    PAPER

    Publicized: 2023/05/22
    Vol: E106-D No:9
    Page(s): 1461-1471

    The use of reports has grown significantly in recent decades as data has become digitized. However, traditional statistical methods no longer suffice due to the uncontrolled expansion and complexity of raw data. It is therefore crucial to clean and analyze financial data using modern machine learning methods. In this study, the quarterly reports (i.e. 10-Q filings) of publicly traded companies in the United States were analyzed using data mining methods. The study used 8,905 quarterly reports of companies from 2019 to 2022. The proposed approach consists of two phases combining three different machine learning methods. The first two methods were used to generate a dataset from the 10-Q filings by extracting new features, and the last method was used for the classification problem. The Doc2Vec method in the Gensim framework was used to generate vectors from the textual tags in the 10-Q filings. The generated vectors were clustered using the K-means algorithm to combine the tags according to their semantics. In this way, 94,000 tags representing different financial items were reduced to 20,000 clusters, making the analysis more efficient and manageable. The dataset was created from the values corresponding to the tags in each cluster. In addition, the PriceRank metric was added to the dataset as a class label indicating the price strength of a company for the next financial quarter. The aim is thus to determine the effect of a company's quarterly reports on its market price in the following period. Finally, a Convolutional Neural Network model was used for the classification problem. To evaluate the results, all stages of the proposed hybrid method were compared with other machine learning techniques. This novel approach could assist investors in examining companies collectively and inferring new, significant insights. The proposed method was compared with alternative approaches both for creating datasets by extracting new features and for the classification task, and was evaluated with different metrics. It performed better than the other machine learning methods at predicting future price strength from past reports, with an accuracy of 84% on the created 10-Q filings dataset.
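
    The first two stages of the hybrid pipeline — embedding textual tags with Doc2Vec and merging semantically similar tags with K-means — can be sketched as follows. The toy tag strings, vector size, and cluster count are assumptions for illustration; the paper works with roughly 94,000 tags reduced to 20,000 clusters, and the downstream CNN classifier is not shown.

```python
# Minimal sketch: embed 10-Q textual tags with Doc2Vec (Gensim) and merge
# semantically similar tags into clusters with K-means (scikit-learn).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

# Toy tag names standing in for the ~94,000 distinct tags in the filings.
tags = [
    "net income loss",
    "net income attributable to parent",
    "revenues",
    "revenue from contract with customer",
]
corpus = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(tags)]

# Train a small Doc2Vec model; vector_size and epochs here are illustrative only.
model = Doc2Vec(corpus, vector_size=16, min_count=1, epochs=50)
vectors = [model.infer_vector(t.split()) for t in tags]

# Cluster the tag vectors so that semantically close tags share one feature.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for tag, cluster in zip(tags, kmeans.labels_):
    print(f"cluster {cluster}: {tag}")
# Cluster-level aggregates of the reported values, plus the PriceRank label,
# would then form the dataset for a downstream classifier.
```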