Author Search Result

[Author] Tugkan TUGLULAR (2 hits)

  • Model-Based Contract Testing of Graphical User Interfaces

    Tugkan TUGLULAR  Arda MUFTUOGLU  Fevzi BELLI  Michael LINSCHULTE  

    PAPER-Software Engineering

  Publicized:
    2015/03/19
      Vol:
    E98-D No:7
      Page(s):
    1297-1305

    Graphical User Interfaces (GUIs) are critical for the security, safety, and reliability of software systems. Injection attacks, for instance via SQL, succeed due to insufficient input validation and can be avoided if contract-based approaches, such as Design by Contract, are followed in the software development lifecycle of GUIs. This paper proposes a model-based testing approach for detecting GUI data contract violations, which may result in serious failures such as a system crash. A contract-based model of GUI data specifications is used to develop test scenarios and to serve as a test oracle. The technique introduced uses multi-terminal binary decision diagrams, designed as an integral part of decision table-augmented event sequence graphs, to implement a GUI testing process. A case study that validates the presented approach on a port scanner written in the Java programming language is presented.
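    The contract idea described above can be illustrated with a minimal sketch: a precondition check guards a GUI input field before its value is used, so that malformed or injection-style input is rejected as a contract violation. The field name, pattern, and functions below are hypothetical and not taken from the paper.

    ```python
    import re

    # Hypothetical contract for a port-scanner GUI's "port" input field:
    # the precondition accepts only decimal integers in the TCP port range.
    PORT_PATTERN = re.compile(r"^\d{1,5}$")

    def satisfies_port_contract(raw_value: str) -> bool:
        """Precondition: input must be a decimal number in 1..65535."""
        if not PORT_PATTERN.fullmatch(raw_value.strip()):
            return False
        return 1 <= int(raw_value) <= 65535

    def scan_port(raw_value: str) -> str:
        # In the Design by Contract style, a violated precondition is
        # reported instead of passing tainted input to the back end,
        # which is what blocks injection-style values at the GUI layer.
        if not satisfies_port_contract(raw_value):
            raise ValueError(f"contract violated for input: {raw_value!r}")
        return f"scanning port {int(raw_value)}"
    ```

    With this contract in place, `scan_port("443")` proceeds, while an input such as `"80; DROP TABLE users"` fails the precondition and never reaches the scanning logic.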

  • Rule-Based Automatic Question Generation Using Semantic Role Labeling Open Access

    Onur KEKLIK  Tugkan TUGLULAR  Selma TEKIR  

    PAPER-Natural Language Processing

  Publicized:
    2019/04/01
      Vol:
    E102-D No:7
      Page(s):
    1362-1373

    This paper proposes a new rule-based approach to automatic question generation. The proposed approach analyzes both the syntactic and the semantic structure of a sentence. Although the primary objective of the designed system is question generation from sentences, automatic evaluation results show that it also performs well on reading comprehension datasets, which focus on question generation from paragraphs. In particular, with respect to the METEOR metric, the designed system significantly outperforms all other systems in automatic evaluation. In human evaluation, the designed system exhibits similar performance by generating the most natural (human-like) questions.
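    The rule-based idea can be sketched as follows: a semantic role labeler assigns roles such as agent (ARG0) and patient (ARG1) to a predicate, and template rules turn each role into a wh-question. The toy rules and role names below are illustrative assumptions, not the paper's actual rule set; a real system would obtain the roles from an SRL model rather than by hand.

    ```python
    def generate_questions(verb_past: str, verb_base: str,
                           agent: str, patient: str) -> list:
        """Apply two toy SRL-based rules to a labeled predicate.

        verb_past / verb_base: the predicate in past and base form
        agent: the ARG0 role filler (who performs the action)
        patient: the ARG1 role filler (what the action affects)
        """
        return [
            # Rule 1: question the agent slot.
            f"Who {verb_past} {patient}?",
            # Rule 2: question the patient slot (do-support needs base form).
            f"What did {agent} {verb_base}?",
        ]
    ```

    For the hand-labeled sentence "Bill Gates founded Microsoft", `generate_questions("founded", "found", "Bill Gates", "Microsoft")` yields "Who founded Microsoft?" and "What did Bill Gates found?".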