
Keyword Search Results

[Keyword] CASE (81 hits)

Showing 1-20 of 81 hits

  • A Case Study on Recommender Systems in Online Conferences: Behavioral Analysis through A/B Testing Open Access

    Ayano OKOSO, Keisuke OTAKI, Yoshinao ISHII, Satoshi KOIDE

    PAPER

    Publicized: 2024/01/16
    Vol: E107-D No:5
    Page(s): 650-658

    Owing to the COVID-19 pandemic, many academic conferences are now held online. Our study focuses on online video conferences, in which participants can watch pre-recorded videos embedded in a conference website. Participants must efficiently find videos that match their interests among many candidates, and unless they actively explore the site, they have few opportunities to encounter videos they had not planned to watch but might find interesting. To alleviate these problems, introducing a recommender system seems promising. In this paper, we implemented typical recommender systems for an online video conference with 4,000 participants and analyzed user behavior through A/B testing. Our results showed that users receiving recommendations based on collaborative filtering had a higher continuous video-viewing rate and spent longer on the website than those without recommendations. In addition, these users were exposed to a broader range of videos and tended to view more videos from categories that are otherwise rarely viewed together. Furthermore, the impact of the recommender system was most significant among users who spent less time on the site.
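
    The abstract does not spell out the recommender implementations, so as a rough illustration of the collaborative-filtering variant mentioned above, here is a minimal item-based CF sketch; the interaction matrix, the video counts, and the cosine-similarity choice are illustrative assumptions, not the authors' setup.

    ```python
    import numpy as np

    # Hypothetical user-video watch matrix (1 = watched): 4 users x 5 videos.
    interactions = np.array([
        [1, 1, 0, 0, 1],
        [0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0],
        [0, 1, 0, 1, 1],
    ], dtype=float)

    def cosine_sim(matrix: np.ndarray) -> np.ndarray:
        """Cosine similarity between item (column) vectors."""
        norms = np.linalg.norm(matrix, axis=0, keepdims=True)
        norms[norms == 0] = 1.0            # guard against division by zero
        unit = matrix / norms
        return unit.T @ unit

    def recommend(user: int, k: int = 2) -> list[int]:
        """Score unseen videos by similarity to videos the user watched."""
        sim = cosine_sim(interactions)
        scores = interactions[user] @ sim          # aggregate item-item similarity
        scores[interactions[user] > 0] = -np.inf   # mask already-watched videos
        return [int(i) for i in np.argsort(scores)[::-1][:k]]

    print(recommend(user=0))  # two videos user 0 has not watched yet
    ```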

  • Locating Concepts on Use Case Steps in Source Code Open Access

    Shinpei HAYASHI, Teppei KATO, Motoshi SAEKI

    PAPER

    Publicized: 2023/12/20
    Vol: E107-D No:5
    Page(s): 602-612

    Use case descriptions describe features consisting of multiple concepts that follow a procedural flow. Because existing feature location techniques ignore the relations between concepts in such features, it is difficult to identify the concepts in source code with high accuracy. This paper presents a technique to locate the concepts of a feature described in a use case description consisting of multiple use case steps, using the dependencies between those steps. We regard each use case step as the description of a concept, apply an existing concept location technique to these descriptions, and obtain lists of candidate modules. In addition, three types of dependencies among use case steps, i.e., time, call, and data dependencies, are extracted from their textual descriptions. Modules in the obtained lists that fail to match the dependencies between concepts are filtered out, yielding more precise lists of modules. We applied our technique to use case descriptions in a benchmark; the results show that our technique outperformed a baseline setting without the filtering.
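
    A hedged sketch of the filtering idea described above: each use case step yields a ranked list of candidate modules from an existing concept location technique, and candidate pairs that contradict an extracted dependency (here, a call dependency) are filtered out. The module names, the call graph, and the simple pairwise rule are invented for illustration and are not the paper's exact algorithm.

    ```python
    # Candidate modules per use case step, as an existing concept location
    # technique might rank them (names are hypothetical).
    candidates = {
        "step1_register_user": ["UserService", "AuthController", "Logger"],
        "step2_send_confirmation": ["MailSender", "Logger", "UserService"],
    }

    # Hypothetical static call graph: caller -> callees.
    call_graph = {
        "AuthController": {"UserService"},
        "UserService": {"MailSender"},
    }

    # A call dependency extracted from the use case text says step1's
    # module should invoke step2's module.
    def consistent(m1: str, m2: str) -> bool:
        return m2 in call_graph.get(m1, set())

    filtered = [
        (m1, m2)
        for m1 in candidates["step1_register_user"]
        for m2 in candidates["step2_send_confirmation"]
        if consistent(m1, m2)
    ]
    print(filtered)  # [('UserService', 'MailSender')] respects the dependency
    ```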

  • Performance Analysis and Optimization of Worst Case User in CoMP Ultra Dense Networks

    Sinh Cong LAM

    PAPER-Wireless Communication Technologies

    Publicized: 2023/03/27
    Vol: E106-B No:10
    Page(s): 979-986

    In cellular systems, the Worst Case User (WCU), whose distances to its three nearest base stations (BSs) are similar, usually achieves the lowest performance. Improving user performance, especially for the WCU, is a major challenge for both network designers and operators. This paper studies the WCU under the Stretched Pathloss Model (SPLM), analyzing its coverage probability with the stochastic geometry tool and optimizing its data rate under a transmission power constraint with the reinforcement learning technique. In the analysis, only fast fading from the WCU to the serving BSs is taken into account, yielding a lower bound on the coverage probability. Furthermore, the paper assumes that the Coordinated Multi-Point (CoMP) technique is employed only for the WCU, to enhance its downlink signal while avoiding an explosion of intercell interference (ICI). Through analysis and simulation, the paper shows that increasing the transmission power can improve WCU performance in bad wireless environments, whereas in good environments the deployment of advanced techniques such as Joint Transmission (JT), Joint Scheduling (JS), and reinforcement learning is the more suitable solution.
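
    As a hedged illustration of the quantities involved (the general definitions below are standard in stochastic-geometry analyses of cellular networks; the paper's SPLM-specific derivation is more involved, and the notation here is ours):

    ```latex
    % Coverage probability of a user served at distance r (generic form).
    \[
      P_c(\theta) \;=\; \Pr\!\left[\mathrm{SINR} > \theta\right],
      \qquad
      \mathrm{SINR} \;=\; \frac{P\,h\,g(r)}{\sigma^2 + \sum_{i \in \Phi} P\,h_i\,g(r_i)},
    \]
    % where, under a stretched pathloss model, attenuation takes the form
    \[
      g(r) \;=\; e^{-\alpha r^{\beta}}, \qquad \alpha > 0,\; 0 < \beta \le 1,
    \]
    % with fast-fading gain h, noise power sigma^2, and interferer set Phi.
    ```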

  • Envisioning 6G Outlook and Technical Enablers Open Access

    Hideaki TAKAHASHI, Hisashi ONOZAWA, Satish K., Mikko A. UUSITALO

    INVITED PAPER

    Publicized: 2023/05/23
    Vol: E106-B No:9
    Page(s): 724-734

    6G research has been extensively conducted by individual organizations as well as by pre-competitive joint research initiatives. One such initiative is Hexa-X, the European 6G flagship project. This paper shares the up-to-date deliverables through which Hexa-X envisions the 6G era; the deliverables presented here encompass the overall 6G vision, use cases, and technical enablers, including the latest deliverables on the tenets of 6G architectural design and the central pillars of technical enablers. In conclusion, the authors encourage joint research and proof-of-concept (PoC) collaboration with Japanese industry, academia, and research initiatives on the potential technical enablers presented in this paper, aimed at global harmonization towards 6G standards.

  • Cataloging Bad Smells in Use Case Descriptions and Automating Their Detection

    Yotaro SEKI, Shinpei HAYASHI, Motoshi SAEKI

    PAPER

    Publicized: 2022/01/06
    Vol: E105-D No:5
    Page(s): 849-863

    Use case modeling is a popular way to represent the functionality of a system to be developed, and it consists of two parts: a use case diagram and use case descriptions. Use case descriptions are structured text written in natural language, and the use of natural language can lead to poor descriptions, e.g., ambiguous, inconsistent, and/or incomplete ones. Poor descriptions lead to missing or incorrectly elicited requirements and reduce the comprehensibility of the produced use case model. This paper proposes a technique to automatically detect bad smells in use case descriptions, i.e., symptoms of poor descriptions. First, to clarify what constitutes a bad smell, we analyzed existing use case models to identify poor use case descriptions concretely and developed a catalog of bad smells. Some of the bad smells can be refined into measures using the Goal-Question-Metric paradigm to automate their detection. The main contributions of this paper are this catalog and the automated detection of the cataloged smells. We first implemented an automated smell detector for 22 bad smells and assessed its usefulness in an experiment; this first version of our tool achieved a precision of 0.591 and a recall of 0.981. Through evaluating the catalog and the tool, we found six additional bad smells and two metrics, and the final version of the tool achieved a precision of 0.596 and a recall of 1.000.
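
    The catalog itself is not reproduced in the abstract, so the sketch below only illustrates the detection style with one hypothetical smell: vague adverbs in use case steps, flagged from a hand-picked word list. The smell name, the word list, and the example steps are illustrative assumptions, not items from the authors' catalog.

    ```python
    import re

    # Hypothetical "vague wording" smell: steps containing imprecise adverbs.
    VAGUE_WORDS = {"quickly", "properly", "appropriately", "somehow", "etc"}

    def detect_vague_steps(steps: list[str]) -> list[tuple[int, str]]:
        """Return (step number, offending word) pairs for smelly steps."""
        hits = []
        for i, step in enumerate(steps, start=1):
            for word in re.findall(r"[a-z]+", step.lower()):
                if word in VAGUE_WORDS:
                    hits.append((i, word))
        return hits

    steps = [
        "The user enters an ID and password.",
        "The system properly validates the input somehow.",
    ]
    print(detect_vague_steps(steps))  # [(2, 'properly'), (2, 'somehow')]
    ```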

  • Fusion of Blockchain, IoT and Artificial Intelligence - A Survey

    Srinivas KOPPU, Kumar K, Siva Rama KRISHNAN SOMAYAJI, Iyapparaja MEENAKSHISUNDARAM, Weizheng WANG, Chunhua SU

    SURVEY PAPER

    Publicized: 2021/09/28
    Vol: E105-D No:2
    Page(s): 300-308

    Blockchain has been one of the most prominent and rapidly adopted technologies of the last decade across various applications. In recent years, many researchers have explored the capabilities of blockchain in smart IoT to address various security challenges. Integrating IoT and blockchain solves many security problems, but scalability remains a huge challenge. To address this, various AI techniques can be applied within a blockchain-IoT framework, providing an efficient information system. This survey explores works pertaining to domains that integrate AI, IoT, and blockchain. It also discusses potential industrial use cases for the fusion of blockchain, AI, and IoT, together with their challenges.

  • Interleaved Weighted Round-Robin: A Network Calculus Analysis Open Access

    Seyed Mohammadhossein TABATABAEE, Jean-Yves LE BOUDEC, Marc BOYER

    INVITED PAPER

    Publicized: 2021/07/01
    Vol: E104-B No:12
    Page(s): 1479-1493

    Weighted Round-Robin (WRR) is often used, due to its simplicity, for scheduling packets or tasks. With WRR, a number of packets equal to the weight allocated to a flow can be served consecutively, which leads to a bursty service. Interleaved Weighted Round-Robin (IWRR) is a variant that mitigates this effect. We are interested in finding bounds on worst-case delay obtained with IWRR. To this end, we use a network calculus approach and find a strict service curve for IWRR. The result is obtained using the pseudo-inverse of a function. We show that the strict service curve is the best obtainable one, and that delay bounds derived from it are tight (i.e., worst-case) for flows of packets of constant size. Furthermore, the IWRR strict service curve dominates the strict service curve for WRR that was previously published. We provide some numerical examples to illustrate the reduction in worst-case delays caused by IWRR compared to WRR.
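
    As a hedged illustration of the scheduling difference discussed above (the service-curve analysis itself is not reproduced), the sketch below emits the per-round service order of WRR and IWRR for given weights. The flow names and weights are made up, and empty queues are ignored for simplicity.

    ```python
    def wrr_round(weights: dict[str, int]) -> list[str]:
        """One WRR round: each flow sends all of its weight consecutively."""
        order = []
        for flow, w in weights.items():
            order.extend([flow] * w)
        return order

    def iwrr_round(weights: dict[str, int]) -> list[str]:
        """One IWRR round: cycle c serves every flow whose weight is >= c,
        interleaving flows instead of letting one flow burst."""
        order = []
        for cycle in range(1, max(weights.values()) + 1):
            for flow, w in weights.items():
                if w >= cycle:
                    order.append(flow)
        return order

    weights = {"A": 3, "B": 1, "C": 2}  # hypothetical weights
    print(wrr_round(weights))   # ['A', 'A', 'A', 'B', 'C', 'C']
    print(iwrr_round(weights))  # ['A', 'B', 'C', 'A', 'C', 'A']
    ```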

  • MTGAN: Extending Test Case set for Deep Learning Image Classifier

    Erhu LIU, Song HUANG, Cheng ZONG, Changyou ZHENG, Yongming YAO, Jing ZHU, Shiqi TANG, Yanqiu WANG

    PAPER-Software Engineering

    Publicized: 2021/02/05
    Vol: E104-D No:5
    Page(s): 709-722

    In recent years, deep learning has achieved excellent results in image recognition, voice processing, and other research areas, setting off a new wave of research and application. However, internal defects and external malicious attacks may threaten the safe and reliable operation of a deep learning system and even cause severe consequences. The technology for testing deep learning systems is still in its infancy: traditional software testing techniques are not directly applicable, and characteristics of deep learning such as complex application scenarios, high-dimensional input data, and poorly interpretable operation logic bring new challenges to testing. This paper focuses on the problem of test case generation and points out that adversarial examples can be used as test cases. It then proposes MTGAN, a framework that generates test cases for deep learning image classifiers based on Generative Adversarial Networks (GANs). Finally, the paper evaluates the effectiveness of MTGAN.
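
    MTGAN itself is GAN-based and is not reproduced here; as a minimal sketch of the premise that adversarial examples can serve as test cases, the following generates one adversarial input with the fast gradient sign method (FGSM) in PyTorch. The toy model, the input shape, and the epsilon value are illustrative assumptions, and FGSM is a stand-in technique, not the paper's framework.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical toy classifier; any differentiable image classifier works.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_test_case(x: torch.Tensor, label: torch.Tensor,
                       eps: float = 0.1) -> torch.Tensor:
        """Perturb x in the gradient-sign direction so it can probe the
        classifier as an adversarial test case (FGSM, not MTGAN)."""
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()        # one-step perturbation
        return x_adv.clamp(0.0, 1.0).detach()  # keep a valid image range

    x = torch.rand(1, 1, 28, 28)   # fake 28x28 grayscale image
    label = torch.tensor([3])      # assumed ground-truth class
    x_adv = fgsm_test_case(x, label)
    print(model(x).argmax(1), model(x_adv).argmax(1))  # labels may differ
    ```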

  • Mobility Innovation “Another CASE” Open Access

    Koji OGURI, Haruki KAWANAKA, Shintaro ONO

    INVITED PAPER

    Vol: E104-A No:2
    Page(s): 349-356

    The environment surrounding automotive technology is undergoing a major transformation. In particular, technological innovation is advancing in the new areas called “CASE”: Connected, Autonomous/Automated, Shared, and Electric, and various research activities are underway. However, this is an automobile-centered approach; when considering the development of a new automotive society, it is also necessary to take the human-centered standpoint of the users. Therefore, this paper proposes the possibility of technological innovation in the areas of “Another CASE”: Comfortable, Accessible, Safety, and Enjoy/Exciting, and introduces some interesting research in these areas.

  • Influence of Outliers on Estimation Accuracy of Software Development Effort

    Kenichi ONO, Masateru TSUNODA, Akito MONDEN, Kenichi MATSUMOTO

    PAPER

    Publicized: 2020/10/02
    Vol: E104-D No:1
    Page(s): 91-105

    When applying effort estimation methods, the issue of outliers is inevitable. Although several studies have evaluated outlier elimination methods, the extent of outliers' influence has not been clarified: it is unclear whether we should always be sensitive to outliers, whether outliers should always be removed before estimation, and how much precaution is required when collecting project data. The goal of this study is therefore to provide a guideline suggesting how sensitively outliers should be handled. In the analysis, we experimentally added outliers to three datasets and analyzed their influence, varying the percentage of outliers, their extent (e.g., the actual effort was changed from 100 to 200 person-hours when the extent was 100%), the variables containing outliers (e.g., function points or effort), and the locations of outliers in a dataset. The effort was then estimated from these datasets using multiple linear regression analysis and analogy-based estimation. The experimental results indicate that the influence of outliers on estimation accuracy is non-trivial when the extent or percentage of outliers is considerable (i.e., 100% and 20%, respectively), but negligible when they are small (i.e., 50% and 10%, respectively). Moreover, in some cases, linear regression analysis was less affected by outliers than analogy-based estimation.
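
    A hedged sketch of the experimental manipulation described above (the paper's datasets, metrics, and estimation settings differ): inject outliers of a given percentage and extent into a synthetic effort dataset and observe how a simple linear fit's error changes. All data below are synthetic assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic project data: effort roughly proportional to function points.
    fp = rng.uniform(50, 500, size=100)
    effort = 2.0 * fp + rng.normal(0, 20, size=100)

    def inject_outliers(y: np.ndarray, pct: float, extent: float) -> np.ndarray:
        """Multiply a random pct of values by (1 + extent),
        e.g., extent=1.0 turns 100 person-hours into 200."""
        y = y.copy()
        idx = rng.choice(len(y), size=int(pct * len(y)), replace=False)
        y[idx] *= (1.0 + extent)
        return y

    def mae_of_fit(x: np.ndarray, y: np.ndarray) -> float:
        """Fit on possibly contaminated y; score against the clean effort."""
        slope, intercept = np.polyfit(x, y, deg=1)
        return float(np.mean(np.abs(effort - (slope * x + intercept))))

    print(mae_of_fit(fp, effort))                              # clean baseline
    print(mae_of_fit(fp, inject_outliers(effort, 0.10, 0.5)))  # 10%, extent 50%
    print(mae_of_fit(fp, inject_outliers(effort, 0.20, 1.0)))  # 20%, extent 100%
    ```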

  • Facilitating Use of Assurance Cases in Industries by Workshops with an Agent-Based Method

    Yutaka MATSUNO, Toshinori TAKAI, Shuichiro YAMAMOTO

    PAPER

    Publicized: 2020/03/11
    Vol: E103-D No:6
    Page(s): 1297-1308

    Assurance cases are documents for arguing that a system satisfies required properties, such as safety and security, in a given environment, based on sufficient evidence. As systems become complex and networked, the importance of assurance cases has grown. However, creating assurance cases involves some essential difficulties, and unfortunately they do not yet seem to be widely used in industry. To address this problem, we have been developing assurance case creation methods and holding workshops based on them. This paper presents an assurance case creation method called “D-Case Steps,” based on the d* framework [1], an agent-based assurance case method, and reports the results of the workshops. The results indicate that our workshops have improved over time and that our activities facilitate the use of assurance cases in Japan. This paper is an extended version of [2]; we add detailed background and related work, workshop results and evaluation, and lessons learned from a decade of experience.

  • Template-Based Monte-Carlo Test-Suite Generation for Large and Complex Simulink Models Open Access

    Takashi TOMITA, Daisuke ISHII, Toru MURAKAMI, Shigeki TAKEUCHI, Toshiaki AOKI

    PAPER

    Vol: E103-A No:2
    Page(s): 451-461

    MATLAB/Simulink is the de facto standard tool for the model-based development (MBD) of control software for automotive systems. A Simulink model developed in MBD for real automotive systems involves complex computation as well as tens of thousands of blocks. In this paper, we focus on the decision coverage (DC), condition coverage (CC), and modified condition/decision coverage (MC/DC) criteria, and propose a Monte-Carlo test suite generation method for large and complex Simulink models. In the method, a candidate test case is generated by assigning random values to the parameters of signal templates with specific waveforms; we thus search for candidates that contribute to coverage within a plausible and understandable search space specified by a set of templates. We implemented the method as a tool, and our experimental evaluation showed that the tool was able to generate test suites for industrial implementation models with higher coverage and shorter execution times than Simulink Design Verifier. Additionally, the tool includes a fast coverage measurement engine, which demonstrated better performance than Simulink Coverage in our experiments.
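
    A hedged sketch of the template idea: each template is a parameterized waveform, candidates are drawn by sampling parameters at random, and a candidate is kept only if it raises a coverage measure (stubbed out here). The template shapes, parameter ranges, and coverage stub are illustrative assumptions, not the tool's actual design.

    ```python
    import math
    import random

    random.seed(42)

    # Parameterized signal templates (shapes and ranges are assumptions).
    TEMPLATES = {
        "step": (lambda t, amp, t0: amp if t >= t0 else 0.0,
                 lambda: (random.uniform(-5, 5), random.uniform(0.0, 1.0))),
        "sine": (lambda t, amp, freq: amp * math.sin(2 * math.pi * freq * t),
                 lambda: (random.uniform(-5, 5), random.uniform(0.1, 10.0))),
    }

    def sample_candidate():
        """Draw one candidate: a template kind with random parameters."""
        kind = random.choice(list(TEMPLATES))
        _, sampler = TEMPLATES[kind]
        return kind, sampler()

    def signal(candidate, horizon=1.0, dt=0.1):
        """Evaluate the candidate's waveform on a time grid (the test input)."""
        kind, params = candidate
        wave, _ = TEMPLATES[kind]
        return [wave(i * dt, *params) for i in range(int(horizon / dt))]

    def coverage_gain(candidate, covered):
        """Stub standing in for a real coverage measurement engine."""
        kind, params = candidate
        return {(kind, tuple(round(p) for p in params))} - covered

    covered, suite = set(), []
    for _ in range(100):                 # Monte-Carlo sampling loop
        cand = sample_candidate()
        gain = coverage_gain(cand, covered)
        if gain:                         # keep only candidates that add coverage
            suite.append(signal(cand))
            covered |= gain
    print(len(suite), "contributing candidates kept of 100 sampled")
    ```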

  • Error Correction for Search Engine by Mining Bad Case

    Jianyong DUAN, Tianxiao JI, Hao WANG

    PAPER-Natural Language Processing

    Publicized: 2018/03/26
    Vol: E101-D No:7
    Page(s): 1938-1945

    Automatic correction of users' search terms is an important aspect of improving search engine retrieval efficiency, accuracy, and user experience. In the era of big data, massive search engine logs can be analyzed and mined to reveal hidden user intent, and statistical modeling of query errors in log data yields good correction results. However, when an erroneous query cannot be found in the log, the log information cannot be exploited to correct it. We call these undiscovered erroneous queries Bad Cases. This paper combines an error correction algorithm model with the mining and analysis of search engine query logs. First, we explored Bad Cases in the query error correction process through the search engine query logs. Then we quantified the characteristics of these Bad Cases and built a model that allows search engines to automatically mine Bad Cases with these features. Finally, we applied the Bad Cases to an N-gram error correction algorithm model to assess the impact of Bad Case mining on error correction. The experimental results show that error correction based on Bad Case mining noticeably improves the precision and recall of automatic error correction, improving the user experience and making the interaction friendlier.
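
    As a hedged sketch of the N-gram side only (the Bad Case mining model is not reproduced), the snippet below scores candidate corrections of a query by add-one-smoothed bigram log-probability estimated from a tiny corpus. The corpus, the candidate list, and the smoothing choice are illustrative assumptions.

    ```python
    import math
    from collections import Counter

    # Tiny background corpus of queries (real systems would use query logs).
    corpus = ["cheap flight tickets", "cheap flights to paris", "flight status"]
    tokens = [q.split() for q in corpus]

    unigrams = Counter(w for q in tokens for w in q)
    bigrams = Counter((a, b) for q in tokens for a, b in zip(q, q[1:]))
    V = len(unigrams)  # vocabulary size for add-one smoothing

    def bigram_logprob(query: str) -> float:
        """Add-one-smoothed bigram log-probability of a query."""
        words = query.split()
        score = 0.0
        for a, b in zip(words, words[1:]):
            score += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
        return score

    # Candidate corrections for a misspelled query (assumed to come from
    # an edit-distance candidate generator, not shown).
    candidates = ["cheap flight tickets", "chap flight tickets"]
    print(max(candidates, key=bigram_logprob))  # -> "cheap flight tickets"
    ```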

  • The Pre-Testing for Virtual Robot Development Environment

    Hyun Seung SON, R. Young Chul KIM

    PAPER-Software Engineering

    Publicized: 2018/03/01
    Vol: E101-D No:6
    Page(s): 1541-1551

    Traditional tests are planned and designed in the early stages of development, but test cases can only be executed after the source code has been implemented. Because of this time gap between the design stage and the testing stage, a software design error may be found too late. To solve this problem, this paper suggests a virtual pre-testing process that can find software and testing errors before the development stage by automatically generating and executing test cases with modeling and simulation (M&S) in a virtual environment. The first part of this method creates test cases from a state transition tree derived from a state diagram, covering the state, transition, instruction-pair, and all-path coverage criteria. The second part models and simulates a virtual target and pre-tests it with the generated test cases: the test cases are automatically transformed into an event list and executed against the simulated target within the virtual environment. As a result, design and test errors can be found in the early stages of the development cycle, which in turn reduces development time and cost as much as possible.
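
    A hedged sketch of the first step: unfold a state diagram into a transition tree and read one test case (an event sequence) off each root-to-leaf path, ending a branch when a state would repeat. The state machine below is a made-up example, not one from the paper.

    ```python
    # Hypothetical state diagram: state -> [(event, next_state), ...]
    machine = {
        "Idle":     [("start", "Moving")],
        "Moving":   [("obstacle", "Avoiding"), ("stop", "Idle")],
        "Avoiding": [("clear", "Moving")],
    }

    def transition_tree_paths(state, visited=()):
        """Depth-first unfolding; a branch ends when a state would repeat."""
        paths = []
        for event, nxt in machine.get(state, []):
            if nxt in visited or nxt == state:
                paths.append([event])        # leaf: revisiting a state
            else:
                subpaths = transition_tree_paths(nxt, visited + (state,))
                if subpaths:
                    paths.extend([event] + p for p in subpaths)
                else:
                    paths.append([event])
        return paths

    # Each event sequence is one test case for the simulated target.
    for case in transition_tree_paths("Idle"):
        print(case)   # ['start', 'obstacle', 'clear'] and ['start', 'stop']
    ```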

  • A Variable-to-Fixed Length Lossless Source Code Attaining Better Performance than Tunstall Code in Several Criterions

    Mitsuharu ARIMURA

    PAPER-Information Theory

    Vol: E101-A No:1
    Page(s): 249-258

    The Tunstall code is known to be an optimal variable-to-fixed length (VF) lossless source code under the criterion of average coding rate, defined as the codeword length divided by the average phrase length. In this paper we instead define the average coding rate of a VF code as the expectation of the pointwise coding rate, i.e., the codeword length divided by the phrase length; we call this the average pointwise coding rate. We then propose a new VF code, together with an incremental parsing tree construction algorithm like the one that builds the Tunstall parsing tree. We prove that this code is optimal under the average pointwise coding rate criterion, and that its average pointwise coding rate converges asymptotically to the entropy of the stationary memoryless source emitting the encoded data. Moreover, we prove that the proposed code attains a better worst-case coding rate than the Tunstall code.
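
    To make the two criteria concrete (a hedged restatement; the notation is ours and may differ from the paper's): let a VF code map a random phrase X to one of M fixed-length codewords.

    ```latex
    % Tunstall's criterion: ratio of means.
    \[
      R_{\mathrm{Tunstall}} \;=\; \frac{\lceil \log_2 M \rceil}{\mathbb{E}\,[\,|X|\,]},
    \]
    % The paper's criterion: mean of ratios (average pointwise coding rate).
    \[
      R_{\mathrm{pointwise}} \;=\; \mathbb{E}\!\left[\frac{\lceil \log_2 M \rceil}{|X|}\right].
    \]
    % By Jensen's inequality for the convex map t -> 1/t (t > 0),
    % R_pointwise >= R_Tunstall, so the two criteria can rank codes differently.
    ```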

  • A New Sentiment Case-Based Recommender

    Mashael ALDAYEL, Mourad YKHLEF

    PAPER-Natural Language Processing

    Publicized: 2017/04/05
    Vol: E100-D No:7
    Page(s): 1484-1493

    Recommender systems have attracted attention in both academia and business. They aim to give users more intelligent methods for navigating and identifying complex information spaces, especially in the e-commerce domain. However, these systems still have limitations that reduce their performance, such as overspecialization of recommendations, cold start, and difficulties when items with unequal probability distributions exist. Case-based recommendation, a form of content-based recommendation well suited to many product recommendation domains owing to its clear organization of users' needs and preferences, addresses some of these issues. Unfortunately, the experience-based roots of case-based reasoning are not clearly reflected in case-based recommenders: product cases, which are usually fixed feature-based tuples, are not treated as truly experiential. To solve this problem, as well as the rating sparsity issue, product reviews generated from users' experience with a product can serve as a basis of product information. Unlike traditional case-based recommenders, which tend to depend entirely on pure similarity-based approaches, our approach uses sentiment scores along with feature similarity throughout the recommendation, modeling product cases with the products' features and sentiment scores at both the feature level and the product level. Combining user experience and similarity measures improves recommender performance and gives users more flexibility to choose whether they prefer products more similar to their query or better-qualified products. We present results using different evaluation methods for different case structures, different numbers of retrieved similar cases, and multilevel sentiment approaches. Recommender performance was most improved by the feature-level sentiment approach, which recommends product cases that are similar to the query but favored by customers.
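
    A hedged sketch of the scoring idea: rank product cases by a convex combination of feature similarity to the query and a review-derived sentiment score. The cases, the similarity measure, and the weight lambda are illustrative assumptions, not the paper's exact model.

    ```python
    # Product cases: feature vector plus a sentiment score in [0, 1]
    # (mined from reviews in the paper; hard-coded here for illustration).
    cases = {
        "camera_A": {"features": [0.9, 0.2, 0.5], "sentiment": 0.92},
        "camera_B": {"features": [0.8, 0.3, 0.6], "sentiment": 0.55},
        "camera_C": {"features": [0.1, 0.9, 0.4], "sentiment": 0.80},
    }

    def similarity(f1: list[float], f2: list[float]) -> float:
        """Simple similarity: 1 - mean absolute feature difference."""
        return 1.0 - sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

    def score(case: dict, query: list[float], lam: float = 0.7) -> float:
        """lam trades off query similarity against review sentiment."""
        return lam * similarity(case["features"], query) + (1 - lam) * case["sentiment"]

    query = [0.85, 0.25, 0.55]
    ranked = sorted(cases, key=lambda c: score(cases[c], query), reverse=True)
    print(ranked)  # camera_A beats the equally similar camera_B on sentiment
    ```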

  • Coverage-Based Clustering and Scheduling Approach for Test Case Prioritization

    Wenhao FU, Huiqun YU, Guisheng FAN, Xiang JI

    PAPER-Software Engineering

    Publicized: 2017/03/03
    Vol: E100-D No:6
    Page(s): 1218-1230

    Regression testing is essential for assuring the quality of a software product. Because rerunning all test cases in regression testing may be impractical under limited resources, test case prioritization is a feasible way to optimize regression testing by reordering test cases for the current version. In this paper, we propose a novel test case prioritization approach that combines a clustering algorithm and a scheduling algorithm to improve the effectiveness of regression testing. The clustering algorithm merges test cases with the same or similar properties into clusters, and the scheduling algorithm allocates an execution priority to each test case by combining fault detection rates with the waiting time of test cases in the candidate set. We conducted experiments on 12 C programs to validate the effectiveness of the proposed approach. Experimental results show that our approach is more effective than some well-studied test case prioritization techniques in terms of the average percentage of faults detected (APFD).
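
    For reference, the APFD metric cited above has a standard closed form in the test case prioritization literature (this is the usual definition, not something specific to this paper): for a suite of n tests and m faults, where TF_i is the position of the first test that reveals fault i,

    ```latex
    \[
      \mathrm{APFD} \;=\; 1 \;-\; \frac{TF_1 + TF_2 + \cdots + TF_m}{n\,m} \;+\; \frac{1}{2n}.
    \]
    % Higher is better: faults revealed earlier in the order raise the score.
    ```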

  • A Method for Correcting Preposition Errors in Learner English with Feedback Messages

    Ryo NAGATA, Edward WHITTAKER

    PAPER-Educational Technology

    Publicized: 2017/03/08
    Vol: E100-D No:6
    Page(s): 1280-1289

    This paper presents a novel framework, called error case frames, for correcting preposition errors. Error case frames are case frames specially designed for describing and correcting preposition errors; their most distinctive advantage is that they can correct errors while providing feedback messages explaining why a preposition is erroneous. This paper proposes a method for automatically generating them by comparing learner and native corpora. Experiments show that (i) automatically generated error case frames achieve performance comparable to previous methods; (ii) error case frames are intuitively interpretable and can be manually modified for improvement; and (iii) the feedback messages provided by error case frames are effective in language learning assistance. Considering these advantages, and the fact that it has been difficult to provide feedback messages using automatically generated rules, error case frames are likely to become one of the major approaches to preposition error correction.
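
    As a hedged sketch of the flavor of such a resource (the actual frames are induced automatically from corpora; the entries, fields, and feedback messages below are invented for illustration):

    ```python
    # Toy "error case frames": keyed by (verb, wrong preposition), giving
    # the correction and a feedback message (all entries invented).
    ERROR_CASE_FRAMES = {
        ("discuss", "about"): {
            "correct": "",   # transitive verb: no preposition needed
            "feedback": "'discuss' takes a direct object; drop 'about'.",
        },
        ("arrive", "to"): {
            "correct": "at",
            "feedback": "'arrive' pairs with 'at' for destinations, not 'to'.",
        },
    }

    def correct(verb: str, prep: str) -> tuple[str, str] | None:
        frame = ERROR_CASE_FRAMES.get((verb, prep))
        if frame is None:
            return None      # no matching frame: leave the text unchanged
        return frame["correct"], frame["feedback"]

    print(correct("arrive", "to"))      # ('at', "'arrive' pairs with ...")
    print(correct("discuss", "about"))  # ('', "'discuss' takes a direct ...")
    ```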

  • Industry Application of Software Development Task Measurement System: TaskPit

    Pawin SUTHIPORNOPAS, Pattara LEELAPRUTE, Akito MONDEN, Hidetake UWANO, Yasutaka KAMEI, Naoyasu UBAYASHI, Kenji ARAKI, Kingo YAMADA, Ken-ichi MATSUMOTO

    PAPER-Software Engineering

    Publicized: 2016/12/20
    Vol: E100-D No:3
    Page(s): 462-472

    To identify problems in a software development process, we have been developing an automated measurement tool called TaskPit, which monitors software development tasks such as programming, testing, and documentation based on the execution histories of software applications. This paper introduces the system requirements, design, and implementation of TaskPit, and then presents two real-world case studies applying it to actual software development. In the first case study, we applied TaskPit to 12 software developers in a software development division. Several concerns (to be improved) were revealed: (a) a project leader spent too much time on development tasks although he was supposed to be a manager rather than a developer; (b) several developers rarely used e-mail despite the company's instruction to use it as much as possible to leave communication records during development; and (c) several developers wrote overly long e-mails to their customers. In the second case study, we recorded the planned, actual, and self-reported time of development tasks and found that (d) unplanned tasks occurred on more than half of the days, and (e) the self-declared time became closer, day by day, to the actual time measured by TaskPit. These findings suggest that TaskPit is useful not only for project managers responsible for process monitoring and improvement but also for developers who want to improve by themselves.

  • A New Non-Uniform Weight-Updating Beamformer for LEO Satellite Communication

    Jie LIU, Zhuochen XIE, Huijie LIU, Zhengmin ZHANG

    LETTER-Digital Signal Processing

    Vol: E99-A No:9
    Page(s): 1708-1711

    In this letter, a new non-uniform weight-updating scheme for adaptive digital beamforming (DBF) is proposed. Its unique feature is that the effective working range of the beamformer is extended and the computational complexity is reduced by introducing robust DBF based on worst-case performance optimization. The robustness parameter for each weight update is chosen by analyzing the changing rate of the Direction of Arrival (DOA) of the desired signal in LEO satellite communication. Simulation results demonstrate the improved performance of the new Non-Uniform Weight-Updating Beamformer (NUWUB).
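
    For context, worst-case performance optimization in robust beamforming is commonly formulated as below; this is the standard formulation from the robust-DBF literature, given as background rather than as the letter's exact scheme, and the notation is ours: with weight vector w, sample covariance R, presumed steering vector \bar{a}, and steering uncertainty bounded by epsilon,

    ```latex
    \[
      \min_{w}\; w^{H} R\, w
      \quad \text{s.t.} \quad
      |w^{H} a| \;\ge\; 1
      \;\; \text{for all } a \text{ with } \|a - \bar{a}\| \le \varepsilon,
    \]
    % which is equivalent to the convex second-order cone constraint
    \[
      w^{H}\bar{a} \;\ge\; \varepsilon \|w\| + 1, \qquad \mathrm{Im}\!\left(w^{H}\bar{a}\right) = 0.
    \]
    ```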
