
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor: 0.72
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4


Volume E101-D No.9  (Publication Date:2018/09/01)

    Special Section on Picture Coding and Image Media Processing
  • FOREWORD

    Toshiaki FUJII  

     
    FOREWORD

      Page(s):
    2178-2178
  • Robust Index Code to Distribute Digital Images and Digital Contents Together

    Minsu KIM  Kunwoo LEE  Katsuhiko GONDOW  Jun-ichi IMURA  

     
    PAPER

      Publicized:
    2018/06/20
      Page(s):
    2179-2189

    Codemark was designed to distribute digital contents using offline media; because of this design, it cannot be used on digital images and is highly robust only for printed images. This paper presents a new color code called Robust Index Code (RIC for short), which is highly robust to JPEG compression and resizing of digital images. RIC embeds a remote database index into digital images so that users can reach any digital content. Experimental results using our implemented RIC encoder and decoder show that the proposed codemark is highly robust to JPEG compression and resizing: the embedded database indexes can be extracted with 100% accuracy even from images compressed to 30%. In conclusion, all types of digital products can be accessed by embedding database indexes into digital images, which realizes a Superdistribution system based on digital images. RIC therefore has the potential to enable new Internet image services, since every image encoded by RIC provides access to the original product anywhere.

  • Designing Coded Aperture Camera Based on PCA and NMF for Light Field Acquisition

    Yusuke YAGI  Keita TAKAHASHI  Toshiaki FUJII  Toshiki SONODA  Hajime NAGAHARA  

     
    PAPER

      Publicized:
    2018/06/20
      Page(s):
    2190-2200

    A light field, which is often understood as a set of dense multi-view images, has been utilized in various 2D/3D applications. Efficient light field acquisition using a coded aperture camera is the target problem considered in this paper. Specifically, the entire light field, which consists of many images, should be reconstructed from only a few images that are captured through different aperture patterns. In previous work, this problem has often been discussed in the context of compressed sensing (CS), where sparse representations on a pre-trained dictionary or basis are explored to reconstruct the light field. In contrast, we formulate this problem from the perspective of principal component analysis (PCA) and non-negative matrix factorization (NMF), where only a small number of basis vectors are selected in advance based on an analysis of the training dataset. From this formulation, we derive optimal non-negative aperture patterns and a straightforward reconstruction algorithm. Even though our method is based on conventional techniques, it has proven to be more accurate and much faster than a state-of-the-art CS-based method.
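
    As a rough illustration of the reconstruction idea described above, the sketch below recovers a light field from a few coded-aperture captures by solving a small least-squares problem over a pre-trained PCA basis. This is a minimal sketch under assumed shapes and variable names (reconstruct_light_field, U, m, patterns), not the authors' implementation, and it omits the non-negative aperture optimization.

```python
# Illustrative sketch (not the authors' implementation): reconstruct a light field
# from a few coded-aperture captures using a small, precomputed PCA basis.
import numpy as np

def reconstruct_light_field(captures, patterns, U, m):
    """captures: (K, P) captured images (flattened, P pixels each)
    patterns: (K, V) non-negative aperture weights per capture
    U: (V*P, R) PCA basis over whole light fields, m: (V*P,) mean light field."""
    K, P = captures.shape
    V = patterns.shape[1]
    R = U.shape[1]
    U_lf = U.reshape(V, P, R)
    m_lf = m.reshape(V, P)
    # The measurement operator mixes the V views per pixel with the aperture weights.
    A_U = np.einsum('kv,vpr->kpr', patterns, U_lf).reshape(K * P, R)
    A_m = np.einsum('kv,vp->kp', patterns, m_lf).reshape(K * P)
    # Least-squares for the R PCA coefficients, then expand back to the light field.
    coeffs, *_ = np.linalg.lstsq(A_U, captures.reshape(K * P) - A_m, rcond=None)
    return (m + U @ coeffs).reshape(V, P)

# Tiny synthetic check: 2 captures of a 3-view, 4-pixel light field, rank-2 basis.
rng = np.random.default_rng(0)
U = rng.normal(size=(12, 2)); m = rng.normal(size=12)
lf = m + U @ np.array([0.5, -1.0])
patterns = rng.uniform(size=(2, 3))
captures = np.einsum('kv,vp->kp', patterns, lf.reshape(3, 4))
print(np.allclose(reconstruct_light_field(captures, patterns, U, m).reshape(-1), lf))
```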

  • Video Saliency Detection Using Spatiotemporal Cues

    Yu CHEN  Jing XIAO  Liuyi HU  Dan CHEN  Zhongyuan WANG  Dengshi LI  

     
    PAPER

      Publicized:
    2018/06/20
      Page(s):
    2201-2208

    Saliency detection for videos has received great attention and has been extensively studied in recent years. However, varied visual scenes with complicated motion lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatiotemporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.

  • Sparse Graph Based Deep Learning Networks for Face Recognition

    Renjie WU  Sei-ichiro KAMATA  

     
    PAPER

      Publicized:
    2018/06/20
      Page(s):
    2209-2219

    In recent years, deep learning based approaches have substantially improved the performance of face recognition. Most existing deep learning techniques work well, but neglect effective utilization of face correlation information. The resulting performance loss is noteworthy for personal appearance variations caused by factors such as illumination, pose, occlusion, and misalignment. We believe that face correlation information should be introduced to solve this network performance problem originating from intra-personal variations. Recently, graph deep learning approaches have emerged for representing structured graph data. A graph is a powerful tool for representing the complex information of a face image. In this paper, we survey recent research related to the graph structure of Convolutional Neural Networks and try to devise a definition of graph structure in the context of Compressed Sensing and Deep Learning. This paper is devoted to explaining two properties of our graph: sparsity and depth. Sparsity is advantageous because sparse features are more likely to be linearly separable and more robust; depth means that the learning process is multi-resolution and multi-channel. We believe that a sparse-graph-based deep neural network can more effectively make similar objects attract each other and different objects repel each other, analogous to a better sparse multi-resolution clustering. Based on this concept, we propose a sparse graph representation based on face correlation information that is embedded via sparse reconstruction and deep learning within an irregular domain. The resulting classification is remarkably robust. The proposed method achieves high recognition rates of 99.61% (94.67%) on the benchmark LFW (YTF) facial evaluation database.

  • Fast CU Termination Algorithm with AdaBoost Classifier in HEVC Encoder

    Yitong LIU  Wang TIAN  Yuchen LI  Hongwen YANG  

     
    LETTER

      Publicized:
    2018/06/20
      Page(s):
    2220-2223

    High Efficiency Video Coding (HEVC) achieves better coding efficiency than H.264/AVC. However, this performance enhancement comes with increased computational complexity, which is mainly caused by the quadtree-based coding tree unit (CTU) structure. In this paper, an early termination algorithm based on an AdaBoost classifier for coding units (CUs) is proposed to accelerate the search for the best CTU partition. Experimental results indicate that our method can save 39% of the computational complexity on average at the cost of increasing the Bjontegaard-Delta rate (BD-rate) by 0.18%.
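
    The early-termination idea above can be illustrated with a small classifier sketch: an AdaBoost model predicts, from per-CU features, whether deeper quadtree splitting can be skipped. The feature set, training data, and confidence threshold below are assumptions for illustration, not the letter's actual setup.

```python
# Illustrative sketch of AdaBoost-based early CU termination (not the paper's code).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Hypothetical training data: one row per CU with features such as
# [RD cost at current depth, residual variance, QP, neighbor split flag].
X_train = np.array([[1200.0, 35.2, 32, 1],
                    [ 310.0,  4.1, 32, 0],
                    [ 980.0, 28.7, 27, 1],
                    [ 150.0,  2.3, 27, 0]])
y_train = np.array([1, 0, 1, 0])  # 1 = split further, 0 = terminate early

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def should_terminate(cu_features, threshold=0.8):
    """Skip deeper CU partitioning when the classifier is confident enough."""
    p_no_split = clf.predict_proba([cu_features])[0][0]   # probability of class 0
    return p_no_split >= threshold

print(should_terminate([200.0, 3.0, 32, 0]))
```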

  • Regular Section
  • Trading-Off Computing and Cooling Energies by VM Migration in Data Centers

    Ying SONG  Xia ZHAO  Bo WANG  Yuzhong SUN  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2018/06/01
      Page(s):
    2224-2234

    High energy cost is a big challenge faced by current data centers, in which computing energy and cooling energy are the main contributors. Consolidating workload onto fewer servers decreases the computing energy; however, it may create thermal hotspots that consume greater cooling energy. A tradeoff between decreasing computing energy and decreasing cooling energy is therefore necessary for energy saving. In this paper, we propose a minimized-total-energy virtual machine (VM for short) migration model called C2vmMap based on an efficient tradeoff between computing and cooling energies, using two relationships: one between resource utilization and computing power, and the other among resource utilization, the inlet and outlet temperatures of servers, and cooling power. To solve the above model online with better scalability, we propose a VM migration algorithm called C2vmMap_heur that decreases the total energy of a data center at run-time. We evaluate C2vmMap_heur under various workload scenarios. Experiments on real servers show that C2vmMap_heur reduces energy by up to 40.43% compared with the non-migration load-balance algorithm, and saves up to 3x energy compared with an existing VM migration algorithm.

  • A Linear-Time Algorithm for Finding a Spanning Tree with Non-Terminal Set VNT on Interval Graphs

    Shin-ichi NAKAYAMA  Shigeru MASUYAMA  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2018/06/18
      Page(s):
    2235-2246

    Given a graph G=(V,E), where V and E are the vertex and edge sets, respectively, together with a subset VNT of vertices called the non-terminal set, a spanning tree with non-terminal set VNT is a connected and acyclic spanning subgraph of G that contains all the vertices of V and in which no vertex of the non-terminal set is a leaf. Finding a spanning tree with non-terminal set VNT on general graphs in which each edge has weight one is known to be NP-hard. In this paper, we show that if G is an interval graph, then a spanning tree with non-terminal set VNT of G, where each edge has weight one, can be found in linear time.
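
    The object defined above can be checked directly from its definition. The sketch below verifies that a candidate edge set is a spanning tree in which no vertex of the non-terminal set VNT is a leaf; it is an illustrative check only, not the linear-time algorithm of the paper.

```python
# Illustrative check of the defined object (not the paper's algorithm): verify that
# edge set T is a spanning tree of G in which no vertex of VNT is a leaf.
from collections import defaultdict

def is_spanning_tree_with_nonterminals(n, tree_edges, VNT):
    """n: number of vertices (0..n-1); tree_edges: list of (u, v); VNT: set of vertices."""
    if len(tree_edges) != n - 1:
        return False                      # a spanning tree has exactly n-1 edges
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    # Connectivity check (n-1 edges + connected => acyclic spanning tree).
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    if len(seen) != n:
        return False
    # Every non-terminal vertex must be internal, i.e. have degree >= 2.
    return all(len(adj[v]) >= 2 for v in VNT)

print(is_spanning_tree_with_nonterminals(4, [(0, 1), (1, 2), (1, 3)], {1}))  # True
print(is_spanning_tree_with_nonterminals(4, [(0, 1), (1, 2), (2, 3)], {3}))  # False: 3 is a leaf
```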

  • Evaluating Energy-Efficiency of DRAM Channel Interleaving Schemes for Multithreaded Programs

    Satoshi IMAMURA  Yuichiro YASUI  Koji INOUE  Takatsugu ONO  Hiroshi SASAKI  Katsuki FUJISAWA  

     
    PAPER-Computer System

      Publicized:
    2018/06/08
      Page(s):
    2247-2257

    The power consumption of server platforms has been increasing as the amount of hardware resources equipped on them grows. In particular, the capacity of DRAM continues to grow, and it is not rare for DRAM to consume more power than the processors on modern servers. Therefore, reducing DRAM energy consumption is a critical challenge for reducing system-level energy consumption. Although it is well known that improving row buffer locality (RBL) and bank-level parallelism (BLP) is effective in reducing DRAM energy consumption, our preliminary evaluation on a real server demonstrates that RBL is generally low across 15 multithreaded benchmarks. In this paper, we investigate the memory access patterns of these benchmarks using a simulator and observe that cache-line-grained channel interleaving schemes, which are widely applied to modern servers with multiple memory channels, hurt the RBL that each of the benchmarks potentially possesses. To address this problem, we focus on a row-grained channel interleaving scheme and compare it with three cache-line-grained schemes. Our evaluation shows that it reduces DRAM energy consumption by 16.7%, 12.3%, and 5.5% on average (up to 34.7%, 28.2%, and 12.0%) compared to the other schemes, respectively.
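
    The two interleaving granularities compared above can be illustrated with a toy address mapping (an assumption; real servers use more elaborate mappings): cache-line-grained interleaving switches channels every cache line, whereas row-grained interleaving keeps an entire DRAM row on one channel, preserving row buffer locality for streaming accesses.

```python
# Toy illustration of channel interleaving granularity (assumed mapping, not the
# evaluated servers' real address mapping).
CACHE_LINE = 64          # bytes
ROW_SIZE   = 8 * 1024    # bytes per DRAM row (assumed)
CHANNELS   = 4

def channel_cacheline_interleave(addr):
    return (addr // CACHE_LINE) % CHANNELS

def channel_row_interleave(addr):
    return (addr // ROW_SIZE) % CHANNELS

# A 4 KiB streaming access stays on one channel under row interleaving
# (preserving row buffer locality) but cycles through all channels under
# cache-line interleaving.
addrs = range(0x10000, 0x10000 + 4096, CACHE_LINE)
print({channel_cacheline_interleave(a) for a in addrs})  # {0, 1, 2, 3}
print({channel_row_interleave(a) for a in addrs})        # {0}
```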

  • Proxy Responses by FPGA-Based Switch for MapReduce Stragglers

    Koya MITSUZUKA  Michihiro KOIBUCHI  Hideharu AMANO  Hiroki MATSUTANI  

     
    PAPER-Computer System

      Publicized:
    2018/06/15
      Page(s):
    2258-2268

    In parallel processing applications, a few worker nodes called “stragglers,” which execute their tasks significantly more slowly than the others, increase the execution time of the whole job. In this paper, we propose a network-switch-based straggler handling system to mitigate the burden on the compute nodes. We also propose how to offload straggler detection and the computation of their results to the network switch without additional communication between worker nodes. We introduce some approximation techniques for the proxy computation and response at the switch; thus our switch is called “ApproxSW.” Simulation experiments show that the proposed approximation based on task similarity achieves the best accuracy in terms of the quality of the generated Map outputs. We also analyze how to suppress unnecessary proxy computation by ApproxSW. We implement ApproxSW on a NetFPGA-SUME board that has four 10Gbit Ethernet (10GbE) interfaces and a Virtex-7 FPGA. Experimental results show that the ApproxSW functions do not degrade the original 10GbE switch performance.

  • Cross-Validation-Based Association Rule Prioritization Metric for Software Defect Characterization

    Takashi WATANABE  Akito MONDEN  Zeynep YÜCEL  Yasutaka KAMEI  Shuji MORISAKI  

     
    PAPER-Software Engineering

      Publicized:
    2018/06/13
      Page(s):
    2269-2278

    Association rule mining discovers relationships among variables in a data set and represents them as rules. These rules are expected to have predictive abilities, that is, to be able to predict future events, but commonly used rule interestingness measures, such as support and confidence, do not directly assess their predictive power. This paper proposes a cross-validation-based metric that quantifies the predictive power of such rules for characterizing software defects. The results of evaluating this metric experimentally on four open-source data sets (Mylyn, NetBeans, Apache Ant and jEdit) show that it can improve rule prioritization performance over conventional metrics (support, confidence and odds ratio) by 72.8% for Mylyn, 15.0% for NetBeans, 10.5% for Apache Ant and 0% for jEdit in terms of the SumNormPre(100) precision criterion. This suggests that the proposed metric can provide better rule prioritization performance than conventional metrics and, even in the worst case, at least similar performance.
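
    The idea of judging a rule by held-out predictive power rather than by plain support or confidence can be sketched as below. This is a simplified stand-in for illustration; the paper's SumNormPre(100) criterion and exact cross-validation procedure are not reproduced, and the data, fold scheme, and function names are assumptions.

```python
# Illustrative sketch: score an association rule by its confidence measured only on
# held-out folds (a simplified stand-in for a cross-validation-based rule metric).
import random

def confidence(rows, antecedent, consequent):
    fired = [r for r in rows if all(r.get(k) == v for k, v in antecedent.items())]
    if not fired:
        return 0.0
    return sum(all(r.get(k) == v for k, v in consequent.items()) for r in fired) / len(fired)

def cv_rule_score(rows, antecedent, consequent, folds=5, seed=0):
    """Average confidence of the rule over held-out folds."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    scores = []
    for i in range(folds):
        held_out = rows[i::folds]          # every folds-th row forms one fold
        if held_out:
            scores.append(confidence(held_out, antecedent, consequent))
    return sum(scores) / len(scores) if scores else 0.0

# Toy defect data: modules with metrics and a defect label.
data = [{"loc": "high", "churn": "high", "defect": True},
        {"loc": "high", "churn": "low",  "defect": False},
        {"loc": "high", "churn": "high", "defect": True},
        {"loc": "low",  "churn": "low",  "defect": False}] * 5
print(cv_rule_score(data, {"loc": "high", "churn": "high"}, {"defect": True}))
```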

  • Entity Ranking for Queries with Modifiers Based on Knowledge Bases and Web Search Results

    Wiradee IMRATTANATRAI  Makoto P. KATO  Katsumi TANAKA  Masatoshi YOSHIKAWA  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2018/06/18
      Page(s):
    2279-2290

    This paper proposes methods of finding a ranked list of entities for a given query (e.g. “Kennin-ji”, “Tenryu-ji”, or “Kinkaku-ji” for the query “ancient zen buddhist temples in kyoto”) by leveraging different types of modifiers in the query through identifying corresponding properties (e.g. established date and location for the modifiers “ancient” and “kyoto”, respectively). While most major search engines provide the entity search functionality that returns a list of entities based on users' queries, entities are neither presented for a wide variety of search queries, nor in the order that users expect. To enhance the effectiveness of entity search, we propose two entity ranking methods. Our first proposed method is a Web-based entity ranking that directly finds relevant entities from Web search results returned in response to the query as a whole, and propagates the estimated relevance to the other entities. The second proposed method is a property-based entity ranking that ranks entities based on properties corresponding to modifiers in the query. To this end, we propose a novel property identification method that identifies a set of relevant properties based on a Support Vector Machine (SVM) using our seven criteria that are effective for different types of modifiers. The experimental results showed that our proposed property identification method could predict more relevant properties than using each of the criteria separately. Moreover, we achieved the best performance for returning a ranked list of relevant entities when using the combination of the Web-based and property-based entity ranking methods.

  • Formal Method for Security Analysis of Electronic Payment Protocols

    Yi LIU  Qingkun MENG  Xingtong LIU  Jian WANG  Lei ZHANG  Chaojing TANG  

     
    PAPER-Information Network

      Publicized:
    2018/06/19
      Page(s):
    2291-2297

    Electronic payment protocols provide secure service for electronic commerce transactions and protect private information from malicious entities in a network. Formal methods have been introduced to verify the security of electronic payment protocols; however, these methods concentrate on the accountability and fairness of the protocols, without considering the impact caused by timeliness. To make up for this deficiency, we present a formal method to analyze the security properties of electronic payment protocols, namely, accountability, fairness and timeliness. We add a concise time expression to an existing logical reasoning method to represent the event time and extend the time characteristics of the logical inference rules. Then, the Netbill protocol is analyzed with our formal method, and we find that the fairness of the protocol is not satisfied due to the timeliness problem. The results illustrate that our formal method can analyze the key properties of electronic payment protocols. Furthermore, it can be used to verify the time properties of other security protocols.

  • Review Rating Prediction on Location-Based Social Networks Using Text, Social Links, and Geolocations

    Yuehua WANG  Zhinong ZHONG  Anran YANG  Ning JING  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/06/01
      Page(s):
    2298-2306

    Review rating prediction is an important problem in machine learning and data mining areas and has attracted much attention in recent years. Most existing methods for review rating prediction on Location-Based Social Networks only capture the semantics of texts, but ignore user information (social links, geolocations, etc.), which makes them less personalized and brings down the prediction accuracy. For example, a user's visit to a venue may be influenced by their friends' suggestions or the travel distance to the venue. To address this problem, we develop a review rating prediction framework named TSG by utilizing users' review Text, Social links and the Geolocation information with machine learning techniques. Experimental results demonstrate the effectiveness of the framework.

  • A Machine Learning-Based Approach for Selecting SpMV Kernels and Matrix Storage Formats

    Hang CUI  Shoichi HIRASAWA  Hiroaki KOBAYASHI  Hiroyuki TAKIZAWA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/06/13
      Page(s):
    2307-2314

    Sparse Matrix-Vector multiplication (SpMV) is a computational kernel widely used in many applications. Because of its importance, many different implementations have been proposed to accelerate this kernel. The performance characteristics of those SpMV implementations differ considerably, and it is basically difficult to select the implementation that performs best for a given sparse matrix without performance profiling. One existing approach to this best-code selection problem uses manually predefined features and a machine learning model. However, it is generally hard to manually define features that perfectly express the characteristics of the original sparse matrix necessary for code selection, and some information loss is inevitable with this approach. This paper hence presents an effective deep learning mechanism for selecting the SpMV code best suited to a given sparse matrix. Instead of using manually predefined features of a sparse matrix, a feature image and a deep learning network are used to map each sparse matrix to the implementation expected to have the best performance, in advance of execution. The benefits of the proposed mechanism are discussed in terms of prediction accuracy and performance. According to the evaluation, the proposed mechanism can select an optimal or suboptimal implementation for an unseen sparse matrix in the test data set in most cases. These results demonstrate that, by using deep learning, a whole sparse matrix can be used to predict the best implementation, and that the prediction accuracy achieved by the proposed mechanism is higher than that obtained with predefined features.
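
    One plausible way to build such a feature image is to bin the nonzero pattern of the matrix into a fixed-size density grid that a small CNN can consume; the sketch below shows this construction. The grid size and normalization are assumptions for illustration, not the paper's exact feature image.

```python
# Illustrative sketch of a "feature image" for a sparse matrix: a fixed H x W
# density map of its nonzero pattern (an assumed construction for illustration).
import numpy as np
import scipy.sparse as sp

def feature_image(A, size=64):
    """Return a size x size density map of the nonzeros of sparse matrix A."""
    A = sp.coo_matrix(A)
    img = np.zeros((size, size), dtype=np.float32)
    rows = (A.row * size // A.shape[0]).clip(0, size - 1)
    cols = (A.col * size // A.shape[1]).clip(0, size - 1)
    np.add.at(img, (rows, cols), 1.0)      # count nonzeros falling into each cell
    return img / max(A.nnz, 1)             # normalize so the image sums to 1

A = sp.random(10000, 10000, density=1e-4, format="coo", random_state=0)
print(feature_image(A).shape)              # (64, 64), ready to feed a small CNN
```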

  • Deep Reinforcement Learning with Sarsa and Q-Learning: A Hybrid Approach

    Zhi-xiong XU  Lei CAO  Xi-liang CHEN  Chen-xi LI  Yong-liang ZHANG  Jun LAI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/05/22
      Page(s):
    2315-2322

    The commonly used Deep Q Networks algorithm is known to overestimate action values under certain conditions, and such overestimations have been shown to harm performance and may cause instability and divergence of learning. In this paper, we present the Deep Sarsa and Q Networks (DSQN) algorithm, which can be considered an enhancement of the Deep Q Networks algorithm. First, the DSQN algorithm takes advantage of the experience replay and target network techniques of Deep Q Networks to improve the stability of the neural networks. Second, a double estimator is utilized in Q-learning to reduce overestimations. In particular, we introduce Sarsa learning into Deep Q Networks to further remove overestimations. Finally, the DSQN algorithm is evaluated on the cart-pole balancing, mountain car and lunar lander control tasks from the OpenAI Gym. The empirical evaluation results show that the proposed method leads to reduced overestimations, a more stable learning process and improved performance.
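
    The hybrid of Sarsa and Q-learning targets can be illustrated in tabular form, as in the sketch below. The mixing weight beta and the tabular setting are assumptions for illustration; the DSQN algorithm itself operates on deep networks with experience replay, a target network and a double estimator.

```python
# Illustrative tabular sketch of mixing a Sarsa target with a Q-learning target
# (a toy stand-in for the deep DSQN agent; `beta` is an assumed mixing weight).
import numpy as np

def hybrid_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99, beta=0.5):
    """Update Q[s, a] toward a convex combination of the Sarsa and Q-learning targets."""
    sarsa_target = r + gamma * Q[s_next, a_next]      # on-policy, uses the action taken
    qlearn_target = r + gamma * Q[s_next].max()       # off-policy, uses the greedy action
    target = beta * sarsa_target + (1.0 - beta) * qlearn_target
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((5, 2))
hybrid_update(Q, s=0, a=1, r=1.0, s_next=2, a_next=0)
print(Q[0, 1])
```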

  • Pattern-Based Ontology Modeling and Reasoning for Emergency System

    Yue TAN  Wei LIU  Zhenyu YANG  Xiaoni DU  Zongtian LIU  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/06/05
      Page(s):
    2323-2333

    Event-centered information integration is regarded as one of the most pressing issues in improving disaster emergency management. Ontology plays an increasingly important role in emergency information integration and makes emergency reasoning possible. However, developing an event ontology for disaster emergencies is a laborious and difficult task due to the increasing scale and complexity of emergencies. An ontology pattern is a modeling solution for recurrent ontology design problems, which can improve the efficiency of ontology development through reuse. By studying the characteristics of numerous emergencies, this paper proposes a generic ontology pattern for emergency system modeling. Based on the emergency ontology pattern, a set of reasoning rules for emergency-evolution, emergency-solution and emergency-resource utilization reasoning is proposed to conduct emergency knowledge reasoning and querying.

  • An Application of Intuitionistic Fuzzy Sets to Improve Information Extraction from Thai Unstructured Text

    Peerasak INTARAPAIBOON  Thanaruk THEERAMUNKONG  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/05/23
      Page(s):
    2334-2345

    Multi-slot information extraction, also known as frame extraction, is a task that identifies several related entities simultaneously. Most research on this task is concerned with applying IE patterns (rules) to extract related entities from unstructured documents. An important obstacle to success in this task is not knowing where the text portions containing the information of interest are located. This problem is more complicated for languages with sentence-boundary ambiguity, e.g. the Thai language. Applying IE rules to all reasonable text portions can mitigate this obstacle, but it raises another problem: incorrect (unwanted) extractions. This paper presents a method for removing these incorrect extractions. In the method, extractions are represented as intuitionistic fuzzy sets (IFSs), and a similarity measure for IFSs is used to calculate the distance between the IFS of an unclassified extraction and that of each already-classified extraction. The concept of k nearest neighbors is adopted to decide whether the unclassified extraction is correct or not. In experiments on various domains, the proposed technique improves extraction precision while satisfactorily preserving recall.
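
    The filtering step can be sketched as follows: represent each extraction as an intuitionistic fuzzy set (membership/non-membership pairs), compute its similarity to already-classified extractions, and take a k-nearest-neighbor vote. The normalized-Hamming-style similarity below is an assumption for illustration; the paper's exact measure is not reproduced.

```python
# Illustrative sketch of IFS-based k-NN filtering of candidate extractions.
def ifs_similarity(A, B):
    """A, B: lists of (membership, non-membership) pairs over the same attributes."""
    n = len(A)
    dist = sum(abs(ma - mb) + abs(na - nb) for (ma, na), (mb, nb) in zip(A, B)) / (2 * n)
    return 1.0 - dist

def knn_is_correct(candidate, labeled, k=3):
    """labeled: list of (IFS, is_correct); vote among the k most similar extractions."""
    ranked = sorted(labeled, key=lambda x: ifs_similarity(candidate, x[0]), reverse=True)
    votes = [is_correct for _, is_correct in ranked[:k]]
    return votes.count(True) > len(votes) // 2

labeled = [([(0.90, 0.05), (0.80, 0.10)], True),
           ([(0.20, 0.70), (0.30, 0.60)], False),
           ([(0.85, 0.10), (0.75, 0.20)], True)]
print(knn_is_correct([(0.88, 0.08), (0.70, 0.20)], labeled, k=3))  # True
```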

  • Incremental Estimation of Natural Policy Gradient with Relative Importance Weighting

    Ryo IWAKI  Hiroki YOKOYAMA  Minoru ASADA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/06/01
      Page(s):
    2346-2355

    The step size is a parameter of fundamental importance in learning algorithms, particularly for natural policy gradient (NPG) methods. We derive an upper bound for the step size in incremental NPG estimation and propose an adaptive step size that implements the derived upper bound. The proposed adaptive step size guarantees that an updated parameter does not overshoot the target, which is achieved by weighting the learning samples according to their relative importance. We also provide tight upper and lower bounds for the step size, though they are not suitable for incremental learning. We confirm the usefulness of the proposed step size on classical benchmarks. To the best of our knowledge, this is the first adaptive step size method for NPG estimation.

  • Reciprocal Kit-Build Concept Map: An Approach for Encouraging Pair Discussion to Share Each Other's Understanding

    Warunya WUNNASRI  Jaruwat PAILAI  Yusuke HAYASHI  Tsukasa HIRASHIMA  

     
    PAPER-Educational Technology

      Publicized:
    2018/05/29
      Page(s):
    2356-2367

    Collaborative learning is an active teaching and learning strategy, in which learners who give each other elaborated explanations can learn most. However, it is difficult for learners to explain their own understanding elaborately in collaborative learning. In this study, we propose a collaborative use of a Kit-Build concept map (KB map) called “Reciprocal KB map”. In a Reciprocal KB map for a pair discussion, at first, the two participants make their own concept maps expressing their comprehension. Then, they exchange the components of their maps and request each other to reconstruct their maps by using the components. The differences between the original map and the reconstructed map are diagnosed automatically as an advantage of the KB map. Reciprocal KB map is expected to encourage pair discussion to recognize the understanding of each other and to create an effective discussion. In an experiment reported in this paper, Reciprocal KB map was used for supporting a pair discussion and was compared with a pair discussion which was supported by a traditional concept map. Nineteen pairs of university students were requested to use the traditional concept map in their discussion, while 20 pairs of university students used Reciprocal KB map for discussing the same topic. The results of the experiment were analyzed using three metrics: a discussion score, a similarity score, and questionnaires. The discussion score, which investigates the value of talk in discussion, demonstrates that Reciprocal KB map can promote more effective discussion between the partners compared to the traditional concept map. The similarity score, which evaluates the similarity of the concept maps, demonstrates that Reciprocal KB map can encourage the pair of partners to understand each other better compared to the traditional concept map. Last, the questionnaires illustrate that Reciprocal KB map can support the pair of partners to collaborate in the discussion smoothly and that the participants accepted this method for sharing their understanding with each other. These results suggest that Reciprocal KB map is a promising approach for encouraging pairs of partners to understand each other and to promote the effective discussions.

  • Variational-Bayesian Single-Image Devignetting

    Motoharu SONOGASHIRA  Masaaki IIYAMA  Michihiko MINOH  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/06/18
      Page(s):
    2368-2380

    Vignetting is a common type of image degradation that makes peripheral parts of an image darker than the central part. Single-image devignetting aims to remove undesirable vignetting from an image without resorting to calibration, thereby providing high-quality images required for a wide range of applications. Previous studies into single-image devignetting have focused on the estimation of vignetting functions under the assumption that degradation other than vignetting is negligible. However, noise in real-world observations remains unremoved after inversion of vignetting, and prevents stable estimation of vignetting functions, thereby resulting in low quality of restored images. In this paper, we introduce a methodology of image restoration based on variational Bayes (VB) to devignetting, aiming at high-quality devignetting in the presence of noise. Through VB inference, we jointly estimate a vignetting function and a latent image free from both vignetting and noise, using a general image prior for noise removal. Compared with state-of-the-art methods, the proposed VB approach to single-image devignetting maintains effectiveness in the presence of noise, as we demonstrate experimentally.

  • Optimal Billboard Deformation via 3D Voxel for Free-Viewpoint System

    Keisuke NONAKA  Houari SABIRIN  Jun CHEN  Hiroshi SANKOH  Sei NAITO  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/06/18
      Page(s):
    2381-2391

    A free-viewpoint application has been developed that yields an immersive user experience. One of the simple free-viewpoint approaches, the “billboard method,” is suitable for displaying a synthesized 3D view on a mobile device, but it suffers from the limitation that a billboard can be positioned at only one point in the world. This gives users an unacceptable impression when the object being shot is situated at multiple points. To solve this problem, we propose an optimal deformation of the billboard. The deformation is designed as a mapping of grid points in the input billboard silhouette that produces an optimal silhouette from an accurate voxel model of the object. We formulate and solve this procedure as a nonlinear optimization problem based on a grid-point constraint and some a priori information. Our results show that the proposed method generates a synthesized virtual image with a natural appearance and a better objective score in terms of silhouette and structural similarity.

  • An Edge Detection Method Based on Wavelet Transform at Arbitrary Angles

    Su LIU  Xingguang GENG  Yitao ZHANG  Shaolong ZHANG  Jun ZHANG  Yanbin XIAO  Chengjun HUANG  Haiying ZHANG  

     
    PAPER-Biological Engineering

      Publicized:
    2018/06/13
      Page(s):
    2392-2400

    The quality of edge detection depends on the detection angle, scale, and threshold. Many algorithms improve edge detection quality through rules about detection angles; however, these algorithms do not define rules for detecting edges at arbitrary angles, so they simply use different numbers of angles and do not indicate an optimal number. In this paper, a novel edge detection algorithm is proposed that detects edges at arbitrary angles, and an optimal number of angles for the algorithm is introduced. The algorithm combines singularity detection using the Gaussian wavelet transform with edge detection at arbitrary directions, and consists of five steps: 1) an image is divided into pixel lines at a given angle in the range from 45° to 90° according to the decomposition rules of this paper; 2) singularities of the pixel lines are detected to form an edge image at that angle; 3) the edge images at different angles are combined into a final edge image; 4) the detection angles in the range from 45° to 90° are extended to the range from 0° to 360°; 5) an optimal number of angles for the algorithm is proposed. With the optimal number of angles, the algorithm shows better performance.
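
    Steps 1 and 2 above can be sketched as follows: sample pixel lines across the image at a given angle and mark singularities where the Gaussian-wavelet (derivative-of-Gaussian) response along each line exceeds a threshold. The line decomposition below is a simplification (lines start only at the left edge), not the paper's decomposition rules.

```python
# Illustrative sketch of per-line singularity detection at a given angle
# (simplified line decomposition; not the paper's exact rules).
import numpy as np
from scipy.ndimage import gaussian_filter1d, map_coordinates

def edges_at_angle(image, angle_deg, sigma=2.0, thresh=10.0):
    h, w = image.shape
    theta = np.deg2rad(angle_deg)
    dy, dx = np.sin(theta), np.cos(theta)        # unit step along the pixel line
    edge = np.zeros_like(image, dtype=bool)
    t = np.arange(0, h + w)                      # parameter along the line
    for y0 in range(h):                          # one line per row, starting at the left edge
        ys, xs = y0 + t * dy, t * dx
        inside = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
        if inside.sum() < 3:
            continue
        line = map_coordinates(image.astype(float), [ys[inside], xs[inside]], order=1)
        # The first derivative of a Gaussian acts as the wavelet; a large response
        # marks a singularity (edge point) on this pixel line.
        response = gaussian_filter1d(line, sigma=sigma, order=1)
        hit = np.abs(response) > thresh
        edge[ys[inside][hit].astype(int), xs[inside][hit].astype(int)] = True
    return edge

img = np.zeros((64, 64)); img[:, 32:] = 255.0    # vertical step edge
print(edges_at_angle(img, 45).any())             # True: the step is found on the 45-degree lines
```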

  • Waffle: A New Photonic Plasmonic Router for Optical Network on Chip

    Chao TANG  Huaxi GU  Kun WANG  

     
    LETTER-Computer System

      Publicized:
    2018/05/29
      Page(s):
    2401-2403

    Optical interconnect is a promising candidate for network on chip. As the key elements of a network on chip, routers greatly affect the performance of the whole system. In this letter, we propose a new router architecture, Waffle, based on compact 2×2 hybrid photonic-plasmonic switching elements. An optimized architecture, Waffle-XY, is also designed for networks employing the XY routing algorithm. Both Waffle and Waffle-XY are strictly non-blocking architectures and can be employed in popular mesh-like networks. Theoretical analysis shows that Waffle and Waffle-XY achieve better performance than several representative routers.

  • Data Recovery Aware Garbage Collection Mechanism in Flash-Based Storage Devices

    Joon-Young PAIK  Rize JIN  Tae-Sun CHUNG  

     
    LETTER-Data Engineering, Web Information Systems

      Publicized:
    2018/06/20
      Page(s):
    2404-2408

    In terms of system reliability, data recovery is a crucial capability, and its absence leads to the permanent loss of valuable data. This paper aims at improving data recovery in flash-based storage devices, which currently show extremely poor data recoverability. To this end, we focus on garbage collection, which determines the life span of data that have a high possibility of recovery requests from users. A new garbage collection mechanism with awareness of data recovery is proposed. First, deleted or overwritten data are categorized into shallow invalid data and deep invalid data based on the possibility of data recovery requests. Second, the proposed mechanism selects the victim area for free-space reclamation while considering the shallow invalid data, which have a high possibility of recovery requests. Our proposal prevents more shallow invalid data from being eliminated during garbage collection. The experimental results show that our garbage collection mechanism can improve data recovery with minor performance degradation.
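
    The recovery-aware victim selection can be sketched with a greedy cost function that favors blocks full of deep invalid pages while penalizing shallow invalid pages that may still receive recovery requests. The scoring function and weight below are assumptions for illustration, not the mechanism's actual policy.

```python
# Illustrative sketch of recovery-aware victim block selection (assumed cost function).
def select_victim(blocks, recovery_weight=2.0):
    """blocks: list of dicts with counts of valid, shallow_invalid, deep_invalid pages."""
    def score(b):
        reclaimable = b["shallow_invalid"] + b["deep_invalid"]
        # Higher score = better victim: lots of reclaimable space, little valid data
        # to copy, and few shallow invalid pages we would rather keep recoverable.
        return reclaimable - b["valid"] - recovery_weight * b["shallow_invalid"]
    return max(blocks, key=score)

blocks = [{"id": 0, "valid": 10, "shallow_invalid": 50, "deep_invalid": 4},
          {"id": 1, "valid": 12, "shallow_invalid": 2,  "deep_invalid": 50}]
print(select_victim(blocks)["id"])  # 1: mostly deep invalid pages, safer to erase
```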

  • Reward-Based Exploration: Adaptive Control for Deep Reinforcement Learning

    Zhi-xiong XU  Lei CAO  Xi-liang CHEN  Chen-xi LI  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2018/06/18
      Page(s):
    2409-2412

    To address the trade-off between exploration and exploitation in deep reinforcement learning, this paper proposes a “reward-based exploration strategy combined with Softmax action selection” (RBE-Softmax) as a dynamic exploration strategy to guide the agent's learning. The advantage of the proposed method is that characteristics of the agent's learning process are used to adapt the exploration parameters online, so that the agent can select potentially optimal actions more effectively. The proposed method is evaluated on discrete and continuous control tasks from OpenAI Gym, and the empirical evaluation results show that RBE-Softmax leads to statistically significant improvements in the performance of deep reinforcement learning algorithms.
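
    The underlying Softmax (Boltzmann) action selection with a reward-adapted temperature can be sketched as below. The temperature-adaptation rule is an assumption for illustration; the letter's RBE formula is not reproduced.

```python
# Illustrative sketch of Softmax (Boltzmann) action selection with a temperature
# adapted from recent returns (assumed adaptation rule, not the RBE-Softmax formula).
import numpy as np

def softmax_action(q_values, temperature):
    z = np.asarray(q_values) / max(temperature, 1e-6)
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    return np.random.choice(len(q_values), p=probs)

def adapt_temperature(recent_returns, t_min=0.1, t_max=5.0):
    """Lower the temperature (less exploration) as the latest return improves."""
    if len(recent_returns) < 2:
        return t_max
    improvement = recent_returns[-1] - np.mean(recent_returns[:-1])
    return float(np.clip(t_max - improvement, t_min, t_max))

q = [1.0, 2.5, 0.3]
print(softmax_action(q, adapt_temperature([10.0, 12.0, 15.0])))
```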

  • A Propagation Method for Multi Object Tracklet Repair

    Nii L. SOWAH  Qingbo WU  Fanman MENG  Liangzhi TANG  Yinan LIU  Linfeng XU  

     
    LETTER-Pattern Recognition

      Publicized:
    2018/05/29
      Page(s):
    2413-2416

    In this paper, we improve upon the accuracy of existing tracklet generation methods by repairing tracklets based on quality evaluation and detection propagation. Starting from object detections, we generate tracklets using three existing methods. We then perform co-tracklet quality evaluation to score each tracklet and identify good tracklets based on their scores. A detection propagation method is designed to transfer the detections in the good tracklets to the bad ones so as to repair them. The tracklet quality evaluation in our method is implemented through intra-tracklet detection consistency and inter-tracklet detection completeness. Two propagation methods, global propagation and local propagation, are defined to achieve more accurate tracklet propagation. We demonstrate the effectiveness of the proposed method on the MOT 15 dataset.

  • A Unified Neural Network for Quality Estimation of Machine Translation

    Maoxi LI  Qingyu XIANG  Zhiming CHEN  Mingwen WANG  

     
    LETTER-Natural Language Processing

      Publicized:
    2018/06/18
      Page(s):
    2417-2421

    The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for sentence-level QE tasks, called the estimator. We propose to combine the two sub-networks into a single network, called the unified neural network. During training, the bidirectional RNN encoder-decoder is initialized and pre-trained on a bilingual parallel corpus, and then the networks are trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor-estimator approach, the unified neural network helps to train parameters that are better suited to the QE task. Experimental results on the benchmark data set of the WMT17 sentence-level QE shared task show that the proposed unified neural network approach consistently outperforms the predictor-estimator approach and significantly outperforms the other baseline QE approaches.