Derviş AYGÖR Shafqat Ur REHMAN Fatih Vehbi ÇELEBİ
This paper is primarily concerned with the performance of Medium Access Control (MAC) layer protocols for Wireless Sensor Networks (WSNs) in the context of buffer management. We propose a novel buffer management solution that improves the overall performance of MAC layer protocols, in particular those designed for WSNs. An analytical model is introduced to evaluate the cost of different buffer management solutions. The proposed solution, Single Queue Multi Priority (SQMP), is compared with the well-known Single Queue Single Priority (SQSP) and Multi Queue Multi Priority (MQMP) buffer management solutions. All solutions are investigated in terms of throughput, buffer utilization, and prioritization capabilities. Although the different buffer management solutions perform relatively well in uncongested networks, the characteristic features of WSNs degrade their performance. Under bursty conditions, SQMP controls and manages this degradation more effectively than the other two solutions. Simulations based on OMNeT++ and Castalia confirm the performance improvements of our buffer management solution.
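The single-queue, multi-priority idea can be illustrated with a minimal sketch (the class and method names here are ours, not the paper's): one shared buffer of fixed capacity whose service order respects per-packet priority, with FIFO ordering within each priority level.

```python
import heapq
import itertools

class SingleQueueMultiPriority:
    """Illustrative sketch of an SQMP-style buffer: a single shared
    buffer served in priority order (lower value = served first),
    FIFO within a priority level. Not the paper's exact algorithm."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []
        self.counter = itertools.count()  # FIFO tie-break within a priority

    def enqueue(self, packet, priority):
        if len(self.heap) >= self.capacity:
            return False  # buffer full: the arriving packet is dropped
        heapq.heappush(self.heap, (priority, next(self.counter), packet))
        return True

    def dequeue(self):
        # Serve the highest-priority (lowest value) packet, if any.
        return heapq.heappop(self.heap)[2] if self.heap else None
```

The single shared buffer lets high-priority traffic use any free slot (unlike MQMP's per-class partitions), while still serving classes in strict priority order (unlike SQSP).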
Yu CHEN Jing XIAO Liuyi HU Dan CHEN Zhongyuan WANG Dengshi LI
Saliency detection for videos has received great attention and been extensively studied in recent years. However, diverse visual scenes with complicated motion lead to noticeable background noise and non-uniform highlighting of the foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress the background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.
Natthawute SAE-LIM Shinpei HAYASHI Motoshi SAEKI
Code smells are indicators of design flaws or problems in the source code. Various tools and techniques have been proposed for detecting code smells. These tools generally detect a large number of code smells, so approaches have also been developed for prioritizing and filtering them. However, a lack of empirical data detailing how developers filter and prioritize code smells hinders improvements to these approaches. In this study, we investigated ten professional developers to determine the factors they use for filtering and prioritizing code smells in an open source project under the condition that they complete a list of five tasks. In total, we obtained 69 responses for code smell filtration and 50 responses for code smell prioritization from the ten professional developers. We found that Task relevance and Smell severity were most commonly considered during code smell filtration, while Module importance and Task relevance were employed most often for code smell prioritization. These results may facilitate further research into code smell detection, prioritization, and filtration that better focuses on the actual needs of developers.
Xiaoyuan REN Libing JIANG Xiaoan TANG Junda ZHANG
Extracting 3D information from a single image is an interesting but ill-posed problem. Especially for artificial objects with little texture, such as smooth metal devices, the lack of object detail makes the problem more challenging. Aiming at texture-less objects with symmetric structure, this paper proposes a novel method for 3D pose estimation from a single image by introducing implicit structural symmetry and context constraints as prior knowledge. First, through parameterized representation, the texture-less object is decomposed into a series of sub-objects with regular geometric primitives. Accordingly, the problem of 3D pose estimation is converted into a parameter estimation problem, which is solved by a primitive fitting algorithm. Then, the context prior among sub-objects is introduced for parameter refinement via augmented Lagrangian optimization. The effectiveness of the proposed method is verified by experiments on simulated and measured data.
Focusing on the defects of well-known defogging algorithms for fog images based on the atmospheric scattering model, we find that it is necessary to obtain an accurate transmission map that reflects the real depth in both large-depth and close-range regions. It is hard to achieve this with a single prior because of the differences between large-depth and close-range regions in foggy images. Hence, we propose a novel prior, called the saturation prior, that simplifies the solution of the transmission map through a transferring coefficient. Then, under the Random Walk model, we constrain the transferring coefficient with the color attenuation prior, which yields a good transmission map in large-depth regions. More importantly, we design a regularization weight to balance the influences of the saturation prior and the color attenuation prior on the transferring coefficient. Experimental results demonstrate that the proposed defogging method outperforms state-of-the-art single-prior image defogging methods in terms of detail restoration and color preservation.
Tianjiao ZHANG Qi ZHU Guangjun LIANG Jianfang XIN Ziyu PAN
Vehicular Ad hoc Networks (VANETs) are an important part of the Intelligent Transportation System (ITS). VANETs enable communication between moving vehicles, infrastructure, and other intelligent mobile terminals, which can greatly improve road safety and traffic efficiency. Existing studies of vehicular ad hoc networks usually consider only one data transmission model, while the increasing density of traffic data sources means that the vehicular ad hoc network is evolving into a Heterogeneous Vehicular Network (HetVNET) that needs a hybrid data transmission scheme. Considering the Heterogeneous Vehicular Network, this paper presents a hybrid transmission MAC protocol covering vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I/I2V) communication. In this protocol, data are classified according to timeliness, on the basis of traditional V2V and V2I/I2V communication. If time-sensitive data (V2V data) fail in transmission, the node transmits the data to the base station and lets the base station cooperatively retransmit the data with higher priority. This transmission scheme makes effective use of the large transmission range of the base station. In this paper, the queueing models of the vehicles and the base station are analyzed by one-dimensional and two-dimensional Markov chains, respectively, and expressions for throughput, packet drop rate, and delay are derived. Simulation results show that this MAC protocol can improve the transmission efficiency of V2V communication and reduce the delay of V2V data without degrading system performance.
Self-paced learning (SPL) gradually trains on data from easy to hard, including more data in the training process in a self-paced manner. The advantage of SPL is its ability to avoid bad local minima, which improves generalization performance. However, SPL needs an expert to judge the complexity of the data at the beginning of training. Generally, such an expert does not exist at the beginning and is instead learned by gradually training on the samples. Based on this consideration, we add uncertainty to the complexity judgment in SPL and propose self-paced learning with an uncertainty prior (SPUP). To efficiently solve our optimization function, an iterative optimization and statistical simulated annealing method is introduced. The experimental results indicate that SPUP is more robust to outliers and achieves higher accuracy and lower error than SPL.
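As an illustration of the baseline the paper builds on, the standard hard self-paced weighting rule admits only samples whose loss falls below an age parameter that grows during training. This is a minimal sketch of conventional SPL weighting, not the authors' SPUP system:

```python
import numpy as np

def self_paced_weights(losses, lam):
    # Hard self-paced weights: include a sample (weight 1) only if its
    # loss is below the current age parameter lam; easy samples first.
    return (np.asarray(losses) < lam).astype(float)

losses = np.array([0.1, 0.5, 2.0, 0.3])
# Early training: a small lam admits only the easiest samples.
print(self_paced_weights(losses, 0.4))   # [1. 0. 0. 1.]
# Later training: a larger lam gradually includes harder samples.
print(self_paced_weights(losses, 1.0))   # [1. 1. 0. 1.]
```

In each SPL iteration, the model is retrained on the weighted samples and lam is increased, so the training set grows from easy to hard; SPUP, as described above, replaces this deterministic threshold judgment with an uncertainty prior.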
Yuan GAO Chengdong WU Xiaosheng YU Wei ZHOU Jiahui WU
Efficient optic disc (OD) segmentation plays a significant role in retinal image analysis and retinal disease screening. In this paper, we present a fully automatic segmentation approach called double boundary extraction for OD segmentation. The proposed approach consists of two stages: first, we use an unsupervised learning technique and a statistical method based on OD boundary information to obtain the initial contour adaptively. Second, the final optic disc boundary is extracted using the proposed LSO model. The performance of the proposed method is tested on the public DIARETDB1 database, and the experimental results demonstrate the effectiveness and advantages of the proposed method.
Yun LIU Rui CHEN Jinxia SHANG Minghui WANG
In this letter, we propose a novel and effective haze removal method using a structure-aware atmospheric veil. More specifically, the initial atmospheric veil is first estimated based on the dark channel prior and a morphological operator. An energy optimization function incorporating the structure features of the input image is then constructed to refine the initial atmospheric veil. Finally, the haze-free image is restored by inverting the atmospheric scattering model. Additionally, brightness adjustment is performed to prevent the dehazing result from being too dark. Experimental results on hazy images reveal that the proposed method can effectively remove haze and yield dehazing results with vivid color and high scene visibility.
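The final restoration step is the standard inversion of the atmospheric scattering model I = J·t + A(1 − t), where the transmission t corresponds to the refined atmospheric veil. A minimal sketch follows; the lower bound t_min is a common safeguard against noise amplification, not a detail taken from the letter:

```python
import numpy as np

def restore(I, A, t, t_min=0.1):
    # Invert the atmospheric scattering model I = J*t + A*(1 - t):
    #   J = (I - A) / t + A
    # Clamping t from below avoids dividing by near-zero transmission,
    # which would amplify noise in dense-haze regions.
    t = np.maximum(t, t_min)
    return (I - A) / t + A
```

For example, a scene radiance J = 0.8 observed through transmission t = 0.5 under atmospheric light A = 1.0 gives I = 0.9, and `restore` recovers 0.8 exactly.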
Wenbo XU Yupeng CUI Yun TIAN Siye WANG Jiaru LIN
This paper considers the recovery problem of distributed compressed sensing (DCS), where J (J≥2) signals all share a sparse common component and have sparse innovation components. The decoder attempts to jointly recover each component based on $M_j$ random noisy measurements ($j=1,\ldots,J$) with prior information on the support probabilities, i.e., the probabilities that the entries in each component are nonzero. We give both sufficient and necessary conditions on the total number of measurements $\sum\nolimits_{j=1}^{J} M_j$ needed to recover the support set of each component perfectly. The results show that as the number of signals J increases, the required average number of measurements $\sum\nolimits_{j=1}^{J} M_j/J$ decreases. Furthermore, we propose an extension of an existing DCS algorithm to exploit the prior information, and simulations verify its improved performance.
Wei ZHOU Chengdong WU Yuan GAO Xiaosheng YU
Accurate optic disc localization and segmentation are the two main steps in designing automated screening systems for diabetic retinopathy. In this paper, a novel optic disc detection approach based on saliency object detection and a modified local intensity clustering model is proposed. It consists of two stages: in the first stage, a saliency detection technique is applied to the enhanced retinal image to locate the optic disc. In the second stage, the optic disc boundary is extracted by the modified Local Intensity Clustering (LIC) model with an oval-shaped constraint. The performance of our proposed approach is tested on the public DIARETDB1 database. Compared to state-of-the-art approaches, the experimental results show the advantages and effectiveness of the proposed approach.
Huu-Noi DOAN Tien-Dat NGUYEN Min-Cheol HONG
This paper presents a new hole-filling method that uses extrapolated spatio-temporal background information to obtain a synthesized free-view. A new background codebook for extracting reliable temporal background information is introduced. In addition, the paper addresses the estimation of the spatial local background to distinguish background and foreground regions so that spatial background information can be extrapolated. Background holes are filled by combining spatial and temporal background information. Finally, exemplar-based inpainting is applied to fill in the remaining holes using a new priority function. The experimental results demonstrate that satisfactory synthesized views can be obtained using the proposed algorithm.
Wenhao FU Huiqun YU Guisheng FAN Xiang JI
Regression testing is essential for assuring the quality of a software product. Because rerunning all test cases in regression testing may be impractical under limited resources, test case prioritization is a feasible solution that optimizes regression testing by reordering test cases for the current testing version. In this paper, we propose a novel test case prioritization approach that combines a clustering algorithm and a scheduling algorithm to improve the effectiveness of regression testing. The clustering algorithm merges test cases with the same or similar properties into clusters, and the scheduling algorithm allocates an execution priority to each test case by combining fault detection rates with the waiting time of test cases in the candidate set. We have conducted experiments on 12 C programs to validate the effectiveness of our proposed approach. Experimental results show that our approach is more effective than several well-studied test case prioritization techniques in terms of average percentage of faults detected (APFD) values.
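The APFD metric used for evaluation can be computed directly from the position at which each fault is first detected in the prioritized order: APFD = 1 − (TF₁ + … + TFₘ)/(n·m) + 1/(2n) for n tests and m faults. A minimal sketch (the data layout is ours, and it assumes every fault is detected by at least one test):

```python
def apfd(order, fault_matrix):
    """order: test-case ids in execution order.
    fault_matrix: dict mapping each test id to the set of faults it detects.
    Returns the Average Percentage of Faults Detected (APFD)."""
    faults = set().union(*fault_matrix.values())
    n, m = len(order), len(faults)
    first = {}  # fault -> 1-based position of the first test detecting it
    for pos, test in enumerate(order, start=1):
        for fault in fault_matrix.get(test, set()):
            first.setdefault(fault, pos)
    return 1 - sum(first[f] for f in faults) / (n * m) + 1 / (2 * n)
```

A prioritization technique scores higher APFD when it moves fault-revealing tests earlier in the order, which is exactly what the clustering-plus-scheduling approach above aims to do.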
This paper presents a formal analysis of the feature negotiation and connection management procedures of the Datagram Congestion Control Protocol (DCCP). Using state space analysis, we discover an error in the DCCP specification that results in the two ends of a connection holding different agreed feature values. The error occurs when the client ignores an unexpected Response packet in the OPEN state that carries a valid Confirm option. This provides evidence that the connection management and feature negotiation procedures interact. We also propose solutions to rectify the problem.
Ryoma ANDO Ryo HAMAMOTO Hiroyasu OBATA Chisa TAKANO Kenji ISHIDA
In IEEE 802.11 Wireless Local Area Networks (WLANs), frame collisions increase drastically as the number of wireless terminals connected to the same Access Point (AP) grows, which decreases the total throughput of all terminals. To solve this issue, the authors have proposed a new media access control (MAC) method, Synchronized Phase MAC (SP-MAC), based on the synchronization phenomena of coupled oscillators. We have so far addressed network environments in which only uplink flows from the wireless terminals to an AP exist. However, it is necessary to consider real network environments in which uplink and downlink flows are generated simultaneously. If many bidirectional data flows exist in the WLAN, the AP receives many frames from both the uplink and the downlink due to the collision avoidance of SP-MAC. As a result, the total throughput decreases because of buffer overflow in the AP. In this paper, we propose a priority control method based on SP-MAC for avoiding buffer overflow in the AP under a bidirectional environment. We also show by simulation that the proposed method mitigates buffer overflow in the AP and improves the total throughput.
Sung-Woong JO Taeyoung HA Taehyun KYONG Jong-Moon CHUNG
Dynamic voltage and frequency scaling (DVFS) is an essential mechanism for power saving in smartphones and mobile devices. Central processing unit (CPU) load based DVFS algorithms are widely used due to their simplicity of implementation. However, such algorithms often lead to a poor response time, which is one of the most important factors of user experience, especially for interactive applications. In this paper, the response time is mathematically modeled by considering the CPU frequency and characteristics of the running applications based on the Linux kernel's completely fair scheduler (CFS), and a Response time constrained Frequency & Priority (RFP) control scheme for improved power efficiency of smartphones is proposed. In the RFP algorithm, the CPU frequency and priority of the interactive applications are adaptively adjusted by estimating the response time in real time. The experimental results show that RFP can save energy up to 24.23% compared to the ondemand governor and up to 7.74% compared to HAPPE while satisfying the predefined threshold of the response time in Android-based smartphones.
In this letter, a novel and highly efficient algorithm is proposed for haze removal from only a single input image. The proposed algorithm is built on the atmospheric scattering model. First, the global atmospheric light is estimated and a coarse atmospheric veil is inferred based on statistics of the dark channel prior. Second, the coarse atmospheric veil is refined by using a fast Tri-Gaussian filter based on human retina properties. To avoid halo artefacts, we then redefine the scene albedo. Finally, the haze-free image is derived by inverting the atmospheric scattering model. Results on challenging foggy images demonstrate that the proposed method not only improves the contrast and visibility of the restored image but also expedites the process.
Sung-Ho LEE Seung-Won JUNG Sung-Jea KO
The dark channel prior (DCP)-based image dehazing method has been widely used for enhancing visibility of outdoor images. However, since the DCP-based method assumes that the minimum values within local patches of natural outdoor haze-free images are zero, underestimation of the transmission is inevitable when the assumption does not hold. In this letter, a novel iterative image dehazing algorithm is proposed to compensate for the underestimated transmission. Experimental results show that the proposed method can improve the dehazing performance by increasing the transmission estimation accuracy.
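For reference, the standard DCP transmission estimate that such methods start from is t(x) = 1 − ω·dark(I(x)/A), where dark(·) takes the minimum over color channels and over a local patch. A minimal, unoptimized sketch follows; the parameter values are typical defaults from the DCP literature, not necessarily those of the letter, and the letter's iterative compensation step is not reproduced here:

```python
import numpy as np

def dark_channel(img, patch=3):
    # Minimum over color channels, then over a local square patch.
    h, w, _ = img.shape
    mins = img.min(axis=2)
    r = patch // 2
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = mins[max(0, i - r):i + r + 1,
                             max(0, j - r):j + r + 1].min()
    return out

def estimate_transmission(img, A, omega=0.95, patch=3):
    # DCP estimate: t(x) = 1 - omega * dark(I(x)/A).
    # omega < 1 keeps a trace of haze so distant scenes look natural.
    return 1.0 - omega * dark_channel(img / A, patch)
```

When the haze-free scene's local minima are not actually zero (the case the letter targets), the dark channel is overestimated and t is correspondingly underestimated, which is the bias the proposed iterative algorithm compensates for.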
Hayato MAKI Tomoki TODA Sakriani SAKTI Graham NEUBIG Satoshi NAKAMURA
In this paper, a new method is presented for noise removal from single-trial event-related potentials recorded with a multi-channel electroencephalogram. An observed signal is separated into multiple signals with a multi-channel Wiener filter whose coefficients are estimated by parameter estimation of a probabilistic generative model that locally models the amplitude of each separated signal in the time-frequency domain. The effectiveness of using prior information about covariance matrices to estimate the model parameters, and of frequency-dependent covariance matrices, was shown through an experiment with a simulated event-related potential data set.
Vassilios G. VASSILAKIS Ioannis D. MOSCHOLIOS Michael D. LOGOTHETIS
The fast proliferation of the mobile Internet and high-demand mobile applications necessitates the introduction of different priority classes in next-generation cellular networks. This is especially crucial for the efficient use of radio resources in heterogeneous and virtualized network environments. Although many analytical tools have been proposed for capacity and radio resource modelling in cellular networks, only a few of them explicitly incorporate priorities among services. We propose a novel analytical model to analyse the performance of a priority-based cellular CDMA system with a finite source population. When the cell load is above a certain level, low-priority calls may be blocked to preserve the quality of service of high-priority calls. The proposed model leads to an efficient closed-form solution that enables fast and very accurate calculation of the resource occupancy of the CDMA system and of call blocking probabilities, for different services and many priority classes. To this end, the system is modelled as a continuous-time Markov chain. We evaluate the accuracy of the proposed analytical model by means of computer simulations and find that the introduced approximation errors are negligible.
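As background, finite-source loss systems of this kind are classically captured by Engset-style birth-death chains, where blocking is read off the stationary distribution. The following is an illustrative single-class sketch only, not the authors' multi-service, multi-priority closed-form solution:

```python
from math import comb

def engset_time_congestion(N, C, a):
    """Engset model: N finite sources, C channels, per-idle-source
    offered load a. The stationary probability of j busy channels is
    P(j) proportional to C(N, j) * a**j for j = 0..C; the time
    congestion (probability the system is full) is P(C)."""
    weights = [comb(N, j) * a**j for j in range(C + 1)]
    return weights[-1] / sum(weights)
```

With a finite population, the arrival rate shrinks as more sources become busy, so blocking is lower than the infinite-source Erlang B value; the paper's model extends this style of CTMC analysis to many priority classes with threshold-based blocking of low-priority calls.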