Zhiyuan JIANG Yijie HUANG Shunqing ZHANG Shugong XU
In a heterogeneous unreliable multiaccess network, in which terminals share a common wireless channel with distinct error probabilities, existing works have shown that a persistent round-robin (RR-P) scheduling policy can be arbitrarily worse than the optimum in terms of Age of Information (AoI) under standard Automatic Repeat reQuest (ARQ). In this paper, practical Hybrid ARQ (HARQ) schemes, which are widely used in today's wireless networks, are considered. We show that in this case RR-P is very close to the optimum with asymptotically many terminals, by explicitly deriving tight, closed-form gaps between the optimal AoI and the AoI achievable by RR-P. In particular, it is rigorously proved that for RR-P, under HARQ models concerning fading channels (resp. the finite-blocklength regime), the relative AoI gap compared with the optimum is within a constant of 6.4% (resp. 6.2% with an error exponential decay rate of 0.5). In addition, RR-P enjoys the distinctive advantage of implementation simplicity, with channel-unaware and easy-to-decentralize operations, making it favorable in practice. A further investigation considering a constraint imposed on the number of retransmissions is presented. The performance gap is illustrated through numerical simulations.
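As an illustrative aside, the following minimal sketch (assumptions: one transmission per unit slot, i.i.d. Bernoulli errors per terminal, standard ARQ without modeling HARQ combining gain) shows how the time-average AoI under RR-P can be estimated numerically.

```python
import numpy as np

def simulate_rr_p_aoi(error_probs, n_slots=100_000, seed=0):
    """Estimate the time-average AoI of each terminal under persistent
    round-robin (RR-P): keep serving one terminal until its update is
    delivered, then move to the next (illustrative ARQ-only model)."""
    rng = np.random.default_rng(seed)
    n = len(error_probs)
    age = np.zeros(n)            # current AoI of each terminal
    aoi_sum = np.zeros(n)
    current = 0                  # terminal currently being served
    for _ in range(n_slots):
        age += 1.0               # every age grows by one slot
        if rng.random() > error_probs[current]:   # delivery succeeded
            age[current] = 1.0                    # fresh update, one-slot delay
            current = (current + 1) % n           # rotate to the next terminal
        aoi_sum += age
    return aoi_sum / n_slots

print(simulate_rr_p_aoi([0.1, 0.3, 0.5]))
```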
Yang YAN Yao YAO Zhi CHEN Qiuyan WANG
Codebooks with small inner-product correlation have been applied in direct spread code division multiple access communications, space-time codes and compressed sensing. In general, it is difficult to construct optimal codebooks achieving the Welch bound or the Levenshtein bound. This paper focuses on constructing asymptotically optimal codebooks with characters of cyclic groups. Based on the proposed constructions, two classes of asymptotically optimal codebooks with respect to the Welch bound are presented. In addition, the parameters of these codebooks are new.
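For reference, the Welch bound for a codebook of N unit-norm vectors in C^K and the maximum cross-correlation amplitude of a given codebook can be checked numerically as in the following sketch (a random codebook is used here purely for illustration, not the character-based constructions of the paper).

```python
import numpy as np

def welch_bound(N, K):
    # Lower bound on the maximum cross-correlation amplitude of an
    # (N, K) codebook of unit-norm vectors, valid for N >= K.
    return np.sqrt((N - K) / ((N - 1) * K))

def max_cross_correlation(C):
    # C: N x K complex array whose rows are unit-norm codewords.
    G = np.abs(C @ C.conj().T)        # correlation amplitudes
    np.fill_diagonal(G, 0.0)          # ignore self-correlations
    return G.max()

rng = np.random.default_rng(0)
N, K = 16, 4
C = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
C /= np.linalg.norm(C, axis=1, keepdims=True)
print(welch_bound(N, K), max_cross_correlation(C))
```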
Hong-Li WANG Li-Li FAN Gang WANG Lin-Zhi SHEN
In this letter, two classes of optimal codebooks and asymptotically optimal codebooks with respect to the Levenshtein bound are presented, which are based on mutually unbiased bases (MUBs) and approximately mutually unbiased bases (AMUBs), respectively.
Gang WANG Min-Yao NIU Lin-Zhi SHEN You GAO
In this letter, motivated by the research of Tian et al., two constructions of asymptotically optimal codebooks with respect to the Welch bound, based on additive and multiplicative characters, are provided. The parameters of the constructed codebooks are new and differ from those in the letter of Tian et al.
Gang WANG Min-Yao NIU Jian GAO Fang-Wei FU
In this letter, as a generalization of Luo et al.'s constructions, a construction of codebooks that meet the Welch bound asymptotically is proposed. The parameters of the codebooks presented in this paper are new in some cases.
Gang WANG Min-Yao NIU You GAO Fang-Wei FU
In this letter, as a generalization of Heng's constructions in [9], a construction of codebooks that meet the Welch bound asymptotically is proposed. The parameters of the codebooks presented in this paper are new in some cases.
In this paper, a sufficient condition for the optimality of a decoded codeword of soft-decision decoding of binary linear codes is studied for the quantized case. A typical uniform 4-level quantizer for soft-decision decoding is employed for the analysis. Simulation results on the (64,42,8) Reed-Muller code indicate that the condition is effective at signal-to-noise ratios of 3 dB or higher for any iterative-style optimum decoding.
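As a small illustration (the quantizer step size below is an assumed parameter, not the paper's), a uniform 4-level quantization of soft channel outputs can be realized as follows.

```python
import numpy as np

def quantize_4level(y, step=0.5):
    """Uniform 4-level quantizer for soft channel outputs.

    Thresholds {-step, 0, +step} split the real line into four cells,
    each represented by an equally spaced level (illustrative choice)."""
    levels = np.array([-1.5, -0.5, 0.5, 1.5]) * step
    cells = np.digitize(y, [-step, 0.0, step])   # cell index 0..3
    return levels[cells]

print(quantize_4level(np.array([-1.2, -0.3, 0.1, 0.9])))
```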
Hayato KOBAYASHI Kazuhiko MINEMATSU Tetsu IWATA
An Authenticated Encryption scheme is used to guarantee both privacy and authenticity of digital data. At FSE 2014, an authenticated encryption scheme called CLOC was proposed. CLOC is designed to handle short input data efficiently without needing heavy precomputation or large memory. This is achieved by treating various cases in the encryption process differently depending on the input data. Five tweak functions are used to handle the conditional branches, and they are designed to satisfy 55 differential probability constraints, which are used in the security proof of CLOC. In this paper, we show that all these 55 constraints are necessary. This shows the design optimality of the tweak functions in CLOC in that the constraints cannot be relaxed, and hence the specification of the tweak functions cannot be simplified.
Weiqin YING Xing XU Yuxiang FENG Yu WU
A conical area evolutionary algorithm (CAEA) is presented to further improve the computational efficiency of evolutionary algorithms for bi-objective optimization. CAEA partitions the objective space into a number of conical subregions and then solves a scalar subproblem in each subregion, using a conical area indicator as the scalar objective. The local Pareto optimality of the solution with the minimal conical area in each subregion is proved. Experimental results on bi-objective problems show that CAEA offers significantly higher computational efficiency than the multi-objective evolutionary algorithm based on decomposition (MOEA/D), while competing well with MOEA/D in terms of solution quality.
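The following is a minimal sketch, under assumptions, of the partitioning idea: the bi-objective space is split into equal-angle conical subregions anchored at the ideal point, and each solution is assigned to one subregion (the paper's conical area indicator itself is not reproduced here).

```python
import numpy as np

def assign_conical_subregions(F, n_regions):
    """Assign bi-objective vectors (minimization) to equal-angle conical
    subregions anchored at the ideal point of the population."""
    ideal = F.min(axis=0)                      # per-objective minima
    G = F - ideal                              # translate so cones start at the ideal point
    angles = np.arctan2(G[:, 1], G[:, 0])      # in [0, pi/2] for nonnegative G
    width = (np.pi / 2) / n_regions
    return np.clip((angles // width).astype(int), 0, n_regions - 1)

F = np.array([[1.0, 4.0], [2.0, 2.5], [3.5, 1.0], [2.8, 1.4]])
print(assign_conical_subregions(F, n_regions=3))
```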
Junqi ZHANG Lina NI Chen XIE Shangce GAO Zheng TANG
This paper presents an inertial estimator learning automata scheme by which both the short-term and long-term perspectives of the environment can be incorporated in the stochastic estimator: the long-term information is crystallized in the running reward-probability estimates, and the short-term information is used by considering whether the most recent response was a reward or a penalty. Thus, when the short-term perspective is considered, the stochastic estimator becomes pertinent in the context of estimator algorithms. The proposed automata employ an inertial weight estimator as the short-term perspective to achieve rapid and accurate convergence when operating in stationary random environments. According to the proposed inertial estimator scheme, the estimates of the reward probabilities of actions are affected by the last response from the environment. In this way, actions that have recently received a positive response from the environment have the opportunity to be estimated as "optimal", to increase their choice probability and, consequently, to be selected. The estimates become more reliable, and consequently the automaton rapidly and accurately converges to the optimal action. The asymptotic behavior of the proposed scheme is analyzed, and it is proved to be ε-optimal in every stationary random environment. Extensive simulation results indicate that the proposed algorithm converges faster than the traditional stochastic-estimator-based SERI scheme and the deterministic-estimator-based DGPA and DPRI schemes when operating in stationary random environments.
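For orientation only, the following is a generic estimator-based learning automaton loop with a simple short-term bias on the estimate of the most recently chosen action; it is a hedged sketch and not the authors' exact inertial weight update, whose definition is given in the paper.

```python
import numpy as np

def estimator_la(reward_probs, n_steps=5000, lr=0.05, inertia=0.3, seed=1):
    """Generic estimator-based learning automaton (illustrative only).

    reward_probs: true (unknown) reward probability of each action.
    inertia: weight of the most recent response (short-term view) when it
             is blended with the running estimate (long-term view)."""
    rng = np.random.default_rng(seed)
    k = len(reward_probs)
    p = np.full(k, 1.0 / k)                      # action-selection probabilities
    counts, rewards = np.zeros(k), np.zeros(k)
    for _ in range(n_steps):
        a = rng.choice(k, p=p)
        r = float(rng.random() < reward_probs[a])        # environment response
        counts[a] += 1
        rewards[a] += r
        est = np.where(counts > 0, rewards / np.maximum(counts, 1), 0.5)
        est[a] = (1 - inertia) * est[a] + inertia * r    # short-term bias on the chosen action
        p += lr * (np.eye(k)[est.argmax()] - p)          # pursue the currently best estimate
        p /= p.sum()                                     # guard against floating-point drift
    return p

print(estimator_la([0.2, 0.5, 0.8]))
```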
The establishment of inter-domain traffic engineered paths is a prerequisite for accomplishing an end-to-end bandwidth guarantee and end-to-end resource optimization. Though inter-domain paths must be reliable, it is difficult to compute suitable backup inter-domain paths in advance when the traffic engineering information is not disclosed outside of each domain. This means that the inter-domain path computation must satisfy a severe requirement on path establishment delay, since all inter-domain paths traversing the failed links need to be computed after the failure occurs. Though several inter-domain path computation schemes have been proposed, their relative characteristics remain unknown. First, this paper classifies the conventional inter-domain path computation schemes into two types, i.e. end-to-end and per-domain schemes, and compares their performance under various traffic loads. Based on the results of the comparison, this paper proposes an adaptive inter-domain path computation scheme that can satisfy the severe requirement on path establishment delay. In this scheme, the domain sequence from the source node to the destination node is divided into multiple sub-domain sequences according to the traffic load in each domain. The end-to-end path computation scheme is applied to sub-domain sequences under heavy traffic loads, while the per-domain path computation scheme is applied to those under normal traffic loads. The simulation results show that the proposed scheme can adaptively satisfy the requirement on path establishment delay while maintaining the optimality of path computation, even if the traffic load applied to each domain changes.
Yu LIU Masato YOSHIOKA Katsumi HOMMA Toshiyuki SHIBUYA
This paper presents a novel method that uses a multi-objective optimization algorithm to automatically find the best solution from a topology library of analog circuits. First, the method extracts the Pareto front of each topology in the library by SPICE simulation. Then, the Pareto front of the topology library is derived from the individual Pareto fronts of the topologies in the library according to a theorem we prove. The best solution, which is defined as the point on the Pareto front of the topology library nearest to the specification, is then calculated by equations derived from the collinearity theorem. After a local search using the Nelder-Mead method maps the calculated best solution back to the design-variable space, the non-dominated best solution is obtained. Compared with traditional optimization methods using single-objective optimization algorithms, this method can efficiently find the best non-dominated solution from multiple topologies for different specifications without additional time-consuming optimization iterations. The experiments demonstrate that this method is feasible and practical in actual analog designs, especially for uncertain or varying multi-dimensional specifications.
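A small sketch, under assumptions, of the two ingredients described above: selecting the Pareto-front point nearest to the specification, and mapping it back to design-variable space with a Nelder-Mead local search (the performance model and the stored Pareto front below are hypothetical placeholders).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical performance model of one topology: design variables -> two objectives.
def performances(x):
    return np.array([x[0] ** 2 + 0.1 * x[1], (x[0] - 1.0) ** 2 + x[1] ** 2])

spec = np.array([0.2, 0.3])                     # target specification in objective space

# Step 1: pick the stored Pareto-front point nearest to the specification.
pareto_front = np.array([[0.1, 0.9], [0.3, 0.4], [0.7, 0.2]])   # assumed, e.g. from SPICE sweeps
best_point = pareto_front[np.argmin(np.linalg.norm(pareto_front - spec, axis=1))]

# Step 2: map it back to design-variable space with a Nelder-Mead local search.
res = minimize(lambda x: np.linalg.norm(performances(x) - best_point),
               x0=np.array([0.5, 0.5]), method="Nelder-Mead")
print(best_point, res.x)
```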
Kazuo MUROTA Ken'ichiro TANAKA
The concept of M-convex functions has recently been generalized for functions defined on constant-parity jump systems. The b-matching problem and its generalization provide canonical examples of M-convex functions on jump systems. In this paper, we propose a steepest descent algorithm for minimizing an M-convex function on a constant-parity jump system.
Akira TANAKA Hideyuki IMAI Masaaki MIYAKOSHI
In terms of the formulation of the optimality, image restoration filters can be divided into two streams. One is formulated as an optimization problem in which the fidelity of a restored image is indirectly evaluated, and the other is formulated as an optimization problem based on a direct evaluation. Originally, the formulation of the optimality and the solution derived from the formulation should correspond to each other. However, in many studies adopting the former stream, an arbitrary choice of a solution without a mathematical ground passes unremarked. In this paper, we discuss the relation between the formulation of the optimality and the solution derived from it from a mathematical point of view, and investigate the relation between a direct-style formulation and an indirect one. Through these analyses, we show that both formulations yield the identical filter in practical situations.
In this paper, a non-adaptive optimal transform (NAOT) coding system is proposed. Note that an orthogonal transformation preserves energy and that the mean squared error (MSE) of a reconstructed image is proportional to the total energy of the transform coefficients discarded in the coding process. The NAOT coding system is developed and proved optimal in the sense of minimum average energy loss. Basically, the proposed coding system consists of the following steps. First, obtain the average energy image block from the transform image blocks. Second, sort the average energy image block in descending order of energy and record the sorted indices. Third, specify the number of coefficients, M, to be retained in the coding process. Fourth, the first M sorted indices form a set denoted as S_M, through which the problem of optimal feature selection in transform coding is solved. Fifth, find a fixed mask A_M from the set S_M, which is then used to select the M significant transform coefficients in image blocks. Finally, the M selected coefficients are quantized and coded in the order given by S_M. To verify the NAOT coding system, simulations are performed on several examples. In the simulations, the optimality and the optimal feature selection in the NAOT coding system are justified. Also, the effectiveness of the proposed S_M-based selection approach is compared with the zigzag scan used in JPEG. For a fair comparison, JPEG is modified to code only M transform coefficients. Simulation results indicate that the performance of the S_M-based selection approach is superior or identical to the zigzag scan in terms of PSNR. Finally, a performance comparison between the NAOT coding system and JPEG is made. It suggests that the proposed NAOT coding system is able to trade very little PSNR for a significant bit-rate reduction when compared with JPEG. In other words, JPEG spends much bit rate to improve the PSNR of the reconstructed image only slightly, when compared with the NAOT coding system.
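A minimal sketch of the mask construction described above, assuming 8x8 blocks and the 2-D DCT as the orthogonal transform: average the coefficient energies over all blocks, sort the positions by energy to obtain S_M, and keep the M most energetic positions as the fixed mask A_M.

```python
import numpy as np
from scipy.fft import dctn

def naot_mask(image, block=8, M=10):
    """Build the sorted index set S_M and the fixed selection mask A_M
    from the average-energy block of a 2-D DCT block transform."""
    h, w = (d - d % block for d in image.shape)      # crop to whole blocks
    energy = np.zeros((block, block))
    n_blocks = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(image[i:i + block, j:j + block], norm="ortho")  # orthogonal transform
            energy += c ** 2
            n_blocks += 1
    energy /= n_blocks                               # average energy per coefficient position
    S_M = np.argsort(energy, axis=None)[::-1][:M]    # M most energetic positions, descending
    A_M = np.zeros(block * block, dtype=bool)
    A_M[S_M] = True
    return A_M.reshape(block, block), S_M

img = np.random.default_rng(0).random((64, 64))
A_M, S_M = naot_mask(img, M=10)
print(S_M)
```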
Jun ASATANI Takuya KOUMOTO Kenichi TOMITA Tadao KASAMI
In this letter, we propose (1) a new sub-optimum minimum distance search (sub-MDS), whose search complexity is reduced considerably compared with optimum MDSs and (2) a termination criterion, called near optimality condition, to reduce the average number of decoding iterations with little degradation of error performance for the proposed decoding using sub-MDS iteratively. Consequently, the decoding algorithm can be applied to longer codes with feasible complexity. Simulation results for several Reed-Muller (RM) codes of lengths 256 and 512 are given.
This paper deals with the two-processor nonpreemptive scheduling problem for acyclic SWITCH-less program nets including two types of nodes: AND-nodes and OR-nodes. Compared with task graphs, which are a special case of acyclic SWITCH-less program nets and include only AND-nodes, the multiprocessor scheduling problem for general acyclic SWITCH-less program nets is more complicated. Since the multiprocessor scheduling problem for general task graphs is NP-hard, so is that for acyclic SWITCH-less program nets in general. In this paper, we suppose the acyclic SWITCH-less program nets satisfy: (i) each AND-node and each OR-node have node firing times of 1 and 0, respectively; (ii) each AND-node possesses a single input edge. For such a class of acyclic SWITCH-less program nets, we first propose a hybrid priority list L that consists of both dynamic and static sub-lists. Then we prove that, for a given acyclic SWITCH-less program net to be executed by two identical processors, the schedule generated by L is optimal.
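For illustration only, the following sketch shows generic two-processor nonpreemptive list scheduling of unit-time tasks under precedence constraints; the fixed priority order stands in for the paper's hybrid dynamic/static list L, which is not reproduced here.

```python
def two_processor_list_schedule(tasks, preds, priority):
    """Nonpreemptive list scheduling of unit-time tasks on two processors.

    tasks: iterable of task names; preds: dict task -> set of predecessors;
    priority: list of tasks, highest priority first (stands in for the list L)."""
    done, remaining, schedule, time = set(), set(tasks), [], 0
    while remaining:
        ready = [t for t in priority if t in remaining and preds.get(t, set()) <= done]
        step = ready[:2]                 # at most two unit-time tasks per slot
        if not step:                     # cannot happen for an acyclic precedence relation
            break
        schedule.append((time, step))
        done.update(step)
        remaining.difference_update(step)
        time += 1
    return schedule

preds = {"c": {"a"}, "d": {"a", "b"}, "e": {"c"}}
print(two_processor_list_schedule("abcde", preds, priority=list("abcde")))
```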
Yoshihiro KANEKO Shoji SHINODA
The problem of obtaining an optimal file transfer on a file transmission net N is to determine how to transmit, with a minimum total cost, copies of a certain file of information from some vertices to others on N according to the respective vertices' copy demand numbers. This problem is NP-hard for a general file transmission net. So far, some classes of N for which polynomial-time algorithms for obtaining an optimal file transfer can be designed have been known. In addition, if we deal with restricted file transfers, i.e., forest-type file transfers, we can obtain an optimal forest-type file transfer on a more general class of N. This paper proves that for such general nets it suffices to consider forest-type file transfers in order to obtain an optimal file transfer.
Mitsuharu ARIMURA Hirosuke YAMAMOTO
In this paper the performance of the Block Sorting algorithm proposed by Burrows and Wheeler is evaluated theoretically. It is proved that the Block Sorting algorithm is asymptotically optimal for stationary ergodic finite-order Markov sources. Our proof is based on the facts that symbols with the same Markov state (or context) in an original data sequence are grouped together in the output sequence obtained by the Burrows-Wheeler transform, and that the codeword length of each group can be bounded by a function of the frequencies of the symbols included in the group.
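A minimal sketch of the Burrows-Wheeler transform (naive rotation sorting, illustrative only), showing how symbols that precede the same context become grouped together in the output column, which is the property the proof exploits.

```python
def bwt(s, sentinel="$"):
    """Naive Burrows-Wheeler transform: sort all rotations, keep the last column."""
    s = s + sentinel                                   # unique end marker makes rotations distinct
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

# Symbols that precede the same context (every 'n' before 'a' in "banana")
# end up adjacent in the transformed sequence.
print(bwt("banana"))   # -> "annb$aa"
```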
Takuya KOUMOTO Tadao KASAMI Shu LIN
In an iterative decoding algorithm, such as the Chase Type-II decoding algorithm and its improvements, candidate codewords for a received vector are generated for testing based on a bounded-distance decoder and a set of test error patterns. It is desirable to remove useless test error patterns in these decoding algorithms. This paper presents a sufficient condition for ruling out some useless test error patterns. If this condition holds for a test error pattern e, then e cannot produce a candidate codeword with a correlation metric larger than those of the candidate codewords already generated, and hence e is useless. This significantly reduces the decoding operations in the Chase Type-II decoding algorithm or the decoding iterations in its improvements.
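A hedged sketch of Chase-style test-pattern decoding for a toy (7,4) Hamming code: error patterns over the least reliable positions are applied to the hard decision and passed to a bounded-distance decoder, and the candidate with the largest correlation metric is kept. The paper's sufficient condition for ruling out useless patterns is not reproduced; all patterns are tried here.

```python
import itertools
import numpy as np

# Generator matrix of the (7,4) Hamming code (toy example, d_min = 3).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
CODEWORDS = np.array([(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=4)])

def bounded_distance_decode(hard, t=1):
    """Return the codeword within Hamming distance t of `hard`, if one exists."""
    d = (CODEWORDS != hard).sum(axis=1)
    i = d.argmin()
    return CODEWORDS[i] if d[i] <= t else None

def chase_decode(y, p=3):
    """Flip subsets of the p least reliable positions (Chase Type-II uses
    p = floor(d_min/2); a larger p is used here purely for illustration)."""
    hard = (y < 0).astype(int)                   # BPSK mapping: bit 0 -> +1, bit 1 -> -1
    unreliable = np.argsort(np.abs(y))[:p]       # least reliable positions
    best, best_metric = None, -np.inf
    for flips in itertools.product([0, 1], repeat=p):
        trial = hard.copy()
        trial[unreliable] ^= np.array(flips)
        cw = bounded_distance_decode(trial)
        if cw is None:
            continue
        metric = np.dot(1 - 2 * cw, y)           # correlation metric of the candidate
        if metric > best_metric:
            best, best_metric = cw, metric
    return best

print(chase_decode(np.array([0.9, -0.2, 0.3, 1.1, -0.8, 0.1, 0.4])))
```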