Min YUAN Qianjian XING Zhenguo MA Feng YU Yingke XU
In this letter, we present a novel single-precision floating-point multiply-accumulator (FNA-MAC) that achieves lower hardware resource usage, reduced computing latency, and improved computing accuracy for continuous dot-product operations. By further fusing the normalization and alignment steps of the traditional FMA algorithm, the proposed architecture eliminates the first N-1 normalization and rounding operations of an N-point dot product and preserves interim results at a significand bit width twice that of traditional methods. Normalizing and rounding the final result costs one additional multiply-add operation. Simulation results show that the improvement in computational accuracy is significant. Meanwhile, compared to a recently published FMA design, the proposed FNA-MAC reduces the slice look-up table/flip-flop resource usage and the computing latency by 18% and 33.3%, respectively.
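The core idea, deferring normalization and rounding until the final result while carrying the interim sum at a wider significand width, can be illustrated numerically. The Python sketch below is a software analogy only, not the FNA-MAC hardware datapath: float64 stands in for the doubled interim significand width, and the random test vectors and vector length are assumptions for illustration.

```python
# Software analogy of deferred normalization/rounding in an N-point dot
# product: per-step rounding to float32 (traditional FMA-style accumulation)
# versus one final rounding of a wide interim sum (the FNA-MAC idea).
import numpy as np

rng = np.random.default_rng(0)
N = 1024  # assumed vector length for the demonstration
a = rng.standard_normal(N).astype(np.float32)
b = rng.standard_normal(N).astype(np.float32)

# Traditional accumulation: the result is normalized and rounded back to
# float32 after every multiply-add, injecting rounding error N times.
# The float64 intermediate emulates the single rounding of a hardware FMA.
acc = np.float32(0.0)
for x, y in zip(a, b):
    acc = np.float32(np.float64(x) * np.float64(y) + np.float64(acc))

# Deferred normalization/rounding: keep the interim sum in a wider
# significand (float64 here) and round to float32 only once, at the end.
wide = np.float64(0.0)
for x, y in zip(a, b):
    wide += np.float64(x) * np.float64(y)
deferred = np.float32(wide)

exact = np.dot(a.astype(np.float64), b.astype(np.float64))
print("per-step rounding, abs error:", abs(np.float64(acc) - exact))
print("deferred rounding, abs error:", abs(np.float64(deferred) - exact))
```

In this analogy the deferred variant typically lands within one float32 rounding of the exact result, which mirrors the accuracy benefit the letter reports for keeping interim results at doubled significand width.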
Ke XU Rujun LIU Yuan SUN Keju ZOU Yan HUANG Xinfang ZHANG
In tutoring systems, students are more likely to use hints when deciding on difficult or confusing problems, and students with weaker knowledge mastery tend to request more hints than those with stronger mastery. Hints are an important form of assistance: students can learn from them and enhance their knowledge of the questions. In this paper we first use hints alone to build a model, named Hints-Model, to predict student performance. In addition, matrix factorization (MF), following its success in collaborative filtering (CF) for recommender systems (RS), has become prevalent in educational settings for predicting student performance. Non-negative matrix factorization (NMF), another factorization method developed over the past decade, additionally imposes non-negativity constraints on the factor matrices. Considering the sparseness of the original matrix and the need for efficiency, we adopt an element-based method called regularized single-element-based NMF (RSNMF). We compare the results of the different factorization methods and of their combinations with Hints-Model. Experiments on two datasets show that combining RSNMF with Hints-Model achieves a significant improvement and obtains the best result. We also compare Hints-Model with the pioneering approach, performance factor analysis (PFA), and the outcomes show that the former outperforms the latter.
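As an illustration of how an RSNMF-style factorization can predict student performance from a sparse student-by-question matrix, the Python sketch below applies regularized multiplicative updates restricted to observed entries, which keeps the factors non-negative. It is a minimal sketch under stated assumptions: the exact update rule, the rank k, the regularization weight lam, and the toy data are illustrative choices, not taken from the paper.

```python
# Sketch of a regularized, observed-entries-only non-negative factorization
# of a student x question matrix R into P @ Q. Multiplicative updates keep
# all factors non-negative; unobserved cells never drive the updates.
import numpy as np

def rsnmf(R, mask, k=5, lam=0.05, iters=200, seed=0):
    """Factor R (observed where mask == 1) into non-negative P (m x k), Q (k x n)."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.random((m, k)) + 1e-3   # non-negative initialization
    Q = rng.random((k, n)) + 1e-3
    eps = 1e-12                     # guards against division by zero
    for _ in range(iters):
        Rhat = mask * (P @ Q)       # predictions on observed entries only
        Robs = mask * R
        # Regularized multiplicative updates; the lam terms penalize large
        # factors in proportion to how many entries each row/column observes.
        P *= (Robs @ Q.T) / (Rhat @ Q.T + lam * mask.sum(axis=1, keepdims=True) * P + eps)
        Q *= (P.T @ Robs) / (P.T @ Rhat + lam * mask.sum(axis=0, keepdims=True) * Q + eps)
    return P, Q

# Toy usage: 1 = correct, 0 = incorrect; mask 0 marks unobserved attempts.
R = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 0]], dtype=float)
mask = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
P, Q = rsnmf(R, mask, k=2)
print(np.round(P @ Q, 2))  # predicted performance, including unobserved cells
```

Restricting the updates to observed entries is what makes this family of methods practical on the highly sparse matrices typical of tutoring-system logs, and the reconstructed P @ Q can then be combined with a hint-based model along the lines of the Hints-Model described above.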