Takayuki TOMIOKA Kazu MISHIBA Yuji OYAMADA Katsuya KONDO
Depth estimation for a lens-array type light field camera is a challenging problem because of sensor noise and radiometric distortion, a global brightness change among sub-aperture images caused by the vignetting effect of the micro-lenses. We propose a depth map estimation method that is robust against sensor noise and radiometric distortion. Our method first binarizes the sub-aperture images by applying the census transform. Next, the binarized images are matched by computing majority operations between corresponding bits and summing up the Hamming distances. The initial depth obtained by this matching is ambiguous because of the extremely short baselines among the sub-aperture images. After the initial depth estimation, we therefore refine the result with the following refinement steps: we first approximate the initial depth as a set of depth planes, and then optimize the result of the plane fitting with an edge-preserving smoothness term. Experiments show that our method outperforms conventional methods.
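The brightness-robust matching step can be illustrated with a minimal NumPy sketch. The function names, the 3×3 census window, and the pairwise Hamming cost are illustrative assumptions; the majority voting across many sub-aperture images described in the abstract is omitted for brevity.

```python
import numpy as np

def census_transform(img, radius=1):
    """Binarize an image: each pixel becomes a bit vector recording whether
    each neighbor in a (2r+1)x(2r+1) window is brighter than the center.
    Only the relative intensity order matters, so the code is invariant to
    global brightness changes such as radiometric distortion."""
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode='edge')
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            bits.append((shifted > img).astype(np.uint8))
    return np.stack(bits, axis=-1)  # shape (h, w, 8) for radius=1

def hamming_cost(census_a, census_b):
    """Per-pixel Hamming distance between two census-transformed images."""
    return np.sum(census_a != census_b, axis=-1)
```

Because a uniform brightness offset changes neither the center pixel nor its neighbors' relative order, the census codes of an image and its brightness-shifted copy are identical, which is what makes this matching cost robust to the vignetting-induced distortion.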
Soft-thresholding is a sparse modeling method typically applied to wavelet denoising in statistical signal processing. It is also important in machine learning, since it is an essential ingredient of the well-known LASSO (Least Absolute Shrinkage and Selection Operator). Soft-thresholding, and thus LASSO, is known to suffer from a dilemma between sparsity and generalization, caused by excessive shrinkage at a sparse representation. Several methods for mitigating this problem exist in the fields of signal processing and machine learning. In this paper, we extend and analyze a method of scaling soft-thresholding estimators. In a non-parametric orthogonal regression setting that includes the discrete wavelet transform, we introduce a component-wise, data-dependent scaling that is in fact identical to the non-negative garrote. We consider the case where the threshold parameter of soft-thresholding is chosen from the absolute values of the least squares estimates, by which the model selection problem reduces to determining the number of non-zero coefficient estimates. For this case, we first derive the risk and construct SURE (Stein's unbiased risk estimator), which can be used to determine the number of non-zero coefficient estimates. We also analyze some properties of the risk curve and find that our scaling method with the derived SURE can yield a model with lower risk and higher sparsity than naive soft-thresholding with SURE. This theoretical finding is verified by a simple numerical experiment on wavelet denoising.
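The shrinkage dilemma and the garrote-style scaling can be sketched in NumPy using the textbook forms of the two estimators; the exact parameterization used in the paper may differ.

```python
import numpy as np

def soft_threshold(x, lam):
    """Standard soft-thresholding: every surviving coefficient is shrunk
    toward zero by the full amount lam, which causes the excessive-shrinkage
    bias at sparse representations."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def nn_garrote(x, lam):
    """Non-negative garrote: component-wise, data-dependent scaling of the
    least-squares estimate, x * (1 - lam^2 / x^2)_+. Large coefficients are
    shrunk only slightly (scale -> 1), small ones are set to zero, reducing
    the bias of plain soft-thresholding while keeping sparsity."""
    with np.errstate(divide='ignore', invalid='ignore'):
        scale = np.where(x != 0.0, 1.0 - (lam / x) ** 2, 0.0)
    return x * np.maximum(scale, 0.0)
```

For example, with lam = 2 a coefficient of 5 is shrunk to 3 by soft-thresholding but only to 4.2 by the garrote, while coefficients below the threshold are zeroed by both, which illustrates why the scaled estimator can achieve lower risk at the same sparsity level.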
Nguyen Cao QUI Si-Rong HE Chien-Nan Jimmy LIU
As devices continue to shrink, parameter shifts due to process variation and aging effects have an increasing impact on circuit yield and reliability. However, predicting how long a circuit can maintain its yield above the design specification is difficult because the design yield changes during the aging process. Moreover, performing Monte Carlo (MC) simulation iteratively during aging analysis is infeasible. Therefore, most existing approaches ignore the continuity between simulations to obtain high speed, which may cause extrapolation errors to accumulate over time. In this paper, an incremental simulation technique is proposed for lifetime yield analysis to improve the simulation speed while maintaining the analysis accuracy. Because aging is often a gradual process, the proposed incremental technique is effective for reducing the simulation time. For yield analysis with degraded performance, this incremental technique also reduces the simulation time because each sample in the MC analysis is the same circuit with small parameter changes. When the proposed dynamic aging sampling technique is employed, a 50× speedup is obtained with almost no loss of accuracy, which considerably improves the efficiency of lifetime yield analysis.
We propose a method for automatic emphasis estimation using conditional random fields. In our experiments, the F-measure obtained using our proposed method (0.31) was higher than that obtained using a random emphasis method (0.20), a method using TF-IDF (0.21), and a method based on LexRank (0.26). In contrast, the F-measure obtained using our proposed method (0.28) was slightly worse than that obtained using manual estimation (0.26-0.40, with an average of 0.35).
Yuta WAKAYAMA Hidenori TAGA Takehiro TSURITANI
This paper presents an application of low-coherence interferometry for measurement of mode field diameters (MFDs) of a few-mode fiber and shows its performance compared with another method using a mode multiplexer. We found that the presented method could measure MFDs in a few-mode fiber even without any special mode multiplexers.
Lu SUN Mineichi KUDO Keigo KIMURA
Multi-label classification is an appealing and challenging supervised learning problem in which multiple labels, rather than a single label, are associated with an unseen test instance. To remove possible noise in labels and high-dimensional features, multi-label dimension reduction has attracted more and more attention in recent years. Existing methods usually suffer from several problems, such as ignoring label outliers and label correlations. In addition, most of them conduct dimension reduction in either a purely unsupervised or a purely supervised way and are therefore unable to exploit both the label information and the large amount of unlabeled data to improve performance. To cope with these problems, we propose a novel method termed Robust sEmi-supervised multi-lAbel DimEnsion Reduction, READER for short. From the viewpoint of empirical risk minimization, READER selects the most discriminative features for all the labels in a semi-supervised way. Specifically, the ℓ2,1-norm induced loss function and regularization term make READER robust to outliers in the data points. READER finds a feature subspace that keeps originally neighboring instances close, and embeds labels into a low-dimensional latent space nonlinearly. To optimize the objective function, an efficient algorithm with a convergence guarantee is developed. Extensive empirical studies on real-world datasets demonstrate the superior performance of the proposed method.
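For reference, the ℓ2,1-norm of a matrix, the ingredient that gives READER its robustness and row-sparsity, is the sum of the ℓ2 norms of the matrix rows. This is the standard definition; the variable names are illustrative.

```python
import numpy as np

def l21_norm(W):
    """ℓ2,1-norm: the sum of the ℓ2 norms of the rows of W. As a loss it
    down-weights outlier instances compared with a squared loss; as a
    regularizer it drives entire rows to zero, yielding feature selection."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))
```

Because each row contributes its Euclidean length rather than its squared length, a single outlier row grows the objective only linearly, which is the source of the robustness claimed in the abstract.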
Seiji OKAMOTO Kazushige YONENAGA Kengo HORIKOSHI Mitsuteru YOSHIDA Yutaka MIYAMOTO Masahito TOMIZAWA Takeshi OKAMOTO Hidemi NOGUCHI Jun-ichi ABE Junichiro MATSUI Hisao NAKASHIMA Yuichi AKIYAMA Takeshi HOSHIDA Hiroshi ONAKA Kenya SUGIHARA Soichiro KAMETANI Kazuo KUBO Takashi SUGIHARA
We describe a field experiment of flexible modulation format adaptation on a real-time 400Gbit/s/ch DSP-LSI. This real-time DSP-LSI features OSNR estimation, practical simplified back propagation, and high gain soft-decision forward error correction. With these techniques, we have successfully demonstrated modulation format allocation and transmission of 56-channel 400Gbit/s-2SC-PDM-16QAM and 200Gbit/s-2SC-PDM-QPSK signals in 216km and 3246km standard single mode fiber, respectively.
Kangru WANG Lei QU Lili CHEN Jiamao LI Yuzhang GU Dongchen ZHU Xiaolin ZHANG
In this paper, a novel approach is proposed for stereo vision-based ground plane detection at the superpixel level, which is implemented by employing a Disparity Texture Map in a convolutional neural network architecture. In particular, the Disparity Texture Map is calculated with a new Local Disparity Texture Descriptor (LDTD). The experimental results demonstrate superior performance on the KITTI dataset.
Takahiro SUZUKI Keita TAKAHASHI Toshiaki FUJII
Structure tensor analysis on epipolar plane images (EPIs) is a successful approach for estimating disparity from a light field, i.e., a dense set of multi-view images. However, the disparity range allowable for the light field is limited because the estimation becomes less accurate as the range of disparities becomes larger. To overcome this limitation, we developed a new method called sheared EPI analysis, in which EPIs are sheared before the structure tensor analysis. The results of the analysis obtained with different shear values are integrated into a final disparity map through a smoothing process, which is the key idea of our method. In this paper, we closely investigate the performance of sheared EPI analysis and demonstrate the effectiveness of the smoothing process by extensively evaluating the proposed method on 15 datasets that have large disparity ranges.
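The shearing step can be sketched as follows: shifting each row (view) of an EPI in proportion to its row index changes the slope of every line in the EPI, so disparities near the chosen shear value fall into the small-slope range where structure tensor analysis is accurate. This is a simplified integer-shift sketch; the actual implementation would use sub-pixel interpolation, and the names and sign conventions here are assumptions.

```python
import numpy as np

def shear_epi(epi, s):
    """Shear an EPI of shape (num_views, width): shift view row v
    horizontally by round(s * v) pixels. A line of slope d in the original
    EPI then appears with slope d - s, so structures with disparity close
    to s become nearly vertical and easy to analyze."""
    out = np.empty_like(epi)
    for v in range(epi.shape[0]):
        out[v] = np.roll(epi[v], -int(round(s * v)))
    return out
```

Running the analysis on copies sheared with several values of s and fusing the per-shear estimates through the smoothing process is what extends the usable disparity range.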
Terutaka TAMAI Masahiro YAMAKAWA
As connectors are downsized, gold-plated layers become thinner and contact loads lower, which induces a serious degradation of the contact resistance property. For such contacts, corrosion of the contact surface can occur in service environments and at high temperatures such as those of soldering and reflow processes. Base-metal atoms that diffuse from the underlayer, as well as additives, are oxidized, and the contact resistance increases owing to both surface contamination and the low contact load. To resolve these problems, as well as surface wear, the application of contact lubricants is useful and effective. However, the lubricants themselves may degrade at high temperatures such as those of the reflow process. Therefore, in this study, changes in lubricant quality, namely viscosity, weight loss, polymerization, oxidation, and molecular orientation, were clarified. We found that the orientation of the lubricant molecules is an important factor in the increase in contact resistance, whereas the other lubricant factors have little effect on contact resistance.
Go OHTAKE Kazuto OGAWA Goichiro HANAOKA Shota YAMADA Kohei KASAMATSU Takashi YAMAKAWA Hideki IMAI
Attribute-based encryption (ABE) enables flexible data access control based on attributes and policies. In ciphertext-policy ABE (CP-ABE), a secret key is associated with a set of attributes and a policy is associated with a ciphertext. If the set of attributes satisfies the policy, the ciphertext can be decrypted. CP-ABE can be applied to a variety of services such as access control for file sharing systems and content distribution services. However, a CP-ABE scheme usually has larger encryption and decryption costs than conventional public-key encryption schemes due to its flexible policy setting. In particular, wildcards, which indicate that certain attributes are not relevant to the ciphertext policy, are not essential for some services. In this paper, we propose a partially wildcarded CP-ABE scheme with lower encryption and decryption costs. In our scheme, a user's attributes are separated into those requiring wildcards and those not requiring wildcards. Our scheme combines a CP-ABE scheme with wildcard functionality and an efficient CP-ABE scheme without wildcard functionality. We show that our scheme is provably secure under the DBDH assumption. We then compare our scheme with conventional CP-ABE schemes and describe a content distribution service as an application of our scheme. We also implement our scheme on a PC and measure the processing time. The results show that our scheme reduces the costs of key generation, encryption, and decryption.
Shu KONDO Yuto KOBAYASHI Keita TAKAHASHI Toshiaki FUJII
A layered light-field display based on light-field factorization is considered. In the original work, the factorization is formulated under the assumption that the light field is captured with orthographic cameras. In this paper, we introduce a generalized framework for light-field factorization that can handle both the orthographic and perspective camera projection models. With our framework, a light field captured with perspective cameras can be displayed accurately.
Daisuke KURITA Kiichi TATEISHI Atsushi HARADA Yoshihisa KISHIYAMA Takehiro NAKAMURA Stefan PARKVALL Erik DAHLMAN Johan FURUSKOG
This paper presents outdoor field experimental results that clarify the 4-by-4 multiple-input multiple-output (MIMO) throughput performance when applying joint transmission (JT) and distributed MIMO to the 15-GHz frequency band in the downlink of a 5G cellular radio access system. Experimental results for JT in a 100m × 70m large-cell scenario show that a throughput improvement of up to 10% is achieved in most of the area and the peak data rate is improved from 2.8Gbps to 3.7Gbps. Based on analysis of the reference signal received power (RSRP) and channel correlation, we find that the RSRP is improved in lower-RSRP areas and the channel correlation is improved in higher-RSRP areas; both improvements contribute to higher throughput performance. The advantages of distributed MIMO and JT are compared in a 20m × 20m small-cell scenario. A throughput improvement of 70% and throughput exceeding 5Gbps were achieved when applying distributed MIMO owing to the improvement in the channel correlation. When applying JT, the RSRP is improved; however, the channel correlation is not, and as a result there is no improvement in the throughput performance in the area. Finally, the relationship between the transmission point (TP) allocation and the direction of the user equipment (UE) antenna arrangement is investigated. Two TP positions at 90 and 180deg. from each other are shown to be advantageous in terms of throughput performance for different directions of the UE antenna arrangement. Thus, we conclude that JT and distributed MIMO are promising technologies for the 5G radio access system that can compensate for the propagation loss and channel correlation in high frequency bands.
Xiaojia WANG Yazhou CHEN Haojiang WAN Qingxi YANG
In this paper, the effects of the tilt angle of the return stroke channel and of stratified lossy ground on the lightning-induced voltages on overhead lines are studied using the modified transmission-line model with linear current decay with height (MTLL). The results show that the lightning-induced voltages from an oblique discharge channel are larger than those from a vertical discharge channel, and the peak values of the induced voltages increase as the tilt angle increases. When the ground is horizontally stratified, the peak of the induced voltages increases with the conductivity of the lower layer at different distances. When the upper ground conductivity increases, the voltage peak values decrease if the overhead line is near the lightning strike point and increase if the overhead line is far from it. Moreover, at far distances the induced voltages are mainly affected by the conductivity of the lower-layer soil when the conductivity of the upper layer is smaller than that of the lower layer. When the ground is vertically stratified, the induced voltages are mainly affected by the conductivity of the ground near the strike point when the overhead line and the strike point are located above the same medium. If the overhead line and the strike point are located above different media, the conductivities of both parts of the vertically stratified ground influence the peak of the induced voltages, and the conductivity of the ground far from the strike point has the greater impact.
Yuki INOUE Shohei YOSHIOKA Yoshihisa KISHIYAMA Satoshi SUYAMA Yukihiko OKUMURA James KEPLER Mark CUDAK
This paper presents beamforming and beam tracking techniques and downlink performance results from field experiments using a Proof-of-Concept (PoC) system. The PoC implements a 5G mobile radio access system in the millimeter wave band and utilizes beamforming and beam tracking techniques. These techniques are realized with a dielectric lens antenna fed by a switched antenna feeder array. The half-power beamwidth of the antenna is 3°, corresponding to massive MIMO using approximately 1000 antenna elements. The system bandwidth is 1GHz and the center frequency is 73.5GHz. Adaptive modulation and coding using four modulation and coding schemes is implemented. The field experiment is conducted in the following small cell environments: a courtyard, a shopping mall, and a street canyon. The majority of the test area is Line-Of-Sight (LOS); however, the shopping mall course contains 69% Non-LOS (NLOS) conditions. The results show that a maximum throughput of over 2Gbps using rate-7/8 coded 16QAM modulation is achieved in 87%, 34%, and 28% of the respective environments. The beam tracking achieves high availability of coverage and seamless mobility not only in LOS environments but also under NLOS conditions through reflected paths.
Shuhei TANNO Toshihiko NISHIMURA Takeo OHGANE Yasutaka OGAWA
Detecting signals in a very large multiple-input multiple-output (MIMO) system requires high implementation complexity. Thus, belief propagation based detection has been studied recently because of its low complexity. When the transmitted signal sequence is encoded using a channel code decodable by a factor-graph-based algorithm, MIMO signal detection and channel decoding can be combined in a single factor graph. In this paper, a low density parity check (LDPC) coded MIMO system is considered, and two types of factor graphs, bipartite and tripartite, are compared. The former updates the log-likelihood-ratio (LLR) values of MIMO detection and parity checking simultaneously, whereas the latter performs the updates alternately. Simulation results show that the tripartite graph achieves faster convergence and slightly better bit error rate performance. In addition, it is confirmed that LLR damping in LDPC decoding is important for stable convergence.
An energy-efficient nonvolatile FPGA that assures highly reliable backup operation using a self-terminated power-gating scheme is proposed. Since the write current is automatically cut off just after the temporary data in the flip-flop are successfully backed up in the nonvolatile device, the write energy can be minimized with no write failure. Moreover, when the backup operation in a particular cluster is completed, the power supply of the cluster is immediately turned off, which minimizes the standby energy due to leakage current. As a result, the total energy consumption during the backup operation is reduced by 66% in comparison with that of a conventional worst-case-based approach in which a long write current pulse is used to guarantee a reliable write.
Ting-Chou LU Ming-Dou KER Hsiao-Wen ZAN
Process and temperature variations have become a serious concern for ultra-low voltage (ULV) technology. The clock generator is an essential component of ULV very-large-scale integration (VLSI). MOSFETs operated in the sub-threshold region are widely used in ULV technology; however, MOSFETs in the sub-threshold region are highly sensitive to process and temperature variations. In this paper, the effects of process and temperature variations on clock generators are studied. This paper presents an ultra-low voltage 2.4GHz CMOS voltage-controlled oscillator (VCO) with temperature and process compensation. A new all-digital auto-compensation mechanism that reduces process and temperature variation without any laser trimming is proposed. With the compensation circuit, the VCO frequency drift over temperature is improved by a factor of 16.6 compared with the uncompensated design. Furthermore, the circuit also provides low-jitter performance.
Miki HASEYAMA Takahiro OGAWA Sho TAKAHASHI Shuhei NOMURA Masatsugu SHIMOMURA
Biomimetics is a new research field that creates innovation through the collaboration of different existing research fields. However, this collaboration, i.e., the exchange of deep knowledge between different research fields, is difficult for several reasons, such as differences in the technical terms used in different fields. To overcome this problem, we have developed a new retrieval platform, the “Biomimetics image retrieval platform,” using a visualization-based image retrieval technique. A biological database contains a large volume of image data, and by taking advantage of these image data, we are able to overcome the limitations of text-only information retrieval. By realizing a retrieval platform that does not depend on technical terms, individual biological databases of various species can be integrated. This allows not only the use of the data for the study of various species by researchers in different biological fields but also access by a wide range of researchers in fields such as materials science, mechanical engineering, and manufacturing. Our platform thus provides a new path bridging different fields and will contribute to the development of biomimetics, since it overcomes the limitations of traditional retrieval platforms.
Rei UENO Naofumi HOMMA Takafumi AOKI Sumio MORIOKA
This paper presents an automatic hierarchical formal verification method for arithmetic circuits over Galois fields (GFs), which are dedicated digital circuits for GF arithmetic operations used in cryptographic processors. The proposed verification method is based on a combination of a word-level computer algebra procedure with a bit-level PPRM (Positive Polarity Reed-Muller) expansion procedure. While the application of the proposed method is not limited to cryptographic processors, these processors are our main target because complicated implementation techniques, such as field conversions, are frequently used in them for side-channel resistant, compact, and low-power design. In the proposed method, the correctness of the entire datapath is verified at the GF(2^m), or word, level. A datapath implementation is represented hierarchically as a set of components' functional descriptions over GF(2^m) and their wiring connections. We verify that the implementation satisfies a given total-functional specification over GF(2^m) by using an automatic algebraic method based on Gröbner bases and polynomial reduction. Then, to verify whether each component circuit is correctly implemented by a combination of GF(2) operations, i.e., bit-level logic gates, we use our fast PPRM expansion procedure, which is customized for handling large-scale Boolean expressions with many variables. We have applied the proposed method to a complicated AES (Advanced Encryption Standard) circuit with a masking countermeasure against side-channel attacks. The results show that the proposed method can verify such a practical circuit automatically within 4 minutes, whereas conventional verification methods fail to finish even after a day or more.
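Although the paper's PPRM procedure is customized for large-scale expressions, the core PPRM expansion (the algebraic normal form of a Boolean function) can be sketched for a small function with the standard butterfly (Möbius) transform over GF(2). This toy version is for illustration only and is not the paper's optimized implementation.

```python
def pprm_expansion(truth_table):
    """Compute the PPRM (positive-polarity Reed-Muller) coefficients of a
    Boolean function from its truth table of length 2^n, via the in-place
    butterfly transform over GF(2). The coefficient at index m is 1 iff the
    product of the variables selected by the set bits of m appears in the
    XOR-of-ANDs expansion."""
    coeffs = list(truth_table)
    step = 1
    while step < len(coeffs):
        for i in range(0, len(coeffs), 2 * step):
            for j in range(i, i + step):
                coeffs[j + step] ^= coeffs[j]  # XOR lower half into upper half
        step *= 2
    return coeffs
```

For two variables, XOR expands to x0 ⊕ x1, AND to the single monomial x0·x1, and OR to x0 ⊕ x1 ⊕ x0·x1; checking each gate's expansion against its specification is the bit-level counterpart of the word-level Gröbner-basis check.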