Object detection is one of the most important tasks in computer vision, and the use of CNNs for object detection has yielded substantial results in a variety of fields. However, the fixed sampling pattern of standard convolution layers restricts receptive fields to fixed locations and limits the ability of CNNs to model geometric transformations, which leads to poor performance on slender object detection. To achieve better accuracy and efficiency on slender objects, the proposed detector, DFAM-DETR, not only adjusts sampling points adaptively but also enhances the ability to focus on slender-object features and to extract essential information from global to local scales through an attention mechanism. This study uses images of slender objects from the MS-COCO dataset. The experimental results show that DFAM-DETR achieves excellent detection performance on slender objects compared with CNN- and transformer-based detectors.
Teru NAGAMORI Hiroki ITO AprilPyone MAUNGMAUNG Hitoshi KIYA
In this paper, we propose an access control method with a secret key for object detection models for the first time, so that unauthorized users without the secret key cannot benefit from the performance of trained models. The method enables us not only to provide high detection performance to authorized users but also to degrade the performance for unauthorized users. The use of transformed images has been proposed for the access control of image classification models, but such images cannot be used for object detection models due to performance degradation. Accordingly, in this paper, selected feature maps are encrypted with a secret key for training and testing models, instead of input images. In an experiment, the protected models provided authorized users with almost the same performance as non-protected models while remaining robust against unauthorized access without the key.
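The abstract does not specify the encryption operation, so the following is only a minimal sketch assuming the key-based protection is a key-seeded channel permutation of one selected feature map; the function name encrypt_feature_map and the permutation scheme are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch: protect a selected feature map by permuting its channels
# with a permutation derived from a secret key. The same key must be used
# during training and authorized inference; without it, the channels are
# scrambled and performance degrades. (Assumed scheme, not the paper's.)
import numpy as np
import torch

def encrypt_feature_map(x: torch.Tensor, key: int) -> torch.Tensor:
    """Permute channels of a feature map (N, C, H, W) with a secret key."""
    rng = np.random.default_rng(key)               # key seeds the permutation
    perm = torch.from_numpy(rng.permutation(x.shape[1]))
    return x[:, perm, :, :]

# Usage: apply the same key at training time and at authorized test time.
feat = torch.randn(2, 256, 32, 32)
protected = encrypt_feature_map(feat, key=1234)
```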
TongWei LU ShiHai JIA Hao ZHANG
At this stage, research in the field of few-shot image classification (FSC) has made good progress, but there are still many difficulties in the field of few-shot object detection (FSOD). Almost all current FSOD methods face the catastrophic forgetting problem: the accuracy of base-class recognition drops seriously when the model acquires the ability to recognize novel classes. For many methods, accuracy also falls back as the number of classes increases. To address this problem, we propose a new memory-based method called Memorable Faster R-CNN (MemFRCN), which makes the model remember the categories it has already seen. Specifically, we propose a new two-stage object detector consisting of a memory-based classifier (MemCla), a fully connected neural network classifier (FCC), and an adaptive fusion block (AdFus). MemCla stores the embedding vector of each category as memory, which gives the model memory capability and avoids catastrophic forgetting. AdFus fuses the outputs of FCC and MemCla and automatically adjusts the fusion strategy as the number of samples increases, so that the model achieves good performance under various conditions. Our method performs well on unseen classes while maintaining the detection accuracy of seen classes. Experimental results demonstrate that our method outperforms other current methods on multiple benchmarks.
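As an illustration of the memory-based idea, the sketch below stores one prototype embedding per class and scores new features by cosine similarity, then blends those scores with the logits of a standard classifier. The cosine scoring and the fixed-weight fusion are assumptions for illustration; the paper's MemCla and AdFus components are not reproduced exactly.

```python
# Sketch of a memory-based classifier: class prototypes are stored as memory
# and queried by cosine similarity, then fused with an FC classifier's output.
import torch
import torch.nn.functional as F

class MemoryClassifier:
    def __init__(self):
        self.memory = {}                                  # class id -> prototype embedding

    def remember(self, class_id: int, embedding: torch.Tensor):
        self.memory[class_id] = F.normalize(embedding, dim=0)

    def scores(self, feature: torch.Tensor) -> torch.Tensor:
        feature = F.normalize(feature, dim=0)
        return torch.stack([self.memory[c] @ feature for c in sorted(self.memory)])

def fuse(mem_scores: torch.Tensor, fc_logits: torch.Tensor, alpha: float = 0.5):
    # Simple weighted fusion; an adaptive block would adjust alpha automatically.
    return alpha * mem_scores + (1 - alpha) * fc_logits.softmax(dim=-1)
```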
Joanna Kazzandra DUMAGPI Yong-Jin JEONG
Fine-grained image analysis, such as pixel-level approaches, improves threat detection in x-ray security images. In practical settings, the cost of obtaining complete pixel-level annotations increases significantly; this cost can be reduced by partially labeling the dataset. However, handling partially labeled datasets often requires training complicated multi-stage networks. In this paper, we propose a new end-to-end object separation framework that trains a single network on a partially labeled dataset while also alleviating the inherent class imbalance at the data and object-proposal levels. Empirical results demonstrate significant improvement over existing approaches.
Wenhao HUANG Akira TSUGE Yin CHEN Tadashi OKOSHI Jin NAKAZAWA
Crowdedness of buses plays an increasingly important role in the disease control of COVID-19, and the lack of a practical approach to sensing bus crowdedness is a major problem. This paper proposes a bus crowdedness sensing system that exploits deep learning-based object detection to count the numbers of passengers getting on and off a bus and thus estimate its crowdedness in real time. In our prototype system, we combine the YOLOv5s object detection model with a Kalman filter object tracking algorithm to implement a sensing algorithm running on a Jetson Nano-based vehicular device mounted on a bus. Using driving recorder video data taken from real buses, we experimentally evaluate the proposed sensing system and verify that it improves counting accuracy and achieves real-time processing on the Jetson Nano platform.
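The abstract describes detection-plus-tracking counting at a high level; the sketch below shows one plausible realization in which each detected passenger centroid is smoothed by a constant-velocity Kalman filter and a boarding/alighting event is counted when the track crosses a virtual door line. The detector interface, the door-line position, and the counting rule are assumptions, not the paper's exact pipeline.

```python
# Simplified counting sketch: track detected centroids with a constant-velocity
# Kalman filter and count door-line crossings as boarding/alighting events.
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over the state (x, y, vx, vy)."""
    def __init__(self, x: float, y: float):
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01   # process noise
        self.R = np.eye(2) * 1.0    # measurement noise

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def count_crossing(prev_y: float, curr_y: float, door_y: float = 240.0) -> int:
    """Return +1 (getting on) or -1 (getting off) when the door line is crossed."""
    if prev_y < door_y <= curr_y:
        return 1
    if prev_y >= door_y > curr_y:
        return -1
    return 0
```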
This letter studies a biobjective optimization problem in binary associative memories characterized by ternary connection parameters. First, we introduce a condition on the parameters that guarantees storage of any desired memories and suppression of oscillatory behavior. Second, we define a biobjective problem based on two objectives that evaluate the uniform stability of desired memories and the sparsity of connection parameters. Performing precise numerical analysis for typical examples, we clarify the existence of a trade-off between the two objectives.
Suraj Prakash PATTAR Tsubasa HIRAKAWA Takayoshi YAMASHITA Tetsuya SAWANOBORI Hironobu FUJIYOSHI
Predicting the grasping point accurately and quickly is crucial for successful robotic manipulation. However, to commercially deploy a robot, such as a dishwasher robot in a commercial kitchen, we also need to consider the constraints of limited usable resources. We present a deep learning method to predict the grasp position when using a single suction gripper for picking up objects. The proposed method is based on a shallow network to enable lower training costs and efficient inference on limited resources. Costs are further reduced by collecting data in a custom-built synthetic environment. For evaluating the proposed method, we developed a system that models a commercial kitchen for a dishwasher robot to manipulate symmetric objects. We tested our method against a model-fitting method and an algorithm-based method in our developed commercial kitchen environment and found that a shallow network trained with only the synthetic data achieves high accuracy. We also demonstrate the practicality of using a shallow network in sequence with an object detector for ease of training, prediction speed, low computation cost, and easier debugging.
Hongzhe LIU Ningwei WANG Xuewei LI Cheng XU Yaze LI
In the neck part of a two-stage object detection network, feature fusion is generally carried out in either a top-down or bottom-up manner. However, two types of imbalance may exist: feature imbalance in the neck of the model, and gradient imbalance in the region-of-interest extraction layer due to the scale changes of objects. The deeper the network, the more abstract the learned features, i.e., the more semantic information can be extracted; however, less background, spatial location, and other resolution information is retained. In contrast, shallow layers learn little semantic information but preserve abundant spatial location information. We propose the Both Ends to Centre to Multiple Layers (BEtM) feature fusion method to solve the feature imbalance problem in the neck, and a Multi-level Region of Interest Feature Extraction (MRoIE) layer to solve the gradient imbalance problem. In combination with the Region-based Convolutional Neural Network (R-CNN) framework, our Balanced Feature Fusion (BFF) method offers significantly improved network performance compared with the Faster R-CNN architecture. On the MS COCO 2017 dataset, it achieves an average precision (AP) that is 1.9 points and 3.2 points higher than those of the Feature Pyramid Network (FPN) Faster R-CNN framework and the Generic Region of Interest Extractor (GRoIE) framework, respectively.
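To make the multi-level RoI extraction idea concrete, the sketch below pools the same RoI from every FPN level and aggregates the results by summation, in the spirit of GRoIE-style extractors. The strides, the aggregation by summation, and the function name are illustrative assumptions rather than the paper's MRoIE implementation.

```python
# Sketch: pool each RoI from all FPN levels and sum, instead of assigning
# the RoI to a single level.
import torch
from torchvision.ops import roi_align

def multi_level_roi_features(fpn_feats, boxes, out_size=7, strides=(4, 8, 16, 32)):
    """fpn_feats: list of (N, C, H, W) tensors; boxes: (K, 5) tensor with batch index."""
    pooled = [
        roi_align(feat, boxes, output_size=out_size, spatial_scale=1.0 / s,
                  sampling_ratio=2, aligned=True)
        for feat, s in zip(fpn_feats, strides)
    ]
    return torch.stack(pooled, dim=0).sum(dim=0)   # (K, C, out_size, out_size)
```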
Lukas NAKAMURA Hiromitsu AWANO
We propose “Temporal Ensemble SSDLite,” a new method for video object detection that boosts accuracy while maintaining detection speed and energy consumption. Object detection for video is becoming increasingly important as a core part of applications in robotics, autonomous driving, and many other promising fields. Many of these applications require high accuracy and speed to be viable but run in compute- and energy-restricted environments; therefore, new methods that increase the overall performance of video object detection, i.e., accuracy and speed, have to be developed. To increase accuracy we use ensembling, the machine learning technique of combining the predictions of multiple models. The drawback of ensembling is the increased computational cost, which is proportional to the number of models used. We overcome this deficit by deploying our ensemble temporally: we run inference with only a single model per frame, cycling through the models of the ensemble from frame to frame. We then combine the predictions of the last N frames, where N is the number of models in the ensemble, through non-maximum suppression. This is possible because nearby frames in a video are extremely similar due to temporal correlation. As a result, we increase accuracy through the ensemble while running only a single model per frame and therefore keeping the detection speed. To evaluate the proposal, we measure accuracy, detection speed, and energy consumption on the Google Edge TPU, a machine learning inference accelerator, with the ImageNet VID dataset. Our results demonstrate an accuracy boost of up to 4.9% while maintaining real-time detection speed and an energy consumption of 181 mJ per image.
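The following sketch illustrates the temporal-ensemble loop described above: one model of the ensemble runs per frame, and the detections of the last N frames are merged with NMS. Class-agnostic NMS and the assumed model output format (boxes and scores) are simplifications for illustration.

```python
# Sketch of a temporal ensemble: cycle through the models frame by frame,
# buffer the last N predictions, and merge them with non-maximum suppression.
from collections import deque
import torch
from torchvision.ops import nms

class TemporalEnsemble:
    def __init__(self, models, iou_thresh: float = 0.5):
        self.models = models                        # list of detection models
        self.buffer = deque(maxlen=len(models))     # predictions of the last N frames
        self.frame_idx = 0
        self.iou_thresh = iou_thresh

    @torch.no_grad()
    def detect(self, frame):
        model = self.models[self.frame_idx % len(self.models)]   # cycle models
        boxes, scores = model(frame)                # assumed output: (M, 4), (M,)
        self.buffer.append((boxes, scores))
        self.frame_idx += 1
        all_boxes = torch.cat([b for b, _ in self.buffer])
        all_scores = torch.cat([s for _, s in self.buffer])
        keep = nms(all_boxes, all_scores, self.iou_thresh)
        return all_boxes[keep], all_scores[keep]
```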
Manufacturers are coping with increasing pressure on quality, cost, and efficiency as more and more industries move from a traditional setup to an Industry 4.0-based, digitally transformed setup because of its numerous benefits. Within the manufacturing domain, organizational structures and processes are complex; therefore, adopting Industry 4.0 and finding an optimized re-engineered business process is difficult without a systematic methodology. Researchers have developed Business Process Re-engineering (BPR) and Business Process Optimization (BPO) methods, but no consolidated methodology has appeared in the literature that is based on Industry 4.0 and incorporates both BPR and BPO. We present a consolidated and systematic re-engineering and optimization framework for a manufacturing industry setup. The proposed framework performs evolutionary multi-objective combinatorial optimization using a Multi-Objective Genetic Algorithm (MOGA). An example process from an aircraft manufacturing factory is optimized and re-engineered with an available set of Industry 4.0 technologies based on the criteria of lower cost, reduced processing time, and reduced error rate. Finally, to validate the proposed framework, Business Process Model and Notation (BPMN), a widely used standard for business process specification, is used to simulate and compare the AS-IS and TO-BE processes. The proposed framework can be used to convert an industry from a traditional setup to Industry 4.0, resulting in cost reduction and increased performance and quality.
Pengfei LV Xiaosheng YU Jianning CHI Chengdong WU
A robust saliency detection approach for images with a complex background is proposed. An absorbing Markov chain integrating low-level, mid-level, and high-level cues dynamically evolves by using the similarity between pixels to detect salient objects. The experimental results show that the proposed algorithm has advantages in saliency detection, especially for images with a chaotic background or low contrast.
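For readers unfamiliar with absorbing-Markov-chain saliency, the sketch below shows the standard formulation on which such methods build: boundary nodes are treated as absorbing states and the expected absorption time of each transient node is used as its saliency score. The dynamic evolution and the multi-cue integration of the proposed method are not reproduced here.

```python
# Standard absorbing-Markov-chain saliency: saliency of a transient node is
# its expected number of steps before absorption, y = (I - Q)^{-1} * 1.
import numpy as np

def absorbed_time_saliency(affinity: np.ndarray, absorbing: np.ndarray) -> np.ndarray:
    """affinity: (n, n) pairwise similarity; absorbing: boolean mask of absorbing nodes."""
    P = affinity / affinity.sum(axis=1, keepdims=True)   # row-stochastic transitions
    transient = ~absorbing
    Q = P[np.ix_(transient, transient)]                   # transitions among transient nodes
    n_t = Q.shape[0]
    y = np.linalg.solve(np.eye(n_t) - Q, np.ones(n_t))    # expected absorption time
    return (y - y.min()) / (y.max() - y.min() + 1e-12)    # normalized saliency scores
```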
Rubin ZHAO Xiaolong ZHENG Zhihua YING Lingyan FAN
Most existing object detection methods and text detection methods are designed to detect either text or objects. In scenarios where the task is to find the target word pointed at by an object, the results of existing methods are far from satisfactory. However, such scenarios occur often in human-computer interaction, when the computer needs to figure out which word the user is pointing at. Compared with object detection, pointed-at word localization (PAWL) requires higher accuracy, especially in dense text scenarios. Moreover, in printed documents, characters are much smaller than those in scene text detection datasets such as ICDAR-2013, ICDAR-2015, and ICPR-2018. To address these problems, the authors propose a novel target word localization network (TWLN) to detect the pointed-at word in printed documents. In this work, a single deep neural network is trained to extract the features of markers and text sequentially. For each image, the location of the marker is predicted first; according to this prediction, a smaller image is cropped from the original image and fed into the same network, which then predicts the location of the pointed-at word. To train and test the networks, an efficient approach is proposed to generate the dataset from PDF documents by inserting markers pointing at words in the documents, which avoids laborious labeling work. Experiments on the proposed dataset demonstrate that TWLN outperforms the compared object detection method and optical character recognition method on every category of targets, especially when the target is a single character that occupies only several pixels in the image. TWLN is also tested with real photographs, and the accuracy shows no significant difference, which confirms the validity of the generation method used to construct the dataset.
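The two-pass inference described above can be sketched as follows: the same network first localizes the marker on the full page, then a crop around the marker is fed back to the network to localize the pointed-at word. The crop size and the method names predict_marker and predict_word are illustrative assumptions about the network interface.

```python
# Coarse-to-fine sketch of pointed-at word localization with a shared network.
def locate_pointed_word(network, image, crop_size: int = 512):
    # Pass 1: find the marker on the full image (box as [x0, y0, x1, y1]).
    marker_box = network.predict_marker(image)
    cx = (marker_box[0] + marker_box[2]) // 2
    cy = (marker_box[1] + marker_box[3]) // 2

    # Crop a smaller region centred on the marker.
    x0, y0 = max(0, cx - crop_size // 2), max(0, cy - crop_size // 2)
    crop = image[y0:y0 + crop_size, x0:x0 + crop_size]

    # Pass 2: run the same network on the crop to find the pointed-at word,
    # then map the local box back to full-image coordinates.
    word_box = network.predict_word(crop)
    return [word_box[0] + x0, word_box[1] + y0, word_box[2] + x0, word_box[3] + y0]
```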
Takahisa YAMAMOTO Shiki TAKEUCHI Atsushi NAKAZAWA
Visual sentiment analysis has many applications, including image captioning, opinion mining, and advertisement; however, it is still a difficult problem, and existing algorithms cannot produce satisfactory results. One of the difficulties in classifying images into emotions is that visual sentiments are evoked by different types of information: visual information, such as colors or textures, and semantic information, such as the types of objects evoking emotions and/or their combinations. In contrast to existing methods that use only visual information, this paper presents a novel algorithm for image emotion recognition that uses both types of information simultaneously. For semantic features, we introduce an object vector and a word vector. The object vector is created by an object detection method and reflects the objects present in an image. The word vector is created by transforming the names of detected objects through a word embedding model; this vector is similar among objects that are semantically similar. These semantic features and a visual feature produced by a fine-tuned convolutional neural network (CNN) are concatenated, and classification is performed on the concatenated feature vector. Extensive evaluation experiments using emotional image datasets show that our method achieves the best accuracy on all but one dataset compared with existing methods, with an improvement of up to 4.54%.
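A minimal sketch of the feature fusion just described: an object vector from a detector, a word-embedding vector of the detected object names, and a CNN visual feature are concatenated and fed to a classifier. The dimensions (80 object classes, 300-d word embeddings, 2048-d visual feature, 8 emotion classes) are assumptions for illustration.

```python
# Sketch: concatenate object vector, word vector, and CNN visual feature,
# then classify emotions from the fused representation.
import torch
import torch.nn as nn

class SentimentHead(nn.Module):
    def __init__(self, num_objects=80, word_dim=300, visual_dim=2048, num_emotions=8):
        super().__init__()
        self.fc = nn.Linear(num_objects + word_dim + visual_dim, num_emotions)

    def forward(self, object_vector, word_vector, visual_feature):
        fused = torch.cat([object_vector, word_vector, visual_feature], dim=-1)
        return self.fc(fused)   # emotion logits from the concatenated feature
```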
Jiafeng MAO Qing YU Kiyoharu AIZAWA
A well-annotated dataset is crucial for training object detectors. However, producing finely annotated datasets for object detection tasks is extremely labor-intensive; therefore, crowdsourcing is often used to create datasets, which then tend to contain incorrect annotations such as inaccurate localization bounding boxes. In this study, we highlight the problem of object detection with noisy bounding box annotations and show that these noisy annotations are harmful to the performance of deep neural networks. To solve this problem, we further propose a framework that allows the network to correct the noisy annotations through alternating refinement. The experimental results demonstrate that our proposed framework can significantly alleviate the influence of noise on model performance.
Anis Ur REHMAN Ken KIHARA Sakuichi OHTSUKA
In everyday life, people often pay attention to several objects that change positions while being observed. In the laboratory, this process is investigated with a task known as multiple object tracking (MOT), which evaluates attentive tracking performance. Recent findings suggest that the attentional set for multiple moving objects whose depth changes in three dimensions from one plane to another is influenced by the initial configuration of the objects. When tracking objects, it is difficult for people to expand their attentional set to multiple depth planes once attention has been focused on a single plane. However, less is known about people contracting their attentional set from multiple depth planes to a single depth plane. In two experiments, we examined tracking accuracy when four targets or four distractors, which were initially distributed on two planes, came together on one of the planes during an MOT task. The results suggest that people have difficulty changing the depth range of their attention during attentive tracking, and that attentive tracking performance depends on the initial attentional set based on the configuration prior to tracking.
Yi LIU Wei QIN Jinhui ZHANG Mengmeng LI Qibin ZHENG Jichuan WANG
Multi-objective evolutionary algorithms are widely used in many engineering optimization problems and artificial intelligence applications. The ant lion optimizer is an outstanding evolutionary method, but two issues need to be solved to extend it to the multi-objective optimization field: how to update the Pareto archive, and how to choose the elite and ant lions from the archive. We develop a novel multi-objective variant of the ant lion optimizer in this paper. A new measure combining the Pareto dominance relation and distance information of individuals is put forward to tackle the first issue, and the concept of a time weight is developed to handle the second. In addition, a mutation operation is applied to solutions in the middle part of the archive to further improve performance. Eleven benchmark functions, four other algorithms, and four indicators are used to evaluate the new method. The results show that the proposed algorithm has better performance and lower time complexity.
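To illustrate the kind of archive-ranking measure described above, the sketch below scores each archive member by how many other members it dominates plus a normalized nearest-neighbour distance that favours spread. The exact combination rule and the time-weight mechanism of the proposed method are not reproduced; this is only an assumed illustration.

```python
# Sketch: rank Pareto-archive members by dominance count plus crowding distance.
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if objective vector a Pareto-dominates b (minimization assumed)."""
    return bool(np.all(a <= b) and np.any(a < b))

def archive_measure(objs: np.ndarray) -> np.ndarray:
    """objs: (n, m) objective values of archive members; higher score is better."""
    n = objs.shape[0]
    dom_count = np.array([sum(dominates(objs[i], objs[j]) for j in range(n) if j != i)
                          for i in range(n)], dtype=float)
    dists = np.linalg.norm(objs[:, None, :] - objs[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    crowding = dists.min(axis=1)                 # distance to nearest neighbour
    return dom_count + crowding / (crowding.max() + 1e-12)
```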
Misaki SHIKAKURA Yusuke KAMEDA Takayuki HAMAMOTO
This paper reports the evolution and application potential of image sensors with high-speed brightness gradient sensors. We propose an adaptive exposure time control method using the apparent motion estimated by this sensor and evaluate the results for changes in illuminance and for global/local motion.
Yih-Cherng LEE Hung-Wei HSU Jian-Jiun DING Wen HOU Lien-Shiang CHOU Ronald Y. CHANG
Automatic tracking and classification are essential for studying the behaviors of wild animals. Dynamic far-shot photos, occlusion, protective coloration, and background noise are irregular sources of interference when designing a computerized algorithm to reduce human labeling effort. Moreover, wild dolphin images are hard to acquire: on-the-spot investigations involve long waiting times, and it is difficult to set up a fixed camera to automatically monitor dolphins on the ocean over several days. It is challenging to detect and classify dolphins in such noisy photos with a single well-known deep learning method and a small dataset. Therefore, in this study, we propose a generic Cascade Small Object Detection (CSOD) algorithm for dolphin detection to handle small-object problems, and develop visualization-to-backbone-based classification (V2BC) for removing noise, highlighting dolphin features, and classifying the identity of each dolphin. The architecture of CSOD consists of a P-net and an F-net. The P-net uses a coarse YOLOv3 detector as its core network to predict all regions of interest (ROIs) on lower-resolution images. Then the F-net, which is more robust, captures the ROIs from high-resolution photos to overcome the limitations of a single detector. Moreover, the V2BC method extracts significant regions of occluded dolphins and applies post-processing that references the dolphin's backbone to facilitate classification. All experiments show that the proposed algorithm based on CSOD and V2BC outperforms state-of-the-art methods, including Faster R-CNN and YOLOv3 for detection and AlexNet, VGG, and ResNet for classification. Compared with related classification methods, the accuracy of the proposed approach is over 14% higher, and the proposed CSOD detection system performs 42% better than the original YOLOv3 architecture.
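The two-stage cascade can be sketched as follows: the P-net detects candidate ROIs on a downscaled image, the ROIs are mapped back to full-resolution coordinates, and the F-net re-detects within each high-resolution crop. The detector interfaces (p_net.detect, f_net.detect) and the downscale factor are assumptions for illustration.

```python
# Simplified coarse-to-fine cascade sketch in the spirit of CSOD.
def cascade_detect(p_net, f_net, image, downscale: int = 4):
    small = image[::downscale, ::downscale]          # coarse, low-resolution pass
    rois = p_net.detect(small)                       # list of [x0, y0, x1, y1]

    detections = []
    for x0, y0, x1, y1 in rois:
        # Map the coarse ROI back to full-resolution coordinates.
        X0, Y0, X1, Y1 = (int(c * downscale) for c in (x0, y0, x1, y1))
        crop = image[Y0:Y1, X0:X1]
        # Fine pass: re-detect inside the high-resolution crop.
        for bx0, by0, bx1, by1, score in f_net.detect(crop):
            detections.append([bx0 + X0, by0 + Y0, bx1 + X0, by1 + Y0, score])
    return detections
```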
Kohei NAKAI Takashi MATSUBARA Kuniaki UEHARA
The recent development of neural architecture search (NAS) has enabled us to automatically discover high-performance neural network architectures within a few days. Convolutional neural networks extract fruitful features by repeatedly applying standard operations (convolutions and poolings); however, these operations also extract useless or even disturbing features. Attention mechanisms enable neural networks to discard information of no interest and have achieved state-of-the-art performance. While a variety of attention mechanisms for CNNs have been proposed, current NAS methods have paid little attention to them. In this study, we propose a novel NAS method that searches for attention mechanisms as well as operations. We examined several patterns for arranging attentions and operations and found that attentions work best when they have their own search space and follow operations. We demonstrate the superior performance of our method in experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets. The discovered architecture achieves lower classification error rates and requires fewer parameters than those found by current NAS methods.
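The "attention follows operation" pattern reported above can be illustrated with a searchable pair that applies a standard operation and then an attention module drawn from its own candidate set; a squeeze-and-excitation block is used here as one illustrative attention choice, not as the paper's specific search space.

```python
# Sketch of a searchable cell edge: an operation followed by an attention module.
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    """Squeeze-and-excitation block as one example attention candidate."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)          # re-weight channels by global context

class OpThenAttention(nn.Module):
    """Candidate edge: a standard operation followed by an attention module."""
    def __init__(self, operation: nn.Module, attention: nn.Module):
        super().__init__()
        self.op, self.attn = operation, attention

    def forward(self, x):
        return self.attn(self.op(x))

edge = OpThenAttention(nn.Conv2d(64, 64, 3, padding=1), SEAttention(64))
```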
Ryutaro FUJIKAWA Tomoyuki TOGAWA Toshimichi SAITO
This paper studies a novel approach to the analysis of switched dynamical systems from the perspectives of bifurcation and multiobjective optimization. As a first step, we analyze a simple switched dynamical system based on a boost converter with photovoltaic input. First, from a bifurcation perspective, we consider period-doubling bifurcation sets in the parameter space. Second, from a multiobjective optimization perspective, we consider a trade-off between maximum input power and stability; the trade-off is represented by a Pareto front in the objective space. Performing numerical experiments, we investigate the relationship between the bifurcation sets and the Pareto front.