
Keyword Search Result

[Keyword] multimodal (33 hits)

Showing 1-20 of 33 hits

  • MISpeller: Multimodal Information Enhancement for Chinese Spelling Correction Open Access

    Jiakai LI  Jianyong DUAN  Hao WANG  Li HE  Qing ZHANG  

    PAPER-Natural Language Processing
    Publicized: 2024/06/07  Vol: E107-D No:10  Page(s): 1342-1352

    Chinese spelling correction is a foundational natural language processing task that aims to detect and correct spelling errors in text. Most Chinese spelling correctors use multimodal information to model the relationship between incorrect and correct characters. However, because the features come from different sources, mismatches occur during fusion: the relative importance of the modalities is ignored, which restricts the model from learning efficiently. To this end, this paper proposes MISpeller, a Chinese spelling corrector based on a multimodal language model. Built on ChineseBERT as the base model, the method captures and fuses character semantic, phonetic, and graphic information in a single model without constructing additional neural networks, and accounts for the unequal contributions of the different features during fusion. In addition, to curb overcorrection, a replication mechanism is introduced whose replication factor serves as a dynamic weight for fusing the multimodal information. The model can thus control the proportion of original and predicted characters according to the input text and learn more precisely where errors occur. Experiments on the SIGHAN benchmarks show that the proposed model surpasses state-of-the-art correction-level F1 scores by an average of 4.36%, validating its effectiveness.
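
    A minimal sketch of the replication idea described above, assuming a PyTorch setting; the shapes, the sigmoid gate, and the helper names are illustrative assumptions, not the authors' code. The replication factor w blends a "keep the original character" distribution with the corrector's predicted distribution.

    ```python
    import torch
    import torch.nn.functional as F

    def replication_blend(fused_repr, gen_logits, input_ids, vocab_size, w_proj):
        """fused_repr: (B, L, H) fused semantic/phonetic/graphic features;
        gen_logits: (B, L, V) generator logits; input_ids: (B, L)."""
        w = torch.sigmoid(w_proj(fused_repr))                 # (B, L, 1) replication factor
        gen_dist = F.softmax(gen_logits, dim=-1)              # predicted characters
        copy_dist = F.one_hot(input_ids, vocab_size).float()  # keep the original character
        return w * copy_dist + (1.0 - w) * gen_dist           # rows still sum to 1

    B, L, H, V = 2, 8, 16, 100
    w_proj = torch.nn.Linear(H, 1)
    out = replication_blend(torch.randn(B, L, H), torch.randn(B, L, V),
                            torch.randint(0, V, (B, L)), V, w_proj)
    print(out.shape)  # torch.Size([2, 8, 100])
    ```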

  • REM-CiM: Attentional RGB-Event Fusion Multi-Modal Analog CiM for Area/Energy-Efficient Edge Object Detection during Both Day and Night Open Access

    Yuya ICHIKAWA  Ayumu YAMADA  Naoko MISAWA  Chihiro MATSUI  Ken TAKEUCHI  

    PAPER
    Publicized: 2024/04/09  Vol: E107-C No:10  Page(s): 426-435

    Integrating RGB and event sensors improves object detection accuracy, especially at night, owing to the high dynamic range of event cameras. However, adding an event sensor increases the required computational resources, which makes it difficult to implement RGB-event fusion multi-modal AI on Computation-in-Memory (CiM). To tackle this issue, this paper proposes REM-CiM, an RGB-Event fusion Multi-modal analog CiM for edge object detection AI, in which the multi-modal algorithm and the circuit implementation are co-designed through two proposals. First, a Memory-capacity-Efficient Attentional Feature Pyramid Network (MEA-FPN) is proposed as a parameter-efficient model architecture for RGB-event fusion on analog CiM. Its Convolution-less Bi-Directional Calibration (C-BDC) extracts the important features of each modality with attention modules while reducing the number of weight parameters by removing the large convolutional operations of conventional BDC. The proposed MEA-FPN with C-BDC achieves a 76% parameter reduction while keeping the mean Average Precision (mAP) degradation below 2.3% during both day and night, compared with Attentional FPN fusion (A-FPN), a conventional BDC-adopted FPN fusion. Second, low-bit quantization with clipping (LQC) is proposed to reduce area and energy. The proposed REM-CiM with MEA-FPN and LQC requires almost the same number of memory cells as a conventional FPN-fusion CiM without LQC, with 21% less ADC area, 24% less ADC energy, and 0.17% higher mAP.
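
    A toy sketch of low-bit quantization with clipping in the spirit of the LQC step; the clipping rule (a percentile of |w|) and the bit width are assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    def quantize_with_clipping(w, bits=3, clip=None):
        if clip is None:
            clip = np.percentile(np.abs(w), 99.0)  # assumed clip threshold
        levels = 2 ** bits - 1                     # quantization steps
        w_c = np.clip(w, -clip, clip)
        step = 2 * clip / levels
        q = np.round((w_c + clip) / step)          # integer code in [0, levels]
        return q * step - clip                     # back to weight scale

    w = np.random.randn(1000) * 0.5
    wq = quantize_with_clipping(w, bits=3)
    print(np.unique(wq).size, "levels")            # at most 2**bits values
    ```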

  • PSDSpell: Pre-Training with Self-Distillation Learning for Chinese Spelling Correction Open Access

    Li HE  Xiaowu ZHANG  Jianyong DUAN  Hao WANG  Xin LI  Liang ZHAO  

    PAPER
    Publicized: 2023/10/25  Vol: E107-D No:4  Page(s): 495-504

    Chinese spelling correction (CSC) models detect and correct typos based on the misspelled character and its context. Recently, BERT-based models have dominated CSC research. However, these methods focus only on the semantic information of the text during pretraining and neglect learning how to correct spelling errors. Moreover, when a text contains multiple incorrect characters, the context introduces noise, making it difficult for the model to accurately locate the errors and leading to false corrections. To address these limitations, we apply the multimodal pre-trained language model ChineseBERT to spelling correction. We propose a pretraining strategy based on self-distillation learning, in which a confusion set is used to construct text containing erroneous characters, allowing the model to jointly learn to understand language and to correct spelling errors. Additionally, we introduce a single-channel masking mechanism to mitigate the noise caused by incorrect characters: it masks the semantic encoding channel while preserving the phonetic and glyph encoding channels, reducing the noise that incorrect characters introduce during prediction. Finally, experiments on widely used benchmarks show that our model outperforms state-of-the-art methods by a notable margin.
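
    A hedged sketch of the single-channel masking mechanism as described: the semantic (character) channel is masked at suspected error positions while the phonetic and glyph channels pass through. Tensor shapes and the concatenation-style fusion are illustrative assumptions.

    ```python
    import torch

    def single_channel_mask(char_emb, pinyin_emb, glyph_emb, mask_pos, mask_emb):
        """char_emb, pinyin_emb, glyph_emb: (B, L, H); mask_pos: (B, L) bool."""
        m = mask_pos.unsqueeze(-1)                                   # (B, L, 1)
        char_emb = torch.where(m, mask_emb.expand_as(char_emb), char_emb)
        return torch.cat([char_emb, pinyin_emb, glyph_emb], dim=-1)  # fused input

    B, L, H = 2, 6, 8
    fused = single_channel_mask(torch.randn(B, L, H), torch.randn(B, L, H),
                                torch.randn(B, L, H), torch.rand(B, L) > 0.7,
                                torch.zeros(H))
    print(fused.shape)  # torch.Size([2, 6, 24])
    ```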

  • Multimodal Named Entity Recognition with Bottleneck Fusion and Contrastive Learning

    Peng WANG  Xiaohang CHEN  Ziyu SHANG  Wenjun KE  

    PAPER-Natural Language Processing
    Publicized: 2023/01/18  Vol: E106-D No:4  Page(s): 545-555

    Multimodal named entity recognition (MNER) is the task of recognizing named entities in a multimodal context. Existing methods focus on co-attention mechanisms to discover relationships between modalities, but they have two deficiencies. First, they fail to fuse the multimodal representations in a fine-grained way, which may introduce noise from the visual modality. Second, they do not bridge the semantic gap between heterogeneous modalities. To solve these issues, we propose a novel MNER method with bottleneck fusion and contrastive learning (BFCL). Specifically, we first incorporate a transformer-based bottleneck fusion mechanism, so that information can be exchanged between modalities only through a few bottleneck tokens, reducing noise propagation. We then propose two decoupled image-text contrastive losses to align the unimodal representations, pulling the representations of semantically similar pairs closer while pushing semantically different ones farther apart. Experimental results demonstrate that our method is competitive with state-of-the-art models, achieving F1-scores of 74.54% and 85.70% on the Twitter-2015 and Twitter-2017 datasets, respectively.
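
    A minimal sketch of fusion through bottleneck tokens, the mechanism the abstract adopts: each modality attends only to itself plus a few shared bottleneck tokens, so cross-modal information flows through that narrow channel. Layer sizes and the single-layer structure are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class BottleneckFusion(nn.Module):
        def __init__(self, dim=64, n_bottleneck=4, n_heads=4):
            super().__init__()
            self.bottleneck = nn.Parameter(torch.randn(1, n_bottleneck, dim))
            self.text_layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
            self.img_layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)

        def forward(self, text, image):
            nb = self.bottleneck.size(1)
            z = self.bottleneck.expand(text.size(0), -1, -1)
            t = self.text_layer(torch.cat([text, z], dim=1))   # text + bottleneck
            text, z = t[:, :-nb], t[:, -nb:]                   # updated bottleneck
            v = self.img_layer(torch.cat([image, z], dim=1))   # image + bottleneck
            return text, v[:, :-nb]

    f = BottleneckFusion()
    t, v = f(torch.randn(2, 10, 64), torch.randn(2, 49, 64))
    print(t.shape, v.shape)  # (2, 10, 64) (2, 49, 64)
    ```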

  • DeepSIP: A System for Predicting Service Impact of Network Failure by Temporal Multimodal CNN

    Yoichi MATSUO  Tatsuaki KIMURA  Ken NISHIMATSU  

    PAPER-Network Management/Operation
    Publicized: 2021/04/01  Vol: E104-B No:10  Page(s): 1288-1298

    When a failure occurs in a network element such as a switch, router, or server, network operators need to recognize the service impact, such as the time to recovery from the failure or the severity of the failure, since this information is essential for handling failures. In this paper, we propose DeepSIP, a Deep learning based Service Impact Prediction system, which predicts the service impact of a network element failure using a temporal multimodal convolutional neural network (CNN). More precisely, DeepSIP predicts the time to recovery from the failure and the loss of traffic volume due to the failure on the basis of syslog messages and traffic volume. Since the time to recovery is useful for service level agreements (SLAs) and the loss of traffic volume is directly related to the severity of the failure, we regard these two quantities as the service impact. The service impact is challenging to predict, since it depends on the type of network failure and on the traffic volume when the failure occurs; moreover, network elements do not explicitly contain any information about the service impact. To extract the type of network failure and predict the service impact, we use syslog messages and past traffic volume. These data are themselves challenging to analyze, because they are multimodal, strongly correlated, and have temporal dependencies. To extract useful features for prediction, we develop a temporal multimodal CNN. We experimentally evaluated the accuracy of DeepSIP against other NN-based methods using synthetic and real datasets; for both, DeepSIP outperformed the baselines.
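
    A schematic sketch (not the authors' architecture) of a temporal multimodal CNN in this setting: one 1-D convolutional branch per modality, pooled over time, concatenated, and regressed onto the two impact targets.

    ```python
    import torch
    import torch.nn as nn

    class TemporalMultimodalCNN(nn.Module):
        def __init__(self, syslog_dim=32, hidden=16):
            super().__init__()
            self.syslog_conv = nn.Conv1d(syslog_dim, hidden, kernel_size=3, padding=1)
            self.traffic_conv = nn.Conv1d(1, hidden, kernel_size=3, padding=1)
            self.head = nn.Linear(2 * hidden, 2)  # time to recovery, traffic loss

        def forward(self, syslog, traffic):
            """syslog: (B, syslog_dim, T) encoded messages; traffic: (B, 1, T)."""
            s = torch.relu(self.syslog_conv(syslog)).mean(dim=2)   # pool over time
            t = torch.relu(self.traffic_conv(traffic)).mean(dim=2)
            return self.head(torch.cat([s, t], dim=1))

    model = TemporalMultimodalCNN()
    print(model(torch.randn(4, 32, 60), torch.randn(4, 1, 60)).shape)  # (4, 2)
    ```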

  • Multimodal Analytics to Understand Self-Regulation Process of Cognitive and Behavioral Strategies in Real-World Learning

    Masaya OKADA  Yasutaka KUROKI  Masahiro TADA  

    PAPER-Human-computer Interaction
    Publicized: 2020/02/05  Vol: E103-D No:5  Page(s): 1039-1054

    Recent studies suggest that learning “how to learn” is important because learners must be self-regulated, taking more responsibility for their own learning processes, meta-cognitive control, and other generative learning thoughts and behaviors. The mechanisms that enable a learner to self-regulate learning strategies have been actively studied in classroom settings, but seldom in real-world learning in out-of-school settings (e.g., environmental learning in nature). A feature of real-world learning is that a learner's cognition of the world is updated by his/her behavior in investigating the world, and vice versa. This paper models the mechanism of real-world learning by which a learner executes and self-regulates cognitive and behavioral strategies to self-organize his/her internal knowledge space. Furthermore, this paper proposes multimodal analytics that integrate heterogeneous data on the cognitive and behavioral features of real-world learning, structure and archive the time series of strategies occurring through learner-environment interactions, and assess how learning should be self-regulated for a better understanding of the world. Our analysis showed that (1) intellectual achievements are built by self-regulating learning so as to chain the execution of cognitive and behavioral strategies, and (2) a clue for predicting learning outcomes is the quantity and frequency of the strategies that a learner uses and self-regulates. Assessment based on these findings can encourage a learner to reflect on and improve his/her way of learning in the world.

  • Cross-Domain Deep Feature Combination for Bird Species Classification with Audio-Visual Data

    Naranchimeg BOLD  Chao ZHANG  Takuya AKASHI  

    PAPER-Multimedia Pattern Processing
    Publicized: 2019/06/27  Vol: E102-D No:10  Page(s): 2033-2042

    Over the past decade, many state-of-the-art algorithms for image and audio classification have achieved notable success with the development of deep convolutional neural networks (CNNs). However, most of this work exploits only a single type of training data. In this paper, we present a study on classifying bird species by combining visual (image) and audio (sound) data with CNNs, a combination that has been treated only sparsely so far. Specifically, we propose CNN-based multimodal learning models with three fusion strategies (early, middle, and late) to address the issue of combining training data across domains. The advantage of our method lies in the fact that CNNs can be used not only to extract features from image and audio data (spectrograms) but also to combine those features across modalities. In the experiment, we train and evaluate the networks on the comprehensive CUB-200-2011 standard dataset combined with our originally collected audio dataset, matched by species. We observe that a model which uses both types of data outperforms models trained with either type alone. We also show that transfer learning can significantly increase classification performance.
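
    Of the three fusion strategies named above, late fusion is the simplest to sketch: the per-modality class probabilities are combined after each CNN has made its prediction. The weighting and the synthetic probabilities below are placeholders.

    ```python
    import numpy as np

    def late_fusion(p_image, p_audio, alpha=0.5):
        """p_image, p_audio: (N, C) class probabilities from each CNN."""
        return alpha * p_image + (1 - alpha) * p_audio

    rng = np.random.default_rng(0)
    p_img = rng.dirichlet(np.ones(200), size=5)   # 200 bird species, 5 samples
    p_aud = rng.dirichlet(np.ones(200), size=5)
    print(late_fusion(p_img, p_aud).argmax(axis=1))  # fused species predictions
    ```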

  • Construction of Spontaneous Emotion Corpus from Indonesian TV Talk Shows and Its Application on Multimodal Emotion Recognition

    Nurul LUBIS  Dessi LESTARI  Sakriani SAKTI  Ayu PURWARIANTI  Satoshi NAKAMURA  

    PAPER-Speech and Hearing
    Publicized: 2018/05/10  Vol: E101-D No:8  Page(s): 2092-2100

    As interaction between humans and computers develops toward the most natural form possible, it becomes increasingly important to incorporate emotion. This paper describes a step toward extending emotion recognition research to Indonesian; the field continues to develop, yet exploration of the subject in Indonesian is still lacking. In particular, this paper highlights two contributions: (1) the construction of the first emotional audio-visual database in Indonesian, and (2) the first multimodal emotion recognizer in Indonesian, built from that corpus. In constructing the corpus, we aim at natural emotions corresponding to real-life occurrences. However, collecting emotional corpora is notably labor-intensive and expensive. To reduce the cost, we collect the emotional data from recordings of television programs, eliminating the need for an elaborate recording setup and experienced participants. In particular, we choose television talk shows for their naturally conversational content, which yields spontaneous emotion. To cover a broad range of emotions, we collected three episodes in different genres: politics, humanity, and entertainment. We report analyses of the data and annotations. The corpus serves as a foundation for further research on emotion. In the experiments, we employ the support vector machine (SVM) algorithm to model the emotions in the collected data, performing multimodal emotion recognition from the predictions of three modalities: acoustic, semantic, and visual. Compared to the unimodal results, the multimodal feature combination attains identical accuracy for arousal, at 92.6%, and a significant improvement for the valence classification task, at 93.8%. We hope to continue this work toward a finer-grained, more precise quantification of emotion.
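
    An illustrative sketch of the final combination step: the predictions of the acoustic, semantic, and visual modalities form the input to an SVM classifier. The synthetic scores and the binary valence label are stand-ins for the corpus data.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 200
    X = rng.normal(size=(n, 3))   # [acoustic, semantic, visual] unimodal scores
    y = (X.sum(axis=1) + 0.3 * rng.normal(size=n) > 0).astype(int)  # valence

    clf = SVC(kernel="rbf").fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))
    ```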

  • Relation Prediction in Multilingual Data Based on Multimodal Relational Topic Models

    Yosuke SAKATA  Koji EGUCHI  

    PAPER
    Publicized: 2017/01/17  Vol: E100-D No:4  Page(s): 741-749

    There are increasing demands for improved analysis of multimodal data that consist of multiple representations, such as multilingual documents and text-annotated images. One promising approach for analyzing such multimodal data is latent topic models. In this paper, we propose conditionally independent generalized relational topic models (CI-gRTM) for predicting unknown relations across different representations of multimodal data. We developed CI-gRTM as a multimodal extension of the discriminative generalized relational topic models (gRTM). We demonstrate through experiments with multilingual documents that CI-gRTM predicts both multilingual representations and relations between two different language representations more effectively than several state-of-the-art baseline models that can predict only multilingual representations or only unimodal relations.

  • Multimodal Learning of Geometry-Preserving Binary Codes for Semantic Image Retrieval Open Access

    Go IRIE  Hiroyuki ARAI  Yukinobu TANIGUCHI  

    INVITED PAPER
    Publicized: 2017/01/06  Vol: E100-D No:4  Page(s): 600-609

    This paper presents an unsupervised approach to binary feature coding for efficient semantic image retrieval. Although the majority of existing methods aim to preserve neighborhood structures of the feature space, semantically similar images do not always lie in such neighborhoods; rather, they are distributed on non-linear low-dimensional manifolds. Moreover, images are rarely alone on the Internet and are often surrounded by text data such as tags, attributes, and captions, which tend to carry rich semantic information about the images. On the basis of these observations, the approach presented in this paper learns binary codes for semantic image retrieval from multimodal information sources while preserving the essential low-dimensional structures of the data distributions in the Hamming space. Specifically, after finding the low-dimensional structures of the data with an unsupervised sparse coding technique, our approach learns a set of linear projections for binary coding by solving an optimization problem designed to jointly preserve the extracted data structures and the multimodal correlations between images and texts in the Hamming space as far as possible. We show that this joint optimization problem can readily be transformed into a generalized eigenproblem that can be solved efficiently. Extensive experiments demonstrate that our method yields significant performance gains over several existing methods.
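
    A hedged sketch of the solution step only: once the joint objective is written in quadratic form, the projections fall out of a generalized eigenproblem A w = λ B w, and binary codes are the signs of the projected features. The matrices below are synthetic placeholders for the structure- and correlation-preserving terms.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    d, bits = 32, 8
    M = rng.normal(size=(d, d)); A = M @ M.T                   # objective term
    N = rng.normal(size=(d, d)); B = N @ N.T + d * np.eye(d)   # constraint term
    vals, vecs = eigh(A, B)              # generalized eigenproblem, ascending
    W = vecs[:, -bits:]                  # projections for the top eigenvalues
    codes = (rng.normal(size=(5, d)) @ W > 0).astype(np.uint8)  # sign -> bits
    print(codes)
    ```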

  • Predicting Performance of Collaborative Storytelling Using Multimodal Analysis

    Shogo OKADA  Mi HANG  Katsumi NITTA  

    PAPER
    Publicized: 2016/04/01  Vol: E99-D No:6  Page(s): 1462-1473

    This study focuses on modeling the storytelling performance of participants in a group conversation. Storytelling is one of the fundamental communication techniques for conveying information and entertainment effectively to a listener. We present a multimodal analysis of storytelling performance in group conversation, as evaluated by external observers. A new multimodal data corpus, including the participants' performance scores, is collected through a group storytelling task. We extract multimodal (verbal and nonverbal) features of storytellers and listeners from a manual transcription of the spoken dialog and from various nonverbal patterns, including each participant's speaking turns, utterance prosody, head gestures, hand gestures, and head direction. We also extract multimodal co-occurrence features, such as co-occurring head gestures, and interaction features, such as a storyteller's utterance overlapping with a listener's backchannel. We then model the relationship between the performance indices and the multimodal features using machine-learning techniques. Experimental results show that the highest accuracy (R²) is 0.299 for total storytelling performance (the sum of the index scores), obtained with a combination of verbal and nonverbal features in a regression task.
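
    An illustrative sketch of the modeling step, assuming scikit-learn and synthetic data in place of the corpus features: multimodal feature vectors are regressed onto the performance score and evaluated by R², as in the abstract.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 20))   # verbal + nonverbal features per speaker
    y = X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=120)  # perf. score

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[:90], y[:90])
    print("R^2:", r2_score(y[90:], model.predict(X[90:])))
    ```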

  • Error Correction Using Long Context Match for Smartphone Speech Recognition

    Yuan LIANG  Koji IWANO  Koichi SHINODA  

    PAPER-Speech and Hearing
    Publicized: 2015/07/31  Vol: E98-D No:11  Page(s): 1932-1942

    Most error correction interfaces for speech recognition applications on smartphones require the user to first mark an error region and then choose the correct word from a candidate list. We propose a simple multimodal interface that makes this process more efficient. We develop Long Context Match (LCM) to obtain candidates that complement the conventional word confusion network (WCN). Assuming that the user has validated not only the words preceding the error region but also those succeeding it, we use both contexts to search higher-order n-gram corpora for matching word sequences, also drawing on Web text data. Furthermore, we propose a combination of LCM and WCN (“LCM + WCN”) that provides candidate lists more relevant than those yielded by WCN alone. We compare our interface with a WCN-based interface on the Corpus of Spontaneous Japanese (CSJ). Our proposed “LCM + WCN” method improved the 1-best accuracy by 23% and the Mean Reciprocal Rank (MRR) by 28%, and our interface reduced the user's load by 12%.
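
    A toy sketch of the Long Context Match idea: given user-validated words on both sides of the error region, scan a corpus for sequences that match both contexts and rank whatever appears in between as candidates. Real LCM searches higher-order n-gram corpora; plain token lists stand in here.

    ```python
    from collections import Counter

    def lcm_candidates(corpus_sents, left_ctx, right_ctx):
        cands = Counter()
        for sent in corpus_sents:
            for i in range(len(sent) - len(left_ctx) - len(right_ctx)):
                j = i + len(left_ctx)
                if sent[i:j] != left_ctx:
                    continue
                for k in range(j + 1, len(sent) - len(right_ctx) + 1):
                    if sent[k:k + len(right_ctx)] == right_ctx:
                        cands[tuple(sent[j:k])] += 1   # words between contexts
        return cands.most_common()

    corpus = [["i", "want", "to", "go", "home", "now"],
              ["i", "want", "to", "fly", "home", "now"]]
    print(lcm_candidates(corpus, ["want", "to"], ["home"]))
    ```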

  • NOCOA+: Multimodal Computer-Based Training for Social and Communication Skills

    Hiroki TANAKA  Sakriani SAKTI  Graham NEUBIG  Tomoki TODA  Satoshi NAKAMURA  

    PAPER-Educational Technology
    Publicized: 2015/04/28  Vol: E98-D No:8  Page(s): 1536-1544

    Non-verbal communication incorporating visual, audio, and contextual information is important for making sense of and navigating the social world. Individuals who have trouble with social situations often have difficulty recognizing these non-verbal social signals. In this article, we propose NOCOA+ (Non-verbal COmmunication for Autism plus), a training tool that uses utterances in visual and audio modalities for non-verbal communication training. We describe the design of NOCOA+ and perform an experimental evaluation of its potential as a tool for computer-based training of non-verbal communication skills for people with social and communication difficulties. In a series of four experiments, we investigated 1) the effect of temporal context on the ability to recognize social signals, 2) the effect of the presentation modality of a social stimulus on the ability to recognize non-verbal information, 3) the correlation between autistic traits, as measured by the autism spectrum quotient (AQ), and the non-verbal behavior recognition skills measured by NOCOA+, and 4) the effectiveness of computer-based training in improving social skills. We found that context information helped in recognizing non-verbal behaviors and that the effect of the presentation modality differed. The results also showed a significant relationship between the AQ communication and socialization scores and non-verbal communication skills, and that social skills were significantly improved through computer-based training.

  • Discriminating Unknown Objects from Known Objects Using Image and Speech Information

    Yuko OZASA  Mikio NAKANO  Yasuo ARIKI  Naoto IWAHASHI  

    PAPER-Multimedia Pattern Processing
    Publicized: 2014/12/16  Vol: E98-D No:3  Page(s): 704-711

    This paper deals with the problem of a robot identifying an object that a human asks it to bring by voice, given a set of objects that both the human and the robot can see. When the robot knows the requested object, it must identify it; when it does not know the object, it must say so. This paper presents a new method for discriminating unknown objects from known objects using object images and human speech. It uses a confidence measure that integrates image recognition confidences and speech recognition confidences by logistic regression.
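
    A minimal sketch of the integrated confidence measure described above: image and speech recognition confidences are combined by logistic regression, and a threshold on the output separates known from unknown objects. The synthetic confidences and the threshold are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 300
    conf = rng.uniform(size=(n, 2))   # [image confidence, speech confidence]
    known = (conf.mean(axis=1) + 0.1 * rng.normal(size=n) > 0.5).astype(int)

    lr = LogisticRegression().fit(conf[:200], known[:200])
    p_known = lr.predict_proba(conf[200:])[:, 1]
    print("answer 'I don't know' for", int((p_known < 0.5).sum()), "objects")
    ```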

  • People Re-Identification with Local Distance Comparison Using Learned Metric

    Guanwen ZHANG  Jien KATO  Yu WANG  Kenji MASE  

    PAPER-Image Processing and Video Processing
    Vol: E97-D No:9  Page(s): 2461-2472

    In this paper, we propose a novel approach to multiple-shot people re-identification. Due to high variance in camera view, lighting, non-rigid deformation of posture, and so on, there is a crucial inter-/intra-variance issue: the same person may look considerably different, whereas different people may look extremely similar. This issue leads to an intractable, multimodal distribution of people's appearance in feature space. To deal with these multimodal properties of the data, we solve the re-identification problem under a local distance comparison framework, which significantly alleviates the difficulty induced by the varying appearance of each individual. Furthermore, we build an energy-based loss function that measures the similarity between appearance instances by calculating the distance between the corresponding subsets in feature space. This loss function not only favors small distances that indicate high similarity between appearances of the same person, but also penalizes small distances or undesirable overlaps between subsets, which reflect high similarity between appearances of different people. In this way, effective people re-identification is achieved in a manner robust to the inter-/intra-variance issue. We evaluated our approach on the public benchmark datasets ETHZ and CAVIAR4REID; experimental results show significant improvements over previous reports.
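
    A bare-bones sketch of comparing appearances under a learned Mahalanobis-style metric, the general setting of the paper; the matrix M below is identity-initialized rather than actually learned, and the local-subset comparison is omitted.

    ```python
    import numpy as np

    def metric_distance(x, y, M):
        d = x - y
        return float(d @ M @ d)   # squared distance under metric M

    dim = 16
    M = np.eye(dim)                       # placeholder for the learned metric
    gallery = np.random.randn(10, dim)    # appearance features of candidates
    probe = np.random.randn(dim)
    dists = [metric_distance(probe, g, M) for g in gallery]
    print("best match:", int(np.argmin(dists)))
    ```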

  • Negative Correlation Learning in the Estimation of Distribution Algorithms for Combinatorial Optimization

    Warin WATTANAPORNPROM  Prabhas CHONGSTITVATANA  

    PAPER-Artificial Intelligence, Data Mining
    Vol: E96-D No:11  Page(s): 2397-2408

    This article introduces the Coincidence Algorithm (COIN) for solving several multimodal puzzles. COIN belongs to the category of Estimation of Distribution Algorithms (EDAs), which use probabilistic models to generate solutions. COIN's model is a joint probability table of adjacent events (coincidences) derived from the population of candidate solutions. A unique characteristic of COIN is its ability to learn from negative samples. Various experiments show that learning from negative examples helps prevent premature convergence, promotes diversity, and preserves good building blocks.
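
    A toy sketch of the coincidence model: H[a, b] estimates the probability that item b follows item a, rewarded by a good solution and punished by a poor one (the negative sample). The reward/punish step below is a simplification, not the paper's exact update equations.

    ```python
    import numpy as np

    def update(H, good, bad, k=0.1):
        for seq, sign in ((good, +1.0), (bad, -1.0)):
            for a, b in zip(seq, seq[1:]):       # adjacent events in the tour
                H[a, b] += sign * k
        H = np.clip(H, 1e-9, None)
        return H / H.sum(axis=1, keepdims=True)  # keep rows as distributions

    n = 5
    H = np.full((n, n), 1.0 / (n - 1)); np.fill_diagonal(H, 0.0)
    H = update(H, good=[0, 2, 4, 1, 3], bad=[0, 1, 2, 3, 4])
    print(np.round(H, 3))
    ```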

  • Multimodal Affect Recognition Using Boltzmann Zippers

    Kun LU  Xin ZHANG  

    LETTER-Image Recognition, Computer Vision
    Vol: E96-D No:11  Page(s): 2496-2499

    This letter presents a novel approach to automatic multimodal affect recognition. The audio and visual channels provide complementary information for recognizing human affective states, and we utilize Boltzmann zippers for model-level fusion to learn the intrinsic correlations between the modalities. We extract effective audio and visual feature streams at different time scales and feed them to two component Boltzmann chains. The hidden units of the two chains are interconnected to form a Boltzmann zipper, which effectively avoids local energy minima during training. Second-order methods are applied to the Boltzmann zipper to speed up learning and pruning. Experimental results both on audio-visual emotion data that we recorded in Wizard-of-Oz scenarios and on data collected from the SEMAINE naturalistic database demonstrate that our approach is robust and outperforms state-of-the-art methods.

  • Least-Squares Conditional Density Estimation

    Masashi SUGIYAMA  Ichiro TAKEUCHI  Taiji SUZUKI  Takafumi KANAMORI  Hirotaka HACHIYA  Daisuke OKANOHARA  

    PAPER-Pattern Recognition
    Vol: E93-D No:3  Page(s): 583-594

    Estimating the conditional mean of an input-output relation is the goal of regression. However, regression analysis is not sufficiently informative if the conditional distribution is multimodal, highly asymmetric, or heteroscedastic. In such scenarios, estimating the conditional distribution itself is more useful. In this paper, we propose a novel method of conditional density estimation suitable for multi-dimensional continuous variables. The basic idea is to express the conditional density in terms of a density ratio, which is estimated directly without going through density estimation. Experiments using benchmark and robot transition datasets illustrate the usefulness of the proposed approach.
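
    The core idea, sketched in the paper's terms: write the conditional density as a density ratio, model the ratio linearly, and fit it by regularized least squares, which admits a closed-form solution.

    ```latex
    \[
      p(y \mid x) = \frac{p(x, y)}{p(x)} \approx r_{\alpha}(x, y)
      = \sum_{\ell=1}^{b} \alpha_\ell \, \varphi_\ell(x, y),
    \]
    \[
      \hat{\alpha} = \operatorname*{arg\,min}_{\alpha}
      \frac{1}{2} \iint \bigl( r_{\alpha}(x, y) - p(y \mid x) \bigr)^2 p(x)\,
      \mathrm{d}x\, \mathrm{d}y + \frac{\lambda}{2} \alpha^\top \alpha .
    \]
    ```

    Expanding the square gives a quadratic form in α whose data-dependent terms can be estimated from samples, so the minimizer is obtained in closed form without any intermediate density estimation.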

  • Interactive Object Recognition through Hypothesis Generation and Confirmation

    Md. Altab HOSSAIN  Rahmadi KURNIA  Akio NAKAMURA  Yoshinori KUNO  

    PAPER-Interactive Systems
    Vol: E89-D No:7  Page(s): 2197-2206

    Effective human-robot interaction is essential for the wide penetration of service robots into the market. Such a robot needs a vision system to recognize objects, yet it is difficult to realize vision systems that work in various conditions; more robust techniques for object recognition and image segmentation are essential. We have therefore proposed using the human user's assistance for object recognition through speech. This paper presents a system that recognizes objects in occluded and/or multicolor scenes using geometric and photometric analysis of images. Based on the analysis results, the system forms a hypothesis about the scene and asks the user to confirm it by describing the hypothesis. If the hypothesis is incorrect, the system generates another hypothesis until it correctly understands the scene. Through experiments on a real mobile robot, we have confirmed the usefulness of the system.
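
    A toy sketch of the generate-and-confirm dialogue loop the system follows; the hypothesis strings and the confirmation callback are placeholders for the geometric/photometric analysis and the speech interface.

    ```python
    def recognize_interactively(hypotheses, user_confirms):
        """hypotheses: candidate scene descriptions, most likely first."""
        for h in hypotheses:
            print(f"Robot: I think the scene is {h}. Is that right?")
            if user_confirms(h):
                return h                    # hypothesis confirmed by the user
        return None                         # no hypothesis was accepted

    answers = iter([False, True])           # simulated user replies
    scene = recognize_interactively(
        ["one red object", "two overlapping objects, red and blue"],
        lambda h: next(answers))
    print("understood scene:", scene)
    ```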

  • New TCP Congestion Control Schemes for Multimodal Mobile Hosts

    Kazuya TSUKAMOTO  Yutaka FUKUDA  Yoshiaki HORI  Yuji OIE  

    PAPER-Terrestrial Radio Communications
    Vol: E89-B No:6  Page(s): 1825-1836

    Two congestion control schemes designed specifically to handle changes in the datalink interface of a mobile host are presented. The future mobile environment is expected to involve multimode connectivity to the Internet, with the connection mode switched dynamically depending on network conditions. The conventional Transmission Control Protocol (TCP), however, is unable to maintain stable and efficient throughput across such interface changes. The two main issues are handling the change of the host's Internet Protocol (IP) address, and preserving the reliability and continuity of a TCP flow when the datalink interface changes. Although architectures addressing the first issue have already been proposed, the problem of congestion control remains. Considering the large change in bandwidth that can accompany a datalink interface change, this paper proposes two new schemes to address it. The first, Immediate Expiration of Timeout Timer, detects the interface change and begins retransmission immediately, without waiting for a retransmission timeout as existing architectures do. The second, Bandwidth-Aware Slow Start Threshold, detects the interface change and estimates the new bandwidth so as to set an appropriate slow start threshold for retransmission. Simulations demonstrate that the proposed schemes provide marked performance improvements over existing architectures.
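
    A toy sketch of the Bandwidth-Aware Slow Start Threshold idea: on an interface change, estimate the new path's bandwidth and set the slow start threshold near the bandwidth-delay product before retransmitting. The constants, units, and the estimate itself are placeholders, not the paper's mechanism in detail.

    ```python
    def on_interface_change(est_bandwidth_bps, rtt_s, mss_bytes=1460):
        bdp_bytes = est_bandwidth_bps / 8 * rtt_s        # bandwidth-delay product
        ssthresh = max(2, int(bdp_bytes // mss_bytes))   # in segments
        cwnd = 1                                         # restart like slow start
        return cwnd, ssthresh

    # e.g. switching between a 384 kbit/s cellular link and 54 Mbit/s WLAN
    print(on_interface_change(384_000, rtt_s=0.3))
    print(on_interface_change(54_000_000, rtt_s=0.05))
    ```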
