Shiro SAKIYAMA George HAYASHI Shiro DOSHO Masakatsu MARUYAMA Seizo INAGAKI Masatoshi MATSUSHITA Kouji MOCHIZUKI
This paper describes an oversampling analog-to-digital converter (ADC) suitable for PCM codecs. A non-linear 5-level quantizer is implemented in the noise-shaping modulator. The ADC meets the specifications of ITU-T G.712 despite using a first-order delta-sigma modulator, and realizes low-power operation. The chip is fabricated in a 0.8 µm double-poly, double-metal CMOS process and occupies a chip area of 15 mm². Maximum power consumption is 12.8 mW with a single +3 V power supply, including the DAC and tone generator.
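As a rough illustration of the approach named in the abstract (and not the chip's actual circuit), the following Python sketch models a first-order delta-sigma loop whose internal quantizer has five output levels. The non-uniform level placement, signal parameters, and function names below are assumptions, since the abstract does not specify them.

```python
# Behavioral sketch (not the authors' design): a first-order delta-sigma
# modulator whose internal quantizer has 5 levels instead of 1 bit.
# The actual non-linear level placement used in the chip is not given in
# the abstract; the levels below are purely illustrative.
import math

LEVELS = [-1.0, -0.25, 0.0, 0.25, 1.0]  # hypothetical non-uniform 5-level set

def quantize(x):
    """Pick the nearest of the 5 (non-uniform) quantizer levels."""
    return min(LEVELS, key=lambda v: abs(v - x))

def first_order_dsm(samples):
    """First-order delta-sigma loop: integrate the error, quantize, feed back."""
    integrator = 0.0
    out = []
    for x in samples:
        integrator += x - (out[-1] if out else 0.0)  # input minus fed-back output
        out.append(quantize(integrator))
    return out

if __name__ == "__main__":
    fs, f_in = 1_000_000, 1_000          # assumed oversampling rate vs. signal frequency
    sine = [0.5 * math.sin(2 * math.pi * f_in * n / fs) for n in range(4096)]
    print(first_order_dsm(sine)[:16])
```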
Hiroyuki NAKAHIRA Masaru FUKUDA Akira YAMAMOTO Shiro SAKIYAMA Masakatsu MARUYAMA
A digital neuro chip with a proliferating neuron architecture is described. The chip simulates a neural network model called the adaptive segmentation of quantizer neuron architecture (ASQA). It has proliferating neurons and can automatically form the optimum network structure for recognition according to the input data. To develop inexpensive commercial hardware that implements the proliferating neuron architecture, we adopt a virtual neuron system: the chip contains only an arithmetic unit for network computations, while the network information, such as the network structure and synaptic weights, is stored in external memories. We devise an original architecture that stores the network information efficiently and, moreover, constructs a structured network using the ASQA model. As a result, about 3,000 Kanji characters can be recognized with a single chip, and a recognition speed of 4.6 ms/character is achieved on a PC.
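The virtual neuron idea can be pictured, purely as a software analogy, as a single shared arithmetic unit that fetches each neuron's structure and weights from an external memory before computing. The sketch below is such an analogy; the memory layout, neuron ids, and threshold activation are illustrative assumptions, not the chip's actual data format.

```python
# Software analogy (not the chip's implementation) of the "virtual neuron"
# system: the processing element holds no network state of its own; the
# network structure and synaptic weights live in an external memory and are
# fetched per neuron. Layout and values below are assumptions.

external_memory = {
    # hypothetical layout: neuron id -> (list of input ids, list of weights)
    0: ([10, 11, 12], [0.2, -0.5, 0.8]),
    1: ([10, 12],     [0.7,  0.1]),
}

def arithmetic_unit(neuron_id, activations):
    """Shared MAC unit: fetch structure and weights, accumulate, threshold."""
    inputs, weights = external_memory[neuron_id]   # fetch from external memory
    acc = sum(activations[i] * w for i, w in zip(inputs, weights))
    return 1.0 if acc > 0.0 else 0.0               # simple threshold activation

activations = {10: 1.0, 11: 0.0, 12: 1.0}
outputs = {nid: arithmetic_unit(nid, activations) for nid in external_memory}
print(outputs)
```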
Masakatsu MARUYAMA Hiroyuki NAKAHIRA Shiro SAKIYAMA Toshiyuki KOHDA Susumu MARUNO Yasuharu SHIMEKI
This paper discusses a digital neuroprocessor named the Quantizer Neuron Chip (QNC), which employs the quantizer neuron model and two newly developed schemes: "concurrent processing of quantizer neurons" and "removal of ineffective calculations." QNC simulates a neural network called the Multi-Functional Layered Network (MFLN) with 64 output neurons, 4,672 quantizer neurons, and two million synaptic weights, and can be used for character or image recognition and learning. The chip achieves a processing speed of 1.6 µs per output neuron for recognition and 20 million connections updated per second (MCUPS) for learning. In addition, QNC supports multichip operation for increasing the network size. We applied QNC to handwritten numeral recognition and realized high-speed recognition and learning. QNC is implemented in a 1.2 µm double-metal CMOS sea-of-gates technology and contains 27,000 gates on a 10.99 × 10.93 mm² chip.
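As a hedged illustration of the "removal of ineffective calculations" scheme named above, the sketch below skips the multiply-accumulate for quantizer neurons whose output is zero. The data layout, activation values, and sparsity pattern are invented for the example and do not reflect the QNC's internal organization.

```python
# Illustrative sketch of skipping ineffective calculations: when a quantizer
# neuron contributes nothing (zero output), the corresponding weight
# accumulation is not performed.  The quantizer neuron model details here are
# assumptions, not the QNC design.

def output_neuron_value(quantizer_outputs, weights):
    """Accumulate weighted quantizer-neuron outputs, skipping zero terms."""
    acc = 0.0
    for q_id, q_out in quantizer_outputs.items():
        if q_out == 0.0:
            continue                      # ineffective term: no MAC issued
        acc += q_out * weights.get(q_id, 0.0)
    return acc

# sparse quantizer activity: only a few of the many quantizer neurons fire
quantizer_outputs = {17: 0.6, 305: 0.4, 4100: 0.0}
weights = {17: 0.9, 305: -0.2, 4100: 0.5}
print(output_neuron_value(quantizer_outputs, weights))
```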