Implementation of 14 bits floating point numbers of calculating units for neural network hardware development

Detailed Bibliography
Parent link: IOP Conference Series: Materials Science and Engineering
Vol. 177 : Mechanical Engineering, Automation and Control Systems (MEACS 2016).— 2017.— [012044, 5 p.]
Corporate Author: Национальный исследовательский Томский политехнический университет (ТПУ) Институт кибернетики (ИК)
Other Authors: Zoev I. V. Ivan Vladimirovich, Beresnev A. P., Mytsko E. A. Evgeniy Aleksandrovich, Malchukov A. N. Andrey Nikolaevich
Abstract: Title screen
An important aspect of modern automation is machine learning. Specifically, neural networks are used for environment analysis and decision making based on available data. This article covers the operations on floating-point numbers most frequently performed in artificial neural networks. Based on the architecture of modern integrated circuits, a bit width of 14 bits is selected as the optimum for floating-point numbers implemented on FPGAs. The floating-point multiplication (multiplier) algorithm is described, together with the features of the addition (adder) and subtraction (subtractor) operations. Furthermore, the mathematical comparison operations on floating-point numbers ('less than' and 'greater than or equal') required by such a variety of neural networks as convolutional networks are presented. In conclusion, a comparison with the calculating units of Altera is made.
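The record does not state how the 14 bits are partitioned between sign, exponent and fraction, nor which rounding mode the units use. The C sketch below is therefore only an illustration of the multiplier and 'less than' comparison mentioned in the abstract, under the assumption of a 1-bit sign, 5-bit exponent (bias 15) and 8-bit fraction, with truncation instead of rounding and with zeros, subnormals and exponent overflow ignored; the names fp14, fp14_mul and fp14_lt are hypothetical and do not come from the paper.

/*
 * Hypothetical 14-bit floating-point format: 1 sign bit, 5 exponent bits
 * (bias 15), 8 fraction bits.  The actual partition used in the paper is
 * not stated in this record.
 */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 8
#define EXP_BIAS  15

typedef struct {
    uint8_t sign;  /* 1 bit */
    uint8_t exp;   /* 5 bits, biased by EXP_BIAS */
    uint8_t frac;  /* 8 bits, hidden leading 1 not stored */
} fp14;

/* Multiplier sketch: XOR the signs, add the biased exponents (removing one
 * bias), multiply the significands, renormalize by at most one position,
 * and truncate the extra fraction bits. */
static fp14 fp14_mul(fp14 a, fp14 b)
{
    fp14 r;
    r.sign = a.sign ^ b.sign;

    uint32_t ma = (1u << FRAC_BITS) | a.frac;   /* 1.frac, 9 bits */
    uint32_t mb = (1u << FRAC_BITS) | b.frac;
    uint32_t prod = ma * mb;                    /* up to 18 bits */
    int e = (int)a.exp + (int)b.exp - EXP_BIAS;

    /* product of two significands in [1,2) lies in [1,4): shift once if >= 2 */
    if (prod & (1u << (2 * FRAC_BITS + 1))) {
        prod >>= 1;
        e += 1;
    }
    r.frac = (uint8_t)((prod >> FRAC_BITS) & 0xFFu); /* drop hidden bit, truncate */
    r.exp  = (uint8_t)e;   /* exponent overflow/underflow ignored in this sketch */
    return r;
}

/* 'Less than' comparison: for equal signs the biased encoding orders like an
 * unsigned integer, with the order reversed for negative numbers
 * (zeros and subnormals ignored in this sketch). */
static int fp14_lt(fp14 a, fp14 b)
{
    uint16_t ka = ((uint16_t)a.exp << FRAC_BITS) | a.frac;
    uint16_t kb = ((uint16_t)b.exp << FRAC_BITS) | b.frac;
    if (a.sign != b.sign)
        return a.sign > b.sign;          /* negative < positive */
    return a.sign ? ka > kb : ka < kb;
}

int main(void)
{
    fp14 x = {0, EXP_BIAS,     0x80};    /* 1.5 = 1.1b * 2^0 */
    fp14 y = {0, EXP_BIAS + 1, 0x00};    /* 2.0 = 1.0b * 2^1 */
    fp14 p = fp14_mul(x, y);             /* expect 3.0: exp = 16, frac = 0x80 */

    printf("product: sign=%u exp=%u frac=0x%02X\n", p.sign, p.exp, p.frac);
    printf("x < y: %d\n", fp14_lt(x, y));
    return 0;
}

The single-position normalization step reflects the observation that the product of two significands in [1, 2) always lies in [1, 4), which is the same property a hardware multiplier exploits to keep the post-multiply shifter trivial.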
Language: English
Publication Info: 2017
Series Information: Information technologies in Mechanical Engineering
Subjects:
Online Access: http://dx.doi.org/10.1088/1757-899X/177/1/012044
http://earchive.tpu.ru/handle/11683/37851
Material Type: Electronic Book Chapter
KOHA link:https://koha.lib.tpu.ru/cgi-bin/koha/opac-detail.pl?biblionumber=654045

MARC

LEADER 00000nla2a2200000 4500
001 654045
005 20240123134415.0
035 |a (RuTPU)RU\TPU\network\19560 
035 |a RU\TPU\network\19557 
090 |a 654045 
100 |a 20170406a2017 k y0engy50 ba 
101 0 |a eng 
105 |a y z 100zy 
135 |a drgn ---uucaa 
181 0 |a i  
182 0 |a b 
200 1 |a Implementation of 14 bits floating point numbers of calculating units for neural network hardware development  |f I. V. Zoev [et al.] 
203 |a Text  |c electronic 
225 1 |a Information technologies in Mechanical Engineering 
300 |a Title screen 
320 |a [References: 10 tit.] 
330 |a An important aspect of modern automation is machine learning. Specifically, neural networks are used for environment analysis and decision making based on available data. This article covers the operations on floating-point numbers most frequently performed in artificial neural networks. Based on the architecture of modern integrated circuits, a bit width of 14 bits is selected as the optimum for floating-point numbers implemented on FPGAs. The floating-point multiplication (multiplier) algorithm is described, together with the features of the addition (adder) and subtraction (subtractor) operations. Furthermore, the mathematical comparison operations on floating-point numbers ('less than' and 'greater than or equal') required by such a variety of neural networks as convolutional networks are presented. In conclusion, a comparison with the calculating units of Altera is made. 
461 0 |0 (RuTPU)RU\TPU\network\2008  |t IOP Conference Series: Materials Science and Engineering 
463 0 |0 (RuTPU)RU\TPU\network\19514  |t Vol. 177 : Mechanical Engineering, Automation and Control Systems (MEACS 2016)  |o International Conference, October 27–29, 2016, Tomsk, Russia  |o [proceedings]  |f National Research Tomsk Polytechnic University (TPU) ; eds. A. P. Zykova ; N. V. Martyushev  |v [012044, 5 p.]  |d 2017 
610 1 |a электронный ресурс 
610 1 |a труды учёных ТПУ 
610 1 |a числа с плавающей точкой 
610 1 |a вычислительные устройства 
610 1 |a аппаратные средства 
610 1 |a нейронные сети 
610 1 |a машинное обучение 
610 1 |a искусственные нейронные сети 
610 1 |a сверточные сети 
701 1 |a Zoev  |b I. V.  |c Specialist in the field of informatics and computer technology  |c Programmer of Tomsk Polytechnic University  |f 1993-  |g Ivan Vladimirovich  |3 (RuTPU)RU\TPU\pers\38250 
701 1 |a Beresnev  |b A. P. 
701 1 |a Mytsko  |b E. A.  |c specialist in the field of informatics and computer technology  |c Programmer of Tomsk Polytechnic University  |f 1991-  |g Evgeniy Aleksandrovich  |3 (RuTPU)RU\TPU\pers\33691  |9 17322 
701 1 |a Malchukov  |b A. N.  |c specialist in the field of informatics and computer technology  |c Associate Professor of Tomsk Polytechnic University, Candidate of technical sciences  |f 1982-  |g Andrey Nikolaevich  |3 (RuTPU)RU\TPU\pers\32409  |9 16360 
712 0 2 |a Национальный исследовательский Томский политехнический университет (ТПУ)  |b Институт кибернетики (ИК)  |3 (RuTPU)RU\TPU\col\18397 
801 2 |a RU  |b 63413507  |c 20170411  |g RCR 
856 4 |u http://dx.doi.org/10.1088/1757-899X/177/1/012044 
856 4 |u http://earchive.tpu.ru/handle/11683/37851 
942 |c CF