Implementation of 14 bits floating point numbers of calculating units for neural network hardware development; IOP Conference Series: Materials Science and Engineering; Vol. 177 : Mechanical Engineering, Automation and Control Systems (MEACS 2016)

Bibliographic Details
Parent link: IOP Conference Series: Materials Science and Engineering
Vol. 177: Mechanical Engineering, Automation and Control Systems (MEACS 2016). — 2017. — [012044, 5 p.]
Corporate body: National Research Tomsk Polytechnic University (TPU), Institute of Cybernetics (IC)
Other authors: Zoev, I. V. (Ivan Vladimirovich); Beresnev, A. P.; Mytsko, E. A. (Evgeniy Aleksandrovich); Malchukov, A. N. (Andrey Nikolaevich)
Abstract:
An important aspect of modern automation is machine learning; in particular, neural networks are used for environment analysis and for decision making based on available data. This article covers the operations most frequently performed on floating-point numbers in artificial neural networks. A 14-bit width for floating-point numbers is selected as optimal for implementation on FPGAs, based on the architecture of modern integrated circuits. The floating-point multiplication (multiplier) algorithm is described, and the features of the addition (adder) and subtraction (subtractor) operations are detailed. Operations required by convolutional neural networks, namely floating-point comparison ('less than' and 'greater than or equal'), are also presented. In conclusion, the units are compared with Altera's calculating units.
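
The abstract does not give the paper's actual bit layout or algorithms, but as a rough illustration of the operations it enumerates, below is a minimal C sketch of a 14-bit floating-point multiplier and a 'greater than or equal' comparator. The bit split (1 sign bit, 5 exponent bits with bias 15, 8 mantissa bits), the helper names fp14_mul and fp14_ge, and the omission of subnormals, rounding, infinities, and exponent overflow are all assumptions made for illustration; the format and hardware implementation in the paper may differ.

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed 14-bit layout: 1 sign | 5 exponent (bias 15) | 8 mantissa, */
    /* normalized with an implicit leading 1. Subnormals, rounding, and   */
    /* exponent overflow are deliberately ignored in this sketch.         */
    #define SIGN(x) (((x) >> 13) & 0x1u)
    #define EXP(x)  (((x) >> 8) & 0x1Fu)
    #define MANT(x) ((x) & 0xFFu)
    #define BIAS    15

    static uint16_t fp14_mul(uint16_t a, uint16_t b)
    {
        uint16_t sign = SIGN(a) ^ SIGN(b);            /* XOR of sign bits       */
        int exp = (int)EXP(a) + (int)EXP(b) - BIAS;   /* add exponents, re-bias */
        uint32_t m = (uint32_t)(0x100u | MANT(a))     /* restore hidden 1 and   */
                   * (uint32_t)(0x100u | MANT(b));    /* take the 9x9 product   */
        if (m & (1u << 17)) { m >>= 9; exp += 1; }    /* in [2,4): renormalize  */
        else                { m >>= 8; }              /* in [1,2): keep as is   */
        return (uint16_t)((sign << 13) | (((unsigned)exp & 0x1Fu) << 8) | (m & 0xFFu));
    }

    /* Sign-magnitude 'greater than or equal': non-negative values compare   */
    /* as plain integers; the order reverses when both operands are negative */
    /* (negative zero is ignored here).                                      */
    static int fp14_ge(uint16_t a, uint16_t b)
    {
        if (SIGN(a) != SIGN(b)) return SIGN(a) == 0;
        if (SIGN(a) == 0)       return a >= b;
        return a <= b;
    }

    int main(void)
    {
        uint16_t x = 0x0F80;                    /* 1.5 in the assumed layout */
        uint16_t y = 0x1000;                    /* 2.0                       */
        printf("0x%04X\n", fp14_mul(x, y));     /* prints 0x1080, i.e. 3.0   */
        printf("%d\n", fp14_ge(y, x));          /* prints 1                  */
        return 0;
    }

For example, 1.5 (0x0F80) times 2.0 (0x1000) yields 0x1080, which encodes 3.0 in the assumed layout. The comparator exploits the fact that sign-magnitude encodings of non-negative values order the same way as plain integers, which is why such comparisons are cheap in hardware.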
Language: English
Published: 2017
Series: Information technologies in Mechanical Engineering
Online access: http://dx.doi.org/10.1088/1757-899X/177/1/012044
http://earchive.tpu.ru/handle/11683/37851
Format: Mixed materials; electronic; book chapter
KOHA link:https://koha.lib.tpu.ru/cgi-bin/koha/opac-detail.pl?biblionumber=654045