HARDWARE ACCELERATION OF CONVOLUTIONAL NEURAL NETWORKS USING APPROXIMATE COMPUTING AND DYNAMIC PARTIAL RECONFIGURATION

Eman Youssef Ahmed Safina

Abstract


In this work, I trained four new convolutional neural networks (CNNs) to recognize four datasets: MNIST, Fashion MNIST, SVHN, and CIFAR-10, and tested each network's recognition accuracy. The resulting trainable weights were then approximated using precision scaling, and the four networks were tested again with this approximation. A new hardware architecture is proposed to recognize three of the datasets (MNIST, Fashion MNIST, and SVHN) using the precision-scaling approximation. This architecture is implemented on a Xilinx XC7Z020 FPGA, and the power and energy consumed to recognize each image in each dataset are reported. The results show a significant reduction in energy consumption with only a minor loss in accuracy. This approximation matters because CNNs require a large amount of computation and therefore consume considerable power.
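As an illustration of the precision-scaling approximation described above, the sketch below quantizes trained floating-point weights onto a fixed-point grid. The thesis abstract does not state the bit-widths actually used; the 8 fractional bits, the function name, and the random stand-in weights are assumptions for demonstration only.

import numpy as np

def precision_scale(weights, frac_bits=8):
    """Approximate floating-point weights on a fixed-point grid with
    `frac_bits` fractional bits (precision scaling)."""
    scale = 2.0 ** frac_bits
    return np.round(weights * scale) / scale

# Stand-in for one layer's trained convolution kernels (32 filters, 3x3x3).
w = np.random.randn(32, 3, 3, 3).astype(np.float32)
w_approx = precision_scale(w, frac_bits=8)

# The approximation error bounds the per-weight perturbation seen by the CNN.
print("max abs error:", np.max(np.abs(w - w_approx)))

Reducing the fractional bit-width shrinks multiplier and memory cost on the FPGA at the price of a larger approximation error, which is the accuracy/energy trade-off the abstract reports.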


Other data

Title HARDWARE ACCELERATION OF CONVOLUTIONAL NEURAL NETWORKS USING APPROXIMATE COMPUTING AND DYNAMIC PARTIAL RECONFIGURATION
Other Titles تنفيذ الشبكات العصبية التلافيفية علي مصفوفات البوابات المنطقية القابلة للبرمجة باستخدام الحسابات التقريبية و البرمجة الجزئيه لهذه المصفوفات اثناء وقت التشغيل لتسريع عملية التعرف علي الصور (English: Implementation of convolutional neural networks on field-programmable gate arrays using approximate computing and runtime partial reconfiguration of these arrays to accelerate image recognition)
Authors Eman Youssef Ahmed Safina
Issue Date 2021

Attached Files

File: BB10746.pdf (818.55 kB, Adobe PDF)

