We implemented bitwise neural networks on FPGA and ran tests on the MNIST dataset. Experiments show that we achieve a 4x speedup compared with the state-of-the-art FPGA implementation.
Deep neural networks (DNNs) have substantially pushed the state of the art in a wide range of tasks, including speech recognition and computer vision. However, DNNs also require a wealth of compute resources, and thus can benefit greatly from parallelization.
Moreover, power consumption has recently gained considerable attention due to the emergence of mobile devices. As is well known, running real-time text detection and recognition tasks on standalone devices, such as a pair of glasses, quickly drains the battery. Therefore, we need to exploit the heterogeneity of hardware to boost both performance and efficiency.
Attempts to implement neural networks on FPGAs date back 24 years [1], yet no design showed commercial potential until recently [2]. Even the recent design, however, still suffers from high resource usage and high latency. In this work, we propose a resource-efficient implementation that also achieves higher throughput and lower latency.
Bitwise neural networks (BNNs) can further drive down power consumption by eliminating power-hungry operations such as multiplication [1]. Hence, we expect that a well-implemented BNN on heterogeneous hardware can enable a real-time, always-on service. In particular, a field-programmable gate array (FPGA) is reconfigurable hardware that is especially well suited to fast bit operations, so we first port the algorithm to FPGA.
(Completed) Get familiar with training bitwise neural networks and implement a correct baseline algorithm.
(Completed) Tune parameters and implement retrain process to approach the results of the paper.
(Completed) Implement classifying phase by VHDL.
(Completed) Prepare for final exam and analyze the feasibility of porting the code to hardware.
(Completed) Port the code to hardware and analyze the results [3].
(Completed) Write final report and prepare for competition.