Machine learning approaches, particularly those based on neural networks, require accurate real-valued arithmetic to produce accurate results. Linear regression is a machine learning technique commonly used to identify the linear function that best fits a set of data. Driven by current trends in system requirements and by the availability of Field-Programmable Gate Arrays (FPGAs), floating-point implementations are becoming more widespread, and engineers increasingly use FPGAs as a platform for them. This paper demonstrates an FPGA-based half-precision floating-point (FPU-16) implementation of linear regression and proposes two implementation methods. The first method uses the assembler of the BZK.SAU.FPGA-based microcomputer architecture; the second uses a Xilinx IP core, simulated and tested with the Vivado Design Suite software. After implementing both methods, we computed the mean square error (MSE) between their results and found it to be 7.8×10^(-4).
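The comparison described in the abstract can be illustrated in software. The sketch below (an assumption for illustration, not the paper's FPGA code) fits a line by least squares in half precision (NumPy `float16`, analogous to FPU-16) and in double precision, then computes the MSE between the two sets of predictions:

```python
import numpy as np

# Synthetic data: y = 2x + 0.5 plus a little noise (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 64)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.01, 64)

def fit_line(x, y, dtype):
    """Closed-form least-squares fit of y = a*x + b in the given dtype."""
    x = x.astype(dtype)
    y = y.astype(dtype)
    n = dtype(len(x))
    a = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x * x).sum() - x.sum() ** 2)
    b = (y.sum() - a * x.sum()) / n
    return a, b

# Fit in half precision and in double precision.
a16, b16 = fit_line(x, y, np.float16)
a64, b64 = fit_line(x, y, np.float64)

# Predictions from each implementation, then the MSE between them,
# mirroring the paper's comparison of the two FPGA implementations.
pred16 = (np.float16(a16) * x.astype(np.float16) + np.float16(b16)).astype(np.float64)
pred64 = a64 * x + b64
mse = float(np.mean((pred16 - pred64) ** 2))
print(f"MSE between half- and double-precision fits: {mse:.2e}")
```

The small but nonzero MSE comes from float16 rounding in the sums and products, which is the same kind of discrepancy the paper quantifies between its two hardware implementations.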
Keywords: Artificial Neural Network (ANN), Machine Learning (ML), Half-precision Floating-point, Linear Regression, Field-Programmable Gate Array (FPGA), Assembler.
| Primary Language | English |
| --- | --- |
| Subjects | Engineering |
| Journal Section | Articles |
| Publication Date | December 31, 2021 |
| Submission Date | December 19, 2021 |
| Published in Issue | Year 2021, Volume: 17, Issue: 2 |