Exploration of a Dynamic Approximate Multiplier for Mixed-Precision Inference
The increasing demand for efficient neural network (NN) inference on edge devices has driven the need for hardware-level optimizations that balance computational accuracy with energy and area efficiency. This thesis explores the design and implementation of a dynamic approximate multiplier capable of mixed-precision inference, focusing on floating-point (FP) formats. Using a logarithmic-approximation scheme, multiplication is reduced to shift-and-add operations in the logarithmic domain, so the accuracy of the result, and with it the energy and area cost, can be tuned dynamically at run time.
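For reference, the sketch below models Mitchell's logarithmic approximation, the classic basis for logarithmic approximate multipliers: each operand's log2 is approximated from its leading-one position plus the remaining bits read as a binary fraction, the two approximate logs are added, and the sum is converted back. The bit widths, names (msb_pos, mitchell_mul), and integer-only framing here are illustrative assumptions for a behavioral model, not the thesis's actual datapath.

```c
#include <stdint.h>
#include <stdio.h>

/* Position of the most significant set bit, i.e. floor(log2(x)), x > 0. */
static int msb_pos(uint32_t x) {
    int k = 0;
    while (x >>= 1) k++;
    return k;
}

/* Mitchell's approximation: log2(a) ~= k + f, where k is the MSB position
 * and f is the remaining bits read as a binary fraction (Q16 here).
 * Adding the two approximate logs and converting back gives
 * a*b ~= 2^k_sum * (1 + f_sum), replacing the multiplier array with
 * adders and shifters. */
uint32_t mitchell_mul(uint16_t a, uint16_t b) {
    if (a == 0 || b == 0) return 0;
    int ka = msb_pos(a), kb = msb_pos(b);
    uint32_t fa = ((uint32_t)(a & ((1u << ka) - 1)) << 16) >> ka; /* Q16 frac */
    uint32_t fb = ((uint32_t)(b & ((1u << kb) - 1)) << 16) >> kb;
    uint32_t lsum = ((uint32_t)(ka + kb) << 16) + fa + fb; /* Q16 log sum */
    int k = lsum >> 16;                    /* integer part (absorbs carry) */
    uint64_t mant = (1u << 16) + (lsum & 0xFFFF);          /* 1 + f_sum   */
    return (uint32_t)((mant << k) >> 16);                  /* 2^k*(1+f)   */
}

int main(void) {
    printf("7*7: exact 49, mitchell %u\n", mitchell_mul(7, 7)); /* prints 48 */
    printf("5*5: exact 25, mitchell %u\n", mitchell_mul(5, 5)); /* prints 24 */
    return 0;
}
```

In an FP datapath the same idea applies to the mantissa product while the exponents add exactly, and the width of the fractional adder is one natural knob a dynamic, mixed-precision design could expose.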
