
Half-precision floating-point format

Known as: Float16, Half-float, Half-precision 
In computing, half precision is a binary floating-point computer number format that occupies 16 bits (two bytes in modern computers) in computer memory.
Wikipedia
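For orientation, the 16 bits of an IEEE 754 binary16 value are laid out as one sign bit, five exponent bits (with a bias of 15), and ten fraction bits. As a minimal sketch (not drawn from the papers below), the following Python snippet unpacks those fields using the standard library's struct module; the helper name decode_half is just an illustrative choice.

import struct

def decode_half(x):
    # Encode x as an IEEE 754 binary16 value ('e' format, Python 3.6+),
    # then split the 16 bits into sign (1 bit), exponent (5 bits, bias 15),
    # and fraction (10 bits).
    (bits,) = struct.unpack('<H', struct.pack('<e', x))
    return (bits >> 15) & 0x1, (bits >> 10) & 0x1F, bits & 0x3FF

print(decode_half(1.0))   # (0, 15, 0):   (+1) * 2**(15-15) * (1 + 0/1024)   = 1.0
print(decode_half(-2.5))  # (1, 16, 256): (-1) * 2**(16-15) * (1 + 256/1024) = -2.5

With only 11 significand bits (ten stored fraction bits plus an implicit leading one), half precision carries roughly three decimal digits of precision, a trade-off several of the papers below examine for signal processing, GPU computation, and neural-network simulation.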

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
2018
For dealing with digital signals in real time, parameters like speed of operation, hardware requirements, power, and area must…
2018
Since the introduction of Single Instruction Multiple Thread (SIMT) GPU architectures, vectorization has seldom been recommended… 
2018
The use of half-precision floating-point numbers (hFP) in simulations of spiking neural networks (SNN) was investigated. The hFP… 
2017
With the NVIDIA Tegra Jetson X1 and Pascal P100 GPUs, NVIDIA introduced hardware-based computation on FP16 numbers, also called half…
2011
All-printed half adders will be the first step toward printing an arithmetic logic unit, which will be further expanded to…
2010
This paper presents the first hardware implementation of a fully parallel decimal floating-point fused-multiply-add unit… 
2009
We are developing a large-scale reconfigurable data-path (LSRDP) based on single-flux-quantum (SFQ) circuits to establish a… 
2005
A twin-precision multiplier that uses reconfigurable power gating is presented. Employing power cut-off techniques in… 
2004
A Hologic QDR4500A dual energy X-ray absorptiometer (DXA) was used to measure body composition in 199 half-carcasses ranging from…