Can someone please explain the difference / ramification of using field type “decimal” versus “float”?
Following…
Try this general explanation of MySQL field types, which discusses decimal versus float:
http://code.rohitink.com/2013/06/12/mysql-integer-float-decimal-data-types-differences/
It was helpful to me.
Very helpful indeed, thanks! Wish I had read this a year ago… I've been using float for financial data and it has already created a bit of a headache. I think I know why now.
Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double/float. Decimals and floats/doubles cannot be compared without a cast, whereas floats and doubles can be compared with each other. Decimals also allow the encoding of trailing zeros.
Float - 7 digits (32-bit)
Double - 15-16 digits (64-bit)
Decimal - 28-29 significant digits (128-bit)
The main difference is that floats and doubles are binary floating-point types, while a decimal stores the value as a decimal floating-point type. So decimals have much higher precision and are usually used in monetary (financial) applications that require a high degree of accuracy. But performance-wise, decimals are slower than double and float types.
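The binary-vs-decimal distinction above is easy to see in a few lines of Python, where `float` is a binary (IEEE 754 double) type and `decimal.Decimal` is a base-10 type, analogous to MySQL's FLOAT/DOUBLE versus DECIMAL:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so small
# errors accumulate -- the same kind of drift that bites
# financial code stored in FLOAT/DOUBLE columns.
total_float = sum([0.1] * 10)
print(total_float)            # 0.9999999999999999, not 1.0

# A decimal type stores the value in base 10, so the sum is exact.
total_decimal = sum([Decimal("0.1")] * 10, Decimal("0"))
print(total_decimal)          # 1.0
```

Note the `Decimal("0.1")` is built from a string; constructing it from the float `0.1` would just capture the binary rounding error.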
Sadly, that website page is down / not working anymore.
There are plenty of good explanations about the difference between float and decimal out there, e.g.:
The official docs have some hints on the differences:
https://dev.mysql.com/doc/refman/8.0/en/floating-point-types.html
Floating-Point Types (Approximate Value) - FLOAT
So for most business cases / CRM calculations, we’d use decimals.
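For those business/CRM cases, the usual pattern is to round every amount to the column's scale before storing it. As a minimal sketch (the helper name `to_money` and the two-decimal scale are assumptions, matching a hypothetical DECIMAL(10,2) column; pick the rounding mode your business rules require):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_money(value):
    """Round a value to 2 decimal places, as a DECIMAL(10,2) column
    would store it. ROUND_HALF_UP is a common choice for currency,
    but it is only one option -- check your requirements."""
    return Decimal(value).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(to_money("19.995"))   # 20.00
print(to_money("2.675"))    # 2.68 -- exact, no binary-float surprise
```

Passing amounts in as strings (or Decimals) matters: `to_money(2.675)` with a bare float would already carry a binary representation error before rounding ever happens.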
Thanks a bunch!