Can someone please explain the difference / ramification of using field type “decimal” versus “float”?

Following…

Try this general explanation of MySQL field types, which discusses decimal versus float:

http://code.rohitink.com/2013/06/12/mysql-integer-float-decimal-data-types-differences/

It was helpful to me.

Very helpful indeed, thanks! Wish I had read this a year ago… I've been using float for financial data and it has already created a bit of a headache. I think I know why now.

Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double/float. Decimals cannot be compared with Floats/Doubles without a cast, whereas Floats and Doubles can be compared with each other. Decimals also allow the encoding of trailing zeros.
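A quick Python sketch of the same idea (Python's `decimal` module behaves like a decimal column type, while its plain `float` is a binary double), showing the rounding error and the trailing-zero behavior mentioned above:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so small errors creep in:
print(0.1 + 0.2)            # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False

# Decimal stores base-10 digits exactly -- the usual choice for money:
print(Decimal("0.10") + Decimal("0.20"))                     # 0.30
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True

# Decimal also preserves trailing zeros, which floats discard:
print(Decimal("1.10"))  # 1.10
print(1.10)             # 1.1
```

This is exactly the kind of drift that bites when you sum many float-typed currency values: each row is off by a tiny amount, and the totals stop matching.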

Float - 7 significant digits (32 bit)

Double - 15-16 significant digits (64 bit)

Decimal - 28-29 significant digits (128 bit)

The main difference is that Floats and Doubles are binary floating-point types, while a Decimal stores the value as a decimal floating-point type. So Decimals have much higher precision and are usually used within monetary (financial) applications that require a high degree of accuracy. But performance-wise, Decimals are slower than double and float types.
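To make "binary floating point" concrete, here is a small Python sketch of what a 64-bit double actually stores when you write 0.1: the nearest representable base-2 fraction, not the decimal you typed.

```python
from fractions import Fraction

# Fraction(float) recovers the exact base-2 value the double holds:
print(Fraction(0.1))   # 3602879701896397/36028797018963968 (slightly above 1/10)

# The hex form shows the underlying binary significand and exponent:
print((0.1).hex())     # 0x1.999999999999ap-4
```

A decimal type avoids this entirely by keeping the digits in base 10, which is why exact decimal fractions like prices round-trip cleanly.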