Numbers like 0.1 and 0.3 have nice finite representations in decimal, but no exact match in binary: their base-2 expansions repeat forever. They get approximated to the nearest representable value.
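You can see the stored approximation by constructing a `Decimal` directly from the float. A quick check in Python (whose floats are IEEE 754 doubles on essentially all platforms):

```python
from decimal import Decimal

# Decimal(float) shows the exact value the float actually stores.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.3))  # 0.299999999999999988897769753748434595763683319091796875
```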
That one happens to have an exact binary representation: 100.01. More generally, fractions whose denominator is a power of two can be represented exactly as floats, e.g. 1/2, 3/4, 7/32.
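An illustrative Python check, using the fact that `Fraction(x)` recovers the exact rational value a float stores:

```python
from fractions import Fraction

# 4.25 is 100.01 in binary: 4 + 1/4, with a power-of-two denominator.
print(Fraction(4.25))      # 17/4 -- stored exactly
print(0.5 + 0.25 == 0.75)  # True: operands and result are all exact
print(Fraction(7, 32) == Fraction(7 / 32))  # True: 7/32 survives the round trip
```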
None of these numbers has an exact binary representation, so each is rounded to the nearest representable value. When 0.1 and 0.2 are added, those rounding errors combine to give a result which is not the closest available approximation to 0.3.
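The classic demonstration in Python:

```python
import math

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Compare with a tolerance instead of exact equality:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```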
The floating point standard (IEEE 754) requires 'exact rounding' (also called correct rounding) of the basic operations: addition, subtraction, multiplication, and division. For these operations, you're guaranteed to get the representable value closest to the true result!
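This guarantee can be verified with exact rational arithmetic: a sketch in Python (assuming IEEE 754 doubles; `math.nextafter` needs Python 3.9+), checking that neither neighbouring double is closer to the true sum than the computed one:

```python
import math
from fractions import Fraction

a, b = 0.1, 0.2
true_sum = Fraction(a) + Fraction(b)  # exact sum of the values actually stored
computed = a + b                      # one correctly rounded addition

# Neither adjacent double is closer to the true sum:
for neighbour in (math.nextafter(computed, math.inf),
                  math.nextafter(computed, -math.inf)):
    assert abs(Fraction(computed) - true_sum) <= abs(Fraction(neighbour) - true_sum)
print("the computed sum is the nearest double to the true sum")
```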
The rounding rules usually give exact results for integers, but if at any stage you have a number too big to store exactly in a float, you'll get roundoff error.
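For a double that limit is 2^53, the point where the 53-bit significand runs out of room for odd integers:

```python
# A double has a 53-bit significand: integers up to 2**53 are exact.
print(float(2**53))      # 9007199254740992.0 -- exact
print(float(2**53 + 1))  # 9007199254740992.0 -- rounded, the odd integer doesn't fit
print(2.0**53 + 1 == 2.0**53)  # True
```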
Just like the basic arithmetic operations, square root is guaranteed to give the closest available approximation to the true result!
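Note that the guarantee covers the square root operation itself, not round trips through it. A small Python illustration:

```python
import math

print(math.sqrt(4.0))     # 2.0 -- exact, since the true result is representable
print(math.sqrt(2) ** 2)  # 2.0000000000000004
# sqrt(2) is correctly rounded, but squaring it adds a second rounding,
# so the round trip misses 2 by one ulp.
```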
Functions like sin, cos, log, etc. are implementation-dependent: depending on your maths library, you could get very good or very poor accuracy. Of these library functions, only sqrt() carries the exact-rounding guarantee.
The exact rounding guarantee applies to the individual add and multiply operations. When a bunch of these are combined, you get multiple roundings, and the final answer is not guaranteed to be the closest available approximation to the true result.
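Repeated addition of 0.1 shows the errors accumulating. Python's `math.fsum` is one stdlib remedy: it tracks the intermediate rounding errors and rounds only once at the end:

```python
import math

total = 0.0
for _ in range(10):
    total += 0.1       # each += is correctly rounded on its own
print(total)           # 0.9999999999999999
print(total == 1.0)    # False

# fsum tracks the error terms and performs a single final rounding:
print(math.fsum([0.1] * 10))  # 1.0
```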
Some languages (e.g. C and C++) don't promise IEEE 754 semantics by default, and special (non-default) compiler settings are needed if you want rounding to follow the IEEE 754 rules.
There are dedicated methods and software libraries for doing this; check out 'arbitrary precision arithmetic'. It's usually a lot slower than using standard floating point numbers though.
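Python ships two such libraries in its standard library: `decimal` (base-10 floating point with user-chosen precision) and `fractions` (exact rational arithmetic). A minimal sketch:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

# decimal: base-10 floating point, precision set by the user
getcontext().prec = 50
print(Decimal("0.1") + Decimal("0.2"))    # 0.3

# fractions: exact rational arithmetic, no rounding at all
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10
```

Note that `Decimal` must be built from strings here: `Decimal(0.1)` would faithfully capture the binary approximation, rounding error included.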