As part of your new program, you need to 'count' money from £0.00 to £1.00 in steps of 10p, but you notice that your computer generates some odd results. So you compile a table of the actual results versus the expected results, and the output is as follows:
| Expected Result | Actual Result | Pass/Fail |
|---|---|---|
| £0.0 | £0.0 | Pass |
| £0.1 | £0.1 | Pass |
| £0.2 | £0.2 | Pass |
| £0.3 | £0.30000000000000004 | Fail |
| £0.4 | £0.4 | Pass |
| £0.5 | £0.5 | Pass |
| £0.6 | £0.6 | Pass |
| £0.7 | £0.7 | Pass |
| £0.8 | £0.7999999999999999 | Fail |
| £0.9 | £0.8999999999999999 | Fail |
| £1.0 | £0.9999999999999999 | Fail |
Why is the program's answer different from what you might expect? Surely adding 10p each time is simple?
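If you want to reproduce the table yourself, a minimal sketch like the one below will do it. Python is assumed here purely for illustration, since the text does not name a language; its default float formatting happens to match the actual results shown above.

```python
# Minimal sketch (Python assumed; the original text does not specify a language).
# Repeatedly add 10p (0.1) to a running total and compare it with the value
# we expect at each step, printing a row per step like the table above.
total = 0.0
for step in range(11):                 # 0p, 10p, 20p, ... up to £1.00
    expected = step / 10               # the value we expect to see
    status = "Pass" if total == expected else "Fail"
    print(f"£{expected} | £{total} | {status}")
    total += 0.1                       # add 10p as a binary floating-point value
```

Running this prints the same Pass/Fail pattern as the table: the running total drifts away from the expected value at £0.3 and again from £0.8 onwards.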