I’ve recently been amazed that code which takes hours to run across many processes arrives at the exact same numerical result every time. Maybe it’s just me anthropomorphizing the computer, but it seems impressive.
By “mistake” I mean a program result that is not what it should be, given the initial conditions and the rules of the language. It would have to be caused by a random error, maybe cosmic-ray noise or something similar. I’m talking 2+2=5, not a human writing bad code: something where, if you ran the same code again with the same initial conditions, it would give the correct result (assuming the error is unlikely but not impossible).
I know this sort of thing is possible in computers (How often do computers make mistakes?), but it sounds unlikely. So is there any sort of redundancy built into Python itself, or is that built in at a deeper level? And how many floating-point operations can be done before you can expect one to be incorrect?
Bonus: What about other languages? Are there some that are more reliable in this sense than others?
Programming languages do not worry about this unlikely problem (bits in the CPU being randomly flipped by some outside force). Other forms of error are also not the application’s responsibility: network data, for example, is much more likely to be corrupted, so the protocols often have internal checks to detect errors (checksums, for example). The same is true for some storage.
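To make the checksum idea concrete, here is a minimal sketch in Python using the standard library’s CRC-32 (the payload and the flipped bit are made-up example values, not any particular protocol):

```python
import zlib

# Sender computes a CRC-32 checksum over the payload and sends both.
payload = b"some network data"
checksum = zlib.crc32(payload)

# Simulate a single bit flipped in transit.
received = bytearray(payload)
received[3] ^= 0x01

# Receiver recomputes the checksum; a mismatch means the data was corrupted.
print(zlib.crc32(bytes(received)) == checksum)  # False: corruption detected
print(zlib.crc32(payload) == checksum)          # True: intact data verifies
```

Real protocols such as TCP and Ethernet do essentially this at the packet/frame level, so the application above them rarely sees the corruption at all.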
In the rare environments where this really matters (space vehicles being the main one I know of), they run redundant applications and compare the results to see whether they match.
So, in answer to your question: these kinds of issues are not the concern of the language. They are handled either at a lower level (checksums on network packets, etc.) or at a higher level (redundancy). Generally these issues are only worried about in very rare circumstances such as space vehicles, nuclear power, etc.
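The redundancy approach can be sketched in a few lines. This is a toy illustration, not how real flight software works (which typically compares results across physically separate hardware, often with a third unit to break ties); `critical_computation` is a hypothetical stand-in for the actual workload:

```python
def critical_computation(x):
    # Hypothetical stand-in for an expensive, safety-critical calculation.
    return x * x + 1

def run_redundant(x):
    # Run the computation twice and compare; a random bit flip would
    # (almost certainly) affect only one of the two runs.
    a = critical_computation(x)
    b = critical_computation(x)
    if a != b:
        raise RuntimeError("results disagree: possible hardware fault, retry")
    return a

print(run_redundant(7))  # prints 50 when both runs agree
```

Note that this only detects a disagreement; deciding which result is correct usually requires a third vote, which is why triple modular redundancy is common in such systems.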