When transforming data to log scale for charting purposes, is it more *“correct”* in some way to **always** transform using `np.log1p` than with `np.log`, and does it break any common user expectations?

I’m building charting software with log scale capabilities, and wonder if I should use `np.log` or `np.log1p` as the default choice when transforming the data.

Here’s a vastly simplified code sample:

```python
import numpy as np
import matplotlib.pyplot as plt

def chart_with_log_scale(x, y):
    ylog = np.log(y)  # should I be using np.log1p here instead?
    plt.scatter(x, ylog)
```

Or, to put a different perspective on this: does `matplotlib` use `log1p` or `log` when it does its log transform in code such as this?

```python
import matplotlib.pyplot as plt

def chart_with_log_scale2(x, y):
    plt.scatter(x, y)
    ax = plt.gca()
    ax.set_yscale("log")
```

## Answer

> when transforming data to log scale for charting purposes, is it more “correct” in some way to always transform using `np.log1p` than with `np.log`, and does it break any common user expectations?

**It is almost never correct to use `np.log1p` instead of `np.log` if your goal is to compute log(𝑥).**

Here’s an example of a plot with the y axis in log scale, of the probability density function for the Beta distribution with 𝛼 = 2 and 𝛽 = 5:

*(figure: `logscale.png`, generated by the gnuplot script at the end of this answer)*

Here’s the same function with the y axis in log1p scale instead:

*(figure: `log1pscale.png`, generated by the gnuplot script at the end of this answer)*

If I tried to pass this off as a log scale plot of the Beta(2,5) PDF as a grad student, my advisor would probably shoot me dead on the spot.

(Exception: if your inputs are always greater than 2^{53} on a machine with IEEE 754 binary64 arithmetic, then the two functions will most likely coincide. But that is only because log(1 + 𝑥) has such low relative error from log(𝑥) on such inputs—that is, |log(1 + 𝑥) − log(𝑥)|/|log(𝑥)| = |log(𝑥⋅(1/𝑥 + 1)) − log(𝑥)|/log(𝑥) = log(1 + 1/𝑥)/log(𝑥) < 1/𝑥 < 2^{−53}, so log(1 + 𝑥) is at worst a rounding error away from log(𝑥).)
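You can check both halves of that exception numerically. This is a minimal sketch, assuming NumPy: for a huge input the two functions agree to within a rounding error, while for a small input they compute entirely different things.

```python
import numpy as np

x = 2.0 ** 60  # well above 2^53
print(np.log(x))    # ≈ 41.5888
print(np.log1p(x))  # the same to within a rounding error

y = 1e-3  # but for small inputs they are different functions entirely
print(np.log(y))    # ≈ -6.9078, i.e. log(y)
print(np.log1p(y))  # ≈ 0.0009995, i.e. log(1 + y)
```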

In a comment, you asked:

> log1p could be what I want if the values are very close to 0, as it will have better numerical stability than log, right?

The *functions* log1p and log are just mathematical functions.
Neither one has “better numerical stability” than the other:
“numerical stability” is not even a well-defined concept here, and certainly not a property of a mathematical function.
An *algorithm* for computing a mathematical function can exhibit forward or backward stability; what this property means is relative to the function it aims to compute.
But log and log1p are simply mathematical functions, not algorithms for computing functions, and as such, forward and backward stability do not apply.

**The importance of log1p is that the function log(1 + 𝑥) is well-conditioned near zero, and often turns up in numerical algorithms or algebraic rearrangements of other functions.**

*Well-conditioned* means that if you evaluate it at a point 𝑥⋅(1 + 𝛿) when you actually wanted to evaluate it at 𝑥, then the answer log(1 + 𝑥⋅(1 + 𝛿)) is equal to log(1 + 𝑥)⋅(1 + 𝜀) where 𝜀 is a reasonably small multiple of 𝛿, as long as 𝛿 is reasonably small. Here 𝛿 is the relative error of the input 𝑥⋅(1 + 𝛿) from 𝑥, and 𝜀 is the relative error of the output log(1 + 𝑥)⋅(1 + 𝜀) from log(1 + 𝑥).

**In contrast, the function log(𝑦) is ill-conditioned near 1:** evaluate log(𝑦⋅(1 + 𝛿)) when you want log(𝑦) for some point 𝑦 near 1, and what you get back may be log(𝑦)⋅(1 + 𝜀) for an *arbitrarily bad* error 𝜀, *even if the input error 𝛿 was quite small*.

For example, suppose you want to evaluate log(1.000000000000001) ≈ 9.999999999999995 × 10^{−16}. If you write `np.log(1.000000000000001)` in a Python program, the decimal constant `1.000000000000001` will be rounded to the nearest binary64 floating-point number, and so you will actually evaluate log(fl(1.000000000000001)) = log(1.0000000000000011102230246251565404236316680908203125) ≈ 1.110223024625156 × 10^{−15}.

Although 1.0000000000000011102230246251565404236316680908203125 is a good approximation to 1.000000000000001, with relative error 𝛿 < 10^{−15}, log(1.0000000000000011102230246251565404236316680908203125) is a *terrible* approximation to log(1.000000000000001), with relative error 𝜀 > 11%.
This is not the fault of `np.log`, which did an admirable job of returning the correctly rounded result to the *question we asked*.
This is because *the mathematical function* log is ill-conditioned near 1, so it magnified the tiny error 10^{−15} in the input we asked about from the input we *wanted* to ask about—and not just magnified, but magnified by a *quadrillionfold!*
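You can reproduce this magnification directly. A minimal sketch, assuming NumPy, and using `np.log1p(1e-15)` as the accurate reference value for log(1.000000000000001):

```python
import numpy as np

want = np.log1p(1e-15)            # accurate value of log(1.000000000000001)
got = np.log(1.000000000000001)   # evaluates log at the *rounded* constant
rel_err = abs(got - want) / want
print(rel_err)  # ≈ 0.11: an 11% output error from a ~1e-16 input perturbation
```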

**So if you find yourself in possession of a small real number 𝑥, and you find yourself wanting to know what log(1 + 𝑥) is, then you should use np.log1p(x) to answer this question.**
(Or you may wish to rearrange a computation in terms of log(…) so that it uses log(1 + …) instead; *e.g.*, to compute logit(𝑝) = log(𝑝/(1 − 𝑝)) for a given 𝑝 near 1/2, you are better off rewriting it as log(1 + (2𝑝 − 1)/(1 − 𝑝)), since 𝑝/(1 − 𝑝) = 1 + (2𝑝 − 1)/(1 − 𝑝).)

If you wrote `np.log(1 + x)` instead of `np.log1p(x)`, then the subexpression `1 + x` may commit a rounding error, giving 1 ⊕ 𝑥 = fl(1 + 𝑥) = (1 + 𝑥)⋅(1 + 𝛿).
Although the rounding error is small (in binary64 arithmetic, you are guaranteed that |𝛿| ≤ 2^{−53}), the log *function* may magnify it into an arbitrarily large error 𝜀 in the output.
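As a concrete sketch of such a rearrangement (assuming NumPy), one log1p-based form of logit uses the identity 𝑝/(1 − 𝑝) = 1 + (2𝑝 − 1)/(1 − 𝑝):

```python
import numpy as np

def logit_naive(p):
    # forms p/(1-p), a ratio near 1 for p near 1/2, then takes its log
    return np.log(p / (1 - p))

def logit_stable(p):
    # feeds the small increment (2p-1)/(1-p) directly to log1p
    return np.log1p((2 * p - 1) / (1 - p))

p = 0.5 + 1e-13
print(logit_naive(p))   # ≈ 4e-13
print(logit_stable(p))  # ≈ 4e-13, and stays fully accurate as p -> 1/2
```

The stable form wins precisely because log(1 + 𝑡) is well-conditioned near 𝑡 = 0, whereas the naive form asks the ill-conditioned log near 1 about a rounded ratio.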

**But if you already have a number 𝑦, even if it is near zero, and find yourself wanting log(𝑦), then np.log(y) will give a good approximation to log(𝑦), and np.log1p(y) will give a terrible one (unless 𝑦 is very large).**
This is the scenario you seem to find yourself in.
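A minimal sketch of that scenario, assuming NumPy, with a value 𝑦 near zero:

```python
import numpy as np

y = 1e-9
print(np.log(y))    # ≈ -20.72, the log you actually want
print(np.log1p(y))  # ≈ 1e-9: this is log(1 + y), a terrible stand-in for log(y)
```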

*Could* `np.log1p` ever be relevant in plotting data on a log scale?
Perhaps, if what you *compute* is 𝑥 and what you wish to *plot* is 1 + 𝑥 in log scale.
But it’s unlikely that this combination of circumstances—computing 𝑥, and plotting 1 + 𝑥 in log scale—makes sense together:

- If you have a good reason for computing 𝑥 as a proxy for 1 + 𝑥, most likely you are concerned primarily with values of 𝑥 near zero—otherwise there’s not much benefit to the representation—and therefore most likely you are plotting values of 1 + 𝑥 near 1.
- But if you are plotting values of 1 + 𝑥 near 1, then there is very little reason to use a log scale, because the closer your data points are to 1, the less difference there is between a log scale and a linear scale at all!

### log scale gnuplot

```gnuplot
set terminal pngcairo
set output "logscale.png"
set title 'log scale'
set xrange [0:1]
set logscale y
plot x**(2 - 1) * (1 - x)**(5 - 1) notitle
```

### log1p scale gnuplot

```gnuplot
set terminal pngcairo
set output "log1pscale.png"
set title 'log1p scale'
set xrange [0:1]
set yrange [1:1.1]
set logscale y 2
set ytics 1.1**(1/4.0)
plot 1 + x**(2 - 1) * (1 - x)**(5 - 1) notitle
```