Proceeding Paper

Exponential or Power Law? How to Select a Stable Distribution of Probability in a Physical System †

D.I.C.C.A., Università di Genova, Via Montallegro 1, 16145 Genova, Italy
Presented at the 4th International Electronic Conference on Entropy and Its Applications, 21 November–1 December 2017; Available online: http://sciforum.net/conference/ecea-4.
Proceedings 2018, 2(4), 156; https://doi.org/10.3390/ecea-4-05009
Published: 20 November 2017

Abstract:
A mapping of non-extensive statistical mechanics with non-additivity parameter $q \neq 1$ into Gibbs' statistical mechanics exists (E. Vives, A. Planes, PRL 88, 020601 (2002)) which allows generalization to $q \neq 1$ both of Einstein's formula for fluctuations and of the 'general evolution criterion' (P. Glansdorff, I. Prigogine, Physica 30, 351 (1964)), an inequality involving the time derivatives of thermodynamical quantities. A unified thermodynamic description of relaxation to stable states with either a Boltzmann ($q = 1$) or a power-law ($q \neq 1$) distribution of probabilities of microstates follows. If a 1D (possibly nonlinear) Fokker-Planck equation describes relaxation, then the generalized Einstein formula predicts whether the relaxed state exhibits a Boltzmann or a power-law distribution function. If this Fokker-Planck equation is associated with the stochastic differential equation obtained in the continuous limit from a 1D, autonomous, discrete, noise-affected map, then we may ascertain whether a relaxed state follows power-law statistics, and with which exponent, by looking at both the map dynamics and the noise level, without assumptions concerning the (additive or multiplicative) nature of the noise and without numerical computation of the orbits. Results agree with the simulations (J. R. Sánchez, R. Lopez-Ruiz, EPJ 143.1 (2007): 241-243) of relaxation leading to a Pareto-like distribution function.

1. The Problem

The usefulness of familiar Gibbs thermodynamics lies in its ability to provide predictions concerning systems at thermodynamical equilibrium without any detailed knowledge of the dynamics of the system. The distribution of probabilities of the microstates in canonical systems described by Gibbs' thermodynamics is proportional to a Boltzmann exponential.
No similar generality exists for those systems in a steady, stable ('relaxed') state which interact with the external world, which are kept far from thermodynamical equilibrium by suitable boundary conditions, and where the probability distribution follows a power law. (Here we limit ourselves to systems where only Boltzmann-like or power-law-like distributions are allowed.) Correspondingly, there is no way to ascertain whether the probability distribution in a relaxed state is Boltzmann-like or power-law-like but via solution of the detailed equations which rule the dynamics of the particular system of interest. In other words, if we dub 'stable distribution function' the distribution of probabilities of the microstates in a relaxed state, then no criterion exists for assessing the stability of a given probability distribution, Boltzmann-like or power-law-like, against perturbations.
Admittedly, a theory exists, the so-called 'non-extensive statistical mechanics' [1,2,3,4,5,6], which extends the formal machinery of Gibbs' thermodynamics to systems where the probability distribution is power-law-like. Non-extensive statistical mechanics is unambiguously defined once the value of a dimensionless parameter $q$ is known; among other things, this value describes the slope of the probability distribution. If $q \neq 1$ then the quantity corresponding to the familiar Gibbs entropy is not additive; Gibbs' thermodynamics and Boltzmann's distribution are retrieved in the limit $q \to 1$. Thus, if we know the value of $q$ then we know whether the distribution function of a stable, steady state of a system which interacts with the external world is Boltzmann or power-law and, in the latter case, what its slope is like. Unfortunately, the problem is only shifted: in spite of the formal exactness of non-extensive statistical mechanics, there is no general criterion for estimating $q$, with the exception, again, of the solution of the equations of the dynamics.
The aim of the present work is to find such a criterion, at least for a wide class of physical systems.
To this purpose, we recall that, in the framework of Gibbs' thermodynamics, the assumption of 'local thermodynamical equilibrium' (LTE) is made in many systems far from thermodynamical equilibrium, i.e., it is assumed that thermodynamical quantities like pressure, temperature, etc. are defined within a small mass element of the system and that these quantities are connected to each other by the same rules, like e.g., the Gibbs-Duhem equation, which hold at true thermodynamical equilibrium. If, furthermore, LTE holds at all times during the evolution of the small mass element, then the latter satisfies the so-called 'general evolution criterion' (GEC), an inequality involving total time derivatives of thermodynamical quantities [7]. Finally, if GEC holds for an arbitrary small mass element of the system, then the evolution of the system as a whole is constrained; if such evolution leads the system to a final, relaxed state, then GEC puts a constraint on relaxation.
Straightforward generalization of these results to the non-extensive case $q \neq 1$ is impossible. In this case, indeed, the very idea of LTE is scarcely useful: the $q \neq 1$ entropy being a non-additive quantity, the entropy of the system is not the sum of the entropies of the small mass elements the system is made of, and no constraint on the relaxation of the system as a whole may be extracted from the thermodynamics of its small mass elements. (For mathematical simplicity, we assume $q$ to be uniform across the system.)
All the same, an additive quantity exists which increases monotonically with the entropy (and therefore achieves a maximum if and only if the $q \neq 1$ entropy is maximum) and which reduces to Gibbs' entropy as $q \to 1$. Thus, the $q \neq 1$ case may be unambiguously mapped onto the corresponding Gibbs problem [8], and all the results above still apply. As a consequence, a common criterion of stability exists for relaxed states for both $q = 1$ and $q \neq 1$. The class of perturbations which the relaxed states satisfying such criterion may be stable against includes perturbations of $q$.
We review some relevant results of non-extensive thermodynamics in Section 2. The role of GEC and its consequences in Gibbs' thermodynamics are discussed in Section 3. Section 4 discusses the generalization of the results of Section 3 to the $q \neq 1$ case. Section 5 shows an application to a simple toy model. We apply the results of Section 5 to a class of physical problems in Section 6. Conclusions are drawn in Section 7. Entropies are normalized to Boltzmann's constant $k_B$.

2. Power-Law vs. Exponential Distributions of Probability

For any probability distribution $p_k$ defined on a set of $k = 1, \ldots, W$ microstates of a physical system, the following quantity [1]

$S_q = -\sum_k p_k^q \ln_q p_k$   (1)

is defined, where $\ln_q x \equiv \frac{x^{1-q}-1}{1-q}$ is the inverse function of $\exp_q x \equiv \left[1 + (1-q)x\right]^{\frac{1}{1-q}}$ and $q > 0$.
For an isolated (microcanonical) system, constrained maximization of $S_q$ leads to $p_k = 1/W$ for all $k$'s and to $S_q = \ln_q W$, the constraint being given by the normalization condition $\sum_k p_k = 1$.
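These definitions are easy to check numerically. The following sketch (illustrative code, not part of the paper) implements $\ln_q$ and $S_q$ and verifies the microcanonical result $S_q = \ln_q W$ for a uniform distribution:

```python
import math

def ln_q(x, q):
    # q-logarithm; reduces to the ordinary logarithm in the limit q -> 1
    return math.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    # S_q = -sum_k p_k^q ln_q(p_k), entropies normalized to k_B
    return -sum(pk**q * ln_q(pk, q) for pk in p if pk > 0.0)

# a uniform distribution over W microstates gives S_q = ln_q(W)
W, q = 8, 0.6
uniform = [1.0 / W] * W
assert abs(tsallis_entropy(uniform, q) - ln_q(W, q)) < 1e-12
```

As $q \to 1$, `tsallis_entropy` approaches the Gibbs entropy $-\sum_k p_k \ln p_k$.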
For non-isolated systems [2,8], some ($i = 1, \ldots, M$) quantities (e.g., energy, number of particles, etc.) whose values $x_{ik}$ label the $k$-th microstate and which are additive constants of motion in an isolated system become fixed only on average (the additivity of a quantity signifies that, when the amount of matter is changed by a given factor, the quantity is changed by the same factor [9]). Maximization of $S_q$ with the normalization condition $\sum_k p_k = 1$ and the further $M$ constraints $X_i \equiv \sum_k x_{ik} P_{qk} = \text{const.}$ (each with Lagrange multiplier $Y_i$ and $P_{qk} \equiv \frac{p_k^q}{\sum_k p_k^q}$; repeated indices are summed here and below) leads to $S_q = \ln_q Z_q$, $Z_q = \sum_k \exp_q\left(-Y_i F_{ik}\right)$, $F_{ik} \equiv \frac{x_{ik} - X_i}{\sum_k p_k^q}$ and to the following power-law-like probability distribution:

$p_k = \frac{\exp_q\left(-Y_i F_{ik}\right)}{Z_q}$   (2)
Remarkably, Equation (53) of [2] and Equation (6) of [3] show that a suitable rescaling of the $Y_i$'s allows us to get rid of the denominator $\sum_k p_k^q$ in the $F_{ik}$'s and to make all computations explicit, in the case $M = 1$ at least. Finally, if we apply a quasi-static transformation to a $S_q = \max$ state, then:
$dS_q = Y_i \, dX_i$   (3)
If $q \to 1$ then Equations (1) and (2) lead to Gibbs' entropy $S_{q=1} = -\sum_k p_k \ln p_k$ and to Boltzmann's exponential probability distribution, respectively.
Many results of Gibbs' thermodynamics still hold if $q \neq 1$. For example, a Helmholtz free energy $F_q$ still links $S_q$ and $Z_q$ in the usual way [2,4]. Moreover, if two physical systems $A$ and $A'$ are independent (in the sense that the probabilities of $A + A'$ factorize into those of $A$ and of $A'$), then we may still write for the averaged values of the additive quantities [2]

$X_i\left(A + A'\right) = X_i\left(A\right) + X_i\left(A'\right)$   (4)
Generally speaking, however, Equation (4) does not apply to $S_q$, which satisfies:

$S_q\left(A + A'\right) = S_q\left(A\right) + S_q\left(A'\right) + \left(1-q\right) S_q\left(A\right) S_q\left(A'\right)$   (5)
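Equation (5) can be verified directly for any pair of independent distributions; the sketch below (illustrative, with arbitrarily chosen distributions) uses the equivalent form $S_q = \frac{1 - \sum_k p_k^q}{q - 1}$:

```python
def S_q(p, q):
    # Tsallis entropy in the equivalent form (1 - sum_k p_k^q)/(q - 1)
    return (1.0 - sum(pk**q for pk in p)) / (q - 1.0)

q = 0.5
pA = [0.2, 0.3, 0.5]                      # arbitrary distribution for system A
pB = [0.6, 0.4]                           # arbitrary distribution for system A'
joint = [a * b for a in pA for b in pB]   # independence: probabilities factorize
lhs = S_q(joint, q)
rhs = S_q(pA, q) + S_q(pB, q) + (1.0 - q) * S_q(pA, q) * S_q(pB, q)
assert abs(lhs - rhs) < 1e-12             # pseudo-additivity, Equation (5)
```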

3. q = 1

Equations (4) and (5) are relevant when it comes to discussing the stability of the system $A + A'$ against perturbations localized inside an arbitrary, small subsystem $A'$. (It still makes sense to investigate the interaction of $A$ and $A'$ while dubbing them 'independent', as far as the internal energies of $A$ and $A'$ are large compared with their interaction energy [9].) Firstly, we recollect some results concerning the well-known case $q = 1$; then, we investigate the $q \neq 1$ problem.
To start with, we assume that $M = 2$; generalization to $M > 2$ follows. We are free to choose $x_{1k}$ and $x_{2k}$ to be the energy and the volume of the system in the $k$-th microstate, respectively. Then $Y_1 = \frac{\partial S_q}{\partial X_1} = \beta \sum_k p_k^q$ and $Y_2 = \frac{\partial S_q}{\partial X_2} = \beta p \sum_k p_k^q$ [4], with $\beta \equiv \frac{1}{k_B T}$, where $T = k_B^{-1} \left(\frac{\partial S_{q=1}}{\partial U}\right)_V^{-1}$, $p = -\left(\frac{\partial F_{q=1}}{\partial V}\right)_T$, $U \equiv \lim_{q \to 1} X_1$ and $V \equiv \lim_{q \to 1} X_2$ are the familiar absolute temperature, pressure, internal energy and volume, respectively. In the limit $q \to 1$ we have $\sum_k p_k^q = 1 + (1-q) S_q \to 1$, the familiar thermodynamical relationships $\left(\frac{\partial S_{q=1}}{\partial U}\right)_V = \beta$ and $\left(\frac{\partial S_{q=1}}{\partial V}\right)_U = \beta p$ are retrieved, and Equation (3) is just a simple form of the first principle of thermodynamics.
Since $q = 1$, Equation (5) ensures additivity of Gibbs' entropy. We assume $A'$ to be at thermodynamical equilibrium with itself, i.e., to maximize $S_{q=1}\left(A'\right)$ (LTE). We allow $A'$ to be also at equilibrium with the rest $A$ of the system $A + A'$, until some small, external perturbation occurs and destroys such equilibrium. The first principle of thermodynamics and the additivity of $S_{q=1}$ lead to Le Chatelier's principle [9]. In turn, such principle leads to two inequalities, $\left(\frac{\partial S}{\partial T}\right)_V > 0$ and $\left(\frac{\partial p}{\partial V}\right)_T < 0$. States in which such inequalities are not satisfied are unstable.
Let us introduce the volume $dV$, the mass density $\rho$ and the mass $\rho \, dV$ of $A'$. (Just like $\rho$, here and in the following we refer to the value of the generic physical quantity $a$ at the center of mass of $A'$ as 'the value of $a$ in $A'$'; this makes sense, provided that $A'$ is small enough.) Together with the additivity of Gibbs' entropy, arbitrariness in the choice of $A'$ ensures that $S_{q=1}\left(A + A'\right) = \int dV \rho s$, where $S_{q=1}\left(A + A'\right)$ and $s$ are Gibbs' entropy of the whole system $A + A'$ and Gibbs' entropy per unit mass, respectively; here and in the following, integrals are extended to the whole system $A + A'$. The internal energy per unit mass $u$ and the volume per unit mass ($= 1/\rho$) may similarly be introduced, as well as all the quantities per unit mass corresponding to all the $X_i$'s which satisfy Equation (4). The inequalities $\left(\frac{\partial S}{\partial T}\right)_V > 0$ and $\left(\frac{\partial p}{\partial V}\right)_T < 0$ lead to $\left(\frac{\partial s}{\partial T}\right)_V > 0$ and $\left(\frac{\partial p}{\partial \rho}\right)_T > 0$, respectively.
We relax the assumption $M = 2$. If $A'$ contains particles of $h = 1, \ldots, N$ chemical species, each with $N_h$ particles with mass $m_h$ and chemical potential $\mu_h$, then $N$ degrees of freedom add to the 2 degrees of freedom $U$ and $V$, i.e., $M = N + 2$. In the $k$-th microstate, $x_{h+2,k}$ is the number of particles of the $h$-th species. In analogy with $U$ and $V$, we write $N_h \equiv \lim_{q \to 1} X_{h+2}$. Starting from these $M$ additive quantities, different $M$-ples of coordinates (thermodynamical potentials) may be selected with the help of Legendre transforms. LTE implies minimization of the Gibbs free energy $F_{q=1} + pV = \mu_h N_h$ at constant $T$ and $p$. As for quantities per unit mass, this minimization leads to the inequality $\left(\frac{\partial \mu_h^o}{\partial c_j}\right)_{p,T} dc_h \, dc_j \geq 0$ [10], where $\mu_h^o \equiv \frac{\mu_h}{m_h}$, $c_j \equiv \frac{N_j m_j}{\sum_h N_h m_h}$, $j = 1, \ldots, N$. The identity $\sum_h c_h = 1$ reduces $M$ by 1. With this proviso, we conclude that validity of LTE in $A'$ requires:
$\left(\frac{\partial s}{\partial T}\right)_{V,N} > 0 \ ; \quad \left(\frac{\partial p}{\partial \rho}\right)_{T,N} > 0 \ ; \quad \left(\frac{\partial \mu_h^o}{\partial c_j}\right)_{p,T} dc_h \, dc_j \geq 0$   (6)

where $(\cdot)_N$ means that all $c_h$'s are kept fixed, and $\geq$ is replaced by $=$ only for $dc_h = 0$. The 1st, 2nd and 3rd inequalities in Equation (6) refer to thermal, mechanical and chemical equilibrium, respectively.
Remarkably, Equation (6) contains information on $A'$ only; $A$ has disappeared altogether. Thus, if we allow $A'$ to change in time (because of some unknown physical process occurring in $A$, which we are not interested in at the moment) but assume that LTE remains valid at all times within $A'$ followed along its center-of-mass motion ($\mathbf{v}$ being the velocity of the center of mass), then Equation (6) remains valid in $A'$ at all times. In this case, all relationships among total differentials of thermodynamic quantities, like e.g., the Gibbs-Duhem equation, remain locally valid, provided that the total differential $da$ of the generic quantity $a$ is $da = \frac{da}{dt} dt$, where $\frac{da}{dt} = \frac{\partial a}{\partial t} + \mathbf{v} \cdot \nabla a$. Thus, Equation (6) leads to the so-called 'general evolution criterion' (GEC) [7,11]
$\frac{dT^{-1}}{dt}\frac{d\left(\rho u\right)}{dt} - \rho \sum_h \frac{d\left(\mu_h^o T^{-1}\right)}{dt}\frac{dc_h}{dt} - \left[\rho^{-1} T^{-1}\frac{dp}{dt} + \left(u + \rho^{-1} p\right)\frac{dT^{-1}}{dt}\right]\frac{d\rho}{dt} \leq 0$   (7)
No matter how erratic the evolution of $A'$ is, if LTE holds within $A'$ at all times, then the (by now) time-dependent quantities $T(t)$, $\rho(t)$, etc. satisfy Equation (7) at all times.
GEC is relevant to stability. By 'stability' we refer to the fact that, according to Einstein's formula [9], deviations from the $S_{q=1} = \max$ state which lead to a reduction of Gibbs' entropy ($\Delta S_{q=1} < 0$) have vanishingly small probability $\propto \exp\left(\Delta S_{q=1}\right)$. Such deviations can, e.g., be understood as a consequence of an internal constraint which causes the deviation of the system from the equilibrium state, or as a consequence of contact with an external bath which allows changes in parameters that would be constant under total isolation. Let us characterize this deviation by a parameter $\kappa$ which vanishes at equilibrium. Einstein's formula implies that small fluctuations of $\kappa$ near the configuration which maximizes $S_{q=1}$ are Gaussian distributed with variance $-\left(\frac{\partial^2 S_{q=1}}{\partial \kappa^2}\right)^{-1}$.
Correspondingly, as far as $A'$ is at LTE, deviations of the probability distribution $p_k$ from Boltzmann's exponential distribution are also extremely unlikely. As $A'$ evolves, the instantaneous values of the $X_i$'s and the $Y_i$'s may change, but if LTE is to hold then the shape of $p_k$ remains unaffected. For example, $T$ may change in time, but the probability of a microstate with energy $E$ remains $\propto \exp\left(-\beta E\right)$. Should Boltzmann's distribution become unstable at any time, i.e., should any deviation of $p_k$ from Boltzmann's distribution ever fail to fade out, then LTE too would be violated, and Equation (7) would cease to hold. We conclude that if $p_k$ remains Boltzmann-like in $A'$ at all times, then Equation (7) remains valid in $A'$ at all times.
As for the evolution of the whole system $A + A'$, if LTE holds everywhere throughout the whole system at all times, then Equation (7) too holds everywhere at all times. In particular, let the whole system $A + A'$ evolve towards a final, relaxed state, where we maintain, as a working hypothesis, that the word 'steady' makes sense, possibly after time-averaging on some typical time scales of macroscopic physics. Since LTE holds everywhere at all times during relaxation, Equation (7) puts a constraint on relaxation everywhere at all times; as a consequence, it provides us with information about the relaxed state as well. In the following, we are going to show that some of the above results find their counterpart in the $q \neq 1$ case.

4. q ≠ 1

If $q \neq 1$ then Equation (5) implies that $S_q$ is not additive; moreover, it is not possible to find a meaningful expression for $s$ such that $S_{q \neq 1}\left(A + A'\right) = \int dV \rho s$, and the results of Section 3 fail to apply to $S_q$ (see Appendix A). All the same, even if $q \neq 1$, the quantity

$\hat{S_q} \equiv \frac{\ln\left[1 + \left(1-q\right) S_q\right]}{1-q}$   (8)

is additive and satisfies the conditions $\lim_{q \to 1} \hat{S_q} = S_{q=1}$ and

$\frac{d\hat{S_q}}{dS_q} > 0$   (9)
so that S q ^ = max if and only if S q = max [1,4,8,12]. Then, a power-law-like distribution Equation (2) corresponds to S q ^ = max . Moreover, the additivity of S q ^ makes it reasonable to wonder whether a straightforward, step-by-step repetition of the arguments of Section 3 leads to their generalization to the q 1 case. When looking for an answer, we are going to discuss each step separately.
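Numerically, $\hat{S_q}$ equals $\frac{\ln \sum_k p_k^q}{1-q}$ (a Rényi-type entropy), and its additivity for independent systems can be checked directly; the distributions below are arbitrary illustrative choices:

```python
import math

def S_q(p, q):
    # Tsallis entropy, (1 - sum p^q)/(q - 1)
    return (1.0 - sum(pk**q for pk in p)) / (q - 1.0)

def S_hat(p, q):
    # additive companion of Equation (8): ln(1 + (1-q) S_q)/(1-q)
    return math.log(1.0 + (1.0 - q) * S_q(p, q)) / (1.0 - q)

q = 0.7
pA = [0.1, 0.9]
pB = [0.25, 0.35, 0.4]
joint = [a * b for a in pA for b in pB]   # independent subsystems
# S_hat is additive even though S_q is not
assert abs(S_hat(joint, q) - (S_hat(pA, q) + S_hat(pB, q))) < 1e-12
```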
First of all, the choice of the x i k ’s does not depend on the actual value of q; then, the X i ’s are unchanged, and Equation (4) still holds as it depends only on the averaging procedure on the p k ’s. As anticipated, Equation (2) corresponds to a maximum of S q ^ , and we replace Equation (5) with
$\hat{S_q}\left(A + A'\right) = \hat{S_q}\left(A\right) + \hat{S_q}\left(A'\right)$   (10)
Since we are interested in probability distributions which maximize S q ^ , hence S q , we are allowed to invoke Equations (11) and (12) of [8] and to write the following generalization of Equation (3):
$d\hat{S_q} = \hat{Y_i} \, dX_i$   (11)

$\hat{Y_i} = \frac{Y_i}{1 + \left(1-q\right) S_q}$   (12)
Once again, we start with $M = 2$ and choose $x_{1k}$ and $x_{2k}$ to be the energy and the volume of the system in the $k$-th microstate, respectively. Together, Equations (11) and (12) and the identity $\sum_k p_k^q = 1 + \left(1-q\right) S_q$ give $\hat{Y_1} = \frac{\partial \hat{S_q}}{\partial X_1} = \frac{1}{1 + \left(1-q\right) S_q} \frac{\partial S_q}{\partial X_1} = \frac{\beta \sum_k p_k^q}{\sum_k p_k^q} = \beta$ and $\hat{Y_2} = \frac{\partial \hat{S_q}}{\partial X_2} = \frac{1}{1 + \left(1-q\right) S_q} \frac{\partial S_q}{\partial X_2} = \frac{\beta p \sum_k p_k^q}{\sum_k p_k^q} = \beta p$, i.e., we retrieve the usual temperature and pressure of the $q = 1$ case [12].
At last, Equations (4) and (10) allow us to repeat step-by-step the proof of Equation (6) and of Equation (7), provided that LTE now means that $A'$ is in a state which corresponds to a maximum of $\hat{S_q}$. This way, we draw the conclusion that GEC takes exactly the same form of Equation (7) even if $q \neq 1$. In detail, we have shown that $T$, $p$, $X_1$ and $X_2$ (i.e., $U$ and $V$) are all unchanged; the same holds for $u$ and $1/\rho$. The 2nd inequality in Equation (6) remains unchanged: indeed, this is equivalent to saying that the speed of sound remains well-defined in a $q \neq 1$ system; see e.g., [13]. Admittedly, both the entropy per unit mass and the chemical potentials change when we replace $S_{q=1}$ with $\hat{S_q}$. However, $\left(\frac{\partial \hat{S_q}}{\partial T}\right)_{V,N} = \frac{d\hat{S_q}}{dS_q} \left(\frac{\partial S_q}{\partial T}\right)_{V,N}$ has the same sign as $\left(\frac{\partial S_q}{\partial T}\right)_{V,N}$, because $\frac{d\hat{S_q}}{dS_q} > 0$ and $\left(\frac{\partial S_q}{\partial T}\right)_{V,N}$ ($\propto$ a specific heat) is $> 0$ [5]. Thus, the 1st inequality in Equation (6) still holds because of the additivity of $\hat{S_q}$. Finally, minimization of the Gibbs free energy at fixed $T$ and $p$ follows from maximization of $\hat{S_q}$ as well as from Equations (4) and (10), and the 3rd inequality in Equation (6) remains valid even if the actual values of the $\mu_h$'s may be changed.
Even the notion of stability remains unaffected. Equation (18) of [8] generalizes Einstein's formula to $q \neq 1$ and ensures that strong deviations from the maximum of $\hat{S_q}$ are exponentially unlikely. As a further consequence of the generalized Einstein formula, if the deviation is characterized by a parameter $\kappa$ which vanishes at equilibrium, then Equation (21) of [8] ensures that small fluctuations of $\kappa$ near the configuration which maximizes $\hat{S_q}$ are Gaussian distributed with variance $-\left(\frac{\partial^2 \hat{S_q}}{\partial \kappa^2}\right)^{-1}$ (and $q \neq 1$ fluctuations may be larger than $q = 1$ fluctuations).
In spite of Equation (5), Equation (9) allows us to extend some of our results to $S_q$. We have seen that configurations maximizing $\hat{S_q}$ also maximize $S_q$ (where $S_q$ is considered for the whole system $A + A'$). Analogously, Equation (9) implies $-\left(\frac{\partial^2 \hat{S_q}}{\partial \kappa^2}\right)^{-1} \geq -\left(\frac{\partial^2 S_q}{\partial \kappa^2}\right)^{-1}$. Given the link between $S_q$ and Equation (2), we may apply step-by-step our discussion of Boltzmann's distribution to power-law distributions. By now, the role of $\hat{S_q}$ is clear: it acts as a dummy variable, whose additivity allows us to extend our discussion of Boltzmann's distribution to power-law distributions in spite of the fact that $S_q$ is not additive.
Our discussion suggests that if relaxed states exist, then thermodynamics provides a common description of relaxation regardless of the actual value of $q$. As a consequence, thermodynamics may provide information about the relaxed states which are the final outcome of relaxation. Since relaxed states are stable against fluctuations and are endowed with probability distributions of the microstates, such information involves the stability of these probability distributions against fluctuations. Since thermodynamics provides information regardless of $q$, such information involves Boltzmann exponential and power-law distributions on an equal footing. We are going to discuss such information in depth for a toy model in Section 5. In spite of its simplicity, the structure of its relaxed states is far from trivial.
Below, it turns out to be useful to define the following quantities. In the $q = 1$ case we introduce the contribution $\Pi_{q=1}$ to $\frac{dS_{q=1}\left(A+A'\right)}{dt}$ of the irreversible processes occurring in the bulk of the whole system $A + A'$ ($\Pi_{q=1}$ is often referred to as $\frac{d_i S}{dt}$ in the literature); by definition, such processes raise $S_{q=1}$ by an amount $dt \times \Pi_{q=1}$ in a time interval $dt$. During relaxation, $\Pi_{q=1}$ is a function of time $t$, and $\Pi_{q=1}(t)$ is constrained by Equation (7). A straightforward generalization of $\Pi_{q=1}$ to $q \neq 1$ is $\hat{\Pi}_q$, where $dt \times \hat{\Pi}_q$ is the growth of $\hat{S_q}$ due to irreversible processes in the bulk; $\hat{\Pi}_q(t)$ is constrained by the $q \neq 1$ version of Equation (7) in exactly the same way as in the $q = 1$ case. Finally, it is still possible to define $\Pi_q$ such that the irreversible processes occurring in the bulk of the whole system $A + A'$ raise $S_q$ by an amount $dt \times \Pi_q$. As usual by now, $\lim_{q \to 1} \hat{\Pi}_q = \lim_{q \to 1} \Pi_q = \Pi_{q=1}$ and $\Pi_q = \frac{d\hat{S_q}}{dS_q} \times \hat{\Pi}_q$. We provide an explicit expression for $\Pi_q$ in our toy model below.

5. A Toy Model

5.1. A Simple Case

The discussion of Section 4 does not rely on a particular choice of the $X_i$'s, as the latter may be changed via Legendre transforms, and their choice obviously leaves the actual amount of heat produced in the bulk unaffected. Moreover, it holds regardless of the actual value of $M$ as long as $dc_h = 0$. As an example, we may think of an $M = 1$ system with just one chemical species, where the volume $x_{2k}$ of the system in the $k$-th microstate is fixed and the energy $x_{1k}$ of the $k$-th microstate may change because of exchange of heat with the external world. Finally, our discussion is not limited to three-dimensional systems. We focus on a toy model with just 1 degree of freedom, which we suppose to be a continuous variable (say, $x$) for simplicity, so that we may replace $p_k$ with a distribution function $P(x,t)$ satisfying the normalization condition $\int dx \, P = 1$ at all times. We do not require that $x$ retains its original meaning of energy: it may as well be the position of a particle. Here and below, integrals are performed on the whole system unless otherwise specified. The variable $x$ runs within a fixed 1-D domain, which acts as a constant $V$. In our toy model, the impact of a driving force $A(x)$ is contrasted by a diffusion process with constant and uniform diffusion coefficient $D = \alpha_D T > 0$. Following [14], we write the equation for $P(x,t)$ in the form of a nonlinear Fokker-Planck equation:
$\frac{\partial P}{\partial t} + \frac{\partial J}{\partial x} = 0 \ ; \quad J = \frac{1}{\eta}\left[A P - D q' P^{q'-1} \frac{\partial P}{\partial x}\right]$   (13)
Here $\eta$ is a constant, effective friction coefficient, $q' \equiv 2 - q$, and we drop the dependence on both $x$ and $t$ for simplicity here and in the following, unless otherwise specified.
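A minimal explicit finite-difference integration of Equation (13) can illustrate the relaxation it describes. The drift $A(x) = -x$, the parameter values, the grid and the initial condition below are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

eta, D, qp = 1.0, 0.5, 1.5             # qp stands for q' = 2 - q, here q = 0.5
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
A = -x                                 # hypothetical linear restoring drift
P = np.exp(-x**2)
P /= P.sum() * dx                      # normalize the initial condition
dt = 1e-4
for _ in range(20000):
    # J = (A P - D d(P^{q'})/dx)/eta, using D q' P^{q'-1} dP/dx = D d(P^{q'})/dx
    J = (A * P - D * np.gradient(P**qp, dx)) / eta
    P = P - dt * np.gradient(J, dx)    # dP/dt = -dJ/dx
    P = np.clip(P, 0.0, None)
    P /= P.sum() * dx                  # keep the normalization int P dx = 1
```

With these parameters the relaxed profile is peaked at the origin, consistent with the $\exp_q$ form of the stationary solution discussed below.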
The value of $q$ is assumed to be both known and constant in Equation (13). Furthermore, if the value of $q$ is known and a relaxed state exists, then Equation (13) describes relaxation. Now, if we allow $q$ to change in time ($q = q(t)$) much more slowly than the relaxation described by Equation (13), then the evolution of the system is a succession of relaxed states, each described by the thermodynamics of Section 4. Unfortunately, the available GEC-based stability criteria [11] are useless here, as they have been derived for perturbations at constant $q$ only. In order to solve this conundrum, and given the fact that the evolution of the system is a succession of relaxed states, we start by collecting some information about such states.

5.2. Relaxed States

In relaxed states $\frac{\partial}{\partial t} \to 0$ and Equation (13) implies

$\frac{\partial J}{\partial x} = 0$   (14)
where the value of $J$ depends on the boundary conditions (e.g., the flow of $P$ across the boundaries). In particular, the probability distribution [15]:

$P_{J=0,q} \propto \exp_q\left(\int^x dx' \, \frac{A}{D}\right)$   (15)
solves Equation (13) if and only if $J = 0$ everywhere, i.e., if and only if the quantity

$\Pi_{q'} = \frac{\eta}{D} \int dx \, \frac{|J|^2}{P}$   (16)

vanishes. The solution Equation (15) is retrieved in applications; see e.g., Equation (6) in [16] and Equation (2) of [17]. The proportionality constant on the R.H.S. of Equation (15) is fixed by the normalization condition $\int dx \, P = 1$. The quantity $\Pi_{q'}$ in Equation (16) is the amount of entropy [14]

$S_{q'} = \int dx \, \frac{P - P^{q'}}{q' - 1}$   (17)
produced per unit time in the bulk; even if $\frac{\partial}{\partial t} \neq 0$, Equations (10), (11), (42) and (44) of [14] give:

$\Pi_{q'} - \frac{dS_{q'}}{dt} = \frac{1}{D} \int dx \, A J$   (18)
and on the R.H.S. of Equation (18) one identifies 'the entropy flux, representing the exchanges of entropy between the system and its neighborhood per unit time', in the words of [14]. In relaxed states such amount is precisely equal to $\Pi_{q'}$ because $\frac{dS_{q'}}{dt} = 0$.
Admittedly, Equations (16) and (17) deal with $q'$, rather than with $q$. In contrast, it is $q$ which appears in Equation (15). Replacing $q'$ with $q$ is equivalent to replacing $1 - q$ with $q - 1$; however, the duality $q \leftrightarrow q'$ of non-extensive statistical mechanics (see Section 2 of [18] and Section 6 of [19]) ensures that no physics is lost this way ($S_{q'}$ replaces $S_q$, etc.). Following Section III of [20], we limit ourselves to $q' < 2$ for $D > 0$; the symmetry $q \leftrightarrow q'$ therefore allows us to focus further our attention on the interval $0 < q \leq 1$ [8]. Not surprisingly, if $q \to 1$ then $q' \to 1$, and Equations (15) and (17) reduce to Boltzmann's exponential (where $\int^x dx' \, \frac{A}{D}$ corresponds to $-\beta U$) and to Gibbs' entropy, respectively.
At first glance, $S_q$ is defined in two different ways, namely Equations (1) and (17). However, the identities $\sum_k p_k = 1$ and $\int dx \, P = 1$ allow Equations (1) and (17) to agree with each other, provided that we identify $\sum_k p_k^q$ and $\int dx \, P^q$; according to Equation (15), this is equivalent to a rescaling of $\alpha_D$ and $x$. Comparison of Equations (1) and (17) explains why it is not possible to find a meaningful expression for the entropy per unit mass $s$ unless $q \to 1$; see Appendix A.
According to [14], an H-theorem exists for Equation (13) even if $J \neq 0$, as far as $\lim_{|x| \to \infty} P = 0$, $\lim_{|x| \to \infty} \frac{dP}{dx} = 0$ and $A$ is 'well-behaved at infinity', i.e., $\lim_{|x| \to \infty} A P = 0$; relaxed states minimize a suitably defined Helmholtz free energy. 'Equilibrium' ($S_q = \max$) occurs [2,15,20] when Equation (15) holds, i.e., for $\frac{\partial}{\partial t} \to 0$, $J \to 0$ and $\Pi_{q'} = 0$; boundary conditions may keep the relaxed system 'far from equilibrium' ($\frac{\partial}{\partial t} \to 0$, $J \neq 0$, $\Pi_{q'} > 0$).
If $J = 0$ then Equations (16) and (18) ensure that no exchange of entropy between the system and its neighborhood occurs and that $\Pi_{q'} = 0$ regardless of $q'$. The probability distribution in the relaxed state of an isolated system is a Boltzmann exponential. It is the interaction with the external world (i.e., those boundary conditions which keep $J$ far from 0) which allows the probability distribution in the relaxed state of a non-isolated system to differ from a Boltzmann exponential.
If $J \neq 0$ then the solution of Equation (13) in a relaxed state satisfies the H-theorem quoted above. In the following we are going to discuss the 'weak dissipation' limit of small (but non-vanishing) $|J|$, which corresponds to weakly dissipating systems, as Equation (16) gives $\Pi_{q'} = O\left(|J|^2\right)$.

5.3. Weak Dissipation

The definition of $J$ and Equation (16) show how $\Pi_{q'}$ depends on $q'$. We may write this dependence more explicitly in the weak dissipation limit. We show in Appendix B that:

$\Pi_z = \Pi_{z=0} + \sum_{n=1}^{\infty} a_n z^n$   (19)

$a_n = \left(-1\right)^{n-1} \frac{2 J}{\left(n-1\right)!} \int_0^{u_1} du \, A(u) \left[1 + \left(-1\right)^n \ln\left(P_0 + \int_0^u du' \, A(u')\right)\right] \left[\ln\left(P_0 + \int_0^u du' \, A(u')\right)\right]^n$   (20)

$P_0 = \frac{1}{D} \int_0^{u_1} du \, \exp\left(\int_0^u du' \, A(u')\right)$   (21)

$\int_0^{u_1} du \, A(u) = 1$   (22)

where $\Pi_{z=0} \equiv \Pi_{q'=1} = \Pi_{q=1}$, $z \equiv q' - 1 = 1 - q$ and $0 \leq z < 1$ as $0 < q \leq 1$. Together, Equations (19)–(22) allow computation of $\Pi_z$ (hence, of $\Pi_{q'}$) once $A(u)$ and $D$ are known.

5.4. Stable Distributions of Probability

The evolution of the system is a succession of relaxed states, whose nature depends on J. If J = 0 then Gibbs’ statistics holds and the probability distribution of the relaxed state is a Boltzmann exponential.
For weakly dissipating systems, $|J|$ is small and each relaxed state is approximately an equilibrium ($S_z = \max$, hence $\hat{S_z} = \max$). It follows that Equation (15) describes the probability distribution and that small fluctuations of $\kappa$ near a relaxed state are Gaussian distributed with variance $-\left(\frac{\partial^2 \hat{S_z}}{\partial \kappa^2}\right)^{-1}$.
Now, stability requires that fluctuations, once triggered, relax back to the initial equilibrium state sooner or later. In other words, the larger the variance, the larger the fluctuations of $\kappa$ which the relaxed state is stable against. Accordingly, the relaxed state which is stable against the fluctuations of $\kappa$ of largest variance corresponds to $\frac{\partial^2 \hat{S_z}}{\partial \kappa^2} = 0$. Then, Equation (9) gives $\frac{\partial^2 S_z}{\partial \kappa^2} \propto \frac{\partial^2 \hat{S_z}}{\partial \kappa^2} = 0$.
Arbitrariness in the definition of $\kappa$ allows us to identify it with a perturbation of $z$ (so that $d\kappa = dz$). Moreover, for small $|J|$ (i.e., negligible entropy flux across the boundary), Equation (18) makes any increase of $S_z$ in a given time interval $\Delta t$ equal to $\Delta t \times \Pi_z$, so that $\Delta t \times \frac{d^2 \Pi_z}{dz^2} = \frac{\partial^2 S_z}{\partial z^2}$. Thus, the most stable probability distribution (i.e., the probability distribution which is stable against the fluctuations of $z$ of largest amplitude) is given by Equation (15) with $q = 1 - z_c$ and $z_c$ such that:
$\left. \frac{d^2 \Pi_z}{dz^2} \right|_{z = z_c} = 0 \ ; \quad 0 < z_c < 1$   (23)
According to Equation (23), $\frac{d\Pi_z}{dz}$ achieves an extremum at $z = z_c$. In order to ascertain whether this extremum is a maximum or a minimum, we recall that the change $dS_z = dz \times \left. \frac{dS_z}{dz} \right|_{z_c} = dz \times \Delta t \times \left. \frac{d\Pi_z}{dz} \right|_{z_c}$ in $S_z$, due to a fluctuation $dz$ of $z$ around $z = z_c$ occurring in a time interval $\Delta t$, is $\geq 0$ (this is true for $dS_z = 0$ as fluctuations involve irreversible processes, and for $dS_z \neq 0$ as Equation (7) describes relaxation the same way regardless of the value of $z$); it achieves its minimum value $0$ if $J = 0$ and the relaxed state is a true equilibrium ($S_{z=z_c} = \max$, $\left. \frac{dS_z}{dz} \right|_{z_c} \propto \left. \frac{d\Pi_z}{dz} \right|_{z_c} = 0$). In the weak dissipation limit the structure of the relaxed state is perturbed only slightly, and we may still reasonably assume $\left. \frac{d\Pi_z}{dz} \right|_{z_c} = \min$ even if its value $O\left(|J|^2\right)$ is $\neq 0$ (again, Equation (7) acts the same way). In agreement with Equation (23), we obtain:
$\left. \frac{d^3 \Pi_z}{dz^3} \right|_{z = z_c} > 0$   (24)
Remarkably, Equations (23) and (24) hold regardless of the actual relaxation time of the fluctuation; they therefore also apply to the slow dynamics of $z(t)$. We stress that Equations (23) and (24) have by no means been shown to ensure the actual existence of a relaxed configuration in the weak dissipation limit with something like a most stable probability distribution. But if such a configuration exists, then its distribution behaves as a power law with exponent $\frac{1}{1-q} = z_c^{-1}$ if $0 < z_c < 1$, where $z_c$ corresponds to a minimum of $\frac{d\Pi_z}{dz}$ according to Equations (23) and (24). We rule out the Boltzmann exponential in this case; indeed, if the power law is stable at all, then it is stable against larger fluctuations than the Boltzmann exponential, because fluctuations are larger for $0 < z < 1$ than for $z = 0$ [8]. Together, Equations (19)–(24) provide us with $z_c$ once $A(u)$ and $D$ are known. We discuss an application below.
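As an illustration of the criterion of Equations (23) and (24), the sketch below locates $z_c$ for a hypothetical, invented expansion $\Pi_z = \Pi_{z=0} + \sum_n a_n z^n$; in a real application the coefficients $a_n$ would come from Equations (19)–(22) once $A(u)$ and $D$ are known:

```python
import numpy as np

# invented coefficients, chosen only so that d^2 Pi/dz^2 changes sign inside (0, 1)
a = [0.0, -0.30, -0.06, 0.02, 0.02]
z = np.linspace(0.0, 1.0, 100001)
Pi = sum(an * z**n for n, an in enumerate(a))
d2 = np.gradient(np.gradient(Pi, z), z)      # d^2 Pi / dz^2
d3 = np.gradient(d2, z)                      # d^3 Pi / dz^3
# Equation (23): zero of the second derivative; Equation (24): positive third derivative
idx = np.where((d2[:-1] * d2[1:] < 0.0) & (d3[:-1] > 0.0))[0]
z_c = z[idx[0]]
q_c = 1.0 - z_c            # the power law then has exponent 1/(1 - q_c) = 1/z_c
```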

6. The Impact of Noise on 1D Maps

6.1. Boltzmann vs. Power Law

We apply the results of Section 5 to the description of the impact of noise on maps. Let us introduce a discrete, autonomous, one-dimensional map
Q_{i+1} = G(Q_i)    (25)
where i = 0, 1, 2, …, Q_i ≥ 0 for all i's, the initial condition Q_{i=0} = Q_0 is known, and G is a known function of its argument. If the system evolves throughout a time interval τ, then Equation (25) leads to the differential equation dx(t)/dt = A(x), provided that we define A(x) ≡ G(x) − x, identify Q_{i+1} and Q_i with x(t + Δt)/Δt and x(t)/Δt respectively, write t = i × Δt, and consider a time increment Δt ≪ τ ('continuous limit'). We suppose A(x) to be integrable and well-behaved at infinity (see Section 5).
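In code, the continuous limit amounts to reading the drift A(x) = G(x) − x off the map. The sketch below uses a placeholder G shaped like Equation (27) further down; the specific form and the parameter values r, a are illustrative assumptions, and any well-behaved map function can be substituted.

```python
import math

def G(x, r=2.0, a=0.8):
    # placeholder map function with the structure of Equation (27);
    # r and a are illustrative values, not prescribed by the derivation
    return r * x * math.exp(-abs(1.0 - a) * x)

def A(x):
    # drift of the continuous-limit equation dx/dt = A(x)
    return G(x) - x

# Fixed points of the map are exactly the zeros of the drift:
# x* = 0 always, and r * exp(-|1-a| x*) = 1 gives x* = ln(r) / |1-a|.
```

This correspondence is what lets the criterion below operate on G directly, without iterating the map.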
Map Equation (25) includes no noise. Noise may be either additive or multiplicative, and may affect Q_i at any 'time' i; for instance, it may perturb Q_0. In the continuous limit, we introduce a distribution function P(x,t) such that P(x,t)dx is the probability of finding the coordinate in the interval between x and x + dx at the time t′ ≡ η × t. We discuss the role of the constant η > 0 below. In order to describe the impact of noise, we modify the differential equation above as follows:
η dx/dt = A(x) + h(x,t) ζ(t),  where A(x) ≡ G(x) − x    (26)
where h(x,t) ≡ [P(x,t)]^{z/2} with 0 ≤ z < 1, the noise ζ(t) satisfies ⟨ζ⟩ = 0 and ⟨ζ(t)ζ(t′)⟩ = 2ηD δ(t − t′), and the brackets ⟨⟩ denote time average [14]. We leave the value of z unspecified.
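Equation (26) can be integrated numerically, except that the noise amplitude h = P^{z/2} depends on the unknown density P itself. One crude workaround is to evolve an ensemble of walkers and re-estimate P from a histogram at each step. Everything in the sketch below (step sizes, bin counts, the reflecting boundary keeping x ≥ 0) is an illustrative assumption, not part of the paper's derivation.

```python
import math
import random

def evolve_ensemble(A, z, D=0.1, eta=1.0, N=1000, dt=1e-3, steps=2000,
                    xmax=20.0, bins=40, seed=1):
    """Euler-Maruyama sketch for  eta*dx/dt = A(x) + P(x,t)**(z/2) * zeta(t),
    with <zeta(t) zeta(t')> = 2*eta*D*delta(t-t'). The density P is estimated
    from the ensemble itself, so the equation is effectively nonlinear."""
    rng = random.Random(seed)
    xs = [rng.uniform(0.0, xmax / 2.0) for _ in range(N)]
    width = xmax / bins
    for _ in range(steps):
        counts = [0] * bins
        for x in xs:
            counts[min(int(x / width), bins - 1)] += 1
        P = [c / (N * width) for c in counts]   # normalized histogram density
        for i, x in enumerate(xs):
            h = P[min(int(x / width), bins - 1)] ** (0.5 * z)
            x += A(x) * dt / eta + h * math.sqrt(2.0 * D * dt / eta) * rng.gauss(0.0, 1.0)
            xs[i] = min(max(x, 0.0), xmax)      # keep x in [0, xmax]
    return xs
```

With pure damping A(x) = −x and moderate noise, the ensemble relaxes towards a state concentrated near x = 0, as expected for a Brownian-motion-like balance of drift and diffusion.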
According to the discussion of Equation (18) of [20], Equation (26) is associated with Equation (13), which governs P(x,t)dx. The choice of D in Equation (13) is equivalent to the choice of the noise level in Equation (26), and our assumption in Section 5 that D is constant and uniform means just that the noise level is the same throughout the system at all times. Well-behavedness of A at infinity allows the H-theorem to apply to Equation (13). Let a relaxed state exist as an outcome of the evolution described by Equation (13). We know nothing about |J|. For the moment, let us discuss the case J ≠ 0.
If J ≠ 0, then the definition of J in Equation (13) allows us to choose a value of η large enough that the weak dissipation limit of small |J| applies. Once the dynamics (i.e., the dependence of G on its own argument) and the noise level (i.e., D) are known, Equations (19)–(22) allow us to compute the value z_c of z which minimizes dΠ_z/dz. According to our discussion of Equations (23) and (24) in Section 5, if such a minimum exists and 0 < z_c < 1, then the most stable probability distribution is power-law-like with exponent 1/z_c. Otherwise, no relaxed state may exist for J ≠ 0. If, nevertheless, such a state exists, then J = 0 and its probability distribution is a Boltzmann exponential.
We have just written down a criterion to ascertain whether the outcome Q_i of a map Equation (25) follows an exponential or a power-law distribution function as i → ∞ (provided that such a distribution function may actually be defined in this limit) whenever the map is affected by noise, regardless of the nature (additive or multiplicative) of the latter. Only the map dynamics G(Q_i) and the noise level D are required. In contrast with the conventional treatment, no numerical solution of Equation (25) is required and, above all, q is no longer an ad hoc input. This is the criterion sought in Section 1.

6.2. An Example

As an example, we consider the map (25) where
G(x) = r x exp(−|1 − a| x)    (27)
and r and a are real, positive numbers. For typical values a = 0.8 and 1 < r < 7, Equations (25) and (27) are relevant in econophysics, where x and P are the wealth and the distribution of wealth, respectively. We refer to Ref. [21], in particular to its Equation (2), where noise is built into the initial conditions (which are completely random); after a transient, the system relaxes to a final, asymptotic state, left basically unaffected by fluctuations.
We assume a = 0.8 everywhere in the following. Figure 1 displays d²Π_z/dz² (normalized to 2J) vs. z, as computed from Equations (19)–(22) and (27) at various values of r and with the same value D = 0.1. When dealing with Equation (19) we have taken into account powers of z up to z⁷ (included). We performed all algebraic computations and definite integrals with the help of MATHCAD software. If r ≤ 1, then no z_c is found which satisfies both Equations (23) and (24), so that a Boltzmann distribution is expected to describe the asymptotic dynamical state of the system. This is far from surprising, as only damping counteracts noise in Equation (26) in the r → 0 limit, as in Brownian motion. In contrast, if r > 1 then all z_c's lie well inside the interval 0 < z_c < 1, and a power law is expected to hold; the corresponding exponent depends on r only weakly (see Figure 2), as the values of the z_c's are quite near to each other. Finally, Figure 3 displays how z_c depends on D (i.e., the noise level) at fixed r; it turns out that noise tends to help relaxation to Boltzmann's distribution, as expected.
Our results seem to agree with the numerical simulations reported in [21]. If r < 1, then the average value of x relaxes to zero (just as predicted by the standard analysis of Equation (25) in the zero-noise case) and random fluctuations occur. In contrast, if r > 1 then the typical amplitude of the fluctuations is much larger; nevertheless, a distribution function is clearly observed, which exhibits a distinct power-law, Pareto-like behaviour. The exponent is 2.21, in good agreement with the values displayed in our Figure 2. (The exponent in Pareto's law is ≈ 2.15.) We stress that we have obtained our results with no numerical solution of Equation (25) and with no postulate concerning non-extensive thermodynamics, i.e., no assumption on q.
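A standard way of extracting a Pareto-like exponent from simulated orbits, for comparison with the value 2.21 quoted above, is the Hill estimator. The sketch below is a generic diagnostic, not the procedure actually used in [21]; the tail fraction is an illustrative choice.

```python
import math
import random

def hill_exponent(samples, tail_frac=0.1):
    """Hill estimator of alpha for a tail P(X > x) ~ x**(-alpha).
    Uses the upper tail_frac fraction of the sorted sample."""
    xs = sorted(samples)
    k = max(2, int(len(xs) * tail_frac))
    x_k = xs[-k - 1]                 # threshold order statistic
    tail = xs[-k:]                   # the k largest observations
    return k / sum(math.log(x / x_k) for x in tail)
```

Applied to synthetic Pareto samples with a known exponent, the estimator recovers it to within its statistical error (roughly alpha/sqrt(k)), which makes it a convenient cross-check against the z_c predicted by Equations (19)–(24).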

7. Conclusions

Gibbs' statistical mechanics describes the distribution of probabilities of the microstates of (grand-)canonical systems at thermodynamical equilibrium with the help of Boltzmann's exponential. In contrast, this distribution follows a power law in stable, steady ('relaxed') states of many physical systems. With respect to a power-law-like distribution, non-extensive statistical mechanics [1,2] formally plays the same role played by Gibbs' statistical mechanics with respect to the Boltzmann distribution: a relaxed state corresponds to a constrained maximum of Gibbs' entropy and of its generalization S_q in Gibbs' and non-extensive statistical mechanics respectively. Generalization of some results of Gibbs' statistical mechanics to non-extensive statistical mechanics is available; the latter depends on the dimensionless quantity q and reduces to Gibbs' statistical mechanics in the limit q → 1, just like S_q reduces to Gibbs' entropy S_{q=1} in the same limit. The quantity q measures the lack of additivity of S_q and provides us with the slope of the power-law-like distribution, the Boltzmann distribution corresponding to q = 1: S_q is an additive quantity if and only if q = 1.
The overwhelming success of Gibbs' statistical mechanics lies in its ability to provide predictions (e.g., the positivity of the specific heat at constant volume) even when little or no information on the detailed dynamics of the system is available. Stability provides us with an example of such predictions. According to Einstein's formula, deviations from thermodynamic equilibrium which lead to a significant reduction of Gibbs' entropy (ΔS_{q=1} < 0) have vanishingly small probability ∝ exp(ΔS_{q=1}). In other words, significant deviations of the probability distribution from the Boltzmann exponential are exponentially unlikely in Gibbs' statistical mechanics.
Moreover, additivity of S_{q=1} and of other quantities, like the internal energy, allows us to write all of them as the sum of the contributions of all the small parts the system is made of. If, furthermore, every small part of a physical system corresponds locally to a maximum of S_{q=1} ('local thermodynamical equilibrium', LTE) at all times during the evolution of the system, then this evolution is bound to satisfy the so-called 'general evolution criterion' (GEC) [7], an inequality involving total time derivatives of thermodynamical quantities which follows from the Gibbs-Duhem equation. In particular, GEC applies to the relaxation of perturbations of a relaxed state of the system, if any such state exists.
In contrast, lack of a priori knowledge of q limits the usefulness of non-extensive statistical mechanics; for each problem, such knowledge requires either solving the detailed equations of the dynamics (e.g., the relevant kinetic equation ruling the distribution probability of the system of interest) or performing a posteriori analysis of experimental data, thus reducing the attractiveness of non-extensive statistical mechanics.
However, it is possible to map non-extensive statistical mechanics into Gibbs' statistical mechanics [1,4,8]. A quantity Ŝ_q exists which is both additive and a monotonically increasing function of S_q for arbitrary q. Thus, relaxed states of non-extensive thermodynamics correspond to Ŝ_q = max, and additivity of Ŝ_q allows suitable generalization of both Einstein's formula and the Gibbs-Duhem equation to q ≠ 1 [8]; these generalizations ensure, respectively, that strong deviations from this maximum are exponentially unlikely and that LTE and GEC still hold, formally unaffected, in the q ≠ 1 case.
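The explicit form of Ŝ_q is not spelled out in this excerpt; one standard choice with the stated properties (additive over independent subsystems, monotonically increasing in S_q) is the Rényi-type map Ŝ_q = ln[1 + (1 − q) S_q]/(1 − q), used here purely as an illustration. The sketch below checks numerically both the non-additivity of S_q (its pseudo-additivity cross term) and the exact additivity of the mapped entropy.

```python
import math

def tsallis(p, q):
    # Tsallis entropy S_q of a discrete distribution (k_B = 1)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def S_hat(S, q):
    # additive companion entropy: a monotonically increasing function of S_q
    return math.log(1.0 + (1.0 - q) * S) / (1.0 - q)

pA = [0.5, 0.3, 0.2]                       # subsystem A
pB = [0.6, 0.4]                            # subsystem B
q = 1.7                                    # illustrative non-extensivity
pAB = [a * b for a in pA for b in pB]      # independent joint system

SA, SB, SAB = tsallis(pA, q), tsallis(pB, q), tsallis(pAB, q)
# S_q obeys pseudo-additivity: SAB = SA + SB + (1-q)*SA*SB,
# while S_hat(SAB) = S_hat(SA) + S_hat(SB) exactly.
```

The cross term (1 − q) S_A S_B is precisely the lack of additivity measured by q, and it vanishes in the Gibbs limit q → 1.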
These generalizations allow thermodynamics to provide a unified framework for the description of both the relaxed states (via Einstein's formula) and the relaxation processes leading to them (via GEC), regardless of the value of q, i.e., of the nature of the probability distribution of the microstates in the relaxed state: power law (q ≠ 1) vs. Boltzmann exponential (q = 1).
For further discussion we have focussed our attention on the case of a continuous, one-dimensional system described by a nonlinear Fokker-Planck equation [14], where the impact of a driving force is counteracted by diffusion (with diffusion coefficient D). It turns out that it is the interaction with the external world which allows the probability distribution in the relaxed states to differ from a Boltzmann exponential. Moreover, Einstein's formula in its generalized version implies that the value z_c of z ≡ 1 − q for the 'most stable probability distribution' (i.e., the probability distribution of the relaxed state which is stable against fluctuations of largest amplitude) corresponds to a minimum of dΠ_z/dz, Π_z × dt being the amount of S_z produced in a time interval dt by irreversible processes occurring in the bulk of the system. Finally, if a relaxed state exists and 0 < z_c < 1, then the most stable probability distribution is a power law with exponent 1/z_c; otherwise, it is a Boltzmann exponential. Since Π_z depends just on z, D and the driving force, the value of z_c, i.e., the selection of the probability distribution, depends on the physics of the system only (the diffusion coefficient and the driving force): a priori knowledge of q is no longer required.
We apply our result to the Fokker-Planck equation associated with the stochastic differential equation obtained in the continuous limit from a one-dimensional, autonomous, discrete map affected by noise. Since no assumption is made on q, the noise may be either additive or multiplicative, and the Fokker-Planck equation may be either linear or nonlinear. If the system evolves towards a state which is stable against fluctuations, then we may ascertain whether a power-law statistics describes such a state, and with which exponent, once the dynamics of the map and the noise level are known, without actually computing many forward orbits of the map.
As an example, we have analyzed the problem discussed in [21], where a particular one-dimensional discrete map affected by noise leads to an asymptotic state described by a Pareto-like law for selected values of a control parameter. Our results agree with those of [21] as far as both the exponent of the power law and the range of the control parameter are concerned, with no numerical simulation of the dynamics and no assumption about q.
Extension to multidimensional maps will be the task of future work.

Acknowledgments

Useful discussions with, and warm encouragement from, W. Pecorella (Università di Tor Vergata, Roma, Italy) are gratefully acknowledged.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GEC: General evolution criterion
LTE: Local thermodynamical equilibrium

Appendix A. Non-Existence of s for q ≠ 1

If q → 1 then S_q → S_{q=1} = −∫ dx P ln P, which leads immediately to S_{q=1} = ∫ dx ρ s with ρ ≡ P and s ≡ −ln P. Let us suppose that a similar expression holds for S_q with q ≠ 1. Of course, ρ does not depend on q, so we keep ρ ≡ P even if q ≠ 1. In the latter case, Equation (17) gives S_q ∝ ∫ dx ρ (1 − P^{q−2})/(q − 1), and agreement of Equation (1) with Equation (17) requires that we identify Σ_k p_k^q and ∫ dx P^{q−2}. However, this identification is possible for no value of q, because the normalization conditions Σ_k p_k = 1 and ∫ dx P = 1 make the p_k's and P transform like p_k → p_k × k^{−1} and P → P × k^{−1} respectively under the scaling transformation x → kx, so that Σ_k p_k^q → k^{−q} × Σ_k p_k^q and ∫ dx P^{q−2} → k^{2−q} × ∫ dx P^{q−2}. Identification of Σ_k p_k^q and ∫ dx P^{q−2} is impossible because these two quantities behave differently under the same scaling transformation; then, no definition of s is self-consistent unless q = 1.
This result follows from the non-additivity of S_q: if q ≠ 1 then the entropy of the whole system is not the sum of the entropies of the small masses the system is made of. Physically, this suggests that strong correlations exist among such masses; indeed, strongly correlated variables are precisely the topic on which non-extensive statistical mechanics is focussed [6].
Accordingly, straightforward generalization of LTE and GEC to the q ≠ 1 case with the help of S_q is impossible. This is why we need the additive quantity Ŝ_q in order to build a local thermodynamics, and to generalize the results of Section 3 in Section 4. As discussed in the text, the results of Section 4 may involve Π_q rather than Π_{q̂} just because dŜ_q/dS_q > 0.

Appendix B. Proof of Equations (19)–(22)

We derive, from both Equations (14) and (16) and the definition of J, the Taylor-series development Equation (19) of Π_z = Π_q in powers of z, centered at z = 0:
Π_z = Π_{z=0} + Σ_{n≥1} a_n z^n,  a_n = [(−1)^{n−1}/(n−1)!] × 2J ∫ du (d ln P_{J,q=1}/du) (1 + ln P_{J,q=1})^n (ln P_{J,q=1})^{n−1}    (19)
where u ≡ x/D and P_{J,q=1} = P_{J,q=1}(u) is the q = 1 solution of Equation (13) for arbitrary J with the boundary condition P_{J,q=1}(u = u_0) = P_0. Starting from Equations (13) and (15) (for q = 1), the method of variation of constants gives:
P_{J,q=1}(u) = [P_0 − ηJ ∫_{u_0}^{u} du′ exp(−∫_{u_0}^{u′} du″ A(u″))] exp(∫_{u_0}^{u} du′ A(u′))    (A2)
The normalization condition reads:
1 = ∫ dx P_{J,q=1}(x) = D ∫ du P_{J,q=1}(u)    (A3)
where P_{J,q=1} is approximately given by Equation (15) in weakly dissipating systems. We assume x ≥ 0 and u_0 = 0 with no loss of generality (originally, x 1 k is an energy). We define u_1 such that P_0 = ηJ ∫_{u_0}^{u_1} du exp(−∫_0^u du′ A(u′)). It is unlikely that β U 1; thus, we approximate Equation (A2) as:
P_{J,q=1}(u) = P_{J,q=1}(0) exp(∫_0^u du′ A(u′)) for 0 ≤ u ≤ u_1;  P_{J,q=1}(u) = 0 otherwise    (A4)
According to Equation (A4), the domain of integration in Equation (19), Equations (A2) and (A3) reduces to 0 ≤ u ≤ u_1. Thus, Equations (19) and (A4) lead to Equation (20), and Equations (A3) and (A4) lead to Equation (21). Finally, Equations (14), (18) and (A4) lead to Π_q = J ∫_0^{u_1} du A in the relaxed state, while Equations (14), (16) and (A4) give Π_q = η|J|² [P_{J,q=1}(0)]^{−1} ∫_0^{u_1} du exp(−∫_0^u du′ A). After eliminating Π_q and replacing the definition of u_1, we obtain Equation (22).
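As reconstructed here, Equation (A2) is the variation-of-constants solution of a linear equation of the form dP/du = A(u) P − ηJ. The numerical sketch below checks that reading by evaluating the closed form with a simple quadrature and comparing its derivative against the right-hand side; the sample A(u), the parameter values and the quadrature scheme are all illustrative assumptions.

```python
import math

def P_closed(u, A, P0=1.0, eta=0.5, J=0.2, n=2000):
    """Midpoint-rule evaluation of the reconstructed Equation (A2) with u0 = 0:
    P(u) = [P0 - eta*J * int_0^u exp(-B) du'] * exp(B(u)),  B(u) = int_0^u A."""
    du = u / n
    B = 0.0   # running value of int_0^u A
    I = 0.0   # running value of int_0^u exp(-B)
    for i in range(n):
        um = (i + 0.5) * du
        Bm = B + 0.5 * A(um) * du     # B evaluated at the midpoint of the step
        I += math.exp(-Bm) * du
        B += A(um) * du
    return (P0 - eta * J * I) * math.exp(B)

# A finite-difference check confirms dP/du = A(u) P - eta*J for, e.g., A = sin.
```

The residual of the finite-difference check scales with the quadrature step, so refining n drives it to zero, consistent with the variation-of-constants reading.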

References

  1. Tsallis, C. Possible Generalization of Boltzmann-Gibbs Statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  2. Tsallis, C.; Mendes, R.S.; Plastino, A. The role of constraints within generalized nonextensive statistics. Phys. A 1998, 261, 534–554. [Google Scholar] [CrossRef]
  3. Tsallis, C.; Anteneodo, C.; Borland, L.; Osorio, R. Nonextensive statistical mechanics and economics. Phys. A Stat. Mech. Appl. 2003, 324, 89–100. [Google Scholar] [CrossRef]
  4. Abe, S. Heat and entropy in nonextensive thermodynamics: Transmutation from Tsallis theory to Rényi-entropy-based theory. Phys. A 2001, 300, 417–423. [Google Scholar] [CrossRef]
  5. Wada, T. On the thermodynamic stability conditions of Tsallis’ entropy. Phys. Lett. A 2002, 297, 334–337. [Google Scholar]
  6. Umarov, S.; Tsallis, C.; Steinberg, S. On a q-central limit theorem consistent with nonextensive statistical mechanics. Milan J. Math. 2008, 76, 307–328. [Google Scholar] [CrossRef]
  7. Glansdorff, P.; Prigogine, I. On a general evolution criterion in macroscopic physics. Physica 1964, 30, 351–374. [Google Scholar] [CrossRef]
  8. Vives, E.; Planes, A. Is Tsallis Thermodynamics Nonextensive? Phys. Rev. Lett. 2002, 88, 020601. [Google Scholar]
  9. Landau, L.D.; Lifshitz, E.M. Statistical Physics; Pergamon: Oxford, UK, 1959. [Google Scholar]
  10. Prigogine, I.; Defay, R. Chemical Thermodynamics; Longmans-Green: London, UK, 1954. [Google Scholar]
  11. Di Vita, A. Maximum or minimum entropy production? How to select a necessary criterion of stability for a dissipative fluid or plasma. Phys. Rev. E 2010, 81, 041137. [Google Scholar] [CrossRef] [PubMed]
  12. Marino, M. A generalized thermodynamics for power-law statistics. Phys. A Stat. Mech. Its Appl. 2007, 386, 135–154. [Google Scholar] [CrossRef]
  13. Khuntia, A.; Sahoo, P.; Garg, P.; Sahoo, R.; Cleymans, J. Speed of Sound in a System Approaching Thermodynamic Equilibrium. DAE Symp. Nucl. Phys. 2016, 61, 842–843. [Google Scholar]
  14. Casas, G.A.; Nobre, F.D.; Curado, E.M.F. Entropy production and nonlinear Fokker-Planck equations. Phys. Rev. E 2012, 86, 061136. [Google Scholar] [CrossRef] [PubMed]
  15. Wedemann, R.S.; Plastino, A.R.; Tsallis, C. Curl forces and the nonlinear Fokker-Planck equation. Phys. Rev. E 2016, 94, 062105. [Google Scholar] [CrossRef] [PubMed]
  16. Haubold, H.J.; Mathai, A.M.; Saxena, R.K. Boltzmann-Gibbs Entropy Versus Tsallis Entropy: Recent Contributions to Resolving the Argument of Einstein Concerning “Neither Herr Boltzmann nor Herr Planck has Given a Definition of W”? Astrophys. Space Sci. 2004, 290, 241–245. [Google Scholar] [CrossRef]
  17. Ribeiro, M.S.; Nobre, F.D.; Curado, E.M.F. Time evolution of interacting vortices under overdamped motion. Phys. Rev. E 2012, 85, 021146. [Google Scholar] [CrossRef] [PubMed]
  18. Wada, T.; Scarfone, A.M. Connections between Tsallis’ formalisms employing the standard linear average energy and ones employing the normalized q-average energy. Phys. Lett. A 2005, 335, 351–362. [Google Scholar] [CrossRef]
  19. Naudts, J. Generalized thermostatistics based on deformed exponential and logarithmic functions. Phys. A Stat. Mech. Appl. 2004, 340, 32–40. [Google Scholar] [CrossRef]
  20. Borland, L. Microscopic dynamics of the nonlinear Fokker-Planck equation: A phenomenological model. Phys. Rev. E 1998, 57, 6634. [Google Scholar] [CrossRef]
  21. Sánchez, J.R.; Lopez-Ruiz, R. A model of coupled maps for economic dynamics. Eur. Phys. J. Spec. Top. 2007, 143, 241–243. [Google Scholar] [CrossRef]
Figure 1. d²Π_z/dz² (vertical axis) vs. z (horizontal axis) for r = 1/3 (black diamonds), r = 1/2 (empty circles), r = 2 (triangles), r = 4 (squares), r = 6 (empty diamonds). In all cases D = 0.1. If r = 1/3 then the slope of the curve at the point z = z_c where it crosses the d²Π_z/dz² = 0 axis is negative, i.e., Equation (24) is violated. If r = 1/2 then z_c lies outside the interval 0 < z_c < 1, i.e., Equation (23) is violated. Both Equations (23) and (24) are satisfied for r = 2 (with z_c = 0.452), r = 4 (with z_c = 0.438) and r = 6 (with z_c = 0.412).
Figure 2. Exponent 1/z_c (vertical axis) of the power-law distribution function of the relaxed state vs. r (horizontal axis).
Figure 3. d²Π_z/dz² (vertical axis) vs. z (horizontal axis) for D = 0.001 (black diamonds), D = 0.01 (empty circles), D = 0.1 (triangles), D = 2 (squares), D = 10 (empty diamonds). In all cases r = 4. Even if a relaxed state exists, the larger D (i.e., the stronger the noise), the nearer z_c lies to the bounds of the interval (0, 1). If D > 1 then z_c does not belong to the interval, and Boltzmann’s exponential distribution rules the relaxed state.

Vita, A.D. Exponential or Power Law? How to Select a Stable Distribution of Probability in a Physical System. Proceedings 2018, 2, 156. https://doi.org/10.3390/ecea-4-05009

