Multiple Interactions
The starting point for the multiple interactions physics scenario in
PYTHIA is provided by [Sjo87]. Recent developments have
included a more careful study of flavour and colour correlations,
junction topologies and the relationship to beam remnants
[Sjo04], and interleaving with initial-state radiation
[Sjo05], making use of transverse-momentum-ordered
initial- and final-state showers.
A major unsolved issue is how the colours of all these subsystems are
correlated. Certainly there is a correlation stemming from the colour
singlet nature of the incoming beams, but in addition final-state
colour rearrangements may change the picture. Indeed such extra
effects appear necessary to describe data, e.g. on
<pT>(n_ch). A simple implementation of colour
rearrangement is found as part of the
beam remnants description.
Main variables
The maximum pT to be allowed for multiple interactions is
related to the nature of the hard process itself. It involves a
delicate balance between avoiding double-counting and not leaving any
gaps in the coverage. The best procedure may depend on information
only the user has: how the events were generated and mixed (e.g. with
Les Houches Accord external input), and how they are intended to be
used. Therefore a few options are available, with a sensible default
behaviour.
mode
MultipleInteractions:pTmaxMatch
(default = 0
; minimum = 0
; maximum = 2
)
Way in which the maximum scale for multiple interactions is set
to match the scale of the hard process itself.
option
0 : (i) if the final state of the hard process
(not counting subsequent resonance decays) contains only quarks
(u, d, s, c, b), gluons and photons, then pT_max
is chosen to be the factorization scale for internal processes
and the scale
value for Les Houches input;
(ii) if not, interactions are allowed to go all the way up
to the kinematical limit.
The reasoning is that processes of the former kind are also generated by
the multiple-interactions machinery, and so would be double-counted
if allowed to overlap the same pT range,
while no such danger exists in the latter case.
option
1 : always use the factorization scale for an internal
process and the scale
value for Les Houches input,
i.e. the lower value. This should avoid double-counting, but
may leave out some interactions that ought to have been simulated.
option
2 : always allow multiple interactions up to the
kinematical limit. This will simulate all possible event topologies,
but may lead to double-counting.
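As an illustration, these matching modes are selected like any other setting, e.g. in a command file read in at initialization; the setting name is the one documented above, while the chosen value is purely illustrative:

```
! Illustrative command-file fragment (value chosen for the example only).
MultipleInteractions:pTmaxMatch = 1   ! pT_max = factorization scale
```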
The rate of interactions is determined by
parm
MultipleInteractions:alphaSvalue
(default = 0.127
; minimum = 0.06
; maximum = 0.25
)
The value of alpha_strong at m_Z. Default value is
picked equal to the one used in CTEQ 5L.
The actual value is then regulated by the running to the
pT^2 scale at which it is evaluated.
mode
MultipleInteractions:alphaSorder
(default = 1
; minimum = 0
; maximum = 2
)
The order at which alpha_strong runs at scales away from
m_Z.
option
0 : zeroth order, i.e. alpha_strong is kept
fixed.
option
1 : first order, which is the normal value.
option
2 : second order. Since other parts of the code do
not go to second order there is no strong reason to use this option,
but there is also nothing wrong with it.
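For illustration, first-order running (the default alphaSorder = 1) can be sketched as the standard one-loop formula; the function name below is invented and, unlike the real implementation, no flavour thresholds are included:

```python
import math

def alpha_s_first_order(q2, alpha_s_mz=0.127, mz=91.188, nf=5):
    """One-loop running:
    alpha_s(Q^2) = alpha_s(mZ^2) / (1 + alpha_s(mZ^2) * b0 * ln(Q^2/mZ^2))."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_s_mz / (1.0 + alpha_s_mz * b0 * math.log(q2 / mz**2))

# The coupling grows towards the low pT^2 scales typical of multiple
# interactions, which is why the treatment of the running at small
# scales matters more than the exact value at m_Z.
```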
QED interactions are regulated by the alpha_electromagnetic
value at the pT^2 scale of an interaction.
mode
MultipleInteractions:alphaEMorder
(default = 1
; minimum = -1
; maximum = 1
)
The running of alpha_em used in hard processes.
option
1 : first-order running, constrained to agree with
StandardModel:alphaEMmZ
at the Z^0 mass.
option
0 : zeroth order, i.e. alpha_em is kept
fixed at its value at vanishing momentum transfer.
option
-1 : zeroth order, i.e. alpha_em is kept
fixed, but at StandardModel:alphaEMmZ
, i.e. its value
at the Z^0 mass.
Note that the choices of alpha_strong and alpha_em
made here override the ones implemented in the normal process machinery,
but only for the interactions generated by the
MultipleInteractions
class.
In addition there is the possibility of a global rescaling of
cross sections (which could not easily be accommodated by a
changed alpha_strong, since alpha_strong runs)
parm
MultipleInteractions:Kfactor
(default = 1.0
; minimum = 0.5
; maximum = 4.0
)
Multiply all cross sections by this fixed factor.
There are two complementary ways of regularizing the small-pT
divergence, a sharp cutoff and a smooth dampening. These can be
combined as desired, but it makes sense to coordinate with how the
same issue is handled in spacelike
showers. Actually, by default, the parameters defined here are
used also for the spacelike showers, but this can be overridden.
Regularization of the divergence of the QCD cross section for
pT -> 0 is obtained by a factor pT^4 / (pT0^2 + pT^2)^2,
and by using an alpha_s(pT0^2 + pT^2). An energy dependence
of the pT0 choice is introduced by two further parameters,
so that pT0Ref is the pT0 value for the reference
cm energy, pT0Ref = pT0(ecmRef).
Warning: if a large pT0 is picked for multiple
interactions, such that the integrated interaction cross section is
below the nondiffractive inelastic one, this pT0 will
automatically be scaled down to cope.
The actual pT0 parameter used at a given cm energy scale, ecmNow,
is obtained as
pT0 = pT0(ecmNow) = pT0Ref * (ecmNow / ecmRef)^ecmPow
where pT0Ref, ecmRef and ecmPow are the
three parameters below.
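A minimal numeric sketch of the two formulas above, using the defaults quoted in this section and invented function names:

```python
def pT0_at_ecm(ecm_now, pT0_ref=2.15, ecm_ref=1800.0, ecm_pow=0.16):
    """Energy-dependent scale: pT0(ecmNow) = pT0Ref * (ecmNow / ecmRef)^ecmPow."""
    return pT0_ref * (ecm_now / ecm_ref) ** ecm_pow

def dampening_factor(pT2, pT0):
    """Regularization factor pT^4 / (pT0^2 + pT^2)^2 for the QCD cross section."""
    return pT2**2 / (pT0**2 + pT2) ** 2

# At the reference energy pT0 equals pT0Ref; at higher energies it grows.
# The dampening factor vanishes like pT^4 for pT -> 0 and approaches 1
# for pT >> pT0, leaving the hard tail unaffected.
```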
parm
MultipleInteractions:pT0Ref
(default = 2.15
; minimum = 0.5
; maximum = 10.0
)
The pT0Ref scale in the above formula.
Note: pT0Ref is one of the key parameters in a
complete PYTHIA tune. Its value is intimately tied to a number of other
choices, such as that of colour flow description, so unfortunately it is
difficult to give an independent meaning to pT0Ref.
parm
MultipleInteractions:ecmRef
(default = 1800.0
; minimum = 1.
)
The ecmRef reference energy scale introduced above.
parm
MultipleInteractions:ecmPow
(default = 0.16
; minimum = 0.0
; maximum = 0.5
)
The ecmPow energy rescaling pace introduced above.
Alternatively, or in combination, a sharp cut can be used.
parm
MultipleInteractions:pTmin
(default = 0.2
; minimum = 0.1
; maximum = 10.0
)
Lower cutoff in pT, below which no further interactions
are allowed. Normally pT0 above would be used to provide
the main regularization of the cross section for pT -> 0,
in which case pTmin is used mainly for technical reasons.
It is possible, however, to set pT0Ref = 0 and use
pTmin to provide a step-function regularization, or to
combine them in intermediate approaches. Currently pTmin
is taken to be energy-independent.
The choice of impact-parameter dependence is regulated by several
parameters.
mode
MultipleInteractions:bProfile
(default = 2
; minimum = 0
; maximum = 3
)
Choice of impact parameter profile for the incoming hadron beams.
option
0 : no impact parameter dependence at all.
option
1 : a simple Gaussian matter distribution;
no free parameters.
option
2 : a double Gaussian matter distribution,
with the two free parameters coreRadius and
coreFraction.
option
3 : an overlap function, i.e. the convolution of
the matter distributions of the two incoming hadrons, of the form
exp(- b^expPow), where expPow is a free
parameter.
parm
MultipleInteractions:coreRadius
(default = 0.4
; minimum = 0.1
; maximum = 1.
)
When assuming a double Gaussian matter profile, bProfile = 2,
the inner core is assumed to have a radius that is a factor
coreRadius smaller than the rest.
parm
MultipleInteractions:coreFraction
(default = 0.5
; minimum = 0.
; maximum = 1.
)
When assuming a double Gaussian matter profile, bProfile = 2,
the inner core is assumed to have a fraction coreFraction
of the matter content of the hadron.
parm
MultipleInteractions:expPow
(default = 1.
; minimum = 0.4
; maximum = 10.
)
When bProfile = 3 it gives the power of the assumed overlap
shape exp(- b^expPow). Default corresponds to a simple
exponential drop, which is not too dissimilar from the overlap
obtained with the standard double Gaussian parameters. For
expPow = 2 we reduce to the simple Gaussian, bProfile = 1,
and for expPow -> infinity to no impact parameter dependence
at all, bProfile = 0. For small expPow the program
becomes slow and unstable, so the min limit must be respected.
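The limiting behaviours quoted above can be checked numerically; the helper name is invented:

```python
import math

def overlap(b, exp_pow):
    """Unnormalized overlap shape exp(-b^expPow) for bProfile = 3."""
    return math.exp(-(b ** exp_pow))

# expPow = 1: simple exponential drop (the default).
# expPow = 2: Gaussian in b, i.e. the bProfile = 1 shape.
# expPow large: tends towards a flat disk, i.e. essentially no
# impact-parameter dependence inside the disk, as for bProfile = 0.
```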
It is possible to regulate the set of processes that are included in the
multiple-interactions framework.
mode
MultipleInteractions:processLevel
(default = 3
; minimum = 0
; maximum = 3
)
Set of processes included in the machinery.
option
0 : only the simplest 2 -> 2 QCD processes
between quarks and gluons, giving no new flavours, i.e. dominated by
t-channel gluon exchange.
option
1 : also 2 -> 2 QCD processes giving new flavours
(including charm and bottom), i.e. proceeding through s-channel
gluon exchange.
option
2 : also 2 -> 2 processes involving one or two
photons in the final state, s-channel gamma
boson exchange and t-channel gamma/Z^0/W^+-
boson exchange.
option
3 : also charmonium and bottomonium production, via
colour singlet and colour octet channels.
Further variables
These should normally not be touched. Their only function is for
cross-checks.
mode
MultipleInteractions:nQuarkIn
(default = 5
; minimum = 0
; maximum = 5
)
Number of allowed incoming quark flavours in the beams; a change
to 4 would thus exclude b and bbar as incoming
partons, etc.
mode
MultipleInteractions:nSample
(default = 1000
; minimum = 100
)
The allowed pT range is split (unevenly) into 100 bins,
and in each of these the interaction cross section is evaluated in
nSample random phase space points. The full integral is used
at initialization, and the differential one during the run as a
"Sudakov form factor" for the choice of the hardest interaction.
A larger number implies increased accuracy of the calculations.
The process library
The processes used to generate multiple interactions form a subset
of the standard library of hard processes. The input is slightly
different from the standard hard-process machinery, however,
since incoming flavours, the alpha_strong value and most
of the kinematics are already fixed when the process is called.
Technical notes
Relative to the articles mentioned above, not much has happened.
The main news is a technical one, that the phase space of the
2 -> 2 (massless) QCD processes is now sampled in
dy_3 dy_4 dpT^2, where y_3 and y_4 are
the rapidities of the two produced partons. One can show that
(dx_1 / x_1) * (dx_2 / x_2) * d(tHat) = dy_3 * dy_4 * dpT^2
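This identity can be verified numerically: for massless 2 -> 2 kinematics one has x_1 = (pT/E_cm)(e^y3 + e^y4), x_2 = (pT/E_cm)(e^-y3 + e^-y4) and tHat = -pT^2 (1 + e^(y4 - y3)), so the Jacobian determinant of (ln x_1, ln x_2, tHat) with respect to (y_3, y_4, pT^2) should be unity. A sketch with finite differences (all names and the cm energy are illustrative):

```python
import math

ECM = 1000.0  # illustrative cm energy

def ln_x1(y3, y4, pT2):
    return 0.5 * math.log(pT2) - math.log(ECM) + math.log(math.exp(y3) + math.exp(y4))

def ln_x2(y3, y4, pT2):
    return 0.5 * math.log(pT2) - math.log(ECM) + math.log(math.exp(-y3) + math.exp(-y4))

def t_hat(y3, y4, pT2):
    return -pT2 * (1.0 + math.exp(y4 - y3))

def jacobian_det(y3, y4, pT2, h=1e-5):
    """|d(ln x1, ln x2, tHat) / d(y3, y4, pT2)| by central differences."""
    fns, args = (ln_x1, ln_x2, t_hat), [y3, y4, pT2]
    J = []
    for f in fns:
        row = []
        for k in range(3):
            up, dn = args[:], args[:]
            up[k] += h
            dn[k] -= h
            row.append((f(*up) - f(*dn)) / (2.0 * h))
        J.append(row)
    return (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
            - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
            + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))
```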
Furthermore, since cross sections are dominated by the "Rutherford"
one of t-channel gluon exchange, which is enhanced by a
factor of 9/4 for each incoming gluon, effective structure functions
are defined as
F(x, pT2) = (9/4) * xg(x, pT2) + sum_i xq_i(x, pT2)
With this technical shift of factors 9/4 from cross sections to parton
densities, a common upper estimate of
d(sigmaHat)/d(pT2) < pi * alpha_strong^2 / pT^4
is obtained.
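The bookkeeping behind this shift can be illustrated with toy numbers: absorbing a factor 9/4 per incoming gluon into an effective density F gives the same luminosity-weighted sum as keeping the enhancement factors in the subprocess cross sections. All values below are invented for the illustration:

```python
# Toy parton densities at some fixed (x, pT2); values are arbitrary.
xg = 0.8                              # xg(x, pT2)
xq = [0.30, 0.25, 0.10, 0.05, 0.02]   # xq_i(x, pT2) for five flavours

R = 9.0 / 4.0  # Rutherford enhancement per incoming gluon

# Enhancement factors kept explicitly in the cross sections:
lumi_explicit = (R * R * xg * xg            # g g  -> factor (9/4)^2
                 + 2.0 * R * xg * sum(xq)   # g q and q g -> factor 9/4
                 + sum(xq) * sum(xq))       # q q  -> factor 1

# Enhancement absorbed into F(x, pT2) = (9/4) xg + sum_i xq_i:
F = R * xg + sum(xq)
lumi_effective = F * F
```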
In fact this estimate can be reduced by a factor of 1/2 for the
following reason: for any configuration (y_3, y_4, pT2) also
one with (y_4, y_3, pT2) lies in the phase space. Not both
of those can enjoy being enhanced by the tHat -> 0
singularity of
d(sigmaHat) propto 1/tHat^2.
Or if they are, which is possible with identical partons like
q q -> q q and g g -> g g, each singularity comes
with half the strength. So, when integrating/averaging over the two
configurations, the estimated d(sigmaHat)/d(pT2) drops.
Actually, it drops even further, since the naive estimate above is
based on
(4/9) * (1 + (uHat/sHat)^2) < 8/9 < 1
The 8/9 value would be approached for tHat -> 0, which
implies sHat >> pT2 and thus a heavy parton-distribution
penalty, while parton distributions are largest for
tHat = uHat = -sHat/2, where the above expression
evaluates to 5/9. A fudge factor is therefore introduced to go the
final step, so it can easily be modified when further non-Rutherford
processes are added, or should parton distributions change significantly.
At initialization, it is assumed that
d(sigma)/d(pT2) < d(sigmaHat)/d(pT2) * F(x_T, pT2) * F(x_T, pT2)
* (2 y_max(pT))^2
where the first factor is the upper estimate as above, the second two
the parton density sum evaluated at y_3 = y_4 = 0 so that
x_1 = x_2 = x_T = 2 pT / E_cm, where the product is expected
to be maximal, and the final factor is the phase space for
-y_max < y_{3,4} < y_max.
The right-hand side expression is scanned logarithmically in y,
and an N is determined such that it always lies below
N/pT^4.
To describe the dampening of the cross section at pT -> 0 by
colour screening, the actual cross section is multiplied by a
regularization factor (pT^2 / (pT^2 + pT0^2))^2, and the
alpha_s is evaluated at a scale pT^2 + pT0^2,
where pT0 is a free parameter of the order of 2 - 4 GeV.
Since pT0 can be energy-dependent, an ansatz
pT0(ecm) = pT0Ref * (ecm/ecmRef)^ecmPow
is used, where ecm is the current cm frame energy,
ecmRef is an arbitrary reference energy where pT0Ref
is defined, and ecmPow gives the energy rescaling pace. For
technical reasons, also an absolute lower pT scale pTmin,
by default 0.2 GeV, is introduced. In principle, it is possible to
recover older scenarios with a sharp pT cutoff by setting
pT0 = 0 and letting pTmin be a larger number.
The above scanning strategy is then slightly modified: instead of
an upper estimate N/pT^4 one of the form
N/(pT^2 + r * pT0^2)^2 is used. At first glance, r = 1
would seem to be fixed by the form of the regularization procedure,
but this does not take into account the nontrivial dependence on
alpha_s, parton distributions and phase space. A better
Monte Carlo efficiency is obtained for r somewhat below unity,
and currently r = 0.25 is hardcoded.
In the generation a trial pT2 is then selected according to
d(Prob)/d(pT2) = (1/sigma_ND) * N/(pT^2 + r * pT0^2)^2 * ("Sudakov")
For the trial pT2, a y_3 and a y_4 are then
selected, and incoming flavours according to the respective
F(x_i, pT2), and then the cross section is evaluated for this
flavour combination. The ratio of trial/upper estimate gives the
probability of survival.
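Since the upper estimate N/(pT^2 + r * pT0^2)^2 integrates in closed form, the falling sequence of trial pT2 values can be generated by inverse transform. A sketch with invented names, where N, r and pT0 carry the meanings used above but illustrative values:

```python
import math
import random

def next_trial_pT2(pT2_now, N=10.0, r=0.25, pT0=2.15, rng=random.random):
    """Solve Integral_{pT2}^{pT2_now} N / (q + c)^2 dq = -ln(R), c = r * pT0^2,
    for the next, lower, trial scale pT2 (one 'Sudakov' step)."""
    c = r * pT0**2
    inv = 1.0 / (pT2_now + c) - math.log(rng()) / N
    return 1.0 / inv - c

# Generate trials downward from some starting scale until below pTmin^2;
# each trial would then be accepted with probability
# (true cross section) / (upper estimate).
random.seed(1)
pT2, trials = 100.0, []
while pT2 > 0.2**2:
    pT2 = next_trial_pT2(pT2)
    trials.append(pT2)
```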
Actually, to profit from the factor 1/2 mentioned above, the cross
section for the combination with y_3 and y_4
interchanged is also tried, which corresponds to exchanging tHat
and uHat, and the average formed, while the final kinematics
is given by the relative importance of the two.
Furthermore, since large y values are disfavoured by dropping
PDF's, a factor
WT_y = (1 - (y_3/y_max)^2) * (1 - (y_4/y_max)^2)
is evaluated, and used as a survival probability before the more
time-consuming PDF+ME evaluation, with surviving events given a
compensating weight 1/WT_y.
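That this pre-rejection plus compensating weight is unbiased can be checked on a toy integrand equal to 1; names and numbers are invented, and y is sampled inside 90% of the allowed range only to keep the toy variance finite:

```python
import random

def wt_y(y3, y4, y_max):
    """Survival probability WT_y = (1 - (y3/y_max)^2) * (1 - (y4/y_max)^2)."""
    return (1.0 - (y3 / y_max) ** 2) * (1.0 - (y4 / y_max) ** 2)

def toy_estimate(n, y_max=3.0, seed=7):
    """Monte Carlo average of a unit integrand with WT_y pre-rejection."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y3 = rng.uniform(-0.9 * y_max, 0.9 * y_max)
        y4 = rng.uniform(-0.9 * y_max, 0.9 * y_max)
        w = wt_y(y3, y4, y_max)
        if rng.random() < w:     # cheap rejection before the PDF+ME step ...
            total += 1.0 / w     # ... compensated by the weight 1/WT_y
    return total / n
```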
An impact-parameter dependence is also allowed. Based on the hard
pT scale of the first interaction, an enhancement/depletion
factor is picked, which multiplies the rate of subsequent interactions.
Parton densities are rescaled and modified to take into account the
energy-momentum and flavours kicked out by already-considered
interactions.