Wednesday, May 28, 2014

An Alternative CKM Matrix Parameterization?

Background

The Cabibbo-Kobayashi-Maskawa (CKM) matrix is the matrix whose entries set forth the empirically determined amplitudes (i.e. the square roots of the probabilities) for a quark of one type changing into a quark of another type when it emits a W boson.

Any up type quark can give rise to any of the three down type quarks, and any down type quark can give rise to any of the three up type quarks.  It is true as a matter of definition that the sum of the probabilities of a particular quark, say, a charm quark, changing into each of the three possible down type quarks is exactly 100%.  It is empirically true, however, that the matrix is unitary.  Thus, the sum of the squared magnitudes of the entries in each column of the matrix, as well as the sum of the squared magnitudes of the entries in each row, equals 100%.

The probability of a particular up type quark transforming into a particular down type quark is the same as the probability of that down type quark transforming into that up type quark, except to the extent of CP violation in the matrix, which has complex number valued entries.

Like any unitary matrix, the CKM matrix can be parameterized in an infinite number of ways using four parameters (once the unphysical quark phases are absorbed, a three by three quark mixing matrix has four physical degrees of freedom: three mixing angles and one CP-violating phase).

The magnitudes of the entries are as follows:*
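In round numbers (central values only, rounded; rows correspond to u, c, t and columns to d, s, b, and the bottom-row figures match the Vtd, Vts and Vtb values discussed in the May 21 post below):

|V_{CKM}| \approx \begin{pmatrix} 0.974 & 0.225 & 0.0035 \\ 0.225 & 0.973 & 0.041 \\ 0.0087 & 0.040 & 0.999 \end{pmatrix}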


One interesting parameterization of the matrix, because it suggests a hidden structure to its entries, is the Wolfenstein parameterization (originally proposed in a 1983 paper by Lincoln Wolfenstein). To order λ³, it is:
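In its standard textbook form, with A, λ, ρ and η as the four parameters, that is:

V \approx \begin{pmatrix} 1-\frac{\lambda^2}{2} & \lambda & A\lambda^3(\rho-i\eta) \\ -\lambda & 1-\frac{\lambda^2}{2} & A\lambda^2 \\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \end{pmatrix} + O(\lambda^4)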



Using the values of the previous section for the CKM matrix, the best determination of the Wolfenstein parameters is:
λ = 0.2257 +0.0009/−0.0010
A = 0.814 +0.021/−0.022
ρ = 0.135 +0.031/−0.016, and
η = 0.349 +0.015/−0.017.

* The material from the * to this explanatory note is from Wikipedia.

Up to adjustments for CP violation, this parameterization suggests that the probability of a first to second generation transition (or second to first generation transition) goes as λ, that the probability of a second to third generation (or third to second generation) transition goes as Aλ², and that the probability of a first to third generation (or third to first generation) transition goes as λ*Aλ² = Aλ³ (i.e. the product of the factors for making first one of the single generation step transitions and then the other).  The probability of transitioning to a quark of the same generation is the residual probability after the probabilities of the other two options are subtracted out.

Higher order and variant Wolfenstein parameterizations are discussed in a 2011 paper, which suggests that some tweaks to the original Wolfenstein parameterization, motivated by the same principles, are necessary to fit the data.

A 1994 paper expands the Wolfenstein parameterization to a higher order in lambda and points out that a symmetric CKM matrix of the kind originally envisioned is almost ruled out by the data.  A 2014 paper attempted to generalize this parameterization to leptons.

Conjecture

Nothing mathematically requires that it be possible to parameterize the CKM matrix with three rather than four empirically determined constants.  But, there is a way to do so that is consistent with empirical evidence.

Aλ² is equal to (2λ)⁴ at the 0.1 sigma level of precision, and there is no place in the Wolfenstein parameterization of the CKM matrix where this substitution cannot be made.
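A quick back-of-the-envelope check of that claim, using the Wolfenstein parameter values quoted above (a rough sketch only: the quoted uncertainties are treated as symmetric and independent, ignoring correlations in the underlying fit):

import math

# Wolfenstein parameters quoted above (global fit values)
lam, dlam = 0.2257, 0.0010   # using the larger side of the asymmetric errors
A, dA = 0.814, 0.022

vcb_wolfenstein = A * lam**2        # Vcb in the Wolfenstein parameterization
vcb_conjecture = (2 * lam)**4       # the proposed substitution A*lam^2 -> (2*lam)^4

# Propagate the quoted uncertainties (treated as independent and symmetric)
d_wolf = vcb_wolfenstein * math.sqrt((dA / A)**2 + (2 * dlam / lam)**2)
d_conj = vcb_conjecture * (4 * dlam / lam)

diff = vcb_conjecture - vcb_wolfenstein
sigma = abs(diff) / math.sqrt(d_wolf**2 + d_conj**2)

print(f"A*lam^2   = {vcb_wolfenstein:.5f} +/- {d_wolf:.5f}")
print(f"(2*lam)^4 = {vcb_conjecture:.5f} +/- {d_conj:.5f}")
print(f"difference = {diff:+.5f}  ({sigma:.2f} sigma)")

On this crude treatment the two quantities agree well within the 0.1 sigma figure.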

Thus, Vcb becomes approximately (2λ)⁴, Vts becomes approximately -(2λ)⁴, Vub becomes approximately (2λ)⁴*λ*(ρ-iη) and Vtd becomes approximately (2λ)⁴*λ*(1-ρ-iη).

Moreover, if we take ρ-iη to be a single complex number C, and C* to be the complex conjugate of C, then we can state that Vub becomes approximately (2λ)⁴*λ*C and Vtd becomes approximately (2λ)⁴*λ*(1-C*).
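Collecting these substitutions into the same matrix form used above, the conjectured parameterization (to the same order in λ) reads:

V \approx \begin{pmatrix} 1-\frac{\lambda^2}{2} & \lambda & (2\lambda)^4\,\lambda\,C \\ -\lambda & 1-\frac{\lambda^2}{2} & (2\lambda)^4 \\ (2\lambda)^4\,\lambda\,(1-C^*) & -(2\lambda)^4 & 1 \end{pmatrix}, \qquad C = \rho - i\eta.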

Thus, the entire CKM matrix is a function of one empirically determined real number, λ, and one complex number pertinent only to CP violation, C.  This approach thus suggests that there is even more method to the apparent randomness of the CKM matrix than the Wolfenstein parameterization would already suggest.

This variation on the Wolfenstein parameterization of the CKM matrix suggests that we should be looking for physics in which the second to third generation transition is related to the first to second generation transition by a fourth power, rather than by a second power.  For example, the power of four might have some physical or geometrical relationship to the four dimensions of space-time.

To be clear, I'm not by any means the first person to see the possibility of expressing the CKM matrix with fewer than four parameters.  A very different proposed single constant parameterization of both the CKM matrix and PMNS matrix, for example, can be found here.

Wednesday, May 21, 2014

Today's Physics Preprints

* Complete data reports from Tevatron show a 125 GeV Higgs boson signal at 3 sigma with branching fractions consistent with the Standard Model prediction. A Standard Model 0⁺ spin-parity assignment (i.e. a scalar Higgs boson) is favored over a pseudoscalar 0⁻ hypothesis at 99.9% significance, and over a tensor 2⁺ hypothesis at 99.5% significance.

* A new LHC report sets new SUSY exclusions:

ATLAS Collaboration, "Search for supersymmetry in events with four or more leptons in sqrt{s}= 8 TeV pp collisions with the ATLAS detector" (2014)

Results from a search for supersymmetry in events with four or more leptons including electrons, muons and taus are presented. The analysis uses a data sample corresponding to 20.3 fb-1 of proton--proton collisions delivered by the Large Hadron Collider at sqrt{s} = 8 TeV and recorded by the ATLAS detector. . . . No significant deviations are observed in data from Standard Model predictions and results are used to set upper limits on the event yields from processes beyond the Standard Model. Exclusion limits at the 95% confidence level on the masses of relevant supersymmetric particles are obtained. In R-parity-violating simplified models with decays of the lightest supersymmetric particle to electrons and muons, limits of 1350 GeV and 750 GeV are placed on gluino and chargino masses, respectively. In R-parity-conserving simplified models with heavy neutralinos decaying to a massless lightest supersymmetric particle, heavy neutralino masses up to 620 GeV are excluded. Limits are also placed on other supersymmetric scenarios.

* The two LHC experiments, ATLAS and CMS, have presented a combined report on their single top quark production results (other than the critical top quark mass). Single top quark and top-antitop quark pair production rates closely approximate the expected values.

Nine direct measurements of the CKM matrix element Vtb are presented, with a maximum precision of 4.1% and a minimum precision of 17%. The most precise measurement has a central value of 0.998, and the central values range from 0.97 to 1.13.

Since the number can't in principle ever exceed 1 (since the square of this value is defined to be the probability that a top quark will transform into a bottom quark, rather than a strange quark or down quark, when it decays by emitting a W boson), the six best fit results greater than 1 are consistent with a best fit to a 100% preference for the tb outcome as opposed to the ts or td possibilities.

The reality is that estimates of Vtb inferred from direct measurements of Vts and Vtd are profoundly more precise than the direct measurements reported in this paper. Vts is 0.0404 to a precision of about 2.5%. Vtd is 0.00867 to a precision of about 3%. This implies that Vtb is 0.999146 to a precision of about 0.005% (about 1,000 times as precise as the direct measurements of Vtb reported in this paper). (Note that all Vti values discussed above are actually the absolute values of those elements. The true values are complex numbers whose phases encode the CP violation in the Standard Model CKM matrix.)
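The arithmetic behind that indirect estimate is essentially just the unitarity of the third row of the matrix, using the central values quoted above:

|V_{tb}| = \sqrt{1 - |V_{ts}|^2 - |V_{td}|^2} = \sqrt{1 - 0.0404^2 - 0.00867^2} \approx 0.99915

The quoted 0.005% precision follows from propagating the roughly 2.5% and 3% uncertainties on Vts and Vtd through this relation.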

An ATLAS search ruled out flavor changing neutral current (FCNC) branching fractions in single top quark decays at rates in excess of about 3×10⁻⁵. CMS also didn't see FCNCs in single top quark decays, but ruled them out with far less precision.

Tuesday, May 20, 2014

Fourth Generation Lepton Searches

The current mass limits from LEP are 80.5 GeV (90.3 GeV) for a Majorana (Dirac) ν′ decaying to τW and 101.9 GeV (or 100.8 GeV) for τ′ decaying to ν′W (or νW). We shall be able to use LHC data to exclude a range of mass combinations above these limits. This is in lieu of a dedicated search for the simplest leptonic extension of the standard model, which surprisingly still remains to be done. An even simpler though less motivated search is possible if one assumes that ν′ → W ℓ (ℓ = µ, e) is the dominant decay. . . . a τ′ mass up to at least 250 GeV would be excluded very early with 1 fb−1 of data. We estimate that the corresponding limit with present data would be at least 600 GeV. Here again no dedicated search has been reported.

From here.

Three observed excesses in the signal regions of multi-lepton searches reported to date at the LHC prevent much stricter limits on fourth generation charged lepton and fourth generation neutrino masses from being set in the linked paper. This probably reflects unduly broad event selection criteria in published multi-lepton searches designed for other purposes, such as the search for a Higgs boson, rather than the more rigorous criteria that dedicated searches for fourth generation leptons could use. It probably does not reflect a sub-discovery threshold signal of fourth generation leptons, although this can't be ruled out from the available published data.

Of course, it would also be very exciting if all or any of the three observed excesses were a signal of beyond the Standard Model (BSM) physics.

The benefits, in terms of ruling out BSM theories or BSM parameter spaces, of a dedicated fourth generation lepton search at the LHC would be substantial. Such a search could raise the exclusions from 80.5 GeV for a fourth generation Majorana neutrino, 90.3 GeV for a fourth generation Dirac neutrino, and 100.8-101.9 GeV for a fourth generation charged lepton, all set in experiments at LEP, which ceased operations in the year 2000, to about 600 GeV for each of these particles at the LHC. Few BSM experimental limits are so stale.

An increased mass limit on a fourth generation "fertile" neutrino (aka active neutrino) to about 600 GeV would be particularly relevant, as it would significantly impact the parameter space of dark matter theories and other aspects of neutrino physics. The heaviest of the three known fertile neutrino species realistically has a mass of probably not more than about 0.1 eV, so given the absence of any fertile neutrino species between 0.1 eV and 80.5 GeV, it would be very surprising to find one between 80.5 GeV and 600 GeV. But, the extra experimental confirmation would be nice to have anyway.

These searches may not have been done already, mostly because a variety of factors already strongly disfavor the SM4 extension of the Standard Model, which simply adds a fourth generation of Standard Model fermions to the existing three.

Monday, May 19, 2014

The story of mtDNA haplogroup U6

A new paper on the history of the mtDNA haplogroup U6 uses new detailed data of U6 subclades, the location of modern individuals with this mtDNA haplogroup, and ancient DNA data to trace the origins of the current distribution of this genotype.

The paper is Bernard Secher, et al., "The history of the North African mitochondrial DNA haplogroup U6 gene flow into the African, Eurasian and American continents", BMC Evolutionary Biology; 2014,14:109 doi:10.1186/1471-2148-14-109.

I summarize the paper's conclusions in detail below.

Arrival In Africa

The mtDNA U6 clade is one of several clades of mtDNA U that expanded out of Central Asia, and it corresponds archaeologically to the intrusive Levantine Aurignacian around 35 kya.  The parallel haplogroup U5 gave rise to the Aurignacian in Europe, and U2 to a contemporaneous expansion into India.  "This period coincides with the Early Upper Paleolithic (EUP) period, prior to the Last Glacial Maximum, but cold and dry enough to force a North African coastal route."

The authors note that mtDNA clade M1 back migrated to Africa at about the same time, but was probably not a fellow traveler with U6 for very long as it has a very different geographic distribution focused around Northeast Africa and East Africa.  There are several Y-DNA clades that could plausibly have been fellow travelers of U6, but their mutation rate estimated ages are so much younger than the mtDNA clade ages that later male dominated migration and replacement seems to be a better fit for these correlations.

Expansion Within Africa

U6a expanded in Northwest Africa about 9,000 years after the Levantine Aurignacian, following a gradual diffusion along the North African coast.  It expanded from Morocco ca. 26 kya and is associated archaeologically and anthropologically with the Iberomaurusian culture in the Maghreb.
Others have pointed to the Dabban industry in North Africa and its supposed source in the Levant, the Ahmarian, as the archaeological footprints of U6 coming back to Africa. However, we disagree for several reasons: firstly, they most probably evolved in situ from previous cultures, not being intrusive in their respective areas; second, their chronologies are out of phase with U6 and third, Dabban is a local industry in Cyrenaica not showing the whole coastal expansion of U6. In addition, recent archaeological evidence, based on securely dated layers, also points to the Maghreb as the place with the oldest implantation of the Iberomaurusian culture, which is coincidental with the U6 radiation from this region proposed in this and previous studies. 
The U6a2 branch expanded from Ethiopia around 20 kya, corresponding with a maximal period of aridity in North Africa.  This was probably not due to "a return to East Africa across the Sahara."  Instead:
The most probable scenario is that small human groups scattered at a low density throughout the territory, retreated in bad times to more hospitable areas such as the Moroccan Atlas Mountains and the Ethiopian Highlands.
Some investigators have proposed that U6 bearing people left Africa via the Levant ca. 40 kya.  The authors of this paper disagree.
[T]he proposed movement out of Africa through the Levantine corridor around 40 kya did not occur or has no maternal continuity to the present day. This is because: first, in that period the Eurasian haplogroups M and N had already evolved and spread at continental level in Eurasia, and, second, there is no evidence of any L-derived clade outside Africa with a similar coalescence age to that proposed movement. Under this perspective, the late Pleistocene human skull from Hofmeyr, South Africa, considered as a sub-Saharan African predecessor of the Upper Paleolithic Eurasians, should be better considered as the southernmost vestige of the Homo sapiens return to Africa.  
The rest of the human movements inside Africa, such as the Saharan occupation in the humid period by Eastern and Northern immigrations, or the retreat to sub-Saharan African southwards and to the Maghreb northwards in the desiccation period, or even the colonization of the Canary Islands, all faithfully reflect the scenarios deduced from the archaeological and anthropological information. 
The paper catalogs radiations of U6a branches within the Maghreb, including two lineages that may have spread to Iberia ca. 20 kya (U6a1, U6a1b).  There are no expansions from 17 kya to 13 kya, likely reflecting the lingering effects of the LGM's slow retreat.  There are multiple subsequent clade expansions whose timing largely corresponds to inferred population growth and climate trends.
After that, the climate shifted to a humid period in Africa and population growth was reinitiated. In Ethiopia, periodical bursts at around 13 kya (U6a2a1), 9 kya (U6a2b, U6a2a1a) and 6 kya (U6a2a1b) are detectable. 
Basic clusters like U6b, U6c and U6d also emerged within a window between 13 to 10 kya. 
U6b lineages spread from the Maghreb, through the Sahel, to West Africa and the Canary Islands (U6b1a), and are also present from the Sudan to Arabia, but not detected in Ethiopia. 
In contrast, U6c and U6d are more localized in the Maghreb. 
Further spreads of secondary U6a branches are also apparent, going southwards to Sahel countries and reaching West Africa (U6a5a). 
Autochthonous clusters in sub-Saharan Africa first appeared at around 7 kya (U6a5b), coinciding with a period of gradual desiccation that would have obliged pastoralists to abandon many desert areas. Consequently, no more U6 lineages in the Sahel are detected, while later expansions continued in West Africa (U6a3f, U6a3c, and U6b3) and the Maghreb with an additional spread to the Mediterranean shores of Europe involving U6b2, U6a3e, U6a1b and U6a3b1. . . . using only African sequences . . . 
The subdivision of HVI sequences into geographic components shows that the Maghreb component is dominant over all of North Africa, reaching 45.7% even in Arabia. Frequencies drop in Central and West Africa, suggesting a southward spread, and it is absent in East Africa where all haplotypes belong to the Ethiopian U6a2 cluster. This East African lineage is also the most prevalent in Central and West Africa, pointing to a westward expansion through the Sahel corridor. In North Africa it is second in frequency except in Algeria where it is dominant (55%) . . . . U6a2 may have reached the region through the Sahara, by maritime contacts from the Levant or, most probably both. 
U6c is confirmed to be a Maghreb lineage restricted to the Mediterranean area. 
U6b has the most widespread geographic range. . . . its present-day western and eastern areas must have been connected sometime in the past, perhaps through the Sahara during the Holocene Humid Period.

The Canary Islands.

The Canary Islands, 100 km from Western Sahara, were first "inhabited by indigenous people, today collectively known as Guanches. On the basis of anthropological, archaeological and linguistic grounds, close affinities with the North African Berbers were soon identified. Molecular analyses have confirmed these affinities. Later studies of indigenous remnants confirmed that these lineages were in the Canaries before the European colonization. . . . it is broadly accepted that the most ancient human settlement on the Canaries was not earlier than 2.5 kya."

Multiple U6 clades as well as H1 clades were present in the founding population of the Canary Islands.  The aboriginal samples have as much mtDNA diversity as the current population, and aboriginal ancient DNA included both "basic and derived U6b1a and U6c1 haplotypes."  U6c1 and U6b1 probably arrived in separate waves of premeditated maritime colonization of the islands (not sporadic male contact) with origins in the Mediterranean in Roman or Arab times.  Both clades originate in the Maghreb of North Africa.

"Curiously, one U6b1 lineage has been sporadically detected in a Lebanese mtDNA survey that might bring speculation about a Levantine origin for the U6b1 cluster. However, a more or less recent immigration of this lineage from the Canary Islands seems more convincing explanation." I am inclined to see this as a possible Phoenician, Roman or Arabian back migration to the Levant with a Mediterranean sailor possibly bringing a Guanche wife back with him to a Lebanese home.


Europe
In general, haplogroup U6 has very low frequencies in Europe. It is more frequent in the Mediterranean countries, mainly in those with longer histories of Moorish influence since medieval times, such as Portugal (2.5%), Spain (1.1%) or Sicily (0.4%). In fact, there is a significant longitudinal gradient in Mediterranean Europe, with frequencies decreasing eastwards (r = −0.87; p = 0.008) that run parallel to that found in North Africa (r = −0.97; p <0.001). Congruently, the presence of U6 in the Iberian Peninsula has been attributed to the historic Moorish expansion. However, without denying this historic gene flow, others have also suggested prehistoric inputs from North Africa.  
Actually, the U6 phylogeny and the phylogeography of its lineages are better explained admitting both prehistoric and historic influences in Europe.
Two Iberian lineages of U6a1 expand ca. 18.6 kya.  U6a1a expands in Europe ca. 13.1 kya. "There are also two sequences of Mediterranean European origin that directly emerged from the ancestral node of the East African cluster U6a2a (19.8 kya)."


Later expansions into Europe involve waves of maritime contact with North Africa, and movements from eastern areas to the Maghreb, from the Neolithic to the Chalcolithic (U6a3a1, U6a7a1, U6a7a2, and U6c1), followed by 14 European lineages that coalesce in historic times.
Some may be associated with the Roman conquest of Britain (U6d1a), the diaspora of Sephardic Jews (U6a7a1b), or the European colonization of the Americas (U6a1a1a2, U6a7a1a, U6a7a2a1, U6b1a). Roughly, 35 European lineages have prehistoric spreads and 50 sequences historic spreads. In all cases they are involved with clear North African counterparts. 
We also know something about clades that are not European specific that are found in Europe.
The largest U6 Maghreb component in Europe is found in Portugal (69.9%), then in Spain (50.0%) and Italy (53.0%), and decreases sharply in the Eastern Mediterranean (25.0%). No U6b representatives have been detected in Italy, although it is present in Iberia to the west and in the Near East to the east. 
Regarding the Canarian motif, 33% and 50% of the U6b haplotypes found respectively in mainland Portugal and Spain belong to the Canary Islands autochthonous U6b1a subgroup. Curiously, it has not been detected in the Portuguese island of Azores and Madeira or in Cape Verde either. 
U6c is confirmed as a low-frequency Mediterranean haplogroup. All four identified U6 HVI components have representatives in Atlantic Europe. This Maghreb component could have arrived through Atlantic Copper or Bronze age networks, leaving the presence of U6c to Punic or more probably, Roman colonization. 
On the other hand, the East African component in Europe has its peak in eastern Mediterranean area (62.5%) and gradually diminishes westward toward Italy (46.0%), Spain (28.3%) and mainland Portugal (20.0%). . . .  
[A]rchaeological comparisons of the different prehistoric cultures that evolved on both shores of the Mediterranean Sea point to the conclusion that each region had its own technological traditions, despite some parallel developments. This finding weakens the hypothesis of important demic or cultural interchanges, at least until the beginning of the Neolithic when prehistoric seafaring started in the Mediterranean Sea. Indeed, the rapid spread of the Neolithic Cardial Culture, or the presence of the Megalithic culture on both sides of the Mediterranean during the Chalcolithic period, would suffice to explain the presence in Europe of U6 lineages with coalescence ages since Neolithic times onwards. 
A couple of U6a lineages may have been part of a larger Mesolithic expansion to the north and south from a Franco-Cantabrian refugium as the climate eased after the LGM, although the absence of archaeological or ancient DNA evidence makes this mere conjecture.
[A]t least two U6 lineages, U6a1a and U6a5, both with European coalescences around 13 kya . . . . These would coincide with climatic improvement during the Late Glacial period. . . . several European mtDNA lineages, with similar coalescence ages, such as V, U5b1, H1 and H3, have been proposed as maternal footprints in North Africa of a hypothetical southward human spread after the Last Glacial period, from the Franco-Cantabrian refuge.
Jews and Gypsies

The presence of U6 in Sephardic Jews and some Gypsy populations is about what would be expected given the known histories of local admixture of these groups in the historic era.

Dark Matter News

LUX Light WIMP CDM Exclusion Confirmed By SuperCDMS

Several Earth based direct dark matter detection experiments have claimed to see hints of WIMP dark matter in the 8 GeV-20 GeV mass range.  The signals from CDMSII (Silicon detector) and CoGeNT were consistent with each other.  Signals from CRESST and DAMA/LIBRA were just barely consistent with each other.  The two sets of positive signals are mutually inconsistent with each other.  These hints are largely ruled out by a combination of the results from the CDMSII (Germanium detector), CDMSLite, XENON10, and EDELWEISS experiments.

Finally, both the LUX experiment and the newly released results from the SuperCDMS experiment have almost identical, ultra-precise results that exclude the entire range of all four detection hints.  This dispels any concerns that the LUX result might not have been robust due to experiment specific systematic errors.

Direct detection experiments have searched for and not found WIMP dark matter at masses of up to about 200 GeV.

The bottom line is that every hint of a direct detection of dark matter (in each case by a less precise experiment) is ruled out by eight other experimental results, including two ultra-precise searches that confirm each other.  There is no light WIMP dark matter in the vicinity of Earth.

WIMP dark matter has to be less than 7 GeV in mass if it exists in the vicinity of Earth at anything approaching the relic density expected from dark halo models for the Milky Way, and must be less than 4 GeV in mass if its interaction cross-section is at the higher end of the range suggested by the hints of direct dark matter detection.

Other Light WIMP Model Problems

Non-Detection in Precision Electroweak Boson Decay Experiments

Naively, every weakly interacting particle with a mass of 45 GeV or less should be produced in particle-antiparticle pairs in the decays of W and Z bosons and the Higgs boson. Yet, 100% of those decays are accounted for with Standard Model particle decays.

Non-Detection In SUSY Particle Searches That Also Disfavor R-Parity Conserving SUSY

Particle accelerator searches for SUSY particles rule out their existence up to hundreds of GeV of mass in most cases. The limits are particularly stringent for R-parity conserving models of SUSY, which might otherwise evade the naive limits on light weakly interacting particles from electroweak boson decays (because W and Z bosons might have the wrong R-parity and hence might not produce sparticles in that manner); those limits are based on searches for missing transverse energy in particle accelerator events.

The Case For A Hooperon

The hints of a WIMP dark matter annihilation signal from the central Milky Way seen by the Fermi experiment suggest a potential dark matter particle of about 25 GeV-40 GeV, sometimes called the Hooperon. A comprehensive experimental case for the Hooperon was most recently made in a February 26, 2014 preprint and an April 24, 2014 preprint that attempt to explain why this has not been seen in the latest LHC data.

The non-detections at the LHC, and by LUX and SuperCDMS, strain this model, however, because according to an April 4, 2014 preprint it should be detectable just around the corner at all three of these experiments.  These results rule out cold dark matter in the vicinity of Earth with a Hooperon mass unless it has very, very low cross-sections of interaction, and they force theorists to suppose a quite complex set of Hooperon properties rather than a minimal set of such properties.  Specifically, the Hooperon theorists must take the following stance to explain why it hasn't been seen elsewhere yet:
[W]e will take the DM to be a Dirac fermion and a SM gauge singlet, with couplings to right-handed down-type quarks. We take the couplings of X_{b,s,d} with quarks to be approximately flavor diagonal, allowing us to associate each flavor in the dark sector with a corresponding flavor of quarks. In particular, we take the lightest of these new particles to be associated with the b-quark, and assume that the heavier flavors decay in this lightest state.
Ten different direct dark matter detection experiments based on Earth, many of which are extremely precise, have not seen such a particle. This radiation signal is the only indication of WIMP dark matter in the 8 GeV to 200 GeV or so range, in any experiment, that hasn't been repeatedly and soundly contradicted by another experiment using a similar methodology.

Needless to say, I'm skeptical of this possibility and suspect that it has some Standard Model explanation in the not terribly well understood dynamics of the central black hole of the Milky Way, or in systematic issues with the Fermi experiment itself.  It also doesn't help that no other experiment has replicated the Fermi result yet, hints of which were emerging a decade ago.

Impact, In General, Of Non-Detection Of WIMP Cold Dark Matter On SUSY and CDM Theories

In addition to disfavoring WIMP dark matter, this collection of experimental data also disfavors SUSY models generally, because one of the core features of most popular SUSY models is that dark matter is explained as a stable lightest supersymmetric particle (LSP) that is a weakly interacting massive particle in the sub-TeV, super-GeV mass range.

Now, direct dark matter detection experiments and annihilation radiation signature searches for dark matter would fail if the weak interaction cross-section of dark matter particles were basically nil. But, SUSY particles are, by assumption, weakly interacting (their weak interactions are, in general, a fundamental part of theories of this type).

Dark matter halo distributions and dwarf galaxy level structures that are inferred from astronomy observations are inconsistent with sterile neutrino dark matter (i.e. dark matter that interacts solely via gravity and fermion contact forces) that has a mass much in excess of 3 keV, which is a million times or more lighter than the typical cold dark matter candidate.

Hints Of A 3.5 keV Dark Matter Emission Line.  

The 3.5 keV X-Ray emission line seen from multiple galactic clusters by multiple experiments, on the other hand, is a more plausible dark matter annihilation signature, and would suggest a slowly decaying warm dark matter candidate with a mass on the order of 7 keV.  No known process creates such an emission, although, of course, there are lots of astronomy processes that are ill understood.

This possibility has problems too, but is not directly contradicted by Earth based direct dark matter detection experiments, and isn't burdened by the theoretical requirements of more traditional SUSY WIMP Cold Dark Matter models. The candidate particle would be several times heavier than the preferred mass of a warm dark matter candidate based upon other data from astronomy, but this discrepancy is far less severe than the experimental hurdles to SUSY WIMP CDM.

Super-Heavy CDM.

The IceCube experiment, as of March 2013, has seen PeV energy neutrino events which could, in principle, be fit to very heavy CDM.  But, for a variety of reasons, the super-heavy cold dark matter explanation of these events has not been favored.

Higgs Physics News

* The total decay width of the Higgs boson is constrained experimentally to be not more than 4.2 times the Standard Model expectation of 4.15 MeV for a 126 GeV Standard Model Higgs boson.

* New experiments by the CMS experiment at the LHC rule out a heavy Higgs boson in most of the mass range from 200 GeV to 1 TeV.
A SM-like Higgs boson is excluded in the mass range 248-930 GeV at 95% CL using the shape-based analysis. . . . For the less sensitive cut-and-count analysis we obtain an observed exclusion of 268-756 GeV[.]
SUSY Models and a variety of more conservative Standard Model extensions with additional Higgs bosons predict the existence of additional heavy, Standard Model-like Higgs bosons.

These results force these extensions to assume either that the extra Higgs bosons are nearly degenerate in mass with the observed one, or that they are in excess of 1 TeV, which rules out much of the "natural" parameter space of SUSY models.

* A search for sub-eV scalar fields (similar in character to the Higgs field, which has a vacuum expectation value of 246 GeV; such a light field could be attributed to a second light Higgs boson or to an inflaton or dark energy field) has come up empty and placed stiff experimental constraints on this possibility.

Wednesday, May 14, 2014

Neolithic Revolution Reached Indus River Valley Via Gradual Expansion Through Persia

A review of the dates of early Neolithic sites from the Fertile Crescent to the Indus River Valley and a bit beyond is available at PLOS One (open access).

The study shows that the Neolithic expanded gradually through central Persia, starting from a point in the Zagros Mountains on the edge of the Fertile Crescent.  It disfavors models of a sea-based hop to the Indus River Valley from Mesopotamia, a coastal route migration to the Indus River Valley, and a "land rush" style settlement in which waves of farmers rush past virgin land to reach more distant virgin land further from the point of departure.  It also strongly disfavors a source for the Fertile Crescent Neolithic in South Asia or Persia.

Still No Signs Of Sparticles

Supersymmetry (SUSY) theories predict the existence of one or more supersymmetric counterparts, called "sparticles," for each particle of the Standard Model (although a few of these mix with each other, much as the electroweak gauge eigenstates of the Standard Model mix, to give rise to different sets of "mixed" particles).

The Latest Results

To date, no such particles have been discovered experimentally: "No significant deviations from standard model predictions have been observed" in the ongoing search for supersymmetry at the Large Hadron Collider (LHC), according to a combined report of the ATLAS and CMS experiments released in pre-print form today.

In admittedly mildly model dependent searches, "neutralinos" (which are fermions that are electrically neutral superpartners of the neutral electroweak bosons of the Standard Model) are ruled out for masses of up to about 350 GeV (about twice the top quark mass), and "charginos" (which are fermions that are electrically charged superpartners of the charged electroweak bosons, including the non-Standard Model charged Higgs bosons of such theories) are ruled out for masses up to about 750 GeV (more than four times the top quark mass).

This does not disprove the existence of SUSY in and of itself, but taken together with other evidence, it does narrow the parameter space of SUSY to one that is indistinguishable from the Standard Model at energies up to first run LHC energies (i.e. proton-proton collisions at sqrt(s)=8 TeV with an integrated luminosity of about 20 fb-1).

Other Searches For Sparticles And SUSY Higgs Bosons.

Searches for other sparticles have generally ruled those particles out for masses up to hundreds of GeV or even several TeV, depending on the particle.

SUSY theories also predict a minimum of five Higgs bosons (a light and a heavy neutral scalar Higgs boson, a neutral pseudoscalar Higgs boson, and positively and negatively charged Higgs bosons).  No non-Standard Model Higgs bosons have been observed, although the exclusions only reach up to hundreds of GeV, and the data can't yet definitively rule out a scenario in which two or three SUSY Higgs bosons that collectively behave like the Standard Model Higgs boson are nearly degenerate in mass around 125 GeV-126 GeV.

SUSY Bounds From The Anomalous Magnetic Moment of the Muon

The anomalous magnetic moment of the muon is indirectly affected by the masses of all existing bosons that interact via electromagnetism.  The current value of this physical constant implies that any undiscovered bosons with electroweak interactions (which would include any SUSY boson) must have certain minimum masses (citing this source), and the lower bounds are even greater if the current experimental value and the theoretical value are actually fully consistent with each other. These are as follows:
In summary we found the following particles capable of explaining the current discrepancy, assuming unit couplings: 2 TeV (0.3 TeV) neutral scalar with pure scalar (chiral) couplings, 4 TeV doubly charged scalar with pure pseudoscalar coupling[.]. . .

We also derive the following 1σ lower bounds on new particle masses assuming unit couplings and that the experimental anomaly has been otherwise resolved: a doubly charged pseudoscalar must be heavier than 7 TeV, a neutral scalar than 3 TeV[.]
The W and Z bosons and gluons are vector bosons with chiral couplings. I don't know if the couplings of the Standard Model Higgs boson or its additional supersymmetric counterparts do or do not have chiral couplings.

The supersymmetric counterparts of the Standard Model quarks and charged leptons are charged scalar bosons (singly charged in the case of the selectron, smuon and stau, and fractionally charged in the case of the squarks).

The superpartners of the neutrinos are neutral scalar bosons (the electron sneutrino, the muon sneutrino and the tau sneutrino, not to be confused with neutralinos, which are not superpartners of the neutrinos). One can imagine a slight SUSY variant in which some or all of these bosonic superpartners have J=0 but have pseudoscalar rather than scalar parity. Of the supersymmetric additional Higgs bosons, at least one would be a neutral scalar, one would be a neutral pseudoscalar, and two would be singly charged scalars.

These bosons must have at least 300 GeV of mass if they have chiral couplings, at least 2 TeV of mass if they don't have chiral couplings, and at least 3 TeV of mass if the anomalous magnetic moment of the muon actually has its theoretical value but there is experimental error in the ultraprecision measurement of it.

Non-minimal SUSY theories often have even more Higgs bosons, some of which are doubly charged, which the anomalous magnetic moment of the muon requires to be at least 4 TeV if they are pseudoscalar, and at least 7 TeV if the anomalous magnetic moment discrepancy is due to experimental error rather than being a true discrepancy between theory and reality.

The Impact of B and L Number Violations On SUSY Parameter Space

Furthermore, most SUSY theories do not separately conserve baryon number (B) or lepton number (L), but only B-L, a property that leads to phenomena such as neutrinoless double beta decay and flavor changing neutral current interactions.  Critically, the rates of these phenomena are generically functions of sparticle masses and increase as sparticle masses increase.  So, if a SUSY theory is fairly minimal and typical in all other respects, and has sparticles too heavy to otherwise be detected even inconclusively at current LHC energies, neutrinoless double beta decay rates should be high and flavor changing neutral currents should occur relatively frequently.

No such interactions have been observed, and strict constraints on the maximum rates at which these kinds of interactions can occur have been established experimentally.  Current neutrinoless double beta decay searches (even one in Moscow which claims to have seen a signal not seen by any other experiments) are sufficiently strict that they effectively rule out sparticles with masses in excess of those that have been ruled out by the LHC.

Now, obviously, a theorist can go back to the drawing board and come up with a SUSY-type theory that suppresses or rules out B number and L number non-conservation interactions to some arbitrary scale beyond the scope of current experiments.  But, experimental data now rule out pretty much every SUSY variation that isn't modified in this way.

Searching For Particular SUSY Particles v. Searching For Any SUSY Particle

Also, keep in mind that these are generally two sigma exclusions, covering a range over which it is 95% certain that there are no sparticles of the type sought.  A one sigma exclusion (a range over which there is a 68% probability that there is no sparticle below the applicable mass cutoff) would extend to higher masses.  Now, if you want real confidence that a particular particle isn't in a particular mass range, a two or three sigma cutoff is appropriate.

But, suppose that the question you really want to ask is "are there any superpartners of less than a given mass?"  If, for model dependent reasons, the lightest supersymmetric particle (LSP) should have several companion supersymmetric particles of similar order of magnitude in mass, then the one sigma cutoff of each of the individual searches ought to be relevant, because a search for each of the lighter SUSY particles ought to reveal at least one of them at that significance.

If there are a dozen or so undiscovered supersymmetric particles out there and several of them are relatively light, the odds that we would receive a strong hint of at least one of them in some kind of data, even without a definitive discovery of any particular one, are still much greater than the odds that we would actually discover any single specified SUSY particle.

Thus, the breadth of the null result for beyond the Standard Model physics also disfavors SUSY scenarios with a whole suite of sparticles with masses "just over the horizon" of current experimental limits. If the full suite of particles predicted by SUSY theories existed we would expect to see a variety of low to moderate significance anomalies and indirect effects at lower energies that are not observed, even in the absence of any definitive discovery of a particular sparticle at or near the threshold for discovery in high energy physics.

The Impact Of The Higgs Boson Discovery and Conclusion

The null results of these searches, the discovery of a 125.9 GeV Higgs boson that is identical, in all respects measured to date, to the Standard Model Higgs boson, and various non-collider data, taken as a whole, largely rule out "minimal" variations of SUSY at the electroweak energy scale where sparticles had been expected when the theory was originally proposed, fully "natural" versions of SUSY, and a wide variety of other variations and parameter spaces for supersymmetric theories.

A Higgs boson at the mass observed also resolves what could have been a critical flaw in the Standard Model. In scenarios with many other possible Higgs boson masses, or with no Higgs boson at all, the equations of the Standard Model broke down at energies well under the "GUT scale". But, as it is, the Standard Model is "ultraviolet complete" (i.e. its equations produce results that make sense even at high energies).

Thus, SUSY is not necessary to have a mathematically coherent Standard Model at high energies. The Standard Model may be a bit ugly, but it produces coherent predictions, even at extremely high energies that could never be replicated experimentally or through astronomy observations (which can't directly observe the earliest moments of the universe when energy scales were that high).

It is currently fashionable to propose SUSY models in which all but a few supersymmetric particles have masses of several TeV to tens of TeV.

Monday, May 12, 2014

BICEP-2 Data Probably Flawed

A flaw in methodology has been identified that potentially undermines the key conclusion of the BICEP-2 experiment that it had detected significant tensor mode gravitational waves at a high level of statistical significance (a conclusion strongly supporting certain kinds of cosmological inflation theories). (One of the first blogs to note this fact, strongly hinted at in a Planck experiment sponsored pre-print, has a post here that took several days to gain traction.)

Apparently, a piece of source data from the Planck experiment that was used to adjust the BICEP-2 data was misinterpreted. Once corrected, the BICEP-2 result on tensor modes will definitely be less dramatic, although it may hint at some tensor modes, but without the same statistical significance and without the same intensity.

From the start, the inconsistency between the BICEP-2 result and the preliminary Planck results had cast doubt on the results. As I explained when the results were announced:

Given the previous data, the best combined fit is now r=0.10 to 0.11, assuming that the BICEP-2 result isn't flawed in methodology in some way, which is an entirely plausible possibility that will look more plausible if it is not confirmed by the Planck polarization data later this year, and several other experiments that will be reporting their results within the next year or two. Skepticism of the result, in the absence of independent confirmation by another experiment (Jester puts the odds that this result is right at only 50-50) flows from the fact that the value reported is so different from the consensus value from all previous experiments, with the results in a roughly three standard deviation tension with each other.

Now, it seems that the doubt was justified. At least, these methodological errors were discovered much sooner than the notorious OPERA experiment's superluminal neutrino results, although still, as many as 300 papers based on the early and probably flawed BICEP-2 results from this March have already been written and published in pre-print form so far.

Another important consequence of the BICEP-2 results, had they been true, would have been that they would have caused the cosmological evidence to clearly favor the existence of four, rather than three, kinds of light neutrinos. The evidence that BICEP-2 had provided for this beyond the Standard Model number of neutrino species has also been called into question.

UPDATE (May 12, 2014 5:20 p.m. MDT):  Jester adds the following update in the comments:

One comment: indeed, the issue affects only BICEP's DDM2 foreground emission model. But DDM2 is what they single out in the paper as their best model. Recall that in the original analysis using DDM2 shifts the central value of r from 0.2 down to 0.16. Now we know that their best model underestimates the foreground, so we know the significance must go down further. By how much, I don't know. Various rumors place the significance of the corrected signal between 0 and 2 sigma.
A corrected signal between 0 and 2 sigma with the old error bar of +0.07/-0.05 would imply r between 0 and 0.1 with the new analysis, and would have a central value consistent with Planck.  END UPDATE.

Off topic but also notable: one of a number of final reports from Fermilab's D0 experiment has reached the following conclusion:
We present an overview of the measurements of the like-sign dimuon charge asymmetry by the D0 Collaboration at the Fermilab Tevatron proton-antiproton Collider. The results differ from the Standard Model prediction of CP violation in mixing and interference of B^0 and B^0_s by 3.6 standard deviations.
Evidence of beyond the Standard Model phenomena at the 3.6 sigma level from a reputable particle physics experiment is indeed notable, and confirms previous published results from 2010 and 2011. (Jester discussed the similar but slightly stronger 2011 results.)

But, given recent concern over the declining accuracy of the Run-II muon detectors in the D0 experiment over time, the low power of Fermilab relative to the Large Hadron Collider (LHC)'s experiments (ATLAS, CMS, LHCb), and the apparent lack of clear replication of this result by the other experiment at Fermilab (CDF) or the B-factories (Belle and BaBar), one should not rush to assume that this result is correct.

Notably, the D0 paper whose pre-print came out today does not discuss the replication of this result, or the lack thereof, by other experiments.  According to a PowerPoint presentation by D0 related to the previous result, CDF saw something, but only at the 2 sigma level.  A result at this level could seem notable when it isn't, for example, simply because one or two important sources of systematic error were considered but underestimated in magnitude.  Early LHCb results strongly constrain the magnitude and nature of the anomaly reported by D0.  BaBar experiment results also fail to strongly confirm this result, and the other "B-factory," Belle, has also not produced dramatic confirmations of it.  A possible flaw in the Standard Model expectation calculation used by D0 has also been identified.  More recent LHCb results and B-factory results are consistent with the Standard Model expectation.  So, the result from D0 most likely represents experimental error, a statistical fluke, or a problem in modeling the expectation, rather than new physics.

It is not immediately obvious to me what kind of beyond the Standard Model theories would predict the level of dimuon charge asymmetry observed by D0 without observing other phenomena that have not been observed at Tevatron or the LHC.

Wednesday, May 7, 2014

Experiments Reaffirm Original Koide's Rule Tau Lepton Mass Prediction

The most precise experimental measurement yet of the mass of the tau lepton by the BESIII Collaboration finds that it has a mass of 1776.91 +/- 0.12 +0.10/-0.13 MeV/c^2.  Adding in quadrature, the margin of error for this individual measurement is about +/-0.17, almost as good as the prior world average.  (The BESIII results also discuss a number of more conventional theoretical predictions from the Standard Model regarding the tau lepton mass, particularly in relation to "lepton universality.")

Prior to this measurement, the Particle Data Group value for the mass of the tau lepton was 1776.82 +/- 0.16 MeV/c^2.  Thus, the new result is about 0.09 MeV higher than the old value and is consistent at a 0.5 sigma level with the old world average.

[Update on May 8, 2014: actually, the new BESIII Collaboration data point could even bring down the PDG world average, because it would probably replace the existing BES data point from 1996 in that calculation, which is higher, at 1776.96 MeV/c^2, although with a significantly larger margin of error of about +/- 0.29 MeV/c^2 (this older BES data point is also the closest of any of the existing data points to the original Koide's rule prediction discussed below).  But, on the other hand, the margin of error of the BESIII data point is the lowest ever for any single measurement (roughly tied with two previous measurements in statistical error, and far and away the best ever in terms of systematic error and combined margin of error), so the new BESIII data point will get more weight in the world average than the old BES data point did, and it is almost as high as the old BES data point, relative to the other data points in the world average.  The highest number in the current PDG fit is a 1978 data point from DELCO of 1783 MeV, which is just barely within two sigma of the current world average and is fourteen years older than the next oldest data point in the world average, but it gets little weight at all in the current calculation of the world average because its margin of error of +3/-4 MeV/c^2 is so large.  The only other result higher than the BES data point among the eight data points that contribute to the current world average is a CLEO experiment result from 1997 of 1778.2 +/- 0.8 +/- 1.2 MeV, which also doesn't get much weight due to its large margin of error and is still within one sigma of the current world average.]

The Particle Data Group's world averages weight data points based upon their margins of error, while using only the most precise measurement from each experiment.  Thus, the new PDG world average after the BESIII experiment will be roughly 1776.86 MeV/c^2, give or take about 0.1 MeV/c^2, and may have a slightly lower margin of error, since a less precise result will be displaced by a more precise one.
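A crude way to see where that ballpark figure comes from is a naive inverse-variance combination of the old world average with the new BESIII point.  This is only a sanity check (the PDG actually refits the individual measurements, with the new BESIII point replacing the older BES point), but it lands in the same place:

# Naive inverse-variance weighted combination (a rough sanity check only;
# the real PDG average refits the individual measurements).
old_avg, old_err = 1776.82, 0.16      # pre-BESIII PDG world average (MeV/c^2)
bes3, bes3_err = 1776.91, 0.17        # new BESIII measurement, errors added in quadrature

w_old, w_new = 1 / old_err**2, 1 / bes3_err**2
combined = (old_avg * w_old + bes3 * w_new) / (w_old + w_new)
combined_err = (w_old + w_new) ** -0.5

print(f"naive combined value: {combined:.2f} +/- {combined_err:.2f} MeV/c^2")
# prints roughly 1776.86 +/- 0.12, in line with the estimate above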

The Original Koide's Rule Prediction

The original Koide's rule is a hypothesis about the relationship between the masses of the charged leptons.  It predicts that the sum of the three charged lepton masses, divided by the square of the sum of the square roots of the charged lepton masses, is equal to exactly 2/3rds.
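In symbols:

\frac{m_e + m_\mu + m_\tau}{\left(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\right)^2} = \frac{2}{3}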

Since the electron and muon masses are known much more precisely than the tau lepton mass, it is possible to use the original Koide's rule (proposed by Yoshio Koide in 1981, just six years after the tau lepton was first discovered experimentally) with the electron mass and muon mass as inputs to predict the tau lepton mass with a precision much greater than current experimental measurements permit us to achieve directly.

The Particle Data Group world average value for the mass of the electron is 0.510998928 +/- 0.000000011 MeV/c^2.  The Particle Data Group world average value for the mass of the muon is 105.6583715 +/- 0.0000035 MeV/c^2.  These values are both precise to roughly one part per 100 million.

If the original Koide's Rule is true, then the predicted mass of the tau lepton to ten significant digits is 1776.968959 MeV/c^2 (a value which probably slightly overstates the precision of the prediction by one or two significant digits, but is vastly more precise than the current experimental measurements which are accurate to less than six significant digits; I'm too lazy at the moment to work out the precise margin of error for the predicted value). [Updated May 8, 2014: arivero at Physics Forum states that the original Koide's rule prediction is 1776.96894(7) which is consistent with the result that I calculated from first principles using PDG data and the margin of error that I guessed was present, but calculated rather than guessed the margin of error.  Given the limited precision of current experimental measurements of the tau lepton mass, the two statements of the prediction are effectively identical and exact for all practical purposes.]
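For anyone who wants to reproduce the prediction, the Koide relation can be treated as a quadratic equation in the square root of the tau mass, keeping the larger root (the physical one, heavier than the muon).  A minimal sketch using the PDG electron and muon masses quoted above:

import math

# PDG world average inputs quoted above (MeV/c^2)
m_e = 0.510998928
m_mu = 105.6583715

# Koide's rule: (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2 = 2/3
# Writing x = sqrt(m_tau), s = sqrt(m_e) + sqrt(m_mu), and q = m_e + m_mu,
# the rule rearranges to x**2 - 4*s*x + (3*q - 2*s**2) = 0.
s = math.sqrt(m_e) + math.sqrt(m_mu)
q = m_e + m_mu

# Larger root of the quadratic (the physical one, with m_tau > m_mu)
x = 2 * s + math.sqrt(6 * s**2 - 3 * q)
m_tau = x**2

print(f"predicted tau mass: {m_tau:.6f} MeV/c^2")  # approximately 1776.97 MeV/c^2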

The pre-BESIII paper PDG value for the tau lepton mass is 0.93 sigma less than the original Koide's rule value.  The new BESIII value for the tau lepton mass is 0.33 sigma less than the original Koide's rule value.

An Abbreviated History of the Original Koide's Rule

In 1983, using the then best available measurements of the electron mass and muon mass, the original Koide's rule predicted a tau lepton mass of 1786.66 MeV/c^2.  But, increased precision in the measurement of the electron and muon masses soon tweaked that prediction to something close to the current 1776.968959 MeV/c^2 predicted value, which is, to the same level of precision as current experimental measurements, 1776.97 Mev/c^2.  By 1994 (and probably somewhat sooner than that), the prediction of the original Koide's rule had shifted to 1776.97 MeV/c^2.  Thus, the prediction of the original Koide's rule has been essentially unchanged for more than twenty years.

Both the new BESIII measurement and the PDG world average values are consistent experimentally with the original Koide's rule prediction (as updated sometime between 1981 and 1994), but the new most precise measurement ever of the tau lepton mass is almost three times closer to the original Koide's rule prediction than the old world average and will eventually also bring up the PDG world average significantly in the direction of the original Koide's rule prediction.

Koide's 33 year old formula is still an excellent tool for accurately predicting the direction that new more precise experimental measurements of the tau lepton mass will shift from old world average values.  This prediction has shifted only very slightly as the electron and muon mass have been more accurately measured.

We don't really know why the original Koide's rule works, or why extensions of it also seem to provide reasonably accurate first order estimates of the other fermion masses as well.  But, the fact that it still does work with exquisite precision 33 years later, suggests that there is some underlying phenomena that gives rise to this relationship and that this relationship is not simply a coincidence.

[Update Below Made On May 8, 2014:]

Prospects For An Improved Experimental Tau Lepton Mass Value

Recall that the state of the art BESIII measurement is 1776.91 +/- 0.12 +0.10/-0.13 MeV/c^2, with a combined margin of error of about +/- 0.17 MeV/c^2.  As usual, the first component of the margin of error is statistical and the second is systematic.

Statistical error and systematic error make roughly equal contributions to the total uncertainty in the measurement.

Our theoretical expectation, if the original Koide's rule is an accurate conjecture, is that the actual error in the BESIII measurement is about -0.06 MeV, which is about 0.33 sigma from the 20+ year old original Koide's rule expectation.

Given the small size of this discrepancy, it is entirely possible, for example, that the systematic error is in fact essentially zero, because the conservative estimates of its individual components are overstated or cancel each other out more completely than we would expect from random chance, and that all of the error arises from statistical variation.  Alternately, it is also entirely possible that we have gotten lucky, that the sample is in fact closer than usual to the true mean, and that all or most of the discrepancy between the true value and the measured one is due to systematic error.

Statistical Error

If the error were truly Gaussian (i.e. distributed according to the "normal" bell curve distribution), we would expect the experimental result to differ from the true value by about one sigma on average.  And the expected amount of statistical error follows purely from the sample size (which is not reasonably subject to question in this case) and from mathematics.

The only possible reasonable objection to the mathematics is that the statistical error may not actually be distributed in a Gaussian manner.  Indeed, the true statistical error in these measurements in any particular case probably isn't really distributed in a precisely Gaussian manner.  Nature probably isn't that convenient.

But this is not a very strong concern, because, according to the central limit theorem, the combined statistical error from summing the error distributions of individual events (provided they meet certain minimum criteria that are present in this kind of context) asymptotically approaches a Gaussian distribution as the sample size increases to infinity.  Refinements of the theorem also allow us to quantify the discrepancy between a Gaussian distribution of statistical error and the actual distribution for a sample of a given size.
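
As a quick illustration of that point (assuming numpy is available, and using an exponential distribution purely as a stand-in for some non-Gaussian per-event error), the means of samples the size of the BESIII post-cut data set already behave very nearly like Gaussian draws:

    import numpy as np

    rng = np.random.default_rng(0)
    n_events = 1171                 # same order as the BESIII post-cut sample
    n_pseudo_experiments = 5_000

    # each pseudo-experiment averages n_events draws from a skewed, non-Gaussian distribution
    draws = rng.exponential(scale=1.0, size=(n_pseudo_experiments, n_events))
    means = draws.mean(axis=1)

    # for a Gaussian, about 68.3% of outcomes fall within one standard deviation of the mean
    within_one_sigma = np.mean(np.abs(means - means.mean()) < means.std())
    print(round(float(within_one_sigma), 3))   # close to 0.683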

Thus, the estimated statistical error in the BESIII experiment, or in any future experiment, is probably very nearly accurate, neither too optimistic nor too conservative, relative to the actual amount of statistical error that can be expected given all available evidence (other than the "true value" of the measured quantity).

For practical purposes, the statistical error is purely a function of sample size, with a few twists arising from the fact that several sub-samples with different characteristics need to be combined.  BESIII applied a variety of sophisticated, theoretically driven data cuts to a raw sample of about 56,000,000 collision events to produce a final sample of 1,171 events from which information about the tau lepton mass could be extracted.

Statistical error is conceptually easy to reduce.  The longer you run your experiment and the more events you collect, the smaller your statistical error will be in the end.  Because statistical error shrinks roughly in proportion to the square root of the sample size, a good rule of thumb is that increasing the sample size by a factor of about one hundred reduces the statistical margin of error by a factor of about ten.

Thus, in order to reduce the statistical margin of error for the tau lepton mass from +/- 0.12 MeV to +/- 0.012 MeV, experimenters would need about 117,100 post-cut events derived from 5,600,000,000 pre-cut events, assuming an experimental setup otherwise similar to BESIII.  With an experimental apparatus the same size as BESIII's, this would take several centuries, and it would take a long time to gather that much more data even in an experiment significantly larger than BESIII.
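
A back-of-the-envelope version of that scaling, using the BESIII figures quoted above and the usual 1/sqrt(N) rule of thumb (the real analysis combines sub-samples and is more involved than this):

    from math import ceil

    current_postcut    = 1_171          # BESIII post-cut events
    current_precut     = 56_000_000     # BESIII pre-cut collision events
    current_stat_error = 0.12           # MeV/c^2, BESIII statistical error

    def sample_needed(target_error):
        """Events needed for a target statistical error, assuming error ~ 1/sqrt(N)."""
        scale = (current_stat_error / target_error) ** 2
        return ceil(current_postcut * scale), ceil(current_precut * scale)

    print(sample_needed(0.012))   # about 117,100 post-cut and 5.6 billion pre-cut events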

Even with a meta-analysis of all experiments conducted to date, plus all of the data that can be obtained over the next decade or so from experiments that are still actively collecting data or that are currently in the planning stages and will have collected meaningful amounts of data by then, it is not realistic to expect the aggregate post-cut event sample to grow by a factor of one hundred.

The combined sample size from every experiment conducted to date is probably less than five or ten times the size of the BESIII sample on its own (based on the previous, less precise measurements of the tau lepton mass cited in the literature review of the BESIII paper).  We'll be lucky to get a total combined sample size of more than three or four times the size of the existing sample in another decade, if that.  This translates into an optimistic estimate of the potential improvement in statistical error over the next decade on the order of 45% to 50%.  Thus, the statistical error might fall from +/- 0.12 MeV to somewhere between +/- 0.06 MeV and +/- 0.07 MeV, at best, by around the year 2024.

This amount of improvement in statistical error, with no improvement in systematic error, could reduce the combined total margin of error in the tau lepton mass measurement from the existing +/- 0.17 MeV to about +/- 0.13 MeV by 2024, an improvement of about 20%.  But that is not enough to cross a five sigma discovery threshold with respect to, or to rule out, any plausible phenomenological prediction regarding the tau lepton mass in a way that isn't already possible with existing data.  So any breakthroughs arising from the tau lepton mass measurement will require not just a larger sample size, but also major advances in reducing systematic error.
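
The projection in the last two paragraphs can be sketched the same way, again symmetrizing the systematic error (my simplification) and holding it fixed while the statistical error shrinks with a three- to four-fold larger combined sample:

    from math import sqrt, hypot

    stat_error = 0.12               # MeV/c^2, current statistical error
    syst_error = (0.10 + 0.13) / 2  # MeV/c^2, symmetrized systematic, held fixed

    for sample_growth in (3, 4):
        projected_stat  = stat_error / sqrt(sample_growth)
        projected_total = hypot(projected_stat, syst_error)
        print(sample_growth, round(projected_stat, 3), round(projected_total, 3))
    # statistical error falls to roughly 0.06-0.07 MeV and the total to roughly 0.13 MeV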

Systematic Error

The BESIII experimenters considered nine different potential sources of systematic error in arriving at their final total estimated systematic error.  None of them dominates the overall total.  The biggest single source of estimated systematic error, a technically obscure issue, contributed +/- 0.05 MeV to the final result.

All of the multiple independent experimental measurements of the tau lepton mass have been designed by people in the same high energy experimental physics community who talk to each other and design experiments in similar ways.

Despite the fact that these experiments are independent of each other, they seem to have consistently come in below the mass of the tau lepton expected under the original Koide's rule.  And over time, new, more precise experimental measurements have gradually converged towards the value predicted by the original Koide's rule.

This doesn't prove that systematic error is indeed pulling down the experimentally measured tau lepton mass, or that systematic error is present in the estimated amounts at all.  But the circumstantial evidence does point to that possibility.

It is hard to know, without really being immersed in the technical details, how stubborn the systematic errors in this measurement will be over the next decade or so.  They could improve only slightly, since this is a fairly mature type of experimental apparatus.  Or there could be major advances that slash this part of the total error.

Still, it would probably be unreasonably optimistic to expect a total systematic error with so many independent causes to be reduced by more than 50% in the next decade.