Saturday, November 30, 2013

Pre-Modern Human Fertility And Mortality

Any effort to use genetics to reconstruct the human experience prior to the historic era with reasonable accuracy requires investigators to fit genetic data to population models that are a reasonable fit to what we know from other sources about fertility and mortality.

Among the most important variables are the average length of a generation, the number of lifetime births per woman, and the likelihood that an infant born will live long enough to reproduce.  The extent to which there is sex specific infanticide or pre-reproductive sex biased mortality, and the extent of polygamy (or serial monogamy), also influence the genetic inheritance of uniparental genetic traits such as mitochondrial DNA (mtDNA) and non-recombining Y-DNA (NRY-DNA or commonly, if not perfectly accurately, just Y-DNA), and of X chromosome linked autosomal genetic traits.  An understanding of the extent to which women have children with multiple men, whether due to cuckoldry or sequential relationships (e.g. remarriage of widows), also has some impact on the models.

Often, fairly simple models are sufficient: a twenty-nine year generation and steady exponential population growth from the estimated population at time X to the estimated population at the time the DNA samples were taken, which implies that the average woman has modestly more than two children per lifetime who survive to reproduce.  But, better models, also informed by absolute effective population size, are necessary to understand the impact of factors like population bottlenecks, rapid population expansions, and differential expansion rates of genetic markers in populations that co-existed at the same time.

Dienekes' Anthropology Blog notes a recent paper making an imperfect effort to model the expansion of the predominant Y-DNA clades of Western Europe (a subset of Y-DNA R1b) and of Africa, based upon publicly available datasets. The abstract states the main conclusion:

The best-fitting models in Africa and Europe are very different. In Africa, the expansion took about 12 thousand years, ending very recently; it started from approximately 40 men and numbers expanded approximately 50-fold. In Europe, the expansion was much more rapid, taking only a few generations and occurring as soon as the major R1b lineage entered Europe; it started from just one to three men, whose numbers expanded more than a thousandfold.
I make an effort to translate the paper's conclusions on the European expansion of the leading subset of R1b into some of these basic demographic parameters, with some back of napkin calculations and references to the literature, in a series of comments to the post.

Fertility and Mortality in Premodern Japan

My ongoing analysis has also led me to an open access paper on fertility and mortality in pre-modern Japan, based on a 1974 analysis of four village family registries going back to the 1720s.

The paper has a wealth of interesting details.  For example, the author notes a statistically apparent systematic falsification of birth dates divisible by five or containing the numbers four or nine, which were disfavored for superstitious reasons.  The village family registries also show clear signs of deliberate suppression of data on infant mortality and infanticide, making them an unreliable source for that data point, which, fortunately, is one of the easier prehistoric data points to obtain from other sources.

The paper also contains a discussion of the widespread use of infanticide and abortion in Tokugawa era Japan as a means of birth control, and of the widespread deferral of marriage and childbearing until after ages twenty to twenty-two (consistently twenty-two or later, on average, in all four villages in all four periods), as means by which Japan started to make the demographic transition associated with industrialization and modernity even before the Meiji period (which commenced in 1867).  Less than 5.2% of births in the entire data set were to teenagers (half the modern U.S. rate for native born women) and only one out of 779 was to a girl under the age of fifteen.  Infanticide in that era was referred to as "returning" the infant in the Japanese language of the time, and may have been practiced particularly frequently with inappropriately early out of wedlock births, which may in any case have been rare.  Abortion started to be used as a form of birth control in Japan in the late Tokugawa era.

The percentage of children aged one not surviving to reproductive age (twenty-two) was never lower than twenty-five percent, and aside from one time period in one village where it was sixty percent, tended to hover around thirty to forty percent.  If one adds 15% to 20% infant mortality apart from infanticide, which is typical of the premodern era, one finds a likelihood of a newborn infant surviving to reproduce of between 48% on the pessimistic end and 64% on the optimistic end, setting aside outlier periods with survival rates as low as about 32%.  Considering that Edo period Japan was probably better off than most pre-modern places, as evidenced by a below typical premodern total crude death rate at all four villages throughout the data set, one can establish some reasonable assumptions for demographic modeling purposes:

An assumption that half of children born survive to reproduce in normal times, that two-thirds of children born survive to reproduce in the best of times, and that one-third of children born survive to reproduce in hard times.

While these estimates are crude, they do have the virtue of being based on accurate pre-industrial record keeping rather than modern figures (about 90%-95% of children born now survive to reproduce, the low end including a factor for non-reproduction despite living to reproductive age, which was rare in premodern times, with a large share of modern selective effects instead operating through generation length and lifetime fertility), although they may be optimistic for earlier Mesolithic, Neolithic and Bronze Age populations.  Thus, replacement rate reproduction is about four children per lifetime in normal times, but just three in good times, and six in bad times.
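
To make the arithmetic explicit, here is a minimal sketch (in Python) of the survival and replacement calculations above; the mortality inputs are the ranges quoted from the paper, and the stylized survival rates are the assumptions stated above, not new data.

```python
# Minimal sketch of the survival and replacement-rate arithmetic above.
# Inputs are the mortality ranges quoted from the paper, not new data.

def survival_to_reproduction(infant_mortality, child_mortality):
    """Probability that a newborn survives to reproductive age (about 22)."""
    return (1 - infant_mortality) * (1 - child_mortality)

print(survival_to_reproduction(0.15, 0.25))  # ~0.64, optimistic end
print(survival_to_reproduction(0.20, 0.40))  # ~0.48, pessimistic end
print(survival_to_reproduction(0.20, 0.60))  # ~0.32, outlier famine periods

def replacement_fertility(survival):
    """Lifetime births per woman needed for two children to survive to reproduce."""
    return 2 / survival

for label, s in [("good times", 2 / 3), ("normal times", 1 / 2), ("bad times", 1 / 3)]:
    print(label, round(replacement_fertility(s), 1))  # 3.0, 4.0, 6.0 births
```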

For one village, which was typical, the average age at the birth of a first child was twenty-two and the average age at last birth was thirty-six (implying a generation length of approximately twenty-nine), implying a quite brief fourteen year window of childbearing, on average, for Japanese women of that era.  Age at first childbirth appears to be one of the key determinants of premodern lifetime fertility per woman that would have been within the control of pre-modern people, other than infanticide rates.

This implies that the odds of a parent losing a child sometime in a lifetime would have been about 80% in good times, 98% in normal times, and 99%+ in bad times.  Child mortality risk was highest before age six, and particularly high in the first two years, with the risk falling to close to prime of life adult levels after that point.  The odds of a child losing at least one parent before having a child of their own ranged from about 20%-30%, while the odds of losing both parents in that time frame averaged about 2%-4%, with later birth order children obviously at greater risk in both cases.
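
The child-loss figures can be reproduced with the same kind of arithmetic, although the exact numbers depend on the assumed completed family size; the family sizes in this sketch are illustrative assumptions chosen to roughly match the figures above, not data taken from the paper.

```python
# Illustrative only: probability that a parent loses at least one child, assuming
# each child independently survives to adulthood with probability s in a completed
# family of n births.  The family sizes are assumptions chosen to roughly match
# the figures quoted above, not data taken from the paper.

def p_lose_at_least_one_child(s, n):
    return 1 - s ** n

print(p_lose_at_least_one_child(2 / 3, 4))  # ~0.80, good times
print(p_lose_at_least_one_child(1 / 2, 6))  # ~0.98, normal times
print(p_lose_at_least_one_child(1 / 3, 6))  # ~0.999, bad times
```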

The data set also nicely illustrates the demographic impact of the Temmei Famine of 1787 which hit central Japan particularly hard, and provides an example of the likely impact of similar events of that magnitude in history and prehistory.

It is hard to tell, however, how much of an impact resource scarcity had on factors like age at first marriage, infanticide rates, and pre-reproductive death rates.

A next step will be to locate papers on the onset of menses in premodern times relative to modern times, and between forager and farmer populations in pre-modern times.  For most of history, menarche has come later than it does today, setting a floor on the start of the window of lifetime fertility.  I am also looking for more historical data on premodern death rates from childbirth.

Wednesday, November 27, 2013

More Cosmic Accounting And Some Speculations On Matter-Antimatter Asymmetry

In March of this year, I did a post on "Cosmic Accounting and Neutrino Mass". This is also a cosmic accounting post. The notion is to determine the experimentally measured particle makeup of the universe to use as a reference point for evaluating theories that purport to explain this reality.

After doing that accounting with the best available data, I speculate about what this could imply in terms of dark matter and cosmology.

Baryon Number and Lepton Number in the Standard Model With Sphalerons

In the Standard Model, baryon number (the number of quarks minus the number of antiquarks, divided by three) is conserved, as is lepton number (the number of charged leptons and neutrinos minus the number of charged antileptons and antineutrinos), except in sphaleron processes. Per Wikipedia:

A sphaleron (Greek: σφαλερός "weak, dangerous") is a static (time-independent) solution to the electroweak field equations of the Standard Model of particle physics, and it is involved in processes that violate baryon and lepton number. Such processes cannot be represented by Feynman diagrams, and are therefore called non-perturbative. Geometrically, a sphaleron is simply a saddle point of the electroweak potential energy (in the infinite-dimensional field space), much like the saddle point of the surface z(x,y) = x² − y² in three dimensional analytic geometry.

In the standard model, processes violating baryon number convert three baryons to three antileptons, and related processes. This violates conservation of baryon number and lepton number, but the difference B−L is conserved. In fact, a sphaleron may convert baryons to anti-leptons and anti-baryons to leptons, and hence a quark may be converted to 2 anti-quarks and an anti-lepton, and an anti-quark may be converted to 2 quarks and a lepton. A sphaleron is similar to the midpoint (τ = 0) of the instanton, so it is non-perturbative. This means that under normal conditions sphalerons are unobservably rare. However, they would have been more common at the higher temperatures of the early universe.
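
As a quick bookkeeping check of the conservation claim in the quoted passage, here is a stylized tally (not a dynamical calculation) for a sphaleron-like transition of three baryons into three antileptons:

```python
# Stylized bookkeeping for the sphaleron-like transition described in the quote:
# three baryons (B = +1 each) become three antileptons (L = -1 each).
initial = {"B": 3, "L": 0}
final = {"B": 0, "L": -3}

delta_B = final["B"] - initial["B"]   # -3: baryon number is violated
delta_L = final["L"] - initial["L"]   # -3: lepton number is violated
delta_B_minus_L = (final["B"] - final["L"]) - (initial["B"] - initial["L"])

print(delta_B, delta_L, delta_B_minus_L)  # -3 -3 0, so B - L is conserved
```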

The trouble is that if you start with B=0 and L=0, as you would expect in a Big Bang composed initially of pure energy, it is hard to explain how you end up with the observed values of B and L in the universe, which are so far from zero.

The mainstream view among physicists, although some theorists dissent from this analysis, is that Standard Model sphaleron processes during the twenty minutes in which Big Bang Nucleosynthesis is believed to have taken place, or the preceding ten seconds between the Big Bang and the onset of Big Bang Nucleosynthesis, can't account for the massive asymmetry between baryons made of matter and baryonic antimatter that is observed in the universe (also here) without beyond the Standard Model physics (also here).  Likewise, after that point, sphaleron processes should be so rare that they can't explain the baryon asymmetry of the universe (BAU), which is one of the great unsolved problems in physics.

What are the experimentally measured values of baryon number and lepton number in the universe?

Astronomers have been able to estimate the baryon number of the universe to one or two significant digits, but while they have a good estimate of the number of Standard Model leptons in the universe (to one significant digit) they have a less accurate estimate of the lepton number of the universe since they don't know the relative number of neutrinos and antineutrinos.

As one science education website explains:

[S]cientific estimates say that there are about 300 [neutrinos] per cubic centimeter in the universe. . . . Compare that to the density of what makes up normal matter as we know it, protons electrons and neutrons (which together are called “baryons”) – about 10^-7 per cubic centimeter. . . . the size of the observable universe is a sphere about 92 million light-years across. So the total number of neutrinos in the observable universe is about 1.2 x 10^89! That’s quite a lot – about a billion times the total number of baryons in the observable universe. . . . We don’t know the mass of a neutrino exactly, but a decent rough estimate of it is 0.3 eV (or 5.35 x 10^-37 kg). Scientists have indirect proof that neutrinos do have some mass, but as far as we can tell, they’re the lightest elementary particle in the universe. So there are lots of neutrinos in the universe, but they each don’t weigh very much. So that means in a cube of volume one Astronomical Unit on a side (where one AU is the distance from the sun to the earth, or about 93 million miles), there’s only about 600 tons worth of neutrinos! So the mass of neutrinos in empty space millions of miles across contains less mass than the typical apartment building.

(In fact, a better estimate of the average neutrino mass is probably 0.02 eV or less (based on the sum of the mass differences in a normal hierarchy, see also here for cosmology based limits), so 40 tons rather than 600 tons is probably a better guess for the one cubic AU of neutrino mass in the quoted material above.  The back of napkin estimate of the mass of all of the neutrinos in the universe on that basis is about 1/10,000th of the mass of all of the protons and neutrons in the universe combined, and about 1/5th of the mass of all of the electrons in the universe.)
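
For what it is worth, here is a minimal sketch of that tonnage estimate, using the 300 neutrinos per cubic centimeter figure from the quoted passage and the two candidate average masses:

```python
# Rough check of the "tons of neutrinos per cubic AU" figures, using the quoted
# number density (300 per cubic centimeter) and the two candidate average masses.
AU_IN_CM = 1.496e13      # one astronomical unit in centimeters
EV_TO_KG = 1.783e-36     # mass equivalent of 1 eV in kilograms

volume_cm3 = AU_IN_CM ** 3            # a cube one AU on a side, ~3.3e39 cm^3
n_neutrinos = 300 * volume_cm3        # ~1.0e42 neutrinos in that cube

for mass_eV in (0.3, 0.02):
    mass_tonnes = n_neutrinos * mass_eV * EV_TO_KG / 1000
    print(mass_eV, round(mass_tonnes))  # ~540 metric tons at 0.3 eV, ~36 at 0.02 eV
```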

In other words, the number of baryons in the universe is about 4*10^79, and the number of neutrinos in the universe is about 1.2*10^89. We know that the ratio of baryonic antimatter to baryonic matter (and the ratio of charged antileptons to charged leptons) is on the order of 10^-11. And, we know that to considerable precision there are 2 neutrons for every 14 protons in the universe (this is a confirmed prediction of Big Bang Nucleosynthesis), and that the number of charged leptons is almost identical to the number of protons in the universe. (The number of exotic mesons and baryons other than neutrons and protons is negligible at any given time in nature since they are so short lived and generated only at high energies.)

What Is The Experimentally Measured Value of B-L?

What is the value of the conserved quantity B-L (which is conserved not only in the Standard Model even in sphaleron processes, but also in the vast majority of beyond the Standard Model theories)?

The number of protons and number of charged leptons cancel, leaving the number of neutrons (about 5*10^78) minus the number of neutrinos plus the number of antineutrinos. The combined number of neutrinos is 1.2 x 10^89, but we don't have nearly as good of an estimate of the relative number of neutrinos and antineutrinos.

The number of neutrinos in the universe outnumbers the number of neutrons in the universe by a factor of about 2.4*10^10 (i.e. about 24 billion to 1), so the conserved quantity B-L in the universe (considering only Standard Model fermions) is almost exactly equal to the number of antineutrinos in the universe minus the number of neutrinos in the universe.
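
Putting the rough counts above in one place (all inputs are the order-of-magnitude figures already quoted; nothing here is a new measurement):

```python
# Order-of-magnitude cosmic accounting using the figures quoted above.
baryons = 4e79                    # total baryon number (antibaryons are negligible)
neutrons = baryons * 2 / 16       # 2 neutrons per 14 protons -> ~5e78
protons = baryons - neutrons      # ~3.5e79
charged_leptons = protons         # overall neutrality: ~one electron per proton
neutrinos_plus_antineutrinos = 1.2e89

# B - L = (protons + neutrons) - (charged leptons + neutrinos - antineutrinos).
# Protons and charged leptons cancel, so B - L is the neutron count plus the
# antineutrino excess; with ~2.4e10 neutrinos per neutron, even a tiny
# neutrino-antineutrino asymmetry dominates the total.
print(neutrons)                                   # ~5e78
print(neutrinos_plus_antineutrinos / neutrons)    # ~2.4e10
```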

As of March 2013, the best available observational evidence suggests that antineutrinos overwhelmingly outnumber neutrinos in the universe. As the abstract of a paper published in the March 15, 2013 edition of the New Journal of Physics by Dominik J Schwarz and Maik Stuke, entitled "Does the CMB prefer a leptonic Universe?" (open access preprint here), explains:
Recent observations of the cosmic microwave background at smallest angular scales and updated abundances of primordial elements indicate an increase of the energy density and the helium-4 abundance with respect to standard big bang nucleosynthesis with three neutrino flavour. This calls for a reanalysis of the observational bounds on neutrino chemical potentials, which encode the number asymmetry between cosmic neutrinos and anti-neutrinos and thus measures the lepton asymmetry of the Universe. We compare recent data with a big bang nucleosynthesis code, assuming neutrino flavour equilibration via neutrino oscillations before the onset of big bang nucleosynthesis. We find a preference for negative neutrino chemical potentials, which would imply an excess of anti-neutrinos and thus a negative lepton number of the Universe. This lepton asymmetry could exceed the baryon asymmetry by orders of magnitude.

Specifically, they found that the neutrino-antineutrino asymmetry supported by each of the several kinds of CMB data was in the range of 38 extra-antineutrinos per 100 neutrinos to 2 extra neutrinos per 100 neutrinos, a scenario that prefers an excess of antineutrinos, but is not inconsistent with zero at the one standard deviation level.

A 2011 paper considering newly measured PMNS matrix mixing angles (especially theta13), and WMAP data had predicted a quite modest relative neutrino-antineutrino asymmetry (if any), but it doesn't take much of an asymmetry at all to make B-L positive and for the neutrino contribution to this conserved quantity to swamp the baryon number contribution.

Intuitively, an antineutrino excess over ordinary neutrinos makes sense because beta decay, a common process in nature, generates antineutrinos, and there is no other process which obviously creates an equivalent number of neutrinos.  Beta decay conserves lepton number (indeed, the antineutrino was proposed theoretically in order to conserve lepton number in beta decay, since the electron produced has L=1 and the antineutrino has L=-1, which in my mind is also an argument against Majorana neutrinos, in which the particle and antiparticle are one and the same), but beta decay does tend to unbalance the relative number of neutrinos and antineutrinos in the universe, and there is no particularly obvious reason why this would be exactly cancelled out by other processes at approximately the same time.

Thus, B-L is about 85%-90% likely to be greater than zero in the universe, at least to the extent that Standard Model particles are considered, although this is not a sure thing.

Beyond The Standard Model Conjectures

In addition to the missing baryonic matter, something must account for the 23% of mass energy ascribed to dark matter by the six parameter lambda CDM model used to explain the CMB from first principles.

It is possible that the "Cold Dark Matter" signal isn't really "Cold Dark Matter". As many prior posts at this blog have explored, evidence of "Warm Dark Matter" and "Cold Dark Matter" are almost indistinguishable in CMB observations, and Warm Dark Matter is a better fit to the other data than Cold Dark Matter.

But, given that all experimental evidence related to "dark energy" can be entirely explained with just a single constant in the equations of general relativity (the measured value of the cosmological constant) which is part of the equations for a force rather than a physical thing made up of particles, it could very well be the case that another tweak to the equations of general relativity (perhaps in a quantum gravity formulation) could also explain the CDM figure.

If there is a lot of dark matter out there, this could "balance the books" and bring the value of B-L to zero, the a priori expectation of cosmologists who assume that the Big Bang started with pure energy with B=0 and L=0.

If dark matter particles have a positive lepton number, or a negative baryon number (or both, asymmetrically, with a net value that was a positive lepton number or a negative baryon number), then B-L could equal zero.
This would also suggest, if true, that some sort of mechanism linking the magnitude of the neutrino-antineutrino asymmetry to the creation of dark matter is involved in the production of dark matter.

The example of a single kind of stable 2 keV sterile neutrino warm dark matter particle.

Consider a simple model in which dark matter consists entirely of sterile neutrinos (with virtually no sterile antineutrinos) with lepton number 1 for each particle, in a number exactly counterbalancing the B-L total from other sources. If this sterile neutrino had a mass on the order of 2 keV (the empirically preferred warm dark matter mass), one could deduce the anticipated number of sterile neutrinos that make up dark matter. This could then be used to predict the extent of the neutrino-antineutrino asymmetry in the universe and compare it against future measurements of that quantity, which is currently not known very accurately.

Given that a typical nucleon has a mass of about 1 GeV and that there are about 4*10^79 baryons in the universe, and that the mass of leptons relative to baryons in the universe is negligible, and that the ratio of dark matter to baryonic matter in the universe is about 23 to 4.6, it follows that the total mass of dark matter in the universe is about 2*10^86 keV and so you would expect that there are 10^86 warm dark matter particles in the universe.

Now, given that there are 1.2*10^89 neutrinos in the universe, these warm dark matter particles would balance the B-L of the universe to zero if the number of antineutrinos in the universe exceeded the number of neutrinos in the universe by about one part per 1,000, an asymmetry that would still cause the lepton asymmetry of the universe in favor of antimatter to exceed the baryon asymmetry of the universe in favor of matter in terms of raw numbers of particles (since about 10^86 is much greater than 4*10^79).
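
Here is a back-of-napkin version of that accounting, using the figures above (a 2 keV particle mass and the 23 to 4.6 dark matter to baryonic matter ratio); the inputs are assumptions carried over from the text:

```python
# Back-of-napkin accounting for a 2 keV sterile neutrino balancing B - L to zero.
# All inputs are the order-of-magnitude figures used in the text.
baryons = 4e79
baryonic_mass_keV = baryons * 1e6                    # ~1 GeV = 1e6 keV per nucleon -> ~4e85 keV
dark_matter_mass_keV = baryonic_mass_keV * 23 / 4.6  # ~2e86 keV

dm_particle_mass_keV = 2.0                           # empirically preferred warm dark matter mass
n_dm_particles = dark_matter_mass_keV / dm_particle_mass_keV  # ~1e86

neutrinos_plus_antineutrinos = 1.2e89
# If each dark matter particle carries L = +1, balancing B - L to zero requires an
# antineutrino excess of roughly the same size as the dark matter particle count.
required_asymmetry = n_dm_particles / neutrinos_plus_antineutrinos
print(n_dm_particles)        # ~1e86
print(required_asymmetry)    # ~8e-4, i.e. roughly one part per 1,000
```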

A heavier dark matter particle would imply a smaller neutrino-antineutrino asymmetry (one part in more than 1,000), and a lighter one a larger asymmetry (one part in less than 1,000), down to the counterfactual limit of hot dark matter in which the dark matter particle has the same mass as a neutrino.

Given the current range of possible neutrino-antineutrino asymmetry proportions, this value is still consistent with the CMB data at the one sigma level, but just barely, as that data tends to favor a greater asymmetry, which would take more (and hence lighter) dark matter particles to counterbalance.  Given the limitations posed by Neff, the lightest possible dark matter particle in a B-L balancing hypothesis would be about 10 eV (otherwise Neff would be about 4.1, rather than about 3.4), which would allow an asymmetry of about 10 extra antineutrinos per 100 neutrinos, much closer to the center of the CMB preferred range.  But, a variety of experimental observations used to place lower bounds on the mass of a warm dark matter particle would seem to rule out this possibility.

Note also that if a keV mass dark matter particle interacted with the Higgs field with a Yukawa coupling proportional to its mass, this would be well within the experimental measurement error in the calculation that the sum of the Yukawas of all the Standard Model particles, with a suitable adjustment for the Higgs boson self-interaction, equals one, which the current data favor to a high degree of precision, because the Yukawa of a keV particle is so tiny.

An alternative conjecture with a non-zero B-L immediately after the Big Bang.

Suppose, however, that any non-Standard Model particles that exist (including dark matter particles) have B=0 and L=0 (or B=1 or L=-1, for example), or that there simply aren't enough of them to balance the cosmic accounting ledgers, and that, as existing experimental and theoretical knowledge strongly indicates, B-L is conserved in all interactions no matter how high their energy.

In that case, it necessarily follows that B-L was not equal to zero immediately following the Big Bang.

My personal speculation to resolve this situation, which just a handful of experimental results could make a reality in the near future, is that the B-L immediately after the Big Bang is positive and counterbalanced by a negative B-L immediately before the Big Bang, on the theory that matter wants to move forward in time, while antimatter is simply identical stuff moving backward in time and hence would tend to end up at a location before the Big Bang in time, rather than at a location after it. If matter and antimatter were created in equal amounts, most of the matter would end up after the Big Bang, most of the antimatter would end up before the Big Bang, and the Big Bang itself could be viewed not just as a source of pure energy, but as an explosion caused by the collision of all the antimatter in an antimatter dominated universe and all of the matter in a matter dominated universe.

Footnote Regarding The Neff Mystery

CMB observations are used to infer an effective number of neutrino species (Neff).  The minimum is the 3 species that have been observed experimentally, which for technical reasons would be expected to produce an Neff of 3.046 (supported by what we know about the process and the measured magnitude of theta13 in the PMNS matrix, see, e.g., this 2011 paper).  If there were four neutrino species, Neff ought to be something on the order of 4.05.

But, in its 9 year data report, WMAP measured Neff to be 3.26 +/- 0.35 (later corrected upward to 3.84 +/- 0.4), a large downward revision from the 4.34 +/- 0.87 estimate based on the 7 year WMAP data.

Paper sixteen from the Planck experiment's data release stated that Neff was 3.30 +/- 0.27 (within one sigma of the 3 species value and about three sigma from the four species value).  But, within a few days the Planck number was revised upward to about 3.62 +/- 0.5, which slightly favors N=4 over N=3 (with a fourth neutrino species having a mass of less than 0.34 eV at the 95% confidence interval).  Paper sixteen itself says 3.62 +/- 0.25 in the conclusion, and states that the adjustment is based on resolving tensions with the Hubble constant measurement revised by the Planck experiment as well, but the paper contains a variety of estimates depending upon the assumptions invoked, none of which is truly definitive, and two of which have best fit values as low as 3.08.

Polarization data from Planck, expected to be released in early 2014, is supposed to reduce the uncertainty from about +/- 0.27 to +/- 0.02, providing powerful limits on additional species of light particles or strongly favoring a fourth neutrino species if the updated values with polarization data are 3.0-3.1 or 4.0-4.1, but it will leave us with a real conundrum if the final result turns out to be in the range of roughly 3.35-3.75.

The mystery is whether this estimate truly reflects experimental measurement uncertainty, or whether the true value, if measured with arbitrary precision, would fall between 3.046 and 4.05, implying some phenomenon that creates a fractional effective neutrino species, and if so, what that could be.

Could this be some unknown form of radiation?  Could it be a particle with a mass on the boundary between that of neutrinos and of particles that are not captured by the operationalized Neff measurement (e.g. a particle of 10 eV)?  Could it be a force that partially mimics a neutrino, perhaps transmitted by a massive, but light boson?  I don't know and have seen little literature review of the subject, but I would love to know more about the possibilities that could explain the result (both in terms of sources of experimental error and in terms of BSM or Standard Model physics).

Rereading Paper Sixteen at length, I find it much more likely that the conclusion at the end of the day will be that there are three neutrino species, rather than three fertile and one sterile species (a fourth fertile neutrino species with a mass of a fraction of 1 eV is clearly ruled out by W and Z boson decays).

Footnote Regarding The Missing Baryonic Matter

Studies of the cosmic microwave background radiation (CMB) by experiments like WMAP and the Planck satellite have estimated that the universe is made up of about 4.6% ordinary "baryonic" matter, about 23% "dark matter" and about 72.4% "dark energy".

 About 10% of the ordinary baryonic matter is in galaxies that we can observe with telescopes.

The Hubble space telescope has identified another 40% or so of it in the form of interstellar gas and other "barely" luminous ordinary matter between galaxies, and astronomers hope to find another 20% or so with the Cosmic Origins Spectrograph and other UV range searches around "filaments" of dark matter believed to stretch between galaxies. The composition of the missing baryonic matter (which I often call "dim matter" to distinguish it from the technical meaning of "dark matter" in physics) is believed to be similar to that of observed baryonic matter, but less compact.

But, about 30% of the baryonic matter in the universe is not only dim, but is so dim that we don't even have good ideas about where to look for it.

Monday, November 25, 2013

There is no Majorana Fermion Dark Matter

Anapole Dark Matter at the LHC

Yu Gao, Chiu Man Ho, Robert J. Scherrer
(Submitted on 22 Nov 2013)
The anapole moment is the only allowed electromagnetic moment for Majorana fermions. Fermionic dark matter acquiring an anapole can have a standard thermal history and be consistent with current direct detection experiments. In this paper, we calculate the collider monojet signatures of anapole dark matter and show that the current LHC results exclude anapole dark matter with mass less than 100 GeV, for an anapole coupling that leads to the correct thermal relic abundance.

From the paper (citations omitted):

The nature of the dark matter that constitutes most of the nonrelativistic density in the universe remains unresolved. While the leading candidates are usually considered to be either a massive particle interacting via the weak force (WIMP), or an axion, there has been a great deal of recent interest in the possibility that the dark matter interacts electromagnetically. Dark matter with an integer electric charge number ∼ O(1) has long been ruled out, and even millicharged dark matter is strongly disfavored. Hence, the most attention has been paid to models in which the dark matter particle has an electric or magnetic dipole moment, which we will call generically dipole dark matter (DDM). If one assumes a thermal production history for the dark matter, fixing the dipole moment coupling to provide the correct relic abundance, then the corresponding rate in direct detection experiments rules out a wide range of DDM mass.

An alternative to DDM is a particle with an anapole moment. The idea of the anapole moment was first proposed by Zel’dovich and mentioned in the context of dark matter by Pospelov and ter Veldhuis. More recently, the properties of anapole dark matter (ADM) were investigated in detail by Ho and Scherrer. (See also the model of Fitzpatrick and Zurek, in which the anapole couples to a dark photon rather than a standard-model photon). Anapole dark matter has several advantages over DDM. The anapole moment is the only allowed electromagnetic moment if the dark matter is Majorana, rather than Dirac. The annihilation is exclusively p-wave, and the anapole moment required to give the correct relic abundance produces a scattering rate in direct detection experiments that lies below the currently excluded region for all dark matter masses (although see our discussion of LUX in Sec. V). . . .

In [a recent study], it was shown that the differential scattering rate for anapole dark matter at direct detection experiments reaches a maximum around mχ ∼ 30 − 40 GeV and it lies just below the threshold for detection by XENON100. Given the significantly improved sensitivity around this regime by LUX, it may be possible that anapole dark matter with mχ ∼ 30 − 40 GeV is ruled out. However, we have just shown that the current LHC results have already excluded anapole dark matter with mχ < 100 GeV. So the new bounds from LUX are redundant for mχ < 100 GeV. For mχ > 100 GeV, the annihilation channels χχ → W+W− and χχ → tt¯ open up and the correct relic abundance is achieved with a much smaller gA. Since the differential scattering rate is proportional to gA, the analysis in [that recent study] indicates that the bound from LUX on anapole dark matter with mχ > 100 GeV is far too loose to exclude this mass range.

Link

A Majorana fermion is a particle with non-integer spin that is its own antiparticle.

Neither quarks nor charged leptons fit this description, and it is not yet determined whether Standard Model neutrinos fit this description or instead have "Dirac mass."  The combined non-detection of neutrinoless double beta decay in all credible neutrinoless double beta decay experiments to date currently limits the Majorana mass of neutrinos to no more than 100-150 meV, with the greatest likelihood estimates of the sum of the masses of all three types of neutrinos in a "normal hierarchy" currently at about 60 meV.  If neutrinoless double beta decay experiments reduce this bound by a factor of two to three, at least some neutrino mass will have to be non-Majorana, and an order of magnitude improvement in the bound, which is realistic in the next decade or so, should be capable of entirely resolving the nature of neutrino mass.

This paper basically rules out the possibility of light Majorana Fermion dark matter.

We further know that, for reasons generally applicable in cold dark matter v. warm dark matter comparisons, dark matter with a mass of 100 GeV or more is a poor fit to the experimental evidence.

So, Majorana Fermion dark matter is basically ruled out by the totality of the evidence.

Cultural footnote:  Contemporary fantasy writer L.J. Smith (best known for her "Vampire Diaries" teen fiction series, which has been adapted into a long-running television show) deserves credit for creating, in her 1996 novel "Daughters of Darkness" from the "Night World" series, a female high school student character who is thinking seriously about a career in astronomy and has research interests, including the distribution of dark matter in the universe and supernova properties studied with tools such as orbital telescopes, that are closely in line with what real astronomers are still studying in 2013 (around the time that the character in the book would probably have recently become an associate professor, possibly leading some of these research projects).

Progress Made On Twin Primes Conjecture and Goldbach's Weak Conjecture

Twin Primes

A new paper has dramatically advanced number theory by greatly refining what we know about the distribution of prime numbers.

The holy grail would be the twin prime conjecture, which holds that there are infinitely many pairs of prime numbers that are adjacent odd numbers.  This hasn't been achieved, but the new paper does prove that there are infinitely many pairs of primes that differ by no more than six hundred, and, if something called the Elliott-Halberstam conjecture is true, that there are infinitely many pairs of primes that differ by no more than twelve.

As recently as May of this year, it was considered a breakthrough when Yitang Zhang proved that there are infinitely many pairs of primes within 70 million of each other.

The Elliott-Halberstam conjecture is basically a weaker form of the generalized Riemann hypothesis, and both concern the frequency with which prime numbers are found in sequences of natural numbers.

Goldbach's Weak Conjecture

Meanwhile, Peruvian mathematician Harald Helfgott claims to have proved Goldbach's Weak Conjecture in 2013.  Thus, he claims to have proved that "Every odd number greater than 5 can be expressed as the sum of three primes. (A prime may be used more than once in the same sum.)" and, equivalently, that "Every odd number greater than 7 can be expressed as the sum of three odd primes."  (The companion computational verification for numbers under about 10^30 is found here.)

Goldbach's Strong Conjecture, that "Every even integer greater than two can be written as the sum of two primes," remains unproven.  The weak conjecture is trivially implied by the strong conjecture, because for any odd number one can subtract three, express the resulting even number as a sum of two primes, and then add back the prime three to express the odd number as a sum of three primes.  But, since the three prime solutions to Goldbach's Weak Conjecture, as proven, do not always include the number three, the converse does not follow from the Weak Conjecture.  Empirically, the strong conjecture is true for every number up to at least 4*10^18.
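
For the computationally inclined, here is a small brute-force check of both conjectures over a tiny range (purely illustrative; the published verifications go vastly further):

```python
# Brute-force check of Goldbach's strong and weak conjectures over a small range.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def strong_goldbach(n):
    """Return (p, q) with p + q == n for an even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

def weak_goldbach(n):
    """Return (p, q, r) with p + q + r == n for an odd n > 5, or None."""
    for p in range(2, n - 3):
        if is_prime(p) and (n - p) % 2 == 0 and n - p > 2:
            pair = strong_goldbach(n - p)
            if pair:
                return (p, *pair)
    return None

# Every even number from 4 to 1,000 has a two-prime partition, and every odd
# number from 7 to 999 has a three-prime partition, in this small range.
assert all(strong_goldbach(n) for n in range(4, 1001, 2))
assert all(weak_goldbach(n) for n in range(7, 1000, 2))
```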

While Goldbach's Strong Conjecture has not been proved, it was established in 1995 that every even number n greater than or equal to four can be expressed as a sum of at most six primes (a result now trivially implied by the just established proof of Goldbach's Weak Conjecture), and that "every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes)," such as 100 = 23 + 7*11.

Proof of Goldbach's Weak Conjecture makes it trivial to prove, as a corollary, that every even integer can be written as a sum of at most four primes, a significant improvement over the six prime result of 1995, but Harald Helfgott states in a letter quoted in the linked material that progress beyond four prime partitions of even numbers will be much harder.  The corollary follows because every even number greater than four can be written as the sum of the prime three and an odd number greater than one, and every odd number greater than one can be written as a sum of at most three primes (it is either itself prime or, being an odd number greater than seven, a sum of three primes by the weak conjecture), as the sketch below illustrates.
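
Continuing the sketch above, the four prime corollary for even numbers greater than eight looks like this (the helper name four_prime_partition is mine, not Helfgott's):

```python
# Continues the sketch above: writing an even n > 8 as a sum of at most four primes
# by peeling off the prime 3 and applying the weak conjecture to n - 3.  (The even
# numbers 4, 6 and 8 can be checked directly: 2+2, 3+3 and 3+5.)
def four_prime_partition(n):
    assert n > 8 and n % 2 == 0
    return (3, *weak_goldbach(n - 3))   # weak_goldbach is defined in the sketch above

print(four_prime_partition(100))  # e.g. (3, 3, 5, 89), which sums to 100
```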

Previous coverage of this conjecture at this blog can be found here.

While this 2013 result has yet to be fully vetted, it will rival Fermat's Last Theorem (proposed in 1637 and first proven in 1995 in a documented and verified form) in notoriety if it is determined to be correct.  Goldbach's conjecture was first stated in 1742.

The Riemann Hypothesis

The Riemann Hypothesis was stated in 1859 (and generalized later).

It has been known since 1997 that the generalized Riemann hypothesis implies Goldbach's weak conjecture.  And, while the converse is not true, both the progress on the Twin Primes conjecture and the progress on Goldbach's Weak Conjecture, make a proof of the generalized Riemann hypothesis, which is the biggest prize in all of number theory because it implies so many other results, seem like something that could be achieved in my lifetime.

We are on the verge of a revolution in number theory, one that fills in the missing links necessary to prove myriad statements about the hidden underlying structure of our number system and that can provide a unified foundation for further understanding in a field that has thus far largely been made up of isolated results which seem similar in character to each other but are not yet linked by any real unifying principle.

Friday, November 22, 2013

Two New Snowmass Papers

* A new Snowmass paper addresses Baryon Number Violation, something that has never been observed and is essentially forbidden in the Standard Model.

But, the theoretical motivation for finding it, in order to explain the matter-antimatter asymmetry in the universe, is great, although there is no sign of it so far.

The most obvious place to look is proton decay, for which the half-life is longer than 1.4*10^34 years for one of the two most commonly predicted decay modes (if it occurs at all), and longer than 5.9*10^33 years for the other.  Predictably, beyond the Standard Model theories contemplate proton decay rates just a little bit beyond what can currently be measured experimentally.  Other searches, such as those for neutron-antineutron oscillation, have also come up empty.

The paper urges continued searches for proton decay and new neutron-antineutron searches.

The paper doesn't really discuss it, but looking anew at the Standard Model v. SUSY gauge unification graphs that extrapolate the running of the coupling constants of the three Standard Model forces to high energies, I am struck by how powerful a tool that could be to prove or falsify a lot of BSM physics.  While SUSY merely bends the strong force and electromagnetic force coupling constants a bit at high energies (i.e., modifies their "beta functions"), it actually changes the direction of the running of the weak force coupling constant at high energies, starting around 1 TeV-10 TeV.  This seems like a credible target to measure in my lifetime, or at least in my children's lifetimes.  State of the art theoretical calculations of the Standard Model running of these coupling constants can be found here.

Experimental measurements of the strong force coupling constant apparently only reach up to the single digit GeV scale (at least as of 2007), although some experiments reach considerably further in a less definitive way, and the LHC has also expanded the envelope a bit.  This study from 2009 seems to provide one of the stronger bounds on deviations of the strong force coupling constant's running from the Standard Model expectation.  This talk shows experimental results for the running of the strong force coupling constant up to about 200 GeV.  A 2012 experiment found no deviation in the running of the strong force coupling constant up to 600 GeV.  A fair amount of active research on the running of the QCD coupling constant, however, is concerned not with the UV limit, but the IR limit.

Standard Model running of the weak force coupling constants had been confirmed up to about 100 GeV as of 2009.  Electromagnetic coupling constant running has apparently been measured up to 21 TeV as of 2006 at LEP.

* The new Snowmass paper on Charged Leptons looks at two issues: lepton flavor violating (LFV) processes and lepton flavor conserving processes.  Flavor violation in charged lepton processes is predicted by the Standard Model to exist at an undetectable 10^-56 branching ratio in muon decays if neutrinos have a Dirac nature and the PMNS matrix elements are at approximately current experimental values.  But, many beyond the Standard Model theories predict greater lepton flavor violation.  Various mid-budget physics experiments, existing and contemplated, are searching for LFV.

A second part of the paper on Lepton Flavor Conserving Processes mostly focuses on the prospects of further research regarding anomalous magnetic moments (g-2) and electric dipole moments (EDM), of the three charged leptons to rule out or confirm the existence of BSM phenomena at energy scales impossible to reach directly in near term big budget collider experiments.

Aside:  the arXiv HEP-Experiment category really conflates two entirely different kinds of papers: those that propose new experiments, and those that report the results of existing experiments.  It would be nice if the system could categorize the two kinds of papers separately.

Also, notably, both of these papers suggest medium budget physics projects that may be attractive alternatives to a next generation LHC.  Getting fundamental physics funding out of a single basket seems wise, even if the LHC may have been the right tool for today.

It is also remarkable how tight the measurements of quantities like the electron EDM, the muon g-2, the lower limit on the mean lifetime of the proton, and the limits on lepton flavor violating processes already are with current experimental work.  There is simply not a lot of wiggle room for alternatives to the Standard Model to fit themselves into, given the extreme precision of some of the measurements that have already been made, particularly those involving electroweak processes.


It's A Dog Eat Dog World Out There In HEP

Parasitic measurements with the new Fermilab g-2 experiment will improve the muon EDM limit by two orders of magnitude.
From here (a Snowmass report on "Charged Leptons").

Given that measuring some quantity in addition to the experiment's main objective doesn't necessarily deplete or consume the main goal of the experiment, I'm not sure that "parasitic measurement" is really an apt description.  But, surely this does capture some of the psychology of a high energy physics experiment manager's world.  Then again, maybe the measurements of the muon EDM will be conducted by intelligent mosquitoes or tape worms.

Thursday, November 21, 2013

Accuracy of Muon g-2 Theoretical Prediction Improved

The previous state of the art prediction for the anomalous magnetic moment of the muon (g-2) was:

(154 ± 1 ± 2) × 10^-11

where the first error was due to electroweak hadronic uncertainties, but the second, larger uncertainty was due to the unknown Higgs boson mass.

A new paper incorporating new fundamental constant measurements, most importantly, the Higgs boson mass, has essentially eliminated the uncertainty due to the Higgs boson mass uncertainty from the theoretically predicted value of muon g-2, leading to a new world's best estimate of:

(153.6 ± 1.0) × 10^-11

The new estimate reduces the combined uncertainty by approximately 55% relative to the previous one.
The final theory error of these contributions dominated by the electroweak hadronic and three-loop uncertainties . . . . It is enlarged to the conservative value ±1.0 × 10−11 . . . . The parametric uncertainty due to the input parameters MW, mt, and particularly MH is negligible. The precision of the result is by far sufficient for the next generation of aµ measurements. Clearly, the full Standard Model theory error remains dominated by the non-electroweak hadronic contributions.
In other words, as is often the case, the difficulty associated with doing QCD calculations prevents the theoretical estimate from being more precise.

Precision muon g-2 theoretical expectations are important because the current experimentally measured value of this constant is about three standard deviations from the theoretically expected value and greater precision on both the theoretical and experimental fronts can clarify if this discrepancy is real, or just a product of inaccurate theoretical predictions and experimental values.

UPDATE: I am not entirely clear how the conversion of the figures above to the muon g-2 figures below, from the Snowmass on the Mississippi Charged Leptons paper at Equations 4.3-4.5 on page 35, is accomplished.  That paper sets forth a discrepancy between theory (from a pair of 2011 papers) and experiment (from BNL E821, 2006) of 287(80)*10^-11, with the theoretical prediction being slightly lower than the experimental value at a roughly three sigma level.

Using a root of the sum of squares method to combine error estimates, the theoretical error is 49*10^-11, the experimental error is 63*10^-11, and the combined theoretical and experimental error, as shown, is 80*10^-11.
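
In other words, the errors are simply added in quadrature:

```python
# Combining independent theoretical and experimental uncertainties in quadrature,
# in units of 1e-11, using the error figures quoted above.
theory_error = 49
experiment_error = 63
combined_error = (theory_error ** 2 + experiment_error ** 2) ** 0.5
print(round(combined_error))  # ~80, matching the 287(80)*10^-11 discrepancy
```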

A closer review shows that the paper discussed in my original post is looking at only a portion of the muon g-2 theoretical estimate (the electroweak contributions to muon g-2 rather than the entire muon g-2), producing a -0.4*10^-11 adjustment in the mean theoretical value and a 1.2*10^-11 reduction in the combined theoretical error estimate, a less impressive improvement than previously suggested.  The precision improvement in the overall muon g-2 estimate from knowledge of the Higgs boson mass is actually only about 2%.

Tuesday, November 19, 2013

T2K Hints At CP Violating Phase In Neutrino Oscillations

A recent paper based on data from the T2K experiment, which observed electron neutrinos in a muon neutrino beam, touts in its abstract the unremarkable and well proven fact that the parameter theta13 in the PMNS matrix is non-zero, a result which it qualifies with certain counter-factual or speculative assumptions about the values of the other PMNS matrix parameters.

More interestingly, if their data are evaluated in light of the already fairly accurately estimated values of theta13, theta23 and the neutrino mass eigenstate differences, the T2K data show a preference for a CP violating phase of the PMNS matrix that governs neutrino oscillation of about -π/2 (equivalently -90 degrees, 270 degrees, or 3π/2).  This result is insensitive to the question of whether the neutrino masses have a normal or inverted hierarchy, and apparently also to the value of theta12.

This is roughly midway between two prior efforts to determine a best fit CP violating phase for the PMNS matrix, one of about 1.08π and the other of about 1.7π.  The average of the three estimates is about 1.43π.  It is also not far removed from the CP violating phase in the CKM matrix of about 1.23π.

All of the estimates, of course, have significant margins of uncertainty.  But, the gross consistency of the three measurements does suggest that a non-zero CP violation phase for the PMNS matrix is likely, and favors one half of the allowed 0 to 2pi range over the other.

Monday, November 18, 2013

Higgs Boson Mean Life Time Bounded By LHC

Every fundamental particle, and every composite particle made up of fundamental particles, has a characteristic mean lifetime.  The mean lifetime and the half-life are both determined by what physicists call the "total width" of a particle, which is something that can be measured at a particle accelerator like the Large Hadron Collider, and which has electron volt units, just like fundamental particle masses.

The top quark has a width of 1.5 GeV, and the Z boson has a width of 2.5 GeV.  The Standard Model predicts a Higgs boson width of about 4 MeV, which is very narrow.  Large widths imply short lifetimes, narrow widths imply comparatively long lifetimes.
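
For reference, the width translates into a mean lifetime via tau = hbar/width (and the half-life is the mean lifetime times ln 2); a rough conversion for the widths quoted above looks like this:

```python
import math

# Convert a total width to a mean lifetime via tau = hbar / Gamma.
HBAR_MEV_S = 6.582e-22  # reduced Planck constant in MeV*seconds

def mean_lifetime_seconds(width_MeV):
    return HBAR_MEV_S / width_MeV

for name, width_MeV in [("top quark", 1500.0), ("Z boson", 2500.0),
                        ("Higgs, SM prediction", 4.0), ("Higgs, LHC upper bound", 180.0)]:
    tau = mean_lifetime_seconds(width_MeV)
    print(name, tau, tau * math.log(2))  # mean lifetime and half-life, in seconds
```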

The LHC is incapable of directly measuring a particle width this tiny, but with existing data it can put an upper bound on it of about 100 MeV to 180 MeV.  That bound will improve a bit with more data, but will never be all that tight, because the LHC simply isn't a precise enough tool to directly measure a width that small; indirect, model dependent estimates, however, suggest that the reality is within about +/- 14% of the expected value (i.e. a boundary under 7 MeV).

While this isn't a particularly precise measure of the fit of the experimentally measured Higgs boson to the Standard Model expectation, every limitation on the maximum degree to which the experimentally measured particle could differ from the Standard Model expectation limits the parameter space of beyond the Standard Model theories.  If your theory predicts that the Higgs boson has a mass of under 125 GeV or over 127 GeV, or has a width of more than 180 MeV, it is ruled out by experiment, even if it matches all other experimental data to date.  Also, if your theory predicts that a Higgs boson can have a width of less than 180 MeV only if some other parameter X is less than a certain value, this bounds the parameter space of your model.

While the fit of the Higgs boson width experimentally measured to the Standard Model expectation isn't very tight, the fact that this width is so small to start with means that this boundary does have some discriminatory effect, particularly taken together with the whole of the other experimental data we have on the Higgs boson already.

Thursday, November 14, 2013

A Lost (And Found?) Galaxy?

One of the earliest galaxies indexed, NGC 56, is widely considered a "lost galaxy" because there appears to be no sign of a galaxy fitting its description where it was originally indexed.  But, galaxies shouldn't just disappear with a whimper one day.

However, another somewhat nearby galaxy, PGC 1107, that does seem to fit the original description of NGC 56, may be the missing NGC 56 galaxy, with coordinates that were not quite perfectly recorded the first time when it was indexed.

Wednesday, November 13, 2013

Qingsongite

A newly christened mineral has an atomic structure that’s similar to diamond and nearly as hard. Qingsongite was first created in the laboratory in 1957, and geologists first found natural qingsongite, which is a cubic boron nitride, in chromium-rich rocks in Tibet in 2009. The mineral is named after deceased Chinese geologist Qingsong Fang, who discovered diamond in similar Tibetan rocks in the late 1970s.
From Science News.

The name was officially assigned in August, 2013.

Tuesday, November 12, 2013

More Evidence For Tetraquarks

More evidence is mounting that four quark composite particles, aka tetraquarks, have been observed.  The Standard Model permits such structures, but they are experimentally hard to discern.

Anomalous Muon Magnetic Moment (Muon g-2) Research Reviewed

Theory predicts the anomalous magnetic moment of the muon (the second generation charged lepton, a heavy electron) and of the electron (the quantity is sometimes described as "g-2").  The experimental value for the electron matches the theoretical value at a parts per billion level.  The experimental value for the muon is in tension with the theoretical value at a 3-4 standard deviation level, with both known to the sub-part per million level, although the two are still very close in absolute terms (less than a part per million apart).

A recent paper reviews the issue and ongoing efforts to obtain greater precision in both the experimental and theoretical estimates of the value of this physical constant.

The discrepancy is simultaneously (1) one of the stronger data points pointing towards potential beyond the Standard Model physics (with the muon magnetic moment approximately 43,000 times more sensitive to GeV particle impacts on the measurement than the electron magnetic moment) and (2) a severe constraint on beyond the Standard Model physics, because the absolute difference and relative differences are so modest that any BSM effect must be very subtle.

We are five to seven years from the point at which improved theoretical calculations and experimental measurements combined will either definitively establish a beyond the Standard Model effect, or rule one out to a much higher level of precision.  My money, for what it is worth, is on the latter result.

The muon g-2 limitations on supersymmetry are particularly notable because unlike limitations from collider experiments, the muon g-2 limitations tend to cap the mass of the lightest supersymmetric particle, or at least to strongly favor lighter sparticle masses in global fits to experimental data of SUSY parameters.  As a paper earlier this year noted:
There is more than 3 sigma deviation between the experimental and theoretical results of the muon g-2. This suggests that some of the SUSY particles have a mass of order 100 GeV. We study searches for those particles at the LHC with particular attention to the muon g-2. In particular, the recent results on the searches for the non-colored SUSY particles are investigated in the parameter region where the muon g-2 is explained. The analysis is independent of details of the SUSY models.
The LHC, of course, has largely ruled out SUSY particles with masses on the order of 100 GeV.  Another fairly thoughtful reconciliation of the muon g-2 limitations with Higgs boson mass and other LHC discovery constraints can be found in a February 28, 2013 paper which in addition to offering its own light sleptons, heavy squark solution also catalogs other approaches that could work.

Regrettably, I have not located any papers examining experimental boundaries on the SUSY parameter space that also include limitations from the non-observation of proton decay up to the current lifetime limits, and from the current non-discovery thresholds for neutrinoless double beta decay.  The latter, like the muon g-2 limitations, generically tends to disfavor heavy sparticles, although one can design a SUSY model that addresses this reality.

Some studies do incorporate the lack of positive detections of GeV scale WIMPS in direct dark matter searches by XENON 100 that have been made more definitive by the recent LUX experiment results.  Barring "blind spots" in Tevatron and LHC and LEP experiments at low masses, a sub-TeV mass plain vanilla SUSY dark matter candidate is effectively excluded by current experimental results.  And, other lines of reasoning strongly disfavor dark matter candidates with masses in excess of a TeV.


Tuesday, November 5, 2013

Many Readers Of This Blog Not In The United States

About two-thirds of the readers of this blog are U.S. based, and about one-third are outside the United States, a figure that is consistent with the universal character of the subjects discussed here.

Also, here is a hello to reader "graham d" who has quoted this blog and several others that I read multiple times.  Thanks for reading.

Notable Recent Physics Research

The Muonic Hydrogen Problem

Two recent physics papers address the fact that the proton in muonic hydrogen apparently has a different radius than in ordinary hydrogen, when no known quantum electrodynamics effects should make that possible.

One of the papers argues that this flows from failing to consider general and special relativity correctly in the calculations:
M.M. Giannini, E. Santopinto, "On the proton radius problem" (1 Nov 2013)

The recent values of the proton charge radius obtained by means of muonic-hydrogen laser spectroscopy are about 4% different from the electron scattering data. It has been suggested that the proton radius is actually measured in different frames and that, starting from a non relativistic quark model calculation, the Lorentz transformation of the form factors accounts properly for the discrepancy. We shall show that the relation between the charge radii measured in different frames can be determined on very general grounds by taking into account the Lorentz transformation of the charge density. In this way, the discrepancy between muonic-hydrogen laser spectroscopy and electron scattering data can be removed.
This result is identical to that of a paper earlier this year by D. Robson, but it is reached in a manner that the authors of the new paper believe is more rigorous and theoretically correct.  As the earlier paper explains:
Associating the muonic-hydrogen data analysis for the proton charge radius of 0.84087 fm with the rest frame and associating the electron scattering with the Breit frame yields a prediction of 0.87944 fm for the proton radius in the relativistic frame. The most recent value deduced via electron scattering from the proton is 0.877(6) fm so that the frame dependence used here yields a plausible solution to the proton radius puzzle.
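For orientation, the roughly 4% gap referenced in the abstracts above can be checked directly from the two radii quoted (my own back of napkin arithmetic, not a calculation from either paper):

# Compare the muonic-hydrogen proton radius with the electron scattering value quoted above.
r_muonic = 0.84087    # fm, from muonic-hydrogen laser spectroscopy
r_scattering = 0.877  # fm, from electron scattering
difference = (r_scattering - r_muonic) / r_scattering
print(f"{difference:.1%}")  # about 4%, the size of the discrepancy described above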
Another paper, using very general results from lattice QCD, argues that high energy hadrons are more compact than low energy hadrons (contrary to the widespread intuition expressed in some QCD effective theories), and that the high energy of the hadrons in the muonic hydrogen system may provide a QCD, as opposed to a QED, explanation:
Tamar Friedmann, "No Radial Excitations in Low Energy QCD. II. The Shrinking Radius of Hadrons" (Submitted on 12 Oct 2009 (v1), last revised 4 Nov 2013 (this version, v4))

We discuss the implications of our prior results obtained in our companion paper [arXiv:0910.2229]. Inescapably, they lead to three laws governing the size of hadrons, including in particular protons and neutrons that make up the bulk of ordinary matter: a) there are no radial excitations in low-energy QCD; b) the size of a hadron is largest in its ground state; c) the hadron's size shrinks when its orbital excitation increases. The second and third laws follow from the first law. It follows that the path from confinement to asymptotic freedom is a Regge trajectory. It also follows that the top quark is a free, albeit short-lived, quark. [For Note Added regarding experimental support, including the experiments studying muonic hydrogen, and other experiments, see last page.]
Gravitational Impacts on Weak Sector Interactions

Along the same lines as Giannini and Santopinto's paper, a recent paper looks at the role of gravity in Higgs boson and weak boson scattering. The paper predicts scattering effects that can be discerned at the TeV scale and hence should be possible to observe at the LHC.

Warm Dark Matter

A recent paper compares three varieties of sterile neutrino production: non-resonant, oscillation based; non-resonant, non-oscillation based; and resonant. The paper argues that the first method of sterile neutrino production is inconsistent with Milky Way data, while not ruling out the other two. It also notes that a sterile neutrino mass of 2.5 keV, stated in the conventional way, is equivalent to a thermal mass of about 0.7 keV. The paper also engages in some critical analysis of the assumptions that go into sterile neutrino warm dark matter mass bounds from other kinds of observations.
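For a rough sense of the kind of conversion involved, the sketch below assumes the commonly cited Viel et al. (2005) scaling between a non-resonantly produced (Dodelson-Widrow) sterile neutrino mass and the equivalent thermal relic mass; the paper itself may use a somewhat different relation, so treat the coefficients as illustrative:

def sterile_mass_from_thermal(m_thermal_keV, omega_dm=0.1225):
    # Approximate Viel et al. (2005) scaling (assumed here purely for illustration):
    # m_sterile ~ 4.43 keV * (m_thermal / 1 keV)**(4/3) * (0.1225 / omega_dm)**(1/3),
    # where omega_dm = Omega_dm * h**2.
    return 4.43 * m_thermal_keV ** (4.0 / 3.0) * (0.1225 / omega_dm) ** (1.0 / 3.0)

print(round(sterile_mass_from_thermal(0.7), 1))  # roughly 2.8 keV, in the ballpark of the 2.5 keV figure above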

Snowmass Paper On New Particles

The latest Snowmass white paper on new particles is a disappointingly unimaginative and unpersuasive argument for building very expensive new colliders to detect supersymmetric particles, other new particles, or extra dimensions.

The discussion of WIMP dark matter in the white paper, for example, fails to meaningfully acknowledge the serious problems that astronomy models and observations pose for very heavy dark matter particles, despite abundant research on the subject that essentially rules out such heavy WIMPs based on observed dark matter phenomenology at galactic and smaller scales. Similarly, the discussion of compositeness fails to really grapple with the very high energy scales at which compositeness, by the measures suggested, has already been ruled out.

In areas where the paper really needed to be specific, like the low energy blind spots at the LHC, or the waning solidity of the motivation for new TeV scale physics in light of the developments of the last couple of years, it was decidedly vague.

A Low Energy Effective Weak Force Theory

While mostly interesting as a historical novelty, a recent paper explores and refines an effective weak force theory for energies well below the W boson mass scale that does not require W bosons, Z bosons, Higgs bosons, or top quarks, of the kind explored by some of the leading physicists (e.g. Fermi, Feynman, Gell-Mann, Marshak, and Sudarshan) before these high energy particles were discovered.

The theory works at these low energies, and hence could be applied as a way to simplify calculations where the elaborations necessary to handle high energy situations are not required. It is also useful as a way of showing, in a historically authentic way, how a low energy effective theory can be transformed into a more exact higher energy effective theory, and of clarifying why the paradigm shift to the Standard Model approach was necessary.
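As a back of napkin illustration of how the low energy theory connects to the full electroweak theory, the standard tree-level matching condition G_F/sqrt(2) = g^2/(8*M_W^2) lets the W boson mass hide inside the Fermi constant at low energies. The input values below are ordinary PDG-style figures chosen by me for the example:

import math

# Tree-level matching of the four-fermion (Fermi) contact interaction to the electroweak theory:
# G_F / sqrt(2) = g**2 / (8 * M_W**2).
G_F = 1.1663787e-5  # Fermi constant, in GeV^-2
M_W = 80.385        # W boson mass, in GeV (illustrative input)

g = math.sqrt(8 * M_W ** 2 * G_F / math.sqrt(2))
print(round(g, 3))  # roughly 0.65, the SU(2) weak coupling constant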

Within The Standard Model Neutrino Mixing Parameter Predictions

A recent paper looks at an approach that adds group symmetries (C2*D3) to the Standard Model lepton interactions in a manner that makes it possible to predict the three theta mixing angles of the PMNS matrix from a single parameter (which can be derived as a function of the three charged lepton masses). It also illustrates the interaction between the lepton masses and the neutrino oscillation mixing parameters.
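For context, the three theta mixing angles are the ones that appear in the standard parametrization of the PMNS matrix. The sketch below simply builds that matrix from three assumed angle values (ignoring the CP violating phase); it is not the paper's own model, and the input angles are rough illustrative figures:

import math

def pmns(theta12, theta13, theta23):
    # Standard parametrization of the PMNS matrix with the CP violating phase set to zero.
    s12, c12 = math.sin(theta12), math.cos(theta12)
    s13, c13 = math.sin(theta13), math.cos(theta13)
    s23, c23 = math.sin(theta23), math.cos(theta23)
    return [
        [c12 * c13,                     s12 * c13,                     s13],
        [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13,   s23 * c13],
        [s12 * s23 - c12 * c23 * s13,   -c12 * s23 - s12 * c23 * s13,  c23 * c13],
    ]

# Illustrative angle values only, in radians.
U = pmns(math.radians(34), math.radians(9), math.radians(45))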

Friday, November 1, 2013

Refining Measurements Of Fundamental Physical Constants

A couple of new ultra-precise experimental measurements of Standard Model physical constants released this week reduce the tensions between the predictions that the Standard Model makes using these constants and the experimentally measured values of the predicted phenomena.

Tau Mean Lifetime

A new measurement of the mean lifetime of the tau charged lepton from Belle tweaks it down a bit from (290.6 +/- 1.0) * 10^-15 seconds (based mostly on the LEP experiment) to (290.17 +/- 0.53 stat +/- 0.33 sys) * 10^-15 seconds (the combined one sigma margin of error is about 0.62).  This downward adjustment helps resolve what had been a 2.6 sigma tension between the LEP data on tau decays of a particular type and the Standard Model predictions for those tau decays.
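The combined margin of error is just the statistical and systematic uncertainties added in quadrature, and the shift from the old value is comfortably within the combined uncertainties (my own arithmetic, in units of 10^-15 seconds):

import math

# Combine the Belle statistical and systematic uncertainties in quadrature,
# then check how far the new central value sits from the older world value.
old_value, old_err = 290.6, 1.0            # pre-Belle lifetime
new_value, stat, sys = 290.17, 0.53, 0.33  # Belle result

new_err = math.hypot(stat, sys)
print(round(new_err, 2))                   # about 0.62, as quoted above

shift = abs(old_value - new_value) / math.hypot(old_err, new_err)
print(round(shift, 1))                     # about 0.4 sigma, i.e. fully consistent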

The study also measured (for the first time that this measurement has been attempted) a 0.7% asymmetry between the tau and anti-tau lifetimes, which are identical in the Standard Model.  This result is within 0.3 sigma of zero, with the uncertainty being almost entirely statistical rather than systematic.  Thus, it confirms the Standard Model expectation.

W Boson Mass

A final report from the now closed Tevatron's D0 experiment measured the W boson mass to a precision matching that of the current world average measurement (not bad for an "obsolete" collider).  The result of the final measurement was:

MW = 80.367 ± 0.013 (stat) ± 0.022 (syst) GeV = 80.367 ± 0.026 GeV. When combined with our earlier measurement on 1 fb^-1 of data, we obtain MW = 80.375 ± 0.023 GeV

The new result is within about half of a standard deviation of the old world average measurement.  The new result closes about half of the difference between the independently measured W boson mass and the 80.362 GeV value preferred by a global fit of the W boson, top quark, and Higgs boson masses (125.7 +/- 0.4 GeV), given the relationship of these three masses to each other in the Standard Model.  The global fit expectation is less than one standard deviation from the new measurement, again confirming the Standard Model.
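A quick check of the quoted numbers (using only the combined uncertainty of the new D0 measurement, since the uncertainty of the global fit value is not quoted here):

import math

# Combined uncertainty of the new D0 measurement, and its distance from the global fit value.
stat, sys = 0.013, 0.022
print(round(math.hypot(stat, sys), 3))     # about 0.026 GeV, as quoted

combined_value, combined_err = 80.375, 0.023
global_fit_value = 80.362
print(round(abs(combined_value - global_fit_value) / combined_err, 1))  # about 0.6 sigma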

Footnote On Direct Dark Matter Detection

A Snowmass working group released a white paper on direct WIMP dark matter detection which is obsolete on day one because it omits the LUX results, which are more powerful than all of the previous experimental measurements to date.