
2022-04-09

April Arxivery

Articles studied this month - some of which might go to Slashdot.
The Origin and Evolution of Multiple Star Systems
Martian meteorites reflectance and implications for rover missions
Chemical Habitability: Supply and Retention of Life’s Essential Elements During Planet Formation
Why tyrannosaurid forelimbs were so short: An integrative hypothesis
Weak versus Strong Chaos (in planetary orbits, particularly outer Solar system ones)
Freeze-thaw cycles enable a prebiotically plausible and continuous pathway from nucleotide activation to nonenzymatic RNA copying

April science readings.

I guess I should look for Acta Primae Aprila, but I can't say I'm really bothered. See this post for a trip down April Fool Road.

The Origin and Evolution of Multiple Star Systems

https://arxiv.org/pdf/2203.10066.pdf
FTFA : Most stars are born in multiple stellar systems.
Is this true? I knew it was getting close, but was it over the 50% line? I guess it's easier to disrupt a multiple star system into a single and a (lower-)multiple, or two singles, than it is to combine two singles into a double, or a single and a multiple into a (higher-)multiple, so the multiple:single ratio should decrease over time. Even in a relatively densely packed molecular cloud, the combining direction is still going to be harder.

Corollary, and this is my own thought, hence date - when disrupting a multiple star system to form a multiple and a single, wouldn't most of the Oort cloud and Kuiper belt stay with the larger star, potentially giving a way to distinguish singles which were born as singles from singles which were ejected from multiples? So, finding a Sun-like star with a much smaller Oort cloud (and/ or Kuiper Belt) than the Sun's might be a flag that this newly-characterised system is an ejectee. Do we have observational techniques which could survey to this depth? (2022-04-09 13:22)

OK, back from that rabbit hole.

RoTFA : In this review, we compile the results of observational and theoretical studies of stellar multiplicity. We summarize the population statistics spanning system evolution from the protostellar phase through the main-sequence phase and evaluate the influence of the local environment.

[See, I did notice the impact of nearby (proto-)stars above!]

We describe current models for the origin of stellar multiplicity and review the landscape of numerical simulations and assess their consistency with observations. We review the properties of disks and discuss the impact of multiplicity on planet formation and system architectures. Finally, we summarize open questions and discuss the technical requirements for future observational and theoretical progress.

Those last sections sound like a useful review.

Introduction - Observational data is catching up with theoretical models.

Observed Stellar Multiplicity - Multiplicity changes a lot during the formation and early evolution of systems. A significant increase in multiplicity happens at star (system?) masses greater than 0.5 M⊙ (see figure 4). That bears a lot on the question I pose above. Does that represent a mass at which collapsing molecular clouds become more likely to have complex turbulence that can fragment the cloud? The decline in multiplicity continues down towards, and possibly into, the brown dwarf regime, but detectability biases become challenging. (Table 1, in Astronomy notebook, 13 columns, does not render well.) "Moving toward earlier spectral types, we find that the trend of an increasing MF with primary mass continues", and that trend continues to O-type stars - the biggest and brightest - where 90% are multiples, and most are triples or higher (the companion frequency (CF) is 2.1 ± 0.3 for O-B stars).
Definitions: MF = multiplicity fraction, the fraction of systems with at least one companion; CF = companion frequency, the mean number of companions per primary (which can exceed 1).
My question above remains in play when the authors say "The multiplicity fraction increases monotonically with primary mass from MF ≈ 20% for [brown dwarves] and late-M dwarfs to ≈ 50% for solar-type stars to MF > 90% for OB stars. The triple fraction increases even more dramatically from THF ≈ 2% for late-M dwarfs to 14% for FGK dwarfs to nearly 70% for O-type stars." because more than half of all stars are in the M class. There are also moderate (but significant) trends with metallicity, with the separation of stars within a system, and with the mass ratio between primary and subsidiary stars in a system. For M stars, there doesn't seem to be much reduction in multiplicity in the first few hundred million years of life (which bears on my corollary above).
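
For my own notes, a minimal sketch of how those statistics are computed, using the standard definitions; the sample counts below are invented for illustration, not taken from the paper:

```python
# Minimal sketch of the standard multiplicity statistics (MF, THF, CF).
# The counts are made-up illustrative numbers, not the paper's data.

def multiplicity_stats(singles, binaries, triples, quadruples):
    systems = singles + binaries + triples + quadruples
    mf = (binaries + triples + quadruples) / systems           # multiplicity fraction
    thf = (triples + quadruples) / systems                     # triple/high-order fraction
    cf = (binaries + 2 * triples + 3 * quadruples) / systems   # mean companions per primary
    return mf, thf, cf

# e.g. a hypothetical sample of 100 solar-type primaries
mf, thf, cf = multiplicity_stats(singles=54, binaries=33, triples=8, quadruples=5)
print(f"MF = {mf:.0%}, THF = {thf:.0%}, CF = {cf:.2f}")  # MF = 46%, THF = 13%, CF = 0.64
```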

Models For Multiple Star Formation - Well, I didn't know that Fred Hoyle had his finger in the pie of star formation processes. "Hoyle F., 1953 ApJ, 118, 513. On the Fragmentation of Gas Clouds Into Galaxies and Stars" Hoyle studied the radiation of heat from contracting (self-gravitating) gas clouds and showed that they were unstable to separation into smaller, denser, faster-contracting clumps. Which leads naturally to a hierarchical system of collapse. More recent treatments add in the effects of turbulence and magnetic fields, for a fuller but inherently stochastic model (due to the turbulence, if nothing else).

Current ideas for multiple formation can be divided into three main categories: theories in which multiples form via fragmentation of a core or filament (§3.1), via fragmentation of a massive accretion disk (§3.2) or through dynamical interactions (§3.3). This third mode can also rearrange the hierarchy and multiplicity of systems formed via the prior fragmentation channels.[...] Due to their wide initial separations, multiples formed from turbulent fragmentation accrete gas with different net angular momentum. This frequently produces misaligned stellar spins, accretion disks and protostellar outflows.

There's a lot more, but it boils down to: we've got several processes which could lead to the pattern of star system formation that we do see, and distinguishing between them is unlikely to be clear statistically. It's not a recipe for "this model works, and no other"; more likely "this, that and the other model all work, and any could have formed this system". That's for systems which have settled into Main Sequence tedium for a few tens to hundreds of millions of years. High-mass, bright stars don't live long enough for the evidence of their birth environment to have dissipated, but that doesn't necessarily reflect the conditions under which low-mass stars form.

Return to Article List

Martian meteorites reflectance and implications for rover missions

https://arxiv.org/pdf/2203.10051.pdf

Obviously you need to do the legwork before trying to interpret what you see with your rover/ helicopter/ Musk-0-nought. So you look at what samples of Mars you do have using instruments as similar as possible to the ones being flown. If that means putting a Martian meteorite into a sock and beating Musk round the head with it, well, it's a dirty job, but someone's got to do it. Please remember that Musk wants to "die on Mars, but not on impact". Maybe hit him about the torso and limbs instead?

The problem (FTFA) is

the current spectral database available for these [Martian meteorite] samples does not represent their diversity and consists primarily of spectra acquired on finely crushed samples, albeit grain size is known to greatly affect spectral features.

Yeah, legwork definitely needed.

The spectrometers in question are VNIR (Visible and Near-Infrared) instruments. So while some classroom/ Mark-1 Eyeball experience is helpful, you do need to characterise the behaviour of real minerals and mineral mixtures (rocks) to have confidence in their interpretation.

the physical state of the sample, and especially its grain size, have been shown to significantly influence both the absolute reflectance and the shape of the absorption bands

Geologists know this from their thousands of hours in the laboratory and field. (No lab-work? Not a geologist.) Non-geologists may need to be reminded of it. In theory you can do it from a book. But in theory there is no difference between theory and practice, whereas in practice there is. (Ref : "Oldies but Goodies", Chthulhu & Ugg, Gobekli Tepe Publications, 10000 BCE)

In addition, in situ measurements by the SuperCam instrument [on the Mars 2020 rover, currently on Mars as "Perseverance"] will be achieved remotely and without any sample grinding

I wasn't paying close attention, but I had noticed repeated references to "pew-pew-pew"-ing various rock surfaces before drilling a hole, then later collecting the rock dust, or "pew-pew-pew"-ing the dust pile after drilling. I'd been taking that as getting spot measurements versus bulk-rock ones, but it would also address the grain size issue above. The rock dust pile would represent the mixed (to some degree) minerals of the rock interior, while the surface readings would be measuring the oxidised, UV- and cosmic-ray-blasted surface minerals - which are not necessarily the same. Then there is the fact that soft minerals (interstitial carbonates, weathering clays ("phyllosilicates")) would expose different surface areas in the powder than in the bulk rock. It may be hard to differentiate these effects from the data collected in each drilling procedure, but if you didn't collect the data, then it would be impossible to differentiate them at all.

11 Martian meteorites had previously been IR-spectrogrammed (mostly as powders, e.g. cutting debris); this study adds 16 more meteorites in IR, and 11 of these were also studied with hyperspectral imaging.

Petrology

Most of the samples were mafic to ultramafic rocks, often showing cumulate textures. One polymict breccia (NWA 7034, explosive or sedimentary?) and several basalts and phenocrystic lavas were also in the suite examined. The shergottites in general yield three ages: cosmic-ray exposure ages (indicating the duration of interplanetary flight before a relatively recent arrival on Earth) of 0.5 to 20 Ma; mineral separates show Rb-Sr, Sm-Nd, Lu-Hf and U-Pb ages of 175-475 Ma (possibly a metamorphism age, or a re-melting?); and whole-rock Pb-Pb and Rb-Sr ages of about 4.1 Ga (original intrusion, or a protolith later re-melted?). That's a nice example that will go into the isotope geochemistry text books - if it's not there already.

Contamination of the meteorite samples by weathering on Earth's surface (or possibly during interplanetary transport) is quite common. It is more often reported in finds from hot deserts than from cold deserts - Antarctica - but that 40-60 °C storage temperature difference is sufficient to account for that. Carbonate minerals in Martian meteorites are mostly in vein-fills, and typically interpreted as terrestrial weathering products. (The presence of carbonates in distinct "rosette" concentric discs in specimen ALH84001 is one of the arguments for this feature having been formed on Mars.)

The zoning in the clinopyroxene (cpx) in some of the shergottites looks really nice. But I doubt I'd ever afford a big enough sample for a thin section. Is there a sample in the BGS files? (No, but a reasonable number elsewhere, and some of them are pretty!)

Return to Article List

Chemical Habitability: Supply and Retention of Life’s Essential Elements During Planet Formation

https://arxiv.org/pdf/2203.10056.pdf

This paper is part of the proceedings of conference "Protostars and Planets VII".

Which elements? Well, they're sticking with "life as we know it" (Star Trek quote?), so CHONPS - Carbon, Hydrogen, Oxygen, Nitrogen, Phosphorus, Sulphur. Some terrestrial lifeforms require small amounts of other elements (humans need a few ppb of molybdenum, IIRC), but for the structures of living things, you need that lot, in about that order. It's a choice, which could be challenged, but in itself it's a perfectly reasonable choice.

The fact that they're most of the commonest elements in the universe is also probably part of the reason that life uses them. My data bucket (taken from http://www.kayelaby.npl.co.uk/chemistry/3_1/3_1_3.html, but you'll probably have to drill through the site to get to the table) tells me the commonest elements in the Sun are (in descending order) H, He, O, C, N, Ne, Mg, Si, Fe, S, Ar, Al, Ca, Na, Ni, Cr, Cl, and eventually P. Removing the noble gases, chlorine, silicon and the metallic elements (which are mostly combined with oxygen), the list is H, O, C, N, S, and P. And that, in itself, is ample justification for choosing to concentrate on these elements.

From a physiological/ metabolism point of view, they're the elements used to make carbohydrates, amino acids, and energy-labile phosphates - the major structural components of biochemistry. (Sulphur is most important in cross-linking proteins into meshes: as chains are folded they bring sulphur-containing amino acids into proximity, where they can cross-link.)

Why is this a question? Well, it's reasonably easy to model the accumulation of high-melting-point materials (metal oxides, silicate minerals) into planets, and to model the accumulation of hydrogen and helium onto a body in a condensing stellar nebula. But it's not so easy to understand how moderately volatile (boiling point a few hundred Kelvin) compounds (carbon oxides, ammonia, sulphur hydrides and oxides) accumulated onto a planetary core, since the accumulation of the planet probably heated materials to lava-like temperatures - around a thousand degrees (Kelvin or Celsius). (Phosphate compounds are the least troublesome in this respect - magmas can directly crystallise phosphate minerals such as apatite from the melt, as anyone who has studied mineralogy under the polarizing microscope will remember: it is one of the first minerals you are taught to diagnose (rounded crystals, moderate relief, low birefringence, optically negative, uniaxial if you can get an indicatrix).) The substantially different volatile inventories of the Solar "rocky" planets, and the range of properties inferred for exoplanets, show that the volatile content of planets is something that varies, a lot, even within one system, so it might be a distinguishing factor between which planets develop life and which don't.

The availability of these volatile materials influences whether a planet is considered in the "habitable zone", as the presence of "greenhouse gases" in a planet's atmosphere can considerably alter the surface properties. If the Earth didn't have a lot of water available on its surface - and therefore in the atmosphere in amounts approaching a percent - its surface temperature would be sitting near the freezing point of water. The FUD around anthropogenic global warming is about whether people want to live on a planet with a mean surface temperature around 15 degrees (Celsius), or 16, 17, even 20 degrees. Geologically, that experiment has been done, recently: algal blooms in the Arctic Ocean, Scandinavian crocodile-infested swamps, unbearable tropics.
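
A rough check on that claim, using the standard zero-greenhouse equilibrium-temperature estimate (my sketch; the solar constant and Bond albedo are assumed round numbers, not values from the paper):

```python
# Zero-greenhouse (grey-body) equilibrium temperature of an Earth-like planet.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0         # solar constant at 1 au, W m^-2 (assumed round number)
ALBEDO = 0.30       # Earth's Bond albedo (approximate)

t_eq = (S0 * (1.0 - ALBEDO) / (4.0 * SIGMA)) ** 0.25
print(f"Equilibrium temperature with no greenhouse gases: {t_eq:.0f} K")  # ~255 K
print("Observed mean surface temperature:                ~288 K")
# The ~33 K difference is the greenhouse effect, most of it from water vapour.
```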

The authors define "chemical habitability" in terms of

  • 1) a supply of carbon, hydrogen, nitrogen, oxygen, phosphorus, sulfur ("CHONPS"), and other bio-essential elements that are accessible to prebiotic chemistry, and
  • 2) the capability of maintaining the availability of the CHONPS elements over relevant geologic timescales.

As observational techniques improve, and the planetary examples and systems which need to be explained increase and diversify, the range under which models of chemical habitability will be tested also increases.

Section 2. Tracing The Earth's Ingredients Back Through Time - The main constraint on this is understanding when life originated on Earth. From the (body) fossil record, we know it originated before the oldest fossils at 3.5 Gyr ago, but more controversial arguments (for example, Mojzsis's Akilia graphite-in-apatite at 3.9 Gyr and Bell's graphite in a Jack Hills zircon of 4.1 Gyr) interpret negative carbon-13 isotope ratios as evidence of a functioning biosphere considerably before then. This is only a few hundred Myr after the Moon-forming impact, and well into the period of the Late Heavy Bombardment (if that happened, and isn't an artefact of the distribution of rocks around the Imbrium crater). If (again, a real "if") the Moon-forming impact stripped Earth's primordial atmosphere off (alternative: formed a synestia), then the final accretion of the Earth included the volatiles for the atmosphere/ hydrosphere in its (relatively) undifferentiated (i.e. unheated) components.

With the exception of P, and the special case of O, the CHONPS elements are fairly uniformly distributed across the Earth's various zones (core, lower & upper mantle, lithosphere, crust, hydrosphere, atmosphere, biosphere) and can affect those systems at parts-per-million levels (e.g. CO2 in the atmosphere). Generally, they don't form distinct minerals (though #TeamIce keep having that argument in the annual #MineralCup on @Twitter, and winning it) and mostly occur as fluids. There is a lot of cycling of CHONPS between the Earth's lithosphere, hydrosphere, atmosphere and biosphere, much of it mediated (on Earth) by plate tectonic activity. On other worlds, that may be different - cases of "Waterworld" and "Stagnant Lid" worlds (example: Venus) are considered. (Venus, being a neighbour, is obviously a prime example for examining the variations in CHONPS behaviour and other aspects of planetary science.)

Via its role in reducing fault friction and enabling (or enhancing) plate tectonics, the presence of water may be very important in altering fluxes of the other CHONPS components. Further, the role of plate-tectonics-mediated heat flux in the mantle may be an important factor in altering the convection in the liquid core which (most likely) produces the Earth's magnetic field, which is itself part of the system reducing water loss from the upper atmosphere to the solar wind. That may (or may not) be a general condition for habitability.

Carbon - Through its role as a greenhouse gas, even at ppm levels, the cycling of C as CO and/or CO2 between atmosphere and crust/ lithosphere/ hydrosphere has whole-planet consequences - but remember that H2O is also a significant greenhouse gas at surface temperatures above about 250 K. These effects probably also constrain the levels of N (as NH3, NOx), though P and S are thought to be less affected. At high (core) pressures, C is a siderophile element, and most of Earth's C inventory is thought to reside in the core (about 4 times the amount in the atmosphere, hydrosphere and biosphere). If Earth's crustal and mantle inventories of C were put into the atmosphere as CO2, the resultant atmosphere would be broadly comparable to that of Venus.

Hydrogen is more evenly distributed between Earth's surface and deep interior, with an uncertain amount in the core, several "oceans" worth in the mantle (as mineral defects, which have a considerable effect on mineral viscosity), and of course one ocean on the surface. Hydrogen that makes it to the upper atmosphere is prone to loss via photodissociation to form monatomic hydrogen. However, the current structure of the atmosphere is such that there is a "cold trap" in the stratosphere which effectively prevents water from getting above the UV-absorbing tri-oxygen (ozone) layer.

Oxygen is the commonest element in the Earth - but almost all of it is in oxide and silicate minerals, not di-oxygen gas (let alone tri-oxygen - ozone - and the hypothetical higher allotropes of interest to explosives chemists). The oxidation state of the whole Earth is dominated by the metallic iron in the core, but that is effectively isolated from the mantle by its density contrast. The oxidation state of the mantle is managed more by the QFM - quartz-fayalite-magnetite - reaction system than by interaction with the metallic iron of the core. A similar interaction barrier exists between the mantle (and crust) and the atmosphere, with the atmosphere holding a thermodynamically delicate store of 21% di-oxygen which is maintained by photosynthesis. This state has only existed since (approximately) the second to third Gyr of the Earth's existence, when the development of life, and then of photosynthesis, led to surface reservoirs of reducing power (e.g. Fe2+ minerals) being oxidised before di-oxygen started to accumulate in the atmosphere in the "Great Oxidation Event" (GOE). There is some debate over whether the GOE is due mainly to biological events, or to changes in mantle properties or circulation leading to less reducing power at the surface. Free di-oxygen is considered problematic for the development of biological (or proto-biological) chemistry, which mostly reacts to it by falling apart. Only small parts of biochemistry can tolerate the presence of di-oxygen, and there are considerable biochemical complications to keep it in its place (in mitochondria and chloroplasts, for eukaryotes; prokaryotes are more variable). In a more general situation than just Earth, it is not at all clear if a "GOE" is necessary. Compared to other planets (where there is evidence), the Earth's mantle seems to be relatively oxidised (to have a low free-iron content) - whether this is a cause or an effect of habitability is unclear.

Nitrogen distribution between the atmosphere and the body of the Earth is strongly influenced by the oxygen fugacity of the atmosphere. As such, it is dependent on the details of the early atmosphere in contact with a post-formation magma ocean, or synestia if that was the path taken. The pressure of the Earth's atmosphere at different stages in its history is an open question.

Phosphorus - Tyrrell (1999, Nature, v400, p525, "The relative influences of nitrogen and phosphorus on oceanic primary production") considers P to be the ultimate limiting nutrient on Earth. Since there is no significant gaseous reservoir of phosphorus, it is primarily available through aqueous solution replenished by rock weathering. Currently, phosphorus is mostly released from granite/ granitoid rocks, into which it is significantly segregated by igneous differentiation. However, on the early Earth there probably wasn't as much granite/ granitoid rock at the surface as today, and smaller amounts of phosphorus would have been released from basaltic/ mafic rocks. Very early on, schreibersite ((Fe,Ni)3P), a mineral found in some meteorites, may also have been a source. (The discovery of 4-4.3 Gyr zircons from the Jack Hills (Australia) and Acasta (Canada) gneisses challenges ideas of a paucity of granite/ granitoid rocks in the very early Earth.)

Sulphur is degassed from the mantle as SO2 and H2S, but rapidly converts to sulphate and rains out into the hydrosphere. Sulphur cycles back into the mantle via subduction, primarily as sulphide minerals (which are often processed biologically). In surface ultramafic rocks - and presumably in the mantle too - separate liquid sulphide phases (a "matte") can separate out and segregate core-wards due to their density, taking chalcophilic elements with them. The other CHONPS elements don't seem to have an effective sink to the core operating to this day. This matte process probably also operated during the Earth's assembly and after the Moon-forming impact. After the GOE, cycling between sulphide and sulphate minerals happened near the surface, which can lead to some very high degrees of isotopic differentiation.

During the accretion of the Earth (or any other planet under consideration) there were several phases, with differing processes and rates of CHONPS loss or segregation. The earliest event that can be clearly dated in the Solar system is the formation of the Calcium-Aluminium Inclusions (CAIs) now found in chondritic meteorites, which dates quite precisely to the memorable 4.567 Gyr ago. Rapidly - within about 4-5 Myr - the gaseous component of the nebula dispersed, by which time Jupiter and Saturn had of necessity formed, and the cores of the terrestrial planets had probably reached considerable size - maybe half their current masses - capable of holding their own primordial atmospheres. Addition of material to the terrestrial planets continued, possibly stimulated by rearrangements of the gas giants and ice giants as they accreted the last of the gaseous nebula, with the hierarchical accretion of nearly similar-sized bodies including the "Moon-forming impact" (roughly dated to 50-150 Myr after the formation of the CAIs).

It remains somewhat unclear how much accretion was driven by interaction with Jupiter (including bringing outer Solar system material in to the terrestrial planets), and from where the Earth's volatile inventory came. The stable isotope ratios of meteorites (and their parent bodies) suggest formation in different parts of the Solar nebula, at different times, but the story seems complex and mixed up - possibly by the Jovian "Grand Tack", if that happened, possibly by the (relatively) long-distance movement of "protoplanets" before their mutual collisions to form "planets". It is still unclear if the Solar nebular disc remained effectively segregated by isotopic composition. One of the recent puzzles is how highly siderophile elements on Earth, which should have gone into the core during core formation and the Moon-forming impact, remain accessible at the Earth's surface, where they shouldn't be. Hence ideas of a late "veneer" on the Earth's surface.

Is this a situation that numerical scientists would describe as "ill-conditioned" - where the models are looking for the crossing points of relationships with very low closing angles (if you plotted them graphically), so that unavoidable noise leads to numerical results which are outside the range of the possible?
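
By way of illustration of what I mean by "ill-conditioned" (a toy of mine, nothing to do with the paper's actual models): two nearly parallel constraints whose crossing point jumps wildly under a tiny perturbation.

```python
import numpy as np

# Two nearly parallel constraints (a very low "closing angle").
A = np.array([[1.000, 1.000],
              [1.000, 1.001]])
b = np.array([2.000, 2.001])

print("condition number:", np.linalg.cond(A))    # large => ill-conditioned
print("solution:        ", np.linalg.solve(A, b))        # ~[1, 1]

# Perturb the data by ~0.1% ("unavoidable noise"):
b_noisy = b + np.array([0.0, 0.002])
print("perturbed answer:", np.linalg.solve(A, b_noisy))  # ~[-1, 3]: wildly different
```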

There's a huge amount more in this. Which I don't have time to go into in the necessary depth. We have a good understanding of the processes involved, but which processes are important isn't clear, and may be different in different places. And our data sources are not of the best - unavoidable because of distance and the contrast ratio between stars and planets. Frankly, "more data!" - which means visiting more planetary systems as soon as possible.

Return to Article List

Why tyrannosaurid forelimbs were so short: An integrative hypothesis

Acta Palaeontologica Polonica

A change from astronomy. APP has been a trail-blazer for Open Access publication in the field of palaeontology for a long time (well over a decade?). The journal has a tendency towards publishing material from Eastern European researchers, but it is global in its spread. (Follow the link for the abstract; the PDF is linked from that page.)

Padian is a well-known palaeontologist, particularly for work on the Cretaceous dinosaurs of Canada and their close relatives in Mongolia.

So what's his big idea? People have puzzled over why the forelimbs of Tyrannosaurus species (rex, and others) are so relatively small ever since the species was recognised in the early years of the 20th century, and the question has been a staple of popular (vertebrate) palaeontology ever since. What is less well known outside the field is that similar developments happened in multiple other lineages of large dinosaurian carnivores through both the Jurassic and the Cretaceous (tyrannosaurids, albertosaurids, abelisaurids, carcharodontosaurids). Something repeatedly propelled large dinosaurian carnivores towards relative reduction in the size of their arms.

Padian's proposal is that this is a passive but recurrent process. The genera which develop the small-arms feature had previously adapted their hunting and feeding strategies to using only the mouth (and its scary set of teeth) to catch, kill, and consume their prey. This leaves the arms with, literally, nothing to do, so energy conservation tends to reduce their size. That much isn't particularly new, but Padian adds that many of these species show evidence of group or cooperative hunting (aligned trackways, mass mortality assemblages), and proposes that arms dangling uselessly where multiple carnivores are tearing apart a prey item are an invitation to inadvertent biting, bleeding and infection, which amplifies the passive drift towards reduced arm size.

Padian adds a lot more detail, but that's the basic argument. Interesting idea, and Padian discusses the ways it could be falsified, which is a good sign. Though without a time machine, we'll never really know.

Return to Article List

Weak versus Strong Chaos

PNAS QnAs with Renu Malhotra

Does any reader (not me) need introduction to Renu Malhotra? A celestial dynamicist, she studies the variation and interrelationship of the orbits of bodies in the Solar system. That PNAS chose to do a Q'n'A with her affirms her status in the field.

What struck me here was the distinction between "weak" and "strong" chaos. Many people falsely think that "chaotic" means "anything can happen"; what it actually means is more like "future events can't be accurately predicted far into the future".

Also, as people did more accurate computer simulations of the orbits of the planets over the age of the Solar System, they learned that Pluto’s orbit is chaotic on the long time scale. Interestingly, it’s chaotic in a mathematical sense only; it doesn’t actually translate into any dramatic consequences for Pluto’s orbit. Pluto still remains more or less very close to its current orbit, the resonance with Neptune is preserved, and nothing terrible happens to Pluto over billions of years. So, there was this understanding that Pluto’s orbit is chaotic, but only weakly so. [...] We now understand that with the orbital arrangement of Jupiter, Saturn, and Uranus, there’s only a small range of their effective quadrupole moment over which Pluto-like orbits are stable for billions of years. If that quadrupole moment were not in that narrow range, then Pluto would be very strongly chaotic. So, Pluto is much closer to strong chaos than had been previously understood.

What is the distinction between "weak" chaos and "strong" chaos?

Google Is My Friend. But I use DDG, so here are the search results: a number of papers discussing systems that move between weak and strong chaos, which aren't very likely to spell out the meaning of the phrase - that's something you'd be expected to know if you're in the field. A lot of this work is done in electronics-type labs - relatively easy to do experimentally, I guess.
"We start by reminding the reader of fundamental chaos quantities (https://webspace.maths.qmul.ac.uk/r.klages/papers/klages_wchaos.pdf)" ... [Contents] "2.3 A generalized hierarchy of chaos" Sounds useful. It does help, by bringing in a thing called the Lyapunov exponent λ but there's a lot more background. "The Lyapunov time mirrors the limits of the predictability of the system. By convention, it is defined as the time for the distance between nearby trajectories of the system to increase by a factor of e." (https://en.wikipedia.org/wiki/Lyapunov_time) Which isn't terribly helpful, since the time varies over many orders of magnitude. There are hints that people use the time-behaviour of Lyupanov exponents - if they're increasing, the chaos is strong (gets worse with time ; if they're decreasing, the chaos is weak. But otherwise, I don't find anything resembling a simple measure of chaotic-ness.

Aha! Malhotra and colleagues seem to be using the Lyapunov exponent as a discriminant. It's in the paper that prompted the Q'n'A - Doh! "Sussman & Wisdom (6) propagated the orbital motion of the outer four giant planets and Pluto for 845 million years, and found that its nearby trajectories diverge exponentially with an e-folding time of only about 20 million years". What numbers they attach to "strong" or "weak" chaos though ... Or maybe not? "The detection of positive Lyapunov exponents notwithstanding, Pluto's and the planets' perihelion and aphelion distances and their latitudinal variations remain well bounded on multi-gigayear timescales, indicating that the chaos detected in the above investigations is very weak indeed." This still isn't well defined. They use the J2 parameter as a probe for examining the influence of the giant (and inner) planets on the outer bodies, and at values of J2 somewhat less than what we actually have, the evolution of eccentricity * cos(argument of perihelion) changes from circulating through all values to having values restricted to one quadrant (librating over only a partial arc). That may be what she means by "strong" versus "weak" chaos, but I wish it were clearer.

I think that's enough on this question. If I ever meet her, I'll ask.

Return to Article List

Freeze-thaw cycles enable a prebiotically plausible and continuous pathway from nucleotide activation to nonenzymatic RNA copying

https://www.pnas.org/doi/10.1073/pnas.2116429119

Another sideline from the usual arXivery.

The Faint Young Sun Paradox (or Problem) arises because the steady accumulation of helium in the core of the Sun leads (via the increasing mean particle mass) to higher fusion pressures and temperatures, and so to higher power output. Power increases at something like 5% per gigayear, or about a 22% increase from the origin of the Solar system to today. That implies that the surface of the Earth would have been frozen regularly and repeatedly during the Hadean and Archean. This is very sympathetic to the "Snowball Earth" hypothesis, but also suggests that Darwin's "warm little pond" may actually have had ice crusting it, and sometimes covering it, at frequent intervals during the O(s)OL period.
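
A quick sketch of the arithmetic behind those figures - a simple linear model using the rough "5% per gigayear" number above, not the standard stellar-evolution fits:

```python
# Toy linear model of solar luminosity growth, purely to reproduce the
# back-of-envelope figures quoted above.
RATE_PER_GYR = 0.05   # luminosity growth, as a fraction of today's output per Gyr

for gyr_ago in (4.5, 4.0, 3.0, 2.0, 1.0, 0.0):
    fraction_of_today = 1.0 - RATE_PER_GYR * gyr_ago
    print(f"{gyr_ago:.1f} Gyr ago: L ~ {fraction_of_today:.0%} of the present Sun")
# 4.5 Gyr ago this gives ~78% of today's luminosity, i.e. the ~22% rise
# (measured as a fraction of the present output) mentioned above.
```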

That's not necessarily a bad thing. Growing ice crystals in a pond of dilute organic soup are a good way of getting round the "concentration problem" - a growing ice crystal would have had a far higher concentration of "soup" on its growing surface than in the bulk liquid. So there are good justifications for looking at the influence of ice crystals, even if they're not necessarily the perfect solution. Of course, ice-crusted pools in one place are not incompatible with the products flowing downhill to ice-free pools, nor with the pools having hydrothermal heating X days in Y, and ice the other Y-X days.

Return to Article List

Ooops, end of the month, and time to start the next batch.

End of Document
Back to Article List.

2022-03-04

March ArXivery

Articles studied this month - some of which might go to Slashdot.
Plate Tectonics
Origin of Ceres
Three-dimensional imaging of convective cells in the photosphere of Betelgeuse
AMATEUR OBSERVERS WITNESS THE RETURN OF VENUS’ CLOUD DISCONTINUITY
Alternative ideas in cosmology. A long post where I'm trying to get my head around these ideas.
Unprecedented change in the position of four radio sources
Do Atoms Age
Hitting a New Low: The Unique 28 h Cessation of Accretion in the TESS Light Curve of YY Dra (DO Dra)
The Asteroid-Comet Continuum
Jupiter’s inhomogeneous envelope
A Star-sized Impact-produced Dust Clump in the Terrestrial Zone of the HD 166191 System
Dielectric properties and stratigraphy of regolith in the lunar South Pole-Aitken basin: Observations from the Lunar Penetrating Radar
Assessment of Microbial Habitability Across Solar System Targets
New satellites of figure-eight orbit computed with high precision
Can a particle moves zigzag in time?
On the fate of quantum black holes
Yet another star in the Albireo system
Terrestrial volcanic eruptions and their association with solar activity
End of document

March science readings.

Orbital properties and implications for the initiation of plate tectonics and planetary habitability

https://arxiv.org/pdf/2202.10719.pdf

What's this one about? Well, the author (single author - always an amber flag; Rajagopal Anand) thinks that the initiation and continuation of plate tectonics on Earth was important in several ways for the origin and development of life. Which in itself isn't a bad idea. Earth is the only planet with (ahemm) "Earth-like" plate tectonics in the Solar system, though there are some hints that something similar operated on Mars way back in the Hesperian or Noachian (3-4 Gyr ago). And people are still trying to figure out what is (or was, or does occasionally) going on on Venus. Our author notes that the rotation period of the Earth on its axis and the time for the Earth to travel 1° around its orbit are about the same, and shows that this isn't true for any other "rocky" planet in the Solar System. Which is true, and could conceivably be important. Or it could be numerology - what is special about 1°, compared to, for example, 1/32 radian? It's an interesting idea, but whether it's an important idea ... I'm a lot less than convinced.
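
The coincidence is easy to check (my arithmetic, not the paper's; the day and year lengths are standard values):

```python
# Time for Earth to move 1 degree along its orbit vs. one rotation period.
sidereal_year_days = 365.256   # days per orbit
sidereal_day_days = 0.9973     # one rotation, in mean solar days

days_per_degree = sidereal_year_days / 360.0
print(f"Time to travel 1 deg of orbit : {days_per_degree:.4f} days")   # ~1.015 days
print(f"Rotation period               : {sidereal_day_days:.4f} days")
print(f"Ratio                         : {days_per_degree / sidereal_day_days:.3f}")  # ~1.02
```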

Dynamical origin of the Dwarf Planet Ceres

https://arxiv.org/pdf/2202.09238.pdf

Ceres is an odd one. It comprises about 1/3 of the mass of the Asteroid Belt, but its composition is considerably different - more "volatile"-rich - than the average asteroid. As witnessed by the cryovolcanoes revealed by the Dawn mission.

This paper suggests that Ceres could have formed in the Kuiper Belt, then migrated inwards during the period of Jupiter-Saturn interaction that rearranged the rest of the Solar system's planets (and incidentally moved Pluto and the Plutinos into 3:2 resonance orbits with Neptune, fed small bodies into Jupiter's Trojan regions, and generally created interplanetary havoc). With a variety of not-particularly-demanding assumptions, they get Solar system evolution models that indeed move about one Ceres-sized "minor body" into the main belt of the Asteroids.

Interesting idea. It should be testable with (particularly) light-element stable isotopes (C-13:C-12, N-14:N-15, etc.). Which really calls for a sample-return mission, at the moment.

Three-dimensional imaging of convective cells in the photosphere of Betelgeuse

https://arxiv.org/pdf/2202.12011.pdf

Betelgeuse should be well-known to most people - only those very far south can't see it at some point in the year - and it's one of the ten or so brightest stars in the sky. Exactly which position on the "bright star" list may vary, because it is moderately variable. The first records of its brightness (by "manual" optical comparison with other nearby stars) were in the 1830s, when John Herschel (son of William Herschel, the discoverer of Uranus - leave the jokes out, please!) set up an observatory in South Africa and recorded about a half-magnitude of variability. Variation in the brightness of Betelgeuse is, very literally, old news.

During the first pandemic of the 21st century, there was a degree of excitement when Betelgeuse went through a profound dimming - then returned to normal brightness. (The last time I looked, using the BetelBot, it was at 107% of "normal" brightness. Meh.) Everyone and their dog was howling that we were going to get a supernova. Except, of course, those who knew that Betelgeuse has this long-standing variability.

Yes, very likely, Betelgeuse is going to go "bang" at some point. But it's in a phase of its evolution where it is losing a significant amount of matter to its stellar wind, and how much mass it will eventually lose remains unpredictable. It may remain big enough to go out with a bang, or it may lose enough mass to go out with a whimper. Nobody really knows. (Also, it could go next year, or it could go in a few tens of thousands of years; again, nobody knows.) Finally, it's far enough away (about 200 parsecs, 650-odd light years) that we're unlikely to get anything worse than a light show when it does go bang (or whimper).

On the other hand, being a big star at an interesting phase of its development, and relatively nearby, it's also a prime site for trying all sorts of innovative imaging techniques. Since the 1910s, we've occasionally got a view of its surface and its changes from year to year, and day to day. This paper is about that, taking a look into the sub-surface of the star. ("Surface" needs a little elaboration - in this case it means the "surface of last scattering", where photons emitted by a hot gas molecule travel to our eyes without scattering off any other molecules. That gives us both the surface we can see, and its height compared to some datum.) The analysis shows around 8 rising (and falling) columns of plasma in the star's surface. That's (to me) surprisingly few - the Sun has tens of thousands of convective cells, if not millions. But for a bigger body, that doesn't astonish me.

Fun science, pretty pictures.

AMATEUR OBSERVERS WITNESS THE RETURN OF VENUS’ CLOUD DISCONTINUITY

https://arxiv.org/pdf/2202.12601.pdf

Another area of science where amateurs contribute (the monitoring of variable stars is also utterly dependent on amateur contributions) is in seeking transient phenomena on Solar system bodies. Spotting comet impacts on Jupiter is becoming a frequent event; meteorites hitting the Moon, too. Martian dust storms have been recognised since the 1880s or so (with people straining their eyes to the point of seeing canals). Venus too exhibits subtle variations, as recorded here with contributions from the Hellenic Amateur Astronomy Association, Astronomical Society of Australia, Union of Italian Amateur Astronomers, Kagarlyk Kiev Region Ukraine, AstroCampania Association Italy, British Astronomical Association, Asociación Astronómica del Campo de Gibraltar Spain, Private Astronomical Observatory Messina Italy, Agrupación Astronómica de la Safor Spain, Portuguese Association of Amateur Astronomers, Astronomy Society of NSW Australia and Astroqueyras, France. They did the shivering in the rain, hoping for a break in the clouds; the professionals just processed and collated their data.

That Venus has clouds is in every elementary school textbook. That the clouds exhibit variations in the UV (ultraviolet) and IR (infrared) is less well known. But with appropriate filtering and non-eyeball observation, you can see variation in the clouds, and if you coordinate with other observatories you can see the clouds move. We do the same on Earth, but here we can see the clouds with the naked eye.

The structure of Venus's clouds is more complex than Earth's (then again, Venus has about 90 times as much atmosphere ... so that's less than surprising). This study is about middle-level clouds about 50-56 km above the surface, which can be seen in the near IR (less than 1 µm - accessible to amateurs with the right imager). Following a serendipitous observation of a gap in the clouds in March 2020, a call went out to amateur astronomers, which uncovered suitable imagery from October 2019 until the feature disappeared at the end of April 2020. The data are available on a JAXA (Japanese Space Agency) website, if you particularly fancy downloading 25-plus GB of imagery.

During the observation campaign the CD (Cloud Discontinuity) rotated around the planet several times, taking about 5 days each time (Venus itself is a very slow rotator - and retrograde too! - taking 243 days to rotate on its axis). The detailed structure and wind speeds (3 to 4 times hurricane force, in Beaufort terms) varied with time and with each apparition, with dark stripes and turbulent patches appearing from time to time. Evidently, when (if?) the Musk Terraforming Company is floated on the stock market, they're going to have to include weather forecasting on their to-do list.

Fun, but not immediately useful. The surface topography may be visible in its effects on the clouds, particularly the deep layer, but to what effect?

Alternative ideas in cosmology

https://arxiv.org/pdf/2202.12897.pdf

A week rarely goes past without someone bleating that their "Electric Universe", "Plasma Universe", or (less insanely) MOdified Newtonian Dynamics (MOND - a way of getting observed galaxy rotation curves without Dark Matter) favourite alternative model of the universe doesn't get considered by "big science". There's then often a somewhat paranoid diatribe against the university system for not paying them (the author, normally) to sit on their arses and complain about the injustice of their lives on blogs. OK, I'm being a bit harsh - but there is a lot of that going on.

Of course, people who actually read the journals know that alternative ideas are proposed all the time. But they generally don't attract many people to work on that particular idea, for whatever reason. It is a marketplace of ideas, and success requires knowing what is likely to generate new ideas, and attract funding and students. That is where the "I have a new Theory of Everything" crowd fall down - they can't attract people to follow and develop their ideas, and test them, and (for most hypotheses) find them wanting and discard them.

What do they consider an "alternative" theory? Well, they start from Λ-CDM - the so-called "Standard Model" of cosmology, which is "dominated by gravity (Friedmann equations derived from general relativity) with a finite lifetime, large scale homogeneity, expansion and a hot initial state, together with other elements necessary to avoid certain inconsistencies with observations (inflation, non-baryonic dark matter, dark energy, etc.)" (their phrasing). They first consider a variety of "minor variations" on Λ-CDM with "different considerations on CP violation, inflation, number of neutrino species, quark-hadron phase transition, baryonic or non-baryonic dark-matter, dark energy, nucleosynthesis scenarios, large-scale structure formation scenarios; or major variations like a inhomogeneous universe, Cold Big Bang, varying physical constants or gravity law, zero-active mass, Milne, and cyclical models." (again, their words - that I only understand some of them is a justification of sorts for the whole work). Then they move out to more extreme models, such as static universes, "tired light" (I recognise that one from the wingnut branch of Young Earth Creationism) and other very peculiar cosmologies. Whether they get to counting elephants on turtles ... [SPOILER ALERT : elephants = turtles = 0]. For those on the further extents of theoretical work, the critical thing is to try to get your theory as well developed as possible, and try to get it published into the literature. You might manage that by posting a blog, but if you can foster a relationship with a published cosmologist, you should be able to get the credentials to get your work published, if it is of publishable quality. In my (limited) experience of such theorising, that bit about "forming a relationship" is the difficult bit.

A lot of models are briefly described - and that in itself is a useful contribution - in several groups. After each grouping, I'll summarise issues.

Minor variations with respect to the standard model
Antimatter and CP violation
Inflation
Dark energy variations
Scenarios without non-baryonic cold dark matter with standard gravity
Nucleosynthesis variations
Large–scale structure formation variations

- The antimatter problem is that we have more matter than antimatter - at least locally. But all our experiments produce equal amounts of matter and antimatter - the CP problem. One or other of these, or both, must break down somewhere (at least at some point in time), so ... yes, there are plenty of reasons to try to find a way between this Scylla and Charybdis.
Theorising about the lifetime of the proton falls into this group too. We don't see proton decay, and people have looked, for decades.

- Inflation is a solution to the isotropy problem - a.k.a. the horizon problem. Which it does solve - but it makes people uncomfortable. A thousand variations on inflationary theory are mentioned - leading to the charge that it is too flexible a theory. On the other hand, that also means this is a very active area of research.

- "Dark Energy" - the Λ in Λ-CDM - is another thing that makes people uncomfortable, and has generated a number of alternatives. A lot of these challenge the assumption that we are "Copernican" observers, with an isotropic universe about us. If there is a net movement of galaxies extending across our line of site that might indeed produce the semblence of dark energy's acceleration. A change in supernova properties with time (or metallicity) might also produce the same effect. Again, this is a significant area of research. Whether it's an alternative theory or testing the assumptions of the Λ-CDM model is a two-whisky problem. Time-variable cosmological parameters also fall into this category, at least sometimes.

- "Scenarios without non-baryonic cold dark matter with standard gravity" is a bit of a mouthful covering a range of models for the galaxy-mangling effects of dark matter, but using some variant of regular matter. Unfortunately most such forms of matter, being electromagnetically active, also have consequences on the Cosmic Microwave Background Radiation (CMBR) which ... is a major problem from an observational point of view.

- Nucleosynthesis variations don't normally mess with the half-life of the neutron (that's too well known), but the number of neutrino species is a parameter that people try to play with, as well as attempts at examining non-uniform distributions of protons versus neutrons.

- Most universe-history models have small structures (galaxies) forming first and coalescing to form larger structures (clusters, walls, voids). But some theories postulate the formation of large structures (of dark or normal matter) first, then smaller and smaller structures. This produces a fair number of modelling experiments, and the intergalactic distribution of dark matter is amenable to experimental probing via gravitational lensing.


Major variations with respect to the standard model

Inhomogeneous universe
Cold Big Bang
Variations or oscillations of physical constants
Modifications of the gravity law
Other Friedmann-Lemáıtre-Robertson-Walker (FLRW) solutions
Cyclical universes

- Inhomogeneous universes fall into two groups:

Those where the mass within a region of radius R does not scale as R^3 but with a lower exponent (normally - but I'd have to check that - less than 3), implying a fractal distribution of matter (see the note after this list).
Those where the rate of time varies with the mass distribution - normally running slower in high-mass regions and faster in low-mass regions - differences which then accentuate. At some point that's going to have drastic effects, à la the "Big Rip".
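
For concreteness (my notation, not lifted from the review), the scaling the first group assumes is:

```latex
M(<R) \propto R^{D}, \qquad
\begin{cases}
D = 3 & \text{homogeneous distribution of matter}\\
D < 3 & \text{fractal distribution of matter}
\end{cases}
```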

- The "Cold Big Bang" is referred to the 1960s. Very little more is said about it - it has major observational comflicts such as the CMBR, the Hubble relation ... Wikipedia adds a little, asserting that the initial state of the universes was a very high entropy state, not the Λ-CDM model's low entropy state. Regardless, the heating necessary to go from a cold big bang, wherever it came from, would leave signs on the acoustic patterning of the CMBR, which are just not seen.

- Several models look at varying one or other of the fundamental constants. The speed of light, Planck's constant, the fine structure constant and various others have been proposed. Variations in the force of gravity form their own group.

- A whole class of theories fiddles with gravity - either varying the gravitational constant (with time, or with place) or varying the gravitational law. The best known of these is the MOND family, which typically have F ≠ ma for certain ranges of a. Some successes have been claimed for this group of theories, but are disputed because of difficulties with the data sets involved. A lot of other "different gravity" theories go under obfuscating names: Einstein-aether theory, bimetric or general higher-order theories, Hořava-Lifshitz gravity, Galileons, Ghost Condensates, plus Kaluza-Klein, Randall-Sundrum, the Dvali-Gabadadze-Porrati model (4D gravity on a brane in 5D Minkowski space), and Weyl conformal gravity. Some of these are very busy fields, judging from their appearances on arXiv.

- The "FLRW metric" is "an exact solution of Einstein's field equations of general relativity; it describes a homogeneous, isotropic, expanding (or otherwise, contracting) universe that is path-connected, but not necessarily simply connected" [WIKI], which adds up to "General Relativity- plus". Other relations that cover the same distance- speed- momentum- time territory are available, and are used. One discussed reange of alternatives is the "Zero-active mass condition", which notes that the present day deceleration of the Hubble flow (because of gravity) ia approximately matched by the acceleration due to dark energy, so the net effect is not far from if the universe had no effective mass, and the Hubble flow were not decelerating (due to gravity) or accelerating (due to dark energy). This is achieved by (if I've got this right) scaling distance differently at different times. Which is an interesting interpretation, indeed.

- A different way of replacing FLRW is "Milne Cosmology", which does various things with the FLRW metric, but specifically includes a requirement that density = 0. Which is clearly wrong (I'm here, you're there, and there is something between us, including a web server). So I don't know why people take it any further.

- The "cyclical universes" include the ones popular in SF, and to give a sort of answer to "what happened before the Big Bang?" questions. Typically they have a smooth "Big Bang" (so they don't need an inflation phase). Obviously they also need to have no dark energy (or insufficient dark energy to prevent re-collapse of a universe.

Others; the outer fringes

Quasi-Steady State Cosmology
Plasma Cosmology
Universe as a Hypersphere

- The Steady State cosmology has an intimate relationship with the Big Bang, thank you Fred Hoyle. (He invented the term "Big Bang", as a slur.) They postulated the creation ex nihilo of about 10^-24 baryons/cc/s, doing away with the early hot phase, probably any beginning (at least any obvious beginning) and any obvious end state, but retaining isotropy, expansion and homogeneity (in both space and time). But the CMBR pretty much shot the Steady State in the back of the head. Hoyle (and others) tried resurrecting the theory in the 1990s with an oscillating expansion rate (never going into contraction, but sometimes expanding faster, sometimes less fast). Lots of problems remain with the Steady State theory though, and it doesn't get much activity now.

- Plasma cosmology suffered from the attentions of Internet kooks, and still does, but its origins are in the works of several Nobel laureates, so at least the maths is well done. Its central postulate is that the electromagnetic forces of a universe-dominating plasma produce fields at least as strong as the gravitational field. With a factor of 10^20 or more between the strengths of the electromagnetic and gravitational forces for a dilute plasma, fairly low levels of net charge difference would lead to significant forces. Bolted-on amendments give answers of a sort for the CMBR.

- Plasma cosmology "posits that our universe is a hypersphere of a higher-dimensionality geometrical entity; that is, a set of points at a constant distance from its centre, constituting a manifold with one dimension less than that of the ambient space." Whatever the fuck that means. Variations on this class use different numbers of dimensions, and ascribe the to diffferent properties of the universe, for example : "More recent is the hypothesis of the existence of five combined spacetime dimensions. By making some peculiar assignments between coordinates and physical distances and time, a hyperspherical symmetry is made apparent by assigning the hypersphere radius to proper time and distances on the hypersphere to usual 3-dimensional distances in a Euclidean universe" Which sounds quite weird, but almost sounds sort of credible. Again. they're not dead theories, but they're not popular.


Static Models and non-cosmological redshifts

Cosmological models motivated by tired-light redshifts - 7 varieties! Waste of time.
Other non-cosmological redshifts and other static models - A mere 2 varieties!
Plausibility of static models - Just to offend those who don't wish to be judged!

Static models deny the existence of a Hubble flow. That's problematic, by the Feynman Test - if it disagrees with experiment, it's wrong.

"Tired light" models - these first came to my attention, dragged out like dead puppets by Young Earth Creationists (YECs) desperate to be important in the universe. Do they have any redeeming characteristics at all.
A tired-light scenario assumes that the photon loses energy owing to some proposed photon–matter process, photon–photon interactions, or some dissipative property of the photon.
Zwicky went down this path from about 1929 to the mid-50s. Unfortunately, a photon that interacts in flight to lose energy, and does so in a distance-of-flight-related way, would also exchange some momentum (a vector) in each interaction. Therefore, the line of flight would be modified, and distant objects would be blurred. Which they're not. QED (Feynman). The YECs were wasting their time.
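
The back-of-envelope version of that blurring argument (my sketch, not Zwicky's and not from the review): for a photon E = pc, so any in-flight energy loss implies a momentum change, and repeated interactions with random transverse components add up as a random walk in direction:

```latex
E = pc \;\Rightarrow\; \Delta p \simeq \frac{\Delta E}{c},
\qquad
\theta_{\mathrm{rms}} \sim \sqrt{N}\,\frac{\Delta p_{\perp}}{p}
```

with N the number of interactions along the flight path; unless every interaction is implausibly exactly forward, distant galaxies would be smeared out - and they aren't.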

- Curvature Cosmology - I don't get this at all. It's a single-author idea.

- Plasma-Redshift - weird. Another single-person counter-factual.

- Subquantum Kinetics ... "works as if intergalactic space were on average endowed with a negative gravitational mass density." Another single-author counterfactual.

- Scale Expanding Cosmos - "A universe is proposed in which not only space expands (therefore, it is not properly a static = non-expanding model) but time also expands: the relationship between space and time could remain constant during the cosmological expansion and all cosmological locations in time and space could be equivalent, if the metrics of both space and time expand." Sounds slightly less kooky than some of the others, but "why?" One significant outcome might be that "The scale expansion could be eternal, which would eliminate the creation event", and for many people that is a desirable outcome.

- Dichotomous cosmology - "Contrary to general relativity, here there is a dichotomy between light and matter dynamics: the luminous portion of the Universe is expanding at a constant rate as in the de Sitter cosmology in a flat Universe, whereas the matter component is static" Well that's wild. One author. No follow on.

- Wave System Cosmology - Another deeply weird one. "The universe is a pure system of waves with mass density and tension parameters proportional to the local intensity of the modes of the waves." One author, no follow-up.

- Eternal Universe - "Based on the existence of a negative pressure in a cosmic fluid derived from general relativity (not very different from the role that the cosmological constant has acquired nowadays)", but these postulated properties (the universe is static, infinite, without an instant of creation, and without expansion) are in contradiction to Olbers' Paradox, despite the assertion that "Olbers’ paradox is solved by means of absorption in clouds of dust." One author, no follow-on.

Other non-cosmological redshifts

"Self Creation Cosmology (SCC)" gets a one paragraph description - "In SCC, energy is conserved but energy–momentum is not. Particle masses increase with gravitational potential energy and, as a consequence, cosmological redshift is caused by a secular, exponential increase of particle masses. The universe is static and eternal in its Jordan frame and linearly expanding in its Einstein frame." Whatever that means. But there is a sort-of testable outcome : "Furthermore, as the scalar field adapts the cosmological equations, these require the universe to have an overall density of only one third of the critical density while remaining spatially flat." That's interesting because the optical density (matter content) is lower than that, but only by a factor of 5 or so. By cosmology's standards, that a pretty good agreement.

"Cellular Cosmology" gets a paragraph too, but I can't make out much from that.

Several grounds for considering the plausibility of a model are discussed. Trying to get around a Doppler interpretation of the Hubble relation generates a lot of twisting and turning, cosmological constants, varying constants of physics, and gravities of finite range. Fiddling with the geometry of space is popular too, producing unusual geometries. Olbers' Paradox is also mentioned as a constraint, requiring more special pleading, particularly for infinite-duration universes.

Plausibility of static models

While static models are definitely not very popular, they're not impossible. Einstein's attempt at a static universe (with a cosmological constant to work against collapse) is a case in point (and the cosmological constant is a case in point of the difficulty of getting such models to match the observed universe). Finite-range gravity and variable fundamental constants are other routes to stabilizing such universes. When contemplating universes of infinite duration, Olbers' Paradox requires some special treatment to avoid contradiction.

Themes in Cosmology

Moving on from the listing area. This is a listing of places where there are major variations in cosmology, and where there is experimental/ observational evidence to the contrary, the proponent really needs to have an explanation of "why".

Themes - Classification according to characteristics.
Gravity, forces
Expansion
Age of the universe
Redshift
Dark elements
CMBR origin
Light Element Nucleosynthesis
Homogeneity at large-scale
Galaxy formation

Gravity and forces - Most models use a GR-like gravity, but some go further. A lot further. How the Cellular Cosmology generates a gravity is really obscure. Well, obscure compared to the geodesic property of space-time generating its gravity.

Expansion is accepted by most models. Those that don't, have to fiddle with space, clocks (or light speed - same thing) and the like to explain the Hubble relation.

Age of the universe As mentioned above, infinite universes have a problem with Olbers' Paradox. It may be an old problem, but it's still a problem. Most universes accept a finite age, and the cyclical universe dodges the question.

Redshift - most models accept a recession origin for the redshift, but a few take the bull by the horns and have a non-recession redshift, such as the various "tired light" scenarios, Plasma Cosmology, and however Cellular Cosmology generates its redshift at cell boundaries.

Dark elements are sometimes explicit (dark matter, dark energy in λ-CDM, the C-field creating matter in Hoyle et al.'s), and sometimes more cryptic (Plasma Cosmology requires strong inter-galactic magnetic fields - which aren't detected). Dark here, dark there, dark everywhere!

CMBR origin The signal from z ≅ 1100 is simple in λ-CDM. Static models, on the other hand, either have to come up with some novel mechanism such as thermalisation of radiation by dust (which process hasn't been observed), or they just ignore the problem.

Light Element Nucleosynthesis Again, this happens naturally with a hot big bang, but cold big bang, QSSC etc don't have a simple mechanism and rely on stellar nucleosynthesis - which is in arguable contradiction to the composition of gas clouds in distant galaxies, which are still about 25% He. "In the version of hypersphere universe by Netchitailo, it is produced in the dark cores of Macro-objects." which sounds like pure guff to me.

Homogeneity at large-scale That's given in the design of most scenarios, but not all. Some variable-gravity scenarios do have inhomogeneity, but that is in contradiction to observation. Cue the Feynman criterion.

Galaxy formation is controlled by gravity in most cosmologies, but not Plasma Cosmology, which calls for otherwise unseen electromagnetic forces. Feynman's criterion, again.

General problems of the alternative models

There is a general problem of "development", with λ-CDM simply having had more work done in its arena than others. In the spirit of generosity and exploration, theories which haven't had a particular observation explored generally aren't discounted for that lack of development, but it is a problem for them. Bluntly, such theories need to attract more workers to develop them better, or they are never going to improve. But there isn't a mechanism to propel people towards unpopular fields of study. There is an erroneous link in the paper (to http://www.astro.ucla.edu/∼wright/errors.htm ; a typographical tilde has replaced the ASCII tilde). The correct link is titled "Errors in some popular attacks on the Big Bang", and includes the damning disclaimer that "For each theory debunked here there are a hundred more that are totally crazy, but I don't have the time or the inclination to debunk all of them." That is one of the more generous treatments of some of these theories.

Another quote : "A solution to Olbers’ paradox by dust absorption in a universe of no defined extent, for instance, is not clear. One may wonder, if energy does not disappear, whether the absorbing element (dust) should be heated and re-emit, and, if the energy disappears how that can be consistent with known physical laws. This problem has no easy solution." And the problem does need a solution! Literally, in some of these universes the night sky should be the brightness and intensity of the surface of a star.
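For reference, the textbook version of that divergence (my own note, not something quoted from the review): with a uniform number density n of sources of luminosity L in an infinite, transparent, eternal universe, every spherical shell contributes the same flux, so the total diverges (in practice, saturates at star-surface brightness once sources overlap on the sky):

```latex
F \;=\; \int_{0}^{\infty} \frac{L}{4\pi r^{2}}\; n \, 4\pi r^{2}\, \mathrm{d}r
  \;=\; n L \int_{0}^{\infty} \mathrm{d}r \;\longrightarrow\; \infty
```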

Time is another issue. A universe without a definite start - or even, no start - can have time to generate heavy elements without the need for a very early population of stars ("Population III") postulated by other universes, but not observed in ours. Most of these universes come into strong internal contradictions, or postulate "dark" elements to fix these problems.

The CMBR is another problem for many cosmologies. "Nonetheless, all proposals to explain a CMBR produced in the intergalactic medium — even assuming that a perfect black body shape can be produced — have the problem that the integration along the line of sight gives a superposition of many layers of black body radiations, each with a different redshift, giving in total something different from a black body". People seem to think that black body radiation is in some way fractal, and adding multiple BBs would produce another BB with a different characteristic temperature. Not so. (I suspect people get confused by the way that they have the same shape when plotted on log-log axes.)
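As a sanity check on that point (my own toy calculation, nothing from the review), you can sum Planck spectra at a few different observed temperatures and try to fit the total with a single Planck curve; the shape mismatch never goes away:

```python
# Toy demonstration (mine): a sum of black bodies at different observed
# temperatures is not itself a black body, so line-of-sight superposition of
# thermalised "layers" cannot reproduce the CMBR's single-temperature spectrum.
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI units

def planck_nu(nu, T):
    """Black-body spectral radiance B_nu(T)."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

nu = np.logspace(9.5, 12.5, 4000)          # ~3 GHz to ~3 THz
layer_temps = [2.0, 4.0, 8.0]              # toy "layers" seen at different temperatures
total = sum(planck_nu(nu, T) for T in layer_temps)

def mismatch(T):
    """Worst-case fractional residual against one Planck curve (free normalisation)."""
    model = planck_nu(nu, T)
    model *= total.max() / model.max()
    return np.max(np.abs(total - model)) / total.max()

trial_T = np.linspace(2.0, 12.0, 2001)
best_T = min(trial_T, key=mismatch)
print(f"best single-temperature fit: {best_T:.2f} K, "
      f"worst-case mismatch {mismatch(best_T):.1%}")
# The mismatch stays at the several-percent level or worse - far larger than
# the tiny spectral distortions allowed by the measured CMBR.
```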

Although the QSSC is one of the best developed of the alternative cosmologies, it has a snake at its bosom: "The very idea of continuous creation of matter also necessitates some very exotic physics, and has no empirical support" Feynman would have something very rude to say about this. With bongo drum accompaniment.

My conclusions

For what is mostly a listing, my interpretations are hampered by the fact that I don't often understand either the maths or physics at quite a high level. However the exercise has been useful. I now recognise the context of a large number of other phrases I see in use on the cosmology channels of arXiv. Contrary to the claims of the wingnut fringe, there is a lot of work being done in directions away from the direct line of λ-CDM. A lot of this is on the minor variants (different FLRW interpretations, for example), but that is what you would expect for a main theory with a lot of experimental evidence.

For example, previously I wouldn't have recognised "arXiv 2203.00413 (gr-qc) Flat LRW Universe in logarithmic symmetric teleparallel gravity with observational constraints" as being a study of an alternative cosmology, but it is. It's not very alternative, but I've got a better framework to position it in now.

Unprecedented change in the position of four radio sources

https://arxiv.org/pdf/2202.13119.pdf

If we are looking at a star, the angular size of the object is (almost always ; see Betelgeuse above) below the resolution of our images. So if you see a motion, you're safe in concluding that the object is moving across the field of view. This is the procedure which Bessel used in 1838 to determine a distance to 61 Cygni. It is also used to characterise galactic stars in general and to determine which may be closest. The fastest-moving star on the plane of the sky is Barnard's Star (GJ699), moving a little over 10 arcsec/year. When you get to extragalactic objects, though, you wouldn't be expecting to see measurable movement. So, seeing apparent movement of some 55 mas (0.055 arcsec, to compare to Barnard's Star) in a radio source, 3C48, was unexpected to say the least. Further, examination of the data showed the RA didn't change significantly, but the Dec did, arguing against gross errors of measurement.

The data was inspected in more detail, with re-mapping of the source regions. Then it became clear that the shape of the extended source had changed, the source including a seemingly (because of relativistic beaming) superluminal jet from the compact body postulated in the core of the source.

Three other seemingly rapidly-moving radio sources were examined (CTA 21, 1144+352, 1328+254) and also showed similar shape changes in the structure of their jet lobes.

An interesting bit of science as it is done. "Oh, that's odd" ; [looks more closely] ; "Oh, that's what's going on."

Do Atoms Age

https://arxiv.org/pdf/2203.00195.pdf

Now that is a very simple, and therefore interesting, question.

Abstract : Time evolution generically entangles a quantum state with environmental degrees of freedom. The resulting increase in entropy changes the properties of that quantum system leading to “aging”. It is interesting to ask if this familiar property also applies to simple, single particle quantum systems such as the decay of a radioactive particle. We propose a test of such aging in an ion clock setup where we probe for temporal changes to the energies of the electronic state of an ion containing a radioactive nucleus. Such effects are absent in standard quantum mechanics and this test is thus a potent null test for violations of quantum mechanics. As a proof of principle, we show that these effects exist in causal non-linear modifications of quantum mechanics.

Hmmm, so basically, it doesn't seem as if they do know of such a measure, but if we did find one it wouldn't be with conventional QM. A backwards way of looking at it, but also a reason to continue to look for such situations. They're looking for changes in single particles, so that implies "single particles which don't interact with the outside world", which would disrupt the system.

Hitting a New Low: The Unique 28 h Cessation of Accretion in the TESS Light Curve of YY Dra (DO Dra)

https://arxiv.org/pdf/2203.00221.pdf

Compact bodies tend to accrete anything in the neighbourhood (subject to Keplerian rules), and they're messy eaters. But if there's (temporarily) nothing around to eat ... the lights go out. All very predictable - except that the process is essentially stochastic, so from this distance you can't predict it (pulsar brightness has been proposed as a source of widely distributed high-quality cryptographic randomness), but it's going to happen sometimes. And it did.

"there is a day-long, flat-bottomed low state at the beginning of 2020 during which the only periodic signal is ellipsoidal variation and there is no appreciable flickering" This relates to .... [ SOMETHING ELSE RECENT, twin SMBHs, orbiting each other, quiet periods allow the sinusoidal signal to come through the noise.]

Maybe later.

The Asteroid-Comet Continuum

https://arxiv.org/pdf/2203.01397.pdf

Sorry, but any "build a stellar system" model has small, medium and large (non-stellar) bodies in a continuum of sizes. Baseball round the head time - yes, it's a continuum.

Jupiter’s inhomogeneous envelope

https://arxiv.org/pdf/2203.01866.pdf

Building a solar system isn't easy (see above). We're still trying to work out how the Solar system was built. We still don't know, but "what does Jupiter contain?" is a good question, important for constraining models of the assembly of the Solar system. We know that other planetary systems have different trajectories (q.v. "hot Jupiters"), but for the foreseeable future the Solar system is going to remain the best studied member of the class.

There are several chief results:

Conclusions (from the abstract), with my comments:

- "We also find that uncertainties in the equation of state (EoS) are crucial when determining the amount of heavy elements in Jupiter’s interior." - Well, actually that's immediately obvious when you first start trying to work out how conditions change with distance from the surface. We learned about ideal gases in secondary school, but most people forget the caveat we were given that "ideal gases do not exist". Yeah, they don't. You need an EoS. Which is not easy to construct ab initio, or to measure at mid-Jupiter conditions. Particularly when you don't have a good handle on the chemical composition.

- "Our models put an upper limit to the inner compact core of Jupiter of 7 M⊕, independently on the structure model (with or without dilute core) and the equation of state considered." - That's a bit lower than "traditional" (last decade or two) levels, which have estimated it at 10 to 30 M⊕, but the exact numbers are very much dependent on how the "heavy" elements are distributed within the planet, from "everything at the centre" to "evenly distributed, below the visible clouds".

- "Furthermore, we robustly demonstrate that Jupiter’s envelope is inhomogeneous, with a heavy-element enrichment in the interior relative to the outer envelope." - Again, since the end-point models haven't been made to work, generally people get that ; what the distribution is remains a question.

The study combined three different EoS for the H2 component, two EoS for the He component, one for the silicates and one for the H2O component. We have a reasonably well-constrained value for the temperature at the 1 bar level (166 K from the Galileo death-plunge ; between 165 and 170 K from various interpretations of the Voyager radio-occultation data), about which point a range of values is examined.

A Star-sized Impact-produced Dust Clump in the Terrestrial Zone of the HD 166191 System

https://arxiv.org/pdf/2203.02366.pdf

Most astrophysics descriptions of star formation have them forming within molecular clouds of gas and dust, and almost always in close proximity to other stars. The continuing use of studies of "extinct" (short-lived) isotopic systems as clocks to study the first few millions of years of the Solar system is generally ascribed to the proto-Solar structure being peppered with debris from a nearby supernova, and many models (and no small number of popular science descriptions) rather depend on this proximity.

Therefore, one should see the results of star-star (or proto-star - proto-star) interactions sometimes on the sky. Which we do already, with things like "blue stragglers" in globular clusters (probably the result of two small, old stars meeting and merging, to produce an abnormally bright and blue star in a cluster of generally older stars ; note that brightness (luminosity) has a steeper-than-linear relationship with mass - my astronomy workbook has luminosity as mass^(2.3 - 4.0) with an overall exponent of close to 3.0). Similarly, we should see the results of "hierarchical" impacts (impacts of similarly sized bodies together) in the dust+gas discs surrounding young stars.
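To put a rough number on why merged stars stand out (my own arithmetic, using the exponent range from my notes rather than anything in the paper):

```python
# With L ~ M^n, merging two equal stars of mass M gives one star of mass 2M,
# which outshines the original pair by 2^n / 2 = 2^(n-1).
for n in (2.3, 3.0, 4.0):
    print(f"n = {n}: merged star is {2.0**n / 2.0:.1f}x brighter than the pair it replaced")
```

For an exponent of ~3 that's a factor of ~4, which is plenty to make a blue straggler obvious against its cluster.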

This report describes a star which has a brief (5-10 years) period with a lot of IR excess and a temporary dimming, which are interpreted as something hitting the system, or possibly two planets colliding, producing a lot of warm dust and gas in a cloud of around 0.6 AU diameter. ("Diameter" assuming the particles are on circular Keplerian orbits.)

Observations from 2015 to 2020 (when the Spitzer spacecraft stopped working) showed an approximate 2-fold increase in IR brightness between mid-2018 and early 2019. The optical brightness did not change significantly over this period. A "giant impact" of two bodies of approximately Ceres or Vesta size is the favoured interpretation. Early during the "rise" phase of the change, which was partly sampled during one of the 39-day long periods each half-year when the spacecraft could observe the system, there was a brief decrease of the "colour temperature" of the system. In combination, these observations suggest the sudden production of a large amount of cool dust from a compact body (planetesimal), which dust then warms as it disperses from the source.

Our models suggest we should see these things. Some people who don't like being unexceptional (typically god-squaddies) don't like these models, and seize on a low number of such observations as evidence that we are exceptional. Well, here's another case suggesting that we are, indeed, inhabitants of an unexceptional stellar system.

Dielectric properties and stratigraphy of regolith in the lunar South Pole-Aitken basin: Observations from the Lunar Penetrating Radar

https://arxiv.org/pdf/2203.02840.pdf

The Chang'e-4 Lunar Rover was landed on the far side of the Moon in 2019 and operated until September 2020. During its traverse across the floor of the Von Karman crater, it carried a Ground-Penetrating Radar (GPR) system which could read some of the mineralogy and structure of the regolith below the rover. The rover trail wasn't straight, but tended generally WNW for about 550 m from the landing point. On this traverse, some 15000 radar soundings were taken, which this paper presents to give an approximate distance-depth cross-section, presented here as figures 2 (raw, up to 600 ns/section), 7 (migrated, up to ca 45 m depth) and 8 (interpreted, up to 45 m depth).

Unsurprisingly, the contrasts between one layer of regolith and the next aren't great. Density is a bigger contrast than mineralogy, with a bulk density varying from 1.49 to 2.07 g/cm3 (as determined from "reflection hyperbolae" from boulders suspended in the regolith). Assuming a mineral grain density of 3.6 g/cm3, mid-range for olivine, that corresponds to 58 to 42 % porosity. Uniform spheres have a porosity of about 37% in a random close packing, so these reduced densities imply non-spherical grains which are less than perfectly packed. The lowest densities are nearest the surface, which isn't incompatible with the compaction one would expect. If some of the grains are not spherical (cows in a vacuum, or olivine grains, also in a vacuum), like many fragments of impact glass retrieved from the Apollo samples, one would also get those reduced packing efficiencies.
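Checking those porosity numbers (my arithmetic, using the same assumed 3.6 g/cm^3 grain density as above):

```python
# Porosity = 1 - (bulk density / grain density), for the quoted bulk densities.
grain_density = 3.6                      # g/cm^3, mid-range olivine (assumed)
for bulk in (1.49, 2.07):
    porosity = 1.0 - bulk / grain_density
    print(f"bulk density {bulk} g/cm^3 -> porosity {porosity:.1%}")
# ~59% and ~42%, matching the 58-42% range quoted in the text.
```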

Figure 8 - the "interpretation" of the migrated section - shows there to be a small crater, about 120 m across and 10 m deep below its rim (their measures on the full print-out give 128 m across and 13 m deep, but it's hard to be that precise on the print-resolution versions), which is compatible with a small impact crater. This crater cuts across 2 (possibly 3) of the sub-surface layers, but contains its own fill unit below the surface layer of unstructured material (with the lower density as noted above).

A nice bit of work. "Radar Robin" would like it - or would have, before he retired.

Assessment of Microbial Habitability Across Solar System Targets

Hmmm, well everyone (and their dogs, and cats) is happy that there is a non-zero chance of microbes (as recognised on Earth, today) surviving trips to other places and living on afterwards. Small, but non-zero chances.

These authors try to develop a systematic analysis. It's a useful idea, but it gets practically an extra free parameter for every inhalation. Definitely needs more work to make it workable. It's also something that sticks to the "known unknowns" part of the field (not that you can really do anything else).

Potentially useful, but complex.

New satellites of figure-eight orbit computed with high precision

https://arxiv.org/pdf/2203.02793.pdf

This is a new numerical approach to computing possible stable orbits in a Newtonian 3-body problem.

The 3-body problem notoriously has no analytic solution, but that doesn't preclude finding solutions by numerical searches. The typical situation examined is of three identical bodies moving in each other's gravitational fields. If you examine a case with differing body masses, then you collapse rapidly towards a single main mass with "light" "test particles" moving in the main particle's gravitational field with negligible influence from the field of the other "light" "test particle" (that is what "light" "test particle" means!), or to a case of two large particles orbited by a single "light" "test particle". The situation is held in a plane defined by the three bodies, and since they comprise the entire universe of the model, there's nothing to generate an out-of-plane force, so it stays like that.
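For flavour, here is a minimal sketch of the basic ingredient of such work (my own toy code, not the authors' method), integrating the widely quoted equal-mass figure-of-eight initial conditions of Chenciner and Montgomery and checking that the system closes up after one period:

```python
# Leapfrog integration of the Newtonian three-body figure-of-eight (G = m = 1).
import numpy as np

G = 1.0
m = np.array([1.0, 1.0, 1.0])

pos = np.array([[-0.97000436,  0.24308753],
                [ 0.97000436, -0.24308753],
                [ 0.0,         0.0       ]])
vel = np.array([[ 0.46620369,  0.43236573],
                [ 0.46620369,  0.43236573],
                [-0.93240737, -0.86473146]])

def accel(pos):
    """Pairwise Newtonian accelerations on the three bodies."""
    a = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                a[i] += G * m[j] * r / np.linalg.norm(r)**3
    return a

period, dt = 6.3259, 1.0e-4          # commonly quoted period of the orbit
start = pos.copy()
a = accel(pos)
for _ in range(int(period / dt)):    # velocity-Verlet steps over one period
    vel += 0.5 * dt * a
    pos += dt * vel
    a = accel(pos)
    vel += 0.5 * dt * a

print("max drift from the starting positions:", np.abs(pos - start).max())
```

The "satellite" orbits of the paper are, as I understand it, nearby periodic solutions found by much more careful, high-precision searches around this kind of seed orbit.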

The "figure-of-eight" orbits referred to here are a class where, if you set up a rotating frame of reference centred on the centre of mass of the system, oriented how? There is a lot of crossing of the "zero point" in the plots, so there must be some offset between the plot and the points involved, or they'd have the system collapsing by collision. Clearly there's something going on in the presentation of these results that I don't understand.

Is it worth pursuing, tracking down the references? I don't really think so, for me. It's a numerical method, about a non-physical situation. Good improvements in computational efficiency no doubt, and that'll play back into things like collision modelling. Very useful. But definitely not my field.

Can a particle moves zigzag in time?

https://arxiv.org/pdf/2203.04200.pdf

Everyone has heard of Wheeler's electron? Right? Physicist, author of some of the classic texts on gravity and GR that informed generations of physicists. I see that Wiki calls it the "one-electron universe". Seems a good enough name for it. Wheeler's (half-joking?) suggestion was that the universe only contains one electron, which zig-zags backwards and forwards in time (at what sort of clock cycle?), showing as an electron when travelling in one direction, and as a positron in the other direction. Nice idea - how does it describe what happens when a positron annihilates with an electron? And how does it explain the preponderance of electrons over positrons today? Well, quibbles aside ... the question of whether a particle can zigzag in time does seem fairly fundamental to the model. Hence this paper.

The "arrow of time" still pervades fundamental physics, sometimes appearing as the "arrow of causality", sometimes as the arrow of thermodynamics.

OK, it's a complex question. The problem is that the authors claim that zigzag trajectories are possible ... but don't lead to results that can be measured. Even in theory. Very big "Hmmmm", indeed.

It looks like this is going to be dense. I can get this though : "In this paper, we consider the third question, “why can a particle zigzag in space but not in time?”. But ... then it gets beyond me. "The transition amplitude kernel has a well-known formal form" - does it really? Nope, totally beyond me. The English is a bit strained, which doesn't help, but I don't think the original Russian would be much of an improvement - the problem is in the maths. And the physics.

On the fate of quantum black holes

https://arxiv.org/pdf/2203.04238.pdf

I remember discussing this question on CIS:SciMath, way back about the millennium. As the mass-energy of a BH decreases, the curvature of spacetime at the event horizon increases, and so the energy of the (mean) particle of Hawking radiation also increases. At some point, the (mean) particle emitted by such a decaying BH will contain as much mass-energy as the BH itself. And then what happens?
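To put rough numbers on where that crossover sits (my own arithmetic with the standard Hawking temperature formula, nothing taken from the paper):

```python
# The Hawking temperature scales as 1/M, so the typical emitted quantum
# (~ k*T_H) only becomes comparable to the hole's own rest energy M*c^2
# when M has shrunk to roughly the Planck mass.
import numpy as np

hbar, c, G, k = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23   # SI units

def hawking_T(M):
    """Hawking temperature (K) of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8.0 * np.pi * G * M * k)

m_planck = np.sqrt(hbar * c / G)           # ~2.2e-8 kg
for M in (1.0, 1.0e-5, m_planck):
    ratio = k * hawking_T(M) / (M * c**2)  # typical quantum energy / rest energy
    print(f"M = {M:.3e} kg: kT_H / Mc^2 = {ratio:.2e}")
# The ratio only reaches order unity (up to the 8*pi) near the Planck mass,
# which is exactly where the "and then what?" question bites.
```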

I never did get a confident answer from the physicists (and mathematicians) there. Quite a brush-off, IIRC, even. Which probably marks it as an interestingly difficult question. Let's see what these people have to say.

The endpoint of the process of Hawking evaporation remains unknown—it is expected that a theory of quantum gravity will answer this question by resolving the singularity and providing dynamics past the classically singular region.

Question dodgers!

A construct called "loop quantum gravity (LQG)" is "a non-perturbative attempt to quantize general relativity" using stuff that falls under the non-FLRW "alternative cosmologies" discussed above. It's not a hopeless situation : "Solving the full quantum dynamics in LQG currently remains out of reach, but there is a substantial body of work on symmetry-reduced models, including applications to homogeneous cosmological space-times known as loop quantum cosmology (LQC)"
I'm not sure how to classify this in terms of the families of cosmologies discussed above. I hadn't particularly picked up on this assertion : "In LQC, these methods have successfully led to the resolution of the big-bang singularity in classical general relativity, replacing it with a non-singular bounce in homogeneous cosmological space-times" - but maybe I've just classified all the "Big Bounce" models as dodging the singularity for reasons of dodging the singularity.

One partial success of these approaches is that "In the second category are works that use coordinates in spherical symmetry where the radial coordinate remains spacelike throughout the entire space-time, making it is possible to treat the interior and exterior of the black hole on an equal footing for quantization." I.E. a lot of other tools can be used to study the interior of the black hole. That's good, I think.

Then they go further "It has the advantage that the correct classical limit is recovered where the space-time curvature is small compared to the Planck scale." Which I think is talking about the situations where I'm asking about the release of high-mass-energy particles from a severely "bent" event horizon. I may have stumbled on a quite important question.

"A generic feature of these works is that, as in cosmology, the singularity is resolved and matter bounces when space-time curvature reaches the Planck scale." That is, indeed a level of scale that I'd suspected would be important.
Another interpretation of Hawking radiation has been popular in the SF world for a long time, and these ideas of white holes are also still in play in cosmologies. "It may also be possible to understand the bounce generated by quantum gravity effects as a transition from a black hole to a white hole, with potential observational consequences."

The paper then starts going far over my head, but I do see one result worth noting "After evolutions spanning two orders of magnitude in data mass M , we find that this lifetime is T ∼ M^2/(Planck Mass, mPl). " "Data Mass" is an interesting concept too, as well as that lifetime estimate. It seems to have escaped Wiki, but I'll have to look deeper on that.

Six more sections of maths lose me completely. But I think I've got some significant points out of that.

Yet another star in the Albireo system

https://arxiv.org/pdf/2203.04222.pdf

Yeah, I fucked up. There's a famous quadruple-star system that has considerable colour contrasts, even in a small telescope. It's not Albireo. I can't remember which star I'm thinking of. But yeah, another star is multiply multiple ; film at 11.

Terrestrial volcanic eruptions and their association with solar activity

https://arxiv.org/pdf/2203.03637.pdf

People have been trying to predict volcanic eruptions and earthquakes since ... probably 5 minutes after the first hominid felt an EQ and had the nous to think "WTF was that?" [More traceably, 1910 (Reid, Harry Fielding (1910), "Volume II. The Mechanics of the Earthquake.", The California Earthquake of April 18, 1906: Report of the State Earthquake Investigation Commission, Washington, D.C.: Carnegie Institution of Washington) and 1935 (Wood, H. O.; Gutenberg, B., "Earthquake prediction", Science, 82 (2123): 219–320, doi:10.1126/science.82.2123.219, PMID 17818812).] Unsurprisingly, that followed the 1906 San Francisco earthquake and the increasing spread and reliability of the instrumental record of earthquakes.

To say that "Earthquake Prediction" (and it's cousin, volcano prediction) is a branch of science covered with illustrious success and unalloyed glory would be a bit optimistic. To claim that it has any successes at all is a bit optimistic. But people still hope that there is some successful combination of measurements which they can take which would lead to a usable prediction (in terms of magnitude, location and timing. The timing element is what distinguishes a prediction from a forecast, which is not so restricting on the timing.)

Unsurprisingly (particularly since a major US earthquake would mean a lot of dollar damage, and even dead white people - when the highest likelihood of the first million-casualty earthquake is going to be along the Himalayan Front. But that'll only kill poor brown people, so that's not as important as a bit of property damage in California), there are quite a lot of people really keen that a successful method is found. Ditto for volcanic eruptions, but that's more of an Indonesian (poor, brown, non-English-speaking) and Japanese (rich, brown, non-English, and not even using the Latin writing system) obsession than American, and so considerably less vociferously pursued. Such intense desire does not however mean that such a system exists today, or is even possible. (Why do I doubt that it is possible? The wallrocks of the faults involved are inhomogeneous ; the faults are inhomogeneous vertically, horizontally and in terms of confining pressure ; the necessary information is on the centimetre to metre scale, but the technology for measuring what is happening kilometres into the ground can give accuracies to the scale of tens of metres (that hasn't changed much in the last 30 years that I've been drilling oil wells, needing that improved precision ; of course, once you've got a well in place and logged, you can take a lot of the uncertainties out of the particular solution for that fault, but that's not a scalable solution). You're not going to get the information you need to build a reasonably accurate model of the fault plane.) Volcanoes are even more complex, and their plumbing systems are even harder to image than a single fault plane.

One consequence of this ... is "desperation" too strong a word? ... to find a workable prediction method is that people are looking everywhere to find something that correlates with, and leads, earthquakes. A couple of years back there was a probably successful claim of correlating the occurrence of volcanic eruptions with the phase of the Moon. Which ... well, the phase of the Moon does exert considerable gravitational forces on the Earth, so that isn't so terribly surprising. But it is a weak relationship.

This paper continues the process of data mining, but to my mind less convincingly. Here they see a small, positive correlation between solar/geomagnetic activity and the occurrence of volcanic eruptions. But to get the answer they are looking for from the analysis they have to be careful to align the cycles of the solar magnetic field "just so" - whereas surely a real correlation should come out without needing a careful alignment.

In the end, the authors sum up the predicament of the field thus : "there are no viable mechanisms yet proposed for the explanation of any correlation between volcanic activity on the Earth and solar activity." I don't see any estimation of the forces that the geomagnetic field can exert on the near-surface rocks of the Earth from its (varying) interaction with the Solar field, and how to feed that back into triggering earthquakes and volcanic eruptions.

Without that sort of mechanical force estimate, there remain "no viable mechanisms", as the authors say. So just from that, I doubt that there is a meaningful mechanism for this correlation. I'm also quite suspicious of how they derive their estimates for the Solar magnetic field strength from before the instrumental record. There's a suspicious "geomagnetic jerk" that they bring in to associate with the 1859 Carrington event, which is probably significant in "improving" their statistical results.

It's an interesting idea, but I remain decidedly unconvinced. Attempts to reconstruct magnetic field strengths before the instrumental record seem particularly dubious to me - there are so many confounding effects. But as the instrumental record extends (at a rate of 1 year per year) one would hope that the signal to noise ratio improves. If there is a signal.


2022-01-27

January Arxivery.

High Resolution Search for KBO Binaries from New Horizons 

This one is (https://arxiv.org/pdf/2201.05940.pdf) a very normal report. The cameras on New Horizons (and the pointing systems, the telecomms, etc) are being used to image a variety of KBO objects as it flies on. This observing report is about those which have been identified (with some degree of confidence) as binaries.
2 of the 5 systems examined showed evidence of being "equal brightness" binaries, so closer to the class of body of Arrokoth ("MU69"), flown past in 2019.
The sub-text is : they've finished draining the memory from the Arrokoth flypast, so can use the communications system at no risk, and they are consuming the fuel reserves for the pointing process - which is an irreplaceable resource. I infer that their search for other KBOs within the travel cone of NH has been unsuccessful, and they are turning the spacecraft's resources to other research subjects.

The Sun to planetary centre of mass distance is coherent with solar activity on the decade, centennial and millennium time scales when Planet 9 is included in the solar system.

This author links solar activity cycles to the effect of "Planet 9" on solar system dynamics. Which isn't actually insane, but is looking to amplify a very small signal into quite a large system. As the author says, "The direct gravitational effect is, however, very small resulting in tides on the Sun of the order of only 1 mm. Whether tides of this magnitude could result in any significant acceleration on the Sun is regarded with some scepticism"
Yep, I'm sceptical too. 
They use the Brown-Batygin 2016 parameters for their P9 (noting that BB updated their estimate in 2021).
Well, it's a theory, but it's not very convincing. If P9 is identified (an attractive idea, but maybe not true), it would be worth revisiting.

Building Terrestrial Planets: Why results of perfect-merging simulations are not quantitatively reliable approximations to accurate modelling of terrestrial planet formation

Planet-merging models of solar system formation are models, with significant simplifications from the current state of the Solar system, and we don't know what the actual state of the early Solar system really was. Well, yes, we did know that. Probably it was strictly necessary to prove it at least once, but it's hardly an urgent demonstration.
Next?

Blue marble, stagnant lid: Could dynamic topography avert a waterworld?

OK, SF planet builders have never had a problem with having a water world with some above-waterline topography, which they've hand-waved away or just ignored. With some modelling about convection, hot mountain roots, isostasy, and the like, it looks like the SF-ian Handwavium is reasonably justified. I'll remember it if I'm ever looking for references for a story I'm writing. 

A Material-based Panspermia Hypothesis: The Potential of Polymer Gels and Membraneless Droplets

Now you may not know that I trust Panspermia less than I trust a crooked bookie who throws the dice where you can't see them, but I know that and I trust what I think. The annoying thing is, panspermia isn't impossible, just quite unlikely and it moves the genuine problem of how life originated to somewhere we don't know, under conditions we don't have any realistic constraints on. So, as a hypothesis generating and testing scenario, it's worth jack shit.
So, these authors add a (relatively) new component to the mix : polymeric abiogenic gels to help provide some cushioning for the proto-bugs in their rocky or droplet-y transport vehicles. Yeah, a plausible component, a good excuse. Nice try. But essentially still untestable.
Sorry, not untestable - we'd need to put a spacecraft into contact with an interstellar (or interplanetary) rock, and then find a bug (or proto-bug) in the rock. But it's a pretty hard test to apply. And finding a billion sterilized rocks wouldn't make it impossible for the billion-and-first rock to hold the proof of panspermia. Or Willy Wonka's Golden Ticket. 
Meh.


2022-01-11

New Year, New ArXivery

I've been a bit slack on keeping up with Arxiv for a while, so let's see what's hiding in the in-box.

2022-01-04

Hyper-Fast Positive Energy Warp Drives

Now that sounds like a challenge to anyone not wanting to "out-geek their inner Trekkie", or however they describe it.

What's it all about? It's on the "gr-qc" section of Arxiv, which is "general relativity & quantum cosmology", so from the off, I'm not expecting a claim of a working model, ready to go out and speak to Vulcans.

Abstract

Solitons in space–time capable of transporting time-like observers at superluminal speeds have long been tied to violations of the weak, strong, and dominant energy conditions of general relativity. This trend was recently broken by a new approach that identified soliton solutions capable of superluminal travel while being sourced by purely positive energy densities.

Yes, previous suggestions of how to travel faster than light have been "hampered" by needing some way of generating "negative energy density" volumes of space, which nobody has any idea of what it means, or how to make one. So staying on the positive side of that question is probably a good idea - at least if you're wanting to talk about things that might actually be realisable.

[continued] This is the first example of hyper-fast solitons satisfying the weak energy condition, reopening the discussion of superluminal mechanisms rooted in conventional physics. This article summarizes the recent finding and its context in the literature. Remaining challenges to autonomous superluminal travel, such as the dominant energy condition, horizons, and the identification of a creation mechanism are also discussed.
OK, it's more of a review article than a research report, but that's OK.

The first 18 references are about the negative energy density thing. It still doesn't mean much to me, but at least I got that bit right. A soliton is a shaped block of space-time that in some way differs from the surrounding smooth space-time by means of its rapidly varying curvature. I think. What does Wiki say? a self-reinforcing wave packet that maintains its shape while it propagates at a constant velocity - so it's not just a space-time thing, but a general wave phenomenon. And then ... the paper goes down a rabbithole of maths, way over my head.

There are still numerous challenges between the current state of physical warp drive research and a functioning prototype.
Ohhh, that sounds fun ...
The most glaring challenge is the astronomical energy cost of even a modest warp drive, currently measured in solar masses where kilograms is closer to the threshold of human technology.
Spoilsports!
the next hurdle to approach is modeling the full life cycle of a physical warp drive (creation, acceleration, inertial motion, deceleration, and diffusion).
Similarly more spoilsportery. Such tedious attention to uninteresting mere engineering. How are we meant to vibrate the dilithium crystals if someone insists that we make the damned things first?
The last hurdle I will mention is the full characterization of the sourcing fields, whether it be a plasma or other state of matter and energy. [...] the specification of the drive geometry only is an incomplete description of the full solution. Stress-energy sources must be specified to close the system.
More mere engineering. Is this paper meant to be an inspiration to dreamers, or some sordid little chain which we have to slip to touch the rest of the universe?

It sounds fascinating, but it's not a huge rallying cry to the Trekkies of the world. Sad, but they're used to it, I'm sure.

On thermodynamics of compact objects

ArXiv (where I recently discovered that the "X" isn't an "X" but a "chi", so the pronunciation is "archive"). Sounds dull, but since we can't yet find a way around those pesky laws of thermodynamics, I suppose we'd better pay attention to them. My first question is, are they talking about "compact bodies" which aren't gases, or bodies that are compact enough to have gone out the other side of atoms to being piles of fundamental particles, with no free internal space? Oh, it's black holes, so the inner workings of the bodies are thoroughly hidden from us. How convenient! No details to worry about.

Oh, no, they're not hiding the details : "focusing on self-gravitating compact systems without event horizons" means they don't have any convenient "Veil of Cosmic Censorship" (a.k.a "event horizon") to hide the details from the rest of the universe. How do they manage that? "The key step is the appropriate identification of thermodynamic volume [...] which is in general different from the geometric volume." Ah, that makes a degree of sense to me - if the spacetime were flat, a metre here is the same size as a metre over there, and a right angle to this straight line here defines a plane which is parallel to a right angle from the same straight line over there ; but explicitly they're not looking at a flat spacetime but a curved one, so you can't rely on either of those identities of translation. And 75 equations later, we get to a summary. Which is expressed in terms of the Equation of State of the material. (They're looking at photon gases, or fundamental particles, not atoms, so we shouldn't need to worry about "chemistry".)

Sidebar - Equation of State

I remember seeing these in Mike "Plutokiller" Brown's planetary science course, but I need to refresh my memory. They provide a link between the pressure on a material and its density. Wiki puts it slightly differently : "an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy." That includes the pressure-density relation (density being related to volume for a particular bit of matter), plus others. The classical ideal gas law EoS is
p * V = n * R * T (Eq. 1 ; pressure, volume, number of moles in the system, gas constant and temperature, respectively)
or
p = R * T * (n/V) (Eq 2)
where n/V is clearly an expression of density.

If you're holding volume constant (so doing no work, because the pressure vector doesn't move, if you think in terms of a piston model) you get a temperature-pressure relation. If you hold the temperature constant, you get a pressure-volume (or pressure-density) relationship.

When you get into QM systems, you have to worry about Fermi-Dirac or Bose-Einstein statistics, which is a more complicated study.

"For relativistic gas in particular, i.e. with EoS ρ = 3p" How did they get from Eq 2 (or one of the more complex QM expressions) to implying that R * T = 1 / 3 ?

So ... I don't understand what this paper is trying to say. It reminds me that I need to go back to Plutokiller's class notes - and it was a good class! - because I remember thinking I understood the EoS stuff then, but I don't seem to any more.

Rumour has it that Plutokiller is revising and updating that class. No mention on Coursera's website, but I'll keep an ear open.

The influence of a fluid core and a solid inner core on the Cassini state of Mercury

There's a sudden flurry of papers about "Cassini states", also for the Moon. It's something to do with the rotational interactions between outer and inner components. I guess there was a conference recently. Mercury behaves like a rigid body (in terms of how its spin axis and its axis of rotation about the Sun relate - the so-called Cassini state) BUT we know it has a substantial liquid core (umm, how? I'll have to check --- magnetic field?) so what is going on? The authors infer that there may be a large interior solid inner core. Earth has one, but at rather different T & P conditions. So ... probably the chemical composition of Mercury's core is different to Earth's. Since there are still arguments going at the ~10% level about the composition of the Earth's core, that's not wildly constraining.

There's an update there for Mercury's properties. Stick that into the big astronomy database file.

Otherwise, not a lot of news.

Isostatic Modelling, Vertical Motion Rate Variation and Potential Detection of Past-Landslide in the Volcanic Island of Tahiti

What's this doing on Arxiv - it should be on Earth Arxiv. Anyway, In Tahiti, a coastline uplift of 80-110 m occurred 872 kyr ago after a giant landslide - that sounds like a bad hair day. Tahiti is considered a "stable" island and a tie point for reconstructing global eustatic sealevel variations, so finding a point deformation of the surface there and modelling the post-landslide deformation of the mantle underpinning of the island affects all the rest of the world's sealevel curves. Not by a lot, but definitely by a bit.

Is Tahiti really 6000m above sea level? That sounds incredible. I don't trust GeoMapApp's data sources. Wikipedia says "Highest elevation ... 2,241 m (7352 ft)" Ah, maybe they've got a ft elevation model, and attached it to a metres bathymetry model. That's a bit un-funny.

Update 2022-01-16 - Volcanic eruption, high explosivity in neighbouring Tonga yesterday. These "high islands" of the Pacific are active volcanoes.

What sort of size of volcano are we talking about? (Link to seabed image https://app.box.com/s/4omk3pi5roy1lui05sz8mw3rsowmpbtv ) It's about a 10 km wide structure rising 750-800 m above the seabed, 50-odd km behind the Tonga Trench. There's a line of volcanoes (and "high islands") close to the trench, and this is a step further back, sourced from deeper off the descending slab (link to cross-section). You can also see the trench-edge primary-melt volcanoes and the more distal, more andesitic line of volcanoes, including the erupting one. ("Andesitic volcano" and "good neighbour" don't generally appear in the same sentence.) Volcanism is reported as andesitic, which is appropriate to the reported explosivity of the eruption. The surface islands enclosed two sides (about 1/5) of the perimeter of the summit caldera. SI Global Volcanism database entry

2022-01-08

Gravity-Assist as a Solution to Save Earth from Global Warming https://arxiv.org/pdf/2201.02879.pdf

Well, it sounds serious. But ... if you're wanting to move the Earth outwards by (so much), you need to transfer a comparable amount of material inwards from wherever (the asteroid belt, in this discussion). But the asteroid belt weighs about 1/2000 of the Earth. If you move that amount (several million asteroids) all the way to the surface of the Sun, you'd expect to move the Earth out by a corresponding amount - fractions of a percent.

What the energy and pollution costs of that would be, to not even address the actual outstanding problem of global warming, let alone the next generation's contribution ... Mr Sohrab Rahvar doesn't seem to have considered that. He calculates an expression for the change of power of light delivered to the Earth proportional to the asteroid mass and a factor for the change of angle - as my gut feeling at the start told me. And the temperature change would be the 4th root of that.
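A back-of-envelope version of that gut feeling (my numbers, not the paper's):

```python
# Even dumping the whole asteroid belt sunwards only buys a tiny outward shift
# of Earth's orbit, and the equilibrium temperature scales only as a^(-1/2).
m_belt  = 3.0e21      # kg, rough total mass of the asteroid belt (~1/2000 Earth)
m_earth = 5.97e24     # kg

da_over_a = m_belt / m_earth        # order-of-magnitude fractional shift in semi-major axis
dT_over_T = 0.5 * da_over_a         # flux ~ a^-2, T ~ flux^(1/4), so dT/T ~ da/(2a)

print(f"da/a ~ {da_over_a:.1e} (a fraction of a percent, as guessed above)")
print(f"dT/T ~ {dT_over_T:.1e}, i.e. ~{dT_over_T * 288:.2f} K on a 288 K Earth")
```

Set against the couple of degrees of warming actually in question, the mismatch is the point.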

There's a minor point buried away in the details - the impact parameter (how close the asteroid close-passer gets to Earth) is set to 6400 km - a comfortable miss, by about 30 km above the surface. For certain values of "comfortable" which many people wouldn't find very comfortable. That'll be fun.

Someone is trying to steal Avi Loeb's thunder!

This may be a suitable one for Slashdot's peanut gallery.

Two-step nucleation of the Earth’s inner core

I did this one as a comment, attached to "Giant lasers simulate exoplanet cores prove they're more likely to have life", which was a fairly overblown piece about high-pressure EoS for iron and its magnetic effects. Great, you can generate a magnetic field at bigger planet sizes than Earth, but so what? The Earth gets most of its radiation protection from its atmosphere, not its magnetic field. (More of an issue for Mars-size bodies, but so? The overblowing is about super-Earths and sub-Neptunes, not Mars-a-likes.)

A recent paper on ArXiv addresses the question of how you form an inner solid iron core in the molten iron core of a planet - which is believed to be necessary to produce the turbulent flow needed for a self-exciting dynamo.

The problem is that the "iron catastrophe" involved in separating the iron of a protoplanet from the rock it is mixed with, and it then settling to the middle of the planet, releases quite a lot of energy. (Depending on the composition, possibly enough to melt essentially all of the protoplanet.) That leaves the initial core hot and molten, and you then need to nucleate iron crystals to form the solid core. Which would normally need a significant degree of undercooling (cooling the mixture below its nominal melting point). Which is hard to achieve in the middle of a planet, possibly under thousands of km of magma ocean.

So, a new paper models a different way of forming the necessary crystals. Rather than going from the melt directly to a hexagonal close-packed (hcp) crystal (which is the energetically most favourable end product), they propose a two-stage process of first forming a body-centred cubic (bcc) crystallite, which then rearranges to hcp as it grows. They propose that the energy barriers to that two-step process would be lower, so the process rates would be higher.
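The standard classical-nucleation-theory picture behind that claim (textbook background, my gloss; the paper's actual argument is atomistic):

```latex
\Delta G(r) \;=\; \tfrac{4}{3}\pi r^{3}\,\Delta g \;+\; 4\pi r^{2}\,\gamma ,
\qquad
r^{*} \;=\; -\frac{2\gamma}{\Delta g},
\qquad
\Delta G^{*} \;=\; \frac{16\pi\,\gamma^{3}}{3\,(\Delta g)^{2}}
```

Here Δg < 0 is the bulk free-energy gain per unit volume and γ the melt-crystal interfacial energy; because the barrier ΔG* goes as γ cubed, a structure with a lower interfacial energy (bcc, in their proposal) can nucleate with far less undercooling, even if hcp wins on bulk energy once the crystal has grown.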

Which is an interesting wrinkle on the details of core formation, but probably a bit less than practically useful, since people are still arguing at the several-percent level on the composition-temperature-pressure phase diagram for core formation. I haven't followed the details for years, but the last time I looked people were still arguing over whether the Earth's core contained several % (atom) of oxygen, sulphur, potassium, or all three (in addition to bulk iron and nickel) to get the right combination of viscosity, radiogenic heating, resistivity (conductivity) and magnetic permeability, at appropriate temperatures and pressures.

Of course, the crystals formed would probably be quite pure iron, because you're essentially running a planet-scale zone refining process. So your melt composition is going to be constantly changing. Regardless of any continuing additions to/ losses to the overlying mantle bottom layer. Don'tcha just love reality against theory?

And I'm just about caught up with Arxiv.