The Quantum Big Bang in the Weyl String and Supermembrane EpsEss

Discussion in 'Thuban Cosmology in Quantum Relativity' started by admin, Nov 29, 2014.

  1. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756


    Surprise! The Universe Is Expanding Faster Than Scientists Thought

    By Mike Wall, Space.com Senior Writer | June 2, 2016 03:44pm ET







    Hubble Space Telescope view of the galaxy UGC 9391, which contains Cepheid variable stars and supernovas that scientists studied to calculate a newly precise value for Hubble's constant.
    Credit: NASA, ESA, and L. Frattare (STScI)

    The universe is expanding 5 to 9 percent faster than astronomers had thought, a new study suggests.
    "This surprising finding may be an important clue to understanding those mysterious parts of the universe that make up 95 percent of everything and don't emit light, such as dark energy, dark matter and dark radiation," study leader Adam Riess, an astrophysicist at the Space Telescope Science Institute and Johns Hopkins University in Baltimore, said in a statement.
    Riess — who shared the 2011 Nobel Prize in physics for the discovery that the universe's expansion is accelerating — and his colleagues used NASA's Hubble Space Telescope to study 2,400 Cepheid stars and 300 Type Ia supernovas. [Supernova Photos: Great Images of Star Explosions]


    This Hubble Space Telescope image shows Cepheid variable stars (circled in red) and a Type Ia supernova (blue “X”) in the galaxy UGC 9391. Astronomers studied these and other “cosmic yardsticks” to calculate how fast the universe is expanding.
    Credit: NASA, ESA, and A. Riess (STScI/JHU)
These are two different types of "cosmic yardsticks" that allow scientists to measure distances across the universe. Cepheids pulse at rates that are related to their true brightness, and Type Ia supernovas — powerful thermonuclear explosions that mark the deaths of white dwarf stars — blaze up with consistent luminosity.

    This work allowed the team to determine the distances to the 300 supernovas, which lie in a number of different galaxies. Then, the researchers compared these figures to the expansion of space, which was calculated by measuring how light from faraway galaxies stretches as it moves away from Earth, to determine how fast the universe is expanding — a value known as the Hubble constant, after famed American astronomer Edwin Hubble.

    The new, unprecedentedly precise value for the Hubble constant comes out to 45.5 miles (73.2 kilometers) per second per megaparsec. (One megaparsec is equivalent to 3.26 million light-years.) Therefore, the distance between cosmic objects should double 9.8 billion years from now, the researchers said.
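As a quick back-of-the-envelope check of the quoted figures (a sketch, not part of the original article): the snippet below converts 73.2 km/s/Mpc to miles per second and estimates the associated timescales, assuming for the doubling estimate an idealised constant expansion rate; the article's 9.8-billion-year figure folds in the actual expansion history, so the naive estimate only lands in the same ballpark.

```python
import math

H0_km_s_Mpc = 73.2                      # new SHOES value quoted above
KM_PER_MILE = 1.609344
MPC_IN_KM = 3.0857e19                   # kilometres per megaparsec
SEC_PER_GYR = 3.156e16                  # seconds per billion years

# 73.2 km/s per Mpc expressed in miles per second per Mpc
print(round(H0_km_s_Mpc / KM_PER_MILE, 1))        # ~45.5 mi/s per Mpc

# Hubble time 1/H0: the age scale set by the expansion rate
H0_per_sec = H0_km_s_Mpc / MPC_IN_KM              # H0 in 1/s
hubble_time_Gyr = 1.0 / H0_per_sec / SEC_PER_GYR
print(round(hubble_time_Gyr, 1))                  # ~13.4 Gyr

# naive doubling time for a constant expansion rate: ln(2)/H0
print(round(math.log(2) * hubble_time_Gyr, 1))    # ~9.3 Gyr, same order as the quoted 9.8
```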
    The new figure is 5 to 9 percent higher than previous estimates of the Hubble constant, which relied on measurements of the cosmic microwave background radiation — the light left over from the Big Bang that created the universe 13.8 billion years ago.

    There are a number of possible explanations for this discrepancy, study team members said.


Pulsating Cepheid variable stars and Type Ia supernovae were used to calculate how fast the Universe expands. Credit: NASA / ESA / A. Feild (STScI) / A. Riess (STScI / JHU)

    Illustration showing the three steps astronomers used to measure the universe's expansion rate to an unprecedented accuracy, reducing the total uncertainty to 2.4 percent.
    Credit: NASA, ESA, A. Feild (STScI), and A. Riess (STScI/JHU)

    For example, the mysterious force known as dark energy, which is thought to be behind the universe's accelerating expansion, may be stronger than astronomers had thought. It's also possible that "dark radiation" — an unknown, superspeedy subatomic particle or particles that existed shortly after the Big Bang — could be playing a role that hasn't been taken into account, the researchers said.
    Enigmatic dark matter, which is thought to be four times more abundant than "normal" matter throughout the universe, could also have some weird and unappreciated characteristics. Or maybe there's something important missing from Einstein's theory of gravity, the researchers said.
    In short, there's a lot of work left to do before astronomers can fully appreciate the meaning of the new results.
    "We know so little about the dark parts of the universe; it's important to measure how they push and pull on space over cosmic history," study co-author Lucas Macri, of Texas A&M University, said in the same statement.

    The new study has been accepted for publication in The Astrophysical Journal.

Originally published on Space.com: http://www.space.com/33061-universe-expanding-faster-than-thought-hubble.html


    Comment from the 'Council of Thuban':

Due to the dimensional intersection of the 3D/10D open (negatively curved) anti-de Sitter universe with its 4D/11D closed (positively curved) de Sitter 'encompassment' multiverse, the applied standard cosmology will measure an accelerating and 'getting younger' universe from a cosmological timemarker beginning some 2.2 billion years ago and continuing for another 14.7 billion years.
A (nodal) Hubble parameter of Ho = 58 km/s/Mpc for a 'Hubble Age' of 16.9 billion years then correlates with 67 km/s/Mpc as 14.6 billion years and with 73 km/s/Mpc as 13.4 billion years.
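A minimal sketch (assuming only the standard km/s/Mpc-to-SI conversion) checking the Hubble-parameter-to-'Hubble Age' correlation quoted above, where the age is taken as the simple inverse 1/Ho:

```python
MPC_IN_KM = 3.0857e19          # kilometres per megaparsec
SEC_PER_GYR = 3.156e16         # seconds per billion years

def hubble_age_Gyr(H0_km_s_Mpc):
    """Return 1/H0 in billions of years for H0 given in km/s/Mpc."""
    H0_per_sec = H0_km_s_Mpc / MPC_IN_KM
    return 1.0 / H0_per_sec / SEC_PER_GYR

for H0 in (58.0, 67.0, 73.0):
    print(H0, round(hubble_age_Gyr(H0), 1))
# 58 -> ~16.9 Gyr, 67 -> ~14.6 Gyr, 73 -> ~13.4 Gyr, matching the correlation above
```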

Regarding the text and analysis below, the 'Bubble' represents the 10D cosmology within its 11D 'envelope', with the effect that the 11D 'bubble' expands at invariant light speed 'c' relative to a gravitationally decelerating asymptotic 'cosmological redshift' velocity presently at 53% of 'c'. It so is the lower dimensional and smaller 'bubble' which is 'slower' than its higher dimensional encompassment, reversing the statement by Riess and co.

    https://www.researchgate.net/public...Dark_Energy_and_Dark_Matter_in_the_Multiverse

    Hubble rules out one alternative to dark energy

    March 14, 2011


    The brilliant, blue glow of young stars trace the graceful spiral arms of galaxy NGC 5584 in this Hubble Space Telescope image. Thin, dark dust lanes appear to be flowing from the yellowish core, where older stars reside. The reddish dots sprinkled throughout the image are largely background galaxies. Credit: NASA, ESA, A. Riess (STScI/JHU), L. Macri (Texas A&M University), and Hubble Heritage Team (STScI/AURA)


(PhysOrg.com) -- Astronomers using NASA's Hubble Space Telescope have ruled out an alternate theory on the nature of dark energy after recalculating the expansion rate of the universe to unprecedented accuracy.

    The universe appears to be expanding at an increasing rate. Some believe that is because the universe is filled with a dark energy that works in the opposite way of gravity. One alternative to that hypothesis is that an enormous bubble of relatively empty space eight billion light-years across surrounds our galactic neighborhood. If we lived near the center of this void, observations of galaxies being pushed away from each other at accelerating speeds would be an illusion.

    This hypothesis has been invalidated because astronomers have refined their understanding of the universe's present expansion rate. Adam Riess of the Space Telescope Science Institute (STScI) and Johns Hopkins University in Baltimore, Md., led the research. The Hubble observations were conducted by the SHOES (Supernova Ho for the Equation of State) team that works to refine the accuracy of the Hubble constant to a precision that allows for a better characterization of dark energy's behavior. The observations helped determine a figure for the universe's current expansion rate to an uncertainty of just 3.3 percent. The new measurement reduces the error margin by 30 percent over Hubble's previous best measurement of 2009. Riess' results appear in the April 1 issue of The Astrophysical Journal.

    The value for the expansion rate is 73.8 kilometers per second per megaparsec. It means that for every additional million parsecs (3.26 million light-years) a galaxy is from Earth, the galaxy appears to be traveling 73.8 kilometers per second faster away from us.
    Every decrease in uncertainty of the universe's expansion rate helps solidify our understanding of its cosmic ingredients. Knowing the precise value of the universe's expansion rate further restricts the range of dark energy's strength and helps astronomers tighten up their estimates of other cosmic properties, including the universe's shape and its roster of neutrinos, or ghostly particles, that filled the early universe.
    "We are using the new camera on Hubble like a policeman's radar gun to catch the universe speeding," Riess said. "It looks more like it's dark energy that's pressing on the gas pedal."

    Bursting the Bubble

    Dark energy is one of the greatest cosmological mysteries in modern physics. Even Albert Einstein conceived of a repulsive force, called the cosmological constant, which would counter gravity and keep the universe stable. He abandoned the idea when astronomer Edwin Hubble discovered in 1929 that the universe is expanding. Observational evidence for dark energy didn't come along until 1998, when two teams of researchers (one led by Riess) discovered it.


    Cepheids in Spiral Galaxy NGC 5584. This illustration shows the location of Cepheid variables found in the spiral galaxy NGC 5584. Ultraviolet, visible, and infrared data taken with Hubble's Wide Field Camera 3 in 2010 reveals Cepheids of varying periods. Those stars with periods of less than 30 days and between 30 and 60 days are marked with blue and green circles, respectively. A small number of Cepheids, with periods larger than 60 days, are marked in red. Credit: NASA, ESA, A. Riess (STScI/JHU), and L. Macri (Texas A&M University)


    The idea of dark energy was so far-fetched, many scientists began contemplating other strange interpretations, including the cosmic bubble theory. In this theory, the lower-density bubble would expand faster than the more massive universe around it. To an observer inside the bubble, it would appear that a dark-energy-like force was pushing the entire universe apart. The bubble hypothesis requires that the universe's expansion rate be much slower than astronomers have calculated, about 60 to 65 kilometers per second per megaparsec. By reducing the uncertainty of the Hubble constant's value to 3.3 percent, Riess reports that his team has eliminated beyond all reasonable doubt the possibility of that lower number.
    "The hardest part of the bubble theory to accept was that it required us to live very near the center of such an empty region of space," explained Lucas Macri, of Texas A&M University in College Station, a key collaborator of Riess. "This has about a one in a million chance of occurring. But since we know that something weird is making the universe accelerate, it's better to let the data be our guide."

    Using stars as "cosmic yardsticks" measuring the universe's expansion rate is a tricky business. Riess' team first had to determine accurate distances to galaxies near and far from Earth. The team compared those distances with the speed at which the galaxies are apparently receding because of the expansion of space. They used those two values to calculate the Hubble constant, the number that relates the speed at which a galaxy appears to recede to its distance from the Milky Way. Because astronomers cannot physically measure the distances to galaxies, researchers had to find stars or other objects that serve as reliable cosmic yardsticks. These are objects with an intrinsic brightness, brightness that hasn't been dimmed by distance, an atmosphere, or stellar dust, that is known. Their distances, therefore, can be inferred by comparing their true brightness with their apparent brightness as seen from Earth.
Among the most reliable of these cosmic yardsticks for relatively shorter distances are Cepheid variables, pulsating stars that brighten and fade at rates that correspond to their intrinsic luminosity. But Cepheids are too dim to be found in very distant galaxies. To calculate longer distances, Riess' team chose a special class of exploding stars called Type Ia supernovae. These stellar explosions all flare with similar luminosity and are brilliant enough to be seen far across the universe. By comparing the apparent brightness of Type Ia supernovae and pulsating Cepheid stars, the astronomers could measure accurately their intrinsic brightness and therefore calculate distances to Type Ia supernovae in far-flung galaxies.
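As a generic illustration of the 'cosmic yardstick' logic described above (not the team's actual pipeline; the absolute magnitude and recession velocity below are made-up example values), the distance to a standard candle follows from comparing true and apparent brightness, and a Hubble constant follows from pairing that distance with a recession velocity:

```python
# distance from the inverse-square law, expressed via the distance modulus m - M
M_abs = -19.3          # assumed peak absolute magnitude of a Type Ia supernova
m_app = 20.0           # hypothetical observed (apparent) magnitude
d_pc = 10 ** ((m_app - M_abs + 5.0) / 5.0)     # distance in parsecs
d_Mpc = d_pc / 1.0e6
print(round(d_Mpc, 1))                          # ~724 Mpc for these example numbers

# pairing the distance with a measured recession velocity gives H0 = v / d
v_km_s = 53000.0       # hypothetical recession velocity from the host galaxy's redshift
print(round(v_km_s / d_Mpc, 1))                 # ~73 km/s/Mpc
```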
    Using the sharpness of the new Wide Field Camera 3 (WFC3) to study more stars in visible and near-infrared light, scientists eliminated systematic errors introduced by comparing measurements from different telescopes.

    "WFC3 is the best camera ever flown on Hubble for making these measurements, improving the precision of prior measurements in a small fraction of the time it previously took," said Macri.

    Using one instrument to measure the Hubble constant is like measuring a hallway with a tape measure instead of by laying a ruler from end to end. By avoiding the need to pick up the ruler and lay it back down, you can prevent mistakes. "The camera on Hubble, WFC3, is the best ever flown on Hubble for making these measurements, improving the precision of prior measurements in a small fraction of the time it previously took," Riess said.
    The astronomer hopes that Hubble will continue to be used in this way to reduce the uncertainty in the Hubble constant even more, and thus refine the measured properties of dark energy. He suggests the present uncertainty could be cut in two before Hubble gives way to improvements out of Hubble's reach but within the scope of the James Webb Space Telescope, an infrared observatory scheduled to launch later this decade.

Chasing a runaway universe, Riess has been pursuing dark energy for 13 years. He co-discovered the existence of dark energy by finding that distant Type Ia supernovae were dimmer than expected, which meant they were farther away than anticipated. The only way for that to happen, Riess realized, was if the expansion of the universe had sped up some time in the past.
    Until that discovery, astronomers had generally believed that the cosmic expansion was gradually slowing down, due to the gravitational tugs that individual galaxies exert on one another. But the results implied that some mysterious force was acting against the pull of gravity, shoving galaxies away from each other at ever-increasing speeds.
    Riess decided that one of the best ways to tighten the constraints on dark energy is to determine an accurate value for the Hubble constant, which he has been doing with the Hubble Space Telescope. That measurement, combined with others from NASA's Wilkinson Microwave Anisotropy Probe (WMAP), traces the universe's behavior from nearly the dawn of time to the present age. (WMAP showed the universe as it appeared shortly after the Big Bang, before stars and galaxies formed.)

Riess is just one of many astronomers who, over the past 80 years, have been measuring and re-measuring the Hubble constant. The Hubble telescope has played a major role in helping astronomers precisely measure the universe's expansion. Before Hubble was launched in 1990, the estimates for the Hubble constant varied by a factor of two. In 1999, the Hubble Space Telescope Key Project on the Extragalactic Distance Scale refined the value of the Hubble constant to an error of about 10 percent.

Provided by: NASA's Goddard Space Flight Center


    Read more at: http://phys.org/news/2011-03-hubble-alternative-dark-energy.html#jCp
     
    Last edited: Aug 23, 2016
  2. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756
3.8 A Synthesis of LCDM with MOND in a Universal Lambda Milgrom Deceleration


[Image: M33 rotation curve (HI)]
    [Excerpt from Wikipedia:
    https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics

    Several independent observations point to the fact that the visible mass in galaxies and galaxy clusters is insufficient to account for their dynamics, when analysed using Newton's laws. This discrepancy – known as the "missing mass problem" – was first identified for clusters by Swiss astronomer Fritz Zwicky in 1933 (who studied the Coma cluster),[4][5] and subsequently extended to include spiral galaxies by the 1939 work of Horace Babcock on Andromeda.[6] These early studies were augmented and brought to the attention of the astronomical community in the 1960s and 1970s by the work of Vera Rubin at the Carnegie Institute in Washington, who mapped in detail the rotation velocities of stars in a large sample of spirals. While Newton's Laws predict that stellar rotation velocities should decrease with distance from the galactic centre, Rubin and collaborators found instead that they remain almost constant[7] – the rotation curves are said to be "flat". This observation necessitates at least one of the following: 1) There exists in galaxies large quantities of unseen matter which boosts the stars' velocities beyond what would be expected on the basis of the visible mass alone, or 2) Newton's Laws do not apply to galaxies. The former leads to the dark matter hypothesis; the latter leads to MOND.

    MOND was proposed by Mordehai Milgrom in 1983

    The basic premise of MOND is that while Newton's laws have been extensively tested in high-acceleration environments (in the Solar System and on Earth), they have not been verified for objects with extremely low acceleration, such as stars in the outer parts of galaxies. This led Milgrom to postulate a new effective gravitational force law (sometimes referred to as "Milgrom's law") that relates the true acceleration of an object to the acceleration that would be predicted for it on the basis of Newtonian mechanics.[1] This law, the keystone of MOND, is chosen to reduce to the Newtonian result at high acceleration but lead to different ("deep-MOND") behaviour at low acceleration:

FN = m μ(a/a0) a ........(1)

    Here FN is the Newtonian force, m is the object's (gravitational) mass, a is its acceleration, μ(x) is an as-yet unspecified function (known as the "interpolating function"), and a0 is a new fundamental constant which marks the transition between the Newtonian and deep-MOND regimes. Agreement with Newtonian mechanics requires μ(x) → 1 for x >> 1, and consistency with astronomical observations requires μ(x) → x for x << 1. Beyond these limits, the interpolating function is not specified by the theory, although it is possible to weakly constrain it empirically.[8][9] Two common choices are:

μ(x) = x/(1+x) ("Simple interpolating function"),
    and
μ(x) = x/√(1+x2) ("Standard interpolating function").

    Thus, in the deep-MOND regime (a << a0):

FN = m a2/a0

    Applying this to an object of mass m in circular orbit around a point mass M (a crude approximation for a star in the outer regions of a galaxy), we find:

v = (GMa0)1/4 .......(2)

    that is, the star's rotation velocity is independent of its distance r from the centre of the galaxy – the rotation curve is flat, as required. By fitting his law to rotation curve data, Milgrom found a0 ≈ 1.2 x 10−10 m s−2 to be optimal. This simple law is sufficient to make predictions for a broad range of galactic phenomena.
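A minimal numerical sketch of Milgrom's law as summarised above (assumed example values: a point mass of 10^11 solar masses standing in for a galaxy's baryonic content, and the 'simple' interpolating function): the Newtonian circular velocity keeps falling with radius, while the MOND velocity levels off near the constant (G·M·a0)^(1/4).

```python
import math

G = 6.674e-11                 # Newtonian gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10                  # Milgrom's acceleration constant, m s^-2
M = 1.0e11 * 1.989e30         # assumed baryonic mass: 10^11 solar masses in kg

v_flat = (G * M * a0) ** 0.25
print(round(v_flat / 1e3, 1))             # ~200 km/s: the predicted flat rotation velocity

def mond_speed(r):
    """Circular speed solving mu(a/a0)*a = GM/r^2 with the simple mu(x) = x/(1+x)."""
    gN = G * M / r**2                      # Newtonian acceleration
    a = 0.5 * (gN + math.sqrt(gN**2 + 4.0 * gN * a0))   # exact root for the simple mu
    return math.sqrt(a * r)

for r_kpc in (2, 5, 10, 20, 40):
    r = r_kpc * 3.086e19                   # kpc to metres
    vN = math.sqrt(G * M / r) / 1e3        # Newtonian prediction, km/s
    print(r_kpc, round(vN), round(mond_speed(r) / 1e3))
# Newtonian speeds fall off with radius; the MOND speeds stay near v_flat
```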
    Milgrom's law can be interpreted in two different ways. One possibility is to treat it as a modification to the classical law of inertia (Newton's second law), so that the force on an object is not proportional to the particle's acceleration a but rather to μ(a/a0)a. In this case, the modified dynamics would apply not only to gravitational phenomena, but also those generated by other forces, for example electromagnetism.[10] Alternatively, Milgrom's law can be viewed as leaving Newton's Second Law intact and instead modifying the inverse-square law of gravity, so that the true gravitational force on an object of mass m due to another of mass M is roughly of the form GMm/(μ(a/a0)r2). In this interpretation, Milgrom's modification would apply exclusively to gravitational phenomena.
    [End of excerpt]




    For LCDM:
    acceleration a: a = G{MBM+mDM}/R2

    For MOND:
    acceleration a: a+amil = a{a/ao} = GMBM/R2 = v4/ao.R2 for v4 = GMBMao
    amil = a{a/ao-1} = a{a-ao}/ao = GMBM/R2 - a

    For Newtonian acceleration a: G{MBM+mDM}/R2 = a = GMBM/R2 - amil

amil = -GmDM/R2 = (a/ao)(a-ao), relating the Dark Matter to the Milgrom constant in the interpolation for amil, with the Milgrom deceleration applied to the Dark Matter and incorporating the radial independence of the rotation velocities in galactic structures as an additional acceleration term in the Newtonian gravitation, as a function of the total mass of the galaxy and without Dark Matter in MOND.

Both LCDM and MOND consider the Gravitational 'Constant' to be constant for all accelerations and vary either the mass content (LCDM) or the acceleration (MOND) in the Newtonian Gravitation formulation.
    The standard gravitational parameter GM in a varying mass term G(M+m) = M(G+ΔG) reduces to Gm=ΔGM for a varying Gravitational parameter G in (G+ΔG) = f(G).

The Dark Matter term GmDM can be written as GmDM/R2 = -amil = a - a2/ao = ΔGM/R2 to identify the Milgrom acceleration constant as an intrinsic and universal deceleration related to the Dark Energy and the negative pressure term of the cosmological constant invoked to accommodate the apparent acceleration of the universal expansion (qdS = -0.5585).

ΔG = Go-G(n) in amil = -2cHo/[n+1]3 = -{Go-G(n)}M/R2 for some function G(n) descriptive of the change in f(G).

The Milgrom constant so is not constant, but emerges as the initial boundary condition in the Instanton aka the Quantum Big Bang and is identified as the parametric deceleration parameter in Friedmann's solutions to Einstein's Field Equations in amil.ao = a(a-ao) and ao(amil + a) = a2 or ao = a2/(amil+a).

A(n) = -2cHo/[n+1]3 = -2c2/{RH[n+1]3} and calculates as -1.112663583x10-9 (m/s2)* at the Instanton and as -1.16189184x10-10 (m/s2)* for the present time coordinate.
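A small sketch evaluating A(n) = -2cHo/[n+1]3 with ordinary SI stand-ins for the starred constants (c = 3x10^8 m/s, and 2cHo fixed to the quoted Instanton value, i.e. Ho ≈ 1.85x10^-18 s^-1); small differences from the quoted present-epoch value reflect the slightly different starred constants and time coordinate used in the text.

```python
c = 3.0e8                       # m/s (stand-in for the starred c*)
A0 = -1.112663583e-9            # quoted Instanton value of A(n) at n = 0, m/s^2
Ho = -A0 / (2.0 * c)            # nodal Hubble parameter implied by that value, ~1.85e-18 1/s

def A(n):
    """Milgrom-type deceleration A(n) = -2cHo/(n+1)^3."""
    return -2.0 * c * Ho / (n + 1.0) ** 3

print(A(0.0))                   # -1.1127e-9 m/s^2 at the Instanton (n = 0)
print(A(1.13242))               # ~ -1.15e-10 m/s^2 near the present cycle coordinate
```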

    The Gravitational Constant G(n)=GoXn in the standard gravitational parameter represents a finestructure in conjunction with a subscale quantum mass evolution for a proto nucleon mass
    mc = alpha9.mPlanck from the gravitational interaction finestructure constant ag = 2πGomc2/hc = 3.438304..x10-39 = alpha18 to unify electromagnetic and gravitational quantum interactions.
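A numerical sketch of the quoted unification relations, assuming Go = 1.111x10^-10 and taking the Planck mass in the form mPlanck = √(hc/2πGo), so that ag = 2πGo·mc2/hc is simply (mc/mPlanck)2; standard (unstarred) h and alpha are used here, so the result lands close to, rather than exactly on, the quoted 3.438x10^-39.

```python
import math

h = 6.62607e-34          # Planck's constant (unstarred stand-in), J s
c = 3.0e8                # m/s
Go = 1.111e-10           # string/Planck-Stoney value of G used in the text
alpha = 7.2973525693e-3  # fine-structure constant
mc = 9.9247245e-28       # proto nucleon mass quoted later in the text, kg*

m_planck = math.sqrt(h * c / (2.0 * math.pi * Go))   # Planck mass defined via Go
ag = 2.0 * math.pi * Go * mc**2 / (h * c)            # gravitational interaction finestructure

print(ag, alpha**18)                 # both ~3.4e-39
print(mc, alpha**9 * m_planck)       # mc ~ alpha^9 * mPlanck, ~9.9e-28 kg
```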

    The proto nucleon mass mc(n) so varies as complementary finestructure to the finestructure for G in mcYn for a truly constant Go as defined in the interaction unification.
G(n)M(n) = GoXn.MoYn = GoMo(XY)n = GoMo (with Y = 1/X, so XY = 1) in the macro evolution of the baryonic mass seedling Mo, and Gomc in the micro evolution of the nucleonic seed, remain constant to describe a particular finestructure for the timeframe in the cosmogenesis when the nonluminous Dark Matter remains separate from the luminous Baryon mass.

The DM-BM intersection coordinate is calculated for a cycletime n = Hot = √2 = 1.4142... or at a universal true electromagnetic age of 23.872 billion years.
    At that time, the {BM-DM-DE} mass density distribution will be {5.536%; 22.005%; 72.459%}, with the G(n)M(n) assuming a constant value in the Hubble cycle.
The Dark Energy pressure will be PBM∩DM = -3.9300x10-11 (N/m2)* with a corresponding 'quasi cosmological constant' of ΛBM∩DM = -6.0969x10-37 (s-2)*.

    Within a local inertial frame of measurement; the gravitational constant so becomes a function of the micro evolution of the proto nucleon mass mc from the string epoch preceding the Instanton.
    A localized measurement of G so engages the value of the mass of a neutron as evolved mc in a coupling to the evolution of the macro mass seedling Mo and so the baryonic omega
    Ωo=Mo/MH = 0.02803 in the critical density ρcritical = 3Ho2/8πGo = 3MH/4πRH3 = 3c2/8πGoRH2 for the zero curvature and a Minkowski flat cosmology.
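A sketch of the zero-curvature bookkeeping in the last paragraph, using Go = 1.111x10^-10 and the nodal Ho implied by the Instanton value of A(n) above (these inputs are assumptions, not quoted figures): it illustrates that the closure relation 3Ho2/8πGo = 3MH/4πRH3 is the Schwarzschild-like condition RH = 2GoMH/c2, which ties in with the 'Black Hole cosmology' remark that follows.

```python
import math

c = 3.0e8
Go = 1.111e-10                     # string value of G used in the text
Ho = 1.112663583e-9 / (2.0 * c)    # nodal Hubble parameter, ~1.85e-18 1/s (assumption)
RH = c / Ho                        # Hubble radius, ~1.6e26 m

rho_crit = 3.0 * Ho**2 / (8.0 * math.pi * Go)          # critical density, kg/m^3
MH = rho_crit * (4.0 / 3.0) * math.pi * RH**3          # closure (Hubble) mass
Mo = 0.02803 * MH                                       # baryon mass seedling from Omega_o

print(rho_crit)                     # ~3.7e-27 kg/m^3
print(MH, c**2 * RH / (2.0 * Go))   # both ~6.6e52 kg: RH = 2*Go*MH/c^2 (Schwarzschild form)
print(Mo)                           # ~1.8e51 kg
```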

    The finestructure for G so engages both the micro mass mc and the macro mass Mo, the latter being described in the overall Hypermass evolution of the universe as a Black Hole cosmology in a 5/11D AdS 'closed' spacetime encompassing the dS spacetime evolution of the 4/10D 'open' universe.
    Details are described in a later section of this discourse.

The Milgrom 'constant' so relates an intrinsic Dark Energy cosmology to the macrocosmic hypermass evolution of Black Holes at the cores of galaxies and becomes universally applicable in that context.
No modification of Newtonian gravitation is necessary, if the value of a locally derived and measured G is allowed to increase to its string based (Planck-Stoney) value of Go = 1/k = 4πεo = 1.111..x10-10 string unification units [C* = m3/s2], relating spatial volume to angular acceleration in the gravitational parameter GM.

The necessity for Dark Matter to harmonise the hypermass evolution remains however, with the Dark Energy itself assuming the form of the Milgrom deceleration.


    amil = -2cHo/[n+1]3 = -{Go-G(n)}M/R2 = -Go{1-Xn}M/R2 for the gravitational parameter GM coupled to the size of a galactic structure harbouring a central Black Hole-White Hole/Quasar power source.

    GoM/R2 = 2cHo/{(1-Xn)(n+1)3}

For a present n = 1.13242: (1-Xn)(n+1)3 = 4.073722736... for M/R2 = constant = 2.48906

For the Milky Way barred spiral galaxy and a total BM+DM mass of 1.7x1042 kg, the mass distribution would infer a diameter of 1.6529x1021 m or 174,594 light years, inclusive of the Dark Matter halo extension.

    For the Andromeda barred spiral galaxy and a total BM+DM mass of 3x1042 kg, the galaxy's diameter would increase to 2.1957x1021 m or 231,930 light years for a total matter distribution.
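A small check of the two diameter figures, assuming only the quoted M/R2 = 2.48906 (in kg per m2, in the starred units above) and a light year of 9.467x10^15 m:

```python
import math

M_over_R2 = 2.48906            # quoted constant mass/R^2 ratio, kg/m^2 (starred units)
LY = 9.467e15                  # metres per (starred) light year

def diameter(total_mass_kg):
    """Diameter 2R implied by M/R^2 = const for a galaxy of the given total BM+DM mass."""
    R = math.sqrt(total_mass_kg / M_over_R2)
    return 2.0 * R

for name, M in (("Milky Way", 1.7e42), ("Andromeda", 3.0e42)):
    d = diameter(M)
    print(name, f"{d:.4e} m", round(d / LY))
# Milky Way  -> ~1.653e21 m, ~174,600 light years
# Andromeda  -> ~2.196e21 m, ~231,900 light years
```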





    Dear Brian!

    I recently saw your ‘A Universe at a Time’ blog on the web and followed your professional standard critique of the ‘Electric Universe’ idea. Afterwards I saw you also discussing the MOND versus the LCDM standard model on your homepage.
    So you might find an actual synthesis between the latter two cosmology models interesting.
    I have therefore pasted a relevant short part of a detailed discussion and refinement of the standard cosmology below for your discernment.
    Should you find an inclination to look at the entire cosmology; the work is incomplete but the tabulation would give you a quick overview of how the standard cosmology can be extended to solve the dark energy, dark matter, hierarchy, monopole, supersymmetry and membrane horizon problems in a synergy between deSitter and Anti deSitter spacetimes in a multiverse phaseshifted in parallel timespace parameters but colocal in spacetime.

    All the best

    Tony B.
     
    Last edited: Feb 3, 2018
  3. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756
    What If We Haven’t Found Aliens Because Humans Came First?

    Written by
    Becky Ferreira

    Contributor
    August 15, 2016 // 08:00 AM EST



    Space is “vastly, hugely, mind-bogglingly big,” as the author Douglas Adams put it, and those gargantuan dimensions present a colossal roadblock in the search for alien life beyond Earth. Even if our own Milky Way galaxy is currently teeming with extraterrestrial beings, their worlds could be scattered thousands of light years distant from each other, passing blindly like cosmic ships in the night.
    But what if the key obstacle to detecting alien pen pals is not spatial, but rather temporal? That’s one of the questions posed by a forthcoming paper in the Journal of Cosmology and Astroparticle Physics.

    Led by astrophysicist and professor Avi Loeb, who chairs the department of astronomy at Harvard University, the paper charts out the probability of life’s emergence from the birth of the first stars 30 million years after the Big Bang to the death of the last stars trillions of years into the future. Loeb’s team focused on “life as we know it,” meaning terrestrial organisms on a rocky planet with liquid water, within the habitable zone of its star.
    The results suggested that low-mass red dwarf stars are the most likely candidates for hosting habitable planets, thanks to their extreme longevity. These slow burners are only about ten percent as massive as yellow dwarfs like the Sun, but they outlive Sunlike stars thousands of times over.
    Red dwarfs may also have some major setbacks, including a propensity for wildness in their youth. Flares emitted by adolescent red dwarfs may singe and sterilize the atmospheres of their surrounding planets, rendering life impossible. Though a recent study demonstrated that some planets within the habitable zones of red dwarfs do have compact atmospheres, similar to Earth, Mars, or Venus, the jury is still out on whether life can exist on these worlds.

    But supposing red dwarfs could host life, it stands to reason that the long, stable, adult lifespans of red dwarf systems would amplify opportunity for fledgling ecosystems to bloom. “Our conclusion is that if low-mass stars can support life, then life is much more likely in the future,” Loeb told me. “Since [low-mass stars] live so much longer, they are providing heat to keep a planet warm for longer.”
    Indeed, Loeb’s team found that life would be about one thousand times more likely to arise in the distant future by calculating the probability of habitable, Earthlike planets forming over trillions of years.

    1470953622174108.
    Concept art of TRAPPIST-1, a red dwarf system with planets in the habitable zone. Image: ESO/M. Kornmesser/N. Risinger


    The scenario casts Earthlings as early bloomers, prematurely born long before the universe’s most fertile life-bearing years. Perhaps this is one possible explanation for the classic Fermi paradox: Have we struck out in our attempts to detect alien intelligence simply because we are the first example of it to show up to the cosmic party?
    “It might be morning in the cosmos, to quote Reagan,” Seth Shostak, senior astronomer and director of the Search for Extraterrestrial Intelligence (SETI) Institute, told me. “There’s going to be a lot more life to come.”

    “But there’s no reason for any of that to imply that there is a lack of habitation today,” he added.
Indeed, we may be one of several precocious civilizations strewn across the cosmos. But Loeb's team is not alone in speculating that the real heyday of life in the universe lies billions or trillions of years ahead.
    Another recent study led by Pratika Dayal, an astrophysicist based at the University of Groningen, came to a similar conclusion by delving into the role that radiation from sources like supernovae and gamma ray bursts have played in habitability over the course of the last 13 billion years.
    Her team’s models show that decreasing radiation may have resulted in a universe that is 20 times more liveable today than it was four billion years ago, when life first appeared on Earth. The study also projected that our cosmic surroundings will continue to evolve into a more nurturing environment for life in the future.
    “People have been taking such different approaches,” Dayal told me. “The paper that Avi [Loeb] wrote takes a different approach to what we’re doing, but then all of us are, more or less, coming up with the same answers. It is encouraging because it shows we’re on the right track.”
    Of course, all of this research will remain inherently speculative until we have built up more observational evidence of the universe’s habitability over time, and more robust simulations to interpret that evidence.

    “Even the best simulations aren’t good enough to study habitability of the universe on an extremely large scale,” Dayal said. “We’re just trying to think of clever ways to get around the problem.”
    “We should be agnostic until we go and search,” Loeb said.
    Still, it’s intriguing to ruminate on the implications of humans being the first, or among the first, intelligent life forms to emerge in the cosmos. Let’s say we are, for kicks. Does that change how we view ourselves and our place in the universe? Are we elder brothers and sisters to societies that will emerge around stars that have yet to be born? In addition to trying to bridge communication gaps in deep space, should we also reach out across deep time, to the wealth of extraterrestrial beings that are projected to develop billions of years after our Sun dies?
    We look for alien life in the stars because we want to learn from other intelligent civilizations, but we may be most valuable as teachers. Even if these speculative future lifeforms are advanced relative to humans, they might welcome information about our perspective on the universe. Imagine what a boon it would be for Earthlings to receive this kind of message-in-a-bottle from a bygone alien society, regardless of whether it was technologically superior to us.
    Of course, it’s possible that we already are receiving posthumous letters from aliens, but have no way to identify them. “This is one of the key outstanding questions: When you say you look for life in the universe, what do you mean?” Dayal said.
    “From the astrophysical perspective, we are basically talking about Earthlike life. But what if it’s different? Would you even be able to recognize it? How do we know there’s not some form of life trying to communicate with us and we don’t know?”

    “It’s quite possible that there are other forms of life, and that nature has more imagination than we do,” Loeb told me. “We just have one example.”
    This is an important limitation to keep in mind for ourselves as well. The only thing more excruciating than the thought of alien attempts at contact falling on deaf ears on Earth is the opposite scenario, in which human messages reach civilizations that can’t interpret them. Transcending this problem will be essential to securing even one link in a chain of cognitively sophisticated beings across time and space.

    "It’s quite possible that such forms of life spread throughout the galaxy, and are mostly in places we don’t suspect."


    “I bet you’d get a lot of people weighing in on how to communicate with critters that might spring up five, ten, or 100 billion years from now,” Shostak said. “It’s a tough problem.”
    There are some basic roadmaps for solving it, though each is contingent on major technological breakthroughs. We could, for instance, develop interstellar spaceflight in order to disperse ourselves more widely across the stars. It’s a lot easier said than done, but theoretically, it would up the odds of humans sticking around long enough to interact directly with the more populous universe of the future.
    “If you are an intelligent form of life like we are—a technological civilization—then everything changes because you are not restricted to live next to a star,” Loeb said. “In principle, such a civilization, if it’s very advanced, could move away from the star that hosted it in the beginning.”
    “It’s quite possible that such forms of life spread throughout the galaxy, and are mostly in places we don’t suspect,” he continued. “There might be a lot of spacecraft moving through the galaxy that are not particularly visible to us because they are small. If you imagine a civilization that is hundreds of millions of years old in terms of its technology, the sky’s the limit in terms of how widely it would be able to spread.”

    Humans may one day develop the capabilities necessary to become this type of star-hopping civilization. Projects like Breakthrough Starshot, which aims to send a fleet of tiny spaceships to the nearest star system, Alpha Centauri, are hoping to pave the way for this achievement.

    “Our civilization will have to move somewhere [to survive long-term],” Loeb said. “The nearest example is Proxima Centauri, so we might consider traveling there.”
    It’s difficult to predict if and when these efforts will come to fruition, and we may ultimately have to submit to our primitive planetary life and its star-exploding expiration date. If so, our plan B for contacting future life could be launching our own epitaphs in the form of robotic spacecraft or radio messages to civilizations that don’t exist yet. It would be a shot in the dark, and we’d never know whether we’d succeeded. But it would be a small step towards the connection we so clearly crave from other living denizens of the cosmos.
    For now, it’s a fun thought experiment. However, if evidence continues to accumulate suggesting that we live in an era of biological sparseness relative to an abundance of future civilizations, it could reorient our attitude to our place in the universe, and our approach to the other creatures—past, present, and future—with whom we share it.

    http://motherboard.vice.com/read/wh...-because-humans-came-first?trk_source=popular

     
  4. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756
    First stars formed even later than previously thought

    Date:
    September 2, 2016
    Source:
    European Space Agency (ESA)
    Summary:
    ESA's Planck satellite has revealed that the first stars in the Universe started forming later than previous observations of the Cosmic Microwave Background indicated. This new analysis also shows that these stars were the only sources needed to account for reionising atoms in the cosmos, having completed half of this process when the Universe had reached an age of 700 million years.
    Cosmic reionisation.
    Credit: ESA – C. Carreau




    With the multitude of stars and galaxies that populate the present Universe, it's hard to imagine how different our 13.8 billion year cosmos was when it was only a few seconds old. At that early phase, it was a hot, dense primordial soup of particles, mostly electrons, protons, neutrinos, and photons -- the particles of light.
    In such a dense environment the Universe appeared like an 'opaque' fog, as light particles could not travel any significant distance before colliding with electrons.
    As the cosmos expanded, the Universe grew cooler and more rarefied and, after about 380,000 years, finally became 'transparent'. By then, particle collisions were extremely sporadic and photons could travel freely across the cosmos.

    Today, telescopes like Planck can observe this fossil light across the entire sky as the Cosmic Microwave Background, or CMB. Its distribution on the sky reveals tiny fluctuations that contain a wealth of information about the history, composition and geometry of the Universe.
    The release of the CMB happened at the time when electrons and protons joined to form hydrogen atoms. This is the first moment in the history of the cosmos when matter was in an electrically neutral state.

    After that, a few hundred million years passed before these atoms could assemble and eventually give rise to the Universe's first generation of stars.
    As these first stars came to life, they filled their surroundings with light, which subsequently split neutral atoms apart, turning them back into their constituent particles: electrons and protons. Scientists refer to this as the 'epoch of reionisation'. It did not take long for most material in the Universe to become completely ionised, and -- except in a very few, isolated places -- it has been like that ever since.

    Observations of very distant galaxies hosting supermassive black holes indicate that the Universe had been completely reionised by the time it was about 900 million years old. The starting point of this process, however, is much harder to determine and has been a hotly debated topic in recent years.
    "The CMB can tell us when the epoch of reionisation started and, in turn, when the first stars formed in the Universe," explains Jan Tauber, Planck project scientist at ESA.
    To make this measurement, scientists exploit the fact that a fraction of the CMB is polarised: part of the light vibrates in a preferred direction. This results from CMB photons bouncing off electrons -- something that happened very frequently in the primordial soup, before the CMB was released, and then again later, after reionisation, when light from the first stars brought free electrons back onto the cosmic stage.

    "It is in the tiny fluctuations of the CMB polarisation that we can see the influence of the reionisation process and deduce when it began," adds Tauber.
    A first estimate of the epoch of reionisation came in 2003 from NASA's Wilkinson Microwave Anisotropy Probe (WMAP), suggesting that this process might have started early in cosmic history, when the Universe was only a couple of hundred million years old. This result was problematic, because there is no evidence that any stars had formed by then, which would mean postulating the existence of other, exotic sources that could have caused the reionisation at that time.
    This first estimate was soon to be corrected, as subsequent data from WMAP pushed the starting time to later epochs, indicating that the Universe had not been significantly reionised until at least some 450 million years into its history.

    This eased, but did not completely solve the puzzle: although the earliest of the first stars have been observed to be present already when the Universe was 300 to 400 million years old, it remained unclear whether these stars were the main culprits for reionising fully the cosmos or whether additional, more exotic sources must have played a role too.
    In 2015, the Planck Collaboration provided new data to tackle the problem, moving the reionisation epoch even later in cosmic history and revealing that this process was about half-way through when the Universe was around 550 million years old. The result was based on Planck's first all-sky maps of the CMB polarisation, obtained with its Low-Frequency Instrument (LFI).
    Now, a new analysis of data from Planck's other detector, the High-Frequency Instrument (HFI), which is more sensitive to this phenomenon than any other so far, shows that reionisation started even later -- much later than any previous data have suggested.

    "The highly sensitive measurements from HFI have clearly demonstrated that reionisation was a very quick process, starting fairly late in cosmic history and having half-reionised the Universe by the time it was about 700 million years old," says Jean-Loup Puget from Institut d'Astrophysique Spatiale in Orsay, France, principal investigator of Planck's HFI.
    "These results are now helping us to model the beginning of the reionisation phase."
    "We have also confirmed that no other agents are needed, besides the first stars, to reionise the Universe," adds Matthieu Tristram, a Planck Collaboration scientist at Laboratoire de l'Accélérateur Linéaire in Orsay, France.

    The new study locates the formation of the first stars much later than previously thought on the cosmic timeline, suggesting that the first generation of galaxies are well within the observational reach of future astronomical facilities, and possibly even some current ones.
    In fact, it is likely that some of the very first galaxies have already been detected with long exposures, such as the Hubble Ultra Deep Field observed with the NASA/ESA Hubble Space Telescope, and it will be easier than expected to catch many more with future observatories such as the NASA/ESA/CSA James Webb Space Telescope.


    Story Source:
    The above post is reprinted from materials provided by European Space Agency (ESA). Note: Content may be edited for style and length.

    Journal References:
1. Planck Collaboration. Planck intermediate results. XLVII. Planck constraints on reionization history. Astronomy & Astrophysics, 2016; DOI: 10.1051/0004-6361/201628897
    2. Planck Collaboration. Planck intermediate results. XLVI. Reduction of large-scale systematic effects in HFI polarization maps and estimation of the reionization optical depth. Astronomy & Astrophysics, 2016; DOI: 10.1051/0004-6361/201628890


    https://www.sciencedaily.com/releases/2016/09/160902125859.htm

    [3:16:22 AM-September 3rd-2016] Sirius 17: some new data they have

    [12:38:15] ShilohaPlace:
    " After that, a few hundred million years passed before these atoms could assemble and eventually give rise to the Universe's first generation of stars."

ShilohaPlace: Our number is 236 million years for a redshift of z=7.4; we called this the Quasar Wall

    "Observations of very distant galaxies hosting supermassive black holes indicate that the Universe had been completely reionised by the time it was about 900 million years old. The starting point of this process, however, is much harder to determine and has been a hotly debated topic in recent years."

This we dated to 913 million years as the equilibrium between radiation and the dark matter

The starting point has been set by us at twice the 236 million year marker, at 472 million years, as supercluster seeds; and as this is a black hole evolution it links to the baryon mass seed defined in the 0.02803 cycletime coordinate, which gives 16.9 billion x 0.02803 ≈ 472 million years. But you can also add the half cycle as 472+236 = 708 million years, and this is what your article says
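A one-line check of the arithmetic in this comment (a sketch; the 16.9-billion-year 'Hubble Age' and the 0.02803 seed coordinate are the figures quoted earlier in the thread):

```python
hubble_age_Myr = 16_900        # nodal 'Hubble Age' in millions of years
seed = 0.02803                 # baryon mass seed cycletime coordinate

print(round(seed * hubble_age_Myr))    # ~474 Myr, the ~472 Myr supercluster-seed marker
print(2 * 236)                          # 472 Myr: twice the 236 Myr Quasar Wall marker
print(472 + 236)                        # 708 Myr: adding the half cycle, close to Planck's ~700 Myr
```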
     
    Last edited: Sep 3, 2016
  5. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756
    Matter-Antimatter Asymmetry: CERN Experiments On Particles Containing Charm Quark Fail To Detect CP Violation

    By Avaneesh Pandey @avaneeshp88 On 09/29/16 AT 7:12 AM

    A view of the Large Hadron Collider (LHC) at CERN. Photo: CERN

    Why is there something rather than nothing? This is a question that has, for the longest time, perplexed physicists.
    If our current understanding of the universe is correct, it should not even exist. The very fact that planets, stars and galaxies exist undercuts one of the most fundamental premises of particle physics — that the Big Bang, which created our universe 13.8 billion years ago, created equal amounts of matter and antimatter.
    If this really happened, why, given that matter and antimatter particles annihilate each other when they collide, does anything exist at all? Why do you and I exist when the laws of physics, as we know them, seem to dictate that the cosmos should be nothing but a wasteland strewn with leftover energy?
    Obviously, as attested by the fact that we exist, there is a fundamental difference between matter and antimatter. Either significantly more matter was created by the Big Bang, or there is a fundamental, as-of-yet-undiscovered asymmetry between matter particles and their antimatter counterparts — one that would have given the former an edge over the latter in the race for survival.

    The quest to discover this asymmetry is a goal that has witnessed the involvement of several particle physicists from across the world, including researchers at the European Organization for Nuclear Research (CERN) — the institution that houses the world’s most powerful particle collider.
    On Wednesday, researchers associated with the LHCb experiment at the Large Hadron Collider announced that they had made the most precise measurement of Charge-Parity (CP) violation among particles containing a charm quark.
    Quarks, the fundamental particles that make up protons and neutrons, come in six different “flavors” — up, down, strange, top, bottom and charm. Each quark has an antimatter equivalent known as antiquark. Both protons and neutrons — contained within the nucleus of an atom — are made up of three quarks bound together.

    The Standard Model of particle physics, which describes how three of the four known fundamental forces work, has a central tenet — charge-parity symmetry, which posits that the laws of physics remain unchanged even if a particle is replaced with its antiparticle, which has the opposite charge, and if its spatial coordinates are inverted.
    If a significant violation of CP symmetry is detected, it would not only hint at the existence of physics beyond the Standard Model, it would also help us understand why the universe is completely devoid of antimatter.

    So far, however, the extent of CP violation detected among elementary particles is not significant enough to explain the observed matter-antimatter asymmetry — something that was further confirmed by the precise measurements carried out by LHCb researchers.
“The LHCb collaboration made a precise comparison between the decay lifetime of a particle called a D0 meson (formed by a charm quark and an up antiquark) and its anti-matter counterpart, the anti-D0 (formed by a charm antiquark and an up quark), when decaying either to a pair of pions or a pair of kaons. Any difference in these lifetimes would provide strong evidence that an additional source of CP violation is at work,” CERN said in the statement. “The latest results indicate that the lifetimes of the D0 and anti-D0 particles, measured using their decays to pions or kaons, are still consistent, thereby demonstrating that any CP violation effect that is present must indeed be at a tiny level.”


There is no CP violation in any quark system whose constituents are up-quarks and anti-up quarks, as is the case for the charm quarks c = uu[bar]u and c[bar] = u[bar]uu[bar], as the CP violation requires an Inner-Ring (down-antidown) or Outer-Ring (strange-antistrange) interaction.


    The Top-Super Diquark Resonance of CERN - December 15th, 2015

As can be calculated from the table entries below, a Top-Super Diquark Resonance is predicted as a (ds)bar(ss) = (ds)barS or a (ds)(ss)bar = (ds)Sbar diquark complex with a combined K-Mean mass of (182.758+596.907) GeV = 779.67 GeV.

    In the diquark triplet {dd; ds; ss}={Dainty; Top; Super} a Super-Superbar resonance at 1.194 TeV can also be inferred with the Super-Dainty resonance at 652.9 GeV and the Top-Dainty resonance at 238.7 GeV 'suppressed' by the Higgs Boson summation as indicated below. Supersymmetric partners become unnecessary in the Standard Model, extended into the diquark hierarchies.

    Ten DIQUARK quark-mass-levels crystallise, including a VPE-level for the K-IR transition and a VPE-level for the IR-OR transition:

    VPE-Level [K-IR] is (26.4922-29.9621 MeV*) for K-Mean: (14.11358 MeV*); (2.8181-3.1872 MeV*) for IROR;
    VPE-Level [IR-OR] is (86.5263-97.8594 MeV*) for K-Mean: (46.09643 MeV*); (9.2042-10.410 MeV*) for IROR;
    UP/DOWN-Level is (282.5263-319.619 MeV*) for K-Mean: (150.5558 MeV*); (30.062-33.999 MeV*) for IROR;
    STRANGE-Level is (923.013-1,043.91 MeV*) for K-Mean: (491.7308 MeV*); (98.185-111.05 MeV*) for IROR;
    CHARM-Level is (3,014.66-3,409.51 MeV*) for K-Mean: (1,606.043 MeV*); (320.68-362.69 MeV*) for IROR;
    BEAUTY-Level is (9,846.18-11,135.8 MeV*) for K-Mean: (5,245.495 MeV*); (1,047.4-1,184.6 MeV*) for IROR;
    MAGIC-Level is (32,158.6-36,370.7 MeV*) for K-Mean: (17,132.33 MeV*); (3,420.9-3,868.9 MeV*) for IROR;
    DAINTY-Level is (105,033-118,791 MeV*) for K-Mean: (55,956.0 MeV*); (11,173-12,636 MeV*) for IROR;
    TRUTH-Level is (343,050-387,982 MeV*) for K-Mean: (182,758.0 MeV*); (36,492-41,271 MeV*) for IROR;
    SUPER-Level is (1,120,437-1,267,190 MeV*) for K-Mean: (596,906.8 MeV*); (119,186-134,797 MeV*) for IROR.
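A short check of the diquark resonance values quoted above, using the TRUTH (the 'Top' member of the {dd; ds; ss} triplet, as in the Top-Super sum), SUPER and DAINTY K-Means from the table (values in MeV* as listed):

```python
# K-Mean masses from the table, MeV*
K_MEAN = {
    "DAINTY": 55_956.0,
    "TRUTH":  182_758.0,   # the 'Top' member of the {dd; ds; ss} diquark triplet
    "SUPER":  596_906.8,
}

def resonance_GeV(a, b):
    """Diquark resonance mass as the sum of two K-Means, in GeV."""
    return (K_MEAN[a] + K_MEAN[b]) / 1000.0

print(round(resonance_GeV("TRUTH", "SUPER"), 2))    # ~779.66 GeV  (Top-Super)
print(round(resonance_GeV("SUPER", "SUPER"), 2))    # ~1193.81 GeV (Super-Superbar, ~1.194 TeV)
print(round(resonance_GeV("SUPER", "DAINTY"), 2))   # ~652.86 GeV  (Super-Dainty)
print(round(resonance_GeV("TRUTH", "DAINTY"), 2))   # ~238.71 GeV  (Top-Dainty)
```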

    The K-Means define individual materialising families of elementary particles;

    the (UP/DOWN-Mean) sets the (PION-FAMILY: po, p+, p-);
    the (STRANGE-Mean) specifies the (KAON-FAMILY: Ko, K+, K-);
    the (CHARM-Mean) defines the (J/PSI=J/Y-Charmonium-FAMILY);
    the (BEAUTY-Mean) sets the (UPSILON=U-Bottonium-FAMILY);
    the (MAGIC-Mean) specifies the (EPSILON=E-FAMILY);
    the (DAINTY-Mean) bases the (OMICRON-O-FAMILY);
    the (TRUTH-Mean) sets the (KOPPA=J-Topomium-FAMILY) and
    the (SUPER-Mean) defines the final quark state in the (HIGGS/CHI=H/C-FAMILY).

    The VPE-Means are indicators for average effective quarkmasses found in particular interactions.
    Kernel-K-mixing of the wavefunctions gives K(+)=60.210 MeV* and K(-)=31.983 MeV* and the IROR-Ring-Mixing gives (L(+)=6.405 MeV* and L(-)=3.402 MeV*) for a (L-K-Mean of 1.50133 MeV*) and a (L-IROR-Mean of 4.90349 MeV*); the Electropole ([e-]=0.52049 MeV*) as the effective electronmass and as determined from the electronic radius and the magnetocharge in the UFoQR.

The restmasses for the elementary particles can now be constructed, using the basic nucleonic restmass mc = 9.9247245x10-28 kg* = √(Omega).mPlanck, and setting mc as the basic maximum UP/DOWN-K-mass = mass(KERNEL CORE) = 3x mass(KKK) = 3x319.62 MeV* = 958.857 MeV*.
Subtracting the Ring VPE 3xL(+) = 19.215 MeV*, one gets the basic nucleonic K-state for the atomic nucleus (made from protons and neutrons) in: {m(n0;p+) = 939.642 MeV*}.
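A minimal check of the kernel arithmetic just stated, using the upper UP/DOWN K-level bound from the table and the Ring-mixing value L(+) quoted above:

```python
updown_max = 319.619      # upper UP/DOWN K-level bound, MeV*
L_plus = 6.405            # IROR Ring-mixing energy L(+), MeV*

kernel_core = 3 * updown_max                 # KKK kernel core
print(round(kernel_core, 3))                 # ~958.857 MeV*
print(round(kernel_core - 3 * L_plus, 3))    # ~939.642 MeV*: the basic nucleonic K-state m(n0; p+)
```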


The HB discussed in the New Scientist post below is said to have been measured in the decays of W's, Z's and Tau Leptons, as well as in the bottom- and top-quark systems described in the table and the text above.

Now in the table I write about the K-IR and IR-OR transitions and such. The K means core or kernel, the IR means InnerRing and the OR means OuterRing. The Rings are all to do with Leptons and the Kernels with Quarks.

    So the Tau-decay relates to 'Rings' which are charmed and strange and bottomised and topped, say. They are higher energy manifestations of the basic nucleons of the proton and the neutrons and basic mesons and hyperons.


    Is This the Beginning of the End of the Standard Model?

    Posted on December 16, 2015 | 15 Comments
    Was yesterday the day when a crack appeared in the Standard Model that will lead to its demise? Maybe. It was a very interesting day, that’s for sure. [Here’s yesterday’s article on the results as they appeared.]
    I find the following plot useful… it shows the results on photon pairs from ATLAS and CMS superposed for comparison. [I take only the central events from CMS because the events that have a photon in the endcap don’t show much (there are excesses and deficits in the interesting region) and because it makes the plot too cluttered; suffice it to say that the endcap photons show nothing unusual.] The challenge is that ATLAS uses a linear horizontal axis while CMS uses a logarithmic one, but in the interesting region of 600-800 GeV you can more or less line them up. Notice that CMS’s bins are narrower than ATLAS’s by a factor of 2.
    atlas_cms_diphoton_2015-.31707.
    The diphoton results from ATLAS (top) and CMS (bottom) arranged so that the 600, 700 and 800 GeV locations (blue vertical lines) line up almost perfectly. (The plots do not line up away from this region!) The data are the black dots (ignore the bottom section of CMS’s plot for now.) Notice that the obvious bumps in the two data sets appear in more or less the same place. The bump in ATLAS’s data is both higher (more statistically significant) and significantly wider.


    Both plots definitely show a bump. The two experiments have rather similar amounts of data, so we might have hoped for something more similar in the bumps, but the number of events in each bump is small and statistical flukes can play all sorts of tricks.
    Of course your eye can play tricks too. A bump of a low significance with a small number of events looks much more impressive on a logarithmic plot than a bump of equal significance with a larger number of events — so beware that bias, which makes the curves to the left of the bump appear smoother and more featureless than they actually are. [For instance, in the lower register of CMS’s plot, notice the bump around 350.]

    We’re in that interesting moment when all we can say is that there might be something real and new in this data, and we have to take it very seriously. We also have to take the statistical analyses of these bumps seriously, and they’re not as promising as these bumps look by eye. If I hadn’t seen the statistical significances that ATLAS and CMS quoted, I’d have been more optimistic.

    Also disappointing is that ATLAS’s new search is not very different from their Run 1 search of the same type, and only uses 3.2 inverse femtobarns of data, less than the 3.5 that they can use in a few other cases… and CMS uses 2.6 inverse femtobarns. So this makes ATLAS less sensitive and CMS more sensitive than I was originally estimating… and makes it even less clear why ATLAS would be more sensitive in Run 2 to this signal than they were in Run 1, given the small amount of Run 2 data. [One can check that if the events really have 750 GeV of energy and come from gluon collisions, the sensitivity of the Run 1 and Run 2 searches are comparable, so one should consider combining them, which would reduce the significance of the ATLAS excess. Not to combine them is to “cherry pick”.]

    By the way, we heard that the excess events do not look very different from the events seen on either side of the bump; they don’t, for instance, have much higher total energy. That means that a higher-energy process, one that produces a new particle at 750 GeV indirectly, can’t be a cause of a big jump in the 13 TeV production rate relative to 8 TeV. So one can’t hide behind this possible explanation for why a putative signal is seen brightly in Run 2 and was barely seen, if at all, in Run 1.
    Of course the number of events is small and so these oddities could just be due to statistical flukes doing funny things with a real signal. The question is whether it could just be statistical flukes doing funny things with the known background, which also has a small number of events.
    And we should also, in tempering our enthusiasm, remember this plot: the diboson excess that so many were excited about this summer. Bumps often appear, and they usually go away. R.I.P.
    atlas_dibosonxs-.31708.
    The most dramatic of the excesses in the production of two W or Z bosons from Run 1 data, as seen in ATLAS work published earlier this year. That bump excited a lot of people. But it doesn’t appear to be supported by Run 2 data. A cautionary tale.

    Nevertheless, there’s nothing about this diphoton excess which makes it obvious that one should be pessimistic about it. It’s inconclusive: depending on the statistical questions you ask (whether you combine ATLAS and CMS Run 2, whether you try to combine ATLAS Run 1 and Run 2, whether you worry about whether the resonance is wide or narrow), you can draw positive or agnostic conclusions. It’s hard to draw entirely negative conclusions… and that’s a reason for optimism.

    Six months or so from now — or less, if we can use this excess as a clue to find something more convincing within the existing data — we’ll likely say “R.I.P.” again. Will we bury this little excess, or the Standard Model itself?

    http://profmattstrassler.com/2015/12/16/is-this-the-beginning-of-the-end-of-the-standard-model/

    Hints of Higgs Boson at 125 GeV Are Found:
    Congratulations to All the People at LHC!
    cover_issue_23_en_us--16765-.31709.


    Refined Higgs Rumours, Higgs Boson Live Blog: Analysis of the CERN Announcement, Has CERN Found the God Particle? A Calculation, Electron Spin Precession for the Time Fractional Pauli Equation, Plane Wave Solutions of Weakened Field Equations in a Plane Symmetric Space-time-II, Plane Wave Solutions of Field Equations of Israel and Trollope's Unified Field Theory in V5, If the LHC Particle Is Real, What Is One of the Other Possibilities than the Higgs Boson? What is Reality in a Holographic World? Searching for Earth’s Twin.

    Editor: Huping HU, Ph.D., J.D.; Editor-at-Large: Philip E. Gibbs, Ph.D.

    ISSN: 2153-8301

    Dear Huping!

    The Higgs Boson resonance, found by ATLAS and CMS, is a diquark resonance.

    Excerpt:

    "Ok, now I'll print some excerpt for the more technically inclined reader regarding the Higgs Boson and its 'make-up', but highlight the important relevant bit (wrt to this discovery of a 160 GeV Higgs Boson energy, and incorporating the lower energy between 92 GeV and to the upper dainty level at 130 GeV as part of the diquark triplet of the associated topomium energy level) at the end.

    In particular, as the bottomium doublet minimum is at 5,245.495 MeV* and the topomium triplet minimum is at 55,956.0 MeV* in terms of their characteristic Kernel-Means, their doubled sum indicates a particle-decay excess at the recently publicized ~125 GeV energy level in 2x(5.246+55.956) GeV* = 122.404 GeV* (or 122.102 GeV SI).
    These are the two means from ATLAS {116-130 GeV as 123 GeV} and CMS {115-127 GeV as 121 GeV} respectively.
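    For readers who want to verify the arithmetic, here is a minimal Python sketch of the doubled K-Mean sum quoted above; the only added assumption is the GeV-per-GeV* factor implied by the quoted pair 122.404 GeV* / 122.102 GeV.

    ```python
    # Arithmetic check of the doubled K-Mean sum quoted above (values from the text).
    bottomium_min = 5.246    # bottomium doublet minimum K-Mean, GeV*
    topomium_min  = 55.956   # topomium triplet minimum K-Mean, GeV*

    doubled_sum = 2 * (bottomium_min + topomium_min)
    print(f"2 x (5.246 + 55.956) GeV* = {doubled_sum:.3f} GeV*")   # 122.404 GeV*

    # GeV-per-GeV* factor implied by the quoted pair 122.404 GeV* / 122.102 GeV SI:
    si_per_star = 122.102 / 122.404
    print(f"implied GeV/GeV* factor ~ {si_per_star:.5f}")          # ~0.99753
    ```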

    http://press.web.cern.ch/press/PressReleases/Releases2011/PR25.11E.html

    Then extending the minimum energy levels, as in the case of calculating the charged weakon gauge field agent energy in the charm and the VPE perturbations as per the table given, specifies the 125 GeV energy level in the Perturbation Integral/Summation:

    2x{55.956+5.246+1.606+0.491+0.151+0.046+0.014} GeV* = 127.02 GeV*, which becomes about 126.71 GeV SI as an UPPER LIMIT for this 'Higgs Boson' at the Dainty quark resonance level from the Thuban Dragon Omni-Science.
    Using the 3 Diquark energy levels U, D and S yields 2x{55.956+5.246+1.606} GeV* = 125.62 GeV* and 125.31 GeV SI."
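    Purely as an arithmetic check, the perturbation summations above can be reproduced from the K-Means listed in the table (taken here in GeV*); this is a sketch of the quoted sums, not a derivation of the levels themselves.

    ```python
    # Reproducing the quoted perturbation summations from the K-Means in the table (GeV*).
    k_means_gev = {
        "VPE[K-IR]":  0.014114,
        "VPE[IR-OR]": 0.046096,
        "UP/DOWN":    0.150556,
        "STRANGE":    0.491731,
        "CHARM":      1.606043,
        "BEAUTY":     5.245495,
        "DAINTY":     55.956,
    }
    upper_limit = 2 * sum(k_means_gev.values())
    print(f"2 x sum of all seven K-Means = {upper_limit:.2f} GeV*")        # ~127.02 GeV*

    three_levels = 2 * (k_means_gev["DAINTY"] + k_means_gev["BEAUTY"] + k_means_gev["CHARM"])
    print(f"2 x (DAINTY + BEAUTY + CHARM) = {three_levels:.2f} GeV*")      # ~125.62 GeV*
    ```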




    This newest data/discovery about the Higgs Boson, aka the 'God-Particle', states that there seems to be a 'resonance-blip' at an energy of about 160 GeV, and as just one of say 5 Higgs Bosons for a 'minimal supersymmetry'.
    The lowest form of the Higgs Boson is said to be about 110 GeV in the Standard Model. There is also a convergence of the HB to an energy level of about 120 GeV from some other models.
    Now the whole point about the Higgs Boson, according to Quantum Relativity, is that IT IS NOT a particular particle, but relates to ALL particles in its 'scalar nature' as a restmass inducer.

    I have discussed the Higgs Boson many times before, but would like here to show in a very simple analysis that the Higgs Boson MUST show a blip at the 160 GeV mark, due to its nature as a 'polarity' neutraliser (a scalar particle has no charge and no spin, but can be made up of two opposite electric charges and, say, two opposing chiralities of spin orientations).

    Without worrying about details, first consider the following table, which contains all the elementary particles of the standard model of particle physics. The details are found in the Planck-String transformations discussed elsewhere.

    The X-Boson's mass is ([Alpha] x mps/[ec]), modulated in the intrinsic unified Interaction-Strength (SNI/EMI = {Cuberoot of [Alpha]}/[Alpha]); and the L-Boson's mass is ([Omega] x [ec])/(mps x a<2/3>), where the cuberoot of [Alpha]², that is [Alpha]^(2/3), is given by the symbol a<2/3> = EMI/SNI.

    Ten DIQUARK quark-mass-levels crystallise, including a VPE-level for the K-IR transition and a VPE-level for the IR-OR transition:

    VPE-Level [K-IR] is (26.4922-29.9621 MeV*) for K-Mean: (14.11358 MeV*); (2.8181-3.1872 MeV*) for IROR;
    VPE-Level [IR-OR] is (86.5263-97.8594 MeV*) for K-Mean: (46.09643 MeV*); (9.2042-10.410 MeV*) for IROR;
    UP/DOWN-Level is (282.5263-319.619 MeV*) for K-Mean: (150.5558 MeV*); (30.062-33.999 MeV*) for IROR;
    STRANGE-Level is (923.013-1,043.91 MeV*) for K-Mean: (491.7308 MeV*); (98.185-111.05 MeV*) for IROR;
    CHARM-Level is (3,014.66-3,409.51 MeV*) for K-Mean: (1,606.043 MeV*); (320.68-362.69 MeV*) for IROR;
    BEAUTY-Level is (9,846.18-11,135.8 MeV*) for K-Mean: (5,245.495 MeV*); (1,047.4-1,184.6 MeV*) for IROR;
    MAGIC-Level is (32,158.6-36,370.7 MeV*) for K-Mean: (17,132.33 MeV*); (3,420.9-3,868.9 MeV*) for IROR;
    DAINTY-Level is (105,033-118,791 MeV*) for K-Mean: (55,956.0 MeV*); (11,173-12,636 MeV*) for IROR;
    TRUTH-Level is (343,050-387,982 MeV*) for K-Mean: (182,758.0 MeV*); (36,492-41,271 MeV*) for IROR;
    SUPER-Level is (1,120,437-1,267,190 MeV*) for K-Mean: (596,906.8 MeV*); (119,186-134,797 MeV*) for IROR.
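    As a side note, the quoted K-Means appear to track one quarter of the sum of each level's kernel-range endpoints, i.e. half the range midpoint. That relationship is an observation from the quoted numbers, not a definition given in the text; the short Python check below simply tabulates the comparison.

    ```python
    # Consistency check (an observation, not a stated definition): each quoted K-Mean
    # sits close to (lower + upper) / 4 of its kernel range, i.e. half the midpoint.
    levels_mev = {                      # kernel ranges and quoted K-Means in MeV*
        "VPE[K-IR]":  (26.4922,   29.9621,   14.11358),
        "VPE[IR-OR]": (86.5263,   97.8594,   46.09643),
        "UP/DOWN":    (282.5263,  319.619,   150.5558),
        "STRANGE":    (923.013,   1043.91,   491.7308),
        "CHARM":      (3014.66,   3409.51,   1606.043),
        "BEAUTY":     (9846.18,   11135.8,   5245.495),
        "MAGIC":      (32158.6,   36370.7,   17132.33),
        "DAINTY":     (105033.0,  118791.0,  55956.0),
        "TRUTH":      (343050.0,  387982.0,  182758.0),
        "SUPER":      (1120437.0, 1267190.0, 596906.8),
    }
    for name, (lo, hi, quoted) in levels_mev.items():
        print(f"{name:11s} (lo+hi)/4 = {(lo + hi) / 4:12.3f}   quoted K-Mean = {quoted}")
    ```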

    The K-Means define individual materialising families of elementary particles;

    the (UP/DOWN-Mean) sets the (PION-FAMILY: po, p+, p-);
    the (STRANGE-Mean) specifies the (KAON-FAMILY: Ko, K+, K-);
    the (CHARM-Mean) defines the (J/PSI=J/Y-Charmonium-FAMILY);
    the (BEAUTY-Mean) sets the (UPSILON=U-Bottonium-FAMILY);
    the (MAGIC-Mean) specifies the (EPSILON=E-FAMILY);
    the (DAINTY-Mean) bases the (OMICRON-O-FAMILY);
    the (TRUTH-Mean) sets the (KOPPA=J-Topomium-FAMILY) and
    the (SUPER-Mean) defines the final quark state in the (HIGGS/CHI=H/C-FAMILY).

    The VPE-Means are indicators for average effective quarkmasses found in particular interactions.
    Kernel-K-mixing of the wavefunctions gives K(+)=60.210 MeV* and K(-)=31.983 MeV*, and the IROR-Ring-Mixing gives L(+)=6.405 MeV* and L(-)=3.402 MeV* for an L-K-Mean of 1.50133 MeV* and an L-IROR-Mean of 4.90349 MeV*; the Electropole ([e-]=0.52049 MeV*) is the effective electronmass, as determined from the electronic radius and the magnetocharge in the UFoQR.

    The restmasses for the elementary particles can now be constructed, using the basic nucleonic restmass mc = 9.9247245x10^-28 kg* = (Squareroot of [Omega]) x mP, and setting mc as the basic maximum UP/DOWN-K-mass = mass(KERNEL CORE) = 3 x mass(KKK) = 3 x 319.62 MeV* = 958.857 MeV*;
    Subtracting the Ring VPE (3xL(+) = 19.215 MeV*), one gets the basic nucleonic K-state for the atomic nucleus (made from protons and neutrons) in: {m(n0;p+) = 939.642 MeV*}.
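    A minimal arithmetic sketch of this nucleon construction, using only the two numbers quoted above:

    ```python
    # Nucleon K-state construction as described above (values in MeV* from the text).
    up_down_kernel_max = 319.619   # UP/DOWN kernel maximum
    ring_vpe_L_plus    = 6.405     # L(+) ring VPE term

    kernel_core = 3 * up_down_kernel_max            # 3 x mass(KKK)
    nucleon_k   = kernel_core - 3 * ring_vpe_L_plus # subtract the ring VPE
    print(f"kernel core = {kernel_core:.3f} MeV*")  # ~958.857 MeV*
    print(f"m(n0; p+)   = {nucleon_k:.3f} MeV*")    # ~939.642 MeV*
    ```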


    The HB discussed in the New Scientist post below is said to have been measured in the decay of W's, Z's and Tau Leptons, as well as in the bottom- and top-quark systems described in the table and the text above.

    Now in the table I write about the KIR-OR transitions and such. The K stands for the kernel core, the IR for the InnerRing and the OR for the OuterRing. The Rings are all to do with Leptons and the Kernels with Quarks.

    So the Tau-decay relates to 'Rings' which are charmed and strange and bottomised and topped, say. They are higher-energy manifestations of the basic nucleons (the proton and the neutron) and of the basic mesons and hyperons.

    As I have shown, the energy resonance of the Z-boson (uncharged) represents an 'average' or statistical mean value of the 'Top-Quark', and the Upper-Limit for the Higgs Boson is a similar 'Super-Quark' 'average', namely the weak interaction unification energy.

    The hitherto postulated Higgs Boson mass of about 110 GeV is the Omicron-resonance, fully predicted from the table above (unique to Quantum Relativity).
    Now the most fundamental way to generate the Higgs Boson as a 'weak interaction' gauge is through the coupling of two equal-mass, but oppositely charged W-bosons (of which the Zo is the uncharged counterpart).

    We have seen that the W-mass is a summation of all the other quark-masses as kernel-means from the strangeness upwards to the truth-quark level.
    So simply doubling the 80.47 GeV mass of the weak-interaction gauge boson must represent the basic form of the Higgs Boson and that is 160.9 GeV.

    Simplicity indeed and just the way Quantum Relativity describes the creation of the Higgs Boson from even more fundamental templates of the so called 'gauges'. The Higgs Boson is massless but consists of two classical electron rings and a massless doubled neutrino kernel, and then emerges in the magnetocharge induction AS mass carrying gauges.

    This massless neutrino kernel now crystallises our atomic solar system.


    Next we interpret this scalar (or sterile) Double-Higgs (anti)neutrino as a majoron and lose the distinction between antineutrino and neutrino eigenstates.

    We can only do this in the case of the Zo decay pattern, which engages the boson spin of the Zo as a superposition of two antineutrinos for the matter case and a superposition of two neutrinos in the antimatter case from first principles.

    So the Zo IS a Majorana particle, which merges the templates of two antineutrinos say and SPININDUCES the Higgs-Antineutrino.
    And where does this occur? It occurs at the Mesonic-Inner-Ring Boundary previously determined at the 2.776x10^-18 meter marker.
    This marker so specifies the Zo Boson energy level explicitly as an upper boundary relative to the displacement scale set for the kernel at the wormhole radius rw = lw/2π, with the classical electron radius as the limit for the nuclear interaction scale at 3 fermis in Rcompton x Alpha.

    So the particle masses of the standard model in QED and QCD become Compton-Masses, which are HIGGS-MASSINDUCED at the Mesonic-Inner-Ring (MIR) marker at RMIR = 2.776x10^-18 meters.

    The Compton masses are directly obtained from E = hf = mc² = hc/λ as the characteristic particle energies.
    At the Leptonic-Outer-Ring or LOR, λLOR = 2πRe, and at the MIR, λMIR = 2πRMIR, for characteristic energies of 71.33 MeV and 71.38 GeV respectively.
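    For orientation, the Compton relation E = hc/λ with λ = 2πR can be evaluated with ordinary SI constants. The CODATA values for h, c and the classical electron radius below are my insertion, and the starred mensuration used in this post differs from SI by roughly a percent, so this sketch lands near, rather than exactly on, the quoted 71.33 MeV and 71.38 GeV figures.

    ```python
    # Compton characteristic energies E = h*c/lambda with lambda = 2*pi*R, in SI units.
    # CODATA constants are used here; the starred mensuration of the text differs by
    # roughly a percent, so these come out near (not exactly on) 71.33 MeV and 71.38 GeV.
    import math

    h  = 6.62607015e-34        # Planck constant, J s
    c  = 2.99792458e8          # speed of light, m/s
    eV = 1.602176634e-19       # joules per electron volt

    R_e   = 2.8179403262e-15   # classical electron radius (LOR scale), m  [CODATA]
    R_MIR = 2.776e-18          # Mesonic-Inner-Ring marker from the text, m

    def compton_energy_eV(radius_m: float) -> float:
        """Characteristic energy for a ring of radius R, via E = h*c / (2*pi*R)."""
        return h * c / (2 * math.pi * radius_m) / eV

    print(f"LOR energy ~ {compton_energy_eV(R_e) / 1e6:.1f} MeV")    # ~70 MeV
    print(f"MIR energy ~ {compton_energy_eV(R_MIR) / 1e9:.1f} GeV")  # ~71 GeV
    ```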

    So we know that the Higgs-Mass-Induction occurs at those energy levels from the elementary template and as experimentally verified in terms of the neutrino masses by Super-Kamiokande in 1998.
    The LOR-energy of course indicates the Muon mass as a 'heavy electron' and the MIR-energy indicates the associated 'heavy quark' mass.

    This has been described before in the general mass-induction scales for the diquarks as a consequence of the bosonic bifurcation of string masses (the XL-Boson string splits into quark and lepton fermions as the fundamental supersymmetry and the magnification of the Planck-scale).
    We also know that the elementary proto-nucleon seed mc has grown by a factor of Y^n ~ (1.618034)^n ~ 1.72 for a present n = 1.1324... to create the present nucleonmasses in a perturbation of its finestructure.
    Subsequently, the MIR-energy of 71.38 GeV represents a Zo-Boson seed, which has similarly increased by a factor between √(Y^n) ~ 1.313 and Y^n ~ 1.724.

    These values so give present boundary conditions for the Higgs Boson in terms of its Zo coupling as the interval {93.73-123.09} GeV* or {93.50-122.79} GeV. The latter interval reduces by 1.58% to {92.02-120.85} GeV, as we have used the 'effective electron mass' me, differing in that percentage from the bare electron's restmass in our calculations.
    The lower-bounded HB so manifests in the form of the Zo, as the Majorana Higgs-Induction coupled to the Spin-Induction of the Scalar Higgs Antineutrino.
    As described previously, the Zo-Boson mass is the mean of the top-quark K-Mean as 91.380 GeV* = 91.155 GeV and so relates the quark energy levels to the Higgs inductions for both spin and inertia. This occurs at the down-strange ds-diquark level of the cosmogenesis.
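    The interval and the Zo value above can be rechecked with a few lines of Python using only numbers given in this post; reading "the mean of the top-quark K-Mean" as one half of the TRUTH K-Mean is my interpretation, and the GeV-per-GeV* factor is inferred from the quoted 91.380 GeV* / 91.155 GeV pair.

    ```python
    # Rechecking the quoted interval and Zo value from numbers given in the text.
    seed_gev_star = 71.38                 # MIR seed energy, GeV*
    n             = 1.1324
    Yn            = 1.618034 ** n         # ~1.724
    sqrt_Yn       = Yn ** 0.5             # ~1.313

    lo_star, hi_star = seed_gev_star * sqrt_Yn, seed_gev_star * Yn
    print(f"interval: {lo_star:.2f} - {hi_star:.2f} GeV*")        # ~93.7 - 123.1 GeV*

    si_factor = 91.155 / 91.380           # GeV per GeV*, implied by the quoted Zo pair
    lo_si, hi_si = lo_star * si_factor, hi_star * si_factor
    print(f"interval: {lo_si:.2f} - {hi_si:.2f} GeV")             # ~93.5 - 122.8 GeV

    reduction = 1 - 0.0158                # 1.58% effective-electron-mass reduction
    print(f"reduced:  {lo_si*reduction:.2f} - {hi_si*reduction:.2f} GeV")  # ~92.0 - 120.9 GeV

    truth_k_mean = 182.758                # TRUTH K-Mean from the table, GeV*
    zo_star = truth_k_mean / 2            # reading "mean of the K-Mean" as one half
    print(f"Zo ~ {zo_star:.3f} GeV* = {zo_star * si_factor:.3f} GeV")      # ~91.38 GeV* / ~91.15 GeV
    ```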

    The W-Boson masses are the summation of the quark K-Means and represent the summation of all lower diquark energy levels from doubleup to doubledown.
    As the down-strange or MIR-LOR energy level is coupled as a Kernel-MIR level in the bottom-antibottom mesonic diquark system, the energy difference between the Zo- and the W-bosons should amount to that b-quark energy of about 10 GeV, which indeed is experimentally verified as such.
    Finally, the doublestrange diquark level then becomes the well-known Fermi energy of the Superquark K-Mean at 298.453 GeV* = 297.717 GeV, which reduces by the 1.58% to 293.013 GeV in the SI mensuration system, for a Fermi energy of 1.165x10^-5 GeV^-2.
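    The Fermi-energy statement can be checked the same way, again reading the Superquark value of 298.453 GeV* as one half of the SUPER K-Mean from the table (my interpretation); the final line compares 1/E² with the quoted 1.165x10^-5 GeV^-2.

    ```python
    # Fermi-energy check: half the SUPER K-Mean, converted and reduced as in the text,
    # then compared against 1/E^2 (cf. the quoted 1.165x10^-5 GeV^-2).
    super_k_mean = 596.9068               # SUPER K-Mean from the table, GeV*
    si_factor    = 91.155 / 91.380        # GeV per GeV*, as above
    reduction    = 1 - 0.0158             # 1.58% reduction, as above

    fermi_star = super_k_mean / 2         # ~298.45 GeV*
    fermi_si   = fermi_star * si_factor   # ~297.72 GeV
    fermi_red  = fermi_si * reduction     # ~293.01 GeV
    print(f"Fermi energy: {fermi_star:.3f} GeV* = {fermi_si:.3f} GeV -> {fermi_red:.3f} GeV")
    print(f"1/E^2 ~ {1 / fermi_red**2:.3e} GeV^-2")   # ~1.165e-05
    ```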

    Quantum Relativity then stipulates that the Higgs-Mass-Induction energies will assume particular energy values related to the diquark mass-induction table of the K-Means, coupled to the weakon masses as indicated.
    The overarching energy level is however that at 92 GeV as the lower bound and as represented in the definition of the Zo-Boson as a Majorana Spininduced scalar Higgs boson. The upper bound is the Fermi energy of the Super-Diquark as a doublestrange.
    This 92 GeV level represents a seedling energy of 71.38 GeV from the primordial universe, when the XL-Boson, aka the heterotic string class HO(32), decayed into a fermionic quark-lepton bifurcation; today it is represented in the diquark eigenstates of the standard model of particle physics through its Unitary Symmetries.

    Tony B. - December 28th, 2014 - Queanbeyan, NSW, Australia
     
  6. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756
    PARTICLE PHYSICS
    Grand Unification Dream Kept at Bay

    Physicists have failed to find disintegrating protons, throwing into limbo the beloved theory that the forces of nature were unified at the beginning of time.

    GUT_RuneFisker_1K.
    Rune Fisker for Quanta Magazine



    By Natalie Wolchover
    December 15, 2016


    For 20 years, physicists in Japan have monitored a 13-story-tall tank of pure water cloistered deep inside an abandoned zinc mine, hoping to see protons in the water spontaneously fall apart. In the meantime, a Nobel Prize has been won for a different discovery in the cathedral-esque water tank pertaining to particles called neutrinos. But the team looking for proton decays — events that would confirm that three of the four forces of nature split off from a single, fundamental force at the beginning of time — is still waiting.
    “So far, we never see this proton decay evidence,” said Makoto Miura of the University of Tokyo, who leads the Super-Kamiokande experiment’s proton decay search team.
    Different “grand unified theories” or “GUTs” tying together the strong, weak and electromagnetic forces make a range of predictions about how long protons take to decay. Super-K’s latest analysis finds that the subatomic particles must live, on average, at least 16 billion trillion trillion years, an increase from the minimum proton lifetime of 13 billion trillion trillion years that the team calculated in 2012. The findings, released in October and under review for publication in Physical Review D, rule out a greater range of the predicted proton lifetimes and leave the beloved, 1970s-era grand unification hypothesis as an unproven dream. “By far the most likely way we would ever verify this idea is proton decay,” said Stephen Barr, a physicist at the University of Delaware.
    Without proton decay, the evidence that the forces that govern elementary particles today are actually splinters of a single “grand unified” force is purely circumstantial: The three forces seem to converge to the same strengths when extrapolated to high energies, and their mathematical structures suggest inclusion in a larger whole, much as the shape of Earth’s continents hint at the ancient supercontinent Pangea.
    “You have these fragments and they fit together so perfectly,” Barr said. “Most people think it can’t be an accident.”


    Lucy Reading-Ikkanda for Quanta Magazine; chart data source: Snowmass 2013

    If the forces were indeed one during the “grand unification epoch” of the universe’s first trillionth of a trillionth of a trillionth of a second, then particles that now have distinct responses to the three forces would then have been symmetric and interchangeable, like facets of a crystal. As the universe cooled, these symmetries would have broken, like a crystal shattering, introducing distinct particles and the complexity seen in the universe today.
    Over the past four decades, physicists have proposed a variety of GUT models that describe possible initial symmetric arrangements of the particles. Finding out which model is correct would reveal not only the underlying mathematical structure of nature’s laws (and how they might square with the fourth force, gravity), but also what other particles might exist besides the known ones. This in turn could potentially solve other deep mysteries of physics, such as the universe’s matter-antimatter imbalance and the unexplained masses of neutrinos. “Our dream, of course, is to have a unified theory of everything,” said Dimitri Nanopoulos, a physicist at Texas A&M University who coined the term GUT.
    Replicating the merging of the forces directly would require an impossible amount of energy. But grand unification should produce a subtle trace in the universe today. All GUT models posit that quarks, the fundamental building blocks of protons and neutrons, were initially indistinguishable from leptons, the class of particles that includes electrons. Because of quantum uncertainty, the grand unified force associated with this fundamental symmetry should occasionally resurface, spontaneously morphing a quark or antiquark into a corresponding lepton or antilepton. When this happens to one of the quarks inside a proton, the proton will instantly fall apart, emitting a detectable flash of radiation. That’s what the physicists at the Super-Kamiokande experiment have been waiting to see. (Neutrons would similarly decay; experts call it proton decay as shorthand.)

    The dream of grand unification began in 1974, when the future Nobel laureate Sheldon Glashow, now at Boston University, and Howard Georgi, now at Harvard, discovered that the mathematical symmetry groups known as SU(3), SU(2) and U(1), which correspond, respectively, to the strong, weak and electromagnetic forces and together form the “Standard Model” of particle physics, can be incorporated into a single, larger group of symmetries that relate all the known particles at once: SU(5).
    “We thought it was absolutely beautiful,” Glashow recalled.
    But the proton lifetime predicted by that first, and simplest, GUT model, along with the first thousandth of the range of proton lifetimes predicted by other models, has already been ruled out. Super-Kamiokande is now probing the range of predictions of several popular proposals, but with two decades under its belt, it won’t be able to push much further. “It’s harder to do much better now because it’s accumulated so much data,” said Ed Kearns, a physicist at Boston University who has worked for Super-K since the experiment started.
    This leaves the fate of grand unification uncertain. Barr, one of the originators of the still-viable “flipped SU(5)” GUT model, compared the situation to waiting for your spouse to come home. “If they’re 10 minutes late, there’s simple explanations for that. An hour late, maybe those explanations become a little less plausible. If they’re eight hours late … you begin to worry that maybe your husband or wife is dead. So the point is, at what point do you say your theory is dead?”
    Right now, he said, “we’re more at the point where the spouse is 10 minutes late, or maybe an hour late. It’s still completely plausible that grand unification is correct.”
    If grand unification is indeed correct, this means that fundamental symmetries existed at the beginning of the universe and then broke as the temperature dropped, just as water, which looks the same in every direction, freezes into ice, which has distinct directions.
    Symmetries are transformations that leave something unchanged. Rotate a square 90 degrees, for instance, and it looks the same as before. For a rectangular object to exhibit this rotational symmetry, it must have four identical sides. Likewise, if a certain symmetry exists in the laws of nature, then a set of symmetric particles must exist to realize it.

    Kamioka Observatory, ICRR (Institute for Cosmic Ray Research), The University of Tokyo
    The Super-Kamiokande observatory in Kamioka, Japan, shown when it was being refilled with water in 2006.

    Take SU(3), the collection of symmetries corresponding to the strong force (which glues quarks together into protons and other composite particles). This symmetry group includes the rule that “up quarks” (one of the six types of quarks) come in three different charges — often labeled red, blue and green — that are interchangeable. That is, if you switched all the red up quarks in the universe for the blues, all the blues for the greens, and all the greens for the reds, no one would be able to tell. “Down” quarks and all other quarks also come in these symmetric triplets, which are like sides of an equilateral triangle. Gluons, the eight particles that convey the strong force, can be thought of as the rotators of the triangles.
    Meanwhile, the SU(2) symmetries associated with the weak force (which is responsible for many kinds of radioactive decay) include a symmetry between, for example, up quarks and down quarks. Switch all the u’s and d’s in the equations describing the weak force, “and you’re never going to understand that I have done this,” Nanopoulos said.
    GUTs such as SU(5) include all the symmetries of SU(3), SU(2) and U(1) and add new ones to the mix. For instance, SU(5) groups quarks and antiquarks together with leptons and antileptons into “fiveplets,” which are like the indistinguishable sides of a regular pentagon. The particles that normally convey the strong, weak and electromagnetic forces are identical in this larger mathematical structure; all 12 of them, and an extra dozen that arise naturally, convey a single “grand unified” force.

    When they discovered the SU(5) model, Glashow and Georgi immediately realized that the 12 extra force carriers present in the structure of SU(5) would trigger proton decay. When SU(5) broke into the three pieces seen today, 12 of the original force carriers would have taken their present forms, but the other dozen, rather than disappearing, would merely have become extremely heavy and weak. These ghostly force carriers would occasionally materialize and swap a quark for a lepton. Georgi and others calculated that if the SU(5) model is right, then the average proton (which is made of three quarks) will decay within 10^29 years.
    This prediction was falsified in the 1980s by both the Irvine-Michigan-Brookhaven experiment in Ohio and the Kamiokande experiment, Super-K’s predecessor. Some wiggle room was found, leading to a new, roughly 100-times-longer proton lifetime prediction, but this wasn’t enough. A few years after going online in 1996, the Super-K experiment definitively ruled out SU(5). “Everybody was crestfallen,” Barr recalled.
    The situation has only gotten more ambiguous since then. Whereas SU(5) was as simple as possible, researchers have found a variety of other symmetry groups that the existing particles might fit into, with extra features and variables that can make protons decay much more slowly. Some of these models add an extra symmetry, called “supersymmetry,” that doubles the number of particles. Others, like flipped SU(5), rearrange which quarks and antiquarks go with which leptons and antileptons inside SU(5)’s fiveplets, tacking on an extra symmetry in the process.


    Source: Elementary Particle Explorer, designed and written by Garrett Lisi, Troy Gardner, and Greg Little.

    Super-K’s latest result, which sets the lower limit on the proton’s lifetime just above 10^34 years, moves into the region of interest of many models — including that of flipped SU(5), which predicts that protons will take between 10^34 and 10^36 years to decay. “I’m very excited about this,” said Nanopoulos, one of the researchers who developed flipped SU(5) in the early 1980s.
    But while Super-K could suddenly strike gold in the next few years and confirm one of these models, it could also run for another 20 years, nudging up the lower limit on the proton’s lifetime, without definitively ruling out any of the models.
    Japan is considering building a $1 billion detector called Hyper-Kamiokande, which would be between eight and 17 times bigger than Super-K and would be sensitive to proton lifetimes of 10^35 years after two decades. It might start seeing a trickle of decays. Or it might not. “We could be unlucky,” Barr said. “We could build the biggest detector that anyone is ever going to build and protons decay just a little bit too slow and then we’re out of luck.”
    No matter how big one’s detector, ever more extravagant GUT models can always be constructed that elude the tests — such as the symmetry groups E6 or E8, whose plentiful parameters can be tuned to make protons live as long as one pleases. One of these models might be correct, but no one would ever know. “People can construct models with higher symmetries and stand on their nose and try to avoid proton decay,” Nanopoulos said. “OK, you can do it, but … you cannot show it to your mother with a straight face.”

    Glashow, for one, largely lost interest in the whole affair when SU(5) was ruled out. “Proton decay has been a failure,” he said. “So many great ideas have died.”
    Grand unification hasn’t died, exactly. The circumstantial evidence is as compelling as ever. But the idea could remain in perpetual limbo, rather like the proton.

    This article was reprinted on Wired.com.

    Tony Bermanseder

    There is no EXTERNAL supersymmetry as is proposed in the mainstream unification models, because there was no perfect matter-antimatter bifurcation at the genesis. There is however a NATIVE supersymmetry in matter, and this eliminates any need for a supersymmetric bosonic-fermionic separation at the Quantum Big Bang Instanton. It follows that the 'missing' antimatter was never there in the first place; rather, the quark-lepton families emerged from a di-neutron boson, which Gamow termed ylem or neutron matter. It was a great arrogance and fallacy of his contemporaries to decide that Gamow's 'neutronium' was not required to construct an appropriate cosmology. They instead invoked the nonexistent antimatter, the latter existing ONLY in the process of pair-creation by super-energetic electromagnetic radiation such as gamma rays and above, and requiring some matter nucleus to exist to manifest the four-force unification interaction, including gravity.

    In other words, the proton is absolutely stable and, like the electron, never decays, despite both carrying an internal unification structure described by a neutrino-gluon kernel, an inner mesonic ring and an outer leptonic ring. The space between the rings defines the physical consciousness, then magnified from the neutron to the hydrogen atom, as the long-ranged electromagnetic and gravitational forces; the space between the inner ring and the core specifies the strong and weak (radioactive) nuclear forces, which are short-ranged and have a size characterising the Higgs Boson, with the classical electron radius as the asymptotic boundary for the 'pointlike' electron becoming smeared out into its wave envelope, so to say.




    Some details are here: http://cosmosdawn.net/index.php/en/
     
  7. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756
    Visions of Future Physics

    Nima Arkani-Hamed is championing a campaign to build the world’s largest particle collider, even as he pursues a new vision of the laws of nature.
    Nima_996x581.
    Béatrice de Géa for Quanta Magazine


    By Natalie Wolchover
    September 22, 2015

    Get Nima Arkani-Hamed going on the subject of the universe — not difficult — and he’ll talk for as many minutes or hours as it takes to transport you to the edge of human understanding, and then he’ll talk you past the edge, beyond Einstein, beyond space-time and quantum mechanics and all those tired tropes of 20th-century physics, to a spectacular new vision of how everything works. It will seem so simple, so lucid. He’ll remind you that, in 2015, it’s still speculative. But he’s convinced that, someday, the vision will come true.
    On the strength of the torrent of ideas he has produced over the past 20 years — he won the inaugural $3 million Fundamental Physics Prize in 2012 “for original approaches to outstanding problems in particle physics, including the proposal of large extra dimensions, new theories for the Higgs boson, novel realizations of supersymmetry, theories for dark matter, and the exploration of new mathematical structures in gauge theory scattering amplitudes” — Arkani-Hamed, 43, a professor at the Institute for Advanced Study (IAS) in Princeton, N.J., is widely considered one of the best theoretical physicists working today. Colleagues point to his knack for simplifying impossibly complex problems, as well as his exceptional mathematical ability, creativity, instincts and vast knowledge of physics. “Nima is amazing in every component of talent space,” said Savas Dimopoulos, a theoretical particle physicist at Stanford University.
    But while many top physicists shy away from stagecraft, Arkani-Hamed functions, colleagues say, as a “messiah,” a “Pied Piper,” an “impresario.” Arms in motion and dark hair spilling to his shoulders, he weaves together calculations, thought experiments and historical precedents into narratives, confidently outlining chapters to come. His listeners range from graduate students to Nobel Prize winners. “He keeps coming up with the goods, and his persuasiveness is hypnotic,” said Raman Sundrum, a theoretical physicist at the University of Maryland in College Park, “so a lot of people follow where he leads.”




    Arkani-Hamed’s mission — simple to state, but so all-consuming that he barely sleeps — is to understand the universe. “I don’t feel I have any time to lollygag, at all,” he said this summer in Princeton. This obsession takes him in several directions, but in recent years one question about the universe has come to preoccupy him, along with the field as a whole. Particle physicists seek to know whether the properties of the universe are inevitable, predictable, “natural,” as they say, locking together into a sensible pattern, or whether the universe is extremely unnatural, a peculiar permutation among countless other, more mundane possibilities, observed for no other reason than that its special conditions allow life to arise. A natural universe is, in principle, a knowable one. But if the universe is unnatural and fine-tuned for life, the lucky outcome of a cosmic roulette wheel, then it stands to reason that a vast and diverse “multiverse” of universes must exist beyond our reach — the lifeless products of less serendipitous spins. This multiverse renders our universe impossible to fully understand on its own terms.
    As things stand, the known elementary particles, codified in a 40-year-old set of equations called the “Standard Model,” lack a sensible pattern and seem astonishingly fine-tuned for life. Arkani-Hamed and other particle physicists, guided by their belief in naturalness, have spent decades devising clever ways to fit the Standard Model into a larger, natural pattern. But time and again, ever-more-powerful particle colliders have failed to turn up proof of their proposals in the form of new particles and phenomena, increasingly pointing toward the bleak and radical prospect that naturalness is dead.

    BeaDeGea_Nima_02-640x424.
    Béatrice de Géa for Quanta Magazine

    Still, many physicists, Arkani-Hamed chief among them, seek a more definitive answer. And right now, his quest to answer the naturalness question leads through China. Two years ago, he agreed to become the inaugural director of the new Center for Future High Energy Physics in Beijing. He has since visited China 18 times, campaigning for the construction of a machine of unprecedented scale: a circular particle collider up to 60 miles in circumference, or nearly four times as big around as Europe’s Large Hadron Collider (LHC). Nicknamed the “Great Collider,” and estimated to cost roughly $10 billion over 30 years, it would succeed the LHC as the new center of the physics universe. According to Arkani-Hamed and those who agree with him, this 100-trillion-electron-volt (TeV) collider would slam subatomic particles together hard enough to either find the particles that the LHC could not muster or rule them out, rescuing or killing the naturalness principle and propelling physicists toward one of two radically different pictures: that of a knowable universe, or an unknowable multiverse.

    The Chinese collider campaign has the support and involvement of many prominent researchers aside from Arkani-Hamed, including Yifang Wang, the Nobel Prize winner David Gross, and the Fields medalist S.T. Yau, as well as legions of experimentalists and engineers working behind the scenes, yet the project is controversial. Experts disagree about what the machine would achieve. They also wonder if China is ready to take the helm in particle physics, questioning whether its small particle physics community can grow quickly enough over the next two decades to run a project so enormous and complex, even with the help of thousands of physicists in Europe and the United States. As Tao Han, a particle physicist who supports the campaign, expressed the concerns of some of his Chinese colleagues, “Are we going to jump too far and fall hard?”
    Now it is decision time. The Chinese government will release its five-year budgetary plan by the end of the year, revealing whether it plans to invest in research and development for the collider project.
    “This 100-TeV collider program in China is brilliant; it’s challenging; it’s risky. And that’s precisely why nothing like this, I think, could really have had as much traction without Nima,” said Sundrum, who has visited Beijing to aid the campaign. “It has taken enormous persuasion for him to take this from a total fantasy, a losing fantasy, to something which has a fighting chance.”

    Park and Go

    To Arkani-Hamed, the Chinese collider campaign feels like pushing an open door. “When you think about it more, it’s just perfect,” he said, sipping Coke Zero on his office couch. “It would be great for physics; it would be great for China. They’re looking for something where they can just be the best in the world.” He continued, “There are very few things in life where what you want to do for idealistic reasons and what someone else wants to do for Machiavellian reasons are identical. And when that happens, you should just do it. You should just do it!”
    June sunlight poured onto chalk-speckled blackboards and a magnificent antique desk. Arkani-Hamed sat beneath a framed photo of a male leopard, taken by his partner, a biologist, when he went with her on safari two years ago in South Africa. Sporting his usual black T-shirt, cargo shorts and sandals, with arms covered in cat scratches — the tough love of a worshipped tabby — he leapt up to erase a patch of speckles and chalk out a new mathematical argument, and then sprang up again to hug a visiting researcher who politely peeked in from the hallway. At lunch, surrounded by protégés, he scrawled theories on napkins, defending some, explaining others, and chugged more Coke Zero. (His caffeine intake peaked several years ago at 15 to 16 espresso shots per day.)

    Nima_Multi-640x312.
    Béatrice de Géa for Quanta Magazine
    Nima Arkani-Hamed with collaborators Raffaele Tito D’Agnolo (seated at left) and David Pinner at the Institute for Advanced Study in Princeton, N.J.

    Generous with his time, even with a young man hanging out in the hallway whom he half-jokingly described as his “stalker,” Arkani-Hamed claims never to have turned down a graduate student who wanted to work with him. Many from his flock have gone on to join the faculties of top research universities and are now leaders in their generation. “Being Nima’s student was like having Usain Bolt as a track coach,” said Clifford Cheung of the California Institute of Technology in Pasadena, who studied under Arkani-Hamed at Harvard University. Jesse Thaler of the Massachusetts Institute of Technology described spending nearly every day in Arkani-Hamed’s bustling Harvard office, in laughter-filled bull sessions and “nerve-racking caffeine-driven interrogation.” Thaler added: “If I look at the high points of my physics career thus far, many of them occurred because I (consciously or not) tried to follow Nima’s example: pursuing one’s own ideas with unbridled enthusiasm, politely disregarding naysayers and tackling obstacles head-on. And drinking espresso.”
    Arkani-Hamed has been a disruptive force throughout his career. He started making a name for himself in graduate school, in the mid-1990s, at the University of California, Berkeley. When he pointed out a mistake in a pre-print by Dimopoulos, a prominent researcher two decades his senior, his adviser suggested that Dimopoulos might want to return from a sabbatical in Europe to work with Arkani-Hamed, who was to become a postdoctoral researcher at Stanford’s SLAC National Accelerator Laboratory. “How lame,” Dimopoulos recalled thinking. “Why would I let a postdoc decide my future?” In the end, he did return, and he and Arkani-Hamed became close friends and collaborators. “We had an extremely productive time together and a good time,” Dimopoulos said. “He is one of my very best friends in my life.” Their biggest collaboration rounded out the Standard Model with the hypothetical effects of extra spatial dimensions curled up at each point in our three-dimensional reality.

    As Arkani-Hamed spawned one new research area after another, he resisted real-world distractions, like parking rules. As a young professor at Berkeley, he insisted on parking in the mostly empty lot near his building rather than the faraway space assigned to him, leading to an epic war with a parking attendant that landed his face on a “Wanted” poster and helped drive him from Berkeley to Harvard. There, his parking troubles eased somewhat (though his car was regularly towed to a nearby lot), and his career flourished. He “made the whole place come alive,” said Melissa Franklin, a Harvard physicist. When, in 2008, he left for the IAS, seeking its “purity of purpose” and freedom from teaching duties, his parking problems ended, but “we cried,” said Franklin. “We wept.”
    The tranquility of the IAS, where great thinkers like Albert Einstein and Kurt Gödel finished their careers, hasn’t slowed Arkani-Hamed’s ambitions. Now, on top of a continued outpouring of new ideas, his days and nights are filled with flights and meetings in pursuit of his dream collider. Much later that June day, after dropping an intellectually exhausted reporter off at the train station, Arkani-Hamed drove to Newark to catch a working redeye flight to Hong Kong, where he would speak at a conference before boarding another flight to Beijing to meet with Chinese colleagues and guide research for the collider project. “I sleep the way lions eat,” he explained — “very little for stretches of time, punctuated by huge and delicious feasts.”


    Escape to the Stars

    Arkani-Hamed’s mother, Hamideh Alasti, believes her son’s drive to understand the world once saved his life. He was born in Houston, where his father, Jafar, worked for the Apollo program analyzing physical properties of the moon. (Arkani-Hamed’s mother and his sister, Sanaz “Sunny” Jensen, are also physicists.) As the family bounced between academic jobs in Iran and the U.S., young Nima absorbed books like Tell Me Why, by Arkady Leokum, and enjoyed hands-on scientific investigations like catching and raising frogs, snakes and salamanders and studying their behavior. “He really didn’t care about material life,” Alasti said. “If you wanted him to put a nicer shirt on, he didn’t want that.” His father added, “I used to take Nima hiking almost every weekend in Tehran. He was very stubborn. I remember once he hiked about 11 hours at the age of about 4. I asked him to come onto my shoulder and he refused.”

    BeaDeGea_Nima_04-640x424.
    Béatrice de Géa for Quanta Magazine

    In 1979, when the Shah of Iran was overthrown, the family again returned to their homeland from the U.S., to the promise of free expression and possibility. Nima sat in on political discussions between his parents and their Western-educated friends, and recalls reading The Communist Manifesto as a Farsi comic book. But within a year, Ayatollah Khomeini began shutting down universities. Jafar, then working at Sharif University in Tehran, co-wrote an open letter with 14 colleagues denouncing the closures. The signatories were blacklisted; those who could be found were imprisoned or hanged, Jafar said. He went underground, and eventually paid $50,000 — his life savings — for smugglers to convey him and his family out of the country on horseback. When one smuggler in the chain of handoffs didn’t receive full payment, the man abandoned Nima, his parents and his baby sister in the mountains between Iran and Turkey.

    A week into a journey that was supposed to take two days, 10-year-old Nima developed a 107-degree fever and was too weak to walk. Jafar left his wife and children huddled in a valley and ran for help. Three hours later, he came across a group of nomadic Kurds, and among them, a leader of the Kurdish opposition to Khomeini. A swashbuckling hero in Nima’s memory, the man sent horses to rescue the family. The boy, close to dying, sat slumped on the back of his mother’s horse as they were led out of Iran under the cover of nightfall. “He was in very bad shape,” Alasti said. To energize him, she directed his attention to the bright ribbon of stars sweeping across the sky — the Milky Way galaxy — and promised that when they made it to safety, he could get a telescope. “That kept him very, very engaged,” she said, “to the point that it managed to keep him alive.” Once safely across the border, the family made their way to Toronto.

    Life was good in Canada; only one thing was jarring. At that time, “there was a ceiling to the level of bigness and ambition with which people thought about things,” Arkani-Hamed said. He was particularly struck by how proud many Canadians were of having built the robot arms of NASA’s space shuttles. During news coverage of launches, he recalled, “there would be all these close-ins on the arm, on the ‘Canada’ on the arm, and I’d be, like, the space shuttle is a bigger deal!” At school, he refused to do busywork and got mediocre grades, other than in math (which was all tests) and English (because he loved his teachers, and reading and writing), while earning the top score in all of Canada on a national physics exam. By his senior year in high school, it was clear he would become a successful theoretical physicist. “You’re going to be the next Einstein,” he and his parents recall his physics teacher teasing, “and I’ll be that guy who gave you a B!”

    Homework no longer mattered at the University of Toronto; he aced his first physics test, and by his senior year he was helping to teach quantum field theory to graduate students. People were drawn to his contagious enthusiasm for mathematics and physics. “Most people get used to the idea that it’s really hard to understand stuff and we should mostly give up. He just hasn’t done that,” said Hugh Thomas, a friend and classmate who is now a mathematician. “Part of it is he’s really, really, really, really smart, so he has a shot at understanding a lot of stuff.”


    The Highest Energies

    Arkani-Hamed’s campaign for a 100-TeV collider began on July 30, 2013, at a panel discussion about the future of American particle physics in Minneapolis, Minn. With only five minutes to address an audience of 1,000 physicists, and a habit of speaking for as long as he pleased, Arkani-Hamed carefully prepared his words beforehand. “We all know that we’re embarking on, really, an unprecedented era in fundamental physics,” he began. After raising the naturalness predicament, he went on: “The stakes are higher than the past. We aren’t asking about this or that particle, but something much more deeply structural about physical reality. … By far the best way to settle this question is to lead a charge to the highest possible energies and build a 100-TeV collider.”
    “I sat next to him and watched him read word for word what he had written,” said Kyle Cranmer of New York University, a fellow panelist who said he felt like “a little kid sharing the stage that day. … Nima’s talk breathed life into those that deep down feel that we need a bigger collider to make real progress. … It wasn’t making a case about practicality, it was a bold call to action, a moonshot, and he basically called out those that didn’t see it that way as cowards and those that did as having courage.”

    Béatrice de Géa for Quanta Magazine



    Video: Nima Arkani-Hamed makes his “big-picture” case for building a 100-TeV particle collider.

    His battle cry electrified the audience and dominated the rest of the discussion, but it didn’t go over well with most of Arkani-Hamed’s fellow panelists. One suggested that he was “dreaming.” Many favored the construction of a smaller-scale neutrino experiment at Fermilab in Illinois as the next big U.S. project — a plan that was included in the particle physics community’s policy-shaping report the following May. Arkani-Hamed strongly disagrees with this plan. Neutrino physics is “perfectly interesting,” he said recently, “but it shouldn’t be the flagship of a great country.” He diagnoses American physicists as suffering from “SSC post-traumatic stress disorder,” an inability to recover from the disastrous cancellation of the Superconducting Super Collider, which was to have been three times the size of the LHC, partway through its construction in Texas in 1993. Not only did jettisoning the SSC waste billions of dollars, screw up young people’s careers and permanently damage relationships with foreign institutions, he said, it also “has had a stultifying impact on the way the field thinks about itself, and the way it presents itself to the government, to the general public.” In the U.S., as Sundrum put it, an idea like a $10 billion, 100-TeV collider is “dead on arrival.”

    Arkani-Hamed soon heard from Tao Han, who holds posts at both the University of Pittsburgh and Tsinghua University in Beijing. Han has been agitating for the construction of a higher-energy particle collider in China for 10 years, until recently without success, he said. Despite China’s reputation for excellence in science education, it lags far behind the U.S. and Europe in basic research. For decades, the country’s best particle physicists have emigrated to the U.S. and Europe, rather than cultivate a tradition there.

    This began to change with the successful construction of the Beijing Electron-Positron Collider II (BEPCII), a 240-meter ring completed in 2008. Han sensed an even bigger sea change in 2012, when China pulled off a major neutrino experiment at Daya Bay, off the South China Sea. The results, published that April, completed the picture of how the elusive, lightweight particles are able to shape-shift from one type to another, a phenomenon known as “neutrino oscillations.” Western scientists saw Daya Bay as arguably the most important particle physics result ever to come out of China.

    The driving force behind both BEPCII and Daya Bay was Yifang Wang, a go-getting physicist in Beijing who spent his early career in Europe and the U.S. In 2011, partly on the strength of the experiments’ success, Wang was named director of the Institute of High Energy Physics (IHEP) in Beijing. He immediately pushed for an even bigger experiment in China. Because building machines is expensive and time-consuming, Wang and his colleagues decided to let theory lead the way. Two years ago, they agreed to set up a theory center at IHEP. It would need a founding director, and Han said he knew just the person.

    After a series of meetings between Arkani-Hamed, Wang and others in Beijing, the Center for Future High Energy Physics at IHEP was launched at a ribbon-cutting ceremony in December 2013, with Arkani-Hamed as director. “I called 40 of my closest collider-physics friends,” he said, and brought them to Princeton to schedule their visits to his center in China. There, they have been collaborating with the Chinese to work out the physics case for building the new supercollider. The machine would start out as a “Higgs factory,” colliding particles at lower energies to generate particles called Higgs bosons and scour their properties for indirect signs of new physics, then ramp up to between 70 and 100 TeV (depending on the available magnet technology) by 2042. Their studies have resulted in 50 research papers and a comprehensive report detailing how the experiment will work. Meanwhile, Arkani-Hamed and Gross, a theorist at the University of California, Santa Barbara, persuaded nine more of the world’s highest-profile physicists to cosign a letter recommending the project. One of them, Yau, a mathematician and string theorist at Harvard who is famous in China, personally delivered the letter to China’s vice president, Li Yuanchao. According to Yau, the list of famous signatures caught the vice president’s attention; at his request, Yau said, the science and technology minister has held a series of meetings to discuss the feasibility of the project. (Yau’s book, From the Great Wall to the Great Collider, co-authored with Steve Nadis, will appear this fall.)

    Significant challenges remain. Even with substantial help from the international community, experts estimate that several thousand new Chinese particle physicists must be trained over the next 10 years. Interest in particle physics already appears to be rising among graduate-school applicants there, and Arkani-Hamed is characteristically optimistic, but some researchers worry that the rate of increase won’t be enough. Furthermore, the magnets required to accelerate protons to 100 TeV are fabricated at Fermilab, a U.S. government lab, and at this point they are still prohibitively expensive; the countries must cooperate, and the magnet cost must go down considerably over the next decade to keep the project within budget.

    BeaDeGea_Nima_03-640x424.
    Béatrice de Géa for Quanta Magazine
    Arkani-Hamed with Pinner (left) and D’Agnolo.

    More troublingly, some researchers suspect that the Great Collider won’t provide the kind of definitive answer to the naturalness question that Arkani-Hamed is touting. Cranmer, for one, has doubts. “I am very sympathetic to the idea that this is a critical point in the field and that naturalness/fine-tuning is a deep issue,” he wrote in an email. “However, I’m not convinced that if we built a 100-TeV collider and saw nothing that it would be conclusive evidence that nature is fine-tuned.” There would remain the nagging possibility that a natural completion of the Standard Model exists that a collider simply can’t access. (Arkani-Hamed and collaborators proposed one such scenario this summer, dubbed “Nnaturalness,” which is testable in other ways.) Adam Falkowski, a particle physicist in Paris who blogs about developments in the field, argues that if no new particles are found at 100 TeV, this will leave physicists exactly where they are now in their search for a more complete theory of nature — clueless. “There is currently no indication that this collider will help us solve any of the puzzles in particle physics or cosmology,” he said.

    The most prominent opponent of the collider project is the Nobel laureate C.N. Yang, a well-known, 93-year-old physicist whose work has significantly impacted particle physics, but who considers condensed matter physics (which concerns the behavior of materials) much more beneficial to society. Yang’s views do not appear publicly in writing, but Han described them as publicly known and an obstacle to the campaign.
    Many particle physicists want a next-generation collider because it would guarantee thousands of jobs and a future for the field. And Gross, who considers naturalness a murky concept, simply wants a last-ditch search for new physics. “We need more hints from nature,” he said. “She’s got to tell us where to go.”
    According to Han, the deliberation process in China is opaque, but he has heard encouraging news trickling down, and Chinese particle physicists are proceeding with their planning.
    If China backs out, Arkani-Hamed will throw his full weight behind a parallel (if slower-paced) collider campaign at Europe’s CERN laboratory, which houses the LHC. Michelangelo Mangano, a particle theorist at CERN who is involved in assessing the options there, suggested that both projects might get off the ground. “If China goes ahead with their primary goal of the [Higgs factory], a possible scenario is one in which CERN points directly to the 100 TeV collider, and China uses their experience with their first project to then move on to something even more ambitious than 100 TeV.”
    Or there might be no next-generation collider. “This hit me really hard at a certain point,” said Joe Incandela, a leading particle physicist at the LHC who supports both the European and the Chinese collider campaigns. Once the world stops building colliders, he said, the partnerships and collective expertise needed to do so will vanish within a generation. “The results that we have are going to have to stand for millennia, perhaps. … And boy, to stop and leave those questions open — you can see the responsibility that we feel. Nima feels this responsibility. We all feel like this can’t be the end. We’ve got to at least take it one more step.”


    Beyond Space and Time

    Whether a 100-TeV collider materializes or not, Arkani-Hamed’s legacy may rest on a different and potentially more important campaign. Even as he chases the question of whether the properties of the universe are natural, he is also seeking to discover what gives rise to space and time in the first place. As radical as it sounds, many physicists now think that the spatiotemporal dimensions we seem to move around in are not fundamental, but rather emerge from a deeper, truer description of reality. And in 2013, an unexpected discovery by Arkani-Hamed and his student Jaroslav Trnka offered a possible clue to what the underlying laws of nature might look like.


    Béatrice de Géa for Quanta Magazine

    They uncovered a multifaceted geometric object whose volume encodes the outcomes of particle collisions — beastly numbers to calculate with traditional methods. The discovery suggested that the usual picture of particles interacting in space and time is obscuring something far simpler: the timeless logic of intersecting lines and planes. Although the “amplituhedron” (as Arkani-Hamed and Trnka dubbed their object) initially described a simplified version of particle physics, researchers are now working to extend its geometry to describe more realistic particle interactions and forces, including gravity. “It looks like we are going to be able to go very far,” said Zvi Bern, a leader in this research discipline at the University of California, Los Angeles. Arkani-Hamed’s own research is proceeding apace, and he freely speculates about where it will lead.


    He believes that the interchangeability of points and lines in the geometry of the amplituhedron may be the origin of a mysterious mathematical duality between particles and strings, the basic building blocks of nature in string theory. And particle interactions are just “the baby version of the problem,” he said. His ultimate goal is to describe the entire cosmological history of the universe as a mathematical object. In unpublished work, he has begun finding patterns in cosmological correlations — the likelihood, for instance, that if two red stars lie 20 kiloparsecs apart, a blue star lies 50 kiloparsecs away from them both. These statistical patterns encode the history of the cosmos, like dinosaur bones buried in the sand. And as with particle collisions, he has found that these patterns can be represented as geometric volumes. Ultimately, he said, anywhere from 10 to 500 years from now, the amplituhedron and these cosmological patterns will merge and become part of a single, spectacular mathematical structure that describes the entire past, present and future of everything “in some timeless, autonomous way.”
    At a recent dinner, joined by a small coterie of postdocs, Arkani-Hamed drew a pentagram on a napkin. The pentagram, like the amplituhedron, is defined by a finite set of lines crossing at a finite number of points. Arkani-Hamed darkened nine points in the configuration and explained that the first eight of these dots can be placed on a grid. But no matter how fine the grid, the ninth dot always falls between grid points; it is forced to correspond to an irrational number. There is a mathematical proof, Arkani-Hamed observed, that all algebraic numbers can be derived from configurations of a finite whole number of intersecting points and lines. And with that, he expressed a final conjecture, at the end of a long, cerebral day, before everyone else went home to bed and Arkani-Hamed headed to the airport: Everything — irrational numbers, along with particle interactions and the correlations between stars — ultimately arises from possible combinatorial arrangements of whole numbers: 1, 2, 3 and so on. They exist, he said, and so must everything else.

    Arkani-Hamed considers his tendency to speculate a personal weakness. “This is not false modesty, it’s really a personal weakness, but it’s true, so there’s nothing I can do about it,” he said. “It’s important for me while I’m working on something to be very ideological about it. And then, of course, it’s also important after you are done to forget the ideology and move on to another one.” Thinking of the naturalness question, and his quest for a mathematical theory of nature, he continued, “But certainly in things where progress isn’t so immediate, I find it very important to convince myself that it’s the one true path. Or at least a true path.”

    This article was reprinted on Wired.com.

    https://www.quantamagazine.org/20150922-nima-arkani-hamed-collider-physics/


    [5:45:11 PM=January 18th, 2017 +10UCT] Allisiam:
    https://www.quantamagazine.org/20150922-nima-arkani-hamed-collider-physics/
    Andrew shared this
    [5:51:17 PM] ShilohaPlace: yes he is the guy who talked about this new geometry remember we posted and commented on some years ago
    [5:51:20 PM] Allisiam: he is the guy who discovered the amplituhedron, the jewel-like geometric structure
    [5:51:24 PM] ShilohaPlace: yes
    [5:51:47 PM] Allisiam: now it seems he is trying to get China to build a super large collider
    [5:52:00 PM] ShilohaPlace: Waste of money
    [5:52:01 PM] Allisiam: anyhow long article it was interesting
    [5:52:10 PM] Allisiam: haha you won't convince him of that
    [5:52:24 PM] Allisiam: he is driven to either prove or disprove the nature problem
    [5:53:23 PM] Allisiam: the article is from 2015 though
    [5:53:29 PM] ShilohaPlace: because the energy of the wormhole is 0.002 Joules, thousands of times greater than the LHC's. The LHC can tap the mesonic ring though, and there is no energy level in between 14 TeV and 13,000 TeV
    [5:53:51 PM] ShilohaPlace: We have already shown the nature problem
    [5:54:10 PM] ShilohaPlace: It is related to the SUSY stuff I shared with Andrew
    [5:54:28 PM] Allisiam: yes well unless this guy reads our stuff he'll never know
    [5:54:44 PM] ShilohaPlace: They all will know in a few years
    [5:55:04 PM] ShilohaPlace: The new physics must be revealed eventually
    [5:55:04 PM] Allisiam: yes he seems tuned in
    [5:55:30 PM] ShilohaPlace: If they ask the right questions, the answers will follow
    [5:56:08 PM] ShilohaPlace: abandon SUSY and antimatter but search for quantum geometry inside the proton. 12,453 TeV is the energy of the wormhole, so his 100 TeV is too small by a factor of 120 to tap the ZPE
    [6:04:43 PM] ShilohaPlace: He is right on here:

    "And with that, he expressed a final conjecture, at the end of a long, cerebral day, before everyone else went home to bed and Arkani-Hamed headed to the airport: Everything — irrational numbers, along with particle interactions and the correlations between stars — ultimately arises from possible combinatorial arrangements of whole numbers: 1, 2, 3 and so on. They exist, he said, and so must everything else."

    [6:05:11 PM] ShilohaPlace: Our algorithm remember?

    "He believes that the interchangeability of points and lines in the geometry of the amplituhedron may be the origin of a mysterious mathematical duality between particles and strings, the basic building blocks of nature in string theory. And particle interactions are just “the baby version of the problem,” he said. His ultimate goal is to describe the entire cosmological history of the universe as a mathematical object. In unpublished work, he has begun finding patterns in cosmological correlations — the likelihood, for instance, that if two red stars lie 20 kiloparsecs apart, a blue star lies 50 kiloparsecs away from them both. These statistical patterns encode the history of the cosmos, like dinosaur bones buried in the sand. And as with particle collisions, he has found that these patterns can be represented as geometric volumes. Ultimately, he said, anywhere from 10 to 500 years from now, the amplituhedron and these cosmological patterns will merge and become part of a single, spectacular mathematical structure that describes the entire past, present and future of everything “in some timeless, autonomous way.”

    [6:06:16 PM] ShilohaPlace: Modular duality of Thuban branes lol
     


    Last edited: Jan 18, 2017
  8. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756
    GRAVITY
    The Case Against Dark Matter

    A proposed theory of gravity does away with dark matter, even as new astrophysical findings challenge the need for galaxies full of the invisible mystery particles.

    Ilvy Njiokiktjien for Quanta Magazine
    The Dutch theoretical physicist Erik Verlinde argues that dark matter does not exist.



    By Natalie Wolchover
    November 29, 2016

    For 80 years, scientists have puzzled over the way galaxies and other cosmic structures appear to gravitate toward something they cannot see. This hypothetical “dark matter” seems to outweigh all visible matter by a startling ratio of five to one, suggesting that we barely know our own universe. Thousands of physicists are doggedly searching for these invisible particles.


    But the dark matter hypothesis assumes scientists know how matter in the sky ought to move in the first place. This month, a series of developments has revived a long-disfavored argument that dark matter doesn’t exist after all. In this view, no missing matter is needed to explain the errant motions of the heavenly bodies; rather, on cosmic scales, gravity itself works in a different way than either Isaac Newton or Albert Einstein predicted.
    The latest attempt to explain away dark matter is a much-discussed proposal by Erik Verlinde, a theoretical physicist at the University of Amsterdam who is known for bold and prescient, if sometimes imperfect, ideas. In a dense 51-page paper posted online on Nov. 7, Verlinde casts gravity as a byproduct of quantum interactions and suggests that the extra gravity attributed to dark matter is an effect of “dark energy” — the background energy woven into the space-time fabric of the universe.


    Instead of hordes of invisible particles, “dark matter is an interplay between ordinary matter and dark energy,” Verlinde said.
    To make his case, Verlinde has adopted a radical perspective on the origin of gravity that is currently in vogue among leading theoretical physicists. Einstein defined gravity as the effect of curves in space-time created by the presence of matter. According to the new approach, gravity is an emergent phenomenon. Space-time and the matter within it are treated as a hologram that arises from an underlying network of quantum bits (called “qubits”), much as the three-dimensional environment of a computer game is encoded in classical bits on a silicon chip. Working within this framework, Verlinde traces dark energy to a property of these underlying qubits that supposedly encode the universe. On large scales in the hologram, he argues, dark energy interacts with matter in just the right way to create the illusion of dark matter.
    In his calculations, Verlinde rediscovered the equations of “modified Newtonian dynamics,” or MOND. This 30-year-old theory makes an ad hoc tweak to the famous “inverse-square” law of gravity in Newton’s and Einstein’s theories in order to explain some of the phenomena attributed to dark matter. That this ugly fix works at all has long puzzled physicists. “I have a way of understanding the MOND success from a more fundamental perspective,” Verlinde said.

    Many experts have called Verlinde’s paper compelling but hard to follow. While it remains to be seen whether his arguments will hold up to scrutiny, the timing is fortuitous. In a new analysis of galaxies published on Nov. 9 in Physical Review Letters, three astrophysicists led by Stacy McGaugh of Case Western Reserve University in Cleveland, Ohio, have strengthened MOND’s case against dark matter.
    The researchers analyzed a diverse set of 153 galaxies, and for each one they compared the rotation speed of visible matter at any given distance from the galaxy’s center with the amount of visible matter contained within that galactic radius. Remarkably, these two variables were tightly linked in all the galaxies by a universal law, dubbed the “radial acceleration relation.” This makes perfect sense in the MOND paradigm, since visible matter is the exclusive source of the gravity driving the galaxy’s rotation (even if that gravity does not take the form prescribed by Newton or Einstein). With such a tight relationship between gravity felt by visible matter and gravity given by visible matter, there would seem to be no room, or need, for dark matter.

    Even as dark matter proponents rise to its defense, a third challenge has materialized. In new research that has been presented at seminars and is under review by the Monthly Notices of the Royal Astronomical Society, a team of Dutch astronomers have conducted what they call the first test of Verlinde’s theory: In comparing his formulas to data from more than 30,000 galaxies, Margot Brouwer of Leiden University in the Netherlands and her colleagues found that Verlinde correctly predicts the gravitational distortion or “lensing” of light from the galaxies — another phenomenon that is normally attributed to dark matter. This is somewhat to be expected, as MOND’s original developer, the Israeli astrophysicist Mordehai Milgrom, showed years ago that MOND accounts for gravitational lensing data. Verlinde’s theory will need to succeed at reproducing dark matter phenomena in cases where the old MOND failed.

    Kathryn Zurek, a dark matter theorist at Lawrence Berkeley National Laboratory, said Verlinde’s proposal at least demonstrates how something like MOND might be right after all. “One of the challenges with modified gravity is that there was no sensible theory that gives rise to this behavior,” she said. “If [Verlinde’s] paper ends up giving that framework, then that by itself could be enough to breathe more life into looking at [MOND] more seriously.”


    The New MOND

    In Newton’s and Einstein’s theories, the gravitational attraction of a massive object drops in proportion to the square of the distance away from it. This means stars orbiting around a galaxy should feel less gravitational pull — and orbit more slowly — the farther they are from the galactic center. Stars’ velocities do drop as predicted by the inverse-square law in the inner galaxy, but instead of continuing to drop as they get farther away, their velocities level off beyond a certain point. The “flattening” of galaxy rotation speeds, discovered by the astronomer Vera Rubin in the 1970s, is widely considered to be Exhibit A in the case for dark matter — explained, in that paradigm, by dark matter clouds or “halos” that surround galaxies and give an extra gravitational acceleration to their outlying stars.


    Lucy Reading-Ikkanda for Quanta Magazine

    Searches for dark matter particles have proliferated — with hypothetical “weakly interacting massive particles” (WIMPs) and lighter-weight “axions” serving as prime candidates — but so far, experiments have found nothing.
    Meanwhile, in the 1970s and 1980s, some researchers, including Milgrom, took a different tack. Many early attempts at tweaking gravity were easy to rule out, but Milgrom found a winning formula: When the gravitational acceleration felt by a star drops below a certain level — precisely 0.00000000012 meters per second per second, or 100 billion times weaker than we feel on the surface of the Earth — he postulated that gravity somehow switches from an inverse-square law to something close to an inverse-distance law. “There’s this magic scale,” McGaugh said. “Above this scale, everything is normal and Newtonian. Below this scale is where things get strange. But the theory does not really specify how you get from one regime to the other.”
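    As a quick numerical check of the figures in the preceding paragraph (our own sketch; the galaxy mass is an assumed, illustrative value, not a number from the article):

    import math

    a0      = 1.2e-10                # Milgrom's acceleration scale [m/s^2]
    g_earth = 9.81                   # acceleration at Earth's surface [m/s^2]
    G       = 6.674e-11              # Newton's constant [m^3 kg^-1 s^-2]
    M       = 6e10 * 1.989e30        # assumed baryonic mass of a Milky-Way-like galaxy [kg]
    kpc     = 3.0857e19              # metres per kiloparsec

    # The article's "100 billion times weaker" comparison:
    print(f"g_earth / a0      ~ {g_earth / a0:.1e}")
    # Radius at which the Newtonian pull G*M/r^2 of the assumed mass drops to a0:
    print(f"transition radius ~ {math.sqrt(G * M / a0) / kpc:.1f} kpc")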
    Physicists do not like magic; when other cosmological observations seemed far easier to explain with dark matter than with MOND, they left the approach for dead. Verlinde’s theory revitalizes MOND by attempting to reveal the method behind the magic.


    Verlinde, ruddy and fluffy-haired at 54 and lauded for highly technical string theory calculations, first jotted down a back-of-the-envelope version of his idea in 2010. It built on a famous paper he had written months earlier, in which he boldly declared that gravity does not really exist. By weaving together numerous concepts and conjectures at the vanguard of physics, he had concluded that gravity is an emergent thermodynamic effect, related to increasing entropy (or disorder). Then, as now, experts were uncertain what to make of the paper, though it inspired fruitful discussions.
    The particular brand of emergent gravity in Verlinde’s paper turned out not to be quite right, but he was tapping into the same intuition that led other theorists to develop the modern holographic description of emergent gravity and space-time — an approach that Verlinde has now absorbed into his new work.

    In this framework, bendy, curvy space-time and everything in it is a geometric representation of pure quantum information — that is, data stored in qubits. Unlike classical bits, qubits can exist simultaneously in two states (0 and 1) with varying degrees of probability, and they become “entangled” with each other, such that the state of one qubit determines the state of the other, and vice versa, no matter how far apart they are. Physicists have begun to work out the rules by which the entanglement structure of qubits mathematically translates into an associated space-time geometry. An array of qubits entangled with their nearest neighbors might encode flat space, for instance, while more complicated patterns of entanglement give rise to matter particles such as quarks and electrons, whose mass causes the space-time to be curved, producing gravity. “The best way we understand quantum gravity currently is this holographic approach,” said Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver who has done influential work on the subject.
    The mathematical translations are rapidly being worked out for holographic universes with an Escher-esque space-time geometry known as anti-de Sitter (AdS) space, but universes like ours, which have de Sitter geometries, have proved far more difficult. In his new paper, Verlinde speculates that it’s exactly the de Sitter property of our native space-time that leads to the dark matter illusion.

    De Sitter space-times like ours stretch as you look far into the distance. For this to happen, space-time must be infused with a tiny amount of background energy — often called dark energy — which drives space-time apart from itself. Verlinde models dark energy as a thermal energy, as if our universe has been heated to an excited state. (AdS space, by contrast, is like a system in its ground state.) Verlinde associates this thermal energy with long-range entanglement between the underlying qubits, as if they have been shaken up, driving entangled pairs far apart. He argues that this long-range entanglement is disrupted by the presence of matter, which essentially removes dark energy from the region of space-time that it occupied. The dark energy then tries to move back into this space, exerting a kind of elastic response on the matter that is equivalent to a gravitational attraction.
    Because of the long-range nature of the entanglement, the elastic response becomes increasingly important in larger volumes of space-time. Verlinde calculates that it will cause galaxy rotation curves to start deviating from Newton’s inverse-square law at exactly the magic acceleration scale pinpointed by Milgrom in his original MOND theory.

    Van Raamsdonk calls Verlinde’s idea “definitely an important direction.” But he says it’s too soon to tell whether everything in the paper — which draws from quantum information theory, thermodynamics, condensed matter physics, holography and astrophysics — hangs together. Either way, Van Raamsdonk said, “I do find the premise interesting, and feel like the effort to understand whether something like that could be right could be enlightening.”
    One problem, said Brian Swingle of Harvard and Brandeis universities, who also works in holography, is that Verlinde lacks a concrete model universe like the ones researchers can construct in AdS space, giving him more wiggle room for making unproven speculations. “To be fair, we’ve gotten further by working in a more limited context, one which is less relevant for our own gravitational universe,” Swingle said, referring to work in AdS space. “We do need to address universes more like our own, so I hold out some hope that his new paper will provide some additional clues or ideas going forward.”



    Ilvy Njiokiktjien for Quanta Magazine

    Video: Erik Verlinde describes how emergent gravity and dark energy can explain away dark matter.


    The Case for Dark Matter

    Verlinde could be capturing the zeitgeist the way his 2010 entropic-gravity paper did. Or he could be flat-out wrong. The question is whether his new and improved MOND can reproduce phenomena that foiled the old MOND and bolstered belief in dark matter.
    One such phenomenon is the Bullet cluster, a galaxy cluster in the process of colliding with another. The visible matter in the two clusters crashes together, but gravitational lensing suggests that a large amount of dark matter, which does not interact with visible matter, has passed right through the crash site. Some physicists consider this indisputable proof of dark matter. However, Verlinde thinks his theory will be able to handle the Bullet cluster observations just fine. He says dark energy’s gravitational effect is embedded in space-time and is less deformable than matter itself, which would have allowed the two to separate during the cluster collision.
    But the crowning achievement for Verlinde’s theory would be to account for the suspected imprints of dark matter in the cosmic microwave background (CMB), ancient light that offers a snapshot of the infant universe. The snapshot reveals the way matter at the time repeatedly contracted due to its gravitational attraction and then expanded due to self-collisions, producing a series of peaks and troughs in the CMB data. Because dark matter does not interact, it would only have contracted without ever expanding, and this would modulate the amplitudes of the CMB peaks in exactly the way that scientists observe. One of the biggest strikes against the old MOND was its failure to predict this modulation and match the peaks’ amplitudes. Verlinde expects that his version will work — once again, because matter and the gravitational effect of dark energy can separate from each other and exhibit different behaviors. “Having said this,” he said, “I have not calculated this all through.”

    While Verlinde confronts these and a handful of other challenges, proponents of the dark matter hypothesis have some explaining of their own to do when it comes to McGaugh and his colleagues’ recent findings about the universal relationship between galaxy rotation speeds and their visible matter content.

    In October, responding to a preprint of the paper by McGaugh and his colleagues, two teams of astrophysicists independently argued that the dark matter hypothesis can account for the observations. They say the amount of dark matter in a galaxy’s halo would have precisely determined the amount of visible matter the galaxy ended up with when it formed. In that case, galaxies’ rotation speeds, even though they’re set by dark matter and visible matter combined, will exactly correlate with either their dark matter content or their visible matter content (since the two are not independent). However, computer simulations of galaxy formation do not currently indicate that galaxies’ dark and visible matter contents will always track each other. Experts are busy tweaking the simulations, but Arthur Kosowsky of the University of Pittsburgh, one of the researchers working on them, says it’s too early to tell if the simulations will be able to match all 153 examples of the universal law in McGaugh and his colleagues’ galaxy data set. If not, then the standard dark matter paradigm is in big trouble. “Obviously this is something that the community needs to look at more carefully,” Zurek said.

    Even if the simulations can be made to match the data, McGaugh, for one, considers it an implausible coincidence that dark matter and visible matter would conspire to exactly mimic the predictions of MOND at every location in every galaxy. “If somebody were to come to you and say, ‘The solar system doesn’t work on an inverse-square law, really it’s an inverse-cube law, but there’s dark matter that’s arranged just so that it always looks inverse-square,’ you would say that person is insane,” he said. “But that’s basically what we’re asking to be the case with dark matter here.”
    Given the considerable indirect evidence and near consensus among physicists that dark matter exists, it still probably does, Zurek said. “That said, you should always check that you’re not on a bandwagon,” she added. “Even though this paradigm explains everything, you should always check that there isn’t something else going on.”

    This article was reprinted on TheAtlantic.com.

    https://www.quantamagazine.org/20161129-verlinde-gravity-dark-matter/


    3.8 A Synthesis of LCDM with MOND in a Universal Lambda Milgröm Deceleration


    [Figure: rotation curve of the galaxy M33 from 21 cm HI observations]
    [Excerpt from Wikipedia:
    https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics

    Several independent observations point to the fact that the visible mass in galaxies and galaxy clusters is insufficient to account for their dynamics, when analysed using Newton's laws. This discrepancy – known as the "missing mass problem" – was first identified for clusters by Swiss astronomer Fritz Zwicky in 1933 (who studied the Coma cluster),[4][5] and subsequently extended to include spiral galaxies by the 1939 work of Horace Babcock on Andromeda.[6] These early studies were augmented and brought to the attention of the astronomical community in the 1960s and 1970s by the work of Vera Rubin at the Carnegie Institute in Washington, who mapped in detail the rotation velocities of stars in a large sample of spirals. While Newton's Laws predict that stellar rotation velocities should decrease with distance from the galactic centre, Rubin and collaborators found instead that they remain almost constant[7] – the rotation curves are said to be "flat". This observation necessitates at least one of the following: 1) There exists in galaxies large quantities of unseen matter which boosts the stars' velocities beyond what would be expected on the basis of the visible mass alone, or 2) Newton's Laws do not apply to galaxies. The former leads to the dark matter hypothesis; the latter leads to MOND.

    MOND was proposed by Mordehai Milgrom in 1983

    The basic premise of MOND is that while Newton's laws have been extensively tested in high-acceleration environments (in the Solar System and on Earth), they have not been verified for objects with extremely low acceleration, such as stars in the outer parts of galaxies. This led Milgrom to postulate a new effective gravitational force law (sometimes referred to as "Milgrom's law") that relates the true acceleration of an object to the acceleration that would be predicted for it on the basis of Newtonian mechanics.[1] This law, the keystone of MOND, is chosen to reduce to the Newtonian result at high acceleration but lead to different ("deep-MOND") behaviour at low acceleration:

    FN = mμ(a/a0)a ........(1)

    Here FN is the Newtonian force, m is the object's (gravitational) mass, a is its acceleration, μ(x) is an as-yet unspecified function (known as the "interpolating function"), and a0 is a new fundamental constant which marks the transition between the Newtonian and deep-MOND regimes. Agreement with Newtonian mechanics requires μ(x) → 1 for x >> 1, and consistency with astronomical observations requires μ(x) → x for x << 1. Beyond these limits, the interpolating function is not specified by the theory, although it is possible to weakly constrain it empirically.[8][9] Two common choices are:

    μ(x) = x/(1+x) ("Simple interpolating function"),
    and
    μ(x) = x/√(1+x^2) ("Standard interpolating function").

    Thus, in the deep-MOND regime (a << a0):

    FN = ma^2/a0

    Applying this to an object of mass m in circular orbit around a point mass M (a crude approximation for a star in the outer regions of a galaxy), we find:

    v^4 = GMa0 .......(2)

    that is, the star's rotation velocity is independent of its distance r from the centre of the galaxy – the rotation curve is flat, as required. By fitting his law to rotation curve data, Milgrom found a0 ≈ 1.2 x 10^-10 m s^-2 to be optimal. This simple law is sufficient to make predictions for a broad range of galactic phenomena.
    Milgrom's law can be interpreted in two different ways. One possibility is to treat it as a modification to the classical law of inertia (Newton's second law), so that the force on an object is not proportional to the particle's acceleration a but rather to μ(a/a0)a. In this case, the modified dynamics would apply not only to gravitational phenomena, but also those generated by other forces, for example electromagnetism.[10] Alternatively, Milgrom's law can be viewed as leaving Newton's Second Law intact and instead modifying the inverse-square law of gravity, so that the true gravitational force on an object of mass m due to another of mass M is roughly of the form GMm/(μ(a/a0)r^2). In this interpretation, Milgrom's modification would apply exclusively to gravitational phenomena.
    [End of excerpt]
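    As an illustration of how Milgrom's law (equations (1) and (2) above) produces flat rotation curves, here is a minimal numerical sketch; the point mass is an assumed Milky-Way-like baryonic mass, and the "simple" interpolating function is used, for which the law inverts in closed form:

    import math

    G, a0 = 6.674e-11, 1.2e-10       # SI units; a0 is Milgrom's constant
    M     = 6e10 * 1.989e30          # assumed baryonic mass [kg], illustrative only
    kpc   = 3.0857e19                # metres per kiloparsec

    def mond_acceleration(r):
        # Solve gN = mu(a/a0)*a with mu(x) = x/(1+x), i.e. gN = a^2/(a0 + a),
        # for the true acceleration a: a = gN/2 + sqrt(gN^2/4 + gN*a0).
        gN = G * M / r ** 2
        return 0.5 * gN + math.sqrt(0.25 * gN ** 2 + gN * a0)

    for r_kpc in (2, 5, 10, 20, 50, 100):
        r = r_kpc * kpc
        v = math.sqrt(mond_acceleration(r) * r)   # circular orbit: v^2 = a*r
        print(f"r = {r_kpc:3d} kpc   v = {v / 1e3:6.1f} km/s")

    # Deep-MOND limit of equation (2): the curve flattens at v = (G*M*a0)^(1/4)
    print(f"flat rotation speed v = {(G * M * a0) ** 0.25 / 1e3:.1f} km/s")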




    For LCDM:
    acceleration a: a = G{M_BM + m_DM}/R^2

    For MOND:
    acceleration a: a + a_mil = a{a/a_o} = GM_BM/R^2 = v^4/(a_oR^2) for v^4 = GM_BM.a_o
    a_mil = a{a/a_o - 1} = a{a - a_o}/a_o = GM_BM/R^2 - a

    For the Newtonian acceleration a: G{M_BM + m_DM}/R^2 = a = GM_BM/R^2 - a_mil

    a_mil = -Gm_DM/R^2 = (a/a_o)(a - a_o), relating the Dark Matter to the Milgröm constant in the interpolation of a_mil,

    for the Milgröm deceleration applied to the Dark Matter; the radial independence of the rotation velocities in galactic structures then enters the Newtonian gravitation as an additional acceleration term, expressed as a function of the total mass of the galaxy and without Dark Matter in MOND.
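    A short numerical check of these identities (our own sketch; the sample acceleration is arbitrary, chosen below a_o):

    a_o = 1.2e-10          # Milgrom acceleration constant [m/s^2] (value from the excerpt)
    a   = 4.0e-11          # sample true acceleration in the deep regime [m/s^2]

    g_BM  = a * a / a_o            # GM_BM/R^2 as fixed by the MOND relation a + a_mil = a^2/a_o
    a_mil = a * (a - a_o) / a_o    # Milgröm term, negative for a < a_o
    g_DM  = a - g_BM               # Gm_DM/R^2 required by the Newtonian form a = G{M_BM + m_DM}/R^2

    print(f"a_mil      = {a_mil:.4e} m/s^2")
    print(f"Gm_DM/R^2  = {g_DM:.4e} m/s^2 (equals -a_mil)")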

    Both LCDM and MOND consider the Gravitational 'Constant' G to be constant for all accelerations and vary either the mass content (in LCDM) or the acceleration (in MOND) in the Newtonian gravitation formulation.
    The standard gravitational parameter GM in a varying mass term G(M+m) = M(G+ΔG) reduces to Gm = ΔG.M for a varying Gravitational parameter G in (G+ΔG) = f(G).

    The Dark Matter term Gm_DM can be written as Gm_DM/R^2 = -a_mil = a - a^2/a_o = ΔG.M/R^2 to identify the Milgröm acceleration constant as an intrinsic and universal deceleration related to the Dark Energy and the negative pressure term of the cosmological constant invoked to accommodate the apparent acceleration of the universal expansion (q_dS = -0.5585).

    ΔG = G_o - G(n) in a_mil = -2cH_o/[n+1]^3 = {G_o - G(n)}M/R^2 for some function G(n) descriptive of the change in f(G).

    The Milgröm constant so is not constant, but emerges as the initial boundary condition in the Instanton aka the Quantum Big Bang and is identified as the parametric deceleration parameter in Friedmann's solutions to Einstein's Field Equations in a_mil.a_o = a(a - a_o) and a_o(a_mil + a) = a^2, or a_o = a^2/(a_mil + a).

    A(n) = -2cH_o/[n+1]^3 = -2c^2/{R_H[n+1]^3} and calculates as -1.112663583x10^-9 (m/s^2)* at the Instanton and as -1.16189184x10^-10 (m/s^2)* for the present time coordinate.
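    A small evaluation sketch of A(n) (our own illustration; c is taken as 3x10^8 m/s in the starred units and H_o is inferred from the quoted Instanton value, so both are assumptions, and the present-epoch result depends on the rounded H_o and n used):

    c  = 3.0e8                      # [m/s]
    A0 = -1.112663583e-9            # quoted value of A(n) at the Instanton, n = 0 [m/s^2]
    Ho = -A0 / (2 * c)              # implied nodal Hubble rate [1/s], ~1.85e-18

    def A(n):
        return -2 * c * Ho / (n + 1) ** 3

    for n in (0.0, 1.13242):
        print(f"n = {n:<7}  A(n) = {A(n):.6e} m/s^2")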

    The Gravitational Constant G(n) = G_oX^n in the standard gravitational parameter represents a finestructure in conjunction with a subscale quantum mass evolution for a proto-nucleon mass
    m_c = alpha^9.m_Planck from the gravitational interaction finestructure constant a_g = 2πG_om_c^2/hc = 3.438304..x10^-39 = alpha^18 to unify electromagnetic and gravitational quantum interactions.
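    As a rough consistency sketch for these coupling constants (our own code; it assumes the section's G_o, c = 3x10^8 m/s, the standard reduced Planck constant and a CODATA-like alpha, so the last digits differ slightly from the alpha used in this discourse):

    import math

    hbar, c = 1.054571817e-34, 3.0e8
    G_o     = 1.111e-10               # Planck-Stoney value used in this section
    alpha   = 1.0 / 137.036           # assumed fine structure constant

    m_Planck = math.sqrt(hbar * c / G_o)   # Planck mass built with G_o [kg]
    m_c      = alpha ** 9 * m_Planck       # proto-nucleon seed mass [kg]
    a_g      = G_o * m_c ** 2 / (hbar * c) # = 2*pi*G_o*m_c^2/(h*c)

    print(f"m_c = {m_c:.4e} kg")
    print(f"a_g = {a_g:.4e}  vs  alpha^18 = {alpha ** 18:.4e}  (quoted: 3.438304e-39)")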

    The proto-nucleon mass m_c(n) so varies as the complementary finestructure to the finestructure for G, as m_cY^n, for a truly constant G_o as defined in the interaction unification.
    G(n)M(n) = G_oX^n.M_oY^n = G_oM_o(XY)^n = G_oM_o (so XY = 1) in the macro evolution of the baryonic mass seedling M_o, and G_om_c in the micro evolution of the nucleonic seed, both remain constant and describe a particular finestructure for the timeframe in the cosmogenesis when the nonluminous Dark Matter remains separate from the luminous Baryon mass.

    The DM-BM intersection coordinate is calculated for a cycletime n = H_ot = 1.4142.. (√2), or at a universal true electromagnetic age of 23.872 billion years.
    At that time, the {BM; DM; DE} mass density distribution will be {5.536%; 22.005%; 72.459%}, with G(n)M(n) assuming a constant value in the Hubble cycle.
    The Dark Energy pressure will be P_BM∩DM = -3.9300x10^-11 (N/m^2)* with a corresponding 'quasi cosmological constant' of Λ_BM∩DM = -6.0969x10^-37 (s^-2)*.

    Within a local inertial frame of measurement, the gravitational constant so becomes a function of the micro evolution of the proto-nucleon mass m_c from the string epoch preceding the Instanton.
    A localized measurement of G so engages the value of the neutron mass as the evolved m_c, in a coupling to the evolution of the macro mass seedling M_o and so to the baryonic omega
    Ω_o = M_o/M_H = 0.02803 in the critical density ρ_critical = 3H_o^2/8πG_o = 3M_H/4πR_H^3 = 3c^2/8πG_oR_H^2 for zero curvature and a Minkowski-flat cosmology.
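    A sketch of the closure quantities quoted here (our own illustration, taking G_o from this section and H_o as inferred above; M_H follows from equating the two expressions for the critical density):

    import math

    c, G_o = 3.0e8, 1.111e-10
    H_o    = 1.112663583e-9 / (2 * c)       # nodal Hubble rate inferred from A(0) [1/s]
    R_H    = c / H_o                        # Hubble radius [m]

    M_H      = c ** 2 * R_H / (2 * G_o)     # from 3H_o^2/8piG_o = 3M_H/4piR_H^3
    rho_crit = 3 * H_o ** 2 / (8 * math.pi * G_o)
    M_o      = 0.02803 * M_H                # baryon seed from Omega_o = M_o/M_H

    print(f"R_H      = {R_H:.4e} m")
    print(f"M_H      = {M_H:.4e} kg")
    print(f"rho_crit = {rho_crit:.4e} kg/m^3")
    print(f"M_o      = {M_o:.4e} kg")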

    The finestructure for G so engages both the micro mass m_c and the macro mass M_o, the latter being described in the overall Hypermass evolution of the universe as a Black Hole cosmology in a 5/11D AdS 'closed' spacetime encompassing the dS spacetime evolution of the 4/10D 'open' universe.
    Details are described in a later section of this discourse.

    The Milgröm 'constant' so relates an intrinsic Dark Energy cosmology to the macrocosmic hypermass evolution of Black Holes at the cores of galaxies and becomes universally applicable in that context.
    No modification of Newtonian gravitation is necessary if the value of a locally derived and measured G is allowed to increase to its string-based (Planck-Stoney) value of G_o = 1/k = 4πε_o = 1.111..x10^-10 string unification units [C* = m^3/s^2], relating spatial volume to angular acceleration in the gravitational parameter GM.

    The necessity for Dark Matter to harmonise the hypermass evolution remains, however, with the Dark Energy itself assuming the form of the Milgröm deceleration.


    a_mil = -2cH_o/[n+1]^3 = -{G_o - G(n)}M/R^2 = -G_o{1 - X^n}M/R^2 for the gravitational parameter GM coupled to the size of a galactic structure harbouring a central Black Hole-White Hole/Quasar power source.

    G_oM/R^2 = 2cH_o/{(1 - X^n)(n+1)^3}

    For a present n = 1.13242: (1 - X^n)(n+1)^3 = 4.073722736.., giving M/R^2 = constant = 2.48906 (kg/m^2).

    For the Milky Way barred spiral galaxy and a total BM+DM mass of 1.7x10^42 kg, this mass distribution implies a diameter of 1.6529x10^21 m or 174,594 light years, inclusive of the Dark Matter halo extension.

    For the Andromeda barred spiral galaxy and a total BM+DM mass of 3x10^42 kg, the galaxy's diameter increases to 2.1957x10^21 m or 231,930 light years for the total matter distribution.
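    A short sketch reproducing these galactic diameters (our own code; X is taken as 0.618034 to match the quoted 4.0737.. factor, and G_o, H_o as above, so the results agree with the quoted figures to within about one percent):

    import math

    c, G_o = 3.0e8, 1.111e-10
    H_o    = 1.112663583e-9 / (2 * c)
    n, X   = 1.13242, 0.618034

    factor    = (1 - X ** n) * (n + 1) ** 3          # ~4.0737
    M_over_R2 = 2 * c * H_o / (factor * G_o)         # [kg/m^2]

    ly = 9.4607e15                                   # metres per light year
    for name, M in (("Milky Way", 1.7e42), ("Andromeda", 3.0e42)):
        R = math.sqrt(M / M_over_R2)                 # radius from M/R^2 = constant
        print(f"{name}: diameter ~ {2 * R:.4e} m = {2 * R / ly:,.0f} light years")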


     


    Last edited: Jan 18, 2017
  9. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756
    Friday, 09 December 2016
    Does God Exist?

    Written by Charles Scaliger



    “We hold these truths to be self-evident,” wrote Thomas Jefferson in the Declaration of Independence, “that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.” Whatever their sectarian inclinations, the Founding Fathers (with the possible exception of Thomas Paine) would have agreed with Jefferson’s first assumption, namely, that there exists a Supreme Being from whom all rights, and the natural laws they are predicated on, originate.

    Freedom and limited government as unquestioned goods depend ultimately on the notion that God is real, and that freedom and natural rights are gifts from Him to His children, to be safeguarded by properly constituted government. But is the existence of God, an almighty cosmic lawgiver, anything more than unprovable dogma, an article of personal faith lying outside the realm of reason?
    The caricatures of Christianity offered up by its enemies notwithstanding, Christian theology has always sought to ground itself in reason. Proving the existence of God has been a fruitful exercise since the days of early Christian thinkers such as Augustine of Hippo, who sought to prove the existence of God by showing, via the inherent perfection of numbers and mathematical proofs, that man could not possibly be the highest being.


    Perhaps the best-known and most influential proofs of the existence of God were served up by that quintessential thinker of the high Middle Ages, Thomas Aquinas. With characteristic concision Aquinas laid out his Quinque Viae or “Five Ways” by which the existence of God could be proven. These ways are the unmoved mover, the first cause, the argument from contingency, the argument from degree, and the teleological argument or argument from design.
    By the first argument, Aquinas meant that all things in this world are in constant change, and every change is effected by some agency that is itself changing. Yet change cannot be infinitely recursive; somewhere far up the chain of causation, there must be some unchanging agency that is the source of and standard for all change. This unchangeable source is held to be God.

    The argument of first cause holds that all things have some cause external to themselves but, as with change, it is impossible to conceive of an infinite concatenation of cause and effect. There must be some first cause that is not the effect of some other prior cause, from which all other causes, and their effects, spring. This first cause is God.

    According to the argument from contingency, all things in our mortal experience are perishable and will only exist for a finite time span, however long. Yet if we assume an infinite past, then all things in a perishable universe should have ceased to exist. The fact that this is not so implies the existence of Something that is eternally imperishable, which is God.

    The argument from degree is one of Aquinas’ strongest. By this argument, we recognize degrees of goodness, virtue, truth, and so on, such that we can recognize, e.g., that A is better than B, or C is more correct than D. But in a universe with no absolutes, such distinctions would be impossible. Thus there must exist some ultimate, perfect standard for goodness, beauty, virtue, truth, and the like, and that perfect standard is God.

    Finally, Aquinas’ teleological argument asserts that, just as intelligent beings behave in regular, purposeful ways to achieve ends, so too do non-intelligent things behave in regular ways to bring about predictable results. A seed planted in the ground will produce a plant of determinate type (and not, say, a rock or an animal), while a stone thrown in the air will follow a predictable trajectory. But such results can only obtain if guided by some type of intelligent agency, and if the thing in question does not possess such agency, it must be guided by an outside intelligence, which is God.

    Nor was Aquinas alone in seeking to prove the greatest of all questions. A couple of American thinkers of surpassing originality and penetrating insight added proofs of their own. Charles Sanders Peirce, the brilliant philosopher, logician, and scientist who was the son of America’s first mathematical physicist and astrophysicist, Benjamin Peirce (and who is credited, alongside his father, with the invention of that most essential of modern mathematical tools, the matrix), was deeply religious after his own fashion. In one of his better-known papers, “A Neglected Argument for the Reality of God,” Peirce pointed out the remarkable and seemingly contradictory circumstance of immense variety in a universe clearly governed by law. Peirce saw law as something conceptually distinct from God Himself, observing that “the endless variety in the world has not been created by law. It is not of the nature of uniformity to originate variation, nor of law to beget circumstance. When we gaze upon the multifariousness of nature we are looking straight into the face of a ‘living spontaneity.’”

    Elsewhere, Peirce clarified his point: “The variety of the universe … which we see whenever and wherever we open our eyes, constitutes its liveliness, its vivacity. The perception of it is a direct, though darkling perception of God.”
    In other words, while laws may originate with God, in and of themselves laws tend toward uniformity and predictability. We are thus left with the need to explain diversity and spontaneity, and God must be the direct source of these.
    Another American philosopher and near contemporary of Peirce, Josiah Royce, argued that error, or human fallibility, of all things, was the strongest evidence for the existence of God. For where could men have gotten the notion of fallibility and error in the first place, much less the capacity to confront and overcome error, if not from some perfect Source? A world without God would perforce be a world in which the very notion of error would be inconceivable.
    These are but a smattering of the many arguments for the existence of God brought forth by humanity’s most ingenious intellects. None of them, of course, will move the dogmatic skeptics among us. But the fact remains that God does exist, and that He is the ultimate author of our liberty and laws.


    http://www.thenewamerican.com/cultu...&utm_content=TNA+Top+Daily+Headlines+Dec+9+16
     
  10. admin

    admin Well-Known Member Staff Member

    Messages:
    3,756

    Matter-antimatter mystery remains unsolved
    01/19/17
    By Sarah Charley
    Measuring with high precision, physicists at CERN found that a property of antiprotons perfectly mirrors that of protons.

    There is little wiggle room for disparities between matter and antimatter protons, according to a new study published by the BASE experiment at CERN.
    Charged matter particles, such as protons and electrons, all have an antimatter counterpart. These antiparticles appear identical in every respect to their matter siblings, but they have an opposite charge and an opposite magnetic property. This recalcitrant parity is a head-scratcher for cosmologists who want to know why matter triumphed over antimatter in the early universe.
    “We’re looking for hints,” says Stefan Ulmer, spokesperson of the BASE collaboration. “If we find a slight difference between matter and antimatter particles, it won’t tell us why the universe is made of matter and not antimatter, but it would be an important clue.”

    Ulmer and his colleagues working on the BASE experiment at CERN closely scrutinize the properties of antiprotons to look for any miniscule divergences from protons. In a paper published today in the journal Nature Communications, the BASE collaboration at CERN reports the most precise measurement ever made of the magnetic moment of the antiproton.
    “Each spin-carrying charged particle is like a small magnet,” Ulmer says. “The magnetic moment is a fundamental property which tells us the strength of that magnet.”
    The BASE measurement shows that the magnetic moments of the proton and antiproton are identical, apart from their opposite signs, within the experimental uncertainty of 0.8 parts per million. The result improves the precision of the previous best measurement by the ATRAP collaboration in 2013, also at CERN, by a factor of six. This new measurement shows an almost perfect symmetry between matter and antimatter particles, thus further constricting leeway for incongruencies which might have explained the cosmic asymmetry between matter and antimatter.

    The measurement was made at the Antimatter Factory at CERN, which generates antiprotons by first crashing normal protons into a target and then focusing and slowing the resulting antimatter particles using the Antiproton Decelerator. Because matter and antimatter annihilate upon contact, the BASE experiment first traps antiprotons in a vacuum using sophisticated electromagnetics and then cools them to about 1 degree Celsius above absolute zero. These electromagnetic reservoirs can store antiparticles for long periods of time; in some cases, over a year. Once in the reservoir, the antiprotons are fed one-by-one into a trap with a superimposed magnetic bottle, in which the antiprotons oscillate along the magnetic field lines. Depending on their North-South alignment in the magnetic bottle, the antiprotons will vibrate at two slightly different rates. From these oscillations (combined with nuclear magnetic resonance methods), physicists can determine the magnetic moment.

    The challenge with this new measurement was developing a technique sensitive to the miniscule differences between antiprotons aligned with the magnetic field versus those anti-aligned.
    “It’s the equivalent of determining if a particle has vibrated 5 million times or 5 million-plus-one times over the course of a second,” Ulmer says. “Because this measurement is so sensitive, we stored antiprotons in the reservoir and performed the measurement when the antiproton decelerator was off and the lab was quiet.”
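    The precision figures quoted in this article, restated as a small arithmetic sketch (our own illustration):

    one_in_five_million = 1e6 / 5e6            # the vibration analogy, in parts per million
    base_ppm            = 0.8                  # this measurement's uncertainty [ppm]
    atrap_2013_ppm      = base_ppm * 6         # previous best, a factor of six less precise
    planned_ppb         = (base_ppm / 800 * 1000, base_ppm / 200 * 1000)  # goal: ~1 to 4 ppb

    print(f"one vibration in 5 million ~ {one_in_five_million:.1f} ppm")
    print(f"BASE result: {base_ppm} ppm;  ATRAP 2013: {atrap_2013_ppm:.1f} ppm")
    print(f"planned trap upgrade: ~{planned_ppb[0]:.0f} to {planned_ppb[1]:.0f} ppb")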

    BASE now plans to measure the antiproton magnetic moment using a new trapping technique that should enable a precision at the level of a few parts per billion—that is, a factor of 200 to 800 improvement.
    Members of the BASE experiment hope that a higher level of precision might provide clues as to why matter flourishes while cosmic antimatter lingers on the brink of extinction.
    “Every new precision measurement helps us complete the framework and further refine our understanding of antimatter’s relationship with matter,” Ulmer says.

    http://www.symmetrymagazine.org/article/matter-antimatter-mystery-remains-unsolved
     
    Last edited: Jan 22, 2017
