Regularization (in physics), Renormalization, renormalization group, **zeta function**

By analyzing the answers of experts in light of this base-2 exponential notation model, we discover…

“Renormalization specifies relationships between parameters in the theory when the parameters describing large distance scales differ from the parameters describing small distances. Physically, the pileup of contributions from an infinity of scales involved in a problem may then result in infinities. When describing space and time as a continuum, certain statistical and quantum mechanical constructions are ill-defined. To define them, this continuum limit—the removal of the “construction scaffolding” of lattices at various scales—has to be taken carefully, as detailed below. Renormalization procedures are based on the requirement that certain physical quantities are equal to the observed values.”

One of the electroweak theory’s fundamental aspects was the prediction of the existence of the Higgs boson.

“The W bosons are named after the **w**eak force. Weinberg named the additional particle the “Z particle”,^{[3]} and later gave the explanation that it was the last additional particle needed by the model. The W bosons had already been named, and the Z bosons have **z**ero electric charge.”^{[4]}

“The two **W bosons** are verified mediators of neutrino absorption and emission. During these processes, the W boson charge induces electron or positron emission or absorption, thus causing nuclear transmutation. The Z boson is not involved in the absorption or emission of electrons and positrons.

Weinberg’s model, now known as the electroweak unification theory, had the same symmetry structure as that proposed by Glashow in 1961: hence both models included the then-unknown weak interaction mechanism between leptons, known as neutral current and mediated by the Z boson.

The 1973 experimental discovery of weak neutral currents [Haidt, D. (2004). “The discovery of the weak neutral currents”. CERN Courier] (mediated by this Z boson) was one verification of the electroweak unification. The paper by Weinberg in which he presented this theory is one of the most cited works ever in high energy physics.[16]

Electroweak interaction

In particle physics, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 100 GeV, they would merge into a single electroweak force. Thus, if the universe is hot enough (approximately 10^{15} K, a temperature not exceeded since shortly after the Big Bang), then the electromagnetic force and weak force merge into a combined electroweak force. During the electroweak epoch, the electroweak force separated from the strong force. During the quark epoch, the electroweak force split into the electromagnetic and weak force.
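The correspondence between the ~100 GeV unification energy and the ~10^{15} K temperature quoted above follows from the thermal relation E = k_B·T. A minimal sketch of that conversion (the function name is ours, for illustration):

```python
# Convert a characteristic particle energy (in eV) to the temperature
# at which k_B * T equals that energy.
K_B = 1.380649e-23      # Boltzmann constant, J/K (CODATA)
EV = 1.602176634e-19    # one electron-volt in joules

def energy_to_temperature(energy_ev: float) -> float:
    """Temperature (K) corresponding to a thermal energy given in eV."""
    return energy_ev * EV / K_B

# The ~100 GeV electroweak unification energy:
t_unification = energy_to_temperature(100e9)
print(f"{t_unification:.2e} K")  # ~1.16e+15 K, i.e. roughly 10^15 K
```

The result, about 1.16×10^{15} K, is why "approximately 10^{15} K" is the usual figure for the electroweak unification temperature.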

Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interaction between elementary particles.[1][2] The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron.

In 1999, Gerardus ‘t Hooft and Martinus Veltman were awarded the Nobel prize for showing that the electroweak theory is renormalizable.

The Electroweak Epoch requires a temperature of 2×10^{12} kelvin to create the quark–gluon plasma (QGP), and it appears that this requires about 175 MeV per particle. That temperature is reached between notations 136 and 137, when the universe is less than one-hundredth of a second from its start.
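The 175 MeV per particle and 2×10^{12} K figures above are the same quantity expressed two ways; T = E / k_B makes the check explicit:

```python
# Verify that 175 MeV per particle corresponds to roughly 2x10^12 K.
K_B = 1.380649e-23      # Boltzmann constant, J/K
EV = 1.602176634e-19    # one electron-volt in joules

t_qgp = 175e6 * EV / K_B   # quark-gluon plasma threshold temperature
print(f"{t_qgp:.2e} K")    # ~2.03e+12 K
```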

The Electroweak Epoch will also be downgraded from an epoch to a *transition process*. Given its temperature requirements, it is guessed to begin between notations 136 and 137. The Quark Epoch should begin soon thereafter, though that placement is still up for grabs. Much more analysis is required (https://bbludata.wordpress.com/43-48/). We will be using the temperature requirements of the big bang theorists to determine some of these placements.

In the 1980s, he was a pioneer in the theory of electroweak baryogenesis. In 1985, his influential work with Valery Rubakov and Mikhail E. Shaposhnikov estimated the rate of anomalous electroweak processes that violate baryon-number conservation in the cosmic plasma of the early universe.^{[1]}

Yet, to be sure we are all on the same page: the universe is still so young that we have no instrumentation to measure such a short duration. Our first measurement of a duration will not happen until Notation 84.
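Under this site's base-2 model (our assumption here: notation *n* is Planck time doubled *n* times), Notation 84 can be checked against current instrumentation:

```python
# Base-2 notation model: duration at notation n is Planck time * 2^n.
# (This doubling rule is the model's assumption, not established physics.)
PLANCK_TIME = 5.391247e-44  # seconds (CODATA)

def notation_duration(n: int) -> float:
    """Duration in seconds at the nth base-2 notation."""
    return PLANCK_TIME * 2**n

d84 = notation_duration(84)
print(f"{d84:.3e} s")  # ~1.043e-18 s
```

The result, about 10^{-18} s, is the attosecond range, which is where today's shortest laboratory-measured durations sit; that is why the model places the first measurable duration at Notation 84.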

In the *QE* model, time is discrete, quantized, and locally defined *by-and-within its notation*. It will be among the most difficult definitions to explain, and it may take many successive approximations over several years.

**Grand Unification and the Electroweak Epochs renamed as processes**: Because entities and things require an amount of space that only becomes available from the 67th notation and above, the first 60 to 66 notations are foundational to all notations. Using the analogy of a birthing process, the first 60 or so notations are all the forms-and-functions, then processes-and-procedures, and then relations-and-systems that precede the actual birthing event. Here the finite-infinite relation creates the foundational order, the most basic relations, and many dynamical systems. The analogy of a birthing event will be explored further. Metaphors are a most fundamental educational tool grounded within homogeneity and isotropy.

We postulate that the Grand Unification processes continue beyond the 67th notation as specific unification processes. A kind of *de facto* unification continues within all the ratios, even though there are now truly entitive manifestations, which are most often best described by the big bang theorists.

**Electroweak processes**: Now the heat requirements will determine when these processes begin to manifest and the measurements given by the big bang theorists can then be tweaked and integrated within the Quiet Expansion model. The analysis begins within the notational cluster from 133 to 138.

Maybe:

**Howard Mason Georgi III** (born January 6, 1947) is an American theoretical physicist and the Mallinckrodt Professor of Physics and Harvard College Professor at Harvard University.^{[1]} He is also Director of Undergraduate Studies in Physics and has been Co-Master of Leverett House with his wife, Ann Blake Georgi, since 1998. His early work was in Grand Unification and gauge coupling unification within SU(5) and SO(10) groups (see Georgi–Glashow model).

Unparticle physics is a theory that there exists matter that cannot be explained in terms of particles, because its components are scale invariant. Howard Georgi proposed this theory in the spring of 2007 in the papers “Unparticle Physics” and “Another Odd Thing About Unparticle Physics”.^{[3]}^{[4]}

The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos; however, in the final version of their paper^{[4]} they opted for the less anatomical *GUM* (Grand Unification Mass). Nanopoulos later that year was the first to use^{[5]} the acronym in a paper.^{[6]}

“As Steven Weinberg tells the story, there was Isaac Newton’s theory of gravity, ‘intended to explain the movements of the celestial bodies and how such things as apples fall to the ground; and there was James Clerk Maxwell’s account of electromagnetism as a way to explain light, radiation, magnetism, and the forces that operate between electrically charged particles.'” The quantum revolution “introduced two new forces, both operating at very short range, within the nucleus of the atom. The strong force holds the particles of the nucleus together and is very strong… The other is known as the weak force, which is responsible for radioactive decay. And so, until the 1960s there were four forces that needed to be reconciled: gravity, electromagnetism, the strong nuclear force, and the weak radioactive force” (Peter Watson, *The Modern Mind*).

At Berkeley in 1967, “Weinberg produced a gauge theory that correctly predicted electromagnetic and weak nuclear forces. This was later to become known as the electroweak theory. In his paper, ‘A model of leptons’, he showed that although electromagnetism is much stronger than the weak force of everyday energies, the only way to devise a theory of the weak force is to include the electromagnetic force. Weinberg showed how what was seemingly impossible could be achieved and the forces could be unified through the interchange of particles in spite of the difference in their strengths. Abdus Salam had independently reached the same conclusions and what became known as the Weinberg-Salam model was a major advance on earlier models that had originally been applied to leptons… In 1979 Weinberg shared the Nobel Prize for Physics for this work with Salam and his old school friend Sheldon Glashow, who had extended the work that Weinberg and Salam had independently developed” (Chris Cooper, *Physics*).

The discovery and description of the electroweak force has been confirmed experimentally and now is one of the essential elements of the “standard model” of particle physics. “By 1988, Weinberg’s three-page paper in the *Physical Review* was the most frequently cited paper in elementary particle physics since the end of World War II” (Lightman, *The Discoveries*). *Particle Physics: One Hundred Years of Discoveries*; Brian Greene, *The Elegant Universe*

The great debate between defining notions of space and time as real objects themselves (absolute), or mere orderings upon actual objects (relational), began between physicists Isaac Newton (via his spokesman, Samuel Clarke) and Gottfried Leibniz in the papers of the Leibniz–Clarke correspondence.

Arguing against the absolutist position, Leibniz offers a number of thought experiments with the purpose of showing that there is contradiction in assuming the existence of facts such as absolute location and velocity. These arguments trade heavily on two principles central to his philosophy: the principle of sufficient reason and the identity of indiscernibles. The principle of sufficient reason holds that for every fact, there is a reason that is sufficient to explain what and why it is the way it is and not otherwise. The identity of indiscernibles states that if there is no way of telling two entities apart, then they are one and the same thing.

The example Leibniz uses involves two proposed universes situated in absolute space. The only discernible difference between them is that the latter is positioned five feet to the left of the first. The example is only possible if such a thing as absolute space exists. Such a situation, however, is not possible, according to Leibniz, for if it were, a universe’s position in absolute space would have no sufficient reason, as it might very well have been anywhere else. Therefore, it contradicts the principle of sufficient reason, and there could exist two distinct universes that were in all ways indiscernible, thus contradicting the identity of indiscernibles.

Standing out in Clarke’s (and Newton’s) response to Leibniz’s arguments is the bucket argument: Water in a bucket, hung from a rope and set to spin, will start with a flat surface. As the water begins to spin in the bucket, the surface of the water will become concave. If the bucket is stopped, the water will continue to spin, and while the spin continues, the surface will remain concave. The concave surface is apparently not the result of the interaction of the bucket and the water, since the surface is flat when the bucket first starts to spin, it becomes concave as the water starts to spin, and it remains concave as the bucket stops.

In this response, Clarke argues for the necessity of the existence of absolute space to account for phenomena like rotation and acceleration that cannot be accounted for on a purely relationalist account. Clarke argues that since the curvature of the water occurs in the rotating bucket as well as in the stationary bucket containing spinning water, it can only be explained by stating that the water is rotating in relation to the presence of some third thing—absolute space.

Leibniz describes a space that exists only as a relation between objects, and which has no existence apart from the existence of those objects. Motion exists only as a relation between those objects. Newtonian space provided the absolute frame of reference within which objects can have motion. In Newton’s system, the frame of reference exists independently of the objects contained within it. These objects can be described as moving in relation to space itself. For many centuries, the evidence of a concave water surface held authority.

Randomness:

Introduce a new Lagrangian based on simple logic, simple numbers and simple geometries.

All point to the Quiet Expansion model.

If in some manner verifiable, the first 67 notations could become a new field of study that we are calling *hypostatic studies: from perfections to imperfections and symmetry breaking*.

We’ll take this nice and slow because we are going to be making some rather unusual statements.

But, this chart is different; every notation seems to define a current domain of activity and there is nothing historical or past; it is an active imprint within a current notation and it helps define the universe as it is.

Discrete space-time, Rodolfo Gambini, Jorge Pullin

“…construct gravitational theories on discrete space-times, usually referred to as the “consistent discretization” approach.” https://arxiv.org/abs/gr-qc/0505023

Thomas Campbell, a former NASA physicist, claims that space-time is granular.

Carlo Rovelli, Zakopane lectures on loop gravity, introductory lectures on loop quantum gravity (LQG). August 4, 2011

Our working premise begins with what is known as a space-time singularity, the dynamic transformation nexus between the finite and infinite where there is a complete unification of all the forces of nature, i.e. the Planck base units (aka the Planck scale). We postulate that this unification is extended through dynamic working ratios throughout all 200+ notations, from the first moment of creation to the present day.
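The "200+ notations" count itself can be checked: the number of base-2 doublings from Planck time to the present age of the universe is just log₂ of their ratio. A quick sketch (the 13.8-billion-year age is the standard big bang figure):

```python
import math

PLANCK_TIME = 5.391247e-44            # seconds (CODATA)
SECONDS_PER_YEAR = 3.1557e7           # Julian year
AGE_OF_UNIVERSE = 13.8e9 * SECONDS_PER_YEAR  # ~4.35e17 s

# Number of base-2 doublings from Planck time to today:
n_notations = math.log2(AGE_OF_UNIVERSE / PLANCK_TIME)
print(f"{n_notations:.1f}")  # ~202.3 doublings
```

About 202 doublings span the whole interval, which is where the "200+" (elsewhere "201+") notation count comes from.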

The key to the QE: More than just the bbt’s four forces of nature within the Planck scale, we assume these four are encapsulated within all five Planck base units and the constants that define them, and that this unification is carried through all 201+ notations. And, as we have noted, the Planck base units are defined by length, time, mass, temperature and charge; and, these are further defined by the speed of light (or special relativity), the gravitational constant (or general relativity), the reduced Planck constant (ħ, or quantum mechanics), the Coulomb constant (defined through the vacuum permittivity ε0, or electric charge or electromagnetism), and the Boltzmann constant (k_B, or temperature).
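The dependence of the five Planck base units on those constants can be made concrete; each unit is a fixed combination of c, G, ħ, ε₀ and k_B:

```python
import math

# Fundamental constants (SI, CODATA 2018)
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J*s
eps0 = 8.8541878128e-12 # vacuum permittivity, F/m
k_B = 1.380649e-23      # Boltzmann constant, J/K

# The five Planck base units, each built from the constants above:
planck_length = math.sqrt(hbar * G / c**3)                # ~1.616e-35 m
planck_time   = math.sqrt(hbar * G / c**5)                # ~5.391e-44 s
planck_mass   = math.sqrt(hbar * c / G)                   # ~2.176e-8 kg
planck_charge = math.sqrt(4 * math.pi * eps0 * hbar * c)  # ~1.876e-18 C
planck_temp   = math.sqrt(hbar * c**5 / (G * k_B**2))     # ~1.417e32 K
```

Note how each base unit draws on the constant named in the text: length, time and mass on c and G (relativity) plus ħ (quantum mechanics), charge on ε₀ (electromagnetism), and temperature on k_B.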