The Incompleteness of Science
Introduction
There is more to the topic of the relationship between faith and science than the conclusion, now more or less accepted, that science has so far failed to provide a comprehensive account of reality. Rather, we will see in this article that there is a suggestion that science itself points to some power that must be beyond it. This would mean not only that science does not disprove the existence of God, but that science actually demonstrates the necessity of the theistic view. We will be careful not to make a case stronger than it should be, and so at the outset we concede that when we discuss things at the edge, so to speak, of human knowledge, which is the kind of thing we will be dealing with here, our certainty in stating anything is limited; we are, as it were, stretching our epistemic limits, our capacity to know, or to comprehend what it is to know.
Why Do We Have Movement, & What Is It?
Movement is the answer to the question (which strangely few people ask) of why the Universe does not simply fall flat on its face rather than actually do something at all. I remember asking this of waves a very long time ago: why do waves travel? At this much later stage in my understanding, I might at least begin to describe an approach to the difficulty of the issue. A field has a value at every point in space, which means that it exists, in general, everywhere in space. We do not know of anything that would inhibit this omnipresence of fields, because they are quantum probabilistic concepts. Even if you said a field were very unlikely to be somewhere, that would still imply some possibility, however small, that it was in that place, and a possibility is all that one requires in quantum. It is precisely possibilities such as these that are responsible for our entire universe, after all.

But waves are merely properties of a field, reflecting particular values of points in that field in space. When these values take on a pattern of gradual wax and wane, like the people doing the wave at a football match gradually rising and then settling back into their chairs, we get a wave of that field. Fields tend to be like football spectators in the manner in which they transmit energy between their different points, and so they tend to be wavy in the manner in which they move. If those same spectators did not synchronise their motion, they would not create the illusion of anything moving at all. When they do synchronise their movement, it is still hard to say what exactly is moving; certainly all the coordinate points are staying exactly where they are, just as each individual spectator respectfully restricts themselves to the seat they have paid for. Were these points discrete, and without connection in a fundamental sense, then they could not possibly influence other points. For example, if you screened off each spectator from the next, there would be no possible means for them to produce a coordinated effect. From this we can only assume that there are essential connections between these points. However, if we are dealing with fundamental issues, we cannot simply assume some ghostly influences, because those are what we are trying to explain in the first place. As a consequence, the discrete points themselves cannot be fundamental to reality; rather, they can only be descriptive of a continuous underlying fabric. In order to describe this underlying fabric fully, at least for our present purposes, which are represented by quantum, it seems necessary to describe parts of it that are a Planck length apart, which gives the full-resolution picture without any pixelation.

So now we can give the reason that waves move: it is because the reality which underlies those waves jiggles like jelly. Because what one part of it is doing affects what another part of it is doing, since it is all connected as one body, that effect is transferred between parts in these wavy ripples, like your belly fat passing a wobble from the left side to the right when you wiggle or try to dance after a heavy meal. How could it possibly not move, unless there were a disconnect between the two sides, or between all the little choice cuts?
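To make the football-stadium picture a little more concrete, here is a minimal numerical sketch (my own toy illustration, not anything from a textbook): a one-dimensional "field" stored as an array of values, each point coupled only to its nearest neighbours. Stepping the discrete wave equation forward, a bump travels along the line even though no individual point ever leaves its seat.

```python
# A toy 1-D field: an array of values, each point coupled only to its
# nearest neighbours.  Stepping the discrete wave equation forward shows a
# bump travelling along the line even though no point ever leaves its "seat".
import numpy as np

n_points, c, dx, dt = 200, 1.0, 1.0, 0.5        # grid size, wave speed, spacings
u_prev = np.exp(-0.05 * (np.arange(n_points) - 50.0) ** 2)   # initial bump at index 50
u_curr = u_prev.copy()                                        # field starts at rest

for _ in range(150):
    # Each point is nudged by the difference between it and its neighbours;
    # remove this coupling and nothing propagates at all.
    lap = np.roll(u_curr, 1) - 2 * u_curr + np.roll(u_curr, -1)
    u_next = 2 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u_curr = u_curr, u_next

print("the bump's peak is now near index", int(np.argmax(u_curr)))   # no longer at 50
```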
So movement results from the pervasiveness and continuity of the underlying fabric of reality, which of course we do not yet know the nature of, but we seem to have at least a limited ability to describe certain differences in magnitude between various points in it, and these appear as wave-like differences. That is the best summary of why we see this primordial motion in reality, this motion which is seemingly the reason for every other movement, including the formation of stars and galaxies and the movement that we are most familiar with, which is life itself and the animation of matter. It is waves that rumble through an imperceptible substance for reasons no more or less mysterious than the waves that ripple across the ocean or through your Jell-O. Do bear in mind that studying the scientific properties of jelly is a bona fide scientific field; jelly is a mysterious substance in its own right, somewhere between solids and liquids. The reason jelly behaves the way it does is that its molecules are combined into lengthy chains, which is what gives rise to its interconnectedness, which is perfect for our purposes of trying to analogise the interconnected fabric that seemingly underlies our own reality. We live inside something like a Ben Elf "jelly flood" scenario caused by a "too much magic" situation, courtesy of Nanny Plum, for those of you familiar with the popular children's animation. It's a delicious analogy, really.
What is the nature of the substance of such a field? Well, we have already described the nature of its variations, which are wave-like propagations, but what is it that is being propagated? The best answer is "probabilities". The gel-type situation at the base of reality has waves in it, and the reason the waves travel through it is that it is, essentially, one substance, one fabric, rather than anything discontinuous. Were the contrary the case, there would be no reason for the waves to propagate uniformly and predictably. For example, if you put two jellies side by side, the manner in which a wave propagated from one to the other would become unpredictable, if it propagated at all. But that does not tell us why the gel does not sit still. The reason for that is apparently its uncertain nature. When something is uncertain, that gives rise to possibilities, which are probabilities. The waves are waves of probability that surge through the gel, and the indication that they are probability waves is the fact that they obey Heisenberg's uncertainty relation. This means that the best description of reality is that it is an "uncertain jelly/gel", or, probably more positively branded, a "Gel of Possibility". This gel has the essential property of accommodating possibility; that is its essential flavour. So if someone asked what it tasted like, the answer would simply have to be "everything".
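For reference, the relation being appealed to here is usually written as a pairing of position and momentum, with ħ the reduced Planck constant:

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}.
\]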
Now we would still have to ask what this gel is made up of. All we have as candidates are the particle fields that we already know of. Of these, the photon field, the Higgs field, and the dark matter or dark energy fields are worth some consideration, because all of them seem to have certain invariant or unchanging properties. In reality the answer might be all of them, or more likely some underlying reality or field that explains all of them, and the others. Now the question really is whether all of the fields are somehow united in producing this probability wave, and the answer to that is probably affirmative. Separating these fields seems problematic, because the bosonic fields seem, by themselves, to behave differently from the fermionic fields as a whole, as well as, perhaps to a lesser extent, differently from each other. Mainly, the bosonic fields seem largely to do "whatever they want". Photons just whiz along at a constant velocity as though they were standing still at a certain speed, as though that speed were their fundamental characteristic; the Higgs is, well, like nothing else. It affects other particles while remaining itself unaffected, and does so to varying extents, and in doing so contributes to those particles at least one of their differentiating properties, their mass.
This is significant because we know of nothing else that can literally impart to, or be responsible for, a property of what we regard as a fundamental particle of reality, which is what makes the Higgs so incredible. What is unique about the Higgs field is that its "resting" state is not ground zero, which you would think was the definition of a resting state. The unique property of the Higgs is that its rest state is a non-zero value.
Lastly, these bosonic waves are really not probabilistic; they are just uniform and do not seem to suffer the vagaries of uncertainty. They seem to lose energy only because they get stretched by space itself, which is incredible because it seems to indicate that they are part of space itself, expanding along with it. That is the reason they diminish in frequency (this is certainly true at least of photons). However, do they also decrease in amplitude?
Uncertainty Principle
The root of uncertainty lies essentially in the nature of waves, and this is key to a correct understanding of what both uncertainty and waves are. The thing about waves is that they extend infinitely on either side; that is, a wave is simply defined by a frequency and an amplitude, and there is nothing in it about ending where the page or the laptop monitor ends.
Now even before we start, we really must note that, in spite of all the hullabaloo about the surprisingness of the principle, there is certainly a sense in which it is also not at all unexpected. Some, or even many, of us will have thought about this intuitively ourselves. When you are driving a car, for example, how is the instantaneous speed measured, if it is constantly changing? Even if you measure it between two really close points, it could, and I would say necessarily would, be changing between those points too, because there are always at the very least small, unavoidable, immeasurable effects of engine, road, weather and other intangible conditions affecting the speed in the background. What is the solution? To measure the velocity at an instant. But the car does not move in an instant, so how do you measure it? If you locate the car exactly, you have lost the speed, because the car traversed that single location in no time at all. If you do measure some speed, you have had to specify at least two locations, and usually the points in between them as well (the way mathematics handles this, of course, is calculus: taking the limit over infinitesimal changes, the slope of the curve at any point gives the instantaneous velocity).
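A tiny numerical sketch of that calculus point (my own toy example, with a made-up position function): the average speed over ever-shorter intervals closes in on a single number, which is exactly what the derivative packages up as the instantaneous speed.

```python
# A made-up position function for a car whose speed is always changing.
def position(t):
    return 5.0 * t ** 2

t0 = 2.0
for dt in [1.0, 0.1, 0.01, 0.001, 0.000001]:
    avg_speed = (position(t0 + dt) - position(t0)) / dt   # average over the interval
    print(f"interval {dt:g} s  ->  average speed {avg_speed:.6f}")
# The averages close in on 20.0, the instantaneous speed dx/dt = 10*t at t = 2,
# which is the "slope of the curve at a point" that calculus delivers.
```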
Mathematically, wave functions can be represented as complex exponentials, equivalent to sine and cosine functions involving imaginary numbers, whose values oscillate around a mean. This means that there is no specific location for the energy of such a wave, and this sets up an uncertainty pairing (one of many). What is represented on the page is its momentum, which is simply given by Planck's constant divided by the wavelength lambda, a simple inverse relation (the de Broglie relation); the corresponding energy is given by Planck's equation, E = hν.
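Written out, the standard relations being gestured at here, with the wave itself in its usual complex-exponential form:

\[
\psi(x,t) = A\,e^{\,i(kx - \omega t)}, \qquad p = \frac{h}{\lambda} = \hbar k, \qquad E = h\nu = \hbar\omega.
\]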
However, when we superimpose several waves of varying frequency, the sum behaves differently: the resulting wave can be described as a set of amplitudes, one for each contributing frequency. Superimposing a large (in the limit, infinite) number of waves, rightly chosen, in this manner results in a combined wave whose greatest amplitude sits at a particular location, with the amplitudes dropping off on either side. This means that the energy of that superimposed set of waves is now concentrated around a single point.
Thus the composite waveform gains a "location", which is that specific point, but in doing so it loses a definite momentum, because a collection of amplitudes across many frequencies is not a single frequency in itself; we have lost the single wavelength whose peaks and troughs we would need to measure. This trade-off between the position picture and the frequency picture is captured by the "Fourier transform", something that every engineering student learns to perform because of its widespread applicability in electronics. For example, this is precisely how music, which is a complex of waves, can be represented digitally, that is, as a set of amplitudes at every frequency. Underlying this phenomenon is the inherent uncertainty between the position and frequency of waves.
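Here is a rough numerical sketch of that superposition (illustrative only; the numbers are arbitrary): adding up many cosines whose wavenumbers are spread around a central value produces a packet concentrated in position, and narrowing the spread in wavenumber widens the packet, which is the Fourier trade-off in miniature.

```python
# Superpose many cosines whose wavenumbers are spread around k0: the result is
# a packet concentrated in position.  Numbers here are arbitrary.
import numpy as np

x = np.linspace(-50.0, 50.0, 2001)
k0, sigma_k = 2.0, 0.2                              # centre wavenumber and its spread
ks = np.linspace(k0 - 1.0, k0 + 1.0, 401)
weights = np.exp(-((ks - k0) ** 2) / (2.0 * sigma_k ** 2))

packet = sum(w * np.cos(k * x) for w, k in zip(weights, ks))

inside = x[np.abs(packet) > 0.5 * np.abs(packet).max()]     # where the packet is "big"
print("packet concentrated within roughly", round(inside.max() - inside.min(), 1), "units of x")
# Shrink sigma_k (a narrower spread of frequencies) and the packet widens;
# enlarge it and the packet sharpens -- the position/frequency trade-off.
```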
Every other state of a wave is a spectrum of uncertainty in between these two "certain" extremes, and that is where we experience all of reality. If the second phase, the summated wave, seems strange and difficult to accept, it is somewhat comforting to remember that we have already accepted the first state, an infinite sine function built on an imaginary number, anyway. What could we expect when we combine strangeness with strangeness but more strangeness? At least in doing so we have, in a manner, encapsulated and tamed the strangeness between these two extremes, so that we can now assess its effects and implications.
So it seems we can draw a startling conclusion. If quantum mechanics is reflective of our reality at a fundamental level, then the reason we calculate and observe it to have this incredible uncertainty and probability built into it, a reason we can see from a back-of-the-envelope calculation, is that the simple curvy lines are mathematically products of imaginary numbers, and these curvy lines are the waves that make up the quantum fields that constitute reality, at least as far down as we can perceive it. Imaginary numbers should not exist, and yet, since they are evidently part of the reason that we exist, is it any surprise that our own existence is constituted by this quantum strangeness?
Uncertainty as the Reason for the "Foaminess" of Spacetime
But what is important to appreciate here is that it is the inherent uncertainty of the simple wave form which translates into the foamy nature of space itself, which is a tremendous insight.
If we combine the uncertainty principle with general relativity: the relativity equation has curvature or geometry on one side and mass/energy on the other ("matter tells space how to curve, space tells matter how to move", as John Wheeler put it). This means that we can substitute the uncertainty in momentum in the uncertainty relation with an uncertainty in curvature, giving an uncertainty relationship between position and curvature.
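For reference, the Einstein field equations in their usual form, with the curvature (the Einstein tensor) on the left and the energy-momentum on the right:

\[
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}.
\]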
The shortest length of space that you can measure the curvature of is the Planck length. This is because, in measuring it, the energy you require curves the space by as much as the space was curved in the first place (I am not sure this is the right explanation, but that is how Matt describes it in the video). Above this length, the closer you get to the Planck length, the more uncertain the calculation becomes for the same reason, and that uncertainty reaches its maximum at the Planck length, because you basically cannot make a reasonable measurement of the Planck length itself. And now for the final part of the explanation, of which I am also not convinced: since curvature is uncertain, and "any and every possible geometry is present", the mass and energy are also uncertain, and therefore you get virtual particles and so on.
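For reference, the Planck length is built out of the same three constants that appear later in the Planck mass:

\[
\ell_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \text{m}.
\]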
Here’s the video:
Imaginary numbers
The Schrödinger equation itself contains an imaginary number. This was quite a shocking and, in more ways than one, uncomfortable discovery when it was first made. But this was what turned a simple equation into a wave equation. Freeman Dyson says of it: "the equation came as a complete surprise, to Schrödinger as well as to everyone else. And that square root of minus one means that nature works with complex numbers and not with real numbers... It is the basis of all of chemistry and most of physics. It turns out that the Schrödinger equation describes completely everything we know about the behaviour of atoms... And Schrödinger found to his delight that the equation has solutions corresponding to the quantized orbits in the Bohr model of the atom. Suddenly it became a wave equation instead of a heat conduction equation. Schrödinger put the square root of minus one into the equation, and suddenly it made sense." Heisenberg said of "(the wave function) psi. What is unpleasant here, and directly to be objected to, is the use of complex numbers."
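For reference, the one-dimensional, non-relativistic time-dependent Schrödinger equation, with the offending i out in front:

\[
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi}{\partial x^{2}} + V(x)\,\psi.
\]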
It is not difficult to see the problem with complex numbers. If I asked you the meaning of zero, it is not really hard; "nothing" makes perfect sense to us. For example, if you accused someone of theft and asked what they took, that would likely be their answer. Even a negative number makes intuitive sense: if I owed you three apples, I could also say that I possessed negative three apples; at least half of book-keeping is negative numbers. So here a negative number really has a real-world equivalent: I would owe you three apples only if I had consumed three real apples that were yours, or services to that order. However, what would it mean for a square to be negative? If I wanted to tell you that with interest rates going up I now owed you three times the number of apples, I would multiply the debt by a positive three and end up with a bigger negative number. If I wanted to say that for some reason you now owed me, instead of me owing you, I would multiply by a negative, so that the double negative reversed the polarity. For example, if I multiplied by negative one, I would be implying that you now owed me the three apples instead of the other way round, while if I multiplied by negative three, you would now owe me nine. Neither of those scenarios yields a negative square, that is, two factors of identical sign that yield a negative. That is just it: two factors of identical sign always yield a positive, and we have just described why this is intuitively true even in the double-negative case. This is already a problem for mathematics, because the fundamental theorem of algebra, which Gauss proved and which is foundational to the subject, requires every polynomial equation to have a root, and an equation as simple as "x squared equals minus one" has no root among the real numbers.
The way to visualise imaginary numbers is to place them on an imaginary axis that intersects the real number line (the line of real numbers, negatives up to zero, then positives). Take "i" to be the square root of minus one, by convention. Starting from "1" on the positive side of the number line, when we multiply by "i" we get something that is neither positive nor negative nor zero, so we can conveniently plonk it onto our imaginary axis, which for ease of drawing we plot at right angles to the real one, although in "reality" we do not know what that means; numbers do not have dimensions, there is no "y" axis, that is, there is not even a second dimension, let alone multiple ones. Next, though, multiplying i by itself we get minus one, and we drop back into reality on the negative side. This is incredible, because multiplying by i a third time, which gives its cube, we are back on the la-la axis on the opposite side, with minus i. Repeating this, we are able to go round in an infinitely satisfying circle, from positive real to positive imaginary to negative real to negative imaginary and back to positive real again. This ostensibly solves the problem of not being able to have a negative square: now we can have it, the factors are "i" and we do have the negative product; it is just reached via an imaginary axis. But the reason it seems to make sense to say that this axis is "vertical" is that there is no increase in magnitude; just as when you go from the x axis to the perpendicular y axis, all we have is a change in direction, not magnitude.
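A quick sketch of that cycle in code (my own illustration): repeated multiplication by i walks 1 → i → −1 → −i → 1, a pure rotation with no change in size.

```python
# Repeated multiplication by i walks 1 -> i -> -1 -> -i -> 1: a rotation, not a stretch.
i = complex(0, 1)

value = 1 + 0j
for power in range(1, 9):
    value *= i
    print(f"i^{power} = {value}")          # cycles through 1j, -1, -1j, 1 and repeats
# The magnitude stays exactly 1 at every step; only the direction turns by a
# quarter circle, which is why the picture is a circle rather than a line.
```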
In a way this is incredible, because there is nothing a priori that should prevent us from putting a square root sign on top of any number, positive or negative, but until we found this "solution", we did not have a meaningful way of describing what that number meant. Now we can at least pretend that we do. There are various mathematical axioms that overlap with foundational axioms of logic, because we must at least be able to assume certain things like identity and non-contradiction if we are to have any system of coherent thought at all. However, we did not think that being unable to take the root of a negative was one of those axioms, and in any case, until science demanded it, we did not seem to be much troubled by it either. Now it turns out that it could not possibly be an axiom, for were it so, then we could not possibly have quantum mechanics.
Basically what we get is that e raised to the power i times theta, where theta is always a real number, also lies on a circle, as we have seen, with radius 1 (e represents an exponential function; I am not sure why it is expressed in just this manner). The real and imaginary parts of this expression can be written as cos theta plus i times sin theta, with theta as the angle subtended at the centre, and this is Euler's formula. From what I can see, imaginary numbers enable a sophisticated and accessible means of visualising and representing oscillatory functions, which is what makes them great for representing waves.
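Written out, Euler's formula, together with the fact that all of these points sit on the unit circle:

\[
e^{i\theta} = \cos\theta + i\,\sin\theta, \qquad \lvert e^{i\theta} \rvert = 1.
\]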
This is why they are integral to both Schrödinger's equation and the Standard Model Lagrangian, which is our best "theory of everything" at present, which basically means the equation representing and tying together everything we know, at least. At least one of the advantages of i is that a complex number has two aspects, which means that where the real aspect can represent magnitude, the imaginary aspect can represent phase. What I think this means is that while a simple positive or negative sign can tell you whether the magnitude is up or down, it cannot tell you whether it is up but on its way lower, or up and on its way higher, and the same for down; the sign would be positive on either side of the peak of the wave, and negative on either side of the trough. Again, I do not know what else, if anything, is implied in "phase", but I think this refers to movement in terms of degrees of a circle rather than magnitude.
Let us take a look at how a wave emerges from the math, or vice versa, whichever way you want to see it. This is going to be useful, given that waves are apparently "everything" for us; you are not going anywhere in your scientific understanding without them. It transpires that the math of a wave turns out to be either a sine or a cosine function, depending on the phase at the origin with respect to time. But you do not need to surf the wave with a calculator in order to establish this. We know that the movement of a point along the circumference of a circle is described by such a sine function. If we take a unit circle and drop a perpendicular from the point where the radius, as the hypotenuse, meets the circumference down to the horizontal diameter, then because the hypotenuse (being the radius) is one, the vertical side is sine theta and the horizontal side is cosine theta. Sine is merely the ratio of the side opposite theta to the hypotenuse, and cosine that of the adjacent side. So as the point moves along the circumference of the circle, let us picture it counter-clockwise here, it is a pure sine-cosine function that is changing, and nothing else. Now it is useful to picture that same circle as a giant ferris wheel with you sitting on it. The horizontal diameter is the baseline, so half the circle is below and half above. Say you start midway on the right: your movement is up to the top, then down back to the middle, then down to the bottom and up back to the middle. Now say you cut the lower half of the circle and turn it outward: there is your wave, completely described by the sine-cosine function. That is all there is to it. A circle goes back on itself from the midpoint, but a wave keeps that point moving forward, because it has time on the x axis rather than phase or "angle". A wave is therefore just the representation of circular motion in time. That is the reason it is so beautifully curvy.
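A small sketch of the ferris-wheel picture (my own toy example): the height of a point moving uniformly round a unit circle, recorded against time, is exactly a sine wave.

```python
# The height of a point moving uniformly round a unit circle, recorded against
# time, is a sine wave; the same motion plotted against angle closes into a circle.
import math

omega = 1.0                                # radians of the circle per unit of time
for step in range(9):
    t = step * 0.5
    x = math.cos(omega * t)                # horizontal position on the circle
    y = math.sin(omega * t)                # vertical position, i.e. the "height"
    print(f"t={t:3.1f}   circle point=({x:+.3f}, {y:+.3f})   wave value={y:+.3f}")
# Plot the last column against t and you get the familiar curvy wave.
```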
How the hypothetical confers circular motion rather than mere oscillation:
When you square the hypothetical root of a real number you get the real number back. This almost sounds miraculous until you realise that you had to assume that this root existed in the first place. Just the fact that you now got the real number back does not prove that the root existed. From there one makes the further assumption that this hypothetical also has a negative version of itself, like an anti-unicorn, because why not. Then when you square that hypothetical again and add the minus sign you are back on the real number line, which completes the circle. Because the magnitude you have chosen for that real number is 1 (which means just "one hypothesis"), you can keep increasing the order of the exponent (the power to which it is raised), thus multiplying it by itself repeatedly and therefore, unsurprisingly, tracing the same circle over and over again, because the magnitude of multiplying 1 by itself remains 1. If we did not have the imaginary aspect and simply kept multiplying the real number minus one by itself, all we would get is an oscillation between 1 and minus 1 along a straight line. You would have a yo-yo instead of a satellite in orbit. So it seems that the imaginary number has conferred upon us the gift of circular motion. If we were to increase the magnitude, that number would spiral, so we stick to 1 here. In this sense it does gift us an additional dimension we were seemingly incapable of (unless anyone knows of another circular function). That circular motion now gives you a wave, because it traces out a sine function in time, something the real numbers alone could not boast of. You can trace the angle along the circle, and I presume you can do the same along the wave (although I am not sure exactly how you do it along the wave). i (the square root of minus one) does not actually exist, in the sense that you cannot write down what it is, but if you assume it does, then it opens up a ton of mathematical options. But at the heart of it, you do not even know the number. Some people will try to explain this as existing in another dimension, but numbers do not have dimensions, space does. Numbers do not occupy space.
Types of Movement
There are, broadly, three types of movement, believe it or not. First is chemistry: atoms and their electrons move around subject to the electrostatic forces between them (I am going to say that the magnetic force is part of this). This is responsible for chemistry, and chemistry is responsible for biology and all the visible movement that we see around us. The second type of movement is inside the atomic nuclei. You never see it until it is released in nuclear fission and fusion. That power of movement, when released, drives the large-scale physics of the universe in addition to powering chemistry. For example, the Sun's nuclear energy powers chemistry and biology on the Earth, and nuclear energy from supernovae can power the formation of new galaxies and stars. And there is one final movement: gravity.
This force powers the universe itself, leading to cosmic-scale events like the initial Big Bang, the universe's accelerating expansion and eternal inflation, if that is a thing. But why do these things move in the first place? The answer was really discovered by Emmy Noether: it is because of symmetries. For every continuous symmetry in the universe there is a conserved quantity, and the universe moves in a way that conserves it. This is not easy to explain, but it is easily the most beautiful concept in all of science. Take some analogies: if you were in an accident and your face became asymmetrical as a result, you would want to treat it to restore its symmetry. If your car gets bashed in on one side, you would want to beat it back out. The Universe is also apparently like that. I will finish this section once I have had a chance to look at Noether's work again.
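For reference, the standard pairings that come out of Noether's theorem, stated here without proof:

\[
\begin{aligned}
\text{symmetry under shifts in time} &\;\longleftrightarrow\; \text{conservation of energy}\\
\text{symmetry under shifts in position} &\;\longleftrightarrow\; \text{conservation of momentum}\\
\text{symmetry under rotations} &\;\longleftrightarrow\; \text{conservation of angular momentum}
\end{aligned}
\]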
Basically it means that if there is a God, then he created a physics which is always going to search for symmetry, and in doing so provides for all the dynamism that we see around us. God sets up a symmetric physics, then, with a flick of his hand, like setting off the first domino in an elaborate arrangement, or simply like literally winding the clock, that symmetry is upset, and the rest is history, or should we say "the rest is time".
Shape of the Universe from Black Holes
The easiest way to understand the shape of the Universe is from black holes, and the easiest way to do that is by turning your mind inside out like a sock. That is the shape of the Universe: it is like an inverted sock, and you are not allowed to ask why not just a sock. A black hole is where the curvature of space, already quite tormented by the weight of the black hole, continues to be bent out of shape as it approaches the singularity, the point of essentially infinite density described by Penrose and Hawking, down to all but the shortest length scales, until it is so bent that there is no space left at all. There are strange things going on here when we speak of space being "curved", because what happens when it is curved back on itself, does it go backward on itself? But space does not "go" anywhere, does it? And that is really only the beginning, because a circle only describes a curvature of "1", so to speak, while near the singularity of a black hole space is infinitely curved, so I suppose we picture it spiralling into itself. Now in the usual three dimensions it is impossible to picture any of this, and even the whole notion of space "curving" per se is not really picturable. Personally I think 3-D space needs to curve into at least a fourth spatial dimension, just as a one-dimensional line must bend into the second dimension like a bow, and a two-dimensional plane must bend into the third dimension like a billowing sail. If you bend the gridlines within three dimensions, you cause some to come closer together but others to move farther apart, and I am not sure that is what is intended.
Anyway, back to our description: the point at which space has gone totally "kaboom", that is the point of no return, the event horizon. Here, in a sense, space is now time and time is now space. Space is no more space, because there is no room left for space, which means it is not spacious at all, thanks to its being so tightly coiled. Time is now the lack of time, because at the speed of light it stands still, and the acceleration produced by the force of all this curving means that those conditions are not reached. Where this surface lies, and how large it is, depends on the mass of the singularity, but the surface is a sphere in 3-D, since it lies equidistant from the singularity in every one of the three directions. However, there is no space between this sphere and its centre, since there is no more space. So it is a strange sphere whose centre does not really lie at the centre of a sphere.
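For reference, for a non-rotating (Schwarzschild) black hole the radius of that surface grows in direct proportion to the mass:

\[
r_{s} = \frac{2GM}{c^{2}}.
\]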
But as it turns out, all the information of that 3-D sphere lies on its 2-D surface. Somehow this makes sense, because the dimensionality has vanished, so although the black hole looks like a 3-D sphere from the outside, the same thing viewed from the inside, "through the looking glass" as it were, would not be 3-D, though I am not sure whether you would lose or add dimensions in the transition. (I was pretty confused as to why the first photographed black hole looks like a 2-D hole; the glowing stuff that surrounds it really should surround it on every side, or so I thought, but I figured this was only perspective: the glowing stuff is thinly spread, so it is "piled up" in your line of vision only at the edges, like a frog egg.)
That is the shape of our Universe: the analogue of a "centre" is really its surface, but a time-surface. We just do not know in how many dimensions this is, or needs to be, but perhaps it is four, is what I think, although you never hear anyone say "four", and the reason is probably that one does not need to stop at exactly four, and we do not know at how many it does stop.
Where we live is analogous to living inside a black hole; however, that "centre" where our 2-D information is encoded lies all around us, as it would for a creature inside the black hole, and we see it just a bit later in the CMB, the baby stage of our Big Bang, which is us trying to look out of our own black hole. To restate that, the black hole is all around us (assuming the black hole analogy is correct, which we do not know). Between that old CMB and the "now" where we are, what we have is a time map rather than a spatial map. Within this lie all the galaxies whose light has yet to reach us: not beyond the CMB, but within the intervening time; they just have not "switched on" into our vision yet, because their light has not reached us.

When you look up at the night sky, you are not looking at a space-scene, as you would very naturally expect, but rather a time-scene. The reason is mind-boggling but surprisingly intuitive at the same time. This is shocking, really, but you have to remember that seeing the clear night sky is the equivalent of a fish looking out at the surface of the ocean, or the goldfish-in-a-bowl analogy. Shockingly, we are looking at the most mundane thing and yet really at the one thing that is the most removed from our daily experience. Literally all our ancestors could glean from lifetimes of gazing at the stars was the movement or fixity of some of the brighter objects, some cyclical or periodic movements, and how some of those, specifically the Moon, could affect us from its extreme proximity. That is really the equivalent of the fish in the ocean watching the positions of various oil rigs, boats and ships of various purposes from industry to entertainment to science and invasion, industrial dump sites, plastic waste, and whatever other ways terrestrial life either reflects into or affects the oceans. But as I said, the reason for this is not hard. First the fish, though: their entire perspective of terrestrial life is represented on the 2-D surface of the ocean, with only specific interaction points at the closest proximity, like that of the Moon affecting the tides on Earth.

When we look up at space, we are really not seeing space, for the simple reason that space is infinite, or at least unimaginably vast, and our eyes and neurons simply do not represent that to us in any meaningful way. This means that, perceptually, when you look out at the black sky, really expecting that you are peering into something like a bottomless abyss, that is only what your brain is telling you, "Sean... this is a bottomless abyss", precisely because we now know from science that it must be. But your eye only sees black, like a blackboard or a black tarp. On that black screen, the Universe simply plays out in time. The best analogy is a movie screen with only the actors and objects against a black background. We know, or are pretty sure that we think we know, that the Big Bang happened 13.8 billion years ago. From that time to the present, the Universe expanded either a finite or an infinite amount; we do not know which, and we are not sure how anything could just "go infinite" in a finite time, which looks like a mathematical impossibility but is one implication of the flatness problem. So we buy our tickets and step into the movie hall at 13.8 billion years, and this is the scene we see playing out, the one you have in the sky outside. The light of things at the greatest distances has not made it onto the screen yet. Where are they? We feel like they are "out there" when we peer deeply.
But when we peer deeper we only see older things, not things farther away (and only up to a limit with telescopes). Just as when you put on certain lenses you can see things in a movie that you could not otherwise, when we use microwave "glasses" we see things that we could not see in the visible spectrum, and we see the CMB.
We ourselves are embedded in this shape, since we are part of the shape of the Universe. We perceive 3-D, but we actually have as many dimensions as space does, since we occupy it; at least that is how I would see it. Were we fewer, we could not, just as a 1-D line does not really occupy any real estate on a 2-D page, nor a 2-D surface in a 3-D cube. In order to visualise the possibility of more than three dimensions, it is necessary to visualise our own selves in only two, so that we can then bend those two into a "shape". This means that the bending stands in for bending into the fourth dimension, since we are visualising one dimension less.
So if we try to dwell on the shape of this object that is our Universe, using a 2-D surface analogically, we have three possibilities: either that surface (2-D or more) is flat, or it is curved positively back onto itself like a sphere, or it curves the other way, which is negative curvature. Whatever the shape may be, we do not notice it because we are within it, just as when we walk around the Earth we do not encounter an edge or a place where we have to stop, or even experience the overall shape.
From the point of view of the Universe, this gives three possibilities. If flat, then the Universe must simply go on forever, in however many dimensions; it is like a piece of paper, but one that could hardly just end abruptly at a hard edge, so it must never end. Second is a sphere, which is easy to visualise and finite. The implication of spherical curvature would be that parallel photons would eventually meet; I am not sure of the implications of such photon crossings. Third, with negative curvature, we get a hyperbolic type of shape, where parallel photons "barrel off" into infinity, never meeting and growing ever more distant. A hyperbolic shape is constructed by inserting an ever-increasing amount of space in between spaces. When you do this to a shirt collar, for example, you get a frilly collar. In a hyperbolic space the frills are ever-unfurling, just as in a sphere the space is closing in on itself.
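One standard way of stating the difference between the three geometries, and the one the measurements described next effectively exploit, is through the angles of a very large triangle:

\[
\text{flat: } \alpha+\beta+\gamma = 180^{\circ}, \qquad
\text{spherical: } \alpha+\beta+\gamma > 180^{\circ}, \qquad
\text{hyperbolic: } \alpha+\beta+\gamma < 180^{\circ}.
\]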
Now the state of the conundrum is this: from our measurements, we have a flat universe. These measurements have been made by looking at the earliest Universe, which gives us the greatest sensitivity because it is the farthest we can look, and measuring the angle we see between specific points that are distinguishable because of their different densities. If that angle of sight (through a telescope, of course) is different from what the angle actually is, then the reason can only be that space has curved those lines of sight. As far as we can tell, then, the Universe is flat. That seemingly leaves us with a few possibilities. First, we might not be measuring a large enough portion of our Universe to detect the curvature. Second, and more interesting, is that the Universe is neither spherical nor hyperbolic but doughnut-shaped. This apparently solves the flat-yet-finite problem. The mathematical name for a doughnut shape is a "torus". Now you might ask how a doughnut can also count as flat, and here I do not have an immediate answer. But in a way I will have to explain later, it allows a universe in which one never goes off an edge; rather, the "edge" is merely where, Pac-Man style, you reappear at the other side again, whether exiting on the right to reappear on the left, or exiting the top to reappear at the bottom.
Incompleteness of Science: Is the Unification of Gravity Impossible?
Einstein discovered General Relativity, which provided our first real description and explanation of gravity, and in the same era quantum mechanics was developed, a field he also helped invent and pioneer, which explained essentially everything except gravity. This was a little over a century ago, and the time since has been spent in the effort to come up with a "master theory" that would explain both, or explain a unification between the two. If the reader is unaware as to why this should be an issue: both gravity and the non-gravitational "everything else" constitute the same reality, so a theory that does not explain all of reality is at best an incomplete theory, and at worst fundamentally flawed, explaining only a certain subjective human perspective on reality. For those who prefer household analogies, it would be the kind of dissatisfaction one would feel at baking a perfect cake with respect to three of the four ingredients while the eggs were simply left raw, straight from their shells.
In this video the narrator, Dr Matt O'Dowd, explains that to unify the two theories would be to discover quanta of gravity, so that finally everything could be quantum and we could all hang up our coats. However, according to Freeman Dyson, there is a problem with this: physics itself tells us that it is impossible to detect anything like a "graviton" (the quantum of gravity). It is as though physics were simply telling us that there is no graviton (in my interpretation of the conundrum). When we attempt to measure anything at the scale of a hypothetical graviton, which is sub-Planck-length, a black hole is created, and the measurement is itself swallowed into the void:
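A back-of-the-envelope version of that argument, as I understand it (my own paraphrase, so treat it as a sketch rather than as how the video necessarily presents it): probing a length Δx requires an energy of roughly ħc/Δx, but confining that much energy within Δx gives it a Schwarzschild radius of roughly 2GE/c⁴, and the two scales collide at about the Planck length:

\[
E \sim \frac{\hbar c}{\Delta x}, \qquad
r_{s} \sim \frac{2GE}{c^{4}} \sim \frac{2G\hbar}{c^{3}\,\Delta x}, \qquad
r_{s} \approx \Delta x \;\Longrightarrow\; \Delta x \sim \sqrt{\frac{\hbar G}{c^{3}}} = \ell_{P}.
\]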
The Vacuum Problem
The Hierarchy Problem
This is a real problem; I have heard Brian Greene describe it. The hierarchy relates to the vast discrepancy between aspects of the weak nuclear force and gravity. There are several different ways to describe this hierarchy, each emphasizing a different feature of it. One way is that the mass of the smallest possible black hole defines what is known as the Planck mass (a more precise way to define this is as a combination of Newton's gravitational constant G, Planck's quantum constant h-bar, and the speed of light c: the Planck mass is the square root of h-bar times c divided by G). The masses of the W and Z particles, the force carriers of the weak nuclear force, are about 10,000,000,000,000,000 times smaller than the Planck mass. Thus there is a huge hierarchy in the mass scales of the weak nuclear force and gravity. When faced with such a large number as 10,000,000,000,000,000, ten quadrillion, the question physicists are naturally led to ask is: where did that number come from? It might have some sort of interesting explanation. The issue, now called the hierarchy puzzle or hierarchy problem, has to do with the size of the non-zero Higgs field, which in turn determines the masses of the W and Z particles.
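For reference, that definition written out, with its approximate value:

\[
m_{P} = \sqrt{\frac{\hbar c}{G}} \approx 2.18 \times 10^{-8}\ \text{kg} \approx 1.22 \times 10^{19}\ \text{GeV}/c^{2}.
\]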
The non-zero Higgs field has a size of about 250 GeV, and that gives us the W and Z particles with masses of about 100 GeV. Why is it at a value that is non-zero and tiny, a value that seems, at least naively, so unnatural?
At the bottom of this scale is the vacuum energy, which is the energy generated by all the strange quantum behavior in empty space — virtual particles exploding into existence and quantum fields fluctuating wildly due to the uncertainty principle.
The hierarchy problem occurs because the fundamental parameters of the Standard Model don’t reveal anything about these scales of energy. Just as physicists have to put the particles and their masses into the theory by hand, so too have they had to construct the energy scales by hand. Fundamental principles of physics don’t tell scientists how to transition smoothly from talking about the weak scale to talking about the Planck scale.
The problem amounts to trying to understand the "gap" between the weak scale and the Planck scale. In particle physics, the most important hierarchy problem is the question of why the weak force is roughly 10^24 times as strong as gravity. Both of these forces involve constants of nature: Fermi's constant for the weak force and Newton's constant of gravitation for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large; it would be expected to be closer to Newton's constant unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it.
More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass.
The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings.
Beginning of The Universe- A Creator?
The Universe as we understand it, at its most fundamental level and in its very beginnings, is, on the best scientific understanding, a kind of substance that is one of the following two:
In the case of a non-bounce Universe: a substance that cannot help but come into existence. This could be, in its very primitive state, a "quantum foam" that by some law of infinite probability could not help but spew out a universe or universes like bubbles in a quantum bathtub. Quantum mechanics allows for the spontaneous generation of matter-antimatter particle pairs while keeping the conservation books balanced, and our Universe is just one such burst of particle pairs on a colossal scale. (Also, the proposed reason there is an abundance of matter over antimatter in our Universe, necessary for its continued existence, is likewise put down to quantum randomness.)
Additionally, the density of matter present also leads to eternal inflation. So you have about three levels of quantum going on, but the first thing that needs to happen is probably inflation. Without inflation nothing can really be explained. You first require runaway eternal inflation, with the quantum fluctuations within that decelerating inflation probably allowing for multiple universes to be formed. The reason matter inflates in the first place is that it is all so dense. In that dense state, as Guth originally described it, gravity, instead of being attractive, is repulsive (negative gravity). Now it is not obvious what this actually means. When space warps, it causes positive gravity, or gravity per se, so what does it need to do to get negative gravity? I have heard Matt O'Dowd say that there is no such thing as negative gravity, and I can see why: when space unwarps, it just means that gravity decreases and eventually ceases to be. Having an overall negative curvature of space (de Sitter space) does not imply negative gravity either. Overall curvature is not the same as warp.
The problem is that we require inflation. Why so? Because the entire Universe is at the same temperature. The vastness of space all sits at the same temperature, and in the CMB, which shows the universe at about 400,000 years old, the temperature is identical to several decimal places across the sky. There is no way that all these parts of the whole could just decide to come up with the same value of temperature for no reason at all, except that they were at some point in contact with each other, thereby enabling that equalisation. But the time required to get from that initial contact out to the distances of the 400,000-year-old universe is greater than light allows, in fact many times greater. So inflation is literally a theory that joins the dots between the singularity and the CMB by supplying a faster-than-light "kaboom", and problem solved.
The only way to even visualise inflation is the presence of a sort of unimaginably omnipresent quantum foam. It would simply have to be present "everywhere", whatever that meant, or else we would have to describe its boundary. Being quantum, it could have multiple fluctuations, many of which inflated, as did our universe, lucky us. The point is that, like mushrooms growing on a wooden log, these inflations require substrate matter. It would be absurd for there to have been only one inflation: why should reality be this one little grain of sand that wants to inflate? But many grains can be seen as quantum fluctuations, and that is more satisfying, and the problem is pushed back to the quantum "log" producing them. The inflation itself is about as interesting, if we are to believe Stephen Hawking, as a rubber band stretching, and with physics about as interesting too. Because it twangs back, conservation laws are basically not broken, and so we're fine, science is fine. The background foam just happens to be this twangly thing, like those chewing gums with flavour granules that just "pop" in your mouth when you eat them; that is just how it happens to be, incredibly dynamic, an incredible substance really, a cosmic Willy Wonka treat. So that is why we have reality: there just cannot help but be a mass of uncertain stuff that, because it is so uncertain, just has to go pop pop pop inflate. There just obviously could not have been a peaceful bunch of certain stuff, like maybe a big cosmic tofu; no, it needed to be uncertain stuff that did not know whether it was here nor there, so it kept going pop pop pop inflate sometimes, and pop pop all the time. Which really takes it beyond a mere uncertainty situation, because uncertainty accounts for fluctuation, not inflation. So it is really potent stuff, the uncertainty of which kicks in in a very rich and textured manner. It is not like a student sat at a test trying to decide whether to tick "10" or "20"; we are not dealing with that kind of uncertainty, no. This indecision is somehow a really rich and layered and textured indecision rather than a mere indecision. That is the kind of stuff this "mother log" is.
Now scientists have tried to ask if the beginnings of these inflations imply a beginning, which is the obvious question, obviously. Well, they are obviously not the beginning, because the thing that is inflating is there first, before it inflates. You cannot inflate a balloon unless it was deflated in the first place, in which case inflation cannot have been the initial condition. Or again, a fluctuation is an occurrence; it has to be a fluctuation of a thing that is not itself a fluctuation. You cannot have a model which is the fluctuation of a fluctuation. But rather than call the onset of the fluctuation a beginning, scientists have tried to substitute the "beginning" with a "loop" instead. So the inflation does not start at a beginning; rather, it starts from itself, like the universe coming out of its own anus (sorry, that was the first image that popped into mind). So the fluctuations are loopy-tied, so they're OK, and also inflationary, so we're OK. If this were truly being produced as a sweet treat for cosmic colossuses, you can see that it would sell like hot cakes, at least one would hope; supernovae, explosions with the power of a billion suns, would more or less pass without notice in these, or perhaps be no more than the cosmic equivalent of a sponge cake, with a teaspoon of time loops as the means to make the batter rise. Paul Steinhardt himself, one of the three architects of the inflationary model along with Alan Guth and Andrei Linde, no longer holds to it and is now developing alternative ideas related to colliding branes, which I think necessitates string theory.
Among other problems with inflation is that the question of time once again becomes an issue and the problem of a beginning remains. This is because inflationary cosmology states that the Big Bang that started our Universe represents a local decay in that inflationary field, with an output of an infinite number of such decay points. But the “parent” field must inflate in the first place, which also means that it begins from a lesser-than-inflated state.
I heard Brian Greene ask James Peebles, a Nobel Laureate physicist (2019) with incredibly influential work in cosmology (from his Wikipedia page: "widely regarded as one of the world's leading theoretical cosmologists in the period since 1970, with major theoretical contributions to primordial nucleosynthesis, dark matter, the cosmic microwave background, and structure formation"), about this on the former's World Science Festival podcast, 2023 version. Peebles actually initiates the point by saying "I might mention one point: yeah, we talk about eternal inflation, but eternal inflation, it's very clear, can be eternal into the future but it can't be eternal back in the past; we run into these logical problems as well as these empirical problems (pause) and we pay attention to them…", to which Brian Greene asks "and do you think we'll ever be able to deal with the issue of the past, I mean the question I'm asked most frequently both by students and by the general public is 'OK you've got this Big Bang idea and OK maybe this inflation… but what got it started, right? why was there this inflaton field, you know, why was there any expansion right in the beginning at all?'…", and Peebles' immediate response, I thought, was surprisingly defensive: "we're wading out of our subjects". Brian Greene persists, "you think so?" Peebles reiterates "yes". Brian Greene: "and which subject are we wading into?" "Speculation" is Peebles' only reply.
Now "time loops" sounds self-justifying, but it is not obvious what a "time loop" means. I know what time travel means, and that itself is quite hard to swallow, but in that case, when one goes back, one ceases to be at the front, you see? When you go back into the past you are not also in the future; you lose that position in the future. So it seems pointless for the universe to go backward and forward in time, but again this seems to be the best theory we have, a universe that is both back and forth. So basically, in this model it goes forward, then it makes a sort of U-turn, so I guess it would see time slowing and then moving backwards, and then at some point it encounters itself (or its own arse, in the previous analogy). I cannot understand anything in the nature of time that would make one's posterior appear anteriorly to itself. First of all, we do not even understand time. Anyway, that is the state of the art.
It also came to me that there will be no time travel even in the future. I know this because if there were, then strange things would already be happening in the present. People would disappear or change as you were talking to them, because someone in the future had caused a change in your conversation's past. Not only that: in my opinion there is also no time travel in any alien civilization, and probably no warp drive either. The unique nature of time means that the usual prerequisite for accepting a hypothesis, namely evidence, works differently here, because evidence of its fulfilment would reach us from the future, and we can with some assurance take the absence of that evidence as nullifying the hypothesis ("prerequisite" is itself a temporal term).
The obvious problem here is how we account for the existence of the "quantum foam" in the first place; this simply seems to have pushed the question a step backwards. Another problem is that this model would seem to necessitate a multiverse, which is itself controversial even within the scientific community. As someone said, one cannot posit a theory involving that which is unobservable to prove the existence of the observable. The reason a multiverse would be necessitated is that it would be somewhat special pleading to presume that we were the only Universe the quantum foam ever threw up or will ever throw up. If there really is a quantum foam "out there", then it should be doing quite a bit of this. Even a speck of radium does not decay only once in history; everybody knows that.
The bouncing universe proposed mainly by Roger Penrose seeks to give an explanation for the beginning of the Universe, if anyone can understand the mechanism (he explains it in a debate in an episode of Unbelievable?). However, I do not see how he can get away from the entire sequence of infinite cycles itself needing an explanation, and this again necessitating something like the quantum foam to hold it all in existence.
I heard an exchange between Brian Greene and James Peebles, a Nobel Laureate physicist (2019) with incredibly influential work in cosmology (from his Wikipedia page: "widely regarded as one of the world's leading theoretical cosmologists in the period since 1970, with major theoretical contributions to primordial nucleosynthesis, dark matter, the cosmic microwave background, and structure formation"), on the former's World Science Festival podcast (2023). Peebles actually initiates the point by saying, "I might mention one point: yeah, we talk about eternal inflation, but eternal inflation, it's very clear, can be eternal into the future but it can't be eternal back in the past; we run into these logical problems as well as these empirical problems (pause) and we pay attention to them…" To which Brian Greene asks, "And do you think we'll ever be able to deal with the issue of the past? I mean, the question I'm asked most frequently, both by students and by the general public, is 'OK, you've got this Big Bang idea and OK maybe this inflation… but what got it started, right? Why was there this inflaton field, you know, why was there any expansion right in the beginning at all?'…" Peebles' immediate response I thought was surprisingly defensive: "We're wading out of our subject." Brian Greene persists, "You think so?" Peebles reiterates, "Yes." Brian Greene: "And which subject are we wading into?" "Speculation" is Peebles' only reply.
Principle of Least Action (PoLA) and Standard Model Lagrangian (SLM)
When one tries to contemplate the principles that might lie at the root of physical equations, really two possibilities come to mind: the principle of least action and the principle of symmetries. One then wonders, of course, whether these are the same thing, or at least closely related, but it might not be possible to answer that just yet.
If we were to visualise the Principle of Least Action (PoLA) in a familiar way, the best example would be the parabola, the path that an object traces when thrown away from the Earth. PoLA tells us that the reason such an object's path ends up being parabolic is that, taken as a whole, that path minimises the accumulated difference between the kinetic (KE) and potential (PE) energies of the object over the course of the motion. This quantity, which the universe seems always to attempt to minimise, is termed the action, which is why it is called the PoLA.
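To make the quantity itself concrete (this is the standard textbook statement, not anything specific to the sources quoted here), the action is the time integral of the Lagrangian, the difference between kinetic and potential energy, and the physical path is the one for which this integral is stationary:
$$L = KE - PE, \qquad S[\text{path}] = \int_{t_1}^{t_2} L \, dt, \qquad \delta S = 0 .$$
For the thrown object, $KE = \tfrac{1}{2}m(\dot x^2 + \dot y^2)$ and $PE = mgy$, and the path that makes $S$ stationary between fixed start and end points is precisely the parabola.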
Is it possible to intuit why, of all things, the Universe chooses this particular quantity, KE-PE, to minimise? For example, why could it not be KE+PE, or why not just KE, or why either of these in the first place? Well, what defines our universe seems to be movement, and so if there is to be any principle of movement, it is likely to be related to the energies involved in that movement, since movement, in turn, is defined by the energies, or more accurately, by the momentum vectors (momentum adds direction to energy). Further, since KE and PE are both involved in motion, that principle needs to involve them both rather than just one of them. The KE takes the mass of the object into account as well, since the larger the mass, the larger the momentum for a given speed. PE is simply an object's potential to gain KE, so whatever the object's maximum capacity is, based upon its mass and its circumstances (in the extreme, something just below light speed), that would "potentially" be its PE.
In routine use, the PE of an object depends upon its current conditions. A rock sat on top of a hill two inches away from the slope has got no relation to the slope; all its PE is related to the centre of the Earth and gravity. However for a ball sat atop a slope, about to commence an inevitable journey down it, some of that "universal PE" comes into the picture as the PE for the scenario, which is equal to the maximum KE for that situation, which it will reach at its maximum velocity downhill, just when you least want it to meet you, for that exact reason. Once it comes to rest, the slope has no relevance anymore; the rock does not know that the slope is there. It is now simply part of the Earth, which is basically a rock hurtling around the Sun, and that is how we would compute the KE in the new situation. This has been the source of my confusion with regard to PE and KE, namely the two rest states at the beginning and end of the motion down the hill, and this answers it: the system is only concerned with the state of motion, not rest, and we could say that it starts at the first moment the motion becomes inevitable, like the moment the archer's finger comes off the string, rather than any moment before that.
However, William Hamilton refined the principle, showing that the preferred path is not necessarily the one which minimises the action but, more generally, the one at which the action is stationary, which can also be a maximum (or a saddle point). This also gives us the initial and ending moments of the motion. The reason both cases count can be seen by tracing a curve: the slope of the curve is zero both at the peak and at the trough, and these are the positions of least rate of change. The rate of change being least implies that the quantity is also least likely to change. Thus instead of the PoLA it can also be called the principle of stationary action.
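As a rough numerical illustration of this (my own minimal sketch, not something taken from the sources above), one can compare the action accumulated along the true free-fall path with the action along a slightly distorted path that starts and ends at the same points; for this simple Lagrangian the distorted path always comes out larger:

```python
import numpy as np

# Minimal sketch: approximate the action S = integral of (KE - PE) dt for a
# vertically thrown object, along the true parabolic path and along a
# perturbed path with the same endpoints.
m, g, T = 1.0, 9.81, 2.0                  # mass (kg), gravity (m/s^2), flight time (s)
t = np.linspace(0.0, T, 10001)
dt = t[1] - t[0]

def action(y):
    v = np.gradient(y, dt)                # vertical velocity along the path
    return np.sum((0.5 * m * v**2 - m * g * y) * dt)

y_true = 0.5 * g * t * (T - t)            # true path: thrown up at t=0, lands at t=T
y_bent = y_true + 0.5 * np.sin(np.pi * t / T)   # distorted path, same start and end

print("action along true path:     ", action(y_true))
print("action along distorted path:", action(y_bent))
```

Any smooth distortion with the same endpoints raises the total here, which is the numerical face of the statement that the real path makes the action stationary.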
Probably the greatest advantage of the PoLA, used in the Euler-Lagrange equation, is that it eliminates the need to track all the individual force and direction vectors required in Newtonian mechanics, and enables us to predict motion purely through the energies. The Lagrangian can then be used in all our theories. Writing the Lagrangian in terms of "proper time" lets us plug a relativistic Lagrangian into the Euler-Lagrange equation and accurately predict the motions of cosmic objects, like the orbit of Mercury around the Sun, something that GR was able to correct from Newtonian mechanics. In this version the object travels along the path of extremal "proper time". This is because, for objects with mass, proper time also depends upon their mass, because mass bends spacetime and affects their acceleration through it. For light, which does not have mass, the path probably depends on the medium through which it travels and its degree of refraction. In quantum mechanics, Feynman discovered that the action determines the phase of each of the possible paths; the most likely path is the one where the phases line up and constructively interfere. This is why you see constructive and destructive interference in the double-slit experiment. This was his "path integral formulation" of quantum mechanics, which perfectly reproduces the earlier versions of quantum mechanics, like Schrödinger's equation for the evolution of the wave function.
Further, Paul Dirac's equation for the evolution of a spin-1/2 quantum field also follows from a Lagrangian of this kind. In a similar way there is a Lagrangian for each quantum field, and combining these gives the SLM, which tracks the movement of particles through "configuration space" (including space, spacetime, quantum states, and so on). So there is an equation that gives the least action in electrodynamics, in general relativity, and so on. Sabine Hossenfelder explains: "The path that the particle follows is the path that satisfies a set of differential equations called the Euler-Lagrange equations (EL). For example, the EL for an object thrown against gravity give Newton's Second Law, the EL for electrodynamics are Maxwell's equations, and the EL for GR are Einstein's field equations."
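For reference (the standard textbook form, not taken from Hossenfelder's wording), the Euler-Lagrange equations read
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q_i}\right) - \frac{\partial L}{\partial q_i} = 0 ,$$
and plugging in the thrown-object Lagrangian $L = \tfrac{1}{2} m \dot y^2 - mgy$ immediately gives $m\ddot y = -mg$, which is just Newton's second law for free fall, exactly as the quote says.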
First, the difference between KE and PE is going to be zero either when both are zero, or more generally when the two are equal. Let's once again take two familiar objects to try to visualise this: a rubber band and a stone rolling down a hill. Each has maximum PE and no KE at the start (the state of maximum stretch, or the top of the hill), and maximum KE and essentially no PE at the moment just before the end of the run (the snap back to flaccidity, or the bottom of the hill), both at maximum speed. (Where did the energy go? Having collapsed into its state of lowest energy, the system's energy is at this point dissipated into the Universe at the moment of impact. If it is to be reset, potential energy will have to be externally supplied to restore it to a low-entropy state.)
So we have found one point of least action: the state of total energy dissipation in the system, where KE-PE is simply 0-0, which is 0. At some point in the midst of the free fall we would also get KE=PE with real values. One can perhaps see a certain freedom of motion in this state, where the object not only possesses a respectable momentum but also respectable energy reserves with which to continue that motion, like a car cruising along the freeway having used up whatever fuel it needed to get up to cruising speed. Prior to this the momentum is low, while after it the reserves dwindle. I would say that perpetually moving objects, like those in space, always remain in that state, as your car itself would if it were in space. This is why objects in space are not stationary; they're all "cruising".
It makes sense to have this particular relation between KE and PE, because consider the other possible relations. There would not be any point in using their sum, because the maximum of that sum is always going to be the limit defined by E=mc² anyway, and energy conservation dictates that it could never be any less either. Multiplication would have a similar problem. But KE-PE gives us a quantity which depends upon how that total energy is distributed between the two forms. Finally, PE-KE would be the same relation, just with the sign flipped.
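Put as a one-line equation (standard mechanics, not from any of the quoted sources): for a closed system the sum is fixed while the difference is not, which is why only the difference can carry information about the path:
$$KE + PE = E_{\text{total}} = \text{constant}, \qquad L = KE - PE \ \text{varies from point to point along the path}.$$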
So that, in my opinion, is our best intuition of any possible meaning of a principle that underlies all of physics: that it seeks to preserve either a certain freedom or optimisation of movement, if you will, or the complete run-down of the system. When it comes to motion that is not frictionless, unlike in space, it will not be possible to secure the least possible action at every point along the object's path, only the least possible "given the circumstances".
But on a universal scale, it would be possible to say that the CMB irregularities are the reason that we have motion in our Universe. This is because particles in a featureless space would not need to move at all in order to preserve their least action. The movements of objects in space are made necessary by the organisational features in the Universe, which in turn create features in the universal gravitational field.
The Lagrangian for the Universe
It should be no surprise, given all we have said, that we have been able to pinpoint a principle of universal motion, and since physics itself is the activity of describing motion and change, this is a principle underlying all of physics. So from the initial insights of Fermat, who used the PoLA to describe the movement of light through a refracting medium, through the developments of Lagrange, Hamilton and Paul Dirac that widened the scope of the PoLA, we have been able to write the equation that describes the movement not only of light but of everything in the Universe, which is really the closest we have to an equation predicting how time itself unfolds into subsequent events. In other words, and despite its significant limitations, this is our closest "theory of everything". Right enough, even Einstein's equations, in which matter tells space how to bend, follow from an action principle of this same kind.
This is the coffee-mug version of the SLM from CERN:
This is the full version of the SLM, written up by Italian mathematician and physicist Matilde Marcolli:
We can divide this up into five parts.
Section 1: These three lines in the Standard Model are ultraspecific to the gluon, the boson that carries the strong force. Gluons come in eight types, interact among themselves and have what’s called a color charge.
Section 2: Almost half of this equation is dedicated to explaining interactions between bosons, particularly W and Z bosons.
Bosons are force-carrying particles, and there are four species of bosons that interact with other particles using three fundamental forces. Photons carry electromagnetism, gluons carry the strong force and W and Z bosons carry the weak force. The most recently discovered boson, the Higgs boson, is a bit different; its interactions appear in the next part of the equation.
Section 3: This part of the equation describes how elementary matter particles interact with the weak force. According to this formulation, matter particles come in three generations, each with different masses. The weak force helps massive matter particles decay into less massive matter particles. This section also includes basic interactions with the Higgs field, from which some elementary particles receive their mass.
Section 4: In quantum mechanics, there is no single path or trajectory a particle can take, which means that sometimes redundancies appear in this type of mathematical formulation. To clean up these redundancies, theorists use virtual particles they call ghosts. This part of the equation describes how matter particles interact with Higgs ghosts, virtual artifacts from the Higgs field.
Section 5: This last part of the equation includes more ghosts. These ones are called Faddeev-Popov ghosts, and they cancel out redundancies that occur in interactions through the weak force.
This is the link to my source: https://www.symmetrymagazine.org/article/the-deconstructed-standard-model-equation.
Cosmological Constant and the Expansion of the Universe
This is the reason that the vacuum energy of space acts like reverse gravity. The increasing space caused by the universal expansion implies more (dark) energy in total, since the dark energy density stays the same as the volume grows. Picture cubes of space. If you were, say, stacking boxes of supplies for a warehouse and adding to a pile, you could go on adding and building the size of the pile, no problem. But you would obviously have to get the boxes from somewhere, with the requisite contents and so on, and the more boxes you stack, the more mass, and therefore gravity, you exert. In the case of the Universe, however, these boxes aren't being brought from anywhere, they're just appearing, so the work involved in their appearance, which you and your suppliers were responsible for in the example, is simply sidestepped. We don't know how the Universe manages to do this (since there is no Space Depot), but having done so, it knows that there is a negative in the balance sheet. Therefore these additional boxes exert a negative gravity instead of the positive gravity they might otherwise have.
I heard Adam Riess, who was awarded the Nobel prize for the discovery of the accelerating Universe along with two others, describe it thus: normal matter exerts positive gravitation and causes a tendency for the Universe to recollapse. However the Universe is expanding, and in an accelerating manner. Imagine the Universe inside a cylinder. In order to put more space in, you have to do work to pull the piston back, and this, as it were, creates a "negative pressure". This sounds exactly like drawing on the piston of a closed syringe. Strangely, it does not sound like blowing up a balloon, because in that case the pressure and the work come from the inside and lead to a deflationary tendency. It's certainly very strange, because the differential is seemingly between the exterior and the interior of the system. Further, this has nothing to do with the curving of spacetime (or does it?). Intuitively, this "negative pressure" is mind-bending precisely because it only seems to make sense for a closed system with an exterior. But honestly, no one understands this.
There is no closed system in nature that is its own accelerator; even the greatest force generators, like stars and black holes, burn up energy, they do not add to it, and eventually they burn out. Acceleration is never self-sustaining; it is always supplied from the outside. However there is nothing in the Universe as a whole that is "burning down", because its total mass/energy does not disintegrate; rather it is systems within it that arise and deteriorate. The Universe as a whole is not only not losing mass/energy, it is gaining it. It is almost as though the universal clock is ticking back from the future, going slow to fast instead of the other way round, or as though the physics that applies to things within the Universe is just not the same as the physics that applies to the Universe as a whole. This is just one example of where we run into problems; in fact every one of the conundrums has a universal component to it, like what the "inflaton field" is doing where it is, and so on.
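The standard way cosmologists write this down (textbook general relativity, not Riess's own wording) is the acceleration equation for the scale factor $a(t)$ of the Universe:
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),$$
so any component whose pressure is sufficiently negative, $p < -\tfrac{1}{3}\rho c^2$ (a cosmological constant has $p = -\rho c^2$), makes $\ddot a$ positive and drives accelerated expansion. That is the precise sense in which "negative pressure" behaves like reverse gravity.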
What is striking about the cosmological constant is how tiny it is: written out in SI units it is a decimal point followed by dozens of zeroes before you reach the first significant figure (roughly 1.1 × 10⁻⁵² per square metre). It's only because there is so much space that it is so powerful on the grander scales. But had that little figure at the end been missing, I do not think the Universe's expansion would be accelerating as it is today.
Infinity of Time or not?
The possibility of these cycles being infinite should not be a problem. If God can create time, he can also create infinite time; the point is that he is not in time. Perhaps time has a positive curvature, in which case it goes around in a circle like a positive-curvature universe model. Or perhaps Stephen Hawking was right and we have imaginary time. Jimmy Akin, the well-known Catholic apologist, has robustly defended the possibility of infinite time in theistic models against opponents like William Lane Craig, who uses its impossibility as one of the bases for his Kalam cosmological argument. Akin, I presume, would use more of a "grounding"-type modified Kalam argument.
One of the main approaches to denying the possibility of an infinite past, one which is seemingly quite strong and well received, is very simply stated: were there infinite past moments, then the present would never get here, since it is impossible to traverse infinity. This seems to envisage someone on a sailing boat on the high seas and us on the shore waiting for them; if the sea is infinite, we might as well be waiting forever. However, if we are trying to envisage a continuum of time, there is no necessary "stopping point" represented by the shore. The present is not a "stopping point" anyway, since it is not as though time has actually stopped for us; in fact, physics does not predict any mechanism or calculation for the stoppage of time. Time is infinite in the forward direction both in the flat universe model and in the positively curved universe in which time is simply circular, without beginning or end. The third possibility, the universe that ends with a big crunch, could perhaps be seen as a stoppage of time. As far as we know, our universe is flat, which is the first of those options. So in summary, should it even be true that the past contained an infinity of moments of time, the present would simply be one of those moments. The present did not even stop for the duration of you reading this article; that can be proven, check the clock.
Jimmy Akin had a classic exchange with William Lane Craig in which he defended this position. The other arguments against a past infinity used by Craig are attempts to show the absurdity of an infinite past and its contradictory nature, with illustrations like the well-known Hilbert's Hotel paradox. I do not believe that particular example is even valid, and I will describe why. But the advantage of not taking a strong position on the matter of infinities is that it can accommodate theism alongside bounce or imaginary-time theories.
The problem with Hilbert’s hotel:
The problem with Hilbert's Hotel should be very obvious. I'm no mathematician, and Hilbert is a giant, so I assume there are other applications for it, but in terms of making a logical reductio ad absurdum for the impossibility of real infinities it does not seem to work. The paradox basically seeks to show that it is absurd to have a hotel with infinite rooms, because were we to say that all the rooms were occupied, then we could still fit in more guests merely by moving each guest up one room to the next number. For me this does not take off, because our first premise itself said that all the rooms were occupied; there should not be anywhere to move the occupants up to.
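For what it is worth, the standard presentation leans on a simple mapping (this is the textbook construction, not Craig's or Akin's wording): if the guest in room $n$ moves to room $n+1$, the map
$$f:\{1,2,3,\dots\}\to\{2,3,4,\dots\}, \qquad f(n)=n+1$$
sends every occupied room to a distinct occupied room and leaves room 1 empty. That an infinite set can be matched one-to-one with a proper part of itself is, to the mathematician, just the defining property of an infinite set; whether that property could be realised by a physically real hotel is exactly the point under dispute.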
Robert Koons proposed a paper-passer thought experiment in an attempt to show something similar, which goes like this: imagine an infinite number of persons, one coming into existence for every moment in the past, starting with Mr. 0 at the present time and going backwards (leftwards) from there as Mr. 1, Mr. 2 and so on. Each passes a piece of paper to the one on their right. If it is blank they write their name on it (e.g. "Mr. 354"); if there is already a name on it, they pass it on to the right in turn, right up to the present. Apparently the paradox is that Mr. 0 cannot receive a blank chit, because someone on his left, like Mr. 1 at the very least, had they obtained that same blank piece, would have written their name on it. I'm not sure exactly where the paradox bites, but the problem here, for me, is that the very first person on the left, at the supposed end of infinity, should never finish writing his name in the first place, because no matter what he wrote he would need to add another digit to it. The first chit-pass should never occur, so there is no chit-passer. You have to assume the reality of an infinite number in order to start the process of the thought experiment itself, which is question-begging.
The grim reaper thought experiment is quite well known, and again it is quite problematic. The proposed absurdity is that no reaper ever gets to kill the man, because one before them has always done it already. But that sequence would seem to end at the minimum quantum of time, so there is no paradox here; the paradox seems to assume that time is infinitely divisible. This is also not an example of a real infinity anyway, rather an imaginary one, like a supposedly infinitely divisible length.
Computational Boundedness- Stephen Wolfram
Stephen Wolfram is doing ground-breaking work specifically in computation theory, really taking off from the work started by Conway and his Game of Life. The idea is that even a simple set of rules can lead to unexpected complexity in a system, and to what Wolfram calls "computational irreducibility". This means that there is literally no means of telling at the beginning how things are going to work out at the end. For example, the only way to compute how the world, or a specific individual, will be five years from now is to wait five years: the system itself is the computation. It's exactly like Douglas Adams's conceptualization of aliens building a computer they called "Earth" and allowing its history to play out in order to find out how it would play out. Time, as Wolfram sees it, is the inexorable application of the rules to the system, the rules being the laws of physics in our case, thus bringing about changes to the system. We are ourselves unable to conceive of the end of the game because we are, as he puts it, "computationally bounded" creatures; we are in fact part of the computation.
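A minimal sketch of the kind of system Wolfram has in mind (my own illustration, not his code) is an elementary cellular automaton such as Rule 30: the update rule fits in one line, yet the pattern it prints is famously complex, and there is no known shortcut to row N other than computing every row before it:

```python
# Rule 30 elementary cellular automaton: each cell's next state is determined
# by its own state and its two neighbours', looked up as a bit of the number 30.
RULE = 30
WIDTH, STEPS = 63, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1                        # single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        (RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```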
Finally, in the interview with Brian Greene, he even posits a God-like "computationally unbounded" entity which does not suffer from the constraints we have just described. It would not make sense for the one setting up the game to also be part of the game, nor does the model provide for the possibility of a game without an external set-up; Conway's game, for example, required Conway to set up the board and all of its rules. And we know for a fact that the "rules" of our Universe are far from simple, so Conway's simple game can at best serve as a loose analogy for the Creator-Universe relationship. A possible objection might be that what the game scenario actually shows is that there is no role for an external force once the initial set-up and rules are available. So the argument is going to hinge on the question of the availability of that initial set-up, which is again an "origins"-type argument. The fact of the development of complexity should not preclude the possibility of divine intervention, just as the possibility of developing complexity should not in and of itself imply the necessary development of lifelike forms and intelligence. In a way this becomes similar to the question of the possible co-existence of evolution and creation.
The next couple of videos should help to outline Wolfram’s thoughts:
What is the Fabric of Reality
Is it Ideas?
All of reality as we know it can be explained by just two (deceptively complex) equations: the standard model Lagrangian and Einstein’s field equations for general relativity. We do not know, nor does there seem to be any way of telling what the Universe itself is made up of, or for that matter, what we are made up of, because at the root of everything we end up with fields, and those fields and their behaviors are described by those equations.
A field is "something" which has a value at every point, each point being describable by certain co-ordinates. This would also imply that a field is something which possesses more than one point "in space", else there would be no points, nor space for points. It therefore seems as though fields imply that "space" is fundamental; but Einstein bound space and time together, so does that mean that space-time is fundamental? And this doesn't get us very far either, because what is space, after all? When we describe space, we usually describe the things in it rather than the space itself, except when we want to describe gravity, in which case we tend to describe a grid of lines, literally without the lines being made of anything.
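In the language of the equations themselves (standard field-theory notation, not anything peculiar to this article), the simplest case, a real scalar field, is just a function that assigns a number to every point of spacetime:
$$\phi : \mathbb{R}^4 \to \mathbb{R}, \qquad (t, x, y, z) \mapsto \phi(t, x, y, z),$$
with vector and tensor fields assigning a little arrow or a matrix of numbers to each point instead. The question being raised here is what, if anything, the "points" themselves are.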
The simplest representation of a field is a two- or three-dimensional graph. But what is it in reality that plays the part of the "paper" on which the graph is drawn, the fabric that is amenable to manipulation by these field equations, the thing which ensures that certain logical, or even probabilistic, axioms are played out in reality? For example, saying "my 4-year-old child built a Play-Doh sculpture" is an adequate and satisfactory description of your child's activity. However, if you said that they had built a Play-Doh sculpture without Play-Doh, the description would be incomplete in a fundamental way, even if you had described all the other details of your child's movements with ever-increasing clarity. This increasing clarity of description is analogous to our increase in scientific knowledge while the explanation that makes those descriptions possible goes missing.
Is it Numbers/ Mathematical Ideas?
In this video, Matt O'Dowd presents Max Tegmark's proposal that math itself, that thing we teach children in our nurseries, might be that fabric. Matt expresses his "concerns" regarding this proposal. First, were it true, then Plato was right: there exists a realm of "ideas" more real than our own. Plato never attempted to explain the existence of that realm, while Aquinas inferred that these ideas exist in the mind of God; ideas, after all, exist in a mind, or even comprise and constitute it. Second, Matt observes that the fact that Gödel showed math itself to be incomplete would seem to undercut this hypothesis: how could something incomplete, or an incomplete explanation, underlie reality? Gödel's theorem itself is hardly straightforward for mere mortals to handle, and I feel that different scientists have somewhat different interpretations of its significance.
This is not a problem for God, of course, since it is not the case that math is itself God, or the only thing in God's mind. The way that I understand it, Gödel's theorem states that in any (sufficiently rich, consistent) mathematical system there are true statements that cannot be proved by that system. Now, it makes sense that a system cannot prove its own validity by employing itself for the purpose, because that becomes self-referential. The system is not set up for that purpose; the point of the system is to serve as a reference point for validating other statements, not itself. But that is what logic is too: a system based on certain axioms (non-contradiction, excluded middle, identity). The law of non-contradiction cannot prove that it is itself true; we just feel that it must be. The same with math: Euclid had 5 (or was it 10?) axioms, which included those things that we feel cannot but be true, like the simplest basics of addition, multiplication and geometry.
But that is why these systems cannot ever be complete. Just like math, I would point out, logic has never been shown to be complete either, and perhaps Gödel's incompleteness applies to both. This is the kind of statement that Gödel (and also Turing) employed in order to demonstrate the incompleteness, the self-validation paradox, if we can call it that, which at first glance seems self-referential; but perhaps that is the very reason underlying the incompleteness. Alan Turing used a similar kind of method in his "halting problem", which shows us that computation too must be incomplete. As I understand it, this is achieved by feeding a program its own description as input and asking it to decide something about itself.
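To make Turing's move a little more concrete (a sketch of the standard diagonal argument in my own words; `halts` here is the assumed, impossible oracle, not a real function in any library):

```python
def halts(prog, data):
    """Hypothetical oracle, assumed (for the sake of contradiction) to decide
    correctly whether prog(data) eventually stops. No such total procedure exists."""
    raise NotImplementedError

def paradox(prog):
    # Ask the supposed oracle about a program run on its own description.
    if halts(prog, prog):
        while True:       # if the oracle says "halts", deliberately loop forever
            pass
    else:
        return            # if the oracle says "loops", stop immediately

# Now feed paradox to itself. If halts(paradox, paradox) answered "halts",
# paradox would loop forever; if it answered "loops", paradox would halt.
# Either way the oracle is wrong, so no general halting decider can exist.
```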
Roger Penrose possibly has the deepest insight into the incompleteness, one which, as he states in interview, he himself got from a mentor (I can't remember the name) and which stuck with him. What he states (and it may be that I do not fully understand him either) is that although, as we noted above, a system cannot prove its own validity, yet intuitively we are able to grasp that it is valid (based on something about the system itself? If that's the case then I'm not sure how). But what he says here is significant: according to Penrose it serves to show that our own minds cannot be computational, because they seem to be using something beyond computation, or at least something that is not bound by it.
Math being the fabric of reality does not imply mathematical numbers and equations floating around in space, of course, but rather that they are that which appears to us as spacetime, and somehow the reason for the wave/particle quantum objects and the quantum behavior of those objects. I think Max Tegmark's insight here is that when you look down to the basics of what differentiates one entity from another in our physics worldview, it's numbers.
At the deepest level our "solid" reality is made up of quarks and electrons, nothing else (the energetic reality we experience is just the electromagnetic force between charged particles; apart from these there are the forces that bind the quarks within the nucleons, and the nucleons within the nuclei of atoms, and gravity… and the Higgs… and whatever else we have not yet discovered).
The only differences between quarks are their quantum numbers: spin, electric charge and colour charge. These determine all of their properties (as I understand it), and their masses are determined by how they interact with the Higgs field. The ability to carry those numbers is what makes a quark a quark, and a particular type of quark; nothing else in all of nature has that particular set of numbers. Furthermore, a small set of such properties, spin, charge, colour charge and parity, describes everything in nature, and there is nothing else (this is why the deepest symmetry of nature is CPT symmetry, for example, but we won't discuss that here).
Disconnect between Relativity and Quantum
We have effectively two theories that work on different scales, but not together at the same scale. This means that there is no general relativity at the quantum scale, and (this is arguable) there is no quantum behaviour at the macroscopic scale. I say the latter is arguable because we are increasingly developing the ability to show that some microscopic yet larger-than-atomic objects (usually collections of atoms) can display quantum properties. Neither theory works in a black-hole-type singularity, but that is another issue in itself.
The problem, to put it most simply, is this: spacetime is a thing in relativity. When space bends, it causes the gravitational force at the macroscopic level. However there is no such thing as a "quantum of space", a quantum space-particle, which could be used in quantum calculations. Quantum theory deals with fundamental matter and force particles; these are what you stick into the Feynman diagrams in order to make calculations in QED. But you have no space particle to stick into them.
We could say that we have no time particle to stick into the Feynman diagrams either, but quantum theory seems oblivious to time: everything, if I understand this correctly, travels at light speed if it is massless, or some fraction of that if massive. The waves will wave forever, completely oblivious to time; that is simply what they do, so in a sense a wave or particle does not suffer the passage of time. That is just my take on quantum time, and I only make it because I know time is a mystery.
Sabine Hossenfelder gives a decent outline of the role of the particles in the standard model. The matter we are familiar with is made up of just three matter particles (fermions), the electron, up quark and down quark, together with the force particles: one photon for the electromagnetic force, eight gluons for the strong force, and the W and Z bosons for the weak force. The other particles turn up in particle accelerators and are extremely short-lived; they also serve to fill the holes in the model. Particles can decay into other particles as long as energy is conserved. That they decay does not mean that they are actually made up of those particles; they are not made up of anything. We know this because, among other things, they can decay in many different ways. I completely cannot understand why such a thing should be possible, nor have I in my humble readings come upon an explanation as to why it should be, but we shall have to leave this confounding matter here.
This is a good interview:
The Incompleteness of Math
A mathematical system starts with axioms and builds up to proofs. The axioms are things that are believed to be self-evident. Euclid proposed 5 axioms and would not consider a system a geometry unless it satisfied them:
1. A straight line may be drawn between any two points.
2. Any terminated straight line may be extended indefinitely.
3. A circle may be drawn with any given point as center and any given radius.
4. All right angles are equal.
But the fifth axiom was a different sort of statement:
5. If two straight lines in a plane are met by another line, and if the sum of the internal angles on one side is less than two right angles, then the straight lines will meet if extended sufficiently on the side on which the sum of the angles is less than two right angles.
The first four axioms are really so obvious that one wonders why they need to be stated in the first place. Only the fifth caused much controversy later, although it is perfectly logical in plane geometry; it is only in non-Euclidean geometry that lines which start off at right angles to a common line bend towards or away from each other over great distances.
Anyway, we are not discussing non-Euclidean geometry here; that is not the point. The axioms of mathematics are ones that should not be debatable, at least from common experience. For example, 1 + 1 always equals 2. We know from quantum theory that even this does not always hold, in the sense that quantum processes can pull things out of the void, because of the uncertainty principle, for infinitesimally small periods of time. We won't debate this here, but I raise it because it might provide an explanation, or one of the explanations, for why things go wrong later.
What Gödel's theorem says is that the system constructed from these axioms does not produce every single truth in the real world. Now, the axioms are generally very well respected and Gödel is a well-revered mathematician. What is strange about his proof is that the method is to use a self-referential question in order to discount the system. Personally my instinct is to object to this, yet in fact we find similar self-referential questioning coming into several of the other arguments we look at. This then leads on to the question of whether we actually understand self-referentiality itself, and whether such systems should really be subject to their own stipulations.
The incredible thing is that Gödel uses something not very different from the liar's paradox. The liar's paradox is quite simple: it's the t-shirt on the front of which is written "whatever it says on the back is a lie", and on the back of which is written "whatever it says on the front is true". Or you can have two people standing and pointing at each other and saying the same thing. Gödel constructs a statement in mathematics which says "this statement cannot be proven from the axioms". The argument is that there must then exist a truth that is unprovable, since the statement describes a true state of reality.
Basically, one cannot prove purely from the axioms that a statement cannot be proven from the axioms. Roger Penrose puts it this way: basically, we know that the statement is true; however, we cannot prove from within that system that it is true.
You can't claim that you can prove it, because it says that you can't prove it: you would be proving true the claim that you could not prove it; you would be proving that you cannot prove. This is the same as the liar paradox, "this statement is false"; if what the statement says about itself is true then it is not true. This is the problem of self-referential statements, which Bertrand Russell was trying to eliminate through Principia Mathematica. Apparently symbolic language cannot prove its own truth without an external referent, which is exactly the kind of thing that formalised language was trying to eliminate.
Any computably axiomatizable, consistent theory of arithmetic T admits true but unprovable statements, and the second incompleteness theorem states that T cannot prove its own consistency.
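Written out in the usual shorthand (the standard statement, added here just for reference), the Gödel sentence $G_T$ and the two theorems look like this:
$$G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner), \qquad \text{if } T \text{ is consistent then } T \nvdash G_T,$$
$$\text{and (second theorem) if } T \text{ is consistent then } T \nvdash \mathrm{Con}(T),$$
where $\mathrm{Prov}_T$ is the provability predicate of $T$ and $\mathrm{Con}(T)$ is the arithmetical statement that $T$ is consistent.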
It may even be possible that things like the Goldbach conjecture, the Riemann hypothesis and the twin primes conjecture are unprovable. The continuum hypothesis, which states that there is no set whose size lies strictly between those of the natural numbers and the real numbers (which are themselves unequal in size), can be neither proved nor disproved from the standard axioms of set theory.
What this seems to indicate is that the mind is not a digital computer, and this is what Roger Penrose and Lucas asserted as an interpretation of the incompleteness.
Had math been complete, then everything, all of reality, could have been expressed computationally. This would mean that everything in reality could be expressed merely by the manipulation of symbols, that is, syntactically, as Frenkel puts it.
One of the early objections was made by Bertrand Russell, who argued against naive set theory by pointing out that the "set of all sets" could not really behave like a set. The precise manner in which the objection was stated was in terms of the set of all sets that do not contain themselves: that set must contain itself, being one of the sets which do not contain themselves, in which case it does contain itself after all. My immediate instinct on hearing this is that there is no such thing as a set which contains itself in the first place. The point of a set is not to contain itself, whatever that would even mean; it is simply a collection, and ordinary collections do not contain themselves. So the question itself is flawed. Indeed, this is more or less how the issue was eventually resolved, by precisely and axiomatically defining what a set actually is.
However, more such examples kept cropping up. One of these is the halting problem which Turing formulated: he showed that a Turing machine, which is a computer, cannot in general decide, when fed its own description, whether it will halt. Again, the point of a computer, it would seem, was to solve problems, not to solve itself.
But what we then found were examples of things that are genuinely undecidable, and this lent credence to the idea that mathematics cannot possibly be complete, and that there must be something to these objections after all.
For example, there is a tiling problem where you cannot decide whether a given set of tiles will cover the plane or not; there are questions about airline ticketing systems; there is the card game Magic: The Gathering; and classically there is Conway's Game of Life, where one cannot decide whether a particular pattern will ever come to an end or not. Indeed, it is not very obvious what the self-reference is in these cases, but what is certain is that the thing we would have liked to decide cannot be decided.
The idea of Hilbert and Russell, the "formalists", was to write the theorems of math in a formal manner, using a certain symbology, such that all mathematical truths could be derived merely through the manipulation of those symbols, making mathematics an entirely syntactical or computational exercise, without the need for reference to any meaning outside of it. The symbols are in a way a substitute for words, because the numbers themselves cannot talk or tell you what they mean, nor can equations or solutions. For example, if you looked at Friedmann's solutions to Einstein's equations you could not tell just by looking that they indicated a singularity; if you looked at Gödel's own solutions of them you would not have deduced the time loops; and if you looked at Einstein's own equations you would not have known what they meant, period. But the equations already have symbols in them, of course. At the simplest level, having such a formal system might enable us to say with confidence that 1 + 1 really is 2; it took Russell about 700 pages in Principia just to arrive at that very proof!