Title: Annus mirabilis: Indeed
Author: David J Larkin
Abstract: Matter possesses a uniform or characteristic propensity of emission that renders particle emission-speed limited to an upper bound. Because of the absorption–emission process of transmission, the characteristic propensity, and the energy threshold of emission, any radiation propagation speed greater than (or indeed less than) the theoretical constant c will be modulated, that is, rendered limited and invariant. Therefore, the moment that you attempt to measure the traversal speed of light is the moment that you interfere with that light and render its speed limited and invariant. Consequently, conclusions asserting the limited and invariant nature of the traversal speed of light remain contentious. Furthermore, when we examine other claims made in support of Einstein's theory of special relativity, in the context of susceptibility to particle distortion, those claims are similarly rendered tenuous.
Created: June, 2005

•The basic concept of the particle theory (or heuristic model) that I advocate (Larkin, 2000) is that sub-atomic particles such as protons, electrons, neutrinos, positrons, and photons, for example, are composite structures—composites of fundamental particles, indeed composites of composites. And it is the explication of this composite nature that facilitates the explanation of so many physical phenomena that are merely (convolutedly) described by the received wisdom of quantum mechanics and special relativity.

Fundamental particles can be characterised as impermeable rigid point-masses of continuous and uniform charge distribution: or point charges. As stated, protons and electrons are not fundamental particles; however, their anti-charge characteristic points to another important feature: there are two types of fundamental particle, one positively charged and the other negatively charged (they are, however, alike in every other respect), and the charge and field distributions are uniform and continuous. (The uniformity (in every respect) of the field distribution would appear to contradict Coulomb's experimental findings; this point will, however, be subsequently reconciled.)

Fundamental particles interact to form clusters that, in turn, interact with other fundamental particles or clusters to form composite sub-atomic structures such as protons or electrons. Only stable structures remain viable. These viable structures or composite particles can be succinctly idealised as spheres consisting of concentric layers of alternating 'net charge'. For example, the net charge on the electron's surface is negative, the net charge on the electron's (immediate) sub-surface is positive, and the net charge on the surface 'below' the sub-surface is also negative, and so on down through the structure. An important point to remember is that this is an idealisation because the 'individual' surface and sub-surface charges are themselves composite structures. Furthermore, the composite structures (at the micro- or macro-level) have the important characteristic of decreasing rigidity towards the outer layers of the particle, that is, when exposed to interaction forces the outer layers are more susceptible to positional distortion (or re-orientation) than the more rigidly bound inner layers.

As asserted, this composite characterisation facilitates the explanation of many physical phenomena. The theory is coherent, extensive, and founded upon a simple premiss.

For example: The composite-proton's surface can be idealised as a spherical array of equi-distributed 'net positive charge' alternating with adjacent surface-regions of ambiguity or neutrality. The ambiguity is a consequence of the influence of sub-surface charge; an external charge moving into the 'field' of ambiguity would be acted upon by both positive and negative sub-surface-charges. It is this discontinuous distribution of charge across the surface of the proton (or charged composite-particle in general) that gives rise to Coulomb's inverse square law and the associated idealisation of 'flux lines'.

Furthermore, under the influence of high-energy interaction between two protons, the positional distortion experienced by surface charges exposes sub-surface charges of opposite state (that is, negative charge). In order to facilitate the description, visualise the blooming of a flower. As the protons are 'driven' towards each other against the electromagnetic force of repulsion, the respective interacting surfaces 'bloom out' exposing the sub-surface. Consequently, the surfaces of the protons, once equi-distributed with net-positive charge, now exhibit a curvilinear array of alternating positive and negative charge. Upon further approach these arrays are forced into (negative to positive inter-proton) alignment and the protons are attracted to each other as they key-in through the array. This array bond, which is clearly an electromagnetic attraction, is characterised by the received wisdom as the (mysterious) strong nuclear force.

There are indeed many other scenarios that could be outlined that would facilitate the explanation of phenomena such as (but not limited to) refraction, diffraction, dispersion, points of closest approach, energy levels, fine structure, and magnetism. However, in this article, given the significance of the year 2005, I want to focus upon the impact that this composite model has upon Einstein's theory of special relativity. In particular, I will focus upon two important and related phenomena: particle distortion and photon absorption–emission. Incidentally, the points that I will subsequently make do not exclusively rest upon a composite particle model.

•It is important to draw the distinction between photon emission speed and photon traversal speed. Emission speed is measured relative to a fixed and unique point of reference: the point of emission or energy exchange. Conversely, traversal speed may be measured relative to any (arbitrary) point of reference.

Now let us examine why there appears to be an upper limit to the speed of light or electromagnetic radiation propagation. It is not the case that the harder something is struck, the faster it is propelled; a moment's reflection upon the differing outcomes observed when striking, in turn, a billiard ball, a balloon partially filled with water, and an egg will lend weight to that conclusion. One important determinant of imparted speed is the capacity of the 'struck' object to withstand distortion.

Susceptibility to distortion has the consequence that the imparted energy of emission is divided or apportioned between that energy that remains absorbed within the distorted structure and that energy exhibited in any change in propagation. It is not difficult to imagine that any energy in excess of the particle's capacity to withstand distortion (the energy threshold) would remain absorbed within the distorted structure, that is, no further increase in propagation speed would be possible. Therefore, emission speed would be limited. Furthermore, if we correlate mass with absorbed (potential) energy we can begin to appreciate the utility of Einstein's famous equation E = mc², and also, we can begin to appreciate the basis of the difference amongst radiation particle species (from the less energetic emissions through the spectrum to the more energetic and destructive x-ray and gamma-ray radiation).
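The apportionment just described — imparted energy split between absorbed (distortion) energy and propagation energy, with speed saturating once the energy threshold is reached — can be sketched as a toy calculation. The function, the classical kinetic-energy relation, and all numerical values below are hypothetical illustrations of mine, not quantities proposed in this article.

```python
def emission_speed(imparted_energy, threshold, mass):
    """Toy model: kinetic energy is capped at the distortion threshold;
    any excess remains absorbed within the distorted structure."""
    kinetic = min(imparted_energy, threshold)   # excess stays absorbed
    absorbed = imparted_energy - kinetic
    speed = (2 * kinetic / mass) ** 0.5         # classical KE = mv^2/2, illustration only
    return speed, absorbed

# Raising the imparted energy beyond the threshold no longer raises the speed:
for E in (0.5, 1.0, 2.0, 4.0):
    v, absorbed = emission_speed(E, threshold=1.0, mass=1.0)
    print(f"E={E}: speed={v:.3f}, absorbed={absorbed}")
```

On this sketch the emitted speed climbs with imparted energy up to the threshold and is flat thereafter, while the absorbed portion (the author's correlate of mass) keeps growing.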

But even if emission speed is limited due to susceptibility to distortion, why does emission speed appear invariant? The answer to this question relates to what I refer to as the characteristic propensity of emission, and the energy threshold. The amount of energy available (an emitter must be sufficiently energised), the capacity of the energised entity to pass on that energy, and the capacity of the emitted object to withstand distortion are all factors that define the propensity of emission. I would argue that the invariance or uniformity of emission speed is indicative of the uniformity of the propensity of emission, that is, characteristic of 'matter', and that emission only takes place at energy levels above the energy threshold. Or, put another way, it is not speed per se that is limited and invariant but rather the propensity of matter entities to impart or sustain speed.

While this may account for the apparent invariance of emission speed, the apparent invariance of traversal speed seems more problematic, and that issue bears directly upon special relativity.

During the Earth's orbit of the Sun, the Earth reaches speeds of approach and recession of approximately 30 km/s. Therefore, any observation of the speed of solar light relative to the Earth should yield values greater than or less than the theoretical limit c by an amount equal to the speed of approach or recession respectively. Yet no such variation has been observed.
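The expectation described here is simple Galilean velocity addition, and the magnitudes involved are easy to check. The figures below are standard textbook values (they are not taken from this article):

```python
c = 299_792.458   # speed of light in vacuum, km/s (defined value)
v = 29.78         # mean orbital speed of the Earth, km/s

approach = c + v   # naive Galilean expectation when approaching the source
recession = c - v  # naive Galilean expectation when receding

print(f"approach:  {approach:.3f} km/s")
print(f"recession: {recession:.3f} km/s")
print(f"expected spread: {approach - recession:.2f} km/s")
```

The expected spread of roughly 60 km/s is about 0.02% of c — small, but well within the resolution of the experiments (Michelson–Morley and successors) that failed to detect it.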

Any medium capable of transmitting light (for example, the Earth's atmosphere, a radiation detection instrument, or a glass prism) will, due to the characteristic propensity of the absorption–emission process of transmission, modulate the speed of that light. It needs to be clearly stated that transmission implies absorption then subsequent emission. Therefore, all that has been observed up to the present time is nothing more than the invariant emission speed of an emission subsequent to absorption during transmission through the interceding (or perhaps more pertinently, interfering) medium. The traversal speed has not been measured. Consequently, any conclusions as to the invariant and limited nature of traversal speed are unfounded. Indeed, in the context of entities susceptible to distortion, when one examines the other claims in support of special relativity (GPS, muon decay, atomic-clock disparity, and so on) those claims are similarly rendered tenuous.

•Just briefly, the atomic-clock disparity experiment fails to (indeed cannot) eliminate the distinct possibility that the (non-inertial) accelerative force required to attain the high inertial velocity is a, or even the, contributing factor to the disparity. Similarly, one can argue that it is indeed the applied accelerative force that is directly responsible for the observed decrease in the muon decay-rate at high inertial velocity.

Consider the analogous behaviour of a hollow rubber ball subjected to a distorting force. The application of a slight force to the upper side of a ball, which is resting on a table, results in a proportional depression on the upper side. Removal of the force results in the ball returning to its original configuration in a time proportional to the characteristics of the ball and the applied force. Application of an increase in force results in a proportional increase in depression and subsequent elapsed time to recovery. Successive increases in force yield successive increases in depression and elapsed time. Eventually, however, the applied force is such that the depression results in the upper side being cupped and retained within the lower section of the ball. And depending upon the characteristics of the ball, either the upper side will remain cupped or it will be released. Now the corresponding relationship between the applied force and the elapsed time to recovery may be linear or non-linear; what is clear, however, is that there is an increase in time to recovery correlated to an increase in applied force, or conversely, correlated to an increase in depression or distortion.
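The qualitative claim of the analogy — recovery time increases monotonically with applied force, whether the relation is linear or not — can be captured in a toy model. The functional form and constants below are arbitrary choices of mine for illustration; the article proposes no specific relation.

```python
def recovery_time(force, k=1.0, n=1.5):
    """Hypothetical monotonic relation between distorting force and
    elapsed time to recovery; k and n are arbitrary illustrative constants."""
    return k * force ** n

forces = [1.0, 2.0, 4.0, 8.0]
times = [recovery_time(f) for f in forces]

# Successive increases in force yield successive increases in recovery time:
for f, t in zip(forces, times):
    print(f"force={f}: recovery time={t:.2f}")
```

Any strictly increasing function would serve the argument equally well; only the monotonicity matters to the analogy.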

In the case of muon decay it can be argued that the increase in time elapsed to decay (or decrease in decay rate) is due to the increase in distortion that occurs as a consequence of the accelerative force applied to attain the high inertial-velocity required for the experiment. This increase in distortion results in a transient stability within the muon structure. When the applied force is removed, a period of time elapses during which the muon structure recovers, and the muon then decays at the 'rest' rate. The high inertial velocity is proportional to the applied force, but it is the applied force that creates the impetus for change.

The fact that this phenomenon is adequately modelled by the Lorentz transformation is not surprising since Lorentz modelled the transformation on the behaviour of electromagnetic radiation, but more importantly, as alluded to, since high inertial velocity is proportional to the applied force, Lorentz's transformation indirectly models the applied force or the resultant distortion. However, I should hasten to add that the acceptance of the Lorentz transformation is simply a matter of pragmatism; like the acceptance of any mathematical model, acceptance of the transformation is simply an acknowledgement of its utility in yielding quantitative results that are in good agreement with those observed. But since mathematics can model any scenario, regardless of whether that scenario 'represents' a physical reality, mathematical models cannot provide an unqualified platform for explanation, let alone a sound basis for claims of verification.
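Whatever the interpretation, the quantitative agreement that the Lorentz transformation delivers is easy to reproduce. The sketch below uses standard textbook figures (muon mean lifetime at rest ≈ 2.2 µs; a representative cosmic-ray muon speed of 0.9994c); these numbers are illustrative additions of mine and do not appear in this article.

```python
import math

def lorentz_gamma(beta):
    """Lorentz factor for a speed expressed as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

rest_lifetime_us = 2.197   # mean muon lifetime at rest, microseconds
beta = 0.9994              # representative cosmic-ray muon speed, fraction of c

gamma = lorentz_gamma(beta)
dilated = gamma * rest_lifetime_us
print(f"gamma = {gamma:.1f}")
print(f"observed mean lifetime = {dilated:.1f} microseconds")
```

The transformation predicts a lifetime stretched by a factor of roughly 29 at this speed; the dispute in this article is over what that factor represents (velocity per se, or the accelerative force and resultant distortion that the velocity is proportional to), not over the arithmetic.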

As for GPS, it is not Einstein's theory of special relativity that is critical to the successful application of GPS but, rather, the associated Lorentz transformation.

•Due to the invariance of the characteristic propensity of emission, and the emission energy-threshold, and by the consequent process of absorption and emission during transmission, the instant that you attempt to measure the traversal speed of light is the instant that you modulate that speed and render it invariant and limited. As a consequence, any attempt to accurately measure traversal speed is comprehensively thwarted and the misguided conclusions drawn from such experiments cannot, therefore, lend support to Einstein's theory of special relativity. And in the context of particle distortion, other claims in support of Einstein's theory are similarly rendered tenuous.

Larkin, D. J. (2000). The Absolute Present. Melbourne: David J Larkin.

Copyright © David J Larkin 2009. All rights reserved.