Spontaneous negentropy misunderstanding…

Quote: “Deriving a second order differential equation for a likely system of wave-like monopoles and its solutions is not a gargantuan task, especially on the basis of symmetrical considerations; however experimentally settling upon the masses, indeed is.”

In my opinion, the justification for a ‘multiverse’ approach is that it provides a variety of mathematical logic frameworks (mind sets) as points of departure, each carrying a justifiable probability of success based on its ‘confirmed predictability’ score. The way we have both handled the ‘sine qua non’ indispensables of symmetry, monopoles, the baryonic dark matter codon, etc. is no longer seriously challenged, except in details, by other competing models. However, when you simultaneously consider the cognitive limitations of our human species in both the perceptual and conceptual domains of language discourse, then deriving the differential and integral equations for any probable system is indeed “a gargantuan task”, even as a speculation traveling along asymptotic paths, always approaching but never reaching absolute certainty. This is because, for our species, absolute certainty is a goal and a journey, never a static destination, in a dynamically evolving brain-sensory, cosmological-phenomenologic and in-between reality. When all relevant things are dispassionately considered on their own merits, it is IMHO a waste of time to argue for a spontaneous organization of a living super complexity without necessarily violating natural laws, like the thermodynamic entropy laws properly understood. This is clearly seen when you extrapolate causality back to the very first step, when the very first unit physical particle spontaneously took the first negentropic step. In closing, my suggestion for a mathematical logic alternative to theosophy may well follow the same path into oblivion as my own two-way dark baryonic codon relay valve connecting cosmic radiation with identified premotor cortical areas. C’est la guerre, my friend….
Posted in Neurophilosophy of Consciousness

Universal Holism and Individualized Reductionism.

When a Whole Cannot be the Sum of its Parts

Plato, Kant and Nietzsche: evolving views on complex dynamic existential reality.

INTRODUCTION.

It is almost impossible for a normal, healthy human being to hold an opinion without theorizing. Regardless of his/her real intentions, the opinion implies premises contained in an adopted frame of reference. As pointed out in another publication, the frame of reference may either be consciously free-willed or act by subconsciously controlling the opinion as expressed in the language report. Either way, the propositioned opinion about an object or event occurrence is reported, and its credibility depends on the probable truth content of the opinion. If the report is based on the sensory perceptual verification of the individualized occurrence(s), as described by all witnesses at all times, anywhere in mesoscopic space, then it is likely to be accepted by many as true without further considerations. “Seeing is believing” will guide the choice.

But when the occurrence escapes sense-phenomenal perceptual or ontological verification/resolution then we must rely -by epistemological inferences- on the less credible certainty of the visible consequences of the occurrence, as linguistically reported by an observer or recorded by an instrumental measurement.

What if the individualized occurrence resists being framed/reduced into any symbolic or sentential formalism to be linguistically reported? Fortunately, we need not worry about objects or events projecting infinitely into cosmological space ‘n’ or infinitesimally into sub-Planckian micro space at either extreme of the ontologically descriptive spectrum. Either way, logically, absent the possibility of a reductionist effort against receding infinities, the information content cannot become directly available. The practical/empirical solution to eliminating the infinities is to settle for approximations to a truthful, reliable content by positing the reality of an abstract transfinite space ‘n-1’. This way, the absolute, reliable, truthful certainty of anecdotal sensory evidence is sacrificed as we depart from individualized reductionism into the speculative uncertainty domain of an indirect universal holism based on intuitions and on the recent benefits of both updated recorded history and a global explosion of other information, as we map today the probable territory of tomorrow while looking into the past for orientation. We can now benefit from the joint merits of a ‘universal holism’ and an ‘individualized reductionism’ synthesis. Let us now examine the merits and drawbacks of this approach.

ARGUMENTATION.

A ‘Holism’ perspective tacitly implies that everything in our entire universe is causally connected, entangled or otherwise existing as non-separable entities functioning as a unit whole. Whatever experienced occurrence you cannot ontologically describe you can always explain, so long as your epistemological explanatory model poem account is strictly derived from the same relevant ontological measurements or observations, statistically correlated and then linguistically expressed as a comprehensive new unit singularity comprising the best of both perceptual and conceptual constitutive elements. But ‘Holism’, as practiced by organized JudeoChrIslamic religions and other theosophies in our real existential reality, brings new contemporary issues influencing our vital decision-making process. Which element should we rely on more to satisfy the biopsychosocial (BPS) imperative for biological survival, the immediate empirical experience or the transcendental conceptual abstraction thereof? To follow are some of the salient issues to focus on.

Is the whole more than the sum of its constitutive parts? Are the constitutive units static or are they dynamically interacting? If the latter, then it would be more appropriate to restate the concept of ‘Holism’ as one where the dynamically evolving state of the universal whole is more important than the dynamically evolving states of its relevant constituent parts. In hybridizing the new emergent Epistemontological singularity, which aspect should we rely on more, the ontological scientific methodology or the epistemologically derived inferences therefrom? How do we reconcile the invariant constituent unit mass particles with the variable states of their aggregates as the controlling determinants of the overall state of the unit ‘holistic’ whole? Let us briefly consider the merits of both aspects.

As suggested above, both aspects have their own intrinsic merits, and to benefit from both we need to identify or invent a common denominator capable of satisfying at least their necessary requirements, if not their isolated sufficiency status. If we agree on the premise of an ongoing dynamic evolution of complex existential reality, then a consideration of ‘variable states’ is more fitting for a critical analysis than an ‘invariant statism’. This way we stay alive in the present by anticipating probable future threats to species survival based on recurring, consistent past experiences and updated recorded historical facts, lest we be condemned to repeat the “Lessons of History”, as author Will Durant warned us in his now famous book. But an Epistemontological new hybrid synthesis as a guiding singularity is a compromise between the relative certainty of the ontological scientific methodology’s sense-phenomenal tools and the current uncertainty of the epistemological tool resting on mathematical logical probabilities, a speculative approach. It is something like a restatement of the still raging debate between the classic ‘feet on earth’ Copenhagen school and the postmodern ‘flights of fancy’ school, the latter relying heavily on a symbolic/sentential reductionism many a time irrelevant to the ongoing real-time sensory realities of existence, as if it could have an independent life divorced/isolated from falsifiable environmental circumstances. We find that quantum mechanical theory provides the current best bet to bring both extremes together, as argued extensively in our other published books. Let us now briefly and critically examine the respective merits of the methodological tools behind epistemological ‘holism’ and ontological ‘reductionism’.

Contrary to other commentators’ opinions on how best to compare or contrast both methodologies, this author believes that inferential abstractions are always necessarily derived from their corresponding preceding observables, i.e., measured or observed experiences. One cannot infer reliable, absolute consequences from acts that have not happened yet. If all complex objects must have originated from a unit particulate object at the beginning, that object cannot spontaneously and independently evolve into complex structures/arrangements without a previous plan and a source of energy to fuel the new geometrical 4-d arrangement in any conceivable space-time because, among other things, there is in principle no accessible memory of preceding occurrences. Any reliable, real-time, updated analysis of both tools should therefore start with an examination of the general principles as they apply to the totality of the whole complex system, or as they apply to the individualized structure/function of any of its reduced components. The proper reductionist activity is based on the totality of the environmental circumstances influencing the whole complex system considered as a unit, and not on the sum of the applicable characteristics of the environmental idiosyncrasies of the isolated constituent parts. Likewise, an explanatory reductionism is a metaphysical and not an observable ontic, pragmatic category, as some renowned particle physicists argue when ignoring the evolutionary aspects of complexity as viewed by the investigator within a valid frame of reference. See Weinberg’s 1992 justifications. This warning applies particularly when trying to understand the super complexity of dynamically interacting living systems at the micro or mesoscopic levels of organization (molecular, cellular, histological, organismal, societal, etc.), where inherited and learned traits continuously influence each other in their environmental space-time milieu, right here in our real-time city hospitals or laboratories today, where it is realistically meaningful as a first-priority option. Within those priority guidelines we prefer to start from the general and move to the particular in all cases, whether at the cosmological, sub-Planckian subatomic, mesoscopic or in-between levels, because ultimately it is all about human life and its exclusive self-consciousness capability to double up as actor/observer and narrator of the drama of existence, as argued in our own BPS brain dynamics model. In our opinion, it constitutes an excessive act of self-indulgent behavior to claim exclusive validation of either a holistic or a reductionist model as necessary and sufficient when obviously both are complementary and needed but not sufficient in themselves. We often witness this unfortunate behavior in physics when reductionists generalize about the behavior of particulate matter from how, e.g., environmentally contrived component electrons, ions or molecules perform when condensed, frozen at sub-zero temperatures or placed in otherwise unnatural simulations of our vital earth biosphere environment, with the intent to market their ideas as applicable to real-time, ongoing existential conditions on our planet earth’s vital biosystem or elsewhere. Another problem we will not discuss now is the questionable probability distribution assigned to the varying participating events. This is not meant to deny the potential transcendental value of simulations, under justifiable environmental conditions, to generate various probable formulations to explain the measured reports. The one formulation able to produce the most future confirmations of its predictions would be the winner.
Sometimes the winner would be the hands-on experimentalist and materialist reductionist, other times the armchair holistic philosopher with the same materialist frame of reference. The former looks at the immediate, empirical ongoing now; the latter at the transcendental probable tomorrow, so valuable in anticipating and preparing actionable strategies against probable threats to life survival in the future. Both approaches are necessary but not sufficient in themselves. Together, as a unit life-survival-kit Epistemontological singularity, they are the best choice, as argued by this author in detail across many published volumes. We will briefly examine some of those arguments below.

For the reasons expressed above, this author finds it unnecessary and confusing to dissect metaphysical holism further, as an epistemological category, into alleged (but as yet unjustified) ontological, property and nomological variations. The allegation that some physical objects carry non-physical parts, or the equivalent allegation that the whole may contain non-physical entities directly responsible for causally efficient properties in addition to those properly attributed to the physical particulate matter constituents (they call it ‘ontological holism’), is incomplete. This incompleteness may well have been the reason why ontological holism has been a stumbling block in explaining quantum mechanical interpretations because, e.g., if a physical particle is not detected as traveling with a de Broglie ‘wavicle’, it is assumed it is not there being carried by the wave. For some it is more credible that a massless physical particle exists! Along the same lines, ‘nomological holism’ stands for behavior that can only be attributed to a non-physical agency. The very special environmental circumstances attending these variations need more elaboration until they become experimentally testable, or at least probable under metaphysical logic scrutiny. However, it is fair to say that the ideal world of Weinberg’s reductionism is correct in insisting that it is the ontological particulate matter, visible or not, that ultimately decides the outcome of its reactive interactions, and not the representational abstract formulations of interpreters that drive and control the outcome. But in the real-time world scenario of fluctuating environmental idiosyncrasies, the very same object or event under the same environmental conditions may well elicit different occurrence language accounts, even by identical twins!
So much for the importance of the renowned narrator in dictating trend-setting norms for all to follow as truths, regardless of strict ontological correspondence to the real object in real existence. The very same object may elicit different conceptualizations in different qualified observers. I am reminded of Nobel laureate Niels Bohr’s remarks, way back in 1934, on what today we call ‘ontological holism’ as it applies to quantum theoretical considerations. While quantum mechanical phenomena can be described or explained in purely physical terms, obviously not all participating entities (e.g., physical particles, environmental conditions, etc.) can be characterized as physical material objects, especially when independently characterized as to their isolated/individual structure/function and reactivity. Consequently, to characterize a ‘quantum’ object as an independently existing object is simplistic and unnecessary. Even Bohmian mechanics’ relatively more recent inclusion, besides the physical particles themselves, of the corresponding fields created by the totality of physical particles of the undivided universe that guide their particle trajectories is incomplete: necessary, but not sufficient, because it excludes, among other things, our human species’ obvious brain limitations in the perceptual/conceptual evaluation and linguistic characterization of existential reality, as this author has abundantly analyzed in other publications.

Summary and Conclusions.

The simplest way to phenomenologically describe and/or inferentially explain the causally driven simplest possible system S, with two or more participating components, say a, b, c, is to assume that the unit-size particulate matter components may interact under clearly stated standard temperature and pressure (STP) environmental conditions and coordinates in space-time. This would be an idealist representation of the Newtonian spatiotemporal kinematic behavior of a, b, c, …, n particles responding to finite forces f = ma as each particle projects forward along its trajectory. In anticipation of having to describe/explain some unexpected experimental results or observations, we then incorporate a quantum theoretical mathematical logic such that this system is now more adequately characterized by a state vector that does not factorize into a tensor product of vectors in the Hilbert space of each individual participant, thus: Ψa,b,c,…,n ≠ Ψa ⊗ Ψb ⊗ Ψc ⊗ … ⊗ Ψn. In the real-time human and earth spatio-temporal biosystem world of hands-on experimentalists and armchair theorists, the language reporters of the observable results, the state vector does not factorize out, as the equation shows. No wonder the participating elements are said to be all entangled, if we imply an unreal statism instead of a real-time complexity evolving before the scratching heads of the human practitioners and speculators, whose physical brain processing capabilities are denied access to absolute reality in both the perceptual and/or conceptual domains of discourse! No wonder we have to settle for convenient approximations and propositional brainstorm model poems, to see if their corresponding predictions are verified in future measurements and/or observations…
and even then it will undoubtedly change with the passage of time, not to mention the unjustifiable excesses attributable to either the materialist physics scientific methodology’s ontological claims or the philosophical methodology’s epistemological claims when each excludes the other as the only valid assessment of human existential reality. That is the reason why only in an ideal world is the whole necessarily and sufficiently expressed as the sum of its constitutive parts. Both the ontological and the epistemological views, albeit necessary, thus become extreme views because of their insufficient status when taken separately. Why not integrate the best of both into a new unit singularity, a dynamic hybrid Epistemontological synthesis like our own biopsychosocial BPS model of brain dynamics? This conceptualization, as spelled out in seven published volumes, a blog, a treatise and various other publications, is still in development, as several issues remain unsolved, as pointed out in our arguments above.
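The non-factorization claim in the equation above can be checked numerically. What follows is a minimal illustrative sketch, not part of the original treatise: it uses NumPy and the standard two-qubit Bell state as a stand-in for the entangled system S, and the helper name `schmidt_rank` is my own. A Schmidt rank of 1 means the joint state factorizes as Ψab = Ψa ⊗ Ψb; a rank greater than 1 means no such factorization exists.

```python
import numpy as np

def schmidt_rank(psi, tol=1e-12):
    """Number of nonzero Schmidt coefficients of a two-qubit state vector.

    Rank 1: psi = psi_a ⊗ psi_b (separable).
    Rank > 1: entangled, i.e., Ψab ≠ Ψa ⊗ Ψb.
    """
    singular_values = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    return int(np.sum(singular_values > tol))

# A separable (product) state |0> ⊗ |+>: built explicitly as a tensor product.
ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
product = np.kron(ket0, ket_plus)

# The Bell state (|00> + |11>)/sqrt(2): admits no factorization at all.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(schmidt_rank(product))  # 1: factorizes into single-particle vectors
print(schmidt_rank(bell))     # 2: entangled, Ψab ≠ Ψa ⊗ Ψb
```

The singular-value decomposition here is just the Schmidt decomposition in matrix form; the same test generalizes to larger bipartite systems by reshaping the state vector accordingly.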

Reproduced in part and modified from “Treatise on the Neurophilosophy of Consciousness. A Multidisciplinary Biopsychosocial (BPS) Model.” Trafford Publishing, Inc. (Only reference.)

In Deltona, Fl. Early Spring 2014.

Dr. Angell O. de la Sierra, Esq.

Posted in Neurophilosophy of Consciousness

Unnecessary confusions with brain terminology.

Two examples to follow:

Correlation vs. Causation

To really get a handle on their distinction you need to critically analyze under what circumstances an individual observer can feel reasonably certain of getting the maximum truth content possible at a given moment while responding to a sudden new or familiar sensory stimulus and making a corresponding adaptive decision in daily life. When you meditate about it at leisure, you realize that the response effort is equivalent to distinguishing between contingent vs. absolute truth.

Thus, you realize that you cannot always perceptually describe things as they are, nor always conceptually explain them as they logically make sense and appear to be, because uncertainty influences your judgment. But one thing we can always do with certainty, and that is to express how, when or where the ‘what’ (object/event) is happening AS IF the occurrence were the absolute truth in that given moment. Consequently, we can always only explain things not as they are, or logically are, but as WE are. What ‘we are’ determines the most important aspect of living, which is to stay alive so we can satisfy our biological survival imperative to continue living as a species beyond the conclusion of our individualized life cycles, ‘per saecula saeculorum’, across generations. Once human species survival is genetically committed and epigenetically sustained at subconscious levels of reflex biopsychosocial BPS performance, we arrive at the next higher order of adaptive conscious behavior, still strongly influenced by BPS survival imperatives. Once we situate the living human being, with all his/her limitations in perception and cognition as amply discussed, it is a lot easier to understand the differences between correlation and causation.

Phenomenal causation depends on the perceptual identification of a physical mass (m) moving or accelerating (a) under the influence of a causally efficient force (f), conceptually represented as f = ma. But there are physical mass particles beyond sensory resolution that may consistently cause particular sensory effects (a broken glass window and strong, moist southerly winds). This scenario co-relates two or more sensorily discernible events with a degree of certainty, but it gets more difficult to ascertain the true, undetectable, causally efficient agent. Etymologically, two sides of the same coin are inseparably, consistently and unavoidably related, i.e., they constitute a co-relation = correlation, where the true, direct, non-observable agents of causation may remain indiscernible below the human sensory threshold. The reader may have noticed that even when an observable cause-effect is experienced by all, always, there may be intervening indiscernibles beyond the human brain’s resolution threshold. Hope this helps a little.
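The wind-and-broken-window scenario above can be sketched as a toy simulation, again not from the original text: a hypothetical unobserved common cause (arbitrarily named `storm` here, standing in for the agent below the sensory threshold) drives two observables that end up strongly co-related even though neither causes the other. All variable names and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical latent cause below the observer's sensory threshold:
# it drives both observables but is never measured directly.
storm = rng.normal(size=n)

wind_gusts = storm + 0.3 * rng.normal(size=n)       # observable A
broken_windows = storm + 0.3 * rng.normal(size=n)   # observable B

# A and B co-relate strongly, yet neither causes the other;
# the truly causally efficient agent (storm) stays indiscernible.
r = np.corrcoef(wind_gusts, broken_windows)[0, 1]
print(round(r, 2))
```

With these (assumed) noise levels the Pearson coefficient comes out close to 0.9, which is the point of the exercise: a high correlation between two observables is fully compatible with neither being the cause of the other.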

Are animals moral?

Patricia Churchland, et al: “…..us back to the original Darwinian position that moral behavior is continuous with the social behavior of animals, and most likely evolved to enhance the cooperativeness of society. In this view, morality is part of human nature rather than its opposite……”

Angell: I am sure some here will continue to argue that animals are conscious and have ‘moral’ attributions. And I will continue to single out the need to have inherited a primitive Chomskian (New Horizons in the Study of Language and Mind) language machinery that allowed for an introspective attainment of the self-conscious stage during early childhood. According to Piaget (Development of Thought), it made possible the distinction between the ‘I’ and the ‘other’. I have argued that Darwinian evolution was conceptually necessary but not sufficient to explain human evolution into the ideal moral/ethical creature that Kant (Critique of Practical Reason) had in mind. This reasoning is along the same lines as that of Teilhard de Chardin, Bergson and others. It was never meant as an attribution to subhuman species, as Churchland apparently pretends now. A subhuman behavior that appears ‘moral’ is essentially a subconscious reflex act in defense of a biopsychosocial BPS strategy for species survival. Rousseau’s (The Social Contract) thesis that man is born moral and society corrupts him is an anachronism, as brain and sociology results have sustained. We humans begin to humanize when the first-order self-conscious state is attained, however primitive, as amply discussed before. The escape from the BPS stage we share with subhumans is the beginning of humanhood, as detailed elsewhere. It is an ongoing process influenced by genetic and acquired environmental circumstances.

Angell

Posted in Neurophilosophy of Consciousness

Update on the Absolute and Contingent Life Realities.

Analytical Examination of an Ongoing, Mesoscopic Existential Reality.

The Variable Contingent and the Invariant Absolute Components in Perspective.

 

Jesus the Prophet

Introduction.

 

“The simplest and most general statement of the identity theory of truth is that when a truth-bearer (e.g., a proposition) is true, there is a truthmaker (e.g., a fact) with which it is identical and the truth of the former consists in its identity with the latter. The theory is best understood as a reaction to the correspondence theory, according to which the relation of truth-bearer to truthmaker is correspondence. A correspondence theory is vulnerable to the nagging suspicion that if the best we can do is make statements that merely correspond to the truth, then we inevitably fail to capture the reality they are about and thus fall short of the truth we aim at. An identity theory is designed to overcome this suspicion.” Candlish, Stewart and Damnjanovic, Nic, “The Identity Theory of Truth”, The Stanford Encyclopedia of Philosophy (Spring 2011 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/spr2011/entries/truth-identity/ .

 

The discussion that follows is premised on the controversial proposition about how (not why) our original human knowledge about the structure and function of all the objects and events we experience must probably have started. First came their sensory observation, within the sense-phenomenal resolution threshold, then the language descriptive reports, however primitive, with a continuously evolving syntax-semantic structure, as we matured. These evolved into contrived metaphysical representational explanations, using more sophisticated accounts expressed in their symbolic and/or sentential equivalents, as an expression of certainty about their physical presence/occurrence. Their certainty became only a derived mathematical logic probability calculus. The more convincing metaphysical explanatory accounts were those that resulted from consistent and verifiable consequences of probable, causally efficient physical unit particles. These posited ‘observables’ remain beyond the human sensory resolution threshold at both the micro infinitesimal and the macro infinite cosmological extremes. The resulting causal model poems had to be subject to future experimental verification, as exemplified by confirmations of the predictions anticipated by their formulation. If no experiment could conceivably decide between posited model candidates, however sophisticated and elegant, they remained speculative conjectures until proven otherwise. Can a unit-dimensional physical particle, such as a putative 3-d cube, change in absolute terms, or only in contingent terms, i.e., depending on its particle interactions (e.g., aggregation of more unit particles) with its relevant, changing space-time environmental circumstantial reality?
We will argue that, within the contextual framework of an evolving quotidian, mesoscopic existential reality, both the particle aggregates and the space-time environment can change simultaneously, regardless of our ability to monitor the changes. Is reality then contingent or absolute? Are we as a human species forever denied the epistemological cognitive certainty of the Kantian ‘Ding an sich’, and will we accordingly remain forever imprisoned within the confines of our limited phenomenological world of superficial ontological appearances? Equally relevant is the answer to the question: which model poem should we empirically adopt to continue improving the quality of our lives and our environment, the ontological perspective of physical materialism or the epistemological perspective of metaphysical logic, of the mathematical or the theosophy variety?

 

We will continue elaborating on previously published accounts on the advantages of a ‘not so new’ ‘epistemontological’ hybrid unit whole reality that dynamically incorporates the best of the abstract metaphysical a-priori effort and the confirmed empirical a-posteriori experimental results as an evolving synthesis of ongoing reality. This model poem content would hopefully allow, as argued, for a unique human species strategic compensation for its relatively poor adaptive biological survival resources –compared to other subhuman species- and a view of reality as a dynamically unfolding restructuring of the human species functional brain architecture capacity to develop its full cognitive potential. In the process a road map emerges as a goal to guide and prepare for future life-threatening contingencies while continuing to develop the wondrous civilization it has heretofore witnessed. This way, the abstract, metaphysical logic speculations/conjectures about the future become a probable journey compass needle to guide us through the complexities of the immanent now and into the transcendental tomorrow for the human species across future generations. For life is a journey not a fixed destination.

 

Within the context of the above mentioned premises of an anthropic, mesoscopic biopsychosocial (BPS) model of brain dynamics proposition -as a point of departure- we proceed to briefly reconcile the physical description with the metaphysical explanation, the invariants with the observable or ‘hidden variable’, the observable ‘seen’ with the probable ‘unseen’. Among the sensory observables we witness the first person self and other, the third person accounts and, of course, the explicit or inferred ‘motion’ of objects and/or events, whether caused by visible rigid bodies or massive but invisible falling macro objects under gravitational acceleration pull.

 

Sense-phenomenal accounts of physical reality are not necessarily what physical appearance descriptions reveal, and often it is more convincing to adopt the metaphysical explanation provided instead. To illustrate: unless we dig deeper into our analysis, the earth looks flat, like a Euclidean 2-d x-y plane, while to the rational intellect the skies seem like a non-Euclidean Minkowski 4-d sphere, the center of a spirally changing universe in space-time. Which model is the absolute truth about our existential reality? Is changing reality absolute or contingent? We argue, with others, that it is both and, what is worse, barring a human chromosome mutation event, we may never know the absolute truth. But stay tuned for more multidisciplinary arguments…

 

The seemingly spontaneous cosmological, macro ontological order observed by the primitive human sensory brain slowly became the micro epistemological probable order: from the Greek gods, through the Euclidean 1-d line, 2-d plane, 3-d volume, 4-d space-time and 9-d non-Euclidean compactified space-time, and a full cycle back to the organized JudeoChrIslamic theologies or their equivalent theosophical gods of the physical materialists, Scientologists and other secular cults. Which one will exclusively attain absolute truth status? We believe and argue that probably none will, unless we inclusively integrate the probable aspects of all into a new Epistemontological unit whole, centered on the basic, inescapable biopsychosocial BPS subconscious effort shared by all living species and elevated to a free-will conscious status made possible by the uniquely human introspective ability of human brain dynamic activity, supported by a language-based coordination of the emotional human survival reflexes and rational adaptive responses.

Argumentation.

It is not surprising that curious human observers, historically impressed by all varieties of embodied motion of objects or events, linear, circular or otherwise, would often wonder about the driving force causally responsible for the experienced observable displacements, whether along what looked like a Euclidean 1-d line or 2-d x-y plane flat earth surface, or along the non-Euclidean 3-d x-y-z volumes of curved trajectories across the firmament as time elapsed. It was natural that the easier-to-understand abstract mathematical representations were favored. Their favorites, and their use, would have no immediate impact on the observable sensory reality as day-night and seasonal cycles repeated. The more these sensory observables changed, the more they apparently remained the same. They became embodied in our brain memory as their corresponding neuronal network representations. The same argument applies when bipedal life made possible observing the volumetric skies above the horizon. But then even these heretofore ‘reliable’ observables may swiftly change, as experienced by subsequent generations, as a function of environmental circumstantial idiosyncrasies. We may properly ask: what is consistently reliable and absolute, and what is contingent, for the putative ‘invisible observables’, those indiscernible entities of the infinite macro cosmological manifold? How do we get at least an explanatory handle on the ‘non-local’ sense-phenomenal physical order ontology and the metaphysical logic epistemology in an ever-expanding universe? What about the ‘local’ mesoscopic ontology and the metaphysical logic epistemology amid the ever-receding infinities of the macro cosmological order, not to mention the micro sub-Planckian manifold invisibilities?

Epistemologically, getting an explanatory handle on the 'local' micro manifold invisibilities has always been a necessary human species priority because it directly impacts the very biological survival potential of the human narrator of this life and consciousness drama, now standing erect on the neo-Copernican platform stage at the center of creation, to explain in the present, first person mode where we are going in the foreseeable future based on the past brain memories of where we came from. The model poems' propositions have ranged from the physical materialist's 'how' to the metaphysical 'why' theosophies and in between. But again, the model-poem explanatory content should not change the existential reality facts surrounding the intended living beneficiary species, especially the exclusively human narrator of this complex conundrum. As expected from a real-time human narrator, the accounts communicated necessarily reflect an individualized frame of reference that, subconsciously or with conscious deliberation, markets a subjective cosmogony point of view where the objectivity, ethics and morality of the narrator are put to a test. We may distinguish three radical extreme exclusivist cults: the physical materialist phenomenologists, the metaphysical logic axiomatic theoreticians and the metaphysical theosophy preachers, among other varieties. Fortunately, another emerging variation is that of the moderate existential realists, who synthesize the best physical ontological phenomenology observables with the best metaphysical logic epistemology of the axiomatic and theosophy groups that is consistent with the biopsychosocial (BPS) species survival imperative in a real time mesoscopic existential reality, i.e., the 'epistemontology' synthesis of human brain dynamics activity that we call the evolving 'mind'.

It is fair to say that the BPS model poem has assumed the stance that, barring a human genetic mutation (with transcription potential), absolute truth knowledge is beyond the cognitive capacity of any species we know about. But the structural/functional brain limitations of the human species have not been an impediment to effectively and successfully compensating for the species' relative limitations in adaptive resource strategies, compared to other species, to survive in the earth's environmental ecosystem. Considering our species' inaccessibility to the absolute truth, mesoscopic empirical worth requires, as a viable alternative, the adoption of conventionalisms to avoid working with receding, unreachable infinities and to avoid experimentally un-testable propositions. Exceptions are allowed when the latter 'un-testables' have consistently become instruments of biological survival value to their practitioners in the various Biblical theologies or their equivalent scientology cult practices in churches, synagogues, mosques or equivalent secular assemblies for prayer. Among the most important conventionalisms are the laws of nature as exemplified by quantum mechanics along with the conservation laws (of matter, energy, momentum), gravity, etc. Needless to say, this moderate, realistic approach is consistent with the Kantian vision of existential reality as detailed in his original nominal "Critique of Pure Reason" and his later, more mature "Critique of Practical Reason".

It is important to understand that the BPS model's compatibility with theosophies carrying experimentally un-testable predicates and rituals rests on the undeniable convenience of behavioral practices that promote healthy, happy and convivial cooperative community or tribal living in the defense of life, based on emotional faith alone, as witnessed in the great majority of believers with no titles, training, interest or genetic/acquired endowment who are nonetheless still good, moral and ethical law-abiding citizens. Just as true is that the minority of participants endowed with the inherited or acquired intellectual resources to improve on the imperfect environment they found when born are encouraged to become familiar with the relevant conceptualizations underlying the various specialized disciplines entering into their consideration, such as brain dynamics, for opinion judgments. All of this is in furtherance of the tenets of the evolving model that critically discriminates between the universal absolute generalities to pursue in our living journey to become and the contingent, individualized/biased particularity that we transiently embody.

I would agree with much of what the referenced competing models, claiming exclusivity, are saying, except for a few admittedly controversial issues. One such is my own proposition about the cogeneration of an 'inner language' and thought from an inherited primitive language machinery (see Chomsky) slated to evolve as determined by one's individualized environmental circumstances. These imperfect judgments are further influenced and expressed as a function of one's adopted language syntax structure. I published this analysis in detail as a tentative conclusion because there is no alternative, known to me, to explain Piaget's results on toddlers' distinction between self and others. This result takes me to the next conclusion, one we may disagree on: the putative existence of an 'intelligent design' based exclusively on referenced observables, both strictly experimental (f-MRI, etc.) and the metaphysical logic calculus of probabilities. In summary, there is an unfathomable absolute and an evolving but discoverable road map guide of contingent truth in our ongoing mesoscopic existential reality journey through life, today and tomorrow, trans-generationally. It is called biological life, and its expression of activity is called consciousness.

*** The early Gods of Greek phenomenology.

 

 

The 1-d line (x) and 2-d (x, y) flat plane surface aspects of ongoing quotidian life. The first four Books of Euclid's Elements deal exclusively with straight and circular lines and reigned supreme until the 19th century. The conceptualization of a straight line as a definite length segment between two points that can otherwise extend indefinitely is no surprise, given the then empirical concern about the distances over which goods and services had to be carried by providers on an undeveloped road infrastructure, across what appeared to the senses an infinite flat earth surface. Likewise, an infinite number of a-dimensional points can fit in a line. But already the rudiments of a surface are beginning to become relevant. The Elements already consider a triangular surface being formed when any two intersecting lines (a, b) of equal length are separated by the same angle (alpha); the lines joining their ends (c) are then of equal length, and all such triangles can be superimposed on each other. Motions of vehicles along these individual triangular plane surfaces can be estimated when the triangles are joined together in a mosaic. It should have been equally confusing for an empiricist to witness a train's rails joining at a distance while a metaphysicist argued that if any straight line intersects two or more straight lines at the same angle, the lines cannot join at a distance because they are parallel and extend that way into infinity. Which assertion is always correct and true? By the same token, any number of parallel lines intersecting other parallel lines become the opposite, equal length sides of parallelograms, where the distance between a pair of such parallel lines is a constant.

It should be noticed how elementary straight lines lead slowly into cyclic, recurrent phenomenological observations of curvilinear motion paths that do not extend into infinity and may form 2-d planar orbits, such as the repeating cycles of day-night, seasonal variations and others to be expected within the then Ptolemaic geocentric earth model. The 2-d planar x, y surface now transforms into an equally planar 2-d circle of area πr², traced by circumferential motion about an equidistant center point, as in some planetary orbits. This conceptualization will later evolve into the still primitive abstraction of the conservation of states, matter, energy or of an inertial rest or linear motion momentum, observed when a moving object displays a centrifugal escape tangential to its circular path. All of this lies within the unevolved model of the 2-d planar (x, y) aspects of a geocentric earth surface.
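The tangential-escape observation above can be sketched numerically (Python; the radius, angular speed and time are hypothetical illustrative values, not quantities from the text): the velocity of a point in uniform circular motion is always perpendicular to its radius, so a released object flies off along the tangent.

```python
import math

def circular_state(r, omega, t):
    """Position and velocity of a point in uniform circular motion
    of radius r and angular speed omega at time t."""
    pos = (r * math.cos(omega * t), r * math.sin(omega * t))
    vel = (-r * omega * math.sin(omega * t), r * omega * math.cos(omega * t))
    return pos, vel

# Hypothetical values: r = 2.0, omega = 3.0 rad/s, t = 0.7 s
pos, vel = circular_state(2.0, 3.0, 0.7)

# The velocity (the escape tangent) is perpendicular to the radius vector,
# so their dot product vanishes up to rounding:
radial_dot_velocity = pos[0] * vel[0] + pos[1] * vel[1]
```

The zero dot product is the geometric content of the "centrifugal escape tangential to its circular path": no radial component of velocity exists until a constraint is removed.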

But the real earth surface has a depth dimension to reckon with, the forerunner of tridimensional 3-d (x, y, z) volumetric 'solid geometry' including primitive spatial considerations. By now it is clear that when two flat planar surfaces intersect, they do so in a straight line of either a given dimensional extension or an infinite extension, plus a linear direction in the same or opposite orientation. The simplest planar surface of defined extension is the triangle of sides a, b, c. If two triangles share a common side, they have a restricted rotation along that common axis if they are to remain on their shared flat planar surface. More significant was the realization that straight lines x and y, each intersecting another line z at right angles (perpendicular), form the flat 2-d planes (x, y), (x, z) and (y, z). These considerations of line dimensional extension and direction on any and all of the x, y, z planes will later evolve into vector calculus and the 9-d and higher spatial conceptualizations of the 19th century.

 

The 3-d volume aspects (x, y, z).

 

The Euclidean 2-d Plane Geometry was launched into a 3-d Solid Geometry focus through the mediation of a 2-d Projective Geometry. The etymology of the word 'geometry' implies the metric considerations of the solid body earth and its dimensional extension and position coordinates at a given moment, as d'Alembert published as early as the mid 18th century. Fourier provided the earliest serious consideration of 3-d solid bodies within the context of spatial volumes; he defined the straight line, the plane and the sphere. The practical, empirical descriptions of physicality and the theoretical metaphysical explanations began drifting apart, each one claiming exclusivity and enjoying a protagonist self-indulgence in its search for an evanescent absolute certainty truth where there is only room for contingent truths about a continuously evolving super complexity. That was the beginning of the still surviving epistemological gap between the empirical a posteriori facts based on reliable observable consequences and a priori metaphysical logic knowledge, often contrived into fitting convenient model poems, especially those that still provide institutional research grants or accolades for some insecure investigator. As repeatedly expressed, the best synthetic a priori abstractions, when based on the best fitting analytical a posteriori falsifiable and consistent observables, remain the best alternative for truth content. This Kantian approach is especially valuable when it allows for ethical and moral considerations to enter the arguments on the best existential reality prescription for biopsychosocial (BPS) equilibrium.

 

As it turns out, Euclid's 2-d x, y plane geometry, as extended by projective geometry, is only a special case of a more encompassing 3-d x, y, z spherical geometry as elaborated by Gauss, Lobachevski and others in differential geometry terms. This view considers geometry as the relationship between rigid bodies and the media separating them inside a spherical space, now considered as all points equidistant from the same center point. A line is formed by the projection connecting two such center points when two spheres intersect. A 2-d x, y plane is formed by all the spatial points shared in common when the two spheres' circumferences share equal diameters passing through the center of each circle. The parallel lines of plane geometry now become the meridians of latitude on a sphere, allowing for recurrent cycles of horizontally moving points in relation to the center, in one direction along the circumference and then, half way, in the opposite direction along the meridian circumference of latitude. It should be noticed that each meridian-of-latitude circle has a center through which its diameter extends, connecting the circumference twice in opposite directions. Each of these diameters individually determines a horizontal meridian-of-latitude plane, from top to bottom of the sphere. If we now select the longest vertical diameter line, perpendicular to the longest horizontal diameter and passing through the same center of the sphere, a meridian of longitude will connect the circumference above and below, forming the two north-south (n, s) poles of the sphere. This n-s diameter line becomes the axis of horizontal rotation, and the largest horizontal circumference becomes the equator of the sphere. This perspective makes it possible to assign convenient coordinates of location to any point on any rigid body inside a putative spherical universe (assumed with n-1 transfinite boundaries to conveniently avoid expanding/receding infinities n), the earth included.
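The coordinate assignment described above, poles on the rotation axis and an equator in the perpendicular plane, can be sketched numerically (Python; the radius and angles are illustrative assumptions, not values from the text): latitude and longitude on a sphere of radius r map to Cartesian x, y, z with the n-s axis along z.

```python
import math

def sphere_to_cartesian(r, lat_deg, lon_deg):
    """Map latitude/longitude (degrees) on a sphere of radius r to x, y, z,
    with the north-south axis along z and the equator in the x-y plane."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    return x, y, z

# Hypothetical check points on the unit sphere:
north_pole = sphere_to_cartesian(1.0, 90.0, 0.0)  # near (0, 0, 1) up to rounding
equator_pt = sphere_to_cartesian(1.0, 0.0, 0.0)   # (1, 0, 0)
```

Any point on a rigid body inside such a spherical space can then be located by (r, latitude, longitude), which is exactly the "convenient coordinates of location" the text describes.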
Needless to say, this abstract metaphysical geometrical conceptualization of a spherical earth and universe is rooted in preceding centuries of observations, measurements, confirmed predictions and conjectures by a few scientists and philosophers. It made possible subsequent important developments, among them the Gauss/Riemannian spherical geometry cosmology still holding its worth.

Euclid's 2-d x, y flat plane geometry was so influential, pervasive and persuasive that even the genius of Gauss had to imagine that a 3-d x, y, z world could only be metaphysically put together as a combination of three such 2-d Euclidean planes, formulated as points on a spatial surface thus: x(u,v), y(u,v) and z(u,v), where u and v represent the equivalent x, y coordinates in each of the three planes intersecting at right angles (perpendicular) to each other along the straight lines x, y and z. For each individual flat plane, the distance between any two nearby points (u, v) and (u + du, v + dv) on the surface, when the three intersecting lines pass in either direction through a common point of origin O and extend a defined distance du, dv into the plane, is given by a generalization of Pythagoras' theorem:

ds² = E(u,v)du² + 2F(u,v)dudv + G(u,v)dv², where E, F and G are determined by the functions x, y and z and satisfy EG − F² > 0. What this suggests, as Klein points out, is that dimensionless points, 1-d straight lines and 2-d x, y flat planes, or their equivalent curved lines or curved surfaces, constitute a 3-d projective spherical geometry of space lending itself to corresponding, convenient, projective mathematical symbolic representations and transformations that can be mapped one into the other, generating in the process different symbolic representations of the same invariant phenomenological rigid body reality at a given moment's observation or measurement. This reasoning implies that phenomenological descriptions of macro physical observables are now provided a new, rationally derived scope/perspective and a probable explanation of the micro metaphysical structure/function of their invariant particulate rigid body unit dimension constituents. This way the 'hidden variables' can be subjected to analytical dissection, such as the precise coordinates of a point location in the 3-d volume of space inside the invisible earth interior, the invisible macro cosmos, or with equivalent precision inside the invisible subatomic realm. All of this intellectual metaphysical logic activity would not affect the rigid body phenomenological truth content during an observation or measurement, because the results are entirely determined by observations/measurements or confirmed speculative predictions about observables on the surface of a putative spherical 3-d volume, whether an invisible micro subatomic particle, a mesoscopic macromolecule, a macroscopic pyramid in Mexico or the invisible confines of a bound macro cosmological universe.
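The Gauss formula just quoted can be checked numerically. Assuming the standard sphere parametrization x = r sin u cos v, y = r sin u sin v, z = r cos u (an illustrative choice, not one given in the text), central-difference partial derivatives recover E = r², F = 0, G = r² sin²u, so EG − F² = r⁴ sin²u > 0 away from the poles:

```python
import math

def sphere(u, v, r=1.0):
    """Standard parametrization of a sphere of radius r."""
    return (r * math.sin(u) * math.cos(v),
            r * math.sin(u) * math.sin(v),
            r * math.cos(u))

def first_fundamental_form(f, u, v, h=1e-6):
    """Numerically estimate E, F, G at (u, v) via central differences
    of the surface map f: (u, v) -> (x, y, z)."""
    fu = [(a - b) / (2 * h) for a, b in zip(f(u + h, v), f(u - h, v))]
    fv = [(a - b) / (2 * h) for a, b in zip(f(u, v + h), f(u, v - h))]
    E = sum(a * a for a in fu)
    F = sum(a * b for a, b in zip(fu, fv))
    G = sum(b * b for b in fv)
    return E, F, G

# Evaluate at a hypothetical interior point (u, v) = (1.0, 0.5):
E, F, G = first_fundamental_form(sphere, 1.0, 0.5)
# For the unit sphere: E ≈ 1, F ≈ 0, G ≈ sin²(1.0), and EG − F² > 0.
```

The positivity of EG − F² is the condition the text states; it guarantees ds² is a genuine distance on the surface.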
The fact that their corresponding spherical curvature is consistently formulated as the reciprocal of the square of their radii does not in any possible way alter the structure/functional profile of the underlying rigid body, as some aspiring materialist mathematical theorists would have us believe in order to exclude theosophical considerations. As long as the distance between point A and point B is d units, so are their mapping projections on any other surface, flat or curved. The curved surface of a cylinder, for example, cannot be distinguished by surface measurements alone from the flat plane it unrolls into, i.e., they are isometric, even though they look entirely different from outside; it is the intrinsic Gaussian curvature 1/r² that distinguishes a sphere of radius r from both. If such rigid bodies are beyond sense-phenomenological resolution at the micro Planckian or macro cosmological manifolds, I can still make rational predictions of subsequent probable observable or measurable behaviors of all known geometrical rigid bodies sharing that same dimensional curvature. Experimental confirmations of predicted subsequent behavior will, by a 'reductio ad absurdum' logic, identify the probable geometry without altering the rigid body geometry. The emerging metaphysical logic model poem is continuously subject to increased sophistication as more penetrating insights are provided by an exploding information technology. To briefly illustrate, notice how the fallibility of instrumental measurements such as radii, diameters and curvatures can be operationally cancelled out if instead we consider their mutually dependent ratio of measured variation, such as how the varying radius r of a sphere's meridian of latitude depends on the simultaneous variation of its corresponding circumference. The possible resolution error of measurements cancels out when mathematically expressed as a ratio of the same measurements.
This way, e.g., the longest vertical straight line diameter of a sphere, passing through its center and at right angles (perpendicular) to the equivalent longest horizontal straight line passing through the center, constitutes the axis of horizontal west-east-west cyclic rotation of a rigid body in space and identifies the north-south (NS) poles of intersection of the vertical line with the spherical surface. Likewise, the longest horizontal line perpendicular to the axis of rotation and passing through the sphere's center intersects the sphere surface at two opposite points and determines the largest meridian of latitude, termed the 'equator'. It divides the spherical surface into two identical 'hemispheres'.
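The error-cancellation point made above can be illustrated with a toy calculation (Python; the radius and scale factors are hypothetical): if every length reading is off by a common multiplicative calibration factor k, the ratio of circumference to diameter is unaffected and returns π regardless of k.

```python
import math

def measured_ratio(true_radius, k):
    """Circumference/diameter when every length reading is scaled by an
    unknown calibration error k; the factor k cancels in the ratio."""
    measured_circumference = k * (2 * math.pi * true_radius)
    measured_diameter = k * (2 * true_radius)
    return measured_circumference / measured_diameter

# Two instruments with different (hypothetical) calibration errors:
r1 = measured_ratio(5.0, 0.97)
r2 = measured_ratio(5.0, 1.13)
# Both recover the same scale-free constant, pi.
```

This is the simplest instance of the strategy the text describes: working with mutually dependent ratios rather than raw readings removes a shared instrumental error.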

 

The epistemological contributions of Gauss and Riemann have resulted in the planning and execution of precise manned landings on the moon's surface and unmanned drone landings on Mars. Mining metals from meteors and avoiding their damaging earth surface impacts, etc., is in the offing. This wondrous technological 'magic' is not possible exclusively because of the genial mathematical logic calculus of participating scientists and philosophers, or the equally genial design by the 'hands on' laboratory practitioners, but because of their joint cooperative effort integrating the recorded physical ontological measurements and probable metaphysical models as a hybrid Epistemontological unit whole. As to which activity preceded the other: that is a 'chicken-egg causality argument' undeserving of our distraction now. Considering the fact that the properties of spherical space, like distances between points, curvatures, etc., seem independent of the coordinate system used to explain them, it was fair game for a creative mathematical genius to equate the different formulations applicable to the same properties and see what new correlations came forth, thereby having geometry liberate itself from the shackle restrictions of a Euclidean-based 2-d flat plane or a similarly derived 3-d projective geometry. Enter non-Euclidean spherical geometry into the epistemological cognitive domain of discourse. This made it possible for Weyl, in the mid 20th century, to define the 'geodesic': the shortest curved length between two end points along the spherical surface, an arc of a circle whose plane passes through the center of the sphere, independent of the various vector representations that can describe such curved arcs as part of the spherical surface boundary.
Now the mathematical theorist is at large to make various symbolic mathematical logic poems, for the experimentalist to adopt the best fitting formalism on the basis of its intelligibility, ease of comprehension, testability and predictive potential.
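A geodesic on the sphere, as defined above, is an arc of a great circle, and its length between two surface points can be computed with the standard haversine formula. A sketch in Python; the coordinates and unit radius are illustrative assumptions:

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, r=1.0):
    """Length of the geodesic (great-circle arc) between two points given
    in degrees of latitude/longitude on a sphere of radius r (haversine)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return r * 2 * math.asin(math.sqrt(a))

# A quarter of the equator on the unit sphere has geodesic length pi/2:
quarter = great_circle_distance(0.0, 0.0, 0.0, 90.0)
```

The result is independent of which vector representation describes the arc, which is exactly the coordinate-independence of 'distance' that the geodesic definition is meant to secure.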

 

The evolution of these complex epistemological conceptualizations made it possible to realize, early on, how their mathematical logic abstract representations were closely related to the subject's brain dynamics' phenomenological ontology. Repeated audiovisual brain stimulations influenced cognitive behavior, which made clear that, in addition to inherited, reflex unconscious motor adaptive responses, there also coexisted correlated acquired/learned, behavioral, subconscious adaptive motor responses. The observed, consistent responses to environmental stimulation are constructed from preceding experiences, both ongoing and past memories, that required having been coded into neuronal network language. From these encoded brain representations the brain was able to translate into other language codes, such as a-dimensional points, 1-d lines, 2-d planes, 3-d volumes, 4-d Minkowski space time, 9-d constructions and n-1 d model poems, with the help of compactifications and other convenient short cuts. It was clear that the classic Kantian synthetic a priori model as such contained acquired/learned elements before the analytical a posteriori sub-model could be put together. This meant that a clear distinction between the inherited unconscious, the inherited/learned subconscious and the uniquely human, free-willed conscious, and their corresponding brain architectural structure loci, had to be reckoned with.
Riemann's undeniably penetrating mathematical logic impacts on technological development are still such that many of the best mathematical logic apologists for the 'axiomatic' approach to reality argumentation are wrongly convinced that the epistemological mind precedes the ontological brain, such as the modern Leibniz monads apologists and their theosophy equivalents, of course, until they realize that the first inexperienced human mind had to be embodied in a physical brain to develop language conceptualizations of its new ongoing biopsychosocial experiences, to stay alive, healthy and cooperatively convivial enough to survive against odds and continue to build our complex civilization. Today, after so many abstract models' predictions have been confirmed by modern technology, it is not difficult to accept that our own planet is not the flat planar surface, likely extending to infinity, that our limited sensory data still suggest and that Euclid developed. Today, contrary to what our sensory experience suggests, we would rather choose the non-Euclidean spherical geometry model for the earth's surface. Which one is true now and always? It made sense to reconcile the sensory 2-d flat surface with the abstract 3-d spherical surface as one unit reality model. This was done by the projective geometry model that eventually led to the derived construction of the 9-d basic and more compactified models, including the variations on 'string theory'. Unfortunately, the resulting project had far too many assumptions that wouldn't be testable in the laboratory. As discussed above, a 3-d spherical geometry model evolved with elements from both Euclidean and non-Euclidean sources, not always mutually consistent.
It would be wise to admit that our view of the geometrical structure depends on the well established limitations of our brain's sensory and combinatorial resolution capacities to either measure/observe or conceptualize the rigid body motions in the spherical volumetric space that evolving non-Euclidean models convincingly suggest. The search for a foolproof absolute model is a journey into the future as we adapt to the constantly evolving contingencies we meet, making free choices of the simplest, most expedient and most understandable model that lets our species survive in our real 3-d existential reality as it evolves into the uncertainties of a changing and potentially damaging biosphere environment, as articulated and communicated to our fellow men/women in our vital ecosystem as time passes. The epistemological insights we have gained from modern technology investigations of brain dynamics make it easier to understand why we had to construct space-time as Euclidean, with all of the constraints that entails, as envisioned by Poincare and analyzed by this author elsewhere. As we gain more experimental insight into the very probable spherical geometry universe evolving with time, we may reconcile what now appears irreconcilable, relativity and quantum mechanics. We are still developing such a model by testing the limited number of geometries that may qualify, based on the consistency and falsifiable stability of the observables on which they rest. With the incorporation of evolution and space time, the resulting 4-d x, y, z, t Minkowski and subsequent models of 9-d, n-1 mesoscopic existential reality make the probable explanations more credible, especially to younger generations impacted by the current information explosion. After all, all models of human existential reality in all disciplines are convenient interpretations with a common denominator: a strategy for human biopsychosocial (BPS) survival.
Within a neuro-philosophical context of brain dynamic activity in search of the ultimate first causes of self and of other objects and events, all things considered, we ultimately have to settle for approximations and for necessary but insufficient and incomplete pragmatic conveniences. This is especially true when we have to reckon with the imponderables we cannot control and that potentially threaten our very existence: the very laws of nature that no empirical or metaphysical logic model can get an effective enough handle on to control, revise, modify or significantly influence, locally and never non-locally. Theories rewriting or reinterpreting the laws of nature come and go as technology gets more sophisticated. Our explanatory statements of nature's laws are not eternal truths, yet the behavior they describe remains essentially outside our control to change; the best we can do is anticipate and avoid their negative impacts. Metaphysical logic models, however incomplete and ephemeral, when objectively communicated, make possible any knowledge of the external world. The contingent truth mode of Kantian analytic a posteriori indeterminism contradicts Kant's own version of synthetic a priori cognition, which assumes no learned/acquired content because of its a priori status. This contradiction, when applied to a non-Euclidean spherical geometry model for rigid body mechanics, was the reason for a famous controversy between Bertrand Russell and Poincare that ran through their entire productive lives. The confusion comes about, in my view, when we do not distinguish the unconscious innate reflex adaptive response from the subconscious innate plus learned/acquired response, which is also a reflex adaptive response. In real-time experience the learned/acquired experiences of early childhood become de facto innate-like subsequently, when adaptively responding to related environmental stimuli.
For Bertrand Russell, subconsciously adopting a Euclidean frame of reference, knowledge about points, lines and planes is a priori, unlearned; for Poincare their cognition was preceded by the learned/acquired equivalent sensory experience. When you extrapolate this argument to a putative first human being born, you realize that a specific metaphysical conceptualization, e.g., of a line a, b, does not have an existence independent of the co-related specific sensory object or event, the specific 'physical solid line' a, b that made the conceptual representation possible. In my model the latter version is more credible. One should always be prepared to credibly justify a judgment instead of making pronouncements about its magic emergence from nowhere, another version of an unjustifiable 'creationism' with no experimental verification possible. If possible, one should be able to map every point, line, plane or volume, in point for point correspondence, posit the probable testable mathematical logic formulations that apply, and let whichever of the probable mathematical models survives the progressive evolution into fitness. The unavoidable need of our current species to attain certainty, notwithstanding the ineffability of its object, mandates the logical invocation of the existence of an intelligent design somewhere in transfinite n-1 d space-time. Even in the event one is not able to obtain a one to one corresponding mapping, one can intuitively devise a strategy for probable localization coordinates in n-1 transfinite space time. The possibility of measuring location coordinates in a spherical universe requires the ability to measure/estimate the distance between any two points directly in a spherical geometry space. We briefly discussed above the importance of 'distance' as a minimum linear/curved measurable trajectory between two points a, b.
This minimum trajectory is called a 'geodesic'; it allows one to measure position coordinates in spherical space time and to determine the forces f acting on rigid body masses m when their uniform velocity v = d/t is acted on by an externally applied force f and accelerated according to the formulation f = m × a = m × dv/dt. All of this made it possible to get a handle on the kinetics of rigid bodies at rest, or when acted on by applied forces that change their velocity rate in space, their acceleration a, according to f = ma. We have all recently witnessed the unrelenting pace of technological sophistication relating theoretical physics to practical problem-solving issues in spatial geometry, as applied to the guidance controls of manned and unmanned trips across space. Theoretical geometry does not always yield explicit and close correlations with changing environmental realities, yet it provides many unexpected surprises embodied in its complex formulations if they are testable in practice and their predictions become corroborated, or at least probable under circumstances yet to materialize but calculated to emerge in the future.
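The kinetics relation f = m·a = m·dv/dt quoted above can be sketched as a small numerical integration (Python; the force, mass, duration and time step are hypothetical values chosen only for illustration):

```python
def integrate_constant_force(m, f, t_end, dt=1e-4):
    """Euler-integrate dv/dt = f/m for a rigid body starting at rest.
    Exact answers for constant force: v = (f/m)*t and x = 0.5*(f/m)*t**2."""
    v, x = 0.0, 0.0
    a = f / m                      # acceleration from f = m*a
    steps = round(t_end / dt)
    for _ in range(steps):
        x += v * dt                # position advances with current velocity
        v += a * dt                # velocity advances with constant acceleration
    return x, v

# Hypothetical case: m = 2.0, f = 10.0, so a = 5.0, over 1 second.
x, v = integrate_constant_force(m=2.0, f=10.0, t_end=1.0)
# Exact: v = 5.0 and x = 2.5; the Euler estimate of x lands slightly below 2.5.
```

The point is only that once f = ma is accepted as a conventionalism, the future positions of a rigid body become computable from present measurements, which is what guidance control across space relies on.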

These metaphysical logic considerations lead to the seminal work on the 4-d Minkowski (x, y, z, t) explanation of observable temporal changes that made possible the credible 'orthogonalization' of a 4-d space time; this eventually brought in the 9-d consideration of a curved spherical surface and volumetric spherical space time geometry, and the compactified 9-d+ provision, to explain the observable ontological and the inferred metaphysical elements based on 'hidden variables' arguments. The basic arguments are all discussed in some detail in other volumes of this book. But there lies ahead the real challenge of reconciling the relativistic with the quantum mechanical observables via Cramer's Transactional Interpretation (TIQM) and supersymmetry (SUSY) spin superpartner interpretations. Further research is needed on the role that the posited baryonic dark matter receptor, located in an RNA/DNA brain transcriptionable codon, may have in explaining the required connections between the unidentified transfinite coordinate locations of cosmic radiation sources and the human brain's neocortical phase space in the decision-making process. We have taken the first step in suggesting a mechanism involving axion/neutrino particles (?) in quantum gravity monopoles. Stay tuned, more to come.
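The 4-d Minkowski construction referenced above rests on the invariance of the spacetime interval s² = −(ct)² + x² + y² + z² under Lorentz boosts. A numerical check of that invariance (Python, with c set to 1 and a hypothetical boost velocity and event coordinates):

```python
import math

def interval(t, x, y, z, c=1.0):
    """Minkowski interval s^2 = -(ct)^2 + x^2 + y^2 + z^2."""
    return -(c * t) ** 2 + x ** 2 + y ** 2 + z ** 2

def boost_x(t, x, v, c=1.0):
    """Lorentz boost along the x axis with velocity v (|v| < c)."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return g * (t - v * x / c ** 2), g * (x - v * t)

# Hypothetical event and boost:
t, x, y, z = 2.0, 1.0, 0.5, -0.3
tb, xb = boost_x(t, x, v=0.6)

s2_before = interval(t, x, y, z)
s2_after = interval(tb, xb, y, z)
# s2_before equals s2_after up to rounding: the interval is boost-invariant.
```

This invariant is what 'orthogonalization' of the 4-d space time preserves: different observers disagree on t and x separately but agree on s².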

 

Summary and Conclusions.

 

From the preceding arguments it would seem as if existential reality is akin to experiencing an endless recurrence of never ending day-night and annual seasonal cycles as we journey along a boundless 2-dimensional x, y flat planar surface on earth, in contrast to the equally boundless 3-dimensional x, y, z volumetric infinity where reflecting celestial color hues of light perform a complex but harmonious interactive dance against a fixed starry darkness, a background humans have no control of and whose orderly behavior hardly qualifies as spontaneously driven, in apparent defiance of physical laws of nature such as thermodynamic entropy. An awesome, non random display of order and beauty indeed, as if informed and driven by a preceding functional architecture design plan with a purpose. But there remain some unexplained aspects that demand much further elaboration, such as a testable description of the mechanisms of attaining an introspective self consciousness state.

 

I would agree with much of what competing models claiming exclusivity are saying, except for a few admittedly controversial issues like my proposition about the co-generation of an ‘inner language’ and thought. It starts with the activation of an inherited primitive language machinery (see Chomsky) slated to evolve continuously, influenced by your individualized environmental circumstances, and to be expressed as a function of your adopted language’s syntax structure. I published this analysis in detail as a tentative conclusion because there is no alternative known to me to explain Peugeot’s results on toddlers’ distinction between self and others. This result takes me to the next conclusion we may disagree on: the putative existence of an ‘intelligent design’ based exclusively on observables, i.e., strictly experimental data (fMRI, etc.) and a metaphysical logic calculus of probabilities. In summary, there is an unfathomable absolute and an evolving but discoverable road-map guide of contingent truth in our ongoing mesoscopic existential-reality journey through life….today and tomorrow, trans-generationally. It is called biological life, and its expression in activity is called consciousness.

 

We have stressed above the importance of language communications that share a common syntax structure, by agreeing on definitions of the terms used in the information exchange. Some investigators have defended the notion that “….Truth should be what we desire to know. If we each desire to know the truth then we should be able to agree on definitions and, when we agree on definitions, then we are traveling on a narrower path……” I have modified that assertion: we must not only agree on definitions but also define/identify the frame of reference or perspective being considered in the definition. If we are talking about the immediate, empirical perspective, the argument’s focus is analytically very different, because if the focus is transcendental and conceptual, it is not aimed at a synthesis of existential-reality strategies (‘truths’) for survival in a real world; instead it is more like a speculative analytical differential search for first causes. The immediate mode is essentially an integral synthesis, whereas the transcendental is a differential analysis of the phenomenological reality that we all experience. Both are intimately related but have different immediate goals before they are synthesized together as a hybrid unit whole. This new synthesis is hopefully confirmed deductively, with predictive value, because it incorporates the calculus of probabilities and Bayesian logic within the general scope of an evolving transactional interpretation of quantum mechanics (TIQM). However, some investigators continue to conceptualize a Mind-Body duality as if they were correlated but independent entities, as evidenced by a quote:
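The Bayesian updating invoked above can be made concrete with a minimal sketch (my illustration, not the author's formalism): a model's 'confirmed predictability' score behaves like a posterior probability, revised each time a prediction succeeds or fails. The numbers below are hypothetical.

```python
# Bayes' theorem: P(H|E) = P(E|H) P(H) / [ P(E|H) P(H) + P(E|~H) P(~H) ]
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior P(H | evidence) from the prior and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    evidence = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / evidence

# A model starting at P(H) = 0.5 whose predictions keep succeeding
# (success is 4x likelier if the model is right) gains credence:
p = 0.5
for _ in range(3):                 # three confirmed predictions in a row
    p = bayes_update(p, 0.8, 0.2)
print(round(p, 3))                 # → 0.985
```

Each confirmation narrows the path: the same rule, iterated, is what lets rival 'model poems' be ranked by predictive track record rather than by rhetoric.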

“But I suspect that you are questioning how truth can have a physical location in the scheme of the universe. Truth is produced by humans so to speak. We determine what it is and when it was and any and all things about truth are produced by human minds because truth is a distinction produced by the human mind.  (animals as well but why complicate the discussion). True and false are considered opposites. In the quantum world of electrons, the spin of the photons emitted by the electron can be opposite as well..….you can choose attachments and then select a 9-d (nine dimensional) electron. In this model of how an electron might look inside a molecular dimer in a microtubule in a neuron, inside a brain cell, becomes the basis for three conscious realms that are conscious because they are entangled with the light in the tunnel which are photons of opposite spin. with the nine dimensional electron having three conscious realms, which each have a different perspective of the light in the tunnel or what is truth. The behavior that the electron engages in, emitting, absorption, entanglement, as it attempts to reconcile opposite information received from the two opposing molecules in the dimer and that behavior is reproduced by the two halves of the brain. The little electron in the dimer checks for truth by what is called super-radiant emission or cooperative emission which is a contribution from each molecule in the dimer. When we cooperate we become efficient and therefore cooperation is necessary for survival.

That truth checking behavior can be repeated by the brain but only if one desires to know truth. The whole body works together efficiently by seeking truth which is knowledge of opposites which is path of efficiency which is path life progresses along. Then as our minds evolved we learned to be deceitful and lie to ourselves first and then to others. Now the internal dialogue has been commandeered by the imagination and truth is given little thrift unless one is in court and swearing to tell the truth, like as if they were going to start now?” See Goldminer, Mind-Body Society and scientific evidence to support it called the Minnesota Experiment from the sixties or seventies. “They squeezed some helium or some such and got 1/3 electron bubbles on the other end.”, he said.

 

It may be noticed that much of the confusion in these exchanges is linguistically based, sometimes deliberately so, in pursuit of ego-centered goals. I simplify my life and, until otherwise proven, I assume that ‘mind’ explains/describes the functional activity of a physical brain structure. It is a linguistically contrived way to describe the brain’s various adaptive responses to environmental stimulation, from the body proper and external. That is the main reason why one needs to assume the pivotal importance of language in our view of sensory reality as experienced by otherwise normal, healthy individuals. The Achilles heel of any theory of consciousness is the necessity of an internal language able to distinguish ‘self’ from ‘others’. I struggled to provide justifiable analytical arguments about ‘self-consciousness’ based on studies of objective measurables but failed and finally had to settle for the contrived convenience. No one has challenged the premise, since its publication, that language and thought are simultaneously co-generated. This way one is constantly engaged in language-structured ‘chats’ (however primitive) with oneself (inner language) during the analysis of any familiar or new stimulus in the process of decision-making. This is a crucial aspect of inter-human communication. I am sure one may have noticed how often colleagues may not even realize they agree with you, that in criticizing you they are defending your position! All seems rooted in a careless choice of the language semantic structure used to report the ‘objection’. Others, of course, do it with Machiavellian sophistry tools to advance their entrenched beliefs (framework of reference), what Nietzsche described as an advancing ‘nihilistic tide’. We all choose our existentialist slogans and quote from the historical heroes that conform to our biased view…….until proven otherwise. So, if it works (consequences) and is justifiably understood, it’s good enough. That is the slogan.
The heroes range from Ockham, Leibniz, Kant, Dirac, and Einstein to many others living in our ‘real’ mesoscopic world. In case anyone forgot what I mean by “it works”, I will repeat my leitmotif: if the consequences are to keep your species alive, healthy, happy and cooperatively convivial now, tomorrow and ‘per saecula saeculorum’, that’s it! Please excuse the BPS commercial! 🙂

 

Other dissenters of my BPS model argue, quote: “You appear to be saying that truth is a relationship between the propositions of languages (and their symbols) and reality. A relationship is not nothing. If you are correct, then truth is certainly not nothing at all.  But without language or symbols concepts would still exist and so would the reality to which concepts refer. If we did not have language the concepts we express in language, including mathematics, would still exist and so would truth still exist. Truth is not nothing at all and truth would still exist if there were no languages, human or otherwise, to express it.” See Richard May, Prometheus Society.
The phenomenal reality would always exist regardless of our awareness of its presence. Once our sensory organs detect its presence or, in the alternative, we intuit its probable presence from the resulting sense-phenomenal causal effects observed, we are ready to elaborate an adequate symbolic/sentential representation of the interaction of unit-size particles, or their aggregates, with each other and the environment. This brain effort or mind activity we call the ‘concept’. The conceptual epistemological effort to communicate this activity to another human being includes symbols, logic sentences, code, the syntax structure of narrative, audiovisual cues, body language, etc. In other words, any phenomenal object/event that a human observer can capture directly (description) or indirectly (explanation) and incorporate in his/her conceptual model is tested according to its predictive value when put to a test. It incorporates the local macro ontologies (descriptions) and the non-local micro/cosmological epistemologies (explanations). IMHO neither approach suffices on its own, and we have to consider both as a unit structural/functional hybrid wholeness. Hope this clarifies my point a little.

 

Having read about the framework of reference of a materialist physicist judging reality, we find examples of the opposite persuasion, a metaphysical theological framework of reference defending Leibniz’s Monads. See Dr. Roger Clough, Leibniz Institute. My reply to that other view claiming exclusive status follows:

 

Regarding the evolutionary progression of Leibniz’s cosmogony across the centuries, there is no doubt about the genial foresight of the ‘Leibniz equivalence proposition’. It is not an exaggeration even to claim it was the forerunner of today’s ‘gauge transformations’, which permeate all speculations and conjectures about our ever-evolving mesoscopic existential reality. These transformations make it possible to relate two or more metric fields, especially in particle physics, where they have provided a powerful means of constructing reliable and consistent theories of interacting fields. The problem arises in deciding whether such elegant structural/functional mathematical representations, creatures of neocortical brain activity, do in fact represent the same physical situation, so that the transformation is a bona fide ‘gauge transformation’ in which the observable descriptions, or the logically inferred explanations, of the invariant physical substrate are real regardless of their observable invisibility. Could there be different physical realities, as posited by so many model poems such as ‘multiverses’? Do we need two different general equations, one each for the micro (sub-atomic) and the macro (universal) n-1 transfinity? If Leibniz’s proposition, as modified, were to ultimately prevail, it would require a reconciliation of relativity with quantum mechanics, even on a probability-calculus basis. Unfortunately, our species’ well-known limitations in sensory and combinatorial brain resolution capacities make us focus on the activity that, albeit constantly evolving, we can get a handle on analytically and critically: the best combination of verifiable, consistent sense phenomenology as the invariant element, plus the best corresponding mathematical logic formulation representing both the observable descriptions and the logically inferred explanations.
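As a concrete modern instance of the 'Leibniz equivalence' idea, the textbook electromagnetic gauge transformation relabels the potential without changing any observable field strength:

```latex
A_\mu \;\longrightarrow\; A'_\mu = A_\mu + \partial_\mu \chi ,
\qquad
F'_{\mu\nu} = \partial_\mu A'_\nu - \partial_\nu A'_\mu
            = \partial_\mu A_\nu - \partial_\nu A_\mu
            = F_{\mu\nu},
```

so the mathematically distinct descriptions A and A′ represent the same physical situation, which is precisely the question raised above about when representational difference is merely nominal.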
This way, a model poem of a mesoscopic, evolving existential reality, one that keeps us biopsychosocially viable and able to describe and explain in the simplest way the reality of quotidian existence in harmony with our phenomenal physical nature as described by natural laws, would be our choice. This would make room for the physical ‘seen’ and the metaphysical ‘unseen’, even if it proves impossible to incorporate both aspects in one general, universal epistemontological unit hybrid explanation, never to be reconciled under a supersymmetry such as the ‘string(s)’ theories. To conclude, Leibniz’s genial foresight of a pure gauge transformation has convincingly evolved into an issue that cannot be resolved exclusively by physical materialistic or metaphysical considerations in isolation. However, in our mesoscopic existential reality it becomes a physical issue to be settled by physical considerations, until proven otherwise.

 

Thus, would it be truthful to say that, when we truly think there is no ontological divide between what we think and verifiable reality, epistemological distortions disappear? Quaere.

 

In Deltona, Florida this wintry midnight of December 4, 2013 where the beginning of Advent and Hanukah strangely coincide.

 

Dr. Angell O. de la Sierra, Esq.

 




Update on the Transactional Interpretation of Brain Dynamics

A follow-up on material left out of the Summary and Conclusions.
Along the same lines as the ‘real’ versus ‘ideal’ justifiable arguments, we now add additional evidence on behalf of the anticipated eventual transition from the abstract ‘ideal’ to the measurable ‘real’ paradigms explaining existential reality:
The report “Breakthrough Study Reveals Biological Basis for Sensory Processing Disorders in Kids” constitutes the general explanation of the specific sensory processing disorders we see clinically in autism and attention-deficit/hyperactivity disorder (ADHD). It also underlines the importance of the human brain’s right hemisphere as the first point of impact for sensory input from ‘real’ environmental (external or body-proper) sources of new or familiar information, before an adaptive response is either implemented subconsciously as a reflex or subjected to further analytical processing, especially for unfamiliar sensory inputs. The master control nerve networks of the frontal neocortex continually process information arising from the right hemisphere’s initial effort to coordinate ongoing activity from multiple sources, such as left temporal- and parietal-lobe multisensory input and left frontal language processing in Broca’s area. The frontal neocortex is thus continually apprised of the ongoing ‘real’ environmental biopsychosocial circumstances, from passive meditation to active social partying. When new or familiar input is received at the right hemisphere, the master frontal neocortex performs an analytical sorting of the response alternatives available in the neocortical premotor attractor phase space. This is subsequently followed by a conscious activation of the adaptive motor-choice neurohumoral response, all things considered. What is important to notice is how the brain’s synaptic ‘real’-time processing precedes in time the ‘ideal’ adaptive motor response, especially when new/unfamiliar sense-phenomenal input arrives in the right hemisphere. All of this complex analytical sorting immediately follows the release of a subconscious reflex motor response serving the overall priority of biological preservation of the human species, as previously published and discussed by me in various HiQ listings and fora.
This detailed explanation hopefully supports my biased view about the importance of the ‘real’ preceding the ‘ideal’ solution.
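The processing order described above can be caricatured as control flow. This is a toy sketch of my own (the function names, stimuli, and scores are hypothetical illustrations, not the author's model): a subconscious reflex fires first, and only then does the 'frontal sorting' of alternatives produce a conscious adaptive choice.

```python
# Toy sketch of the described order: reflex first, conscious sorting second.
def respond(stimulus, familiar_responses):
    # 1. Subconscious reflex released immediately (species-preservation priority).
    reflex = "withdraw" if stimulus["threat"] else "orient"

    # 2. Right-hemisphere first impact: familiar input recalls a stored response.
    if stimulus["pattern"] in familiar_responses:
        choice = familiar_responses[stimulus["pattern"]]
    else:
        # 3. Unfamiliar input: analytical sorting of alternatives before a
        #    conscious adaptive motor choice, all things considered.
        scores = {"explore": 0.6, "ask": 0.5, "avoid": 0.3}
        if stimulus["threat"]:
            scores["avoid"] += 0.5   # a threat biases the sorting toward safety
        choice = max(scores, key=scores.get)
    return reflex, choice

print(respond({"threat": False, "pattern": "bell"}, {"bell": "approach"}))
# → ('orient', 'approach')
```

The point of the sketch is only the ordering: the reflex value is fixed before any deliberation over `choice` begins, mirroring the claim that 'real' synaptic processing precedes the 'ideal' adaptive response.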

The Unexpected Transition from Idealism to Realism.
The information explosion we have witnessed in the last two decades has unexpectedly accelerated the relentless forward evolution of complexity as experienced and narrated in human language semantic accounts in our communications. There is an insurmountable amount of verifiable evidence sustaining the ‘real’, demonstrable fact that only the human brain, and that of our primate relatives, has the ability to pay attention to objects/events in the audio-visual scene without always looking at or listening to them directly. This is done by recreating an internal map of the previously experienced sense-phenomenal world, mapping our sensory field onto specific brain cells. The mapping includes local and non-local verifiable observables. This is the existentially real case whether or not the ‘wave’ or ‘field’ mass-particle carrier we conveniently derive as an ideal notion fits the previous consistent falsifiable experience. Thus the local quantum physics interpretation implies being bounded within a finite space-time region where an observable object/event exhibits a ‘real’ behavior conditioned on the relevant environmental circumstances properly belonging to that space-time region itself. This is the classical Copenhagen Interpretation (CI). It is along these lines that the linear algebraic approach focuses on ‘real’ physical local and ‘ideal’ metaphysical non-local representations (symbolic or sentential logic) on a probability basis, emphasizing that the notion of a field or wave is only a convenient derivative of the ‘real’ local measurements that preceded the ‘ideal’ non-local explanation.
The competing model-poem that ‘ideally’ explains the same ‘real’ phenomenological description of the object/event sensory reality that preceded it is the Transactional Interpretation of Quantum Mechanics (TIQM). The advantage of the transactional interpretation is, in my opinion, that it incorporates probable interpretations of ongoing verifiable existential experiences that are irreducible to linguistic coding in symbolic or sentential logic representations. Its emphasis on sense-phenomenal, empirical ‘reality’ descriptions is more reliable, especially when the Transactional Interpretation (TI) predictions are confirmed. Consequently, Lagrangian Quantum Field Theory (QFT) is our most empirically well-confirmed physical theory, where the ‘ideal’ explanation of the metaphysical component harmonizes with the ‘real’ empirical object/event descriptions. The reliance on verifiable sensory facts excels in the expediency of calculations and their intuitive understanding, because it is closer to phenomenological experimental manipulation in the physics lab. That makes the derived metaphysical ‘ideal’ model poem more credible when applying the theory to make predictions. Anytime a pragmatic empirical ‘reality’ description of the occurrence of an object/event and an ‘ideal’ metaphysical logic sophistication explanation (e.g., an isomorphic mapping of the elements of a C*-algebra into the set of bounded operators of the Dirac/Hilbert space) lead to the same existential-reality conclusion, pragmatic ‘realism’ trumps mathematical rigor due to the resulting simplicity, efficiency, and ease of understanding.
When the TIQM model is mathematically modified further to incorporate the speculative probability of identifying the n-1 dimension space coordinates of probable sources of cosmological information input located beyond our local 4-dimensional space-time, TIQM becomes a superior ‘ideal’ model, one that also explains non-local sources of information input (e.g., measurable cosmic radiation) causally efficient in influencing the evolving complex human ‘reality’. This emphasis has made it possible to grasp the seemingly conflicting multidisciplinary aspects of the same human reality in a real mesoscopic biosphere vital environment. This has been my personal experience.
The TIQM model’s superiority is best illustrated by adopting the Dirac notation as an analytical tool, an empirically based ‘ideal’ representation where the ‘entanglement’ coupling rests on the complementary pairing/coupling of opposite-spin unit particulate matter. The mechanism needs to operate at the subatomic micro level, but two macromolecules like DNA or RNA can be functionally coupled if and only if their micro components are spin-coupled first. I am not even sure that our experimental sources are actually separating spin couples at random; they may be separating, in opposite directions, larger photon components already coupled that remain so even at cosmological dimensional units, as explained in the introduction above. We may never know. That way, local or ‘non-local transfinite information’ input can gain empirical ‘de facto’ codon control of the transduction/translation genetic genotype machinery, with verifiable phenotype results observed in experimental labs as an induced re-arranging of the corresponding polynucleotide DNA helical structure and the subsequent RNA translation of environmental alterations into the corresponding functional enzyme production controlling phenotype expression, as briefly explained below in tracking the phenotype results of environmentally induced genotype alterations, e.g., ‘optogenetic’ tests.
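The opposite-spin pairing just described has a standard Dirac-notation workhorse: the two-particle singlet state. A minimal NumPy sketch of my own (not the author's formalism) shows that for (|01⟩ − |10⟩)/√2 the joint outcomes are always anticorrelated, whichever particle is measured first:

```python
import numpy as np

# Single-particle basis kets |0> and |1> (e.g., spin-up and spin-down).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Singlet state |psi> = (|0>|1> - |1>|0>) / sqrt(2), built with the Kronecker product.
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# Born rule: probability of joint outcome |ab> is |<ab|psi>|^2.
basis = [ket0, ket1]
probs = {f"{a}{b}": abs(np.kron(basis[a], basis[b]) @ psi) ** 2
         for a in (0, 1) for b in (0, 1)}
print(probs)  # only the anticorrelated outcomes '01' and '10' ever occur
```

Whether the anticorrelation reflects a choice made at measurement or, as speculated above, a coupling fixed at the source, the observable statistics are the same, which is exactly why the question is hard to settle experimentally.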
It should be noted that the sophisticated mathematical axiomatic (logic) ‘ideal’ representation of the verifiable ‘real’ observation of an object/event in a derived wave or field conveyance regards the conveyance conceptualization as the fundamental notion for no convincing reason other than its symbolic/sentential representational elegance. The ‘ideal’ elegant map sophistication has thereby unjustifiably become more important than the ‘real’ empirical observable territory! Thus, the Wightman axiomatic quantum field theory (QFT) becomes ‘superior’ to the linear algebraic QFT even when both are abstract explanations of an ‘ideal’ field with infinite degrees of freedom for putative sub-Planckian quantum particles that appear in special circumstances.
As noted earlier, the less mathematically elaborate algebraic QFT abstraction originated from observables in the measurable local and probable non-local environments, whereas the more mathematically sophisticated axiomatic approach is limited to a conceptual elaboration of the field, a derived carrier-model notion from classical local quantum physics. Furthermore, in the classical local quantum physics interpretation an observable is regarded as a property belonging to the space-time region itself, i.e., Higgs bosons ‘creating’ something out of a nothing vacuum? Is this a new mathematical ‘Genesis’, criticizing the ‘delta function’ as improper and laden with self-derived contradictions, as von Neumann opined?
Fortunately, as it turns out, von Neumann was the proponent of a new ‘ideal’ framework based on Hilbert’s theory of operators, and Dirac the proponent of a ‘real’ framework of reference amenable to the rigors of testing: the local phenomenal events in the biophysical chemistry labs (e.g., optogenetic testing to cover the micro invisibilities) and the non-local events in the astronomy observatories covering the cosmological transfinite n-1 dimensional invisibilities.
Mathematically, the Dirac delta function is limited in scope: defined over the ‘real’ line, it is zero everywhere except for one point at which it is infinite, yet it yields unity when integrated over the real line. Von Neumann promoted an alternative framework, which he characterized as being “just as clear and unified, but without derived mathematical objections.” He emphasized that his framework is not merely a refinement of Dirac’s; rather, it is a radically different framework based on Hilbert’s theory of operators. When both arguments are analyzed objectively and dispassionately, each has its own merits, but either would be incomplete if it claimed exclusivity. If we had to choose only one, it is clear that when pragmatics and rigor lead to the same conclusion, then, as I said above, pragmatics trumps rigor due to the resulting simplicity, efficiency, and increase in understanding made possible. Most important, however, is that it allows for unexpected new environmental circumstances as they are empirically detected. In other words, the TIQM model approach adopts the pragmatic orientation of the Lagrangian QFT (based on perturbation theory, Feynman’s path integrals and renormalization techniques), whereas “axiomatic” QFT refers specifically to the ‘ideal’ derived component of existential reality based on operator-valued distributions.
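For concreteness, the defining properties just listed, together with the 'sifting' property that made the delta so useful in Dirac's calculations despite von Neumann's objection that no ordinary function can satisfy all of them:

```latex
\delta(x) = 0 \quad (x \neq 0), \qquad
\int_{-\infty}^{\infty} \delta(x)\,dx = 1, \qquad
\int_{-\infty}^{\infty} f(x)\,\delta(x - a)\,dx = f(a).
```

The modern resolution treats δ rigorously as a distribution (a linear functional on test functions) rather than a function, which is the mathematical middle ground between the two frameworks compared here.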
The undersigned author is not that familiar with Weinberg’s ‘real’ pragmatic formulation, which allegedly zeroes in on human physical intuition and provides heuristics that are important when performing calculations; however, the mathematical theorists do not consider it mathematically rigorous enough, and they pay little attention to the fact that their own proposed mathematical structure does not provide any techniques for connecting with familiar or new experimentally determined quantities. It is clear that these two approaches to QFT, the rigorous axiomatic and the pragmatic Lagrangian, are rival research programs. I think we can get the best of both propositions, a combination that harmonizes with ongoing complex phenomenal reality as it evolves, while I satisfy my curiosity as to their philosophical foundations. Then, because of my neurophilosophical interest in using the best available strategic tool when analyzing the mysterious struggle of our human species’ trans-generational survival against the odds, I compare the competing mathematical strategies and wonder why one should use the infinitesimals of classical quantum physics when one can use n-1 dimension transfinite parameters, more meaningful in the analysis of current existential reality.
Who, when leisurely meditating upon retirement on the issues of life and human consciousness, has not immediately reckoned with the relevance of particulate brain matter in reciprocal motion inside and outside a physical brain, and of the force(s) fueling the physical mass unit particles to exhibit dynamic reciprocal motion according to the well-established laws of physics? It was all about analytically speculating how such motion, carrying the invisible micro unit dimensional particle in a putative field (electric, magnetic or both) or wave conveyance, can best be explained using the available mathematical logic metaphysics tools. Dirac’s pragmatic approach proposed the equivalence of matrix mechanics and wave or field mechanics by using the delta function, an improvement on the original Hilbert space usage by incorporating scalar metrics definable in terms of the scalar’s complex conjugate (a coupling/conjugation of a ‘real’ number + an ‘ideal’ imaginary number). That stroke of genius makes it possible to analogize such coupling with the still mysterious experimental non-local coupling of opposite-spin particles (Φ is the topological anti-dual of Φx) at a distance, when the source allegedly fired them in opposite space-time directions. EPR-type (Bell test) experiments show their remaining entangled connectedness even when miles apart. As mentioned above, the physical particles behave stochastically as they travel in opposite directions until one of them ‘randomly’ selects one of the various spin orientations available in the instrument, after which the other particle mysteriously selects only the opposite, complementary orientation, a ‘spooky’ action at a distance as relativity considers the result. I personally think they remained coupled when they left the source. Our human brain’s limits in both perceptual and conceptual resolution capacities make my speculation impossible to measure, but neither is it justified to assume their original randomness.
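The 'coupling of a real number with its imaginary conjugate' mentioned above is exactly what the Hilbert-space inner product does. A short sketch of my own (illustrative vectors, not the author's data) shows why conjugate pairing guarantees that every state's norm ⟨ψ|ψ⟩ is real and non-negative:

```python
import numpy as np

# Two illustrative state vectors with complex components.
psi = np.array([1 + 2j, 3 - 1j])
phi = np.array([0 + 1j, 2 + 0j])

# <phi|psi> = sum over i of conj(phi_i) * psi_i; np.vdot conjugates its
# first argument, matching the Dirac bra-ket convention.
inner = np.vdot(phi, psi)    # complex in general: here 8-3j

# <psi|psi> pairs each component with its own conjugate,
# so the imaginary parts cancel: |1+2j|^2 + |3-1j|^2 = 5 + 10 = 15.
norm_sq = np.vdot(psi, psi)
print(inner, norm_sq)
```

The sesquilinear (conjugate-in-one-slot) pairing is the 'scalar metric' Dirac built into his notation; it is what lets squared amplitudes serve as probabilities.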
How else can we simultaneously measure the position of a unit mass particle m being accelerated a by a causally efficient force f according to f = ma when the ‘ideal’ operators have no eigenvalues or eigenvectors? Here is the opportunity to calibrate the adequacy of competing mathematical-physics ‘ideal’ algebras: one based on ‘real’ experimental observables related to bounded space-time locations, like the finite double-cone ‘twistor’ model where light traveling in opposite directions intersects at the vertex of the light cone, or an ‘ideal’ algebra based on ‘real’ relativistic QFT interpretations? You be the judge: stay with the ‘ideal’ axiomatic version about how things should be (à la von Neumann), or transition to the ‘reality’-based version about how things probably are or predictably will be (à la Dirac), all things existentially relevant to human beings’ reality limitations being considered. Dirac’s Hilbert space assigns generalized eigenfunctions to unit particle mass m position and instantaneous velocity v from measurable momentum (p = mv) operators, resulting in the nuclear spectral theorem where Φ and Φx remain connected, as mathematically derived by an algebraic QFT of observables in our ‘real’ 4-dimensional Minkowski space-time.

Another instance for comparing the axiomatic or Dirac models is found in the recently reported “Breakthrough Study Reveals Biological Basis for Sensory Processing Disorders in Kids.” As narrated, it constitutes the general explanation of the specific sensory processing disorders we clinically see in autism and the attention deficit hyperactivity disease (ADHD). It also underlines the importance of the human brain’s right hemisphere first impact with sensory input from a ‘real’ phenomenological environmental (external or body proper) input sources of new or familiar information before an adaptive response is either subconsciously implemented reflexly or after further analytical processing especially when processing unfamiliar sensory inputs. It has been observed in the brain clinical lab how the master control nerve networks of the frontal brain neocortex is continually processing information input arising originally from the right ® hemisphere initial effort to coordinate ongoing ‘real’ phenomenological activity from multiple sources such as Left (L) temporal and parietal lobe convergence of multisensory input and Left (L) frontal language semantic processing Broca’s area. The frontal neocortex brain is continually assessed of the whole spectrum of ongoing ‘real’ environmental biopsychosocial (BPS) relevant circumstances, from passive meditation to active social partying. When the new or familiar input is received at the Right hemisphere the master frontal neocortex analytical sorting of available response alternatives present in the neocortical pre motor attractor phase space working memory. This subconscious effort is subsequently followed by a conscious activation of the best adaptive motor choice neurohumoral response, all things considered. What is important to notice is how the brain synaptic ‘real’ time processing of information input precedes in time the freely conscious choice of the ‘ideal’ adaptive motor response. 
This choice is especially important when new/unfamiliar sense-phenomenal information input arrives at the right brain hemisphere. All of this complex analytical sorting immediately follows after a subconscious reflex motor response is released, with priority given to the overall biological preservation of the human species, as previously published and discussed by me in various HiQ listings and fora. This detailed explanation hopefully supports my biased view about the importance of the ‘real’ preceding the ‘ideal’ solution. For more details see my Blog at:

When defending the human biological survival priority (BPS) premises against the objections of some professional mathematical physics theorists, I state that by conditioning the sophistications of the axiomatic mathematical operations to conform to the standard ‘real’ locality axioms (e.g., isotony, locality, covariance, additivity, positive spectrum, etc.), Dirac’s original model theory can be extended to reach the cosmologic invisibilities of transfinite non-local space-time manifolds beyond our local Minkowski 4-dimensional manifold. For the reasons stated above about the human species’ perceptual and conceptual brain resolution limitations, I am not making an unjustifiable inclusion of a mysterious and unique invariant vacuum state, as already noted. This is the basis on which I am still working on an all-encompassing TOE model poem of human mesoscopic existential reality, incorporating and modifying Cramer’s original TIQM model and proposing the measurable details for the corroboration of a ‘transactional’ dark baryonic reciprocal receptor DNA/RNA mediating a two-way information transfer between unidentified transfinite space-time coordinates and a premotor cortical acceptor in the human brain neocortical attractor phase space, as published.
The resulting set of algebras on Minkowski space-time that satisfy these axioms is referred to as the net of local algebras. It has been shown that special subsets of the net of local algebras — those corresponding to various types of unbounded space-time regions such as tubes, monotones (a tube that extends infinitely in one direction only), and wedges — are type-III factors.
The classical N-dimensional complex vector space representation of complete human brain dynamics of mesoscopic existential reality, as the linear algebraic combination of the observable ‘real’ plus the logically inferred ‘ideal’ components, awaits experimental corroboration of the putative ‘reciprocal dark baryonic receptor’ in human brain networks.
There is nothing new when we focus on our complex, evolving ‘real’ lives with the objects and events in our quotidian 4-dimensional Minkowski space-time physical environment. A reliable simplification can be achieved if the ‘ideal’ component is dimensionally considered a vector A in an N-dimensional vector space over the field of complex numbers ℂ, symbolically stated as A ∈ ℂ^N. The vector A is still conventionally represented as a linear combination or sum (from n = 1 to N) of basis vectors, A = A1e1 + A2e2 + … + ANeN, written as a column matrix of coordinates from A1 to AN, even though the coordinates and vectors are all complex-valued, by including putative negative dimensions to explain the ‘transactions’ between local and non-local cosmological coordinates.
We can improve on the reliability of such a model if we avoid immeasurable infinities (N) and settle for probable transfinite n-1d approximations. This way, A can be a vector in a complex Hilbert space. Some Dirac/Hilbert spaces, like ℂ^d, have a local finite dimension (d), while others have a non-local infinite dimension (n) adjustable to a transfinite (n-1 d) dimension so it becomes related to local sensory objects/events. In an infinite-dimensional space, the column-vector representation of A above would be a list of infinitely many complex numbers A1, A2, …, all of whose dimensions are summed/integrated; the same vector may also be represented as a row if written as a ‘bra’ ⟨B|. Finite dimensions are experimentally testable for local observable/environmental ‘real’ conditions, while transfinite coordinate locations are more adequate for probable, testable non-local ‘ideal’ environmental situations, especially for their predictive potential regarding future catastrophic events threatening the human species’ transgenerational biological survival against ‘real’ adaptive odds.
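The paragraph above can be sketched numerically. This is only an illustrative toy (the function names are my own, not the author's): a ket is stored as a finite list of complex amplitudes, and an "infinite-dimensional" vector can only be handled in practice by truncating to its first d components, echoing the transfinite n-1d approximation the text proposes.

```python
# Toy sketch (hypothetical helpers): a ket |A> as a finite column of
# complex amplitudes A_1 ... A_d, plus a truncation step that keeps
# only the first d components of a longer vector.

def make_ket(amplitudes):
    """Represent |A> as a list of complex numbers (a column vector)."""
    return [complex(a) for a in amplitudes]

def truncate(ket, d):
    """Approximate an (in principle infinite) ket by its first d terms."""
    return ket[:d]

A = make_ket([1 + 2j, 0.5j, -1.0, 3 - 1j])
print(truncate(A, 2))  # first two complex coordinates of |A>
```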

or, in a more easily generalized notation, ⟨B|A⟩, where the left ‘bra’ member ⟨B| is a ‘row’ and the right ‘ket’ member |A⟩ is a ‘column’, functionally linked as equivalents. A bra next to a ket implies a linked/coupled matrix multiplication of a (1 × n) row by an (n × 1) column.
The bra row may be written in short as its ket equivalent, or as any convenient symbol, letter or word inside the ‘ket’ column space. By common practice, kets are used for labeling energy eigenkets in quantum mechanics with a list of their quantum numbers. In quantum mechanics, a stationary state is an eigenvector of the Hamiltonian, implying that the probability density associated with the wavefunction is independent of time [1], and thus it is the assumed invariant bra component in my ‘ideal’ 9-dimensional model of a ‘complete’ TOE model of human reality. The ket corresponds to a quantum state with a single definite energy (instead of a quantum superposition of different energies). The quantum states are rays of vectors in the Dirac/Hilbert space, as c|ψ⟩ corresponds to the same state for any nonzero complex number c. A stationary state is also called an energy eigenvector, energy eigenstate, energy eigenfunction, or energy eigenket. It is very similar and equivalent in spirit to the concept of atomic and molecular orbitals in chemistry, with some slight differences as briefly explained below.

We use ‘quantum numbers’ to describe the micro spin values of conserved unit-dimensional particulate matter quantities in the brain dynamics of the quantum system. Perhaps the most peculiar aspect of quantum mechanics is the quantization of observable quantities, since quantum numbers are discrete sets of integers or half-integers. This distinguishes it from classical mechanics, where the values are time-dependent variables that can range continuously. While quantum numbers often describe specifically the energies of electrons in atoms, they can also apply to angular momentum, spin, etc. Any quantum system can have one or more quantum numbers; it is thus difficult to list all possible quantum numbers. [1]
An inner product is a generalization of the dot product: a (1 × n) bra row multiplied by an (n × 1) ket column. The inner product of two vectors is in general a complex number, because the components carry values in both the ‘real’ positive and the ‘ideal’ virtual/negative vector directional domains. The bra–ket notation provides a specific notation for inner products: a bra next to a ket implies matrix multiplication. We can thus use Dirac notation to represent inner or dot products.
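As a minimal sketch of this (function name `inner` is my own), the Dirac inner product ⟨B|A⟩ is the sum of conjugated bra components times ket components; the result is a single complex number, and swapping the arguments yields the complex conjugate:

```python
# Sketch of the Dirac inner product <B|A> = sum_n conj(B_n) * A_n:
# the bra contributes complex conjugates, and a (1 x n) row times an
# (n x 1) column collapses to one complex scalar.

def inner(bra, ket):
    assert len(bra) == len(ket)
    return sum(b.conjugate() * a for b, a in zip(bra, ket))

A = [1 + 1j, 2 - 1j]
B = [0 + 1j, 1 + 0j]
print(inner(B, A))  # a single complex number

# Conjugate symmetry: <A|B> = conj(<B|A>)
print(inner(A, B) == inner(B, A).conjugate())  # True
```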
One can also use the bra-ket notation to isolate different but related individualized sets of information content inside (brackets). If we analyze ‘real’ local verifiable environmental conditions inside our three-dimensional frame (3-d coordinate axes x, y, z at right angles to each other), a complex 9-dimensional Euclidean space is represented, the bra standing for a static, time-independent invariant situation and the ket portion representing the variable portion of the same reality in a linear metrics. The bra-ket representation takes the general form ⟨B|A⟩ = ⟨A|B⟩*, where ⟨A|B⟩* denotes the complex conjugate of ⟨A|B⟩. We can easily see the 9-dimensional ‘ideal’ space come to life, as published in more detail before.

Another important special case to notice is the inner product of a vector with itself, which is the square of its norm (magnitude); in bra-ket notation it takes the form ⟨A|A⟩ = |A1|² + |A2|² + … + |An|² = ‖A‖².
The real importance of the bra–ket notation is that it allows the formation of sets confined inside (brackets), as when multiplying two sets of either stationary or variable quantities. This way both the bra and the ket are meaningful on their own, and can be used in local or non-local contexts other than an inner or dot product representation. There is an obvious advantage to this ingenious way of representing the invariables and the variables, the local and the non-local, the ‘real’ and the ‘ideal’, the present as it verifiably is in the ‘real’ world and the probable future as it may be or as the ‘ideal’ world should be as a goal, the immediate and the transcendental, the journey and the destination, the map and the territory. This distinction, in the ongoing complex evolution of human existential reality, confuses the lay and the experts alike, with subconscious or consciously deliberate results, but ultimately serves to protect human lives from eventual extinction after completing their life cycles.
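The claim that bras and kets are meaningful on their own can be sketched with the reversed product (helper name `outer` is my own assumption): while a bra next to a ket collapses to a scalar, a ket next to a bra, |A⟩⟨B|, forms an n × n matrix, the outer product:

```python
# Sketch: <B|A> (bra-ket) is a scalar, but |A><B| (ket-bra) is an
# n x n matrix -- the outer product -- built from each ket amplitude
# times each conjugated bra amplitude.

def outer(ket, bra):
    return [[a * b.conjugate() for b in bra] for a in ket]

A = [1 + 0j, 0 + 1j]
B = [0 + 1j, 1 + 0j]
M = outer(A, B)   # a 2 x 2 complex matrix, not a number
print(M)
```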
Whatever the good, bad or indifferent intention to maintain our human biopsychosocial (BPS) integrity and keep track of the continuous, relentless challenges in our environmental biosphere may be, we need an appropriate analytical logic tool allowing a continuous scrutiny of the present that is and the probable future that may or should be. The Dirac ‘real’ approach of focusing on the specific physical, verifiable observables using linear metrics represents to me the necessary frame of reference to metaphysically predict the probability of dangerous occurrences ultimately threatening human lives. The transactional interpretation of quantum mechanics (TIQM) model, as modified, can accommodate relevant unsuspected occurrences. What follows is a brief account of the related sentential/symbolic linguistic representations in a linear algebra approach to vector calculus theory models.
The use of a scalar-quantity metrics allows for the rigorous representation of intuitive geometrical notions such as the length of a vector or the angle between two vectors. It also provides the means of defining orthogonality between vectors of local environments (an inner or dot product of zero) by generalizing our 3-d ‘real’ Euclidean spaces and also the probable ‘ideal’ vector spaces of any (possibly infinite) dimension, as studied in functional analysis. Thus there are finite-dimensional Dirac/Hilbert ‘real’ local spaces, as fully elaborated in linear algebra, and there are infinite-dimensional separable ‘ideal’ non-local Dirac/Hilbert spaces that can be made structurally and functionally isomorphic/equivalent to an n-1 dimensional transfinite space. Modified Hilbert spaces are Dirac space equivalents, and there is a unique Dirac space, up to isomorphism, for every cardinality of the orthonormal basis.
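The orthogonality criterion just stated, a zero inner product, can be checked directly in a small sketch (the `inner` helper is my own naming, not the author's):

```python
# Sketch: two vectors are orthogonal exactly when their inner product
# is zero, as the paragraph states for 'real' 3-d Euclidean space and
# for higher-dimensional generalizations alike.

def inner(bra, ket):
    return sum(b.conjugate() * a for b, a in zip(bra, ket))

x = [1 + 0j, 0 + 0j, 0 + 0j]   # unit vector along the x axis
y = [0 + 0j, 1 + 0j, 0 + 0j]   # unit vector along the y axis
print(inner(x, y) == 0)        # True: x and y are orthogonal
```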

Dirac’s bra-ket notation reliably makes possible a separation of the ‘real’ 4-d Minkowski space-time describing what happens in the ‘real’ local dynamic human brain (as measured by fMRI and other techniques) from the ‘ideal’ non-local but probable and predictable n-1d transfinite space, conditioned on the elusive proof that every bounded linear operator on a Dirac/Hilbert space has a proper invariant subspace. In my biased opinion, solutions to some cases of this invariant subspace problem have already been proposed, along with my own published speculative arguments. A reconciliation of the quantum and relativistic aspects of human existential reality has proven insurmountable, especially the mathematical conundrum of describing the sizes of infinite sets using the transfinite cardinal numbers, as briefly summarized now.
The notion of using (brackets) is predicated on their ability to isolate, as ‘sets’, related aspects of one and the same human existential ‘reality’, some of which may be functionally linked (entangled) in the local or non-local environment. The cardinality concept was intuitively developed by Georg Cantor, also the originator of set theory, in 1874–1884. Cardinality can be used to compare two or more finite sets; e.g., the two sets {1,2,3} and {4,5,6} have the same cardinality because they can be arranged in a one-to-one reciprocal correspondence ({1→4, 2→5, 3→6}). When applied to infinite sets [1], e.g. the set of natural numbers n = {0, 1, 2, 3, …}, such sets become denumerable (countably infinite) sets, and their cardinal number is called ℵ₀ (aleph-null), the first of the transfinite cardinal numbers.
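Cantor's one-to-one correspondence for the two finite sets named above can be sketched in a few lines:

```python
# Sketch of Cantor's criterion: {1,2,3} and {4,5,6} have the same
# cardinality because a one-to-one pairing exists between them.
s1 = [1, 2, 3]
s2 = [4, 5, 6]
pairing = dict(zip(s1, s2))   # {1: 4, 2: 5, 3: 6}
print(pairing)

# Every element of s1 maps to exactly one element of s2 and vice versa:
print(len(pairing) == len(s1) == len(set(pairing.values())))  # True
```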
The magic of the TIQM/Dirac genius when handling the ‘ideal’ non-local environmental coordinates in n-1 dimensional transfinite space is being able to ‘see’ objects/events unseen by humans and make probable predictions and preparation strategies to protect against their potentially harmful future causal effects on human lives on earth. As mentioned above, the notation demonstrates how a bra can become an equivalent ket by way of a conjugate transpose (Hermitian conjugate), and vice versa: if one starts with the bra row representation and performs a complex conjugation followed by a matrix transpose, one ends up with the equivalent ket column, and vice versa.
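The conjugate-transpose step can be sketched as follows (the `dagger` name is my own; with 1-d lists the transpose from row to column is implicit, so the operation reduces to element-wise conjugation):

```python
# Sketch of the Hermitian conjugate (dagger): complex-conjugate each
# component (the row-to-column transpose is implicit for a 1-d list).
# Applying it twice returns the original vector, so bra and ket are
# equivalent representations of the same information.

def dagger(v):
    return [a.conjugate() for a in v]

bra_A = [1 + 2j, 3 - 1j]
ket_A = dagger(bra_A)          # [(1-2j), (3+1j)]
print(dagger(ket_A) == bra_A)  # True: the dagger is an involution
```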

Bras can also act as linear functionals on kets, such as states whose wavefunctions are Dirac delta functions or infinite 2-d plane waves, much like modified Hilbert spaces but more flexible, in that it does not require the normalization of wave functions that assigns a strictly positive length or size to each vector in a vector space, a goal for the possible future. Measurements are associated with linear operators, called observables, on the Dirac/Hilbert space of quantum states.
The dynamic, environmentally induced interactive variations in the unit-dimensional aggregates, like the time-independent static invariants of the unit-dimensional physical particle, are also described by linear operators on the Dirac/Hilbert space. It should be noted that ‘real’ sense-phenomenal ‘local’ and describable information input is taken care of by the classic Hilbert space, but verifiable ‘non-local’ objects/events beyond the human threshold of resolution can still be ‘ideally’ explained as probable by the inner product, which by definition is linear in the first argument and bounded, as derived from the Cauchy-Schwarz inequality. The functional integration of the ‘local’ and ‘non-local’ elements provides a much more complete description of human existential reality, as argued.
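The boundedness invoked above rests on the Cauchy-Schwarz inequality, |⟨u|v⟩| ≤ ‖u‖·‖v‖, which a small numerical sketch can check (helper names are my own):

```python
# Sketch: numerical check of the Cauchy-Schwarz inequality
# |<u|v>| <= ||u|| * ||v||, which guarantees the inner product
# is bounded, as the paragraph invokes.
import math

def inner(bra, ket):
    return sum(b.conjugate() * a for b, a in zip(bra, ket))

def norm(v):
    # <v|v> is real and non-negative; its square root is the length
    return math.sqrt(inner(v, v).real)

u = [1 + 1j, 2 - 1j]
v = [0 + 1j, 1 + 1j]
print(abs(inner(u, v)) <= norm(u) * norm(v))  # True
```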
In the specific case of our speculative arguments on how the human brain dynamically analyzes and processes reciprocal information inputs from environmental local (including body proper) and non-local transfinite sources, we conveniently normalize or scale the quantum wave function to ⟨ψ|ψ⟩ = +1 when involving vectors or linear operators. We find that bra-ket notation best explains the causal link between verifiable observables and the predictable micro-spin coupling, as amply discussed elsewhere. The local (albeit of infinite scope) verifiable objects/events are discrete and countable (quantized); the non-local (infinite) are continuous and uncountable but may become verifiable on a quantum probability basis. ‘Real’ & local: discrete, with an infinite scope. ‘Ideal’ & non-local: continuous, with an infinite scope.
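The normalization convention just adopted, scaling the state so that ⟨ψ|ψ⟩ = 1, can be sketched as (the `normalize` helper is my own naming):

```python
# Sketch of scaling a quantum state so that <psi|psi> = 1, the
# normalization convention the paragraph adopts.
import math

def normalize(psi):
    n = math.sqrt(sum(abs(a) ** 2 for a in psi))
    return [a / n for a in psi]

psi = [3 + 0j, 4j]             # un-normalized amplitudes
psi_n = normalize(psi)         # [0.6, 0.8j]
total = sum(abs(a) ** 2 for a in psi_n)
print(round(total, 12))        # 1.0: probabilities now sum to one
```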

At the risk of being repetitious, we need to emphasize that any ket |Ψ⟩ defines a complex scalar function of r, i.e., a wavefunction Ψ(r) = ⟨r|Ψ⟩, where Ψ(r) on the left is a mathematical function mapping any point in space to a complex amplitude. Operators acting on kets then act on wavefunctions; e.g., the momentum operator p in position space r relates to the wave function as pΨ(r) = −iħ∇Ψ(r). The Dirac/Hilbert notation may just as well explain the evolution of the complex brain dynamics when adjusting to ongoing familiar or new objects/events representing a potential threat to the human species’ biological survival as it causally affects its biopsychosocial (BPS) parameters.
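The action of the momentum operator on a wavefunction can be sketched numerically in one dimension (an assumed discretization, with ħ set to 1): applied to a sampled plane wave ψ(x) = exp(ikx), the operator p = −i d/dx should return approximately k·ψ(x), since the plane wave is its eigenfunction.

```python
# Sketch (assumed units, hbar = 1): the momentum operator p = -i d/dx
# applied to a plane wave psi(x) = exp(i k x) via a central finite
# difference; the plane wave is an eigenfunction with eigenvalue k.
import cmath

k, dx = 2.0, 1e-5
x = 0.3
psi = lambda x: cmath.exp(1j * k * x)

# central finite difference approximation of d/dx
dpsi = (psi(x + dx) - psi(x - dx)) / (2 * dx)
p_psi = -1j * dpsi

# p|psi> should be approximately k * |psi>
ratio = p_psi / psi(x)
print(round(ratio.real, 6))  # close to k = 2.0
```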
One may conveniently consider a time-independent, unit-dimensional, discrete, invariant physical mass particle at a given moment in real-time mesoscopic existential reality as simultaneously coexisting with the corresponding time-dependent continuous variations of its particulate characteristics as they aggregate in the same moment in time. The invariant infinite is countable by definition, and the variant infinite is uncountable beyond the threshold of phenomenological resolution. Together they represent the entire spectrum of infinite reality, from the micro sub-Planckian to the macro cosmological manifolds. Since infinite (N-dimensional) objects/events are not phenomenologically measurable in principle, we have to invent a putative countable transfinite (n-1 dimensional) manifold of discourse. The rest of the arguments will focus on the evolutionary component of the unit whole complex aggregate structure and function, as anticipated and hopefully predicted and confirmed as causally related by justifiable and consistent verification as probable: a transition from the unreliable indeterminate invisibility domain to the more reliable determinate invisibility, now able to be tracked along its evolutionary path with the mathematical logic tool of linear metrics that has proven so successful in science, technology and philosophy. We need to define the causally efficient force (f) behind the acceleration (a) of the unit-dimensional mass (m) particle responsible for the variations in structure and function of its physical aggregates (f = ma) affecting our existential human reality in our 9-dimensional biosphere environment, as published elsewhere.


Update on the Transactional Model of Brain Dynamics

The Unexpected Transition from Idealism to Realism
Reciprocal Information Transfer
Introduction.
General. There should really be nothing tricky or mysterious about faith values, whether the reference-frame belief is theosophist, atheist, scientologist, agnostic, or a variation thereof. In normal, healthy human beings all beliefs have a common denominator: either they are rooted in sensory (phenomenological) factual observations or measurements of physical objects or events, or they are rooted in extrasensory (non-phenomenological) explanations, logical probabilities that can be inferred from those descriptions. The credibility of any ensuing judgment depends on the falsifiable consistency and predictability of the occurrence in question under standard environmental conditions at a given moment in time and space. Another important aspect of the credibility of a given judgment is the probable truth content of either the sensory experience (ontology) or the extrasensory logical inference (epistemology) directly relevant to the occurrence being analyzed for reliable truth content.

It should be obvious that the sensory ontological/physical presence is directly verifiable, whereas the extrasensory epistemological/metaphysical ‘presence’ is based on its probability of occurrence. Please consider that an object or event may be physically present but outside the threshold of human brain sensory resolution (such as microorganisms, molecules, atoms, etc.). In this case its physical but invisible presence has to be established inferentially from the predictable consequences of its probable presence, if and only if we can epistemologically justify with logical arguments a probable direct causally efficient agency responsible for those predictable consequences. In the absence of direct sensory information we properly substitute the perceptual information with temporary, reliable cognitive information, the epistemological inference. Hopefully, in the future, a positive prediction of the occurrence is reinforced by instrumental micro-descriptions of the direct causally efficient physical agents. It is also possible to infer a probable physical causal agency by functional criteria based on its predictable influence on specific brain loci activity, as indirectly measured by falsifiable fMRI recordings. Notice how we depart from a direct sensory experience to an indirect instrumental equivalent. Can this equivalence still reside inside the ontological domain of discourse? Quaere.
But, what if a direct or indirect physical causal agent may not even be ontologically described or epistemologically explained?

Specific. Enter the domain of the hybrid model-poem ‘singularity’ we have baptized the ‘epistemontological’ biopsychosocial (BPS) unit model of brain dynamics. We hope that by attempting (a work still unfinished) a reconciliation of the well-documented and reliable ontological and epistemological literature, it will provide another credible and truthful restatement of the age-old question: what is the absolute nature of human conscious existential reality? In so doing it will add another dimension to the ongoing discourse on the evolution of complexity, as it underscores the human species’ biopsychosocial survival imperative against natural destructive odds and the exclusively human capacity to transcend the primitive BPS mode to create the unique civilization no other advanced species can. The transcendental escape from the limiting 4-d Minkowski space-time biosphere into transfinite manifolds is an ongoing effort to search for credible and convincing answers to the elusive conundrum of human life and consciousness that survives across generations while other, better-adapted species become history.

Unfortunately, many intelligent and well-educated humans have not been able to escape and transcend the limiting scope of the exclusive BPS existential reality, and instead use their natural intellectual endowments to pursue the self-indulgent conveniences of power, wealth and control, many times at the expense of less fortunate citizens who are also entitled to be healthy, happy and cooperatively convivial so they can actively participate in the creation of our progressive civilization to the extent of their endowed capacities. Consequently, there is no doubt about the biological survival priority of preserving all life; that is the proper role of organized religions in the Judeo-ChrIslamic tradition and their equivalent belief systems, whether atheistic, agnostic or scientological. Once this neuro-humoral control of the emotional aspects of ongoing existence has properly created the psychosocial emotional experience of blind faith in those exposed to it, it then needs to add another dimension of belief for those with the interests, commitments, curiosity and abilities to carry it on, as documented in recorded-history accounts of the lives of the prophets. These prophets and their equivalents are the few who have carried the burden of preserving human lives against unfavorable odds across generations.

What institutions will produce the new prophets of the 21st century: the materialist cults, the radical extreme religionists, the Sartrean hedonic existentialists, or the updated religious institutions? The update consists in providing additional rational arguments so that those already enjoying the emotional faith may consolidate their belief. We have seen how, as a result of the technological information explosion, new generations have evolved free from the shackles of radical extremists in the Middle East and in our local midst. In a more sophisticated way, the technological information explosion has also generated the means of creating controls, greed, wealth and power for the exploitation of the ‘condemned of the earth’, as we witness the globalization of the economy and the monopolistic capitalist effort to control the means of production at the expense of others with fewer resources to survive the new technology demands, a new version of survival of the fittest at any cost. It may worsen as the traditional religious organizations continue their rituals, ignoring society’s unrelenting evolutionary path toward an exclusively materialistic interpretation of reality fueled by the information explosion. Adherents of the physicalist cults are now geared to grab attention and power using epistemological tools and unconvincing arguments suggesting that theoretical constructs of reality can create reality, like suggesting that predicates or attributes of objects/events such as shape, form or color can exist independently of the object/event that made them possible! The map representation created the territory being mapped!

I think the confusion with epistemology is that many times there is a deliberate attempt by some scholars to market their ideas with no concern for their probable truth content, so long as the sales pitch looks brainy and elegant; it’s mostly about self-indulgence. If everybody, anywhere, always, in a predictable and consistent way gets sick with the same signs and symptoms because of a traceable common-denominator experience, e.g., eating dirty fruit or never washing one’s hands before eating, then it is not necessary to always insist that the healer demonstrate a sensory-based identity of the offending bacteria/microorganism before relieving a patient of his ailment. One can cognitively posit the presence of offending microorganisms invisible to the naked eye based on the epistemological knowledge of the consistent and predictable consequences of their environmental physical presence. This is the easiest scenario for illustration purposes.
The confusion arises when dealing with very complex, dynamically evolving events. In this scenario the indirect knowledge of a causally efficient agency may properly substitute for the factual, phenomenological perceptual identification of the agent until the latter can at least be temporarily inferred from the obvious consequences it causes.
The confusion is compounded when the predicate consequences become ‘animated’ by their users and are made into a magical, mysterious reality independent of the physical causal agent that made them possible!!
Unfortunately, there will always be consistently experienced consequences that escape sensory description, or even explanatory epistemological inference, when linguistic precision is absent. This last scenario is the fertile soil that breeds the model poems of reality appealing only to the emotional component of our existential reality, which all religions properly encourage to keep people safely alive, feeling happy and socially/cooperatively convivial.

But we cannot leave out the rational component. For the sake of completeness, we need to integrate both the physical ontological-perceptual description and the metaphysical epistemological conceptualization of coexisting/interacting dynamic variables awaiting representation in a symbolic or sentential logic format that facilitates their analysis.

If we can distinguish the epistemological conceptual explanation from the ontological perceptual description of the same object/event, and see the need to consider both as a complementary unit whole, the empirical with the ideal, then it shouldn’t be so difficult to understand the details of the complex argumentation that follows.

To better appreciate the overwhelming complexity of an objective analysis, one should notice the number and quality of the physical ontological descriptions of the object/event as perceptually sensed, measured or observed, directly or indirectly with instruments, as distinguished from the metaphysical epistemological conceptualizations/explanations directly or indirectly derived from such a sense-phenomenal object or event. The conceptual representation may be expressed in symbolic or sentential logic for ease of analysis. We also need to distinguish the invariant from the variable parameters, dynamically interacting or not, as Dirac’s model genially conceptualized (see below). Equally relevant are clear distinctions between the inherited or acquired origin/source of the information content. As we discover more and more examples of the ability of microorganisms and chemicals to modify and causally influence the RNA/DNA transcription process, the less importance we should give this distinction between the inherited and the learned.
Last but not least in importance is the language syntax structure of the adopted language when reporting, in either subjective first or objective third person, a narrative account of the occurrence.
An almost exhaustive search for the simplest analytical tool capable of generating a credible model poem encompassing all the identified relevant complex variables at play suggested a combination of Cramer’s ‘transactional model’ and Dirac’s analytical approach, all published in numerous blogs and in many books sold by Amazon Books, Inc., including a “Treatise on Neurophilosophy of Consciousness, a BPS Model of Brain Dynamics” and 6 volumes of textbooks with similar titles. Most of this content can be found in our family domain site at .

We found it convenient to adopt and modify Dirac’s model to account for the space-time evolutionary path of a dynamically evolving supercomplexity we briefly discuss below as ‘speculations and conjectures’.
Perhaps the most important feature of this model approach is the distinction between the invariant features of the unit microscopic physical particle components as they interactively aggregate into the total macroscopic bulk, with characteristic measurable identifying attributions we describe as their predicates. We will stress the importance of model representations, which must derive from solid falsifiable inputs of information, as illustrated by the ‘twistor’ theory.

Argumentation.
The Transactional Interpretation of Quantum Mechanics (TIQM) still faces a number of valid challenges. We will briefly address some of the fundamental criticisms mentioned in the Introduction above. According to the original account (see Cramer 1986), the transactional interpretation (TI) explains a transaction as a four-vector standing wave whose endpoints are the emission and absorption events. Its usefulness has been challenged on the question of whether it is an actual 4-dimensional space-time process or just takes place on a level of possibility rather than actuality.
In my opinion, many learned scientists and philosopher scholars have neglected to consider that either interpretation is inevitably one human being’s linguistic narrative account to other human beings, with all the species’ cognitive and sensory limitations that this entails, as briefly exposed below.
Mathematical justifications and precision. In my personal opinion, the fit of the ‘offer waves’ (OW) to the Schrödinger equation, and its expansion to include negative domains by using complex numbers, makes it possible to posit the presence of ‘confirmation waves’ (CW) fitting the complex-conjugate Schrödinger equation. While I may agree that a transaction is a genuinely stochastic event, I disagree with the certainty that TIQM does not obey a deterministic equation. Stochastic events are so complex in nature that they give the human observer the appearance of being random until an instrumental measurement is performed (double-slit experiment). Experimental results based on completed transactions provide a reliable derivation of the Born rule rather than assuming it exclusively applies according to the standard Copenhagen Interpretation (CI) of quantum mechanics (QM). However, the big challenge of proving whether the transactional interpretation (TI) will ever be testable in the laboratory remains.
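The offer/confirmation pairing can be sketched numerically (a toy under assumed plane-wave parameters of my own choosing): the OW is a solution ψ of the Schrödinger equation, the CW is its complex conjugate ψ*, and their product ψψ* = |ψ|² is real and non-negative, which is exactly the Born probability density the paragraph says completed transactions derive.

```python
# Sketch: in TI, the offer wave (OW) fits the Schrodinger equation
# and the confirmation wave (CW) its complex conjugate. Their product
# OW * CW = psi * conj(psi) = |psi|^2 is real and non-negative --
# the Born probability density.
import cmath

k, x, t, omega = 1.5, 0.7, 0.2, 3.0      # assumed toy parameters
ow = cmath.exp(1j * (k * x - omega * t))  # offer wave (plane wave)
cw = ow.conjugate()                       # confirmation wave

density = ow * cw
print(round(density.real, 12), round(density.imag, 12))  # 1.0 0.0
```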
My short quip answer to the conundrum is to recognize the superiority of sense-phenomenal validation for ontological macro objects/events, such as those we experience if we walk inside the house blindfolded or, at a given moment, walk outside the house on a clear, starry night and observe the puzzling recursive cycles of predictable cosmological complex order that we humans cannot control or influence and that cannot spontaneously be generated and sustained, as also predicted by the entropy laws of nature. But just as convincing is the presence of those objects/events we all consistently, falsifiably and predictably experience that escape sensory detection, or sometimes even linguistic description. Are we justified in denying their vital presence in the existential reality of our environmental midst? Instead, we can always explain their occurrence rationally using symbolic or sentential logic representations and epistemological, mathematical logic tools, be they quantum logic and/or conditional/Bayesian probability calculus. The argument we have defended is that neither the physical phenomenological nor the metaphysical theoretical abstractions can exclude each other, because they constitute a complementary/functional unit whole that compensates for our human brain’s perceptual and conceptual deficits compared to other, better-adapted species sharing our biosphere environment.
While TIQM may not be currently testable in classical labs, that may be a consequence of our human limitations and should not be a deterrent to adopting it, because it has proven capable of generating new testable predictions on the basis of the probability of their occurrence, as witnessed by the unrelenting march of technological and other complex developments. Keep in mind that even our most intelligent, better-adapted chimp brothers and sisters can never accomplish that feat. In whose hands, then, are we leaving our civilization to develop as humans complete their life cycles? Do we have a responsibility toward future generations? Or do we live just to satisfy the biopsychosocial (BPS) imperative that we share with other evolved species? The TI is not an exact interpretation of what the classical QM Born rule and the Copenhagen Interpretation demand but, like the many-worlds interpretation (MWI), TI provides a logical physical map to follow, with all the formal symbolic/sentential logic representations as a compass guide into a possible evolving reality, as detailed in the Born rule. It is a probable road-map path to ideally explain that which the senses cannot describe, as complexity continues its unrelenting evolutionary progression into the future.
Just as ‘untestable’, it appears, is my proposed theoretical blueprint sub-model of a dark baryonic DNA/RNA receptor codon in neuron networks, meant to bridge the connection between unidentified coordinates in transfinite n-1 dimensional space-time and the human premotor neocortex attractor phase space. I hope it can be experimentally shown to control the reciprocal information transfer, as detailed elsewhere and briefly mentioned here. This has been my incomplete, partial reply to another challenge to the Copenhagen Interpretation: “Where in space-time does a transaction occur?” It holds so long as the brain-dynamics map representation is not conceived as causally efficient in generating the neocortical tissue territory it describes, as some radical theoretical exclusivists and religionists would have it, to fuel hidden political agendas. IMHO, any justifiable explanation that works and can keep humans alive, biopsychosocially happy and socially accepted, so that (s)he can continue participating (to the extent of their inherited or acquired resources) across generations in the creation of this wondrous civilization that no other species can, is welcome.
Another causal-efficiency issue has seriously challenged the consistency of TIQM, especially when combined with the supersymmetry requirements of ‘string theory’ in its ‘loop’ variation. The formal arguments are beyond the scope of this brief presentation, but a related explanation is available in ‘twistor’ theory (see below). But first, the foundations of TIQM as modified and adapted to our own BPS Model of Brain Dynamics in “Neurophilosophy of Consciousness.”
Non-locality Argument.
This is perhaps the most important and most controversial aspect of TIQM. We need an explanation for the consistent, mysterious measurement correlations between the properties of distant systems that remain operationally linked through non-local influences across space where no light signal can travel. The best-known example of the invisible link is provided by the famous Einstein-Podolsky-Rosen/Bohm (EPR/B) experiment, which strongly suggests it. In a nutshell, as seen in the diagram below, if particle spin pairs are separated and emitted in opposite directions from a source, they remain mysteriously entangled, as registered by spin-meter instruments capable of measuring spin components along various directions even when situated miles apart, as shown below.

My own explanation is framed within the context of my proposed transactional reciprocal codon receptor in neurons, linking/entangling/bridging a transfinite n-1 dimensional source and an acceptor brain site at the premotor neocortex. The two are functionally coupled and influence each other by spin coupling, as published in detail in “Neurophilosophy of Consciousness”, Volume IV, under “Speculations and Conjectures.” Chapter I argues for the “Reciprocal Transactional Information Transfer Neocortex Transfinite”, where ‘neocortex’ refers to the decision-making machinery in the premotor area of the human neocortex attractor phase space, whereas ‘transfinite’ refers to a set of undefined spatial-temporal coordinates outside our local Minkowski macro 4-dimensional sensory space, situated at n-1 dimensional transfinite space coordinates. It all boils down to a (metaphysical) explanation, and not an ontological (material physics) description, of the objects and events in the experimental data from the Bell experiments testing whether the Einstein-Podolsky-Rosen/Bohm (EPR/B) setup actually implies non-local events. The unexpected results suggested to me the possibility of an entanglement between the coordinates in the extrasensory, non-local space-time realm and our local premotor area of the brain, where probable solutions to ongoing problems are considered before their neuro-humorally mediated motor execution.
In my model the communication between a transmitter and a receiver is mediated via a receptor bridging a transfinite source of cosmic radiation and a brain premotor-area receiver, via a dark baryonic receptor transduction DNA/RNA codon. Cramer’s original model explaining entanglement between source and destination was rudimentary and involved neither the reciprocity feature nor a mediator receptor of dark baryonic matter when Cramer studied the olfactory system. I modified Cramer’s model by including reciprocal transmission and suggesting the dark baryonic codon mediator. This allowed me to extend the application beyond the olfactory system by generalizing the transfinite cosmic-radiation effects to general modifications in the translation/transduction processing of genetic codon information into altered enzyme production, without being detail-specific about any particular enzyme protein modification. The brain neuronal circuitry I proposed represents a cooperation between the slow-poke synaptic information transfer and the faster-than-light non-local entanglement, providing the means for the spontaneous spin-spin coupling and synchronization we experimentally measure when transmitter and receiver are separated by great distances.
The details of the non-local effect suggest that the particles have a random spin when they leave the source in opposite directions, and that the spin becomes definite only with the first spin measurement, implying that the outcome of this measurement is a matter of chance. IMO a stochastic progression need not be random; it is only more difficult to ascertain its location experimentally. Be that as it may, if, e.g., the first measurement is set for a z-spin measurement on the L-particle, the L-particle will register a spin either clockwise (spin up) or anti-clockwise (spin down) about the z-axis (perpendicular to the 2-dimensional x-y plane), with an equal chance of each, producing an instantaneous change in the spin orientation of the distant R-particle by the non-local spin coupling we call entanglement. The standard explanation is a ‘collapse’ of a traveling Schrödinger wave function. In my view, the experimenter causes the collapse of a particle being carried by a wave (de Broglie’s ‘wavicle’), wherever that particle may be. Particles generate waves; waves never generate particles in our human experience!
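The quantum-mechanical statistics behind these spin-pair correlations, E(a, b) = −cos(a − b) for a singlet pair measured along angles a and b, can be sketched numerically. This is an illustrative simulation of the standard QM prediction only, not a test of any interpretation; the function names, angles, and sample size are my own choices:

```python
import math
import random

def singlet_pair_outcomes(theta_a, theta_b, rng):
    """Sample one (+/-1, +/-1) outcome pair for spin measurements along
    angles theta_a and theta_b on a spin-singlet pair, using the
    standard QM joint probabilities."""
    delta = theta_a - theta_b
    a = rng.choice([+1, -1])                  # first outcome is 50/50
    # QM: P(opposite outcomes) = cos^2(delta/2), P(same) = sin^2(delta/2)
    if rng.random() < math.cos(delta / 2) ** 2:
        b = -a                                # anti-correlated outcome
    else:
        b = a
    return a, b

def correlation(theta_a, theta_b, n=100_000, seed=1):
    """Monte-Carlo estimate of E(a, b) = <A*B> over n pairs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        a, b = singlet_pair_outcomes(theta_a, theta_b, rng)
        total += a * b
    return total / n

# QM predicts E(a, b) = -cos(theta_a - theta_b):
print(correlation(0.0, 0.0))         # -1.0 (perfect anti-correlation)
print(correlation(0.0, math.pi / 2)) # close to 0
```

Same-axis measurements always give opposite results, which is the “instantaneous” anti-correlation the text describes; the cosine dependence at intermediate angles is what Bell-type experiments actually measure.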
A proper human understanding of TIQM requires a special, self-directed introspective distinction between the “I” observer and the ‘other’ object/event being observed. This effort allows you to adopt the proper perspective (framework of reference) before your analysis. I feel there are too many brilliant people either paying too much attention to a branch of the tree and losing perspective of the forest, or the reverse, generalizing too much about the forest ecosystem and not being very detailed about the specifics of a particular branch of a given tree. Or none of the above, for the known listing trollers who joke or demonize with ‘ad hominems’ instead. 🙂 That distinction will help in understanding better what a ‘wave function collapse’ is. IMHO it was all an attempt to explain the mystery of the Einsteinian ‘spooky actions at a distance’, or non-locality requiring superluminal speeds. I call it a mystery because the Bell result follows from falsifiable, consistent, probability-filtered measurements but lacks solid ontological/phenomenological underpinnings, and is thereby not amenable to experimental testing of its truth-value certainty. But, sure as hell, it is valid for understanding spin-spin and matter-antimatter coupling and the instantaneous phase synchronization between particles separated by distances measured in cosmological units. Here is the way I explain to myself how the whole panoply of facts and fancy about TIQM comes together, in a nutshell:
I will start with a reply to a well-known theoretical mathematician/physicist who still challenges the necessary requirement of identifying at least the probable location of a causally efficient force responsible for the consequent results he can only formulate in the symbolic or sentential metaphysical abstractions he swears by as an exclusive, necessary and sufficient explanation. Here was my reply to a discussion of the relation of ‘twistors’ to TIQM theory, when I challenged the necessary assumption of a random spin orientation of the particles when leaving the source in opposite directions on a stochastic/chaotic travel (refer to the figure above):
“Unless I am overlooking something important here, the causally efficient force (f = ma) is accelerating a physical mass spin particle being carried in a wave (de Broglie’s ‘wavicle’) across a magnetic field. Conservation-of-energy principles drive the particle (photon?) along the least-resistance path as a spiraling ‘twistor’, to minimize the anticipated frictional resistance of a linear path. Light cannot exist without a source (the particle photon, or an unidentified outside source such as cosmic radiation from transfinite sources?). According to the transactional model (TIQM) the photon continues the twistor path below and above the light cone, while the light enters the ‘twistor’ (left → right) and exits the ‘twistor’ in the opposite direction (right → left) soon after the spiral completes a 90-degree rotation. IMO, what is important to notice here is that the accelerated massive particle remains entangled with the carrier wave (‘spooky’ non-local action at a distance), now in a negative domain after completing a full rotation, even though the spin particles left the source in opposite directions. This leads me to suspect that the photon particle itself is the source of the ‘light radiation’, perhaps through a radioactive(?) nuclide decomposition. Light needs a material particle to produce it, like other predicates of matter (shape, form, color, etc.).”
Why Epistemology. Finally, another argument about the necessary but insufficient exclusive consideration of the phenomenology-based Copenhagen Interpretation (CI) of human existential reality, as opposed to the modified TIQM BPS model, which also incorporates the metaphysical aspects of our real-time, ongoing human experience into a unit ‘epistemontological’ hybrid wholeness rooted in measurable human brain dynamics. This comes about because, as it seems, our lab researchers and armchair theoreticians have neglected the obvious fact that ours is a human story of our lives, narrated in our adopted language, with all the implications of our human brain limitations for the cognitive epistemological explanation of those vital aspects of species survival that we cannot describe adequately on a phenomenological/ontological basis.
As happens many times, there are always many consistent, invisible objects/events present in nature that escape even linguistic explanation, where no distinction is possible between the state of a natural object/event and what I cognitively know or could conceivably know about it, because experientially there is only an awareness of its consistent and demonstrable presence through its causally linked effects or the qualia experienced. This is intrinsically the case for the distant n-1 dimensional cosmological macro objects/events, or for the 4-dimensional micro objects/events such as atoms and electrons, quarks and strings, whose effects we can only indirectly measure or consistently observe and then represent as metaphysical logic symbols, to epistemologically substitute for the more reliable but absent phenomenology. Enter TIQM, to reconcile the fundamentally conflicting positions of the phenomenal realism of the seen and the unquestionable presence of the unseen, as developed in the unmodified Cramer transactional model-poems, with the necessary but incomplete physicalist positivism of the Copenhagen Interpretation.
Once we become aware of our human species’ comparative limitations in sensory and brain combinatorial resolution of reality, it leaves open the question of whether it could be possible to have an absolute macro description of phenomenal reality as a fact. We can detail similar arguments for our conceptual explanations of physical reality below the threshold of human sensory resolution. We believe, however, that the two analytical approaches can compensate for each other’s limitations and that a hybrid Epistemontological theory is possible.

Summary and Conclusions.

The Unexpected Transition from Idealism to Realism.
The information explosion we have witnessed in the last two decades has unexpectedly accelerated the relentless forward evolutionary process of complexity, as experienced and narrated in human language accounts and communications. There is an overwhelming amount of verifiable evidence sustaining the ‘real’, demonstrable fact that only the human brain, and that of our primate relatives, has the ability to pay attention to objects/events in the audio-visual scene without always looking or listening at them directly. This is done by recreating an internal map of the previously sense-phenomenal world we experience, by mapping our sensory field onto specific brain cells. The mapping includes local and non-local verifiable observables. This is the existentially real case whether or not the ‘wave’ or ‘field’ particle carrier we conveniently derive as an ideal notion fits the previous consistent, falsifiable experience. Thus, the local quantum physics interpretation implies being bounded within a finite space-time region where an observable object/event exhibits a ‘real’ behavior conditioned by the relevant environmental circumstances properly belonging to the space-time region itself. This is the classical Copenhagen Interpretation (CI). It is along these lines that the algebraic approach focuses on ‘real’ physical local and ‘ideal’ metaphysical non-local representations (symbolic or sentential logic) on a probability basis, emphasizing that the notion of a field or wave is only a convenient derivative of the ‘real’ local actual measurements that preceded the ‘ideal’ non-local explanation.
The competing model-poem that ‘ideally’ explains the same ‘real’ phenomenological description of the object/event sensory reality that preceded it is called the Transactional Interpretation of Quantum Mechanics (TIQM). The advantage of the transactional interpretation is, in my opinion, that it incorporates probable interpretations of ongoing verifiable existential experiences that are irreducible to linguistic coding in symbolic or sentential logic representations. Its emphasis on sense-phenomenal empirical ‘reality’ descriptions is more reliable, especially when the Transactional Interpretation (TI) predictions are confirmed. Consequently, Lagrangian Quantum Field Theory (QFT) is our most empirically well-confirmed physical theory, one where the ‘ideal’ explanation of the metaphysical component and the ‘real’ empirical object/event descriptions harmonize. The reliance on verifiable sensory facts excels in the expediency of calculations and their intuitive understanding, because it is closer to the phenomenological experimental manipulation in the physics lab. That makes the derived metaphysical ‘ideal’ model-poem more credible when applying the theory to make predictions. Any time a pragmatic empirical ‘reality’ description of the occurrence of an object/event and an ‘ideal’ metaphysical-logic sophistication explanation (e.g., that there is an isomorphic mapping of the elements of a C*-algebra into the set of bounded operators on a Hilbert space) lead to the same existential-reality conclusion, pragmatic ‘realism’ trumps mathematical rigor due to the resulting simplicity, efficiency, and ease of understanding.
When TIQM is mathematically modified further to incorporate the speculative probability of identifying the n-1 dimension space coordinates of probable sources of cosmological information input located beyond our local 4-dimensional space-time, the TIQM becomes a superior ‘ideal’ model when also explaining non-local sources of information input (e.g., measurable cosmic radiation) causally efficient in influencing the human evolving complex ‘reality’.
The TIQM model’s superiority is best illustrated by the Dirac notation, an empirically based ‘ideal’ representation where the ‘entanglement’ coupling is based on the complementary pairing/coupling of opposite-spin unit particulate matter. The mechanism needs to be at the subatomic micro level, but two macromolecules like DNA or RNA can be functionally coupled iff their micro components are spin-coupled first. That way, local or ‘non-local transfinite information’ input can gain empirical ‘de facto’ codon control of the transduction/translation genetic genotype machinery, resulting in verifiable empirical phenotype results, observed in experimental labs as an induced rearrangement of the corresponding polynucleotide helical structure and the subsequent functional enzyme production.
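For reference, the opposite-spin pairing invoked here is standardly written in Dirac notation as the spin-singlet state of the L- and R-particles:

```latex
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(\,|{\uparrow}\rangle_{L}\,|{\downarrow}\rangle_{R} \;-\; |{\downarrow}\rangle_{L}\,|{\uparrow}\rangle_{R}\Bigr)
```

Measured along any common axis, the two outcomes are always opposite, which is the ‘complementary pairing’ of the text; and the state cannot be factored into separate L and R states, which is what ‘entangled’ means formally.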
It should be noted that the sophisticated, mathematically axiomatic (logic) ‘ideal’ representation of the verifiable ‘real’ fact, the observation of an object/event in a derived wave or field conveyance, regards the conveyance conceptualization as the fundamental notion for no convincing reason other than its symbolic/sentential representational elegance. The ‘ideal’, elegant map sophistication has unjustifiably become more important than the ‘real’ empirical territory it observes! Thus, the Wightman axiomatic quantum field theory (QFT) is thereby deemed superior to the algebraic QFT, even when both are abstract explanations of an ‘ideal’ field with infinite degrees of freedom for sub-atomic quantum particles that appear in special circumstances.
As noted earlier, the less mathematically elaborate algebraic QFT abstraction originated from observables in the measurable local and the probable non-local environments, whereas the more mathematically sophisticated axiomatic approach is limited to a conceptual elaboration of the field, a derived carrier-model notion from local quantum physics. Furthermore, in the classical local quantum physics interpretation an observable is regarded as a property belonging to the space-time region itself; e.g., Higgs bosons ‘creating’ something out of a nothing vacuum? Is this a new mathematical ‘Genesis’, criticizing the ‘Delta Function’ as improper and laden with self-contradictions, as von Neumann opined?
Fortunately, as it turns out, von Neumann was the proponent of a new ‘ideal’ framework based on Hilbert’s theory of operators and Dirac was the proponent of a ‘real’ framework of reference amenable to the rigors of testing of the local phenomenal events in the biophysical chemistry labs (e.g., optogenetic testing to cover the micro invisibilities) and the non-local events in the astronomy observatories covering the cosmological invisibilities.
The Dirac delta function is limited in scope: defined over the ‘real’ line, it is zero everywhere except for one point at which it is infinite, and it yields unity when integrated over the real line. Von Neumann promoted an alternative framework, which he characterized as being “just as clear and unified, but without mathematical objections.” He emphasized that his framework is not merely a refinement of Dirac’s; rather, it is a radically different framework based on Hilbert’s theory of operators. When objectively analyzed, both arguments have their own merits, but either would be incomplete if it claimed exclusivity. If we had to choose only one, it is clear that when pragmatics and rigor lead to the same conclusion then, as I said above, pragmatics trumps rigor due to the resulting simplicity, efficiency, and increase in understanding made possible. Most important, however, is that it allows for unexpected new environmental circumstances as they get empirically detected. In other words, the TIQM approach adopts the pragmatic orientation of the Lagrangian QFT (based on perturbation theory, Feynman’s path integrals and renormalization techniques). The “axiomatic” QFT refers specifically to the ‘ideal’ component of existential reality, based on operator-valued mathematical distributions.
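The defining (distributional) properties just described can be written compactly; the third identity, the sifting property, is what makes the delta useful despite its not being a function in the classical sense:

```latex
\delta(x) = 0 \quad (x \neq 0), \qquad
\int_{-\infty}^{\infty} \delta(x)\,dx = 1, \qquad
\int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a)
```

Von Neumann’s objection is precisely that no ordinary function can satisfy the first two conditions simultaneously; Laurent Schwartz’s later theory of distributions made Dirac’s usage rigorous.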
The undersigned is not that familiar with the Weinberg ‘real’ pragmatic formulation, which allegedly zeroes in on human physical intuition and provides heuristics that are important when performing calculations; however, the mathematical theorists do not consider it mathematically rigorous enough, and they pay little attention to the fact that their own proposed mathematical structure does not provide any techniques for connecting with experimentally determined quantities. It is clear that these two approaches to QFT, the rigorous axiomatic and the Lagrangian pragmatic, are rival research programs. I think we can get the best of both propositions in a way that harmonizes with ongoing complex phenomenal reality as it evolves, and I satisfy my curiosity as to their philosophical foundations. Then, because of my neurophilosophical interest in using the best available strategic tool when analyzing the mysterious struggle of our human species’ trans-generational survival against the odds, I compare the competing mathematical strategies and wonder why one would use the infinitesimals of classical quantum physics when one can use n-1 dimensional transfinite parameters, more meaningful in the analysis of current existential reality.
Who, when leisurely meditating on the issues of life and human consciousness upon retirement, has not immediately reckoned with the relevance of particulate brain matter in reciprocal motion inside and outside a physical brain, and of the force(s) fueling the massive unit particles to exhibit dynamic reciprocal motion according to the well-established laws of physics? It was all about analytically speculating how such motion, carrying the invisible micro unit-dimensional particle in a putative field (electric, magnetic or both) or wave conveyance, can best be explained using the available mathematical-logic metaphysics tools. Dirac’s pragmatic approach proposed the equivalence of matrix mechanics and wave or field mechanics by using the Delta Function, an improvement on the use of Hilbert space that incorporates scalar metrics definable in terms of a scalar’s complex conjugate (a coupling/conjugation of a ‘real’ number + an ‘ideal’ imaginary number). That stroke of genius makes it possible to analogize such coupling with the mysterious experimental non-local coupling of opposite-spin particles (Φ is the topological anti-dual of Φx) at a distance, after the source fired them in opposite space-time directions. The Bell-type (EPR) experiments show their remaining entangled connectedness even when miles apart. The particles travel stochastically in opposite directions until one of them ‘randomly’ selects one of the various spin orientations available in the instrument, after which the other particle mysteriously selects only the opposite, complementary orientation, a ‘spooky’ action at a distance, as relativity regards the result. I personally think they remained coupled from the moment they left the source. Our human brain’s limits in both perceptual and conceptual resolution capacities make my speculation impossible to measure, but neither is it justified to assume their original randomness.
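The ‘coupling of a real and an imaginary number’ referred to here is the complex-conjugate pairing built into the Hilbert-space scalar product, sketched for wave functions φ and ψ:

```latex
\langle \varphi \mid \psi \rangle \;=\; \int_{-\infty}^{\infty} \varphi^{*}(x)\,\psi(x)\,dx,
\qquad
\langle \varphi \mid \psi \rangle \;=\; \langle \psi \mid \varphi \rangle^{*}
```

The conjugation on the left factor is what guarantees that ⟨ψ|ψ⟩ is a real, non-negative number interpretable as a probability, the formal counterpart of the bra/ket (and, in the author’s analogy, OW/CW) pairing.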
How else can we simultaneously measure the position of a unit mass particle ‘m’ being accelerated (‘a’) by a causally efficient force ‘f’ according to f = ma, when the ‘ideal’ operators have no eigenvalues or eigenvectors? Here is the opportunity to calibrate the adequacy of competing mathematical-physics ‘ideal’ algebras: one based on ‘real’ experimental observables related to bounded space-time locations, like the finite double cone where light traveling in opposite directions intersects, or an ‘ideal’ algebra based on ‘real’ relativistic QFT interpretations? You be the judge: stay with the ‘ideal’ axiomatic version about how things should be (à la von Neumann), or transition to the ‘reality’-based version about how things probably are or predictably will be (à la Dirac), all things existentially relevant to human beings’ reality being considered. Dirac’s Hilbert space assigns generalized eigenfunctions to the unit-particle mass ‘m’ position and its instantaneous velocity ‘v’, from measurable p = mv momentum operators, resulting in the nuclear spectral theorem where Φ and Φx remain connected, as mathematically derived by an algebraic QFT of observables in 4-dimensional Minkowski space-time.
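The ‘generalized eigenfunctions’ invoked here are the standard rigged-Hilbert-space objects of the Gelfand triple Φ ⊂ H ⊂ Φx; e.g., for the momentum operator on the line:

```latex
\hat{p} = -\,i\hbar\,\frac{d}{dx}, \qquad
\hat{p}\,\psi_{p}(x) = p\,\psi_{p}(x), \qquad
\psi_{p}(x) = \frac{1}{\sqrt{2\pi\hbar}}\,e^{\,ipx/\hbar}
```

ψ_p is not square-integrable, so it lives in the anti-dual Φx rather than in the Hilbert space H itself; the nuclear spectral theorem is what licenses expanding physical states over these improper eigenfunctions.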
It is argued by professional mathematical-physics theorists that by conditioning the mathematical operations to conform to the standard ‘real’ locality axioms (e.g., isotony, locality, covariance, additivity, positive spectrum), Dirac’s theory can be extended to reach the cosmological invisibilities of the transfinite space-time beyond. For the reasons stated above about the human species’ perceptual and conceptual brain-resolution limitations, I am not including the unjustifiable assumption of a mysterious and unique invariant vacuum state. This is the basis on which I am still working on an all-encompassing TOE model-poem of human existential reality incorporating TIQM and proposing the measurable details for the corroboration of a ‘Transactional dark baryonic reciprocal receptor DNA/RNA mediating information transfer between unidentified transfinite space-time coordinates and a premotor cortical acceptor in neocortical attractor phase space’, as published.

Along the same lines as the ‘real’ versus ‘ideal’ justifiable arguments, we now add additional evidence on behalf of the anticipated eventual transition from the abstract ‘ideal’ to the measurable ‘real’ paradigms explaining existential reality:

The report “Breakthrough Study Reveals Biological Basis for Sensory Processing Disorders in Kids” constitutes a general explanation of the specific sensory processing disorders we clinically see in autism and attention deficit hyperactivity disorder (ADHD). It also underlines the importance of the human brain’s right hemisphere as the first point of impact for sensory input from ‘real’ environmental (external or body-proper) sources of new or familiar information, before an adaptive response is either implemented subconsciously and reflexly or issued only after further analytical processing, especially of unfamiliar sensory inputs. The master-control nerve networks of the frontal brain neocortex continually process information input arising from the right hemisphere’s initial effort to coordinate ongoing activity from multiple sources, such as left temporal and parietal lobe multisensory input and the left frontal language-processing Broca’s area. The frontal neocortex is continually apprised of the ongoing ‘real’ environmental biopsychosocial circumstances, from passive meditation to active social partying. When new or familiar input is received at the right hemisphere, the master frontal neocortex performs an analytical sorting of the available response alternatives present in the neocortical premotor attractor phase space. This is subsequently followed by a conscious activation of the adaptive motor-choice neurohumoral response, all things considered. What is important to notice is how the brain’s ‘real’-time synaptic processing precedes in time the ‘ideal’ adaptive motor response, especially when new/unfamiliar sense-phenomenal input arrives in the right brain hemisphere. All of this complex analytical sorting immediately follows a subconscious reflex motor response released for the overall priority of biological preservation of the human species, as previously published and discussed by me in various HiQ listings and fora.
This detailed explanation hopefully supports my biased view about the importance of the ‘real’ preceding the ‘ideal’ solution.

References:
Maudlin, T. (1996, 2002). Argues that TI is inconsistent.
Berkovitz, J. (2002). “On Causal Loops in the Quantum Realm,” in T. Placek and J. Butterfield (Ed.), Proceedings of the NATO Advanced Research Workshop on Modality, Probability and Bell’s Theorems, Kluwer, 233-255.
Cramer J. G. (2005). “The Quantum Handshake: A Review of the Transactional Interpretation of Quantum Mechanics,” presented at “Time-Symmetry in Quantum Mechanics” Conference, Sydney, Australia, July 23, 2005. Available at: http://faculty.washington.edu/jcramer/PowerPoint/Sydney_20050723_a.ppt
Kastner, R. E. (2006). “Cramer’s Transactional Interpretation and Causal Loop Problems,” Synthese 150, 1-14.
Marchildon, L. (2006). “Causal Loops and Collapse in the Transactional Interpretation of Quantum Mechanics,” Physics Essays 19, 422.
Daniel F. Styer, Miranda S. Balkin, Kathryn M. Becker, Matthew R. Burns, Christopher E. Dudley, Scott T. Forth, Jeremy S. Gaumer, Mark A. Kramer, David C. Oertel, Leonard H. Park, Marie T. Rinkoski, Clait T. Smith and Timothy D. Wotherspoon (2002) “Nine formulations of quantum mechanics,” American Journal of Physics 70, 288-297.
Family Domain site: http://delaSierra-Sheffer.net
Blog site: https://angelldls.wordpress.com/;
Books published: ;

Dr. Angell O. de la Sierra, Esq Deltona, Florida July 2013

Posted in Neurophilosophy of Consciousness

International Critique on BPS Brain Dynamics Model

Book published by Dr. Vernon Neppe, Director of the Pacific Neuropsychiatric Institute.

De La Sierra: Neurophilosophy of Consciousness
Dr. Angell de la Sierra has produced a number of innovative, creative writings on research in Quantum Brain Dynamics, Process Philosophy, attractor hypotheses, psychosociocultural perspectivism, mental images, spirituality, and brain sinks. This is based on the works of John Emlen and Walter Freeman. The conceptualization is complex and does not fit the fabric of a theory of everything, but we pay homage to a great thinker here.
We illustrate the complexity here: “We can no longer say that the past has been but is no longer, while the future will come to be but is not yet.” 477
“From the many sense-phenomenal objects and/or events in our immediate environment (including memories) only a limited number of steady states of discrete, individualized neuronal patterns (attractor basins) are set up to respond exclusively to particular stimuli in the future. These would activate a particular set of bulbar neurons acting as a relay switch to a corresponding attractor basin uniquely coupled to different memory, emotional and physiological pattern of responses (mental state). When these signals were analyzed on the oscilloscope screen they were found to resemble chaotic systems with ‘attractor basins’.”
Once it was experimentally documented that there is a probabilistic nature to brain dynamics, he concludes: “we are forced to consider not just the fleeting moment we call present, the ‘being’, as it evolves or ‘becomes’ past in transit into a potential future, but also to predict with variable degrees of certainty its evolution into that future, the ‘becoming’ we may control: and free-will to choose from available ‘futures scenarios’. In so doing we acknowledge an involuntary shift away from the reductionist physical approach into the metaphysical ‘emergence’ realm of ‘process’ philosophy.” 233
This thinking derives in part from Emlen 478, who points out the difficulty evolution has in explaining day-to-day changes, including cerebral plasticity, an emergent phenomenon they call the “attractor hypothesis” 478. De La Sierra later built on this hypothesis in his model, which effectively limits brain sinks that concentrate information. Moreover, Meyer cogently points out the profound complexity of the cell and of DNA, and the need for certain biophysical elements to be complete for them to work. This is a very potent argument against evolution without meaning.

479 Again, this supports the model of meaning at every level of science and origin within TDVP.
Whiteman’s Philosophy of Space and Time

Posted in Neurophilosophy of Consciousness

Dirac’s Vector Analysis Update.

Introduction.

Anyone who has ever stood up in front of a classroom will tell you that simplicity is a worthwhile pragmatic and theoretical goal if, and only if, the expected pedagogic results are aimed at the student and not the teacher, independent of the level of complexity to be communicated. There is a presumption that ‘selling/marketing’ an idea by a professor implies there must be a ‘buyer’ student for the pedagogical transaction to be completed. Unless, of course, the professor, consciously or not, is engaged in a self-serving soliloquy justified as primitive, ‘self-evident’ propositions, often expressed as either theology or probable/statistical science-inspired radical extremist pronouncements. Yet a complex and changing nature, in its dynamic evolutionary progression in our 4-d space-time existential reality, opts to reveal its complexity to human narrators in the form of the simplest possible model-poem solutions compatible with the narrators’ brain dynamics phenomenology and combinatorial limitations, as amply detailed in our other publications. We now expand on the justifications for our general poem on the evolution of complexity as discussed under “The Immanent Invariant and the Transcendental Transforming Horizons.” See Ch. 12, “Neurophilosophy of Consciousness,” Vols. IV and V.
Argumentation.
If we have expectations of our biopsychosocial (BPS) model poem of human brain dynamics ever evolving into a reliable theory of everything (TOE), it must satisfy some minimum requirements, detailed below. The most important requirement is that the model’s principles must be rationally/logically justified as a general/universal application to any aspect of human enquiry, whether its content is exclusively epistemological idealism or an exclusively pragmatic, methodological and empirical type. The model approach can also take the form of a hybrid combination of idealism and empiricism, like our own Epistemontological hybrid tracking this super-complex reality as it evolves in our 4-d space-time biosphere niche. We hope that our consciously free-willed choice of simplified analytic mathematical elaboration is readable and reaches the curiosity of informed readers in any discipline. In the search for an adequate universal mathematical formulation we had to justify the need for ‘a priori’ metaphysical elements (including philosophy, theosophy, mathematics, etc.) and the need for ‘a posteriori’ pragmatic/physical elements as they emerge from scientific-methodology measurements/observations of nature. It is due time to rationally integrate pragmatic empiricism and rational idealism as a functional unit whole in living human reality, as justified by consistent, falsifiable and predictable results from quantum-theory-based probability theory and/or Bayesian conditional statistics, including theosophically justified speculations and conjectures as exemplified by the famous Leibniz Monadology or our own sub-model arguments for the probability of a ‘transactional reciprocal information transfer’ between the human pre-motor neocortex attractor phase space and unspecific space-time n-1 coordinates in transfinity, as mediated by the human brain baryonic dark matter DNA/RNA receptor site.

Our central focus on the human biopsychosocial (BPS) existential-reality equilibrium, which adapts our species to familiar (or new) contingencies presenting a potential threat to human survival, has led us to seriously consider the following: how, notwithstanding the human species’ inferior adaptive limitations in a changing environmental landscape (compared to other evolved subhuman species), has it survived across generations while performing the wondrous evolutionary, technological and societal transformations other species are innately incapable of? What keeps the human species at the helm of the Bergsonian evolution of complexity, above the mere BPS survival other evolved subhuman species also share?
Intuitively, the easy answer is to search for entities outside/beyond our immediate 4-d space-time earth-biosphere environment that specifically/selectively influence the human species. The same intuition, always looking for simplicity, makes you posit the probability that before being the architect responsible for the wondrous transformation, the human species must first be alive, healthy and psychosocially adapted, in equilibrium with its immediate, individualized environment. It makes intuitive sense to suspect that such a transfinite source should first be able to functionally equip humans with the resources to offset their inferior adaptive capabilities in the biosphere milieu and, second, ‘create’ the supersymmetric transfinite conditions that minimize the probability of disruptive transfinite radiation affecting our biosphere. Part of that radiation may exclusively reach the human premotor neocortex to anticipate damaging radiation events.

The careful reader may have noticed the posited existence of two unreachable infinities at play: the micro sub-Planckian manifold actively controlling local events and the cosmological manifold controlling transfinity. We immediately wonder what resources our poorly adapted human species, with its known phenomenal and brain combinatorial limitations, may mobilize to stay alive and become the architect of this wondrous civilization. Can anyone imagine better-adapted subhuman species like Rhesus monkeys, ants, roaches, etc., creating optogenetic and gene-transplant technology controlling DNA/RNA transduction, the same way transfinite radiation does? How else may any human being explain, if not describe, the cosmological order being influenced by natural complex asymmetries evolving to become the supersymmetries that minimize universal damaging radiation impacting the earth and facilitate the reciprocal information-transfer mechanism between humans and transfinity sources? We need a model-poem formula that operates both at the local mesoscopic biosphere level and at the cosmological-order level and that is compatible with the structural/functional idiosyncrasies of a human brain.
We have climbed on the shoulders of Dirac and others to formulate probable approximations compatible with all disciplines created by the same human physical brain. What follows is a brief summary of the salient technical features, still undergoing revisions with the joint ‘help’ of the good-willed informed literati and the ill-willed vicious trolls that plague some HiQ online listings.

For details on these formulations see the blog site https://angelldls.wordpress.com/ and http://delaSierra-Sheffer.net. For the present needs of this brief article on the merits of a modified Dirac notation, we only highlight the possible interactive correlations between the local and transfinite sources of information and the need for functional approximations requiring a minimization of relevant interacting variables at both extremes of the spectrum: the phenomenologically invisible levels of activity at the sub-Planckian local level and at the cosmological level. At the micro level our sub-model requires the attainment of supersymmetry to posit the presence of monopoles and gravitons to facilitate the influence of low-intensity transfinite cosmic radiation on the local genetic transduction process of the brain-neuron target cell via a dark matter baryon receptor. The ongoing debate centers on which subatomic micro particle is involved in the information transfer (the neutrino, the axion, etc.) and where it originates. We concentrate on the measurable how and give a lower priority to the theosophical why.
The charge-free ‘neutrino’ is the candidate of choice to penetrate unopposed miles deep through geological barriers to reach the buried instrument receptors at CERN. Stable hydrogen H1 (spin-1) atoms are known to spontaneously emit spin-½ electrons during the beta-decay process to a suitable acceptor of opposite −½ spin, leaving the originally spin-1 source spin-½ deficient, as measured. In our model this naturally decaying or radiation-induced hydrogen atom can be on either side of the reciprocal communication path: the human brain cell or a transfinity source of radiation. It is assumed that path direction is a function of the need to ensure the availability of a BPS survival-adaptive response to challenging environmental contingencies. All experiments confirm the same fractional deficit attending radioactive degradation. That, as a result of nucleosynthesis activity immediately after the Big Bang, dark baryonic particle radiation found its way into cellular DNA/RNA is not as far-fetched as it seems to armchair idealist theorists, who rather prefer the charming argument of the ‘massless’ physical particle to justify the deep penetration of the particulate matter. Somehow the ‘massless’ particle has to be charge-neutral, and only micro-gravitational forces in the form of magnetic monopoles will do to avoid the dipolar nature of electromagnetically (EM) induced fields. A ‘massless’ particulate matter inducing EM fields?
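The spin and charge bookkeeping invoked above can be made concrete with the textbook beta-decay process (a neutron decaying into a proton, an electron and an antineutrino). The quantum numbers below are standard values; the mapping onto the hydrogen-relay scheme of our model remains a speculation, and the helper names are our own illustrative choices:

```python
# Illustrative conservation bookkeeping for beta decay (n -> p + e- + anti-nu_e).
# A sketch with standard quantum numbers, not a simulation of any decay dynamics.

from fractions import Fraction

# (electric charge, lepton number, spin) for each participant
PARTICLES = {
    "neutron":      (0,  0, Fraction(1, 2)),
    "proton":       (1,  0, Fraction(1, 2)),
    "electron":     (-1, 1, Fraction(1, 2)),
    "antineutrino": (0, -1, Fraction(1, 2)),
}

def conserved(initial, final):
    """Check that total charge and lepton number balance across the decay."""
    def totals(names):
        charge = sum(PARTICLES[n][0] for n in names)
        lepton = sum(PARTICLES[n][1] for n in names)
        return charge, lepton
    return totals(initial) == totals(final)

# The full decay balances; dropping the antineutrino would violate lepton number.
print(conserved(["neutron"], ["proton", "electron", "antineutrino"]))  # True
print(conserved(["neutron"], ["proton", "electron"]))                  # False
```

The antineutrino line is precisely why a charge-free, barely interacting particle had to be posited in the first place: without it the ledger does not balance.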

Enter gravitons and Dirac, who questioned what good theoretical reason explains why the unobserved/undetected monopoles could not exist within a quantum-theory framework. A mathematical-logic explanation made more overall sense in positing the existence of monopoles than their absence. It is clear that a consistent, falsifiable observation or a physics-laboratory experiment should not necessarily always be considered a check on the necessary and sufficient proof of its truth content. In our opinion the same incompleteness applies to the conceptual mathematical correctness of the symbolic or sentential representations of the current Dirac-equation solution. However, when both the physical empirical measurement of a consistent/falsifiable effect and a metaphysical logic are integrated, the confirmation of the electron’s physical mass, graviton or monopole structure becomes a goal whose detection in the electron-physics lab is still beyond reach, if ever attainable; but at least its prediction on a probability basis is the next best option.

The best way to explain the temporal evolution of the multidimensional complexity of any quantum state in a linear scalar progression (like the way our brain linearizes sensory information input) is to update the Schrödinger and Hamiltonian space into a Hilbert space that may take into account any vector-ray projection path direction. This mathematical combination allows for the differential representation of multiple relevant variable paths interacting among themselves as one single resultant package. Each of the participating quantum states can be represented in the standard ‘bracket’ notation invented by Paul Dirac, consisting of a left part, called the bra, and a right part, called the ket. Effective use requires minimizing the number of relevant dependent variables by approximations as they affect the phenomenological perceptualization of an independent invariant unit-dimensional physical particle or aggregations thereof. In a TOE model any conserved value (matter, momentum, etc.) will do as the invariant. It should be remembered that the progression range of all N particles’ (N a positive integer) individual 3-d x, y, z dependent-variable inputs, as they project into a Hilbert space of 6N real dimensions, gets reduced to a single-valued function output. Quantum theory integrates the 3-d configuration space plus the 6-d Hilbert space into a 9-d space that may now ‘represent’ the classical brain phase space. This way each projective ray path represents a frame-independent Schrödinger wave function where the operator defines the appropriate frame of reference. If one path harmonic ray rotates, stretches, etc., they all do (at right angles to each other, i.e., orthogonal), except when several points cohabit and interact in the same 2-d plane (not the x, y, z line axis!), where each point’s coordinates (each eigenvalue: momentum, position, spin, etc., defining its probability amplitude) may have conflicting physical existence.
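The bra-ket arithmetic described above can be sketched in a few lines of plain Python, representing kets as lists of complex amplitudes; the helper names (`braket`, `norm`) are our own illustrative choices, not a standard API:

```python
# Minimal sketch of Dirac bra-ket arithmetic: a bra is the conjugate of a ket,
# and <phi|psi> is the inner product of the two amplitude vectors.

import math

def braket(phi, psi):
    """Inner product <phi|psi>: conjugate the bra's amplitudes, multiply the ket's."""
    return sum(a.conjugate() * b for a, b in zip(phi, psi))

def norm(psi):
    """||psi|| = sqrt(<psi|psi>); length of the probability-amplitude vector."""
    return math.sqrt(braket(psi, psi).real)

# Two basis kets of a 2-d Hilbert space and an equal superposition of them.
up = [1 + 0j, 0 + 0j]
down = [0 + 0j, 1 + 0j]
plus = [1 / math.sqrt(2) + 0j, 1 / math.sqrt(2) + 0j]

print(braket(up, down))        # 0j  -- orthogonal rays, mutually exclusive outcomes
print(round(norm(plus), 10))   # 1.0 -- the superposition is normalized
```

Orthogonality of `up` and `down` is exactly the "classically distinct or mutually exclusive" property invoked in the text, and the normalized superposition carries the probability amplitudes for each outcome.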
Not all possible observables can be simultaneously measured, e.g., the position and spin of a particle, giving rise to Heisenberg’s uncertainty principle. At the sub-Planckian scale (10⁻³³ cm) quantum-gravity space is a lattice, and we can have an infinity of dimensions for an open-ended, forever-expanding universe. Each of these mutually perpendicular basic rays represents a particular potential behavior of a quantum system, and the set of all basic rays for a given property constitutes the relativistic frame of reference in Hilbert space. Orthogonality provides for potential activities that are classically distinct or mutually exclusive. Also, the number of dimensions needed in this abstract space corresponds to the number of choices available for the quantum system, and this, as we have just seen, can go to infinity. In such cases the product of their Hilbert spaces gives rise to the “entangled states” of the Einstein-Podolsky-Rosen (EPR) effect. In Hilbert space the ubiquitous electron can be in all possible places all the time. In this respect the Hilbert-space concept is more than adequate to carry the baton for the representation of the quotidian, familiar everyday world.
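The product of Hilbert spaces that gives rise to the EPR "entangled states" can be sketched as a tensor (Kronecker) product of amplitude vectors. The determinant test below is a standard separability criterion for pure two-qubit states, not part of our model; the names are illustrative:

```python
# Sketch: tensor product of two 2-d Hilbert spaces, and an entanglement check.
# For a pure two-qubit state [c00, c01, c10, c11], the state is a product
# (separable) state iff the 2x2 amplitude matrix has zero determinant.

import math

def kron(u, v):
    """Tensor product of two state vectors: |u> (x) |v>."""
    return [a * b for a in u for b in v]

def is_entangled(state):
    """Nonzero determinant of the reshaped amplitudes <=> not a product state."""
    c00, c01, c10, c11 = state
    return abs(c00 * c11 - c01 * c10) > 1e-12

up, down = [1.0, 0.0], [0.0, 1.0]
product = kron(up, down)                                # separable |0>|1>
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]   # (|00> + |11>)/sqrt(2)

print(is_entangled(product))  # False -- factors back into two independent kets
print(is_entangled(bell))     # True  -- the EPR pair cannot be factored
```

The Bell state is the smallest example of the text's point: a state living in the product space that cannot be described as two separate one-particle states.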

In general terms the Dirac notation satisfies the identity ⟨φ|ψ⟩* = ⟨ψ|φ⟩, where * denotes the complex conjugate. By mathematical-logic transformations the general terms can be further transformed into operational functional representations allowing a ‘visualization’ of the temporal course of evolution of any unit-dimensional invariant particle aggregate as it projects its evolutionary progression into the multidimensional Hilbert space, allowing for predictable warnings about probable future happenings in our biosphere of interest.
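The standard bra-ket identities alluded to here (with * denoting complex conjugation, and a, b complex scalars) can be written out explicitly as:

```latex
\langle \phi | \psi \rangle^{*} = \langle \psi | \phi \rangle,
\qquad
\langle \phi | \bigl( a|\psi_1\rangle + b|\psi_2\rangle \bigr)
  = a\,\langle \phi | \psi_1 \rangle + b\,\langle \phi | \psi_2 \rangle,
\qquad
\langle \psi | \psi \rangle \ge 0 .
```

The first is conjugate symmetry, the second linearity in the ket, and the third positivity of the norm; together they are the defining properties of the Hilbert-space inner product the notation abbreviates.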

Summary and Conclusions.
If the objective reader still considers recorded history at least a reliable guide as to how complexity has evolved from memorable Aristotelian times to our convulsive 21st century, it should be obvious that each historical period had the task of reconciling the immanent/pragmatic, phenomenological ‘seen’ and the transcendental, relevant epistemological ‘unseen’. We can summarily mention Aristotle’s analytical guide in his ‘ceteris paribus’ strategy: minimize the number of postulates or hypotheses to make your model poem more credible. Even St. Thomas Aquinas recognized in the Middle Ages how natural laws of simplicity adequately guide the course of universal evolution. Likewise Kant, in the Critique of Pure Reason, supports the idea of minimizing the number of non-phenomenological assumptions/principles contained in ‘Pure Reason’-based arguments underlying scientists’ theorizing about nature. If true and sufficient, as consistently verified by all human observers, whether philosophers, experimentalists and/or practitioners, why muddle the truth-content goal with claims of exclusivity based on pronouncements about subjective, radical sensory or extrasensory individualized experiences? Why not heed today the pragmatic, universal suggestions of 14th-century Occam’s Razor, of Galileo, or of Newton’s Principia Mathematica? Why settle for the self-serving pomp and circumstance of superfluous causalities as defended by radicalized armchair physicalist theorists or by experimentalists, practitioners and philosophers? On the other radical extreme, why should anyone accept as the exclusive truth the metaphysical, subjective, individualized content of a physical human brain’s theosophy-inspired cosmogony experiences?
Three centuries ago the chemist Lavoisier ridiculed the hypothetical metaphysical ‘phlogiston’ as the exclusive explanation of the phenomenological chemical reactions observed, a rejection based exclusively on mathematical-logic principles that minimize the arbitrariness of non-phenomenal brainstorms. Again, mesoscopic existential reality demands the easiest and simplest explanations for the realities a healthy physical human brain experiences. To guarantee the maximum probability of truth content we need to integrate the maximum number of empirically consistent, falsifiable human measurements/observations resting on logical deductions and a bare minimum of axiom-based model poems. This is true of all disciplines created and narrated by the exclusive, individualized human brain dynamic activity, whether we like it or not. The polarization we witness between the hands-on physical ontologists and the armchair metaphysical epistemologists in the current 21st-century debate on ‘consciousness’ seems to rely exclusively on the burden of proof summoned in defense of one’s point of view, thanks to the magic experimental results coming from modern technology, especially when they score high in predictive value. All things being hopefully considered, the undersigned author still believes that providing credible arguments rooted in solid, consistent, measured/observed empirical facts refuting competing theories is more important as a starting point in the debate as to the probable truth content of our conscious model choice. A case in point is the unnecessary causality debate on probable truth content between the undeniable linguistic elegance of armchair mathematics theorists and the hands-on, cold, probable/statistical laboratory-fact reports of real time-space practitioners.
Considering the evolving super-complexity of both the physical human brain structure/function and that of the universe, how dare either extreme version proclaim the exclusivity of its domain of discourse to the exclusion of the rival unknown other? Why do materialist physicists knowingly posit the existence of two coexisting but different ontologies in the mind-brain dualist interpretation of human brain dynamics, when they should know that etymologically ontology belongs to the phenomenological domain whereas mind is an epistemological denotation? Why not share and learn from each other, including the justifiable arguments of each other, in a current but evolving hybrid Epistemontological synthesis, as we have proposed and have exhaustively analyzed in our BPS model of brain dynamics?
The reason we briefly discussed the Dirac methodology is that, if our model aspires to become a universal theory of everything (TOE), we should justify the BPS model poem of brain dynamics as applicable to any area of human enquiry, regardless of whether it is formulated as an epistemological or a methodological principle, because we consider both principles two coexisting, inseparable aspects of the same mesoscopic existential reality in our human species’ biosphere.

Dr. Angell O. de la Sierra, Esq., Deltona, Florida. Spring 2013 (April 27).

