101 .of integers to boot. 436

97ХХo97A.

.ОХXАХXО. .consolidation. .explosion.of-communication. !WE are wonderful WE! .worthy.agreed. .АХXОХXА.

0. t(e) = pre ! [13;20] ! 441


1. WE = EVERYTHING-I − 1 [minus (-) one (1) individuality]

=== US = TRIUNITY/TOTALITY/MATRIXIam-1 [minus (-) the one (1) individuality]

2. 31:150::3233::520:13531

3. http://dauns01.math.tulane.edu/~tipler/populararticles.htm

  • «Intelligent Life in Cosmology», published in International Journal of Astrobiology, volume 2 (2003), pages 141-148.
  • «The Value/Fact Distinction: Coase’s Theorem Unifies Normative and Positive Economics», (January 15, 2007).
  • «The Star of Bethlehem: a Type Ia/Ic Supernova in the Andromeda Galaxy», Observatory, volume 125 (2005), pages 168-174.

«Theory of Everything based on Feynman-Weinberg Quantum Gravity and the Extended Standard Model» (original title; published title is «The Structure of the World from Pure Numbers»), published in Reports on Progress in Physics, volume 68 (2005), pages 897-964.
Frank J. Tipler
Department of Mathematics
Tulane University
New Orleans, LA 70118

4. We shall investigate the idea that physical reality is pure number in the second section of this paper. We shall point out that quantum mechanics—more precisely the Bekenstein Bound, a relativistic version of the Heisenberg uncertainty principle—implies that the complexity of the universe at the present time is finite, and hence the entire universe can be emulated down to the quantum state on a computer. Thus, it would seem that indeed the universe is a mere expression of mathematical reality, more specifically an expression of number theory, and of integers to boot.

If the universe is closed—we shall argue in later sections that quantum mechanical consistency requires it to be not only spatially compact but a three-sphere S3—then the Bekenstein Bound shows that the complexity of the universe at any time is finite. Or more precisely, the Bound requires a universe of the multiverse to be finite at any given time. As we shall see, there are an uncountable number of universes in the multiverse, but there are only a finite number of physically distinguishable universes in the multiverse of a given size and non-vacuum energy content. So, fixing the size and non-vacuum energy content, there must be an uncountable number of identical copies of each universe with a given information content. For example, a universe the size of the visible universe and with the non-vacuum energy content assumed by Penrose could be in any one of 10^(10^123) possible quantum states. (In double exponentiation, it does not matter if one uses 2 or 10 as the lowest base: 10^(10^123) ≈ 2^(10^123).) There will be an uncountable number of identical copies of each of these 10^(10^123) universes. As time increases, these identical copies will differentiate, but at any time there will be an uncountable number of identical copies of each possible quantum state allowed by the laws of physics.
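The parenthetical claim that the lowest base of a double exponential hardly matters can be checked with a few lines of arithmetic (my own illustration, not from the paper): comparing the second-level exponents of 10^(10^123) and 2^(10^123) via logarithms.

```python
import math

# 10^(10^123) vs 2^(10^123).  Compare the towers by taking log10 twice:
# log10(log10(10^(10^123))) = 123 exactly, and since
# log10(2^(10^123)) = 10^123 * log10(2), we get
# log10(log10(2^(10^123))) = 123 + log10(log10(2)).
second_exponent_base10 = 123.0
second_exponent_base2 = 123.0 + math.log10(math.log10(2.0))

print(second_exponent_base10)           # 123.0
print(round(second_exponent_base2, 2))  # 122.48: both towers are ~10^(10^123)
```

The two second-level exponents differ by only about half a unit, which is why the choice of 2 versus 10 as the lowest base is irrelevant at this scale.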

5. Let us outline an argument based on the Bekenstein Bound that the entropy of our universe must diverge to infinity as its final state is approached.

MacCallum has shown that an S3 closed universe with a single point future c-boundary is of measure zero in initial data space. Barrow has shown that the evolution of an S3 closed universe into its final singularity is chaotic. Yorke has shown that a chaotic physical system is likely to evolve into a measure zero state if and only if its control parameters are intelligently manipulated. Thus, life (≡ intelligent computers) almost certainly must be present arbitrarily close to the final singularity in order for the known laws of physics to be mutually consistent at all times. Misner has shown in effect that event horizon elimination requires an infinite number of distinct manipulations, so an infinite amount of information must be processed between now and the final singularity. Each manipulation will generate at least one bit of entropy, since each manipulation will require first observing the universe, and each (irreversible) observation will require increasing the entropy by at least one bit. This yields S→+∞ as the final singularity is approached. Furthermore, the amount of information stored at any time diverges to infinity as the Omega Point is approached, since the divergence of the universe’s entropy implies the divergence of the complexity of the system that must be understood to be controlled.

6. So we have obtained two divergences for the price of one! Not only must the entropy of the universe diverge, but so must the information coded in the biosphere.

7. The laws of physics require progress and life to continue to the very end of time, and improve to infinity. If the laws of physics be for us, who can be against us?

8. Wheeler points out [5] that the natural meaning of “today” — the choice of a spacelike hypersurface through the Earth — is the constant mean curvature hypersurface through the Earth today. Tipler ([12], p. 440) has shown that if the strong energy condition holds, and if the universe began close to homogeneity and isotropy, an Omega Point spacetime can be uniquely foliated by constant mean curvature hypersurfaces, so Wheeler’s proposal does indeed define a unique “today” over the entire universe (Tipler also shows that a constant mean curvature hypersurface probably coincides with the rest frame of the CBR at any event, so Wheeler’s “today across the entire universe” is even easy to locate experimentally). Putting all of these criteria together yields.

9. I have argued in my book that life’s drive to total knowledge in the far future will cause our far future descendants to carry out this emulation of their distant ancestors. After all, we are now attempting to reproduce our ultimate biological ancestor, the first living cell from which all life on Earth descended. We would be the first rational beings from which all rational beings in the far future would be descended, so in reproducing us in these far future computers, life in the far future would just be learning about their history. So the laws of physics will not only be for us in the sense of requiring the biosphere to survive, they are for us in the sense that they will eventually allow every human who has ever lived to have a second chance at life. Notice that this ‘life goes on forever’ picture really makes use only of the integers. At any one time, the complexity of the universe is finite. In fact, we could now be an emulation in a digital computer! But since we have no way of reaching the computer from inside the emulation, we could just regard the emulation as fundamental. This would mean regarding physical reality as a subset of mathematical reality. This is the Platonic universe: physical reality is not ‘real’ ultimately; only number—the integers comprising the true ultimate reality—is actually real. What does mathematics tell us about this ultimate integer reality? To answer this question, let us first remind ourselves of a few basic notions from logic (see Jech 2003, pp 155–7 for more details).

10. On the other hand, the Gödel theorems do not prove that no proof of consistency of Peano Arithmetic is possible. The theorems merely show that a valid proof cannot be mapped into arithmetic in which sentences must be of finite length. It might be the case, for example, that a valid proof of consistency can be obtained if we allow proofs of infinite length. To this possibility we now turn.

Physical reality is ultimately quantum mechanical, and quantum mechanics is fundamentally a theory of linear superposition, based on the continuum. The ‘natural’ numbers, which are tacitly in the mental background when the ZF axioms are formulated (think finite number of symbols, finite number of steps allowed in an acceptable proof), are not a natural foundation at all. Rather, it is the continuum that is the basic entity, and the positive integers a derivative quantity. Specifically, the integers we see in the world around us—five coins, six birds, the distinct lines of the Balmer series—are expressions of the Exclusion Principle and the discrete eigenfunctions of the Schrödinger equation applied to atoms. But the Schrödinger equation also has plane wave solutions, and these solutions have a continuous spectrum. Ultimate reality is continuous, not discrete. Discreteness—the integers—arises from boundary conditions imposed on an underlying continuum.
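The claim that integers arise from boundary conditions on a continuum can be illustrated with the standard infinite square well (a textbook example of my choosing, not drawn from the paper): requiring ψ(0) = ψ(L) = 0 on the continuum of plane-wave solutions quantizes k to n·π/L, so the energies go as n².

```python
import math

# Infinite square well: the free Schrödinger equation admits plane waves
# with ANY real k (continuous spectrum), but the boundary conditions
# psi(0) = psi(L) = 0 force k = n*pi/L for integer n, giving discrete
# energies E_n = (hbar*k)^2 / (2m) proportional to n^2.
hbar = 1.0545718e-34  # J*s
m = 9.109e-31         # electron mass, kg
L = 1e-9              # 1 nm well

def energy(n):
    k = n * math.pi / L            # the boundary condition quantizes k
    return (hbar * k) ** 2 / (2 * m)

ratios = [energy(n) / energy(1) for n in (1, 2, 3, 4)]
print([round(r) for r in ratios])  # [1, 4, 9, 16]: integers out of a continuum
```

The 1 : 4 : 9 : 16 pattern is exactly the discreteness-from-boundary-conditions point made in the paragraph above.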

11. However, in one sense, the integers were fundamental for Euclid as well as for contemporary mathematicians. Euclid, Hilbert and Gödel allowed only a finite number of steps in a valid mathematical proof. But we should consider whether this constraint is merely a consequence of the human inability to check a proof with an infinite number of steps rather than a constraint coming from mathematical or physical reality. If the constraint comes from human limitations, is there then any difference between an actual infinity of steps, and a huge, but still finite, number of steps in a proof? This last question came to the painful attention of the mathematical community when Thomas Hales announced he had proven Kepler’s Sphere Packing Conjecture that the face-centred cubic lattice is the most efficient way to pack spheres (gives the greatest number density). Hales submitted his proof, which was of gigantic length because computers had been used in many of the steps, to the most prestigious of all mathematics journals, Annals of Mathematics, whose editor assembled a team of 12 referees, headed by Fejes Tóth. In early 2004, Tóth delivered a report to the editor that although he and the rest of the team were 99% certain that Hales’ proof was correct, they were not completely certain, and after 5 years of effort they had become convinced that they would never be certain. The length of the proof was such that a single human could never check the proof in a ‘reasonable’ amount of time. A computer could check the proof, and certify the proof as correct, but a correct proof of what?

Physicists will recall George Gamow’s ‘One Two Three… Infinity’.

12. I have argued that physics assumes the continuum, but is any higher infinity required? The Power Set Axiom generates an infinite hierarchy, but physically only the continuum appears. Physics deals with the set of all functions defined on n-dimensional space, and in general this set would be the set of all subsets of n-dimensional space. But if we impose the condition that the only allowed functions are continuous—indeed, if all the curvature derivatives are to be present, as required by quantum theory, the only allowed functions are not merely continuous but C∞—then the cardinality of this restricted set of functions is the cardinality of the continuum, since a continuous function is completely determined by its values at the rational points in its domain. This will not change in the path integral. So we have no justification from physics for any set of higher cardinality than the continuum. If we accept physics as our guide to mathematical reality, the Power Set Axiom must be abandoned. Physics begins and ends with the continuum. It is possible that the continuum should not be thought of as a set, but as a class. But even in ZF there are interesting classes that are not sets. The class of all ordinal numbers is not a set, for example (Dugundji 1970, p 43). Or it may be that we should reformulate the Power Set Axiom to yield the continuum, but not the infinite hierarchy beyond the continuum.
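The cardinality step in the paragraph above can be written out explicitly (my reconstruction of the standard argument, not a formula from the paper):

```latex
% A continuous f : \mathbb{R}^n \to \mathbb{R} is determined by its
% values on the countable dense set \mathbb{Q}^n, hence
\left|C(\mathbb{R}^n,\mathbb{R})\right|
  \le \left|\mathbb{R}^{\mathbb{Q}^n}\right|
  = \left(2^{\aleph_0}\right)^{\aleph_0}
  = 2^{\aleph_0\cdot\aleph_0}
  = 2^{\aleph_0}
  = \mathfrak{c}.
```

So restricting to continuous (a fortiori C∞) functions collapses the would-be power-set hierarchy back to the cardinality of the continuum.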

13. The word ‘free’ is in quotes because, as is well known, the values of the 23 parameters are not completely arbitrary. If the parameters were set at too large a value, the SM would become inconsistent. For example, if the value of the Higgs mass (given the top quark mass of about 180 GeV) were larger than about 200 GeV, the Higgs vacuum would become unstable. However, if the universe were to end in a final singularity, and if we were sufficiently close to the final singularity, then this instability would not matter, because the universe would end before the Higgs vacuum instability could manifest itself. And the closer we are to the final singularity, the larger the Higgs mass—and indeed all the coupling constants—can be before the instability can be realized. There is no limit to this: for any possible values of the coupling constants, there will be a time sufficiently close to the final singularity such that the SM will be consistent for all times before the universe ends in the final singularity. If we imagine that the parameters do indeed get larger with cosmic time, and that there is indeed a final singularity near which the coupling constants become arbitrarily large, then notice that the Gauge Hierarchy Problem disappears. There will be a time in the far future at which the particle masses are comparable in magnitude to the Planck mass, and beyond the Planck mass. This suggests that perhaps SM problems 2 and 3 are connected. One of the cosmological parameters recently determined by the WMAP observations is the Hubble constant H0. But, of course, the Hubble parameter is not a constant. It varies with cosmological epoch, having the value +∞ at the initial singularity and the value −∞ at the final singularity. The value H0 = 71 km s−1 Mpc−1 (Spergel et al 2003) is a measure of our epoch of cosmological history rather than a constant of nature. The Hubble parameter H takes on all possible values at some point in universal history.
Perhaps the constants of the SM are the same: not constants, but parameters taking on all possible values allowed by mathematical consistency over the whole of universal time and over the entire multiverse. In this case, the solution to SM problem 4 would be the same as the solution to problems 2 and 3.

.АХXАХXА. .agreed.worthy. !WE are wonderful WE! .communication.explosion. .in-touch. .АХXАХXА.

The hand and the brain. Man and his tool. The engineering point of view (common features of ancient Roman and modern American society). The role of technology and tools. Bergson on intellect and instinct. Comparison with the social systems of invertebrates and their communication (the dances of bees). The externalization of intellectual and linguistic operations (sign language, spoken natural language, other sign systems). Cave painting and its symbolism according to Leroi-Gourhan. Universal semiotic complexes of the world-tree epoch according to Toporov (the Bodhi tree, the image of the crucifixion). Musical instruments as an external realization of the potential of the temporal regions of the right hemisphere. The significance of cinema, television and other means of communication. Computers as a continuation of the left hemisphere of the brain. The Internet and the possibilities of world culture. The probable development of quantum computers and prospects for the future. The brain as a discrete Turing computer combined with a quantum one.

D. Hilbert and P. Bernays, in their monograph «Foundations of Mathematics» (1934), remark on the «Achilles and the tortoise» paradox[32]:

Usually one tries to get around this paradox by arguing that the sum of the infinitely many time intervals nevertheless converges and thus yields a finite span of time. But this reasoning does not touch at all one essentially paradoxical point, namely the paradox that a certain infinite sequence of events following one after another, a sequence whose completion we cannot even imagine (not merely physically, but at least in principle), should nevertheless in fact come to completion.

Davies’ super-machine

Proposed by E. B. Davies,[15] this is a machine that can, in the space of half an hour, create an exact replica of itself that is half its size and capable of twice its replication speed. This replica will in turn create an even faster version of itself with the same specifications, resulting in a supertask that finishes after an hour. If, additionally, the machines create a communication link between parent and child machine that yields successively faster bandwidth and the machines are capable of simple arithmetic, the machines can be used to perform brute-force proofs of unknown conjectures. However, Davies also points out that – due to fundamental properties of the real universe such as quantum mechanics, thermal noise and information theory – his machine can’t actually be built.
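The timing claim above (each replica is built in half the time of its parent, so infinitely many generations fit inside one hour) is just a geometric series; a small sketch of my own, not Davies' construction:

```python
from fractions import Fraction

# Machine k takes (1/2)^k hours to build machine k+1, so the total time
# for `generations` generations is 1/2 + 1/4 + ... + (1/2)^generations,
# which approaches 1 hour but never exceeds it.
def total_time(generations: int) -> Fraction:
    return sum(Fraction(1, 2 ** k) for k in range(1, generations + 1))

print(total_time(10))          # 1023/1024 of an hour after ten generations
print(float(total_time(50)))   # approaches 1.0: the supertask finishes in an hour
```

The exact-fraction arithmetic makes the bound visible: no finite prefix of the supertask ever reaches the full hour, yet the limit is exactly one hour.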

Hypercomputation

From Wikipedia, the free encyclopedia

Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

The Church–Turing thesis states that any «computable» function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense.

Technically, the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of deterministic, rather than random, uncomputable functions.

Principles of the standard predicate

The following principles follow from the above intuitive motivation and so should be deducible from the formal axioms. For the moment we take the domain of discussion as being the familiar set of whole numbers.

  • Any mathematical expression that does not use the new predicate standard explicitly or implicitly is an internal formula.
  • Any definition that does so is an external formula.
  • Any number uniquely specified by an internal formula is standard (by definition).
  • Nonstandard numbers are precisely those that cannot be uniquely specified (due to limitations of time and space) by an internal formula.
  • Nonstandard numbers are elusive: each one is too enormous to be manageable in decimal notation or any other representation, explicit or implicit, no matter how ingenious your notation. Whatever you succeed in producing is by definition merely another standard number.
  • Nevertheless, there are (many) nonstandard whole numbers in any infinite subset of N.
  • Nonstandard numbers are completely ordinary numbers, having decimal representations, prime factorizations, etc. Every classical theorem that applies to the natural numbers applies to the nonstandard natural numbers. We have created, not new numbers, but a new method of discriminating between existing numbers.
  • Moreover, any classical theorem that is true for all standard numbers is necessarily true for all natural numbers. Otherwise the formulation «the smallest number that fails to satisfy the theorem» would be an internal formula that uniquely defined a nonstandard number.
  • The predicate «nonstandard» is a logically consistent method for distinguishing large numbers — the usual term will be illimited. Reciprocals of these illimited numbers will necessarily be extremely small real numbers — infinitesimals. To avoid confusion with other interpretations of these words, in newer articles on IST those words are replaced with the constructs «i-large» and «i-small».
  • There are necessarily only finitely many standard numbers — but caution is required: we cannot gather them together and hold that the result is a well-defined mathematical set. This will not be supported by the formalism (the intuitive justification being that the precise bounds of this set vary with time and history). In particular we will not be able to talk about the largest standard number, or the smallest nonstandard number. It will be valid to talk about some finite set that contains all standard numbers — but this non-classical formulation could only apply to a nonstandard set.
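The «smallest counterexample» reasoning in the list above is the transfer idea of IST in miniature; in symbols (my sketch of the standard formulation, for an internal formula φ):

```latex
% If an internal formula holds for all standard n, it holds for all n:
\forall^{\mathrm{st}} n\,\varphi(n) \;\Longrightarrow\; \forall n\,\varphi(n),
% for otherwise \min\{\,n : \neg\varphi(n)\,\} would be an internal
% formula uniquely specifying a nonstandard number -- contradicting the
% principle that internally definable numbers are standard.
```

This is exactly why a classical theorem verified for all standard numbers automatically extends to the nonstandard ones.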

Critiques of use theories of meaning

Sometime between the 1950s and the 1990s, cognitive scientist Jerry Fodor said that use theories of meaning (of the Wittgensteinian kind) seem to assume that language is solely a public phenomenon, that there is no such thing as a «private language». Fodor thinks it is necessary to create or describe the language of thought, which would seemingly require the existence of a «private language».

In the 1960s, David Kellogg Lewis described meaning as use, a feature of a social convention and conventions as regularities of a specific sort. Lewis’ work was an application of game theory in philosophical topics.[39] Conventions, he argued, are a species of coordination equilibria.

Representations about reality do not exist apart from it: there is no separate «reality», while representations are part of reality and are real, precisely as representations.

The criterion of a representation's closeness to reality is the effectiveness of the decisions made with it: using a specific map, or the routes laid out on it, using the data of those representations.

The more detailed the map, the more effective the routes.
The higher the entropy of the representations, the … effective the
decisions.

Reality contains everything, creates everything and at the same time lies beyond everything.

In the primate brain a neural network evolved that supported and allowed operating on a network of up to 150 (Dunbar's number) objects and their interrelations.

Graphs and networks, matrices, probabilities and quantum superpositions 🙂
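The scale of such a network can be made concrete (my own arithmetic, not from the text): tracking 150 objects means tracking on the order of n(n−1)/2 pairwise relations.

```python
from math import comb

# Number of undirected pairwise links in a fully connected network of n
# objects: n choose 2 = n*(n-1)/2, i.e. quadratic growth.
def pair_links(n: int) -> int:
    return comb(n, 2)

print(pair_links(150))  # 11175 pairwise relations at Dunbar's number
print(pair_links(130))  # 8385 at the 130 figure mentioned below
```

The quadratic blow-up is why a Dunbar-sized social network is already a serious bookkeeping load for the brain.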

Each pair of objects is intricately connected with all the others. «I don't know what one thinks of the other» is also a form of relation, one that promises something will have to change…

Somewhere I heard that they put 130 scriptwriters' people under fMRI? Where are the other 20 of the 150? They don't have more money than the army 🙂 Maybe 20 are occupations, fixed and not important for limbic recall? …

It is curious that the DSM is tuned above all to balancing this whole network: we see the search for some equal, stable standing of «us» as we load this network of others from our own side… 130 nodes (?), efficiently, so as neither to shortchange oneself nor to fail to please one's friends 🙂

That is a strikingly, staggeringly large volume of computation… Our single brain is cooler than all the supercomputers) cooler… than no fewer than 130 quantum supercomputers interlinked with one another)))

Then gossip about acquaintances, the Queen of England, is an evolutionarily necessary and advantageous mechanism… load one such trailer of objects and feel (at that moment, even without words) the relations in that triangle. And… change them. Reassign the importances. Changing even one link quickly (instantly, quantum-style?) re-reads and rebalances everything within a second…

But how does cognition manage to do this, when it understands nothing? =) And there are a great many such triangles in the head.

I thought some more… it turns out, then, that this network is itself the most real, neither fictional nor synoptic, second-order neural network in our brain?

And Fact Maps… are the ability to mirror the state of this network on paper. This gives that very network visual feedback for the possibility of self-service and self-tuning… and if more objects can be stored, then we break past the Dunbar limit on their number!

And 130 such stacks will create yet another such neural network!

Very interesting thoughts are emerging, thank you 🙂

I have a friend in Nepal, she is over 40, her name is Summaya and she is illiterate.
In Nepal there are a great many illiterate but very intelligent people. One could even say erudite. But not well-read)

At first I did not pay it much attention, but I remember my astonishment when I realized that she struggled to memorize a list of more than 3 (?) items!
I say, on autopilot:

  • so mark it or write it down.
    And she:
  • but I don't know how!
    And I understood… that along with the literacy neural circuits come also the concepts of lists, internal memory, dates… and the very concept of a «map» existed only as «something where people find out which way to go», while the link between the surroundings outside and the lines on the map never formed… We could not, in fact, convey to her the concept of a «map» at all

Thank you! Fact Maps are a thrilling invention! 🙂

  • more maps mean more effective routes
  • facts depend on points of view
  • the more points of view, the more varied the facts and the more effective the decisions
  • from a single point you cannot extract all the facts
  • without others, no solutions can be found
  • diversity of individualities is the most effective solution
  • «I» is an object/link in the DSM
  • «I» is constructed for a task/role
  • «WE» is the totality of «I»s (of course, only as a reconstruction in the brain))
  • up to 130 individual «I»s in our single-person «WE» (that is the number, it seems to me, though probably even more…)
  • «WE» is the totality of all «I»s minus (-) one (1) individuality. just thought of that this very moment =)
  • «WE» is connected (we run around) with all the other «WE»s
  • and… if… we are all acquainted through 5 handshakes, does it follow that a group of 5 people can «understand» and balance the entire totality of human «WE»s…?
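The «5 handshakes» estimate in the last bullet can be checked with back-of-envelope arithmetic (my own, not the author's): if each person maintains roughly 150 acquaintances, five hops can in principle reach 150^5 people.

```python
# Crude small-world estimate: with ~150 acquaintances per person and no
# overlap between circles, five handshakes reach 150**5 people, which
# comfortably exceeds the world population (~8 billion).
reach = 150 ** 5
print(reach)                   # 75937500000
print(reach > 8_000_000_000)   # True: five links suffice combinatorially
```

Real social circles overlap heavily, so this is an upper bound, but it shows why a five-handshake chain is combinatorially plausible.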

From this point of view, my thinking's basic answer to «what, how, why» came out like this:

Everything-Everything, We-Invent-We-Create, Amazingly-Beautiful!

Thank you for the clarity! )

A semantic theory of truth was produced by Alfred Tarski for formal semantics. According to Tarski’s account, meaning consists of a recursive set of rules that end up yielding an infinite set of sentences, «‘p’ is true if and only if p», covering the whole language. His innovation produced the notion of propositional functions discussed in the section on universals (which he called «sentential functions»), and a model-theoretic approach to semantics (as opposed to a proof-theoretic one). Finally, some links were forged to the correspondence theory of truth (Tarski, 1944).
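Tarski's recursive clauses can be mimicked in miniature (a toy of my own construction, not his formal apparatus): finitely many rules assign truth to the infinitely many sentences of a tiny propositional language.

```python
# Sentences are nested tuples: ("atom", name) | ("not", s) | ("and", s1, s2).
# A "model" is just the set of atoms that hold.  The recursion mirrors the
# shape of a Tarski-style truth definition: a base clause for atoms
# ("'p' is true iff p") plus one clause per connective.
def true_in(model: set, sentence: tuple) -> bool:
    tag = sentence[0]
    if tag == "atom":
        return sentence[1] in model
    if tag == "not":
        return not true_in(model, sentence[1])
    if tag == "and":
        return true_in(model, sentence[1]) and true_in(model, sentence[2])
    raise ValueError(f"unknown connective: {tag}")

facts = {"snow_is_white"}
s = ("and", ("atom", "snow_is_white"), ("not", ("atom", "grass_is_red")))
print(true_in(facts, s))  # True
```

Three clauses cover infinitely many sentences, which is the force of Tarski's «recursive set of rules» yielding an infinite set of T-sentences.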

My Fact-Map «Education»

Critiques of truth theories of meaning

W. V. O. Quine attacked both verificationism and the very notion of meaning in his famous essay, «Two Dogmas of Empiricism». In it, he suggested that meaning was nothing more than a vague and dispensable notion. Instead, he asserted, what was more interesting to study was the synonymy between signs. He also pointed out that verificationism was tied to the distinction between analytic and synthetic statements, and asserted that such a divide was defended ambiguously. He also suggested that the unit of analysis for any potential investigation into the world (and, perhaps, meaning) would be the entire body of statements taken as a collective, not just individual statements on their own.

Other criticisms can be raised on the basis of the limitations that truth-conditional theorists themselves admit to. Tarski, for instance, recognized that truth-conditional theories of meaning only make sense of statements, but fail to explain the meanings of the lexical parts that make up statements. Rather, the meaning of the parts of statements is presupposed by an understanding of the truth-conditions of a whole statement, and explained in terms of what he called «satisfaction conditions».

Still another objection (noted by Frege and others) was that some kinds of statements don’t seem to have any truth-conditions at all. For instance, «Hello!» has no truth-conditions, because it doesn’t even attempt to tell the listener anything about the state of affairs in the world.

Eleanor Rosch and George Lakoff have advanced a theory of «prototypes» which suggests that many lexical categories, at least on the face of things, have «radial structures». That is to say, there are some ideal member(s) in the category that seem to represent the category better than other members. For example, the category of «birds» may feature the robin as the prototype, or the ideal kind of bird. With experience, subjects might come to evaluate membership in the category of «bird» by comparing candidate members to the prototype and evaluating for similarities. So, for example, a penguin or an ostrich would sit at the fringe of the meaning of «bird», because a penguin is unlike a robin.

Intimately related to these researches is the notion of a psychologically basic level, which is both the first level named and understood by children, and «the highest level at which a single mental image can reflect the entire category» (Lakoff 1987:46). The «basic level» of cognition is understood by Lakoff as crucially drawing upon «image-schemas» along with various other cognitive processes.

Philosophers Ned Block, Gilbert Harman and Hartry Field, and cognitive scientists G. Miller and P. Johnson-Laird say that the meaning of a term can be found by investigating its role in relation to other concepts and mental states. They endorse a «conceptual role semantics». Those proponents of this view who understand meanings to be exhausted by the content of mental states can be said to endorse «one-factor» accounts of conceptual role semantics and thus to fit within the tradition of idea theories.

😯😇😃


The sort of truth theories presented here can also be attacked for their formalism both in practice and principle. The principle of formalism is challenged by the informalists, who suggest that language is largely a construction of the speaker, and so, not compatible with formalization. The practice of formalism is challenged by those who observe that formal languages (such as present-day quantificational logic) fail to capture the expressive power of natural languages (as is arguably demonstrated in the awkward character of the quantificational explanation of definite description statements, as laid out by Bertrand Russell).

begin program
    write 0 on the first position of the output tape;
    begin loop
        simulate 1 successive step of the given Turing machine on the given input;
        if the Turing machine has halted then
            write 1 on the first position of the output tape and break out of loop;
    end loop
end program
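A runnable analogue of the pseudocode above (my own sketch): a finite step budget stands in for the Zeno machine's infinitely many accelerated steps, so this decides halting only within the budget, not in general.

```python
# A "program" is modeled as a generator that we advance one step at a time,
# mirroring "simulate 1 successive step of the given Turing machine".
def zeno_check(program, budget):
    """Return 1 if program() halts within `budget` steps, else leave the 0."""
    gen = program()
    for _ in range(budget):
        try:
            next(gen)            # simulate one successive step
        except StopIteration:
            return 1             # the machine has halted: overwrite with 1
    return 0                     # first tape cell still reads 0

def halts_quickly():
    for i in range(5):
        yield i                  # five steps, then halt

def loops_forever():
    while True:
        yield                    # never halts

print(zeno_check(halts_quickly, 100))  # 1
print(zeno_check(loops_forever, 100))  # 0
```

The gap between this sketch and the pseudocode is exactly the hypercomputational step: the Zeno machine replaces the finite budget with infinitely many steps completed in finite time.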

Finally, over the past century, forms of logic have been developed that are not dependent exclusively on the notions of truth and falsity. Some of these types of logic have been called modal logics. They explain how certain logical connectives such as «if-then» work in terms of necessity and possibility. Indeed, modal logic was the basis of one of the most popular and rigorous formulations in modern semantics called the Montague grammar. The successes of such systems naturally give rise to the argument that these systems have captured the natural meaning of connectives like if-then far better than an ordinary, truth-functional logic ever could.

X′ = { x | φ_x^X(x) is defined } (the Turing jump of X).

.АХXА.101XXAXX101.АХXА.

436

goto 0:

CC0. To the extent possible under law, Raman Oshimish-Pershynkau (https://www.law-of-time.ru/boot) has waived all copyright and related or neighboring rights to «101 integers to boot» (http://creativecommons.org/publicdomain/zero/1.0/). This work is published from: Belarus.




1. МЫ = УСЁЯ-1 [мінус (-) адна (1) індывідуальнасць]

=== US = TRIUNITY/TOTALITY/MATRIXIam-1 [minus (-) the one (1) individuality]

2. 31:150::3233::520:13531

3. http://dauns01.math.tulane.edu/~tipler/populararticles.htm

  • «Intelligent Life in Cosmology», published in International Journal of Astrobiology, volume 2 (2003), pages 141-148.
  • «The Value/Fact Distinction: Coase’s Theorem Unifies Normative and Positive Economics», (January 15, 2007).
  • «The Star of Bethlehem: a Type Ia/Ic Supernova in the Andromeda Galaxy», Observatory, volume 125 (2005), pages 168-174.
  • «Theory of Everything based on Feynman-Weinberg Quantum Gravity and the Extended Standard Model» (original title; published title is «The Structure of the World from Pure Numbers»), published in Reports on Progress in Physics, volume 68 (2005), pages 897-964.
Frank J. Tipler
Department of Mathematics
Tulane University
New Orleans, LA 70118
Frank J. Tipler
Department of Mathematics
Tulane University
New Orleans, LA 70118

4. We shall investigate the idea that physical reality is pure number in the second section of this paper. We shall point out that quantum mechanics—more precisely the Bekenstein Bound, a relativistic version of the Heisenberg uncertainty principle—implies that the complexity of the universe at the present time is finite, and hence the entire universe can be emulated down to the quantum state on a computer. Thus, it would seem that indeed the universe is a mere expression of mathematical reality, more specifically an expression of number theory, and of integers to boot.

If the universe is closed—we shall argue in later sections that quantum mechanical consistency requires it to be not only spatially compact but a three-sphere S3—then the Bekenstein Bound shows the complexity of the universe at any time to be finite. Or more precisely, the Bound requires the complexity of any universe of the multiverse to be finite at any given time. As we shall see, there are an uncountable number of universes in the multiverse, but there are only a finite number of physically distinguishable universes in the multiverse of a given size and non-vacuum energy content. So, fixing the size and non-vacuum energy content, there must be an uncountable number of identical copies of each universe with a given information content. For example, a universe the size of the visible universe and with the non-vacuum energy content assumed by Penrose could be in any one of 10^(10^123) possible quantum states. (In double exponentiation, it does not matter if one uses 2 or 10 as the lowest base: 10^(10^123) ≈ 2^(10^123).) There will be an uncountable number of identical copies of each of these 10^(10^123) universes. As time increases, these identical copies will differentiate, but at any time there will be an uncountable number of identical copies of each possible quantum state allowed by the laws of physics.
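The claim that the lowest base does not matter in double exponentiation can be checked numerically. A minimal sketch (my own illustration, working through logarithms since the numbers themselves are far too large to represent):

```python
import math

# Compare N1 = 10**(10**123) and N2 = 2**(10**123) via their logarithms.
log10_N1 = 10.0 ** 123                   # log10 of 10^(10^123)
log10_N2 = 10.0 ** 123 * math.log10(2)   # log10 of 2^(10^123)

# A second log10 collapses the difference to |log10(log10(2))|, about 0.52,
# so at the double-exponential scale the choice of base 2 or 10 is invisible.
assert abs(math.log10(log10_N1) - math.log10(log10_N2)) < 1
```

The single-exponential logarithms differ by a factor of about 3, but one more logarithm reduces that to an additive constant of order one.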

5. Let us outline an argument based on the Bekenstein Bound that the entropy of our universe must diverge to infinity as its final state is approached.

MacCallum has shown that an S3 closed universe with a single point future c-boundary is of measure zero in initial data space. Barrow has shown that the evolution of an S3 closed universe into its final singularity is chaotic. Yorke has shown that a chaotic physical system is likely to evolve into a measure zero state if and only if its control parameters are intelligently manipulated. Thus, life (≡ intelligent computers) almost certainly must be present arbitrarily close to the final singularity in order for the known laws of physics to be mutually consistent at all times. Misner has shown in effect that event horizon elimination requires an infinite number of distinct manipulations, so an infinite amount of information must be processed between now and the final singularity. Each manipulation will generate at least one bit of entropy, since each manipulation will require first observing the universe, and each (irreversible) observation will require increasing the entropy by at least one bit. This yields S→+∞ as the final singularity is approached. Furthermore, the amount of information stored at any time diverges to infinity as the Omega Point is approached, since the divergence of the universe’s entropy implies the divergence of the complexity of the system that must be understood to be controlled.

6. So we have obtained two divergences for the price of one! Not only must the entropy of the universe diverge, but so must the information coded in the biosphere.

7. The laws of physics require progress and life to continue to the very end of time, and improve to infinity. If the laws of physics be for us, who can be against us?

8. Wheeler points out [5] that the natural meaning of “today” — the choice of a spacelike hypersurface through the Earth — is the constant mean curvature hypersurface through the Earth today. Tipler ([12], p. 440) has shown that if the strong energy condition holds, and if the universe began close to homogeneity and isotropy, an Omega Point spacetime can be uniquely foliated by constant mean curvature hypersurfaces, so Wheeler’s proposal does indeed define a unique “today” over the entire universe. (Tipler also shows that a constant mean curvature hypersurface probably coincides with the rest frame of the CBR at any event, so Wheeler’s “today across the entire universe” is even easy to locate experimentally.) Putting all of these criteria together yields.

9. I have argued in my book that life’s drive to total knowledge in the far future will cause our far future descendants to carry out this emulation of their distant ancestors. After all, we are now attempting to reproduce our ultimate biological ancestor, the first living cell from which all life on Earth descended. We would be the first rational beings from which all rational beings in the far future would be descended, so in reproducing us in these far future computers, life in the far future would just be learning about their history. So the laws of physics will not only be for us in the sense of requiring the biosphere to survive, they are for us in the sense that they will eventually allow every human who has ever lived to have a second chance at life. Notice that this ‘life goes on forever’ picture really makes use only of the integers. At any one time, the complexity of the universe is finite. In fact, we could now be an emulation in a digital computer! But since we have no way of reaching the computer from inside the emulation, we could just regard the emulation as fundamental. This would mean regarding physical reality as a subset of mathematical reality. This is the Platonic universe: physical reality is not ‘real’ ultimately; only number—the integers comprising the true ultimate reality—is actually real. What does mathematics tell us about this ultimate integer reality? To answer this question, let us first remind ourselves of a few basic notions from logic (see Jech 2003, pp 155–7 for more details).

10. On the other hand, the Gödel theorems do not prove that no proof of consistency of Peano Arithmetic is possible. The theorems merely show that a valid proof cannot be mapped into arithmetic in which sentences must be of finite length. It might be the case, for example, that a valid proof of consistency can be obtained if we allow proofs of infinite length. To this possibility we now turn.

Physical reality is ultimately quantum mechanical, and quantum mechanics is fundamentally a theory of linear superposition, based on the continuum. The ‘natural’ numbers, which are tacitly in the mental background when the ZF axioms are formulated (think finite number of symbols, finite number of steps allowed in an acceptable proof), are not a natural foundation at all. Rather, it is the continuum that is the basic entity, and the positive integers a derivative quantity. Specifically, the integers we see in the world around us—five coins, six birds, the distinct lines of the Balmer series—are expressions of the Exclusion Principle and the discrete eigenfunctions of the Schrödinger equation applied to atoms. But the Schrödinger equation also has plane wave solutions, and these solutions have a continuous spectrum. Ultimate reality is continuous, not discrete. Discreteness—the integers—arises from boundary conditions imposed on an underlying continuum.
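How discreteness emerges from boundary conditions on a continuum can be seen numerically. Below is a minimal sketch of my own (not from the paper): discretizing −ψ″ = Eψ on (0, 1) with ψ(0) = ψ(1) = 0 by finite differences, the computed eigenvalues approach the discrete spectrum E_n = (nπ)², even though the underlying equation lives on the continuum.

```python
import numpy as np

# -psi'' = E psi on (0, 1) with psi(0) = psi(1) = 0 (units where hbar^2/2m = 1).
# Finite-difference second derivative on an interior grid of n points.
n = 500
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2        # diagonal of the -d^2/dx^2 matrix
off = -1.0 * np.ones(n - 1) / h**2    # off-diagonals
E = np.linalg.eigvalsh(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

# The Dirichlet boundary conditions force a discrete spectrum E_k = (k*pi)**2:
for k in (1, 2, 3):
    assert abs(E[k - 1] - (k * np.pi) ** 2) < 0.1
```

Remove the boundary conditions (plane waves on the whole line) and the spectrum becomes continuous; the integers labelling the levels come entirely from the constraint at the endpoints.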

11. However, in one sense, the integers were fundamental for Euclid as well as for contemporary mathematicians. Euclid, Hilbert and Gödel allowed only a finite number of steps in a valid mathematical proof. But we should consider whether this constraint is merely a consequence of the human inability to check a proof with an infinite number of steps rather than a constraint coming from mathematical or physical reality. If the constraint comes from human limitations, is there then any difference between an actual infinity of steps, and a huge, but still finite, number of steps in a proof? This last question came to the painful attention of the mathematical community when Thomas Hales announced he had proven Kepler’s Sphere Packing Conjecture that the face-centred cubic lattice is the most efficient way to pack spheres (gives the greatest number density). Hales submitted his proof, which was of gigantic length because computers had been used in many of the steps, to the most prestigious of all mathematics journals, Annals of Mathematics, whose editor assembled a team of 12 referees, headed by Fejes Tóth. In early 2004, Tóth delivered a report to the editor that although he and the rest of the team were 99% certain that Hales’ proof was correct, they were not completely certain, and after 5 years of effort they had become convinced that they would never be certain. The length of the proof was such that a single human could never check the proof in a ‘reasonable’ amount of time. A computer could check the proof, and certify the proof as correct, but a correct proof of what?

Physicists will recall George Gamow’s ‘One, Two, Three… Infinity’.

12. I have argued that physics assumes the continuum, but is any higher infinity required? The Power Set Axiom generates an infinite hierarchy, but physically only the continuum appears. Physics deals with the set of all functions defined on n-dimensional space, and in general this set would be the set of all subsets of n-dimensional space. But if we impose the condition that the only allowed functions are continuous—indeed, if all the curvature derivatives are to be present, as required by quantum theory, the allowed functions are not merely continuous but C∞—then the cardinality of this restricted set of functions is the cardinality of the continuum, since a continuous function is completely determined by its values at the rational points in its domain. This will not change in the path integral. So we have no justification from physics for any set of higher cardinality than the continuum. If we accept physics as our guide to mathematical reality, the Power Set Axiom must be abandoned. Physics begins and ends with the continuum. It is possible that the continuum should not be thought of as a set, but as a class. But even in ZF there are interesting classes that are not sets. The class of all ordinal numbers is not a set, for example (Dugundji 1970, p 43). Or it may be that we should reformulate the Power Set Axiom to yield the continuum, but not the infinite hierarchy beyond the continuum.
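The cardinality argument rests on the fact that a continuous function is pinned down by its values at rational points. A toy numerical sketch of mine (the test function is arbitrary): sample f only at the rationals k/n and rebuild it by interpolation; the reconstruction converges to f at every point as n grows.

```python
import math

def f(x):
    # arbitrary continuous test function
    return math.sin(3 * x) + x * x

def reconstruct(x, n):
    # use only the values of f at the rational points k/n and (k+1)/n
    k = math.floor(x * n)
    x0, x1 = k / n, (k + 1) / n
    t = (x - x0) * n
    return (1 - t) * f(x0) + t * f(x1)

n = 10_000
points = [0.1234567, math.pi / 7, 0.999]   # includes an irrational point
err = max(abs(f(x) - reconstruct(x, n)) for x in points)
assert err < 1e-6   # the countably many rational samples determine f everywhere
```

Since countably many real values suffice, the set of continuous functions has only the cardinality of the continuum, not that of the full power set.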

13. The word ‘free’ is in quotes because, as is well known, the values of the 23 parameters are not completely arbitrary. If the parameters were set at too large a value, the SM would become inconsistent. For example, if the value of the Higgs mass (given the top quark mass at about 180 GeV) were larger than about 200 GeV, the Higgs vacuum would become unstable. However, if the universe were to end in a final singularity, and if we were sufficiently close to the final singularity, then this instability would not matter, because the universe would end before the Higgs vacuum instability could manifest itself. And the closer we are to the final singularity, the larger the Higgs mass—and indeed all the coupling constants—can be before the instability can be realized. There is no limit to this: for any possible values of the coupling constants, there will be a time sufficiently close to the final singularity so that the SM will be consistent for all times before the universe ends in the final singularity. If we imagine that the parameters do indeed get larger with cosmic time, and there is indeed a final singularity near which the coupling constants become arbitrarily large, then notice that the Gauge Hierarchy Problem disappears. There will be a time in the far future at which the particle masses are comparable in magnitude to the Planck mass, and beyond the Planck mass. This suggests that perhaps SM problems 2 and 3 are connected. One of the cosmological parameters recently determined by the WMAP observations is the Hubble constant H0. But, of course, the Hubble parameter is not a constant. It varies with cosmological epoch, having the value +∞ at the initial singularity and the value −∞ at the final singularity. The value H0 = 71 km s⁻¹ Mpc⁻¹ (Spergel et al 2003) is a measure of our epoch of cosmological history rather than a constant of nature. The Hubble parameter H takes on all possible values at some point in universal history.
Perhaps the constants of the SM are the same: not constants, but parameters taking on all possible values allowed by mathematical consistency over the whole of universal time and over the entire multiverse. In this case, the solution to SM problem 4 would be the same as the solution to problems 2 and 3.

.АХXАХXА. .agreed.worthily. !WE wonderful WE! .communication.explosion. .in-touch. .АХXАХXА.

Hand and brain. Man and his tool. The engineering point of view (common features of ancient Roman and modern American society). The role of technology and tools. Bergson on intellect and instinct. Comparison with the social systems of invertebrates and their communication (bee dances). The externalization of intellectual and linguistic operations (sign language, oral natural language, other sign systems). Cave painting and its symbolism according to Leroi-Gourhan. The universal semiotic complexes of the world-tree epoch according to Toporov (the Bodhi tree, the image of the crucifixion). Musical instruments as an external realization of the potential of the temporal lobes of the right hemisphere. The significance of cinema, television and other means of communication. Computers as an extension of the left hemisphere of the brain. The Internet and the possibilities of world culture. The probable development of quantum computers and prospects for the future. The brain as a discrete Turing computer combined with a quantum one.

In their monograph «Foundations of Mathematics» (1934), D. Hilbert and P. Bernays remark on the «Achilles and the tortoise» paradox[32]:

Usually one tries to get around this paradox by arguing that the sum of the infinite number of these time intervals nevertheless converges and thus yields a finite interval of time. However, this reasoning leaves entirely untouched one essentially paradoxical point, namely the paradox that a certain infinite sequence of successive events, a sequence whose completion we cannot even imagine (not only physically, but even in principle), must nevertheless in fact be completed.

Davies’ super-machine

Proposed by E. B. Davies,[15] this is a machine that can, in the space of half an hour, create an exact replica of itself that is half its size and capable of twice its replication speed. This replica will in turn create an even faster version of itself with the same specifications, resulting in a supertask that finishes after an hour. If, additionally, the machines create a communication link between parent and child machine that yields successively faster bandwidth and the machines are capable of simple arithmetic, the machines can be used to perform brute-force proofs of unknown conjectures. However, Davies also points out that – due to fundamental properties of the real universe such as quantum mechanics, thermal noise and information theory – his machine can’t actually be built.
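The «finishes after an hour» claim is just the geometric series 1/2 + 1/4 + 1/8 + … = 1, which a one-line check confirms:

```python
# Each replica is built in half the time of its parent:
# 1/2 + 1/4 + 1/8 + ... hours converges to 1 hour.
total_hours = sum(0.5 ** k for k in range(1, 60))
assert abs(total_hours - 1.0) < 1e-15
```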

Hypercomputation

From Wikipedia, the free encyclopedia

Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

The Church–Turing thesis states that any «computable» function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense.

Technically, the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of deterministic, rather than random, uncomputable functions.

Principles of the standard predicate

The following principles follow from the above intuitive motivation and so should be deducible from the formal axioms. For the moment we take the domain of discussion as being the familiar set of whole numbers.

  • Any mathematical expression that does not use the new predicate standard explicitly or implicitly is an internal formula.
  • Any definition that does so is an external formula.
  • Any number uniquely specified by an internal formula is standard (by definition).
  • Nonstandard numbers are precisely those that cannot be uniquely specified (due to limitations of time and space) by an internal formula.
  • Nonstandard numbers are elusive: each one is too enormous to be manageable in decimal notation or any other representation, explicit or implicit, no matter how ingenious your notation. Whatever you succeed in producing is by definition merely another standard number.
  • Nevertheless, there are (many) nonstandard whole numbers in any infinite subset of N.
  • Nonstandard numbers are completely ordinary numbers, having decimal representations, prime factorizations, etc. Every classical theorem that applies to the natural numbers applies to the nonstandard natural numbers. We have created, not new numbers, but a new method of discriminating between existing numbers.
  • Moreover, any classical theorem that is true for all standard numbers is necessarily true for all natural numbers. Otherwise the formulation «the smallest number that fails to satisfy the theorem» would be an internal formula that uniquely defined a nonstandard number.
  • The predicate «nonstandard» is a logically consistent method for distinguishing large numbers — the usual term will be illimited. Reciprocals of these illimited numbers will necessarily be extremely small real numbers — infinitesimals. To avoid confusion with other interpretations of these words, in newer articles on IST those words are replaced with the constructs «i-large» and «i-small».
  • There are necessarily only finitely many standard numbers — but caution is required: we cannot gather them together and hold that the result is a well-defined mathematical set. This will not be supported by the formalism (the intuitive justification being that the precise bounds of this set vary with time and history). In particular we will not be able to talk about the largest standard number, or the smallest nonstandard number. It will be valid to talk about some finite set that contains all standard numbers — but this non-classical formulation could only apply to a nonstandard set.

Critiques of use theories of meaning

Sometime between the 1950s and 1990s, cognitive scientist Jerry Fodor said that use theories of meaning (of the Wittgensteinian kind) seem to assume that language is solely a public phenomenon, that there is no such thing as a «private language». Fodor thinks it is necessary to create or describe the language of thought, which would seemingly require the existence of a «private language».

In the 1960s, David Kellogg Lewis described meaning as use, a feature of a social convention, and conventions as regularities of a specific sort. Lewis’ work was an application of game theory to philosophical topics.[39] Conventions, he argued, are a species of coordination equilibria.

Representations about reality do not exist on their own; there is no separate «reality»; rather, representations are a part of reality, and they are real — as representations.

The criterion of a representation’s closeness to reality is the effectiveness of the decisions made: the use of a specific map, or of the routes laid out on it, the use of the given representations.

The more detailed the maps, the more effective the routes.
The higher the entropy of the representations, the more effective the
decisions.

Reality contains everything, creates everything, and at the same time is beyond everything.

In the primate brain, evolution shaped a neural network that supported and allowed operating with a network of up to 150 objects (Dunbar’s number) and their interrelations.

Graphs and networks, matrices, probabilities and quantum superpositions 🙂

Each of any two objects is intricately connected with the others. «One does not know about the other» is also a form of relationship, one that promises the need for change …

Somewhere I heard that for the fMRI scenarios they put 130 people under the scanner? Where are the other 20 of the 150? They don’t have more money than the army 🙂 Maybe 20 are occupied, fixed, and not important for limbic recall? …

It is interesting that the DSM is above all tuned to balancing this whole network — we see the search for some equal, stable attitude of «us» as we link this network to the others from our side … 130 nodes (?) efficiently, so as neither to shortchange oneself nor fail to please one’s friends 🙂

That is a strikingly vast volume of computation … Our single brain outdoes all the supercomputers) outdoes … no fewer than 130 quantum supercomputers linked to one another) ))

Then acquaintance — the English queen — is an evolutionarily necessary and advantageous mechanism … to load one such trailer of objects and to feel (at the time even without words) the relations in that triangle. And .. to change them. To reassign the importances. Changing even a single connection quickly (instantly, quantum-mechanically?) re-reads and rebalances everything within a second …

But how does cognition do this if it understands nothing? =) And there are … a great many such triangles in the head.

I thought about it some more… does it not turn out that this network is itself the actually existing, not artistic and not synoptic, second-order neural network in our brain?

And Fact Maps… are the ability to reflect the state of this network on paper. This gives that very network visual feedback for the possibility of self-maintenance and self-tuning … and if more objects can be stored, that is a way past the Dunbar limit on their number!

And 130 such stacks will create yet another neural network just like it!

Very interesting thoughts come out of this, thank you 🙂

A friend of mine lives in Nepal; she is over 40, her name is Summaja, and she is illiterate.
In Nepal there are a great many illiterate but very intelligent people. Even erudite, one might say. But not well-read)

At first I did not pay much attention to this, but I remember my surprise when I realized how much she struggled to memorize a list of more than 3 (?) items!
On autopilot I say:

  • so mark it down or write it down.
    And she:
  • but I don’t know how!
    I realized … that along with the literacy neural circuits come the very notions of lists, inner memory, dates … and even the notion of a «map» existed only as «something by which people find out where to go»; the connection between the surroundings outside and the lines on the map had not formed … In fact, we had never conveyed to her the notion of a «map»

Thank you! The Fact Map is an exciting invention! 🙂

  • more maps — more route efficiency
  • facts depend on points of view
  • the more points of view — the more varied the facts and the more effective the decisions
  • from a single point you cannot obtain all the facts
  • without the others — no solutions can be found
  • diversity of individualities is the most effective solution
  • «I» is an object/link in the DSM
  • «I» is constructed for a task/role
  • «WE» is a collection of «I»s (of course, only as a reconstruction in the brain))
  • up to 130 individual «I»s in our single-person «WE» (that number seems right to me — though probably even more…)
  • «WE» is the collection of all «I»s minus (-) one (1) individuality. just came up with that =)
  • «WE» is connected (we run around) with all the other «WE»s
  • and… if… we are all acquainted within 5 handshakes, does it follow that a group of 5 people can «understand» and balance the entire collection of human «WE»s…?

From this point of view, my thinking’s main answer to «what, how, why» came out like this:

Everything-Everything, We-Invent-We-Create, Amazingly-Beautiful!

Thanks for the clarity! )

A semantic theory of truth was produced by Alfred Tarski for formal semantics. According to Tarski’s account, meaning consists of a recursive set of rules that end up yielding an infinite set of sentences, «‘p’ is true if and only if p», covering the whole language. His innovation produced the notion of propositional functions discussed in the section on universals (which he called «sentential functions»), and a model-theoretic approach to semantics (as opposed to a proof-theoretic one). Finally, some links were forged to the correspondence theory of truth (Tarski, 1944).
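The «recursive set of rules» can be sketched as a toy model-theoretic evaluator for propositional sentences. This is a hypothetical illustration of mine, not Tarski’s own formalism: truth of a compound sentence is computed recursively from the truth of its parts, relative to a valuation.

```python
# Sentences are nested tuples: ("atom", name), ("not", s), ("and", s1, s2), ("or", s1, s2).
def true_in(sentence, valuation):
    op = sentence[0]
    if op == "atom":
        return valuation[sentence[1]]          # base clause: look up the atom
    if op == "not":
        return not true_in(sentence[1], valuation)
    if op == "and":
        return true_in(sentence[1], valuation) and true_in(sentence[2], valuation)
    if op == "or":
        return true_in(sentence[1], valuation) or true_in(sentence[2], valuation)
    raise ValueError(op)

# "'p and not q' is true if and only if p is true and q is not":
v = {"p": True, "q": False}
assert true_in(("and", ("atom", "p"), ("not", ("atom", "q"))), v) is True
```

Finitely many recursive clauses fix a truth value for each of the infinitely many sentences the grammar can generate, which is the structural point of the T-schema.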

My Fact Map ‘Education’

Critiques of truth theories of meaning

W. V. O. Quine attacked both verificationism and the very notion of meaning in his famous essay, «Two Dogmas of Empiricism». In it, he suggested that meaning was nothing more than a vague and dispensable notion. Instead, he asserted, what was more interesting to study was the synonymy between signs. He also pointed out that verificationism was tied to the distinction between analytic and synthetic statements, and asserted that such a divide was defended ambiguously. He also suggested that the unit of analysis for any potential investigation into the world (and, perhaps, meaning) would be the entire body of statements taken as a collective, not just individual statements on their own.

Other criticisms can be raised on the basis of the limitations that truth-conditional theorists themselves admit to. Tarski, for instance, recognized that truth-conditional theories of meaning only make sense of statements, but fail to explain the meanings of the lexical parts that make up statements. Rather, the meaning of the parts of statements is presupposed by an understanding of the truth-conditions of a whole statement, and explained in terms of what he called «satisfaction conditions».

Still another objection (noted by Frege and others) was that some kinds of statements don’t seem to have any truth-conditions at all. For instance, «Hello!» has no truth-conditions, because it doesn’t even attempt to tell the listener anything about the state of affairs in the world.

Eleanor Rosch and George Lakoff have advanced a theory of «prototypes» which suggests that many lexical categories, at least on the face of things, have «radial structures». That is to say, there are some ideal member(s) in the category that seem to represent the category better than other members. For example, the category of «birds» may feature the robin as the prototype, or the ideal kind of bird. With experience, subjects might come to evaluate membership in the category of «bird» by comparing candidate members to the prototype and evaluating for similarities. So, for example, a penguin or an ostrich would sit at the fringe of the meaning of «bird», because a penguin is unlike a robin.

Intimately related to these researches is the notion of a psychologically basic level, which is both the first level named and understood by children, and «the highest level at which a single mental image can reflect the entire category» (Lakoff 1987:46). The «basic level» of cognition is understood by Lakoff as crucially drawing upon «image-schemas» along with various other cognitive processes.
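The comparison-to-prototype process described above can be sketched as a toy computation. This is a made-up illustration: the feature vectors and the similarity measure are my own assumptions, not Rosch’s or Lakoff’s.

```python
# Hypothetical feature order: [flies, sings, small, has_feathers]
prototype_bird = [1.0, 1.0, 1.0, 1.0]   # robin-like ideal member

def similarity(candidate, prototype):
    # negative Euclidean distance: higher means more prototype-like
    total = sum((c - p) ** 2 for c, p in zip(candidate, prototype))
    return -(total ** 0.5)

robin = [1.0, 1.0, 1.0, 1.0]
penguin = [0.0, 0.0, 0.0, 1.0]

# A robin scores higher as a "bird" than a penguin, which sits at the fringe:
assert similarity(robin, prototype_bird) > similarity(penguin, prototype_bird)
```

Graded membership falls out of the distance: central members score near zero, fringe members score lower, with no sharp category boundary.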

Philosophers Ned Block, Gilbert Harman and Hartry Field, and cognitive scientists G. Miller and P. Johnson-Laird say that the meaning of a term can be found by investigating its role in relation to other concepts and mental states. They endorse a «conceptual role semantics». Those proponents of this view who understand meanings to be exhausted by the content of mental states can be said to endorse «one-factor» accounts of conceptual role semantics and thus to fit within the tradition of idea theories.

😯😇😃

The sort of truth theories presented here can also be attacked for their formalism both in practice and principle. The principle of formalism is challenged by the informalists, who suggest that language is largely a construction of the speaker, and so, not compatible with formalization. The practice of formalism is challenged by those who observe that formal languages (such as present-day quantificational logic) fail to capture the expressive power of natural languages (as is arguably demonstrated in the awkward character of the quantificational explanation of definite description statements, as laid out by Bertrand Russell).

begin program
    write 0 on the first position of the output tape;
    begin loop
        simulate 1 successive step of the given Turing machine on the given input;
        if the Turing machine has halted then
            write 1 on the first position of the output tape and break out of loop;
    end loop
end program
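The pseudocode above is «limit-computable»: its output stabilizes to the halting indicator without the stabilization ever being certified. A runnable Python sketch of mine, with a generator standing in for the simulated Turing machine:

```python
def limit_approximations(machine, max_steps):
    """Yield successive approximations to 'does this machine halt?'."""
    out = 0                      # write 0 on the first position of the output tape
    for _ in range(max_steps):
        try:
            next(machine)        # simulate one successive step
        except StopIteration:    # the machine has halted
            out = 1              # write 1, and it stays 1 forever after
        yield out

def halts_after(k):
    # toy stand-in for a Turing machine that halts after k steps
    for _ in range(k):
        yield

# For a halting machine the approximations converge to 1:
assert list(limit_approximations(halts_after(3), 6)) == [0, 0, 0, 1, 1, 1]
```

For a non-halting machine the stream is 0 forever, so the final answer is never reached in finitely many steps; this is exactly why the halting function is uncomputable yet computable «in the limit».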

Finally, over the past century, forms of logic have been developed that are not dependent exclusively on the notions of truth and falsity. Some of these types of logic have been called modal logics. They explain how certain logical connectives such as «if-then» work in terms of necessity and possibility. Indeed, modal logic was the basis of one of the most popular and rigorous formulations in modern semantics called the Montague grammar. The successes of such systems naturally give rise to the argument that these systems have captured the natural meaning of connectives like if-then far better than an ordinary, truth-functional logic ever could.

X′ = { x | φ_x^X(x) is defined }.

.АХXА.101XXAXX101.АХXА.

1.3.3.1

goto entry_point

CC0
To the extent possible under law, Raman Oshimish-Pershynkau has waived all copyright and related or neighboring rights to 101 integers to boot. This work is published from: Belarus.