Fig 1: Python 3.8 on vscode

Calculator of Tomorrow: Using Arbitrary Precision

Do I exaggerate by calling it “heartbreak” or would “disillusionment” be better?

The latter implies enlightenment of a sort, as one’s illusions fall away. The former suggests one still needs to heal.

I’m talking about that day you discover that computers are using “fuzzy numbers” that don’t give the same results as the manual methods you used in school.

Are computers really all that great if they don’t get the right answers, according to the rules you were taught to follow?

Where does this six come from, on the last line of Figure 1?

IEEE 754, the floating point standard for binary computers, is unable to represent most decimal fractions (0.1, for example) without a repeat-forever pattern, meaning the round trip, from decimal to binary and back, is inevitably lossy.
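Figure 1 is a screenshot, so here’s a stand-in session of my own (not the same example) showing the kind of stray digit in question, followed by the culprit: the binary double nearest to 0.1, written out exactly.

>>> 1.2 - 1.0
0.19999999999999996
>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')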

The perennial issue of “incommensurability” lurks behind this great divide between the ubiquitous floating point standard (IEEE 754) and various binary coded decimal (BCD) libraries.

There’s that age-old tension between infinite precision and the indefinite finitude of Universe. Integrated circuits inhabit the world of finitude, as do any and all discrete phenomena.

“Infinite precision” is for purists who point in the direction of the Mandelbrot zoom-in (taking us to ever higher frequency, like in Powers of 10 by Eames Office), a favorite YouTube genre for a lot of us. They’ll prove to us the complex plane is perfectly continuous.

Pythagoreans were reputedly among the first to suffer from existential vertigo, after nailing “irrationality” with a perfect proof, a reductio ad absurdum showing that the 2nd root of 2 (about 1.41) could never be perfectly expressed as a ratio p/q with p, q both natural numbers.
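A quick way to feel that result for yourself (my own illustration, nothing the Pythagoreans had): march through ever-better fractions for the square root of 2 and notice their squares never land exactly on 2.

from fractions import Fraction

# Continued-fraction convergents of the square root of 2: 1/1, 3/2, 7/5, 17/12, ...
p, q = 1, 1
for _ in range(6):
    approx = Fraction(p, q)
    print(approx, approx**2, approx**2 == 2)   # the square is never exactly 2
    p, q = p + 2*q, p + q                      # next convergent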

When still in high school, I became enamored of both calculators and slide rules. The latter were then going out of style, but it was fun having dad teach me.

Dad was also willing to invest in my professional development and got me an HP-45. I was privileged. Calculators tend to use BCD algorithms. That doesn’t mean they know pi to even a thousand places.

Notice how defensive these Pythonistas get, always changing the subject away from the inherent shortcomings of floating point numbers. Instead, they berate the questioner over the use case, for wanting more exactitude than floating point offers, whether single precision or double.

Excerpt:

Okay, whatever your requirements are, Fredrik is certainly right in that you
don’t know what you’re talking about with respect to floating point arithmetic.
Please read the paper “What Every Computer Scientist Should Know About
Floating-Point Arithmetic”:

http://docs.sun.com/source/806-3568/ncg_goldberg.html

You would also do well to get a book on basic numerical analysis. If you have
any transcendental functions involved (and if you are computing distances
between geographical coordinates, you certainly will), you will encounter
numbers that are irrational; that is, they *cannot* be expressed exactly in any
finite form. Decimal() and GMP are *arbitrary* precision data types, not infinite.

I admit, I am curious now about the application that you think requires these
exact results. What operations are you actually performing? Surely there’s a
square root or trig function in there somewhere.

Mathematica lets you set the precision (the number of significant digits) you need to work with. Python offers similar arbitrary precision powers by means of libraries, including decimal in the Standard Library, and gmpy2, a free third-party offering.

The latter is more sophisticated in offering native trig functions out of the box, although comparable functions can be coded on top of decimal (the module’s documentation includes recipes for exp, cos and sin).
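For example (a small sketch, not a full tour): decimal counts precision in significant digits, while gmpy2 counts it in bits.

from decimal import Decimal, getcontext

getcontext().prec = 60                     # 60 significant digits
print(Decimal(2).sqrt())                   # square root of 2
print(Decimal(1).exp())                    # e; exp, ln and sqrt come built in

import gmpy2                               # third-party: pip or conda install

gmpy2.get_context().precision = 200        # bits, roughly 60 decimal digits
print(gmpy2.sqrt(gmpy2.mpfr(2)))           # square root of 2 again
print(gmpy2.sin(gmpy2.const_pi() / 6))     # native trig: sin(pi/6) is 0.5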

Using the decimal library, we can go:
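(The original snippet is a screenshot; a minimal reconstruction would be something like the following, with the precision setting being my guess at what yields a thousand decimal places.)

from decimal import Decimal, getcontext

getcontext().prec = 1001                   # about 1,000 decimal places
Ф = (1 + Decimal(5).sqrt()) / 2            # golden ratio: (1 + √5) / 2
print(f"Ф={Ф!r}")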

and get:

Ф=Decimal(‘1.61803398874989484820458683436563811772030917980576286213544862270526046281890244970720720418939113748475408807538689175212663386222353693179318006076672635443338908659593958290563832266131992829026788067520876689250171169620703222104321626954862629631361443814975870122034080588795445474924618569536486444924104432077134494704956584678850987433944221254487706647809158846074998871240076521705751797883416625624940758906970400028121042762177111777805315317141011704666599146697987317613560067087480710131795236894275219484353056783002287856997829778347845878228911097625003026961561700250464338243776486102838312683303724292675263116533924731671112115881863851331620384005222165791286675294654906811317159934323597349498509040947621322298101726107059611645629909816290555208524790352406020172799747175342777592778625619432082750513121815628551222480939471234145170223735805772786160086883829523045926’)

Let’s check that against a published source, Nerd Paradise:

Otherwise known as the golden ratio

φ = 1.

6180339887 4989484820 4586834365 6381177203 0917980576 2862135448 6227052604 6281890244 9707207204 1893911374 8475408807 5386891752 1266338622 2353693179 3180060766 7263544333 8908659593 9582905638 3226613199 2829026788 0675208766 8925017116 9620703222 1043216269 5486262963 1361443814 9758701220 3408058879 5445474924 6185695364 8644492410 4432077134 4947049565 8467885098 7433944221 2544877066 4780915884 6074998871 2400765217 0575179788 3416625624 9407589069 7040002812 1042762177 1117778053 1531714101 1704666599 1466979873 1761356006 7087480710 1317952368 9427521948 4353056783 0022878569 9782977834 7845878228 9110976250 0302696156 1700250464 3382437764 8610283831 2683303724 2926752631 1653392473 1671112115 8818638513 3162038400 5222165791 2866752946 5490681131 7159934323 5973494985 0904094762 1322298101 7261070596 1164562990 9816290555 2085247903 5240602017 2799747175 3427775927 7862561943 2082750513 1218156285 5122248093 9471234145 1702237358 0577278616 0086883829 523045926

All right! Now we’re talking. This is more what I’d hope my computer could do for me, leaving that HP-45 in the dust, in terms of significant digits. Hooray!

In my curriculum, we like to check that 24 S-modules, with volumes expressed using Phi, added to an icosahedron with faces flush inside the IVM octahedron, give that octahedron’s volume of 4.

In other words, the empty space between the octahedron and the faces-flush internal icosahedron may be divided into 24 S-modules. That won’t make much sense without a picture.

Fig 2: One of 24 S-modules (12 positive, 12 negative)

Fig 3: Edges of the S-Module (Plane Net)

Now that internal icosahedron, call it “the icosahedron within”, is skew to a cuboctahedron with edges half those of the octahedron. Here’s that picture as well:

Fig 4: icosahedron + cuboctahedron inside a shared octahedron

Above I said “IVM octahedron” meaning its edges are all edges of the “isotropic vector matrix” as we name it in Synergetics. Here you will see more of the Matrix:

Fig 5: Octahedron Shown as 2-Frequency, IVM edges R (1/2 D)

Finally, we need one more module, the E-module, which is 1/120th of a rhombic triacontahedron.

OK, ready for the E-module now?

Fig 6: Rhombic Triacontahedron around IVM Ball

These IVM balls pack together in what’s known as “cubic close packing” (CCP) and connecting neighboring ball centers provides the “vectors” (or edges) of our IVM. If we “shrink wrap” this 30-rhombus faced polyhedron around the unit-radius IVM ball, we get, in outline, the 120 E-modules.

What’s heartbreaking to some of us, a small subculture, is that none of this vocabulary, with the exception of Phi maybe, is introduced in the high schools. All of the above is completely alien to your average twelfth grader perhaps looking to college. And guess what, in college they won’t get it either.

Today’s high schoolers have no idea what the Jitterbug Transformation might be, nor how it takes a cuboctahedron of volume 20 to an icosahedron of volume 20 × (1/sfactor), where sfactor = smod/emod.

Fig 7: Jitterbug Relationship (edge length is constant)

Nor do their teachers share that, starting with the internal cuboctahedron of volume 2.5, two applications of the sfactor (~1.08) will grow it to become the skew icosahedron with faces flush to the same octahedron (as shown above, Figure 4).
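Here’s that arithmetic as a sketch in code; since the module volumes themselves aren’t spelled out above, I’m assuming the usual formulas smod = φ⁻⁵/2 and emod = (√2/8)·φ⁻³.

from decimal import Decimal, getcontext

getcontext().prec = 50

phi     = (1 + Decimal(5).sqrt()) / 2          # golden ratio
smod    = phi**-5 / 2                          # S-module volume, ~0.04508 (assumed formula)
emod    = (Decimal(2).sqrt() / 8) * phi**-3    # E-module volume, ~0.04173 (assumed formula)
sfactor = smod / emod                          # ~1.080363

print(20 / sfactor)                            # Jitterbug icosahedron: ~18.512
print(Decimal('2.5') * sfactor**2)             # the icosahedron within: ~2.918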

Ergo:

2.5 × sfactor² + 24 × smod = 4

is our expected identity. We check it out below, and give the answer for the left-hand expression:

Fig 8: checking results
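Fig 8 is a screenshot; here is a sketch of a check() in the same spirit (not the actual checkmate.py), again using the assumed module-volume formulas from above:

from decimal import Decimal, getcontext

def check(places=1000):
    """Icosahedron within plus 24 S-modules should equal the octahedron's volume of 4."""
    getcontext().prec = places
    phi     = (1 + Decimal(5).sqrt()) / 2
    smod    = phi**-5 / 2                          # S-module volume (assumed formula)
    emod    = (Decimal(2).sqrt() / 8) * phi**-3    # E-module volume (assumed formula)
    sfactor = smod / emod
    icosa_within = Decimal('2.5') * sfactor**2     # two applications of the S factor
    return icosa_within + 24 * smod

print(check())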

What does check() return? How close is the answer to the expected 4.0?

(py38) Kirbys-MacBook-Pro:Session_01 mac$ /Users/mac/anaconda3/envs/py38/bin/python /Users/mac/Documents/pyt-pr/Session_01/checkmate.py
4.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005

Yes, there’s still a bit of trailing imprecision. Yes, we could alter the context to ask for even more digits of precision.

However, we’re already feeling sufficiently powerful to express satisfaction with our tools, if not with the mainstream curriculum.

Related lecture on YouTube (explores this same content, referring to this story)
