I’m currently on 8 drafts. Some of them are nearly empty, others have a fair bit in them and have been sitting around for months. It’s so frustrating. I feel like I should at least say something about them so it’s not all for nothing. I’ll do them from oldest to newest, all current working titles. In fact, this can double as a plan / notes, so I will add to these as ideas come forth.
Completing the ad-hoc object system
This is supposed to be the next installment in my exploration / critique of compile-time “static” concepts in programming. It is about inheritance, delegation, prototypes, parents, and stuff like that. Unfortunately I really can’t figure out what I want to say about this, partly because it hasn’t held my interest in a long time. Ideally I’d like to cover:
- Lieberman’s comparison of delegation vs inheritance
- The Universal Design Pattern
- General idea of saying something is “like” something else
- Describing a concept by a “delta” or “small difference” from another, known concept; relationship to the power of calculus?
- Link to types-as-taxonomies i.e. Only One Type Per Object, crossover with the Types series
- Concrete implementations in different languages; classes, typeclasses, traits, prototypes, vtables…
- Connection with information compression and knowledge representation
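To make the delegation-vs-inheritance distinction concrete, here is a minimal sketch of delegation in Python. The names (`Proto`, the point/turtle pairing) are hypothetical illustrations of Lieberman’s idea, not code from any of these drafts; Python is class-based, so this only approximates a prototype system, but it shows the key property: the child describes itself as a small “delta” from a live parent, and lookup happens at message-send time.

```python
# A minimal sketch of Lieberman-style delegation (hypothetical names).
# A Proto holds its own slots and forwards unknown lookups to a parent.

class Proto:
    """An object that delegates unknown attribute lookups to a parent."""
    def __init__(self, parent=None, **slots):
        # Bypass our own __setattr__ while bootstrapping.
        object.__setattr__(self, "_parent", parent)
        object.__setattr__(self, "_slots", dict(slots))

    def __getattr__(self, name):
        # Look in our own slots first...
        slots = object.__getattribute__(self, "_slots")
        if name in slots:
            return slots[name]
        # ...then delegate to the parent at lookup time (not copy time).
        parent = object.__getattribute__(self, "_parent")
        if parent is not None:
            return getattr(parent, name)
        raise AttributeError(name)

    def __setattr__(self, name, value):
        object.__getattribute__(self, "_slots")[name] = value

# A "turtle" is "like" a "point", differing only by a small delta.
point = Proto(x=0, y=0)
turtle = Proto(parent=point, heading=90)

print(turtle.x)        # 0 — found via delegation to point
print(turtle.heading)  # 90 — found locally
point.x = 5
print(turtle.x)        # 5 — the link is live, not a snapshot
```

Contrast with class inheritance, where the relationship between parent and child is fixed structure rather than a runtime forwarding of messages.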
The inevitability of late-binding
Something I consider immensely important for people to understand. I want to expose the pernicious influence of early-binding mentality and its role in perpetuating inflexible, brittle and misanthropic software systems, and make a convincing case for maximum self-modifiability all the way down. This will likely be a series, and will involve:
- Demonstrating how the static / dynamic and compile-time / runtime distinctions are artificial and counterproductive
- Looking at “granularity” and scale
- The influence of Unix mentality on maintaining all this (files, text, processes, applications, editors)
- How the benefits of “static” optimisation and analysis can easily continue once the boundaries of “static” and “dynamic” are dissolved, and may even be enhanced.
- A discussion of self-reference, dynamism and self-modifiability, and associated fears: complexity, security, suitability for automatic analysis and transformation, etc.
- Links to the Basic Principle of Recursive Design (see below)
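As a tiny taste of what the series will defend, here is a hypothetical Python example of late binding in action: the method actually run is looked up at each message send rather than frozen in at compile time, so behaviour can be changed while objects are live and every existing reference picks up the change.

```python
# Late binding, minimally: lookup happens at the call, not at compile time.

class Logger:
    def write(self, msg):
        return f"plain: {msg}"

log = Logger()
print(log.write("hello"))   # prints "plain: hello"

# The system is live: we can redefine behaviour while `log` exists.
def write_with_timestamp(self, msg):
    return f"[t] {msg}"

Logger.write = write_with_timestamp
print(log.write("hello"))   # prints "[t] hello" — same object, new behaviour
```

In an early-bound system the first definition would have been compiled into every call site, and changing it would mean stopping the world, rebuilding, and restarting.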
Reverse-engineering the Quaternions
At the end of my introduction to the Quaternions series, I said I wanted to motivate a coherent story. Well, I bit off more than I could chew, and I still can’t quite do that. But even this is a giant leap past what I actually did in the first place, which was largely about starting from the quaternions and Geometric Algebra, and reverse-engineering them. So that is where I should at least start my documentation. Some points:
- “position”, “translation” as somewhat alien, non-linear concepts. Vectors as primarily about direction and scale.
- Importance of parallel / perpendicular; relationship to coordinate-ful vs. coordinate-free
- Linearity and bases
- Language / grammar of rotations
- Grrrr, stuff. dunno
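One concrete artifact the “grammar of rotations” bullet could hang off: the standard sandwich product, where a rotation is a unit quaternion q applied to a vector v as q·v·conj(q). This is the textbook construction, not anything specific to the series’ reverse-engineering; helper names below are my own.

```python
# Rotating a vector with a quaternion via the sandwich product q * v * conj(q).

import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(v, q):
    """Rotate 3-vector v by unit quaternion q."""
    _, x, y, z = qmul(qmul(q, (0.0, *v)), qconj(q))
    return (x, y, z)

# 90 degrees about the z-axis. Note the half-angle — one of the oddities
# a reverse-engineering story has to explain rather than just assert.
theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

print(rotate((1.0, 0.0, 0.0), q))   # approximately (0, 1, 0)
```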
Everything that’s wrong with calculus in a single equation
Introduction to my cleaning-up of differential and integral calculus with sane notation, to give a crystal-clear actual understanding of what’s going on. I have already worked out most of the ideas but, as ever, I want to present them as more than just a stream-of-consciousness, so I’ve got stuck. Among the things to look forward to is a simple derivation (no pun intended) of the integral u-substitution rule! (Albeit reverse-engineered, so it only makes retrospective sense. But our standard notation does not let us do even this.)
- Relates to vectors-are-functions and indices
- Horizontally flipping the notation for function application and composition, to cut out a lot of counter-intuitive BS (OK, maybe not for Hebrew or Arabic writers)
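For reference, this is the conventional chain-rule derivation of u-substitution, in the standard notation the post sets out to replace (so this is the baseline, not the promised cleaned-up version):

```latex
% Suppose F' = f and let u = g(x). By the chain rule,
\frac{d}{dx}\, F(g(x)) = f(g(x))\, g'(x),
% so antidifferentiating both sides gives the substitution rule:
\int f(g(x))\, g'(x)\, dx = F(g(x)) + C = \left[\int f(u)\, du\right]_{u = g(x)}.
% The "reverse-engineered" flavour: the rule only applies once you have
% already recognised the integrand as having the shape f(g(x)) g'(x).
```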
On the Notion of ‘Configuration’
I suspect that “config” files, in all their maddening bajillion different text file formats, and preferences / options GUI dialogs, are a clear symptom of something deeply wrong with software development. But I need to work it out.
Separation of Concerns
How much of this hallowed principle simply comes from the fact that we create software by writing on simulated paper? It comes from the opposing forces of (a) the tendency of adjacent lines of code to also execute close to one another in time, and (b) our desire to have code related by domain / topic, rather than time, to be in the same place. Case study: HTML and CSS as “content” vs “presentation”. (cough, bullshit)
This is, quite clearly, a visualisation problem. And because paper is static, it makes sense to use wholly separate pieces of paper rather than interleaving all “concerns” on one piece of paper. But we are under no obligation to use the computer to simulate the shortcomings of paper!!
Composability and Compositionality
Related to the Basic Principle of Recursive Design. May end up merging.
- Composability = how to combine things together and have them still work. Dead clockwork vs. multicellular life.
- Compositionality = favourite sacred cow of the Functional Programming community, i.e. if you understand the parts you understand the whole.
- But if you try to work out what this actually means, it either evaporates into tautology, or leads to C and Assembly having this property…!
The Basic Principle of Recursive Design
… is to give the parts the same power as the whole. A discussion of this concept in relation to software scaling, and Alan Kay’s conception of objects-as-networked-computers. Two directions: start with the parts, and make the whole no more powerful; or, start with the whole, and make the parts just as powerful. Or just keep the parts weaker than the whole, and prevent scaling. Alan Kay’s “architecture dominates materials” ideas: pyramids vs cathedrals, trying to scale a doghouse, etc.
Software as a giant clusterfuck of clockwork vs software as a living, fault-tolerant organism of encapsulated units. Most importantly: the idea that we should adopt the latter model at many scales within the “computer” level. The ladder, from smallest to largest, would roughly go like this:
- Procedural programming, algorithms and data structures; no objects. Individual machine-like entities that cannot be broken down further, and cannot be understood except in their entirety. Extreme early binding and optimization. Absolute time, absolute and global knowledge. Emphasis on shared memory and random-access machines.
- Objects as “abstract data types”, barely more than syntactic sugar for procedural programming, e.g. Java, C++, C#, even C. Often unwittingly referred to as “Object-Orientated Programming” — a name it deserves to keep. Emphasis on classes.
- Objects as fully encapsulated, distributed, autonomous “cells” or computers, whether simulated (intra-computer) or literal bona-fide physical machines (inter-computer). Full fault-tolerance, growability, scalability, etc. Everything is relative, CAP theorem, only local knowledge is accurate. No bind operation, just message sends. Messages are “events that happened”. Object-Oriented (not orientated) Programming. Emphasis on messaging.
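The top rung of the ladder can be caricatured in a few lines of Python (a toy of my own devising, not anything from the drafts): a “cell” whose state is private and whose only interface is an inbox of messages, which it processes on its own schedule.

```python
# A toy "cell": no shared state, no bind operation, only message sends.

from collections import deque

class Cell:
    """A fully encapsulated object: the only way in is the inbox."""
    def __init__(self):
        self.inbox = deque()
        self._count = 0          # local knowledge only
        self.log = []

    def send(self, message, payload=None):
        # "Messages are events that happened": record the event...
        self.inbox.append((message, payload))

    def step(self):
        # ...and the cell decides, on its own schedule, how to react.
        while self.inbox:
            message, payload = self.inbox.popleft()
            if message == "increment":
                self._count += payload
            elif message == "report":
                self.log.append(self._count)
            # Unknown messages are simply ignored (or could be delegated).

cell = Cell()
cell.send("increment", 3)
cell.send("increment", 4)
cell.send("report")
cell.step()
print(cell.log)   # prints [7]
```

The point of the caricature: a caller never reaches into `_count` directly, and the decoupling of send from reaction is exactly what lets the model stretch from simulated intra-computer objects to literal networked machines.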
Stuff that isn’t even a draft yet
So much more, could last me years and years.