Language isn’t everything

Over the past year, I’ve been completely changed by the knowledge of an entire untold history of things we take for granted—personal computing, object-oriented programming, the GUI.

For the vast majority of my coding lifetime, I had considered the act of constructing software as identical to describing it, using some appropriate or inappropriate language. Anything beyond that, such as ‘visual’ tools, was a mere crutch for those who do not know how to Program. Such tools may be nice and pretty, but they hide what’s really going on—with no prospect of opening up the friendly abstractions they offer.

This reached its culmination last summer, when I got interested in User Interface Management Systems, and the idea of GUI interaction as a formal language (the Seeheim model). Previously I had used cumbersome general-purpose languages to code the event-handling logic of GUIs; now, I searched for and tried to construct custom languages; it was only a matter of finding the correct one. Language comes first; everything is describable, and everything in a computer, including GUIs, boils down to machine language in the end, so that is that.

Despite this, cracks did show, and I had allowed myself to dwell on one of them. Even if interaction is “essentially” linguistic (as I saw it), it is quite paradoxical to insist on describing, using words, the visual qualities of shapes, colours, positions, and so on. Especially since a picture paints a thousand words!

In fact, typing out pixel coordinates of vertices and adjusting them prior to recompilation is not a mere paradox—it is perverse. This forced me to consider visual editing appropriate for visual data, even if everything else is code.

I started applying this to my central motivation, games. Normally, to specify the content of a level, I’d write numbers in a text file and parse this in the game code to build Java objects or whatever. The “level editor” was just my text editor. But it also meant I could not see what it would look like, or even whether it would parse properly, so I had to use my imagination to predict the results. This did not strike me as particularly odd; I’d seen this done a lot, and besides, it’s good to support text editing; it’s how everything else is done, after all.
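The workflow I mean looked roughly like this (a sketch in Python rather than Java, with an invented level format and field names):

```python
from dataclasses import dataclass

@dataclass
class Platform:
    x: int
    y: int
    width: int

def parse_level(text):
    """Parse a plain-text level file: one platform per line, numbers
    separated by spaces, '#' lines as comments."""
    platforms = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # blanks and comments carry no level data
        x, y, w = (int(n) for n in line.split())
        platforms.append(Platform(x, y, w))
    return platforms

level = parse_level("""
# x y width
0 300 200
250 260 80
""")
print(level[0])   # Platform(x=0, y=300, width=200)
```

The "level editor" is whatever text editor you open the file in—and nothing tells you what the level looks like, or even whether it will parse, until you run the game.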

I now conceded that the idea of a level-editing tool really was a better way to do this, no matter how good one is at coding. But then, a crucial thought: why should such a tool be separate from the game itself? In fact, why not have an “edit mode”, that you can dip in and out of at will—play, pause, edit, resume? This gave rise to some headaches: how to treat editing and ordinary interaction in a semantically appropriate way; how to reify object properties to make them editable; how far to bother with all this before I am no longer really using Java. There were hints of treating everything uniformly as stuff that can be edited and interacted with.
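One possible shape for that edit mode (a Python sketch, with invented names): the same loop drives both playing and editing, and object properties are reified as plain data that the editor can change in place while the game is live.

```python
class Level:
    """Level objects as plain, editable data rather than opaque code."""
    def __init__(self):
        self.objects = [{'x': 0, 'y': 300, 'width': 200}]

class Game:
    def __init__(self, level):
        self.level = level
        self.editing = False

    def handle(self, event, *args):
        if event == 'toggle-edit':
            # play, pause, edit, resume: one flag routes all input
            self.editing = not self.editing
        elif self.editing and event == 'drag':
            # direct manipulation: the editor mutates the live level data
            index, dx, dy = args
            obj = self.level.objects[index]
            obj['x'] += dx
            obj['y'] += dy
        # (in play mode, events would go to gameplay logic instead)

game = Game(Level())
game.handle('toggle-edit')
game.handle('drag', 0, 10, -5)
print(game.level.objects[0])   # {'x': 10, 'y': 295, 'width': 200}
```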

I was very muddled and unclear and dreaded implementing it. I was also clinging to the interaction-as-language thread, which was not as tangled. And I think this was the point where I finally spent time discovering the work of Alan Kay.

I’d heard of the guy before, but didn’t know anything about him. He designed a system called FLEX for his PhD, which came up as I was researching interaction grammars. I think it used them. Anyway, his story was fascinating and somewhat sad. He was originally inspired by the scaling potential of biological cells: small, relatively unintelligent components that can comprise fully functioning organisms a trillion times their size. This was his main motivation for OOP, a very different conception to how it turned out in the mainstream.

He makes reference to many research directions and people that I’d never heard of. It has been both enlightening and disheartening to discover the incredible things that researchers did decades ago, that have been ignored, forgotten or poorly re-invented by programmers and computer scientists since.

I stumbled across Alan Kay and his colleagues’ work at Viewpoints as if by destiny. I easily recognised their concepts of introspective, dynamic, easily self-modifiable systems as the logical conclusion—many steps ahead but in the same direction—of what I had been trying to do with my games and level editors. And that was a really good feeling, because it told me that I wasn’t crazy to have insisted on the experience of software development being different; I no longer had to fear it being a dead end that I’d have to abandon sooner or later. I could leap ahead to build on what better minds had already done.

But even with the fresh new attitude of VPRI, and some emphasis on direct-manipulation interfaces, I can’t shake the fact that much of what they do is very language-focused. Don’t get me wrong; while I can’t blame somebody who brushes off Open Reusable Object Models, or Combined Object Lambda Architectures, because of those ill-fated terms ‘object model’ and ‘architecture’, I would try to get across that there is something genuinely novel about the ideas presented inside these papers, miles ahead of whatever the terminology might evoke. And there really isn’t anything wrong with them at their core.

It’s just that, earlier this year, I also came across Bret Victor, who showed me how static-text interfaces limit our ability to create, and did some pretty incredible demos to emphasise how much direct manipulation could help us in constructing software.

The COLA platform is supposed to be bootstrapped by way of source files, translators, and languages. Piumarta and Warth did some seriously cool recursive knot-tying in the Object Models paper, developing a minimal, pervasively late-bound system which can be used to develop itself. But it’s all done using language, and I wonder if there is a more interesting question of how to bootstrap the same thing visually rather than linguistically. This forms the basis of a new project I’ll be writing about.
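To give a flavour of the knot-tying (this is my own toy sketch in Python, not Piumarta and Warth’s actual scheme): every object defers its behaviour to a vtable object, method lookup is itself just a message send, and the root vtable is its own vtable.

```python
class Obj:
    """An object is nothing but a reference to a vtable plus some slots."""
    def __init__(self, vtable):
        self.vtable = vtable
        self.slots = {}

def bind(receiver, selector):
    """Find the method for `selector` by *sending* 'lookup' to the
    receiver's vtable: lookup itself is late-bound."""
    vt = receiver.vtable
    if selector == 'lookup' and vt is vt.vtable:
        # bootstrap case: looking up 'lookup' in the root vtable must
        # terminate, so we answer it directly: this ties the knot
        return vt.slots['lookup']
    return send(vt, 'lookup', selector)

def send(receiver, selector, *args):
    return bind(receiver, selector)(receiver, *args)

# the self-describing root vtable: its own vtable
vtable_vt = Obj.__new__(Obj)
vtable_vt.vtable = vtable_vt
vtable_vt.slots = {'lookup': lambda self, sel: self.slots[sel]}

# an ordinary object whose behaviour lives in a vtable, bound at send time
point_vt = Obj(vtable_vt)
point_vt.slots['magnitude'] = (
    lambda self: (self.slots['x'] ** 2 + self.slots['y'] ** 2) ** 0.5)

p = Obj(point_vt)
p.slots.update(x=3, y=4)
print(send(p, 'magnitude'))   # 5.0
```

Because lookup goes through `send`, a running system can replace its own dispatch behaviour—which is the property that lets such a system develop itself.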

I used to consider language as fundamental, and everything else reducible to it. Despite this being technically true in the case of computing, it also completely misses the point. This is so hard to see, because language is ubiquitous in software construction. It is all around us. I would dare to say that it is the only way the effective totality of software is developed, and has been since computing began.

But nowhere else in life is this true. When human beings construct anything else, it is often a combination of direct manipulation, drawing diagrams, and teaching by demonstration, with language as one of several tools. Even in mathematics, its formulation in language and symbols is often the final stage of a thought process involving imagery and demonstration.

So no; I reject the original viewpoint that got me started on this circular path. GameMaker got it right. Scratch got it right. Even rubbish ‘kiddie’ tools that don’t offer any language-based coding whatsoever fail to get it any more wrong than the flipside of insisting exclusively on descriptions.

But I should not be too hard on myself. It was not my fault that so much of programming consists of specifying dynamic, emergent behaviour ahead of time in a static, eternal form such as text. How could I be expected to think any differently? In fact—how did I get to where I currently stand??

It’s always easy to rationalise the status quo, it’s “just the way things are”. Everything currently works a similar way, why not be compatible with that? Everyone is guaranteed to have primitive text-based tools; if I make my game levels text files, then I can edit them using SSH on the other side of the world! I can use Git to version them without feeling guilty! They’ll be compatible with Unix tools! That’s cool, huh?

I refused to rationalise the complexity of my game programming experiments. Instead of reluctantly hacking it together in C++ or Java or C or Lua or Java again, I said: No! There has to be an easier way! I traded several tangled, inelegant, unmaintainable messes of “project” for a journey that’s shown me why they were doomed in the first place. Not necessarily for others who could try the same thing, but why they were doomed for me (if that makes sense). If you recognise any of this in yourself, you’re not alone.

Programming is complicated. We do scatter printf()s across our code. Software doesn’t scale easily. The main code, the graphics libraries, and the system itself are guaranteed to each define their own custom yet functionally identical versions of Vector, Color, Shape, etc. We do have a gazillion hot new Web frameworks every goddamn month to, I don’t know, circumvent the DOM to make GUIs, or make CSS actually usable.

But it doesn’t have to be this way. There is no reason why any of the problems facing computing, from software scaling to Windows 10 freezing if I don’t shut it down every night (indeed, it did so just as I finished writing that down, no kidding), is an inevitable consequence of the nature of computer systems or of software engineering, despite what we may be led to believe. Anyone who suggests they are is forgetting that computing is still less than a century old, and that alone is ample reason to suspect that only a small fraction of what is possible has been covered so far.

But we won’t make any progress if we remain stuck in the past. And the past, not the present, is what an embarrassing majority of our fundamental tools and ideas preserve as state-of-the-art. And over-emphasis on language is but one facet.

6 thoughts on “Language isn’t everything”

  1. to start abstract: if you really think of language as communication (communicating ideas to the machine) then “language” includes visual elements.

    for example, have you ever talked to a bee? probably not. but its yellow stripes are a clear warning– they convey a threat of some kind.

    there are times when pointing and clicking is absolutely the best thing– its more precise than tapping with a finger or stylus, (which are better for drawing, usually) and its quicker than “describing” an area on the screen.

    dont forget that todays guis and graphical languages are still very crude and clumsy. i would be happy to see people develop them further.

    when i designed fig, i wanted to make *writing* code as easy as possible to learn. i took cues from visual languages, but fig is coded as words (and quotes and hashes.) i still like writing code. but i got my first start with pcs drawing with the mouse. i like the mouse/trackpad, when its the right tool for the job. and i really, really like having a way to get at interfaces using text. to me, that comes first. first, but not only.


    1. (Thanks for the video, made me laugh. The link you sent me is region-blocked for me, though)

      It’s true that visuals are just as much for communication as language, so they can be construed as ‘visual’ language as well — but in a different sense. One difference is that visual shapes, colours and patterns are far more universally recognisable than text or speech. The yellow-black stripes of a bee scream “danger!” to everybody and even some animals, whereas a sign that says “danger!” only makes sense to people who know English.

      The vast majority of programming languages use English words; should this be of concern? I do not know. I have heard arguments for and against. One interesting benefit of programming in English without speaking it is that, if you don’t know what ‘class’ means, then you will not be misled by connotations of the word’s ordinary use. And so on for other concepts. But I have no idea whether this makes a difference in the end.

      Alan Kay suggests that language is one reason OOP ended up overemphasising the “object” over the “messaging”. The interesting part isn’t the cells themselves; it’s how they organise and communicate, without poking around with each others’ internals, to somehow form stable, intelligent structures. But do we have a word for this “in-between-ness” in Western languages? Probably the best approximation is: ‘interstitial’. By contrast, the Japanese have the short, common word 間, ‘ma’, for ‘space’ or “in-between-ness” directly. Maybe they would instantly recognise the concept if presented to them, and invent Messaging-Oriented Programming.

      Of course, it’s not exactly a rigorous explanation. But it’s at least an illustration of things being limited by what happens to be easy and not so easy to express in the language you happen to be using. I’d just also invite people to consider the ways that words and symbols in any language, arranged in a single long line that wraps into a 2D shape, might also be limiting.

      Editors can and should help with this, e.g. showing what you are interested in and hiding extraneous information, or finding what you’re looking for without you having to name it, but they’re not there yet. And the fact that it’s all text, and we have to decide how to artificially fit conceptual boundaries into file boundaries, I think, holds it back.

      It’s good that you’re helping the process of learning to write code, and I gather that you do so because you want to improve computer literacy. Since so much of software is constructed and maintained by writing code, it makes perfect sense to work on that.

      I suppose what I’m doing is rebelling against my own reductionism of everything to language. But several things fed into that; most important was how hard it was to imagine any other ways of doing software, because text is everywhere and, as you say, the attempts to do it graphically have been crude and clumsy. But I’ve been convinced, mainly through Bret Victor’s demos and Alan Kay’s writings, that there’s a lot more to explore. Have a play with the Apparatus editor to get a feel for one way it could work.

      The other factor is that the workhorse of the computer, the calculator at its core known as the processor, operates on lists of commands that could reasonably be called a primitive language. But just because the *instructions* have to be described doesn’t mean that the structures in memory necessarily have to, at least once we’ve built up enough infrastructure and tools to visualise them.
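      A toy illustration of that point (the three-instruction language here is invented): the processor consumes a list of commands, but the structures it builds in memory are just data, which could equally well be inspected or edited visually rather than described.

```python
# a made-up instruction set: the *program* is linguistic...
memory = {}

def run(program):
    for op, *args in program:
        if op == 'store':
            addr, value = args
            memory[addr] = value
        elif op == 'add':
            dst, a, b = args
            memory[dst] = memory[a] + memory[b]

run([('store', 'a', 2),
     ('store', 'b', 3),
     ('add', 'c', 'a', 'b')])

# ...but the resulting memory is plain structure, not language
print(memory['c'])   # 5
```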

      Of course, everything *can* be described by words; it’s verbose to describe a picture, but it can be done. But it shouldn’t be considered normal to have to do that, and there is huge potential to automatically work with things other than pictures in representations other than lines of code.

      Your comment’s appreciated, and I apologise that mine turned into such a deluge. I’ll watch myself next time 😛


      1. i dont mind long comments. it means youre probably excited about this. people who are excited about thinking about computer languages in new ways are the reason i started my blog. (well, that and computer literacy, as you guessed.)

        alan kay is awesome. hes on the board at olpc, and i spent 5 years looking for “the basic of the 21st century”– something that was modern and basic-like and had enough similarities (its also ubiquitous. new versions of basic exist, but python mostly fills the niche they filled.)

        alan kay has something to do with me learning python– id dismissed it (whitespace? i came around the second time.)

        i was interested in olpc, and particularly the educational software, the sugar platform. through sugar i found pippy, the closest thing to the famous qbasic ide (for young adults, anyway) and it was pippy, nothing else that really got me into python.

        i couldve implemented fig in basic but it wouldnt be the same at all. and ive used python for almost 10 years now. (i used basic for most of 25.) fig looks like an olpc/sugar activity for sure. also the first drag/drop language i liked (turtle-art, now called turtle-blocks perhaps) was on sugar.

        two things are very irksome when using such languages, apart from wanting to type and having to go to a lot more trouble to select part of it to cut and paste.

        one is that so much less typically fits on the screen. i like to be able to look at the program, and doing this on a visual language is like looking at clutter. the product of all that “ease” is difficulty. but i covered this when i said its still crude and clumsy. if these design points are considered and addressed it could give someone a brilliant idea for innovating drag/drop language. i even have some ideas about that based on fig (which is a very tactile-friendly language– parentheses make visual languages bulkier. you can quote that if you like.)

        the other thing to think about when trying something like this is:

        proficiency with the mouse or trackpad comes with practice. people start out pretty clumsy. touchscreens are beginner-friendly by comparison, but less accurate.

        if someone is not very proficient with the mouse, then either the controls will have to take up more space, or the user will be slower to accomplish anything, or both, regardless of control size.

        what i want to do is actually help people design languages, so there are far more people thinking about these things from far more angles.

        these drag/drop languages all come from logo. i wish more people were inspired by logo and macro languages, as they have a powerful and elegant simplicity. certainly we need “complex” languages too, but we dont need to get all our inspiration from them.


      2. one other thing, regarding english as the language of coding:

        it works well enough. i made a portuguese-based version of fig called figueira for a friend of mine who i was teaching coding to. she said that it was easier for her, as english was something she was still learning.

        for now, im of the opinion that while it isnt worth localising big languages like python, (far too costly both in the production and the use of such a language) that for very small, introductory languages (like fig or librelogo, which localises its keywords as well) its worth exploring. as for youtube– i hate them. i deleted my account when they started region-blocking. they dont even tell you which people will be able to see it– why bother, when they get a free link no one can use?


      3. I’ve heard of OLPC, but I didn’t know about Sugar.

        Environments aimed at children’s learning are good because designers know it’s a lost cause to just sit a child down with a shell prompt and say type ‘vi’ … so it forces a certain amount of human-friendliness. On the other hand, because they’re geared at children, they have to be simplified a lot, dare I say ‘dumbed down’. Then it looks like visual stuff is just a crutch that you have to grow out of, instead of a good style to develop into an advanced tool, because of course it had to be adapted to its target audience. Alan Kay says that much of the GUI, as it is at the moment, was originally developed for children, and so everybody’s riding a bike with training wheels on it, but thinks that the training wheels are just part of the bike… :S

        You raised good points about visual dev.

        First, wanting to type: personally, I think we should keep typing, but have parsing etc. happen on the fly. I’m thinking of a LaTeX or maybe Markdown WYSIWYG editor I once used, where you type the syntax for bold and when it recognises this it simply turns the word bold. Press backspace and you can edit the syntax again, etc. But take this further somehow, e.g. type out a data structure definition and then this is reified into a visual spec or example of the data structure that you can reposition and play with, instead of it remaining as text only.
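        A rough sketch of that reification step (the bold syntax and token shapes here are my own invention): completed syntax becomes a styled object, while unfinished syntax stays as editable raw text.

```python
import re

def reify(buffer):
    """Turn completed **bold** spans into styled tokens; leave anything
    incomplete as plain, still-editable text."""
    tokens, pos = [], 0
    for m in re.finditer(r'\*\*(.+?)\*\*', buffer):
        if m.start() > pos:
            tokens.append(('text', buffer[pos:m.start()]))
        tokens.append(('bold', m.group(1)))   # completed syntax is reified
        pos = m.end()
    if pos < len(buffer):
        tokens.append(('text', buffer[pos:]))  # trailing, possibly unfinished
    return tokens

print(reify('make **this** bold and **this'))
# [('text', 'make '), ('bold', 'this'), ('text', ' bold and **this')]
```

A real editor would run this on every keystroke, and pressing backspace on a `('bold', …)` token would dissolve it back into its raw syntax.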

        Selection, cut+paste: well, textual selection is quite specific; we have multiple selection (shift click), selection via rectangular box, select via search query etc. in different interfaces. Cut+paste can apply to text or visual objects; delete, duplicate, etc. I suppose I’m answering a question about the current status quo with speculation about future progress.

        Screen space? One thing I notice about writing code, at a pleasant font size, is that the text is dense only up to about 1/3 of the screen width from the left-hand side. This makes multiple views, such as split panes, necessary to avoid wasting the screen space. But you’re right, layout becomes much more of an issue with visual forms of arbitrary shapes and sizes.

        What do you mean by “I like to be able to look at the program”? I agree with Bret Victor that it’s more important to visualise the data structures, and sample runs, rather than the decision paths in the code in full generality. In fact, language and text might even be best for representing commands, instructions, decisions. But if both code and data were made visual, you would be able to look at whichever bits of the program you were concerned with at the moment.

        I think clutter might be due to excessive padding and decoration of visual elements; it all adds up with lots of objects. If we say, “look, it doesn’t need to have fancy 3d shading, we might not even need drop shadows, we just need clean lines to act as separators” then maybe a lot of the clutter would vanish? 🙂

        People need to learn proficiency with the mouse, yes, but so too with all technology, including keyboards and typing. It seems to me like the mouse is easier to get used to than typing, despite being far more limited in what it can do (two or three buttons plus positional data, compared with all the keys on the keyboard).

        The touchscreen needn’t be subject to the inaccuracies of our fingertips. It’s a shame that pens or styluses aren’t more common, because who draws or writes regularly in real life with the end of their finger?

        On the topic of designing languages, have you seen Alex Warth’s OMeta, and its successor Ohm? It was developed to surmount the many artificial barriers that ordinary coders face when wanting to experiment with their own languages, e.g. forcing the language spec to be a CFG, even though it’s 99% certain that the language itself won’t be context-free. Also, it permits left-recursion, and avoids the issue of ambiguity entirely by using ordered choice.
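        Ordered choice is easy to demonstrate with a couple of toy parser combinators (a Python sketch of the general PEG idea, not Ohm’s actual API): alternatives are tried in order and the first success wins, so the grammar cannot be ambiguous by construction.

```python
def lit(s):
    """Match a literal string; return (matched, rest) or None."""
    return lambda inp: (s, inp[len(s):]) if inp.startswith(s) else None

def choice(*parsers):
    """PEG-style ordered choice: try alternatives in order, first win ends it."""
    def parse(inp):
        for p in parsers:
            r = p(inp)
            if r is not None:
                return r     # no backtracking into later alternatives
        return None
    return parse

# 'if' is listed first, so it shadows the shorter 'i': no ambiguity to resolve
keyword = choice(lit('if'), lit('i'))
print(keyword('if x'))   # ('if', ' x'), never ('i', 'f x')
```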

        I agree, simplicity is great. What are ‘macro languages’? e.g. LISP? LISP seems to be regarded as the epitome of powerful, elegant simplicity. It’s just that it does not have concrete syntax, in a sense, which for me kind of defeats the purpose of a language. Everything is an abstract syntax tree, expressions are prefix-notated, not resembling natural language very much at all. But past that, it looks really cool.

        I suspect that a lot of our conceptions of ‘simple’ and ‘complex’ languages come from their syntax. It’s true for me, anyway. After learning Haskell, I never wanted to write `f(x,y,z,w)` again. Because `f x y z w` looks cleaner. And Smalltalk looks like: `rect draw on: canvas in: blue with: border`, closely mirroring natural language. But this is partly because I have to type all the symbols. We could still type `f x y z w`, and have it display as `f(x,y,z,w)`.
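        That display-layer idea can be sketched in a few lines of Python (a toy that handles only flat, whitespace-separated identifiers, with no nesting):

```python
def display_call(src):
    """Render Haskell-style application in C-style notation:
    'f x y z w' -> 'f(x, y, z, w)'. The stored form stays as typed;
    only the presentation changes."""
    head, *args = src.split()
    return f"{head}({', '.join(args)})" if args else head

print(display_call('f x y z w'))   # f(x, y, z, w)
```

The point is that what you type and what you look at need not be the same string: the terse form is for fingers, the decorated form for eyes.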

        Another simplicity of Logo, and fig, is that graphics is assumed. You don’t have to mess around deciding whether to use OpenGL or SFML or whatever, you don’t have to create a context or a window or compile any shaders. You can just draw. But most languages can’t assume this, so they make you go through all that.

        Yeah, YouTube bugs me a lot. Especially since they consider listening to music in the background on my computer as just fine, but as soon as I want to do so on my mobile, or have the screen off to save power: nope, sorry. This is deliberate, as one of the perks of a YouTube Red subscription is bypassing this artificial restriction; any apps that try to circumvent it get pulled from the Play store. One of my dreams involves subverting this one day, but I’ll have to learn Android dev.


      4. “have you seen Alex Warth’s OMeta,”

        its a parser generator, requiring parser generator grammars. id like to make a parser generator (im not that good…) that just asks you questions or lets you change default values, a parser generator that is to language creation what basic was to teaching computers in the 1960s.

        “It’s a shame that pens or styluses aren’t more common”

        despite the fact that they have precision at the tip, they are one more physical implement that can introduce a feeling of awkwardness to the user. if theres one and only one major selling point for users / the masses with computing, the point is “make me feel like i know what im doing.” its psychology as much as good design. i have mixed feelings about that. i dont want to just give someone confidence, i want to give them the opportunity to be proficient.

        the stylus has more precision than a fingertip, but it has to be fished out of a pocket, it ruins the “line” of the device (unless you put a dip in the casing to accommodate the stylus, which is done sometimes) and for whatever reason, it looks “dorkier” than using a finger, though you wouldnt intuitively think so. they dont make most people feel more confident, so they fail even if theyre more proficient– thats my theory.

        “It seems to me like the mouse is easier to get used to than typing, despite being far more limited in what it can do.”

        in theory. i have tons of patience while a person slowly types. watching people slowly coerce the cursor to the right coordinates while using the mouse, it just never improves for so many people. the mouse is a bottleneck, which is why a lot of advanced users want less of it.

        the thing about a keyboard is, the buttons are always the same dimensions (they dont change while on the same keyboard) and the same locations (per keyboard, again.) while typing may be considered mildly threatening, the mouse takes more physical coordination.

        younger users are more likely to take to the mouse– i actually learned computing with a bus mouse in the mid 80s, i took to it right away. older users especially ive watched try to get what they want out of a mouse– double clicking is the worst part of it, some people get left/right mixed up, but the constant bottleneck is just getting the thing where you want it to go. even if youre good at it.

        the myth of the mouse is that it is friendly and not intimidating. but ive watched people struggle with it just as much as a keyboard. the thing is, between google and texting and chat apps (pc and phone) people are not threatened by keyboards. using android meanwhile, is an endless selection from list, after list, after list, after list… its only friendly up to n iterations before it becomes absolutely bureaucratic.

        “thats easy: just to go options > settings > network > network options > advanced > security… passwords. weekly. tap change… tap yes, type something, tap confirm, tap yes… 401k > retirement > grandkids…”

        keywords are brilliant because if you know a word, you dont have to cross several borders to get to it. theyre shortcuts, theyre portkeys. instead of walking to diagon alley, you just say “diagon-illy” and hit enter. though mis-typing will bring you to the wrong thing, so will mis-clicking with the mouse or mis-tapping.

        “What do you mean by ‘I like to be able to look at the program’?”

        while typewriters with word processor features were a huge step for line composition, you had the entire page to look at (at least the part sticking out of the typewriter.) much nicer is when the page fits on the screen.

        its not just about what features of the code youre looking at, but how easy it is to find information. not everyone is the same, but with little screens you have to swipe, swipe, swipe, swipe, swipe, swipe, keep going– its marathon-like swiping.

        the fewer lines of code you can fit on the screen (halve them, then halve them again) the more scrolling as you look over the code (pretend you havent mastered program structure yet, and have to deal with your own novice skills) becomes like reading a book on a smartphone.

        its not that visual design is hopeless– really, the more people doing it the more likely it is to work out eventually– but im glad we arent stuck with it because honestly, it is far easier to get text design “right” than visual design. visual design has so many more pitfalls than people realise. to this day, i find using a mac to be an absolutely inane process without redoing a large portion of it in keyboard. bottlenecks everywhere.

        when you click on a link on a webpage, you are reducing a bottleneck– when you click through 7 layers of gui, you are introducing one. the secret, as you said, is getting guis/visual design to do what it does best.

